Ask HN: What are your thoughts on LLMs and user privacy?

2 points · author: eniz · 9 months ago
I'm increasingly curious about user privacy when it comes to hosted LLM usage. Many popular LLM apps and APIs require sending your prompts, messages, and potentially sensitive information to remote servers for processing, sometimes routed through multiple third-party providers.

- How much do you trust major LLM providers (OpenAI, Anthropic, Google, etc.) with your data?

- For those working on or deploying LLM applications, what approaches do you take to maximize user privacy?

- Do you think end users are generally aware of where their data is going and how it's being used, or is this still an overlooked issue?

I'd love to hear your perspectives, experiences, or any best practices you recommend on privacy when deploying LLM-powered use cases.