Ask HN: Can hosted AI models offer meaningful privacy?
I've been thinking about the privacy tradeoffs of using frontier AI models like Claude or ChatGPT.

Even with a VPN, providers still see prompts, and usage is tied to accounts and payment methods linked to identity in some way.

I've been trying to find a way to access these models without creating provider-specific accounts tied to my identity. Ideally, this would go through some kind of intermediary that abstracts identity and doesn't retain prompts.

From a technical and economic perspective, would that kind of setup meaningfully improve privacy, or does it just shift trust? Is meaningful privacy with hosted AI fundamentally unrealistic regardless of architecture?
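To make the intermediary idea concrete, here's a toy sketch (all field names hypothetical, not any real API) of what an identity-abstracting relay would do. It strips identity-bearing metadata before forwarding a request, but the relay still sees the plaintext prompt, which is why I suspect this shifts trust rather than removing it:

```python
# Toy model of an identity-abstracting relay. Hypothetical field names;
# not a real provider API. The relay removes identity-bearing metadata
# before forwarding, but it necessarily still sees the plaintext prompt.

IDENTITY_FIELDS = {"user_id", "account_email", "payment_token", "client_ip"}

def scrub_request(request: dict) -> dict:
    """Return a copy of the request with identity-bearing fields removed.

    The prompt itself is untouched: whoever operates the relay can still
    read it, so the user is trusting the relay instead of the provider.
    """
    return {k: v for k, v in request.items() if k not in IDENTITY_FIELDS}

incoming = {
    "user_id": "u-123",
    "account_email": "me@example.com",
    "payment_token": "tok_abc",
    "client_ip": "203.0.113.7",
    "prompt": "Summarize this contract...",
    "model": "some-frontier-model",
}

forwarded = scrub_request(incoming)
# The provider no longer receives account identity, but both the relay
# and the provider still process the plaintext prompt.
```

The provider-side linkage is gone, but the relay operator becomes a new single point of trust for both prompts and traffic metadata.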