Ask HN: End-to-end encrypted LLM chat (open and closed models)
I’m exploring a software layer, analogous to public/private-key crypto, so a user can converse with an LLM where prompts and responses remain unreadable to all intermediaries, including the model host. (I mean “cipher” in the cryptographic sense.)

Two cases:

1. Open-weights model: ensure the operator still can’t read prompts/responses.
2. Closed, hosted model: true E2EE so even the provider can’t inspect content.
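To make the open-weights/TEE case concrete, here is a minimal sketch of the client side, assuming the decryption key only ever exists inside an enclave whose attestation the client has already verified, so the operator never sees plaintext. enclave_pub is a placeholder for the public key bound to that attestation; this uses the Python cryptography package and sketches only the message envelope, not the attestation flow:

    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def encrypt_prompt(prompt: str, enclave_pub: X25519PublicKey) -> dict:
        """Seal a prompt so only the attested enclave's private key can open it."""
        # Fresh ephemeral key pair per message: a later client-side key
        # compromise does not expose earlier prompts.
        eph_priv = X25519PrivateKey.generate()
        shared = eph_priv.exchange(enclave_pub)
        # Derive a one-time symmetric key from the DH shared secret.
        key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"llm-e2ee/prompt/v1").derive(shared)
        nonce = os.urandom(12)
        ciphertext = ChaCha20Poly1305(key).encrypt(nonce, prompt.encode("utf-8"), None)
        return {
            "eph_pub": eph_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw),
            "nonce": nonce,
            "ciphertext": ciphertext,
        }

Responses would be sealed the same way in the other direction. The closed, hosted case has no such trusted decryption endpoint unless the provider runs one, which is where the FHE/MPC-style options below come in.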
Topics we can discuss:

- Best near-term path: TEEs with attestation, FHE/HE, MPC/split inference, PIR for retrieval, differential privacy, or hybrids?
- How to handle key exchange/rotation for forward secrecy? (Rough sketch at the end of this post.)
- Practical performance/accuracy limits (e.g., non-linearities, KV-cache, streaming)?
- Minimal viable architecture and realistic threat model?
- Any prior art or teams you’d point me to?

Please DM if you are interested in working with me.
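On the key-rotation point above, one simple baseline is a symmetric chain-key ratchet in the style of Signal: derive a per-message key, advance the chain, delete the old state. A hedged sketch, with placeholder names rather than a worked-out protocol:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
        """Return (message_key, next_chain_key); the caller must discard chain_key."""
        # Because the old chain key is deleted after each step, leaking state
        # at step N cannot decrypt messages sent before step N.
        out = HKDF(algorithm=hashes.SHA256(), length=64, salt=None,
                   info=b"llm-e2ee/ratchet/v1").derive(chain_key)
        return out[:32], out[32:]

This alone gives forward secrecy, not post-compromise security; that would need periodic fresh DH exchanges layered on top, as in a full double ratchet.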