Warp sends terminal sessions to LLMs without user consent
Wonder what being used feels like? One way to experience it is to discover that your terminal has silently started sending command outputs to LLMs.

Today, after attempting to run a test, I got an LLM suggestion on how to fix a syntax error.

So I went to Warp's Discord to ask what was going on, and sure enough, their "friendly support bot" confirmed it:

> Warp has introduced features like Prompt Suggestions and Next Command that use LLMs to provide contextual suggestions. These features are part of Warp's Active AI system, which proactively recommends fixes and next actions based on your terminal session, including errors, inputs, and outputs.

"Proactively" here also means without explicit user consent.

I did enjoy Warp, but that breach of trust is so enormous that I'm removing it right now.

This speaks volumes about ethics and what matters.

Ref: https://docs.warp.dev/agents/active-ai