Ask HN: Has anyone noticed the fear-driven prompt suggestions GPT-5.3 produces?
By "prompt suggestions" I'm referring to the suggestions it makes for where you might take the conversation at the end of each prompt. Older versions used to say "if you'd like, we could look at

- related topic 1
- related topic 2
- related topic 3"

And so on and so forth.

But 5.3 does something different.

I've been using it for coding, and almost every suggestion includes some sort of vague warning about what might happen if I don't have access to the information it is alluding to. Nearly consecutive (not cherry-picked) examples from my current chats:

"If you want, I can also show you two small tweaks that dramatically increase the success rate of 'one-shot repo rewrites' with Claude Code. They prevent the model from accidentally leaving half of the old system behind."

"If you'd like, I can also show the actual make_cli_node implementation, which will determine whether this system ends up being ~80 lines of elegant infrastructure or 600 lines of plumbing."

"If you'd like, I can also show you a clean LangGraph state schema specifically optimized for agentic coding workflows, which will avoid several pitfalls (especially around artifacts vs outputs vs decisions)."

"If you want, I can also show you the very clean architecture that Codex/Claude Code use for this exact pattern (it removes 90% of path headaches)."

I don't really mind, and some of the information is genuinely useful, but I find it amusing that OpenAI seems to be intentionally trying to use fear to keep people in the app for as long as possible (although they have denied in the past that they optimize for time spent in the app, as indicated here: https://openai.com/index/our-approach-to-advertising-and-expanding-access/).