Prompt engineering is collapsing, and GPT-5 just proved it.
GPT-5 is a beast. But here’s the thing nobody wants to say out loud: it just killed prompt engineering as a sustainable practice.

Carefully tuned prompts from GPT-4o? Broken.

Styles, logic, answer habits? All shifted.

Companies? Forced to roll back or re-test thousands of prompts overnight.

This isn’t progress. It’s technical debt disguised as innovation. Every new release means paying a Prompt Migration Tax: rewriting, regression-testing, and re-training teams.

Meanwhile:

Users are losing trust — sticking with old models or switching providers.

Security is a joke — OWASP already flagged prompt injection as the #1 LLM risk, and NIST said the same.

Vendors keep pushing “best practices” like longer separators or system prompts… band-aids on a structural wound.

The cycle looks like this: upgrade → break → patch → break again → patch again. How long before the entire industry realizes this is a dead end?

Prompt engineering isn’t the future. It’s a trap.
And GPT-5 just made that painfully clear.
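The "Prompt Migration Tax" described above — regression-testing prompts after every model upgrade — can be sketched as a tiny harness. This is a minimal illustration, not any vendor's tooling: `call_model` is a stand-in stub (a real harness would call an API client), and the prompt, model names, and canned outputs are all hypothetical.

```python
# Minimal sketch of a prompt regression harness (all names hypothetical).
# Each golden case pins down properties a prompt's output must keep
# across model upgrades; call_model is a stub for a real API client.

def call_model(model: str, prompt: str) -> str:
    """Stub: replace with a real API call in practice."""
    canned = {
        ("model-old", "List three colors."): "1. red\n2. green\n3. blue",
        ("model-new", "List three colors."): "Sure! Colors include red, green, and blue.",
    }
    return canned[(model, prompt)]

GOLDEN_CASES = [
    {
        "prompt": "List three colors.",
        # Property checks rather than exact-match: exact strings
        # break on every upgrade, properties break less often.
        "checks": [
            lambda out: "red" in out.lower(),          # content preserved
            lambda out: out.strip().startswith("1."),  # numbered-list format
        ],
    },
]

def run_regression(model: str) -> list[str]:
    """Return a list of failure descriptions for the given model."""
    failures = []
    for case in GOLDEN_CASES:
        out = call_model(model, case["prompt"])
        for i, check in enumerate(case["checks"]):
            if not check(out):
                failures.append(f"{case['prompt']!r}: check #{i} failed")
    return failures

# The old model passes; the "upgraded" model keeps the content but
# drops the format — exactly the silent breakage the post describes.
print(run_regression("model-old"))  # []
print(run_regression("model-new"))  # ["'List three colors.': check #1 failed"]
```

The point of the sketch is that even with property-based checks instead of brittle exact-match strings, an upgrade can break a prompt's contract — and someone has to catch it, triage it, and rewrite the prompt, which is the tax.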