Garbage In, Garbage Out: The Degradation of Human Requirements in the Age of LLMs
The LLM Paradox: We're Forgetting How to Speak to Humans
The longer we use LLM services, the more I see a specific kind of "psychosis" spreading in the workplace. LLMs are so good at hallucinating a coherent answer from a vague prompt that people have started to believe their vague prompts were actually coherent.
LLMs Are Not Humans
It sounds obvious, but we are losing our grip on this fact. People are beginning to treat their colleagues like a black-box LLM. They've forgotten that human communication requires precision, shared context, and accountability. In the pre-LLM era, "make it pop" was a phrase reserved for clueless clients. Now, it's becoming standard operating procedure inside engineering teams.
The "Do It Well, You Figure It Out" Fallacy
I see managers, even those with engineering backgrounds, who are terrified of being held accountable for their own bad ideas. They hide behind vagueness, using tools like Claude Code as a shield to bypass technical debt discussions.
When an engineer spends days fixing a half-baked requirement and managing technical constraints, the feedback isn't "Thank you for the due diligence." Instead, it's: "See? It was possible after all. Why did you push back so hard? An LLM could've done it in seconds." This is gaslighting. They want the output of a senior engineer while providing the input of a garbage prompt.
The Death of Articulation
LLMs accept garbage in and provide plausible out. This has become a drug. People are losing the ability to articulate their own thoughts. They throw a mess of words at you and expect a miracle. If this continues, we aren't just looking at bad software; we're looking at a breakdown of professional sanity.
I've felt the symptoms myself. Lately, I've caught myself thinking, "Explaining this to my team is a waste of 'communication cost.' I'd rather just pay for more API tokens and do it myself."
But we must remember: a high-functioning team is not a collection of prompt engineers. True teamwork is exponentially more efficient than a lone developer with an LLM. We cannot afford to lose the art of talking to each other.