Has AI coding gone too far? I feel like I'm losing control of my projects.
I wanted to share some thoughts on AI coding assistants that have been bothering me for a while. I think the analogy of "a kid with a credit card" perfectly captures the danger of what some call "vibecoding." At least until we have true AGI, this feels like a serious problem.
After using Cursor intensively for the better part of a year, I'm stunned by how fast it is. It can scaffold entire features, wire up components, and write complex logic in seconds. It feels like the difference between driving a manual versus an automatic transmission, or, more accurately, between reading detailed documentation versus watching a summary video.
It brings me back to when I first started using GitHub Copilot in 2023. Back then, it was mostly for autocompleting methods and offering in-context suggestions. That level of assistance felt just right. For more complex problems, I'd consciously switch contexts and ask a web-based AI like ChatGPT. I was still the one driving.
But tools like Cursor have changed that dynamic entirely. They are so proactive that I'm losing the habit of thinking deeply about the business logic. It's not that I've lost the ability to think; I'm losing the ingrained, subconscious behavior of doing it. I'm no longer forced to hold the entire architecture in my head.
This is leading to a progressively weaker sense of ownership over my projects. The workflow becomes:
Tell the AI to write a function.
Debug and test it.
Tell the AI to write the next function that connects to it.
Rinse and repeat. While fast, I end up with a series of black boxes I've prompted into existence. My role shifts from "I know what I'm building" to "I know what I want." There's a subtle but crucial difference between the two. I'm becoming a project manager directing an AI intern, not an engineer crafting a solution.
This is detrimental to both the individual developer and the long-term health of the project. If everyone on the team adopts this workflow, who truly understands the full picture?
Here's a concrete example that illustrates my point: writing git commit messages.
Every time I commit, I follow a personal rule: review all changed files and write the commit message myself, in my own words. This forces me to synthesize the changes and solidifies my understanding of the project's state at that point in time. It keeps my sense of control strong.
If I let an AI auto-generate the commit message from the diff, I might save a few minutes. But looking back a month later, I'd have no real memory of or context for that commit. It would just be a technically accurate but soulless log entry.
I worry that by optimizing for short-term speed, we're sacrificing long-term understanding and control.
Is anyone else feeling this tension? How are you balancing the incredible power of these tools with the need to remain the master of your own codebase?
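For what it's worth, the review-then-write routine described above boils down to a few standard git commands (my own habit, not the only way to do it):

```shell
# Review what is about to be committed before writing the message yourself
git diff --staged --stat   # quick summary of which files changed and by how much
git diff --staged          # read the full diff, hunk by hunk
git commit                 # opens your editor; describe the change in your own words
```

Running `git diff --staged` rather than plain `git diff` is the key part: it shows exactly what the commit will contain, so the message you write reflects the actual snapshot.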