I spent 4 months and $800/month on AI with Cursor and Claude Code. Is the latter better?

Author: ianberdin · 1 day ago
Hi HN. There is huge hype around Claude Code and AI agents overall.

After four months with Cursor and one with Claude Code, I'm a super-user. I was paying up to $700/mo for Cursor on a usage basis before switching to their new subscription, and I've been on a paid Claude Code plan for the last month. I code every day with these tools, using Sonnet 4.0 and Gemini 2.5 Pro. This is a guide born from experience and frustration.

First, the verdict on Claude Code (the CLI agent). The idea is great: programming from the terminal, even on a server. But in practice, it's inferior. You can't easily track its changes, and within days the codebase becomes a mess of hacks and crutches. Compared to Cursor, the quality and productivity are at least three times worse. It's a step backward. But it is nice for one-off prototypes where you don't have to worry about the codebase.

Now, let's talk about LLMs. This is the most important lesson: models do not think. They are not your partner. They are hyper-sensitive calculators. The best analogy is time travel: change one tiny detail in the past, and the entire future is different. It's the same with an LLM. One small change in your input context completely alters the output. Garbage in, garbage out. There is no room for laziness.

Understanding this changes everything. You stop hoping the AI will "figure it out" and start engineering the perfect input. After extensive work with LLMs, both in my editor and via their APIs, here are the non-negotiable rules for getting senior-level code instead of junior-level spaghetti.

Absolute Context is Non-Negotiable. You must provide 99% of the relevant code in the context. If you miss even a little, the model will not know its boundaries; it will hallucinate to fill the gap. This is the primary source of errors.

Refactor Your Code for the AI. If your code is too large to fit in the context window (Cursor's max is 200k tokens), the LLM is useless for complex tasks. You must write clean, modular code broken into small pieces that an AI can digest. The architecture must serve the AI.

Force-Feed the Context. Cursor tries to save money by limiting the context it sends. This is a fatal flaw. I built a simple CLI tool that uses regex to grab all relevant files, concatenates them into a single text block, and prints it to my terminal. I copy this entire 150k-200k token block and paste it directly into the chat. This is the single most important hack for good results.

Isolate the Task. Only give the LLM a small, isolated piece of work that you can track yourself. If you can't define the exact scope and boundaries of the task, the AI will run wild and you will be left with a mess you can't untangle.

"Shit! Redo." Never ask the AI to fix its own bad code. It will only dig a deeper hole. If the output is wrong, scrap it completely. Revert the changes, refine your context and prompt, and start from scratch.

Working with an LLM is like handling an aggressive, powerful pitbull. You need a spiked collar of strict rules and perfect context to control it.
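To check whether a codebase even approaches a 200k-token window before pasting it, a crude character-based estimate is often good enough. This sketch uses the common rule of thumb of roughly 4 characters per token for English-heavy code; it is a heuristic, not a real tokenizer, and the function names are my own:

```python
def rough_token_count(text: str) -> int:
    """Crude token estimate: ~4 characters per token is a common heuristic
    for English text and code. Real tokenizers will differ, but this is
    close enough for budgeting a paste into a chat window."""
    return max(1, len(text) // 4)


def fits_window(text: str, window: int = 200_000) -> bool:
    """Check a blob against a context limit like Cursor's 200k tokens."""
    return rough_token_count(text) <= window
```

By this heuristic, a codebase of about 800,000 characters sits right at the 200k-token edge, which is why splitting code into small, self-contained modules matters: you can then pack only the relevant subset.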
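The author's context-packing tool isn't published, but the description (regex-match the relevant files, concatenate them into one block, print to the terminal) is easy to reconstruct. Here is a minimal sketch under those assumptions; the function name, the `### FILE:` delimiter, and the command-line interface are all my own choices, not the author's:

```python
import re
import sys
from pathlib import Path


def pack_context(root: str, pattern: str) -> str:
    """Concatenate every file under `root` whose path matches `pattern`
    into a single text block, labeling each file so the model can see
    where one file ends and the next begins."""
    regex = re.compile(pattern)
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and regex.search(str(path)):
            text = path.read_text(encoding="utf-8", errors="ignore")
            parts.append(f"### FILE: {path}\n{text}")
    return "\n\n".join(parts)


if __name__ == "__main__":
    # Usage (hypothetical): python pack_context.py src '\.py$'
    print(pack_context(sys.argv[1], sys.argv[2]))
```

Piping the output through a clipboard utility (e.g. `pbcopy` on macOS) makes the paste-into-chat step one command.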