Ask HN: How do you maintain code quality with AI coding tools in widespread use?
I've noticed a trend: as more devs at my company (and in projects I contribute to) adopt AI coding assistants, code quality seems to be slipping. It's a subtle change, but it's there.

The issues I keep noticing:
- More "almost correct" code that causes subtle bugs
- Less consistent architecture across the codebase
- More copy-pasted boilerplate that should be refactored

I know the argument that maybe we shouldn't care about overall quality, since eventually only AI will be reading the code. But that's a fairly distant future. For now, we have to manage the speed/quality trade-off ourselves, with AI assistants helping.

So I'm curious: if your team has made AI tools work without sacrificing quality, what's your approach?
Is there anything new you're doing, like special review processes, new metrics, training, or team guidelines?