Ask HN: What's the etiquette for giving feedback on co-workers' mostly AI-generated PRs?

by chfritz, about 2 months ago
I struggle to find the right way to provide feedback on pull requests (PRs) that mostly consist of AI-generated code. Co-workers submitting them have learned to disclose this -- I found it frustrating when they didn't -- and now say they have reviewed and iterated on it. But often the result is still what I would describe as "a big contribution off the mark", meaning a lot of code that simply follows the wrong approach.

Usually, when someone has done a lot of work, which we used to be able to measure in lines of code, it would seem unfair to criticize them after the fact. A good development process with ticket discussions would ensure that someone *doesn't* do a lot of work before there is agreement on the general approach. But now, with AI, this script no longer works, partly because it's "too easy" to produce the work before the approach has even been decided.

So I'm asking myself, and now HN: is it OK to point out when an entire PR is garbage and should simply be discarded? How can I tell how much "brain juice" a co-worker has spent on it, and how attached they might be to it by now, if I don't even know whether they know the code they submitted?

I have to admit that I *hate* reviewing huge PRs, and the problem with AI-generated code is that it often would have been much better to find and use an existing open-source library to get the task done rather than (re)generate a lot of code for it. But how will I know this until I've actually taken the time to review and understand the big new proposed contribution? And even if I *do* spend the time to understand the code and the implied approach, how will I know which parts reflect their genuine opinion and intellect (which I'd be hesitant to criticize) and which are AI fluff I can rip apart without stepping on their toes? If the answer is "let's have a meeting", then I'd say the process has failed.

Not sure there is a right answer here, but I would love to hear people's take on this.