Show HN: Free local security checks in VSCode, Cursor, and Windsurf, built for AI coding

9 · Author: jaimefjorge · 8 months ago · Original post
Hi HN!

We just launched Codacy Guardrails, an IDE extension with a CLI for code analysis and an MCP server that enforces security & quality rules on AI-generated code in real time. It hooks into AI coding assistants (like VS Code Agent Mode, Cursor, Windsurf), silently scanning and fixing AI-suggested code that has vulnerabilities or violates your coding standards while the code is being generated.

We built this because coding agents can be a double-edged sword. They do boost productivity, but they can easily introduce insecure or non-compliant code. A recent study from a research team at NYU found that 40% of Copilot's outputs were buggy or exploitable [1]. Other surveys report that people are spending more time debugging AI-generated code [2].

That's why we created "guardrails" to catch security problems early.

Codacy Guardrails uses a collection of open-source static analyzers (like Semgrep and Trivy) to scan the AI's output against 2000+ rules. We currently support JavaScript/TypeScript, Python, and Java, focusing on things like OWASP Top 10 vulnerabilities, hardcoded secrets, dependency checks, code complexity, and style violations, and you can customize the rules to match your project's needs. We're not using any AI models; it's "classic" static code analysis working alongside your AI assistant.

Here's a quick demo: https://youtu.be/pB02u0ntQpM

The extension is free for all developers. (We do have paid plans for teams to apply rules centrally, but those aren't needed to use the extension and local code analysis with agents.)

Setup is pretty straightforward: install the extension and enable Codacy's CLI and MCP server from the sidebar.

We're eager to hear what the HN community thinks! Does this approach sound useful in your AI coding workflow? Have you encountered security issues from AI-generated code?

We hope Codacy Guardrails can make AI-assisted development a bit safer and more trustworthy. Thanks for reading!

Get the extension: https://www.codacy.com/get-ide-extension
Docs: https://docs.codacy.com/codacy-guardrails/codacy-guardrails-getting-started/

Sources
[1] NYU research: https://www.researchgate.net/publication/388193053_Asleep_at_the_Keyboard_Assessing_the_Security_of_GitHub_Copilot's_Code_Contributions
[2] https://devops.com/survey-ai-tools-are-increasing-amount-of-bad-code-needing-to-be-fixed
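
P.S. To give a concrete (and purely illustrative — not taken from our actual rule set, names are made up) sense of what the rules paragraph above means in practice, here's a tiny Python sketch of the kind of AI-suggested code that static analyzers like Semgrep typically flag, next to variants that pass:

    import os
    import sqlite3

    # Flagged: hardcoded secret committed to the repo (secrets detection)
    API_KEY = "sk-live-1234567890abcdef"

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Flagged: query built by string interpolation -> SQL injection (OWASP Top 10: Injection)
        return conn.execute(
            f"SELECT * FROM users WHERE name = '{username}'"
        ).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Passes: parameterized query keeps user input out of the SQL text
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)
        ).fetchall()

    # Passes: secret read from the environment instead of the source
    api_key = os.environ.get("API_KEY", "")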