Show HN: Docgen – a C++ AI CLI tool that tackles "documentation hell" with local LLMs

Author: alonsovm · 25 days ago
Hi HN,

I'm a solo dev who got tired of "documentation hell": either spending hours writing docs that immediately go out of date, or having no docs at all. I wanted a tool that treats documentation generation as a standard build step, so I built Docgen.

Docgen is a lightweight AI CLI tool written in C++ that automates docs-as-code. It sits in your repo (via a .docgen folder and a Docfile) and generates Markdown files next to your source.

A few technical details on how it works under the hood:

- Local-first & private: it defaults to running Ollama locally, so your proprietary code never leaves your machine (though it also supports cloud APIs like OpenAI/Gemini if you prefer).

- Smart incremental builds: it uses content hashing. When you run `docgen update`, it only regenerates docs for files whose contents actually changed, saving API credits and compute time.

- Context-aware (RAG): it automatically analyzes #include dependencies to give the LLM the right context, rather than blindly feeding it a single file in isolation.

- Zero dependencies: it compiles to a single static binary. Just download and run.

- "Auto" mode: my favorite part. Running `docgen auto` starts a file watcher with built-in debouncing (it waits a few seconds after you stop typing/saving), then quietly updates your Markdown docs in the background while you stay in your flow state.

I'm currently focused on improving the RAG context handling.

You can check it out here: [https://github.com/alonsovm44/docgen](https://github.com/alonsovm44/docgen)

I'd love to hear your thoughts, critiques of the architecture, or any edge cases you think I should handle!
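To make the incremental-build idea concrete, here is a minimal sketch of content-hash-driven change detection. Docgen's actual implementation isn't shown in the post; `hash_file`, `needs_rebuild`, and the manifest layout below are my own illustrative names, assuming a simple path-to-hash map persisted between runs.

```cpp
#include <filesystem>
#include <fstream>
#include <functional>
#include <sstream>
#include <string>
#include <unordered_map>

// Hash a file's full contents (a stand-in for whatever digest Docgen uses).
std::size_t hash_file(const std::filesystem::path& p) {
    std::ifstream in(p, std::ios::binary);
    std::ostringstream buf;
    buf << in.rdbuf();
    return std::hash<std::string>{}(buf.str());
}

// Decide whether a file's docs need regenerating, updating the manifest
// (path -> hash from the previous run) as a side effect.
bool needs_rebuild(const std::filesystem::path& p,
                   std::unordered_map<std::string, std::size_t>& manifest) {
    const std::size_t h = hash_file(p);
    auto it = manifest.find(p.string());
    if (it != manifest.end() && it->second == h) return false;  // unchanged: skip
    manifest[p.string()] = h;  // record new hash; caller regenerates docs
    return true;
}
```

Hashing contents rather than comparing mtimes means a `touch` or a checkout that rewrites identical bytes costs nothing, which is what keeps API credits bounded by real edits.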
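The debounce in "auto" mode can be sketched as follows: every change event pushes a deadline back, and the rebuild fires only once events have been quiet for the full delay. The class and member names are illustrative, not Docgen's; the counter stands in for rerunning doc generation.

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// Hypothetical debounce loop: on_change() is what a file watcher would call
// on every save; rebuilds only increments after a quiet period of `delay`.
class Debouncer {
public:
    std::atomic<int> rebuilds{0};  // stand-in for "regenerate the docs"

    explicit Debouncer(std::chrono::milliseconds delay)
        : delay_(delay), worker_([this] { run(); }) {}

    ~Debouncer() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_all();
        worker_.join();
    }

    // Called by the file watcher on every save/modify event.
    void on_change() {
        std::lock_guard<std::mutex> lk(m_);
        dirty_ = true;
        deadline_ = std::chrono::steady_clock::now() + delay_;  // push deadline back
        cv_.notify_all();
    }

private:
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        while (!stop_) {
            if (!dirty_) { cv_.wait(lk); continue; }
            if (std::chrono::steady_clock::now() >= deadline_) {
                dirty_ = false;
                ++rebuilds;  // quiet period elapsed: rebuild once
            } else {
                cv_.wait_until(lk, deadline_);  // new events move deadline_ forward
            }
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::chrono::steady_clock::time_point deadline_{};
    std::chrono::milliseconds delay_;
    bool dirty_ = false, stop_ = false;
    std::thread worker_;  // declared last so all state exists before it runs
};
```

The re-check of the deadline after every wakeup is what coalesces a burst of saves into a single regeneration, so rapid editing never triggers a rebuild per keystroke.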