Show HN: Doksnet – Keep docs and code in sync via hash verification

Author: pulko · about 1 month ago
Docs drift. Examples in README.md stop matching the implementation, and CI usually doesn't notice.

I built doksnet, a small Rust CLI that lets you link a documentation section to a code snippet and verify that both sides stay in sync using Blake3 hashes.

You define mappings like:

• README.md:15-25
• src/lib.rs:40-65

doksnet stores the ranges plus their hashes in a compact .doks file. doksnet test re-extracts the content and fails (exit code 1) if anything changed, including whitespace.

Basic flow:

• doksnet new – initialize
• doksnet add – create doc ↔ code mapping (interactive)
• doksnet test – CI-safe verification
• doksnet test-interactive – review/fix mismatches

It's repo-local: no external service, no parsing/AST magic, just deterministic text extraction and hashing.

There's also a GitHub Action if you want to enforce sync in CI.

Repo: https://github.com/Pulko/doksnet
Install: cargo install doksnet
Web: https://doksnet.pulko-app.com

Interested in feedback on the approach, especially whether a tool like this is more useful than telling an AI to "rewrite the whole README based on the new changes, making no mistakes", for instance in environments where AI usage is restricted.
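The post doesn't show the GitHub Action's configuration, but since `doksnet test` is CI-safe and exits 1 on mismatch, a plain workflow using only the commands mentioned above might look like this (the file path, workflow name, and step layout are assumptions, not the published Action):

```yaml
# .github/workflows/doksnet.yml — hypothetical workflow; installs the
# CLI directly rather than using the published GitHub Action.
name: doksnet
on: [push, pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo install doksnet
      - run: doksnet test   # non-zero exit on any doc/code mismatch fails the job
```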
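The core mechanism described above (extract a 1-indexed line range, hash it, compare later) can be sketched in a few lines of Rust. This is a minimal illustration, not doksnet's actual code: the function names are made up, and the standard library's DefaultHasher stands in for Blake3 so the example needs no external crates.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Extract an inclusive, 1-indexed line range, as in "README.md:15-25".
fn extract_range(source: &str, start: usize, end: usize) -> String {
    source
        .lines()
        .skip(start - 1)
        .take(end - start + 1)
        .collect::<Vec<_>>()
        .join("\n")
}

// Hash the extracted snippet. doksnet uses Blake3; DefaultHasher is
// a stand-in here purely to keep the sketch dependency-free.
fn hash_snippet(snippet: &str) -> u64 {
    let mut h = DefaultHasher::new();
    snippet.hash(&mut h);
    h.finish()
}

fn main() {
    let doc = "line one\nline two\nline three\nline four";
    let stored = hash_snippet(&extract_range(doc, 2, 3));

    // Unchanged content re-hashes to the same value: the check passes.
    assert_eq!(stored, hash_snippet(&extract_range(doc, 2, 3)));

    // Any edit inside the range, even trailing whitespace, changes the
    // hash: the check fails, which is what `doksnet test` surfaces as
    // exit code 1.
    let edited = "line one\nline two \nline three\nline four";
    assert_ne!(stored, hash_snippet(&extract_range(edited, 2, 3)));
}
```

The appeal of the approach is that it is purely textual and deterministic: no language awareness is needed, at the cost of flagging even cosmetic changes.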