Launch HN: Hyprnote (YC S25) – An open-source AI meeting notetaker
Hi HN! We're Yujong, John, Duck, and Sung from Hyprnote (https://hyprnote.com). We're building an open-source, privacy-first AI note-taking app that runs fully on-device. Think of it as an open-source Granola. No Zoom bots, no cloud APIs, no data ever leaves your machine.
Source code: https://github.com/fastrepl/hyprnote
Demo video: https://hyprnote.com/demo
We built Hyprnote because some of our friends told us that their companies banned certain meeting notetakers due to data concerns, or that they simply felt uncomfortable sending data to unknown servers. So they went back to manual note-taking, losing focus during meetings and wasting time afterward.
We asked ourselves: could we build something just as useful, but completely local?
Hyprnote is a desktop app that transcribes and summarizes meetings on-device. It captures both your mic input and system audio, so you don't need to invite bots, and it generates a summary based on the notes you take. Everything runs on local AI models by default, using Whisper and HyprLLM, our proof-of-concept model fine-tuned from Qwen3 1.7B. We learned that summarizing meetings is a very nuanced task and that a model's raw intelligence (or weight) doesn't matter THAT much. We'll release more details on evaluation and training once we finish the second iteration of the model (it's still not that good; we can make it a lot better).
Whisper inference: https://github.com/fastrepl/hyprnote/blob/main/crates/whisper-local/src/model.rs
AEC inference: https://github.com/fastrepl/hyprnote/blob/main/crates/aec/src/lib.rs
LLM inference: https://github.com/fastrepl/hyprnote/blob/main/crates/llama/src/lib.rs
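To give a rough sense of what on-device transcription involves, here's a minimal sketch using the whisper-rs bindings to whisper.cpp. It is illustrative only, not our production code (that lives in the whisper-local crate linked above), and it assumes a locally downloaded ggml Whisper model plus 16 kHz mono f32 samples from the captured audio:

    use whisper_rs::{FullParams, SamplingStrategy, WhisperContext, WhisperContextParameters};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Load a local ggml Whisper model (the path is just an example).
        let ctx = WhisperContext::new_with_params(
            "models/ggml-base.en.bin",
            WhisperContextParameters::default(),
        )?;
        let mut state = ctx.create_state()?;

        let mut params = FullParams::new(SamplingStrategy::Greedy { best_of: 1 });
        params.set_language(Some("en"));

        // Placeholder audio: one second of silence. In a real notetaker this would be
        // the echo-cancelled mix of mic input and system audio, resampled to 16 kHz mono.
        let audio: Vec<f32> = vec![0.0; 16_000];

        // Run transcription and print each recognized segment.
        state.full(params, &audio)?;
        for i in 0..state.full_n_segments()? {
            println!("{}", state.full_get_segment_text(i)?);
        }
        Ok(())
    }

From there, the transcript segments plus your raw notes are handed to the local LLM to produce the summary.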
We also learned that for some folks, full data controllability is as important as privacy. So we support custom endpoints, allowing users to bring in their company's internal LLM. For teams that need integrations, collaboration, or admin controls, we're working on an optional server component that can be self-hosted. Lastly, we're exploring ways to make Hyprnote work like VSCode, so you can install extensions and build your own workflows around your meetings.
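As an example of what "bring your own endpoint" means in practice: any OpenAI-compatible chat-completions server can be targeted, so the summarization request never has to leave your network. The sketch below uses a hypothetical URL, model name, and env var (built on reqwest, serde_json, and tokio) to show roughly what such a call looks like:

    use reqwest::Client;
    use serde_json::json;

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Hypothetical internal, OpenAI-compatible endpoint.
        let base_url = "https://llm.internal.example.com/v1";
        let api_key = std::env::var("INTERNAL_LLM_API_KEY")?; // hypothetical env var

        let resp: serde_json::Value = Client::new()
            .post(format!("{base_url}/chat/completions"))
            .bearer_auth(api_key)
            .json(&json!({
                "model": "company-internal-llm", // whatever your internal server exposes
                "messages": [
                    {"role": "system", "content": "Summarize this meeting transcript into concise notes."},
                    {"role": "user", "content": "<transcript and raw notes go here>"}
                ]
            }))
            .send()
            .await?
            .json()
            .await?;

        println!("{}", resp["choices"][0]["message"]["content"]);
        Ok(())
    }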
We believe privacy-first tools, powered by local models, are going to unlock the next wave of real-world AI apps.
We're here and looking forward to your comments!