Show HN: HyperFlow – A self-improving agent framework built on LangGraph
Hi HN, I'm Umer. I recently built an experimental framework called HyperFlow to explore the idea of self-improving AI agents.

Usually, when an agent fails a task, we developers step in to manually tweak the prompt or adjust the code logic. I wanted to see whether an agent could automate its own improvement loop.
Built on LangChain and LangGraph, HyperFlow uses two agents:
- A TaskAgent that solves the domain problem.
- A MetaAgent that acts as the improver.
The MetaAgent looks at the TaskAgent's evaluation logs, rewrites the underlying Python code, tools, and prompt files, and then tests the new version in an isolated sandbox (e.g., Docker). Over several generations, it saves the highest-scoring versions to an archive.
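To make the evaluate → rewrite → archive cycle concrete, here is a minimal sketch of that outer loop in plain Python. `TaskAgent` behavior, `evaluate`, and `meta_rewrite` are hypothetical stand-ins, not HyperFlow's actual API, and the real system would run `evaluate` inside a sandbox rather than in-process:

```python
import random

def evaluate(candidate: str) -> float:
    """Stub benchmark: score a candidate agent version in [0, 1).
    In HyperFlow this would run the TaskAgent's eval suite in a sandbox."""
    random.seed(hash(candidate) % 2**32)  # deterministic per candidate
    return random.random()

def meta_rewrite(candidate: str, score: float, generation: int) -> str:
    """Stub for the MetaAgent: propose a revised version of the
    TaskAgent's code/prompts based on its evaluation result."""
    return f"{candidate}+rev{generation}"

def improvement_loop(initial: str, generations: int = 5) -> list[tuple[float, str]]:
    """Run the generation loop and return the archive, best-first."""
    archive: list[tuple[float, str]] = []
    current = initial
    for gen in range(generations):
        score = evaluate(current)           # test the current version
        archive.append((score, current))    # archive every scored version
        current = meta_rewrite(current, score, gen)  # MetaAgent proposes next
    archive.sort(key=lambda pair: pair[0], reverse=True)
    return archive

archive = improvement_loop("task_agent_v0")
best_score, best_version = archive[0]
```

The key design point this illustrates is that selection happens over the archive, not just the latest generation: a later rewrite can regress, and keeping scored history lets the framework fall back to the best-known version.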
It is highly experimental right now, but the architecture is heavily inspired by the recent HyperAgents paper (Meta Research, 2026).
I would love to hear your feedback on the architecture and your thoughts on self-referential agents, and I'm happy to answer any questions you might have!
Documentation: [https://hyperflow.lablnet.com/](https://hyperflow.lablnet.com/)
GitHub: [https://github.com/lablnet/HyperFlow](https://github.com/lablnet/HyperFlow)