Show HN: Open-source persistent memory for LLMs

Author: Mnexium · about 1 month ago
Today we're open-sourcing the core memory engine behind Mnexium.com: CORE-MNX

GitHub (https://github.com/mnexium/core-mnx)
NPM (https://www.npmjs.com/package/@mnexium/core)

For us, this is both a product decision and a philosophy decision.

Memory infrastructure is becoming foundational for serious AI products, and we believe the core layer should be transparent, inspectable, and extensible by the teams building on top of it. We also just want feedback: we want to build the best memory system possible with the tools available today, and we want to make LLMs perform better than they already do out of the box.

CORE-MNX is the backend layer that powers durable memory workflows:

- memory storage and retrieval
- claim extraction and truth-state resolution
- memory lifecycle management
- event streaming for real-time systems

It's Postgres-backed, API-first, and built to integrate into real production stacks.

We tried our best to make this system as standalone as possible. Ultimately, that's fairly difficult: we needed LLMs (Cerebras for fast token output, ChatGPT for intelligence, etc.) and databases for storage. We have intentionally made the project API-interfaced so your project can stay code-agnostic.

Open-sourcing CORE lets builders:

- understand exactly how memory behavior works
- self-host or extend the engine for their own products
- avoid reinventing the same memory infrastructure from scratch

What stays on Mnexium.com

Mnexium's long-term direction is still the same: make AI systems more useful over time through durable memory and reliable recall. We've just figured out that hosting memory isn't the moat we once thought it was; the real moat, we believe, is making LLM systems as easy to use as possible. The feature set we've built around memory is what differentiates us.

Open-sourcing CORE is how we make that foundation available to everyone building in this space. Everyone is welcome to weigh in on improvements and on how to make this problem solvable.

We'd love feedback, opinions, and any bugs you may find. The release isn't perfect, but it's certainly a good start we'd love to improve upon.
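To make "claim extraction and truth-state resolution" concrete, here is a minimal sketch of the general idea in TypeScript. All names here (`Claim`, `MemoryStore`, `resolve`) are illustrative inventions, not the actual CORE-MNX API: the sketch just shows the common pattern of keeping every extracted claim for history and resolving the current truth state by letting the most recent claim about a subject win.

```typescript
// Hypothetical sketch only — these types and methods are NOT the CORE-MNX API.
// A "claim" is a fact extracted from conversation, e.g. "user's favorite color is blue".

interface Claim {
  subject: string; // what the claim is about, e.g. "user.favorite_color"
  value: string;   // the claimed value, e.g. "blue"
  at: number;      // timestamp (ms) when the claim was made
}

class MemoryStore {
  private claims: Claim[] = [];

  // Record a claim. Contradictory claims are kept, preserving history.
  add(subject: string, value: string, at: number): void {
    this.claims.push({ subject, value, at });
  }

  // Truth-state resolution: the most recent claim about a subject wins.
  resolve(subject: string): string | undefined {
    return this.claims
      .filter((c) => c.subject === subject)
      .sort((a, b) => b.at - a.at)[0]?.value;
  }
}

// Example: the user first says their favorite color is blue, later green.
const store = new MemoryStore();
store.add("user.favorite_color", "blue", 1);
store.add("user.favorite_color", "green", 2);
console.log(store.resolve("user.favorite_color")); // "green"
```

A production engine like CORE-MNX layers much more on top of this (Postgres persistence, lifecycle management, event streaming), but "append claims, resolve a current truth state" is the core shape of the workflow the post describes.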