Ask HN: Is anyone using knowledge graphs to manage LLM memory/context?
I’m building infrastructure for LLM agents and copilots that need to reason and operate over time—not just in single prompts.

One core challenge I keep hitting: managing evolving memory and context. RAG works for retrieval, and scratchpads are fine for short-term reasoning—but once agents need to maintain structured knowledge, track state, or coordinate multi-step tasks, things get messy fast; the context becomes less and less interpretable.

I’m experimenting with a shared memory layer built on a knowledge graph (minimal sketch after this list):

- Agents can ingest structured/unstructured data into it
- Memory updates dynamically as agents act
- Devs can observe, query, and refine the graph
- It supports high-level task modeling and dependency tracking (pre/postconditions)
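For concreteness, here's roughly the shape I have in mind. This is a minimal, hypothetical sketch (GraphMemory, Task, and every method name here are illustrative, not a real library): facts live as edges in a graph, agents ingest and update it as they act, and tasks gate on pre/postconditions expressed as facts.

    from dataclasses import dataclass, field

    @dataclass
    class GraphMemory:
        nodes: dict = field(default_factory=dict)  # node id -> attribute dict
        edges: set = field(default_factory=set)    # (src, relation, dst) triples

        def ingest(self, node_id, **attrs):
            # Agents write structured facts; unstructured text would be
            # entity-extracted upstream before landing here.
            self.nodes.setdefault(node_id, {}).update(attrs)

        def relate(self, src, relation, dst):
            # Memory updates dynamically as agents act.
            self.edges.add((src, relation, dst))

        def query(self, relation=None):
            # Devs (or other agents) can inspect the graph directly.
            return [e for e in self.edges if relation is None or e[1] == relation]

    @dataclass
    class Task:
        name: str
        preconditions: list   # facts that must hold before the task can run
        postconditions: list  # facts asserted into memory on completion

        def ready(self, memory: GraphMemory) -> bool:
            return all(fact in memory.edges for fact in self.preconditions)

        def complete(self, memory: GraphMemory):
            for src, rel, dst in self.postconditions:
                memory.relate(src, rel, dst)

    # Usage: task dependencies gate execution against the shared graph.
    mem = GraphMemory()
    mem.ingest("invoice_42", type="document")
    mem.relate("invoice_42", "status", "received")

    approve = Task(
        name="approve_invoice",
        preconditions=[("invoice_42", "status", "received")],
        postconditions=[("invoice_42", "status", "approved")],
    )
    assert approve.ready(mem)
    approve.complete(mem)
    print(mem.query("status"))

In practice the ingest step would sit behind entity extraction for unstructured text, and query would be richer (path queries, neighborhood pulls for context packing), but the core idea is that task state and world state share one inspectable structure.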
My questions:
- Are you building agents that need persistent memory or task context?
- Have you tried structured memory (graphs, JSON stores, etc.) or stuck with embeddings/scratchpads?
- Would something like a graph-based memory actually help, or is it overkill for most real-world use?
I’m in the thick of validating this idea and would love to hear what’s working (or breaking) for others building with LLMs today.

Thanks in advance, HNers!