(Part 2) Unsevering Claude to my codebase: attempting persistent memory.
I killed my baby, and it was the best decision I ever made.
Only a few thousand of you saw my CAM post about the 10,000-line semantic memory interface with embeddings, knowledge graphs, and Claude hooks.
After about a week of using it, I found:
- It worked
- It was slow
What actually happened:
I spent some time building this elaborate memory infrastructure: vector database, SQLite, semantic search, auto-ingestion pipelines, relationship graphs. The whole nine yards.
And it worked! Claude remembered things! Problem solved, right?
Except...
Every session took 4+ seconds just to boot. Try running six ghostty sessions with mostly full context windows; I was basically watching Claude spin its wheels (a.k.a. eat up my RAM).
The thing I built to make Claude more "performant" was making Claude slower.
So I asked myself:
"Am I engineering around Claude, or working with it?"
The refactor:
I threw it all out and started over.
The new stack:
- Two bash scripts
- Global and per-project CLAUDE.md files
- Claude Code hooks
- That's it
Session starts → context loads from Markdown.
Session ends → state saves to Markdown.
No API calls, no database, no dependencies.
1,500 lines total.
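To give a sense of the shape, here is a minimal sketch of what a two-script setup like this could look like, wired to session-start/session-end hook events in Claude Code. This is my own illustration, not the author's actual code: the file paths, script name, and start/end convention are all assumptions; see the linked repo for the real implementation.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of a two-script memory layer. Paths, names, and the
# start/end convention here are assumptions, not the repo's actual code.
set -euo pipefail

GLOBAL_MEM="${HOME}/.claude/memory.md"   # assumed global memory file
PROJECT_MEM="./.claude/memory.md"        # assumed per-project memory file

load_context() {
  # Session start: print any existing memory files so a hook can feed
  # them to Claude as context.
  for f in "$GLOBAL_MEM" "$PROJECT_MEM"; do
    if [ -f "$f" ]; then
      printf '## Memory from %s\n\n' "$f"
      cat "$f"
      echo
    fi
  done
}

save_state() {
  # Session end: append whatever summary is piped in, with a timestamp.
  mkdir -p "$(dirname "$PROJECT_MEM")"
  {
    printf '### Session %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
    cat
    echo
  } >> "$PROJECT_MEM"
}

case "${1:-}" in
  start) load_context ;;
  end)   save_state ;;
  *)     echo "usage: $0 {start|end}" >&2; exit 1 ;;
esac
```

Loading is a cat, saving is an append; there is nothing to index, embed, or keep warm.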
The insight:
Agents don't need elaborate memory infrastructure.
They need a persistence layer that is:
- Simple enough to trust
- Light enough to ignore
- Powerful enough to persist
It turns out CLAUDE.md files + bash scripts + hooks can do everything the 10,000-line monster did, just... cleaner, faster, and more maintainable.
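To make "state saves to Markdown" concrete, the persisted layer can be as plain as a running log in a memory file. The snippet below is a made-up example of the idea, not the format the repo actually uses:

```markdown
## Project memory

### Session 2025-06-01
- Working on: retry logic in the billing worker
- Decided: exponential backoff, max 5 attempts
- Next: add an integration test for webhook timeouts
```

Because it's just Markdown, Claude reads it at session start like any other file.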
The philosophy shift:
I stopped trying to build around Claude's limitations and started building with Claude's strengths.
The original system was me trying to be clever and chase novelty (thanks, ADHD):
"Claude has no memory? Then I'll build an entire database!"
The new system is me being smart:
"Claude can read Markdown, and bash is blazing fast. Let's just use that."
Less infrastructure = fewer bottlenecks = more windows = more velocity.
Unsevered Memory:
That's what I'm calling it.
It solves the same problem with 93% less code. It's 10x faster and actually maintainable.
Sometimes the right move is subtraction, not addition.
Sometimes your 10,000-line "solution" is just over-engineering because you could.
---
TL;DR: I rewrote my entire Claude memory system, going from 10,000 lines with databases to 1,500 lines of Markdown files. It boots instantly now and runs six windows without lag. Simple beats clever every single time.
Link to the original severance thread, if you want to see how we got here: [https://www.reddit.com/r/ClaudeAI/comments/1phtii5/unsevering_claude_to_my_codebase_achieving/](https://www.reddit.com/r/ClaudeAI/comments/1phtii5/unsevering_claude_to_my_codebase_achieving/)
Link to the Git repo: [https://github.com/blas0/UnseveredMemory](https://github.com/blas0/UnseveredMemory)