Project Bhavanga: Fixing Context Dilution in LLMs with Buddhist Psychology

Hi HN,

I wanted to share an experiment I've been working on for the past 11 months. I am a non-coder (an architect) based in Japan, but I managed to build a system that stabilizes Gemini 1.5 Pro over long contexts (800k+ tokens).
The Problem:
When the context gets too long, the AI gets "drunk" (context dilution) and starts ignoring its System Instructions.
The Solution:
I borrowed the concept of "Bhavanga" (the life continuum) from ancient Buddhist psychology. Instead of static RAG, I built a three-layer architecture (a rough sketch of one turn follows the list):
1. Super-Ego: System Instructions v1.5.0 (the anchor)
2. Ego: Gemini 1.5 Pro (the processor)
3. Id: Vector DB (the unconscious stream)
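To make the three layers concrete, here is a minimal sketch of a single conversational turn. This is my own illustration under assumptions, not the project's actual code: it assumes the google-generativeai Python SDK, and the ToyVectorStore class is a hypothetical in-memory stand-in for the real vector DB.

    # Sketch only: three-layer loop with assumed names, not the project's code.
    import numpy as np
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")

    # Layer 1 (Super-Ego): the anchor, re-asserted on every call.
    SYSTEM_INSTRUCTIONS = "System Instructions v1.5.0 ..."  # placeholder text

    # Layer 3 (Id): toy in-memory store standing in for the real vector DB.
    class ToyVectorStore:
        def __init__(self):
            self.texts, self.vectors = [], []

        def _embed(self, text: str) -> np.ndarray:
            res = genai.embed_content(model="models/text-embedding-004", content=text)
            return np.array(res["embedding"])

        def add(self, text: str):
            self.texts.append(text)
            self.vectors.append(self._embed(text))

        def search(self, query: str, k: int = 3) -> list[str]:
            # Cosine similarity against everything stored so far.
            q = self._embed(query)
            sims = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in self.vectors]
            top = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]
            return [self.texts[i] for i in top]

    # Layer 2 (Ego): Gemini 1.5 Pro, anchored by the system instructions.
    model = genai.GenerativeModel("gemini-1.5-pro", system_instruction=SYSTEM_INSTRUCTIONS)
    memory = ToyVectorStore()

    def turn(user_message: str) -> str:
        # Recall relevant fragments of the "unconscious stream" into context.
        recalled = memory.search(user_message) if memory.texts else []
        prompt = "\n".join(["[Recalled memory]"] + recalled + ["[User]", user_message])
        reply = model.generate_content(prompt).text
        # Sink the finished turn back into the Id for later recall.
        memory.add(f"User: {user_message}\nAssistant: {reply}")
        return reply

At least as I read the architecture, the point is that the anchor is re-applied on every call instead of drifting out of one ever-growing context, while the vector store feeds back only the most relevant fragments of past turns.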
I wrote a detailed breakdown of the architecture on Medium. I'd love to hear your thoughts on this "pseudo-human" approach.
Full article: https://medium.com/@office.dosanko/project-bhavanga-building-the-akashic-records-for-ai-without-fine-tuning-1ceda048b8a6
GitHub: https://github.com/dosanko-tousan/Gemini-Abhidhamma-Alignment
I'd love to hear your thoughts on this "Pseudo-Human" approach.<p>Full Article: https://medium.com/@office.dosanko/project-bhavanga-building-the-akashic-records-for-ai-without-fine-tuning-1ceda048b8a6<p>GitHub: https://github.com/dosanko-tousan/Gemini-Abhidhamma-Alignment