An AI assistant with intelligent memory
Hi HN,

We are building Engram, an AI assistant with persistent memory that actually works across sessions. Unlike ChatGPT/Claude, which forget everything after each chat, Engram extracts and indexes facts, preferences, and context automatically.

The core problem: LLMs have great short-term memory but almost no long-term recall. If you tell Claude about a project in January, it won't remember in March unless you paste the whole conversation back in.

Our approach:

- Automatic memory extraction using a 14-factor importance-scoring algorithm (goals and commitments ranked above casual facts)
- Semantic retrieval via pgvector + OpenAI embeddings
- Cognitive profiling that learns your communication style from writing samples
- Multi-provider routing (e.g., Llama 3 for the free tier, Gemini for premium)

Technical stack: React + Supabase (PostgreSQL + pgvector), TypeScript throughout. We built an abstraction layer that handles provider failover and rate limiting.

Current status: Beta with ~300 users from UPenn (we won the Wharton Innovation Fund Build award). Day-30 retention is ~60%, which suggests the memory system is actually useful.

Try it: engramartificial.com

Built this because I was frustrated that my AI "assistant" couldn't remember what I told it yesterday. Would love to hear whether this resonates with others or whether I'm solving a non-problem.
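To make the importance-scoring idea concrete, here is a minimal sketch of weighted factor scoring. The actual Engram algorithm uses 14 factors that aren't listed in the post; the factor names, weights, and decay constant below are purely illustrative assumptions, shown only to demonstrate how goals/commitments can be ranked above casual facts.

```typescript
// Hypothetical sketch of weighted importance scoring for extracted memories.
// Factor names and weights are assumptions, not Engram's actual 14 factors.
type Memory = {
  text: string;
  isGoalOrCommitment: boolean; // goals/commitments rank higher
  isPreference: boolean;
  mentionsDate: boolean;
  recencyDays: number;
};

// Each factor maps a memory to a [0, 1] score; weights sum to 1.
const factors: Array<{ weight: number; score: (m: Memory) => number }> = [
  { weight: 0.4, score: (m) => (m.isGoalOrCommitment ? 1 : 0) },
  { weight: 0.2, score: (m) => (m.isPreference ? 1 : 0) },
  { weight: 0.2, score: (m) => (m.mentionsDate ? 1 : 0) },
  { weight: 0.2, score: (m) => Math.exp(-m.recencyDays / 30) }, // recency decay
];

function importance(m: Memory): number {
  return factors.reduce((sum, f) => sum + f.weight * f.score(m), 0);
}

const goal: Memory = {
  text: "Ship the beta by March",
  isGoalOrCommitment: true,
  isPreference: false,
  mentionsDate: true,
  recencyDays: 0,
};
const casual: Memory = {
  text: "I had pasta for lunch",
  isGoalOrCommitment: false,
  isPreference: false,
  mentionsDate: false,
  recencyDays: 0,
};

console.log(importance(goal), importance(casual)); // goal outranks casual fact
```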
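The retrieval step could look roughly like the sketch below. The `memories` table and column names are assumptions; pgvector's `<=>` operator (cosine distance, lower = more similar) and the bracketed vector literal format are real pgvector conventions. In practice the parameter would be an embedding of the user's message from OpenAI's embeddings API.

```typescript
// Sketch of semantic retrieval over a pgvector column.
// Table/column names ("memories", "embedding") are illustrative assumptions.
function retrievalQuery(limit: number): string {
  return `
SELECT id, text, 1 - (embedding <=> $1::vector) AS similarity
FROM memories
ORDER BY embedding <=> $1::vector
LIMIT ${limit}
  `.trim();
}

// pgvector accepts vectors as bracketed comma-separated literals.
function toVectorLiteral(embedding: number[]): string {
  return `[${embedding.join(",")}]`;
}

// The literal would be passed as $1 via a Postgres client (e.g. node-postgres).
console.log(retrievalQuery(5));
console.log(toVectorLiteral([0.1, 0.2, 0.3])); // "[0.1,0.2,0.3]"
```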
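The provider-failover abstraction mentioned above can be sketched as an ordered list of fallbacks. The `Provider` interface and `completeWithFailover` helper are assumptions for illustration; only the tier split (Llama 3 free, Gemini premium) comes from the post.

```typescript
// Minimal sketch of provider failover: try each provider in order,
// falling through to the next on failure (rate limit, outage, etc.).
// The interface below is an assumed shape, not Engram's actual API.
type Provider = {
  name: string;
  complete: (prompt: string) => Promise<string>;
};

async function completeWithFailover(
  providers: Provider[],
  prompt: string
): Promise<string> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return await p.complete(prompt);
    } catch (err) {
      lastError = err; // record and try the next provider
    }
  }
  throw lastError;
}

// Example with mock providers: the first fails, the second answers.
const flaky: Provider = {
  name: "llama-3",
  complete: async () => {
    throw new Error("rate limited");
  },
};
const backup: Provider = {
  name: "gemini",
  complete: async (prompt) => `echo: ${prompt}`,
};

completeWithFailover([flaky, backup], "hello").then(console.log); // "echo: hello"
```

A real implementation would also track per-provider rate limits and pick the routing order by tier, but the fall-through loop is the core of the pattern.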