Which is the bigger problem for AI agents: context sharing or prompting?

By exclusivewombat, 9 months ago
Been building with LLM-based agents lately and keep running into a few recurring challenges:

1/ Prompting - making sure agents behave how you want without overly long, fragile instructions

2/ Context sharing - passing memory, results, and state across time or between agents w/o flooding the system

3/ Cost - tokens get expensive fast, especially as things scale

Curious what others think is the real bottleneck here, and any tips/tricks for solving this. Are you optimizing around token limits, memory persistence, better prompt engineering?

Would love to hear how you're thinking about this or if there's a smarter approach we're all missing. ty in advance!
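One common tactic that touches both the context-sharing and cost points above is to keep a rolling message window under a fixed token budget, folding evicted turns into a summary. A minimal sketch (my own illustration, not from the post) — the ~4-chars-per-token estimate and the placeholder summary are assumptions; in practice you'd use a real tokenizer and an LLM-generated summary:

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate (~4 chars/token); swap in a real tokenizer."""
    return max(1, len(text) // 4)

def trim_context(messages: list[str], budget: int) -> list[str]:
    """Keep the newest messages that fit the token budget, replacing
    the evicted older turns with a single placeholder summary entry."""
    kept: list[str] = []
    total = 0
    # Walk newest-first so recent turns survive eviction.
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    kept.reverse()
    dropped = len(messages) - len(kept)
    if dropped:
        # A real system would summarize the dropped turns with an LLM call;
        # the placeholder marks where that summary would be injected.
        kept.insert(0, f"[summary of {dropped} earlier messages]")
    return kept
```

Usage: pass the shared history through `trim_context` before each agent call, so token spend stays bounded while a summary stub preserves some cross-turn state. The trade-off is lossy memory, which is exactly the tension the post describes.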