LLMs are powerful, but enterprises are fundamentally deterministic
Over the last year, we’ve been experimenting with LLMs inside enterprise systems.

What keeps surfacing is a fundamental mismatch: LLMs are probabilistic and non-deterministic, while enterprises are built on predictability, auditability, and accountability.

Most current approaches try to “tame” LLMs with prompts, retries, or heuristics. That works for demos, but starts breaking down when you need explainability, policy enforcement, or post-incident accountability.

We’ve found that treating LLMs as suggestion engines rather than decision makers changes the architecture completely. The actual execution needs to live in a deterministic control layer that can enforce rules, log decisions, and fail safely.

Curious how others here are handling this gap between probabilistic AI and deterministic enterprise systems. Are you seeing similar issues in production?
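The control-layer idea above can be sketched in a few lines. This is a minimal illustration, not the author's implementation: the `Suggestion` type, the refund example, and the `MAX_AUTO_REFUND` policy are all hypothetical stand-ins. The point is that the LLM's output only ever enters as data, while enforcement, logging, and the fail-safe path are plain deterministic code.

```python
import json
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("control-layer")

# Hypothetical policy: refunds above this amount always need a human.
MAX_AUTO_REFUND = 100.0

@dataclass
class Suggestion:
    """What the LLM proposes -- never what actually executes."""
    action: str
    amount: float
    rationale: str

def control_layer(suggestion: Suggestion) -> str:
    """Deterministic gate: enforce rules, log the decision, fail safely."""
    # Every suggestion is logged before any decision, for auditability.
    log.info("LLM suggested: %s", json.dumps(suggestion.__dict__))

    # Rule enforcement is ordinary code, so it is repeatable and explainable.
    if suggestion.action not in {"refund", "deny"}:
        log.warning("Unknown action %r; failing safe", suggestion.action)
        return "escalate_to_human"  # fail safe, not fail open
    if suggestion.action == "refund" and suggestion.amount > MAX_AUTO_REFUND:
        log.info("Refund %.2f exceeds policy cap", suggestion.amount)
        return "escalate_to_human"

    log.info("Executing: %s", suggestion.action)
    return suggestion.action

# The model's output is only an input; the layer makes the actual decision.
print(control_layer(Suggestion("refund", 250.0, "customer churned")))
print(control_layer(Suggestion("refund", 20.0, "late delivery")))
```

Because the gate is deterministic, the same suggestion always yields the same decision and the same log line, which is what post-incident accountability actually requires.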