AGI Is Mathematically Impossible (3): Kolmogorov Complexity

Author: ICBTheory · 7 months ago · Original post
Hi folks. This is the third part of an ongoing theory I've been developing over the last few years, called the Infinite Choice Barrier (ICB). The core idea is simple:

General intelligence—especially AGI—is structurally impossible under certain epistemic conditions.

Not morally, not practically. Mathematically.

The argument splits across three barriers:

1. Computability (Gödel, Turing, Rice): you can't decide what your system can't see.
2. Entropy (Shannon): beyond a certain point, signal breaks down structurally.
3. Complexity (Kolmogorov, Chaitin): most real-world problems are fundamentally incompressible.

This paper focuses on (3), Kolmogorov complexity. It argues that most of what humans care about is not just hard to model but formally unmodellable, because the shortest description of the problem is the problem itself.

In other words: you can't generalize from what can't be compressed. (The counting argument and compression sketch at the end of this post illustrate the point.)

⸻

Here's the abstract:

There is a common misconception that artificial general intelligence (AGI) will emerge through scale, memory, or recursive optimization. This paper argues the opposite: as systems scale, they approach the structural limit of generalization itself. Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.

This is not a performance issue. It's a mathematical wall. And it doesn't care how many tokens you've got.

The paper isn't light, but it's precise. If you're into limits, structures, and why most intelligence happens outside of optimization, it might be worth your time.

https://philpapers.org/archive/SCHAII-18.pdf

Happy to read your views.
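For anyone who wants to see why "most strings are incompressible" is literally true, here is the standard counting argument from algorithmic information theory. This is textbook material, not an excerpt from the linked paper:

```latex
% There are 2^n binary strings of length n, but fewer than 2^{n-c}
% descriptions (programs) of length below n - c, since
% 2^0 + 2^1 + ... + 2^{n-c-1} = 2^{n-c} - 1.
% So at most a 2^{-c} fraction of length-n strings can be
% compressed by c or more bits:
\[
\#\{\, x \in \{0,1\}^n : K(x) < n - c \,\} \;\le\; 2^{n-c} - 1
\quad\Longrightarrow\quad
\Pr_{x \in \{0,1\}^n}\bigl[\, K(x) \ge n - c \,\bigr] \;>\; 1 - 2^{-c}.
\]
```

So already for c = 10, more than 99.9% of strings cannot be shortened by even 10 bits.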
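And here is a minimal, self-contained sketch of the intuition in Python. To be clear, this is my own illustration, not code from the paper: true Kolmogorov complexity K(x) is uncomputable (Chaitin), so zlib is used here only as a crude, computable proxy for "has exploitable structure."

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size.

    zlib is only a stand-in for Kolmogorov complexity, which is
    uncomputable; a ratio near (or slightly above) 1.0 means this
    particular compressor found no structure to exploit.
    """
    return len(zlib.compress(data, 9)) / len(data)

# Highly structured: the short rule "repeat 'ab' 5000 times" describes it fully.
structured = b"ab" * 5000

# Pseudorandom: no rule much shorter than the data itself is likely to exist.
rng = random.Random(42)
noisy = bytes(rng.randrange(256) for _ in range(10_000))

print(f"structured: {compression_ratio(structured):.4f}")  # close to 0
print(f"random:     {compression_ratio(noisy):.4f}")       # roughly 1
```

On a typical run the structured string compresses to a fraction of a percent of its size, while the pseudorandom one stays at roughly 100%: a compressor, like any finite learner, can only exploit regularity that is actually there.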