AGI is marketed as Spearman's "g", but its architecture resembles Guilford's model

Author: jatinkk, about 1 month ago (original post)
I am not a tech expert and do not work in the tech industry, so this is an outsider's perspective.

The marketing around AGI promises Spearman's g: a general, fluid intelligence that can adapt to new, unseen problems. But the engineering, specifically "Mixture of Experts" and other distinct modules, looks a lot like J.P. Guilford's Structure of Intellect. Guilford viewed intelligence as a collection of roughly 150 specific, independent abilities.

The issue isn't just how these parts are stitched together. The issue I see is this: what happens when the model faces a problem that doesn't fit any of its pre-defined parts? How will they ensure the output doesn't look fragmented when the architecture relies on switching between specialized "experts" rather than a unified reasoning core?

A collection of specific skills (Guilford) is not the same as the ability to adapt to anything (Spearman). By optimizing for specific components, we are building a system that is great at known tasks but may fundamentally lack the fluid reasoning needed for true general intelligence. I am not anti-AI; I simply feel we might need to revisit our approach. We can't expect to reach the right destination on the wrong highway.
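For readers unfamiliar with the mechanism the post refers to, the "switching between specialized experts" can be illustrated with a minimal, hypothetical Mixture-of-Experts sketch in Python. This is a toy with random weights and made-up dimensions, not any production model's architecture: a router scores the experts for each input, and only the top-k experts actually process it.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

class MoELayer:
    """Toy Mixture-of-Experts layer (illustrative only): a learned router
    scores all experts for an input, and only the top_k best-matching
    experts run; their outputs are mixed by the (renormalized) gate."""

    def __init__(self, n_experts=4, dim=8, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.router = rng.standard_normal((dim, n_experts))  # gating weights
        self.experts = [rng.standard_normal((dim, dim)) for _ in range(n_experts)]
        self.top_k = top_k

    def forward(self, x):
        scores = softmax(x @ self.router)        # router's confidence per expert
        chosen = np.argsort(scores)[-self.top_k:]  # indices of the top_k experts
        gate = scores[chosen] / scores[chosen].sum()  # renormalize over chosen
        # Only the chosen experts compute anything; the rest stay idle.
        out = sum(g * (x @ self.experts[i]) for g, i in zip(gate, chosen))
        return out, chosen

layer = MoELayer()
out, chosen = layer.forward(np.ones(8))
print("experts consulted:", chosen)  # only top_k of the n_experts fire
```

The post's question maps directly onto the `chosen` line: an input that matches none of the experts well still gets routed to whichever score highest, which is where the worry about fragmented or ill-fitting outputs comes from.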