Conceptualization is all you need.

Author: vayllon · 7 months ago
Conceptualization plays a crucial role in the development of Artificial General Intelligence (AGI), as it will enable that super-intelligence to understand and handle abstract information in a manner similar to how humans do. But what is a concept from a neuronal point of view?

A concept can be defined as a latent, abstract, multi-modal representation that integrates sensory information from different sources (sight, hearing, smell, touch), encoded in a hyper-dimensional space whose structure emerges from the activation patterns of many neurons.

This representation is compositional, in the sense that it is formed by hierarchical combinations of simpler representations, and relational, in that it is topologically connected to similar or functionally related concepts, depending on how the network was trained.

As with linguistic embeddings, the "distance" between concepts in latent space reflects their degree of similarity: the closer they are, the more they share in meaning, structure, or function. (A toy sketch of this idea appears at the end of the post.)

Many concepts are associated with a linguistic label, but this is not strictly necessary; there are many things we cannot name. On the other hand, linguistic labels, that is, words, are themselves concepts, also encoded in that neural network's latent vector space. It could not be otherwise, since language is processed in the same network.

In other words, our concepts are not purely linguistic. They are grounded in direct experience: what we see, hear, touch, smell, imagine, or live through in the real world. Language serves as a tool to refer to and share these concepts, but it does not define them by itself. The foundation of conceptual thinking is multi-modal, not merely verbal; words are just one modality among others, a meta-modality.

Interestingly, we have studied language in great depth, to the point of creating an entire discipline dedicated to its meaning: semantics. Yet we have paid very little attention to the process of conceptualization. We do not even have a word for such a discipline; there is no "conceptmatics" or equivalent. Everything that escapes the linguistic realm, everything that cannot easily be reduced to words or equations, seems to slip through our fingers like water: difficult to catch, difficult to understand and analyze. Only with the advent of artificial neural networks have we begun to understand conceptualization.
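To make the "distance in latent space" and compositionality claims concrete, here is a minimal sketch in Python. The concept names and three-dimensional vectors are hand-picked toy values, not output from any real model, and cosine similarity is just one common measure of closeness assumed here; actual concept embeddings are high-dimensional and learned by a trained network.

```python
import numpy as np

# Hypothetical concept embeddings in a shared latent space.
# Toy 3-d values chosen by hand for illustration only; real
# embeddings are high-dimensional and learned, not assigned.
concepts = {
    "dog":   np.array([0.90, 0.80, 0.10]),
    "wolf":  np.array([0.85, 0.75, 0.20]),
    "chair": np.array([0.10, 0.20, 0.90]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; 1.0 = same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Nearby concepts share more in meaning, structure, or function.
print(cosine_similarity(concepts["dog"], concepts["wolf"]))   # high (~1.0)
print(cosine_similarity(concepts["dog"], concepts["chair"]))  # low  (~0.3)

# Compositionality, crudely: a new representation formed by
# combining simpler ones (here just an average, purely illustrative).
husky = 0.5 * (concepts["dog"] + concepts["wolf"])
print(cosine_similarity(husky, concepts["dog"]))              # very high
```

Cosine similarity is only one possible choice; Euclidean distance or a learned metric would serve the same argument. The point survives any of them: geometry in the latent space carries semantic relatedness.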