A New Kind of Computing

Author: amazedsaint · 3 months ago · original post
I'm waiting to get some patents through for a new type of computing paradigm we are building, but I'd like to share a few learnings and insights anyway.

Can a computer compute any algorithm in this universe? Is that even true once we remember the universe has a finite memory and every irreversible bit flip turns into heat?

I take the abstract claim of universality as a guiding ideal, then ask what survives contact with Landauer and Bekenstein. Only computations that fit the energy and memory budget actually happen, so the winning style must squeeze meaning out of every bit and minimize erasure.

That pushed me to a simple lens. Computation rides on connectivity, not on coordinates. If I draw a circuit on a rubber sheet and stretch or crumple the sheet without cutting a wire, the program does not change. Only rewiring changes behavior. In other words, logic lives in topological classes of the wiring, while geometry is costume.

Once constraints are encoded as topological invariants, energy-based model-predictive flows on graphs beat stepwise generation.

Think of a ball rolling on a landscape carved by the network itself. The search does not waste bits exploring directions the wiring already forbids. Guidance replaces guesswork, so you move less entropy to reach the same answer.

Thermodynamics agrees as well. Because we only pay when we erase information, I organize computations to be mostly reversible, push entropy to the boundary, and erase only when learning truly demands forgetting. Throughput then scales with the boundary more than with the bulk, echoing holography.

With a finite cosmic memory, the smart move is to store structure rather than raw state, compressing meaning into invariants that a boundary can carry.

Message passing turns out to be gauge transport in plain clothes. Treat edges as connections and cycles as holonomy loops. The accumulated phase around a loop enforces global consistency without a global overseer.
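To make the holonomy idea concrete, here is a toy sketch (my own illustration, not the patented system): edges of a graph carry unit-magnitude complex phases as a U(1) connection, and the holonomy of a cycle is the product of phases around it. When the phases cancel, the loop is globally consistent even though no node ever sees more than its own edges.

```python
import cmath

# Toy graph: each edge carries a unit-magnitude complex "phase"
# (a U(1) connection). Traversing an edge backwards conjugates the phase.
edges = {
    ("a", "b"): cmath.exp(1j * 0.3),
    ("b", "c"): cmath.exp(1j * 0.5),
    ("c", "a"): cmath.exp(1j * -0.8),
}

def phase(u, v):
    """Phase picked up moving u -> v (conjugate if the edge is stored v -> u)."""
    if (u, v) in edges:
        return edges[(u, v)]
    return edges[(v, u)].conjugate()

def holonomy(cycle):
    """Accumulated phase around a closed loop of nodes."""
    h = 1 + 0j
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):
        h *= phase(u, v)
    return h

# 0.3 + 0.5 - 0.8 = 0, so the phases cancel and the loop is "flat":
print(abs(holonomy(["a", "b", "c"]) - 1) < 1e-9)  # True
```

Note that consistency is a property of the loop, not of any single edge: perturbing one phase while compensating on another leaves the holonomy, and hence the global constraint, unchanged.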
Small perturbations do not matter unless they jump the system to a different topological class, which is why robustness emerges from the wiring itself.

So can a computer compute any algorithm in this universe? In the abstract sense, yes. In the physical sense, it can compute the subclass that fits within time, energy, and memory, and it does best when it leans on topology. I now see useful computation as homotopy search: start in the right class, deform within it, and rewire only when you choose a different class.

That respects the information limit of the universe and trades costly bits for conserved structure, which is why I believe it is true.
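The energy budget invoked above can be made concrete with Landauer's bound: erasing one bit dissipates at least k_B · T · ln 2 of heat. A quick back-of-the-envelope check (the 10^20 bits/s workload is a made-up figure for illustration):

```python
import math

# Landauer's principle: erasing one bit costs at least k_B * T * ln 2 in heat.
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0           # room temperature, K

cost_per_bit = k_B * T * math.log(2)   # ~2.87e-21 J per erased bit

# A hypothetical workload erasing 1e20 bits per second:
power_floor = cost_per_bit * 1e20      # ~0.29 W dissipated, as a hard floor
print(f"{cost_per_bit:.3e} J/bit, {power_floor:.2f} W minimum")
```

The floor is tiny per bit, which is exactly the author's point: real machines pay orders of magnitude more, so the leverage lies in erasing fewer bits, by keeping computation mostly reversible, rather than in erasing them more efficiently.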