Ask HN: How do you measure how much data a person can effectively process or understand?
Is there a scale of how much data one can effectively process, something similar to a "Kardashev scale for data"? What would be a good name for such a thing? During Memgraph's Community Call (https://youtu.be/ygr8yvIouZk?t=1307), the point made is that AgenticRuntimes + GraphRAG move you up the "Kardashev scale for data," because you can suddenly get much more insight from any dataset, and everyone can use it (no large corporation controls it). I found something similar at https://adamdrake.com/from-enterprise-decentralization-to-tokenization-and-beyond.html#productize, but the definition/example there looks very narrow.