Ask HN: When do we expose "humans as tools" so LLM agents can call us on demand?
Serious question.

We're building agentic LLM systems that can plan, reason, and call tools via MCP. Today those tools are APIs, but many real-world tasks still require humans.

So… why not expose humans as tools?

Imagine TaskRabbit or Fiverr running MCP servers where an LLM agent can:

- Call a human for judgment, creativity, or physical actions
- Pass structured inputs
- Receive structured outputs back into its loop

At that point, humans become just another dependency in an agent's toolchain: slower and more expensive, but occasionally necessary.

Yes, this sounds dystopian. Yes, it treats humans as "servants for AI." That's kind of the point. It already happens manually... this just formalizes the interface.

Questions I'm genuinely curious about:

- Is this inevitable once agents become default software actors? (As of basically now?)
- What breaks first: economics, safety, human dignity, or regulation?
- Would marketplaces ever embrace being "human execution layers" for AI?

Not sure if this is the future or a cursed idea we should actively prevent... but it feels uncomfortably plausible.
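To make the idea concrete, here's a rough sketch of what the structured-input/structured-output loop above could look like. This is purely illustrative: the tool name `human_task`, its schema fields, and the `call_human` stub are all hypothetical, not the actual MCP tool schema or any marketplace's real API.

```python
import uuid

# Hypothetical tool descriptor an MCP-style server might advertise
# for delegating work to a human worker. All names/fields are
# illustrative assumptions, not the real MCP spec.
HUMAN_TASK_TOOL = {
    "name": "human_task",
    "description": "Delegate a task to a human worker on a marketplace.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "instructions": {"type": "string"},
            "skill": {"type": "string"},          # e.g. "copywriting"
            "max_price_usd": {"type": "number"},
            "deadline_hours": {"type": "number"},
        },
        "required": ["instructions", "skill"],
    },
}

def call_human(args: dict) -> dict:
    """Stub dispatcher: a real server would post the job to the
    marketplace, then let the agent poll or receive a callback
    once the human finishes. Here we just return a pending handle."""
    return {
        "job_id": str(uuid.uuid4()),
        "status": "pending",
        "estimated_wait_minutes": 45,  # humans are the slow path
    }

# The agent's loop treats the human exactly like any other tool call,
# except the result arrives asynchronously.
result = call_human({"instructions": "Write a tagline", "skill": "copywriting"})
print(result["status"])
```

The key design point the sketch surfaces: unlike an API call, the human response is asynchronous and unbounded in latency, so the interesting engineering is in the pending/callback handling, not the schema.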