Launch HN: Dedalus Labs (YC S25) – Vercel for Agents
Hey HN! We are Windsor and Cathy of Dedalus Labs (https://www.dedaluslabs.ai), a cloud platform for developers to build agentic AI applications. Our SDK lets you connect any LLM to any MCP tools – local or hosted by us. No Dockerfiles or YAML configs required.
Here's a demo: https://youtu.be/s2khf1Monho?si=yiWnZh5OP4HQcAwL&t=11
Last October, I (Windsor) was trying to build a stateful code execution sandbox in the cloud that LLMs could tool-call into. This was before MCP was released, and let's just say it was super annoying to build… The entire time I kept thinking, "Why can't I just pass `tools=code_execution` to the model and have it… work?"
Even with MCP, you're stuck running local servers and hand-wiring API auth and formatting across OpenAI, Anthropic, Google, etc. before you can ship anything. Every change means redeploys, networking configs, and hours lost wrangling AWS. Hours of reading docs and wrestling with cloud setup is not what you want when building your product!
Dedalus simplifies this to a single API endpoint, so what used to take two weeks of setup can take five minutes. You upload streamable HTTP MCP servers to our platform; once deployed, we offer OpenAI-compatible SDKs that you can drop into your codebase to use MCP-powered LLMs. The idea is to let anyone, anywhere, equip their LLMs with powerful tools for function calling.
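For a sense of what a "streamable HTTP MCP server" looks like in practice, here is a minimal one built with the official MCP Python SDK. The `add` tool is just a placeholder; any server exposing tools over the streamable HTTP transport has the same shape:

    # Minimal streamable HTTP MCP server (official `mcp` Python SDK).
    # The tool below is a stand-in for whatever you actually want to expose.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-server")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two numbers."""
        return a + b

    if __name__ == "__main__":
        mcp.run(transport="streamable-http")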
The code you write looks something like this (Python):
    # Import paths are taken from the dedalus-sdk-python README and may drift between versions.
    from dedalus_labs import Dedalus, DedalusRunner
    from dedalus_labs.utils.streaming import stream_sync

    client = Dedalus()
    runner = DedalusRunner(client)
    result = runner.run(
        input=prompt,
        tools=[tool_1, tool_2],
        mcp_servers=["author/server-1", "author/server-2"],
        model=["openai/gpt-4.1", "anthropic/claude-sonnet-4-20250514"],  # defaults to the first model in the list
        stream=True,
    )
    stream_sync(result)  # streams the result; supports tool calling too
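In the snippet, `tools` holds plain local functions while `mcp_servers` references servers hosted on our platform; the two can be mixed in a single run, which is what "local or hosted by us" means above.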
Our docs start at https://docs.dedaluslabs.ai. Here's a simple Hello World example: https://docs.dedaluslabs.ai/examples/01-hello-world. For basic tool execution, see https://docs.dedaluslabs.ai/examples/02-basic-tools. There are lots more examples on the site, including more complex ones like using the Open Meteo MCP to do weather forecasts: https://docs.dedaluslabs.ai/examples/use-case/weather-forecaster.
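If you want the shortest possible start, the Hello World boils down to something like the sketch below. The `final_output` attribute and the environment-variable behavior are assumptions from the SDK README, so treat the docs as authoritative:

    # Minimal non-streaming call. DEDALUS_API_KEY in the environment and the
    # `final_output` attribute are assumptions; see the Hello World docs.
    from dedalus_labs import Dedalus, DedalusRunner

    client = Dedalus()
    runner = DedalusRunner(client)
    result = runner.run(input="Say hello!", model="openai/gpt-4.1")
    print(result.final_output)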
There are still a bunch of issues in the MCP landscape, no doubt. One big one is authentication (my team and I joke that the "S" in MCP stands for "security"). Right now, MCP servers are expected to act as both the authentication server *and* the resource server. That's tricky to implement correctly, and it's too much to ask of server writers: most people just want to expose a resource endpoint and be done.
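To make that concrete, the pattern server authors actually want is the resource-server half only: verify a bearer token minted by a separate authorization server, then serve the resource. A hand-wavy sketch (illustrative, not our actual auth solution; the key, audience, and header plumbing are made up):

    # Resource-server-only validation with PyJWT: the MCP server checks tokens
    # issued elsewhere instead of implementing OAuth flows itself.
    import jwt  # PyJWT

    AUTH_SERVER_PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----..."  # published by the issuer

    def authorize(headers: dict) -> dict:
        token = headers.get("Authorization", "").removeprefix("Bearer ")
        # Raises if the signature, audience, or expiry is invalid.
        return jwt.decode(
            token,
            AUTH_SERVER_PUBLIC_KEY,
            algorithms=["RS256"],
            audience="my-mcp-server",
        )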
Still, we are bullish on MCP. The current shortcomings aren't irrecoverable, and we expect future amendments to resolve most of the qualms people have today. We think useful AI agents are bound to be habitual tool callers, and MCP is a pretty decent way to equip models with tools.
We aren't *quite* at the stateful code execution sandbox I wanted last October, but we're getting there! Shipping secure, stateful MCP servers is high on our priority list, and we'll be launching our auth solution next month. We're also working on an MCP marketplace to let people monetize their tools while we handle billing and rev-share.
We're big on open sourcing things and have these SDKs so far (MIT licensed):
<p><a href="https://github.com/dedalus-labs/dedalus-sdk-python" rel="nofollow">https://github.com/dedalus-labs/dedalus-sdk-python</a></p>
<p><a href="https://github.com/dedalus-labs/dedalus-sdk-typescript" rel="nofollow">https://github.com/dedalus-labs/dedalus-sdk-typescript</a></p>
<p><a href="https://github.com/dedalus-labs/dedalus-sdk-go" rel="nofollow">https://github.com/dedalus-labs/dedalus-sdk-go</a></p>
<p><a href="https://github.com/dedalus-labs/dedalus-openapi" rel="nofollow">https://github.com/dedalus-labs/dedalus-openapi</a></p>
We'd love to hear what you think the biggest barriers are that keep you from integrating MCP servers or using tool-calling LLMs in your current workflow.
Thanks HN!