Launch HN: Morph (YC S23) – Apply AI code edits at 4,500 tokens/sec
Hey HN, I'm Tejas at Morph. We've built a blazing-fast model that applies AI-generated code edits directly to your files at 4,500+ tokens/sec. No more slow full-file rewrites or brittle search-and-replace hacks.
Why? AI-generated code can't reliably be inserted into existing code: full-file rewrites are slow and expensive, and search-and-replace hacks are error-prone.
Morph's approach:
- Your agent outputs edits "lazily", referencing unmodified spans of the existing file (e.g. // ...existing code...)
- Morph instantly applies these edits using our Fast Apply model plus speculative decoding against the original file, making AI patches fast, reliable, and production-ready (see the sketch after this list).
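To make that concrete, here's what a lazy edit might look like. The file, function, and helper names are invented for illustration; the only load-bearing piece is the `// ...existing code...` sentinel:

```typescript
// Original file: src/auth.ts (hypothetical)
export function login(user: string, pass: string) {
  const token = issueToken(user);
  logAccess(user);
  return token;
}

// Lazy edit emitted by the agent: only the changed lines are written out;
// everything to be preserved is referenced via the sentinel comment.
// ...existing code...
export function login(user: string, pass: string) {
  if (!pass) throw new Error("password required");
  // ...existing code...
}
```

The original file is also what makes the merge fast: because most of the merged output is identical to the input, speculative decoding can use the original as its draft and accept long unchanged runs in a single verification pass instead of generating them token by token.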
This approach was pioneered by Cursor last year, but their models aren't available as an API, so we built Morph for developers everywhere (with a large free tier!)
Live demo (no signup): [https://morphllm.com/dashboard](https://morphllm.com/dashboard) and docs: [https://docs.morphllm.com/quickstart](https://docs.morphllm.com/quickstart)
We have two Fast Apply models: morph-v3-fast (4,500+ tok/sec) and morph-v3-large (2,500+ tok/sec). They power Fast Apply at create.xyz, databutton, continue.dev, and more!
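Calling it looks roughly like this. A minimal sketch: Morph speaks the OpenAI chat-completions protocol, so the stock SDK works with a custom baseURL; treat the endpoint and message tags here as illustrative and see the quickstart for the canonical shape:

```typescript
import OpenAI from "openai";

// Sketch only: endpoint and message format follow our quickstart;
// check https://docs.morphllm.com/quickstart for the exact shape.
const client = new OpenAI({
  apiKey: process.env.MORPH_API_KEY,
  baseURL: "https://api.morphllm.com/v1",
});

// originalFile is the current file contents; lazyEdit is the agent's
// abbreviated edit with "// ...existing code..." sentinels.
async function fastApply(originalFile: string, lazyEdit: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "morph-v3-fast", // or "morph-v3-large" for trickier merges
    messages: [
      {
        role: "user",
        content: `<code>${originalFile}</code>\n<update>${lazyEdit}</update>`,
      },
    ],
  });
  // The response is the complete merged file, ready to write back to disk.
  return res.choices[0].message.content ?? "";
}
```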
We also provide retrieval models for embedding and reranking.
Next up: an Inline Edit model (Cmd-K) for extremely fast inline edits that keep you in flow, and the Morph Tab API, our next-edit-prediction model that guesses your next code edit and action with sub-500ms latency. It's currently in private beta, but you can request early access here: [https://morphllm.com/tab](https://morphllm.com/tab)
Hot takes:
1) For developer UX, raw inference speed matters more than incremental accuracy gains. Agree or disagree?
2) Full-file rewrites by frontier models are legacy: Fast Apply edits win on speed, cost, and reliability.
3) As benchmarks on narrow tasks saturate past 99%, complexity is shifting from single frontier models to specialized, inference-optimized models. As frontier models move upmarket, they'll leave simple tasks behind and be reserved for work only frontier models can do.
We'd love to hear your ideas and experiences with coding agents!
[https://youtu.be/LdT8epGHJPk](https://youtu.be/LdT8epGHJPk)
– Tejas & the Morph team