Ask HN: Is replacing an enterprise product with LLMs a realistic strategy?
I'm looking for perspectives from people who have actually built or operated long-lived enterprise software.
Context (kept intentionally generic):
We have a mature, revenue-generating enterprise application that's been in production for years.
Semi-technical leadership (with no engineering background) is aggressively considering spinning up a new product, built using LLM-driven tools (AI code generation, rapid prototyping, etc.), with the belief that:
- modern AI tooling dramatically reduces build cost, and LLMs will keep improving
- the new system attempts to replicate most of what an established competitor built over ~10 years
- customers can optionally migrate over time (the old system remains supported)
- it will be a software-only product that replaces all of the current application's operational complexity, with the goal of making it a resellable product
- early vibe-coded demos built with LLM tools are a good proxy for eventual production readiness
The pitch to ownership is that this can be done much faster and cheaper than historically required, largely because "AI changes the economics of building software."
I'm not anti-LLM; I use them daily and see real productivity gains. My concern is more structural:
- LLMs seem great at accelerating scaffolding and iteration, but it's unclear how much they reduce:
  - operational complexity
  - data correctness issues
  - migration risk
  - long-tail customer edge cases
  - support and accountability costs
- Demos look convincing, but they don't surface failure modes.
- It feels like we're comparing the end state of a mature competitor to the initial build cost of a greenfield system.
I'm trying to sanity-check my own thinking.
Questions for the community:
- Have you seen LLM-first rebuilds of enterprise products succeed in practice?
- Where does the "cheap and fast" narrative usually break down?
- Does AI materially change the long-term cost curve, or mostly the early velocity?
- If you were advising non-technical owners, what risks would you insist they explicitly acknowledge?
- Is there a principled way to argue for or against this strategy without sounding like "the legacy pessimist"?
I'm especially interested in answers from:
- people who have owned production systems at scale
- founders who attempted full or partial rewrites
- engineers who joined AI-first greenfield efforts after demos were already sold
Appreciate any real-world experiences, success stories, or cautionary tales.