Launch HN: Parachute (YC S25) – Guardrails for clinical AI

13 points by ariavikram 19 days ago
Hi HN, Aria and Tony here, co-founders of Parachute (https://www.parachute-ai.com/). We're building governance infrastructure that lets hospitals safely evaluate and monitor clinical AI at scale.

Hospitals are racing to adopt AI. More than 2,000 clinical AI tools hit the U.S. market last year, from ambient scribes to imaging models. But new regulations (HTI-1, the Colorado AI Act, California AB 3030, the White House AI Action Plan) require auditable proof that these models are safe, fair, and continuously monitored.

The problem is that most hospital IT teams can't keep up. They can't vet every vendor, run stress tests, and monitor models 24/7. As a result, promising tools die in pilot hell while risk exposure grows.

We saw this firsthand while deploying AI at Columbia University Irving Medical Center, so we built Parachute. Columbia is now using it to track live AI models in production.

How it works: First, Parachute evaluates vendors against a hospital's clinical needs and flags compliance and security risks before a pilot even begins. Next, we run automated benchmarking and red-teaming to stress-test each model and uncover risks like hallucinations, bias, or safety gaps.

Once a model is deployed, Parachute continuously monitors its accuracy, drift, bias, and uptime, sending alerts the moment thresholds are breached. Finally, every approval, test, and runtime change is sealed into an immutable audit trail that hospitals can hand directly to regulators and auditors. (Toy sketches of what a threshold check and a tamper-evident audit log can look like follow at the end of this post.)

We'd love to hear from anyone with hospital experience who has an interest in deploying AI safely. We look forward to your comments!
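To make the monitoring step concrete, here is a minimal sketch in Python of a threshold check over a rolling window of per-case scores. This is our illustration, not Parachute's actual code; every name in it (ModelMonitor, record, alert) is hypothetical:

    # Hypothetical sketch; not Parachute's actual implementation.
    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class ModelMonitor:
        """Rolling-window monitor: alert when a metric breaches its threshold."""
        metric: str                     # e.g. "accuracy" or a drift statistic
        threshold: float                # breach level agreed with the hospital
        window: int = 100               # number of recent cases to average
        scores: list = field(default_factory=list)

        def record(self, score: float) -> None:
            self.scores.append(score)
            self.scores = self.scores[-self.window:]   # keep only the window
            if len(self.scores) == self.window and mean(self.scores) < self.threshold:
                self.alert(mean(self.scores))

        def alert(self, value: float) -> None:
            # A real deployment would page the governance team, not print.
            print(f"ALERT: {self.metric} fell to {value:.3f} "
                  f"(threshold {self.threshold})")

    monitor = ModelMonitor(metric="accuracy", threshold=0.92, window=50)

The same shape covers drift monitoring: feed in a drift statistic instead of a raw score and flip the comparison, since drift breaches when the value rises above the threshold.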
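For the audit trail, "immutable" usually means something tamper-evident, such as a hash chain where each entry commits to its predecessor, so any after-the-fact edit breaks verification. We don't know how Parachute actually implements this; the following is purely an illustrative toy:

    # Hypothetical sketch; not Parachute's actual implementation.
    import hashlib, json, time

    class AuditTrail:
        """Append-only log; each entry hashes its predecessor, so
        tampering anywhere in the chain is detectable."""
        def __init__(self):
            self.entries = []

        def append(self, event: dict) -> dict:
            prev = self.entries[-1]["hash"] if self.entries else "0" * 64
            body = {"ts": time.time(), "event": event, "prev": prev}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            entry = {**body, "hash": digest}
            self.entries.append(entry)
            return entry

        def verify(self) -> bool:
            prev = "0" * 64
            for e in self.entries:
                body = {"ts": e["ts"], "event": e["event"], "prev": e["prev"]}
                digest = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if e["prev"] != prev or e["hash"] != digest:
                    return False
                prev = e["hash"]
            return True

Handing a trail like this to regulators works because verify() fails if any recorded approval, test, or runtime change is altered or deleted after the fact.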