Anthropic vs. DoD: "Any Lawful Use" Is a Fight About Control

Author: colek42 · about 1 month ago
I served 12 years in the infantry, then built targeting tools at JSOC for the fight against ISIS. Now I lead a team building AI tools that automate the compliance process. I've got opinions on Anthropic + DoD.

When people argue about "AI in weapons" like it's a sci-fi trigger bot… I can't take it seriously.

A "kill chain" isn't a vibe. It's a process: Find, Fix, Track, Target, Engage, Assess (F2T2EA). Most of it is information work: sorting signal from noise, building confidence, tightening timelines, and getting decisions to the right humans fast enough to matter.

That's why this Anthropic vs. DoD fight is getting attention. It's not just "ethics."

-> It's about control.

Here's what's actually on the table:

Anthropic says they'll support the military, but they want two carve-outs: no mass domestic surveillance and no fully autonomous weapons (their definition: systems that "take humans out of the loop entirely" and automate selecting/engaging targets).

Anthropic also says DoD demanded "any lawful use" and threatened offboarding / "supply chain risk" pressure if they didn't comply.

A DoD memo posted on media.defense.gov explicitly calls for models "free from usage policy constraints" and directs adding standard "any lawful use" language into AI contracts.

The dispute escalated fast, including federal offboarding/blacklist actions and a "supply chain risk" designation, as reported by major outlets.

Now my take, as someone who's lived inside the targeting reality:

AI can absolutely help the kill chain without ever being the one "pulling the trigger." Speeding up Find/Fix/Track/Target changes outcomes, and that's not hypothetical.

But if we're going to talk about "any lawful use," then stop outsourcing national policy to contract fights.

DoD already has policy that autonomous weapon systems should allow appropriate human judgment over the use of force (there's a toy sketch of what that gate looks like at the end of this post). So the real question isn't whether humans matter. It's this:

Do we want safety and governance implemented at the model layer (vendor guardrails), the contract layer ("any lawful use"), or the law/policy layer (Congress + DoD doctrine + auditing)?

Because "Terms of Service vs. warfighting" is a stupid place to settle a question this big.

If you've worked in intel, targeting, acquisition, or governance: where should the boundary live (model, contract, or law), and who owns accountability when it breaks?
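To make the "appropriate human judgment" point concrete, here's a minimal, hypothetical sketch. It is not any real DoD or vendor system; every name in it (Stage, Nomination, human_authorizes, and so on) is invented for illustration. It models the information stages of F2T2EA as AI-assisted, while Engage is gated behind an explicit human authorization that leaves an audit record: enforcement at the process/policy layer, not a vendor guardrail baked into the model.

```python
# Hypothetical sketch only: AI accelerates the information work of F2T2EA,
# while the engage decision stays behind a human gate with an audit trail.
# All names and thresholds are invented for illustration.

from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    FIND = auto()
    FIX = auto()
    TRACK = auto()
    TARGET = auto()
    ENGAGE = auto()
    ASSESS = auto()


@dataclass
class Nomination:
    target_id: str
    confidence: float               # model-assisted confidence, 0.0-1.0
    history: list = field(default_factory=list)

    def advance(self, stage: Stage, note: str) -> None:
        self.history.append((stage.name, note))


def ai_assist(nom: Nomination) -> Nomination:
    """AI does the information work: Find/Fix/Track/Target."""
    for stage in (Stage.FIND, Stage.FIX, Stage.TRACK, Stage.TARGET):
        nom.advance(stage, "model-assisted: signal sorted, confidence updated")
    return nom


def human_authorizes(nom: Nomination) -> bool:
    """The gate. A stub here; in reality this is a person with authority,
    context, and accountability, not a threshold on a model score."""
    return nom.confidence >= 0.95   # placeholder for human judgment


def engage(nom: Nomination, audit_log: list) -> None:
    """Engage only proceeds through the human gate; either way, log it."""
    if not human_authorizes(nom):
        audit_log.append((nom.target_id, "ENGAGE denied at human gate"))
        return
    nom.advance(Stage.ENGAGE, "human-authorized")
    nom.advance(Stage.ASSESS, "outcome recorded for review")
    audit_log.append((nom.target_id, "ENGAGE authorized and assessed"))


if __name__ == "__main__":
    log: list = []
    engage(ai_assist(Nomination("T-001", confidence=0.97)), log)
    engage(ai_assist(Nomination("T-002", confidence=0.40)), log)
    print(log)
```

The point of the sketch isn't the 0.95 number (that's a stand-in). It's that authorization and the audit record live in a step outside the model, which is exactly the kind of thing you can mandate and inspect at the law/policy layer, regardless of which vendor's model did the information work.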