在Anthropic/Dow供应链风险故事中被忽视的三件事

1作者: null-phnix27 天前原帖
主导框架是“Anthropic英雄,五角大楼恶棍”。我认为情况更为复杂,有三个具体问题被低估了。 1. 法律定义中有“对手”一词。 根据美国法典第10卷第3252条,"供应链风险"被定义为“对手可能破坏、恶意引入不必要的功能或以其他方式颠覆”国家安全系统。这一用词承载着重要意义——该法规是为与中国共产党(CCP)相关的供应商和外国破坏者设计的,而不是针对自愿放弃数亿美元收入以切断与CCP相关客户的美国公司之间的合同争议。这一称谓不仅在政治上前所未有;从法规自身的框架来看,这也显得格外奇怪。 2. Anthropic的法律挑战比报道的要狭窄。 第3252条(c)(1)包含一项不予司法审查的条款:“任何行动……不得在政府问责办公室或任何联邦法院的投标抗议中受到审查。”Anthropic的法律团队对此是清楚的——他们的挑战必须基于宪法或行政程序法的理由,而不是标准的投标抗议。这是一条更为艰难的道路。“我们会在法庭见”的说法在某种程度上误导了他们实际上可以采取的行动。 3. 民主合法性问题是双向的。 大多数报道将Anthropic的两次拒绝(不支持完全自主武器,不进行大规模国内监控)视为简单正确。这可能确实如此。但“哪些人工智能系统足够可靠以做出打击决策”原则上是由民选官员和军事指挥官来回答的问题,而不是私人CEO。达里奥·阿莫代伊(Dario Amodei)并不是民选的。他的立场是可以辩护的,但并不自动具有权威性。 这与苹果/FBI的iPhone案件也有所不同。苹果被要求解锁现有功能。而国防部则要求Anthropic允许在任何现有合同中没有的新用途——这是扩展,而不是解锁。 我真正感到担忧的是国防生产法的威胁。利用战时征兵权力强制移除人工智能安全防护措施是一种不同类型的权力操作,没有明确的先例。 一个确认的事实使整个事情显得荒谬:据报道,美国中央司令部在宣布供应链风险指定后的几个小时内使用了Claude进行伊朗空袭。被指定的“供应链风险”正在实时进行国家安全行动。 更棘手的问题是,应该用什么框架来管理因伦理原因拒绝政府合同的私人人工智能公司,这是一个没有人认真探讨的问题。这并不是因为答案显而易见,而是因为它需要思考一种不容易归入现有法律或政治类别的企业良知。
查看原文
The dominant frame is &quot;Anthropic hero, Pentagon villain.&quot; I think the situation is more complicated, and three specific things are being underreported.<p><pre><code> 1. The statute has &quot;adversary&quot; in the legal definition. </code></pre> 10 U.S.C. § 3252 defines &quot;supply chain risk&quot; as the risk that &quot;an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert&quot; a national security system. That word is load-bearing — the statute was designed for CCP-linked vendors and foreign saboteurs, not contract disputes with American companies that voluntarily forewent hundreds of millions in revenue to cut off CCP-linked customers. The designation isn&#x27;t just politically unprecedented; it&#x27;s texturally strange given the statute&#x27;s own framing.<p><pre><code> 2. Anthropic&#x27;s court challenge is narrower than reported. </code></pre> § 3252(c)(1) includes a no-judicial-review clause: &quot;no action...shall be subject to review in a bid protest before the Government Accountability Office or in any Federal court.&quot; Anthropic&#x27;s legal team knows this — their challenge will have to be on constitutional or Administrative Procedure Act grounds, not a standard bid protest. That&#x27;s a harder road. The &quot;we&#x27;ll see them in court&quot; framing is somewhat misleading about what&#x27;s actually available to them.<p><pre><code> 3. The democratic legitimacy question runs in both directions. </code></pre> Most coverage treats Anthropic&#x27;s two refusals (no fully autonomous weapons, no mass domestic surveillance) as straightforwardly correct. They may well be. But &quot;which AI systems are reliable enough to make targeting decisions&quot; is, in principle, a question for elected officials and military commanders — not a private CEO. Dario Amodei wasn&#x27;t elected. His position is defensible; it&#x27;s not automatically authoritative.<p>This is also different from the Apple&#x2F;FBI iPhone case. Apple was asked to unlock existing capability. The DoW was asking Anthropic to allow new uses not in any existing contract — an expansion, not an unlock.<p>The Defense Production Act threat is where I get genuinely alarmed. Using wartime conscription authority to force removal of AI safety guardrails is a different kind of power move with no clear precedent.<p>One confirmed fact that makes the whole thing absurd:<p>US Central Command reportedly used Claude during the Iran airstrikes hours after the supply chain risk designation was announced. The designated &quot;supply chain risk&quot; was running national security operations in real time.<p>The harder question what framework should govern private AI companies refusing government contracts on ethical grounds is the one nobody&#x27;s seriously engaging with. Not because the answer is obvious, but because it requires thinking about a kind of corporate conscience that doesn&#x27;t fit neatly into existing legal or political categories.