Show HN: Pomerium Agentic Access Gateway – Dynamic Auth for AI Agents
TL;DR: We are building a new Agentic Access Gateway in Pomerium to safely let AI agents (like GPT-based deep researchers, scripts, or assistants) access internal apps and resources on your behalf – with fine-grained, just-in-time authorization for every action. It's open source (GitHub link below) and we're looking for feedback and early access users.
What is Pomerium?
For those unfamiliar, Pomerium is an open-source identity-aware proxy (a "zero trust" access gateway). It sits in front of your internal apps and APIs, continually verifying identity and context on every request.
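To make "identity-aware proxy" concrete, this is roughly what a Pomerium route and policy look like in its YAML config. The hostnames and the allow rule below are placeholders for illustration, not taken from the post or the demo:

```yaml
# Minimal illustrative Pomerium config; hostnames and the rule are placeholders.
routes:
  - from: https://internal-api.corp.example.com   # external name the gateway answers on
    to: http://internal-api.default.svc:8080      # upstream app that stays unmodified
    policy:
      - allow:
          and:
            - domain:
                is: example.com                   # only identities from this IdP domain
```

Every request to that route is authenticated against your identity provider and evaluated against the policy before it ever reaches the upstream.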
The problem:
AI agents are starting to act on our behalf in software – making requests, pulling data, and triggering actions autonomously. The rise of AI agents and protocols like the Model Context Protocol (MCP) is really exciting. The potential for agents to interact with diverse tools (internal and hosted APIs, databases, SaaS) to perform complex tasks is immense.
However, the current MCP spec focuses on tool interaction and discovery but leaves per-request authorization largely undefined. Relying solely on initial OAuth scopes, as suggested, falls short for dynamic agent workflows where context can change mid-task. Pushing complex, context-aware authorization logic into every single tool creates security sprawl, inconsistency, and operational overhead – antithetical to core zero-trust principles.
Our solution:
The Agentic Access Gateway is a new feature in Pomerium designed for this AI-driven world. It extends Pomerium's core capabilities (continuous authentication and authorization) to non-human agents. In a nutshell, it treats AI agents as first-class identities that carry context and require policy checks at every step.
Key functionality includes:
- Centralized policy enforcement: Pomerium acts as a gateway in front of your MCP tools (and any other APIs your agents might use). One place to define and enforce access policy (a policy sketch follows this list).
- Context-aware policy enforcement: Every request from an AI agent is checked against policy – including who (or what) the agent is acting for, what data it's trying to access, and any anomalies in its behavior. If an agent strays out of bounds, it's denied on the spot.
- Leverages existing identity: Agents authenticate via standard flows (OAuth 2.1/OIDC style), so you can tie an agent's actions back to a real user or service account. Example: an agent acting for user Alice can inherit Alice's permissions (but only the ones you allow, and only while performing the task).
- Just-in-time credentials: Instead of static API keys, an agent can request access through Pomerium and get a short-lived token scoped to the specific task or tool. No more "one token to rule them all" lying around.
- Audit & traceability: All agent actions pass through a single gateway, so you get centralized logs and visibility. It's easy to see "which AI did what, when" for compliance or debugging.
- Works with existing tools: Because it's built into Pomerium, you don't need a whole new stack. You configure policies in one place, and your internal APIs don't have to be modified.
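To tie the bullets above together, here is a sketch of how an agent-specific rule could be expressed in Pomerium's existing policy language. The route, upstream, and the claim names and values (sub, act_for) are hypothetical stand-ins for whatever the agent's token actually carries; this is not the shipped configuration:

```yaml
# Hypothetical sketch: hostnames, upstream, and claim names/values are made up.
routes:
  - from: https://postgres-tool.corp.example.com   # MCP tool exposed through the gateway
    to: http://postgres-mcp-server:8080            # internal tool server, left unmodified
    policy:
      - allow:
          and:
            - claim/sub:
                is: deep-researcher-agent          # the agent's own identity
            - claim/act_for:
                is: alice@example.com              # the user the agent is acting for
```

The point of the sketch is the shape: the tool itself stays simple, while the gateway decides per request whether this agent, acting for this user, may reach this tool.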
Demo: We made a 60-second video showing Pomerium protecting access to both SaaS (Google Docs) and an internal app (an internal database). Watch Claude pull data from a Google Doc, then pivot to an internal Postgres query – all in one run: https://youtube.com/shorts/54X4o4tCgKc?si=6_d-xUoK4U6tF4pL
The ask: We'd love the HN community's feedback on this approach. Are you dealing with AI agents in your systems yet?
Sound interesting? Want to connect internal data sources to your LLMs? Sign up for early access to the Agentic Access Gateway: https://www.pomerium.com/secure-agentic-access
If you'd like to contribute or dig into the code: https://github.com/pomerium/pomerium
Thanks for reading! We built this because we believe the age of AI agents calls for a new kind of access control. Let us know what you think!