Show HN: OLLS – a vendor-agnostic standard for LLM inputs and outputs
We have just launched the Open LLM Specification (OLLS), a community-driven standard that unifies how developers interact with large language models (LLMs) across providers such as OpenAI, Anthropic, Google, and others.
Right now, every provider has a different request/response format, which makes integration painful:
- Parsing responses is inconsistent (a rough sketch of the divergence follows this list)
- Switching models requires custom wrappers
- Error handling and metadata vary wildly
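To make that concrete, here is a rough sketch of how even pulling out the same three values (text, input tokens, output tokens) diverges between two popular providers. The response shapes below are simplified and from memory, not copied from either provider's current docs:

```python
# Simplified sketch: extracting the same three values (text, tokens in, tokens out)
# from two providers' chat responses. Shapes are approximate and may lag the
# providers' current APIs.

def parse_openai_style(resp: dict) -> tuple[str, int, int]:
    # OpenAI-style completion: text under choices[0].message.content,
    # usage counted as prompt/completion tokens.
    text = resp["choices"][0]["message"]["content"]
    return text, resp["usage"]["prompt_tokens"], resp["usage"]["completion_tokens"]

def parse_anthropic_style(resp: dict) -> tuple[str, int, int]:
    # Anthropic-style message: text spread across a list of content blocks,
    # usage counted as input/output tokens.
    text = "".join(b["text"] for b in resp["content"] if b["type"] == "text")
    return text, resp["usage"]["input_tokens"], resp["usage"]["output_tokens"]
```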
OLLS defines a simple, extensible JSON spec for both inputs (prompts, parameters, metadata) and outputs (content, reasoning, usage, errors). Think of it as OpenAPI for LLMs: portable, predictable, and provider-agnostic.
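As a rough illustration only (the field names below are hypothetical, invented for this sketch rather than taken from the spec in the repo), an OLLS-style request/response pair could look something like this:

```python
# Hypothetical illustration of a provider-agnostic request/response pair.
# Field names are invented for this sketch; the actual spec lives in the repo.
request = {
    "model": "any-provider/any-model",
    "input": {
        "prompt": [{"role": "user", "content": "Summarize this paragraph."}],
        "parameters": {"temperature": 0.2, "max_tokens": 256},
        "metadata": {"trace_id": "abc-123"},
    },
}

response = {
    "output": {
        "content": "Here is a short summary...",
        "reasoning": None,  # populated by models that expose reasoning traces
        "usage": {"input_tokens": 42, "output_tokens": 31},
        "errors": [],
    },
}
```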
GitHub Repo - [https://github.com/julurisaichandu/open-llm-specification](https://github.com/julurisaichandu/open-llm-specification)
The repo includes example input/output formats, goals, and a roadmap.
We're looking for contributors, feedback, and real-world use cases! Let's build a unified LLM interface together: contribute ideas or join the discussion.