Show HN: LLML: Data structures => prompts

I've been building AI systems for a while and kept hitting the same wall: prompt engineering felt like string-concatenation hell. Every complex prompt became a maintenance nightmare of f-strings and template literals.

So I built LLML. Think of it as React for prompts: just as React is data => UI, LLML is data => prompt.

The Problem:
```python
# We've all written this...
import json

prompt = f"Role: {role}\n"
prompt += f"Context: {json.dumps(context)}\n"
for i, rule in enumerate(rules):
    prompt += f"{i+1}. {rule}\n"

# The Solution:
from zenbase_llml import llml

# Compose prompts by composing data
context = get_user_context()
prompt = llml({
    "role": "Senior Engineer",
    "context": context,
    "rules": ["Never skip tests", "Always review deps"],
    "task": "Deploy the service safely",
})

# Output:
<role>Senior Engineer</role>
<context>
...
</context>
<rules>
  <rules-1>Never skip tests</rules-1>
  <rules-2>Always review deps</rules-2>
</rules>
<task>Deploy the service safely</task>
```
Why an XML-like format? We found LLMs parse structured formats with clear boundaries (<tag>content</tag>) more reliably than JSON or YAML. The numbered list items (<rules-1>, <rules-2>) prevent ordering confusion.

Available in Python and TypeScript:
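The data => prompt transformation above can be sketched as a small recursive serializer. This is an illustrative re-implementation of the output convention, not LLML's actual code:

```python
def to_prompt(data, key=None):
    """Recursively serialize data into XML-like tagged blocks."""
    if isinstance(data, dict):
        # Each top-level key becomes its own tagged block
        return "\n".join(to_prompt(v, k) for k, v in data.items())
    if isinstance(data, list):
        # Numbered child tags (<key-1>, <key-2>, ...) keep ordering explicit
        items = "\n".join(
            f"  <{key}-{i}>{item}</{key}-{i}>" for i, item in enumerate(data, 1)
        )
        return f"<{key}>\n{items}\n</{key}>"
    return f"<{key}>{data}</{key}>"

print(to_prompt({
    "role": "Senior Engineer",
    "rules": ["Never skip tests", "Always review deps"],
}))
```

The real library layers an extensible formatter system on top of this basic scheme, but the tag-per-key and numbered-list conventions are the core idea.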
```bash
pip/poetry/uv/rye install zenbase-llml
npm/pnpm/yarn/bun install @zenbase/llml
```
Experimental Rust and Go implementations are also available for the adventurous :)

Key features:
```plaintext
- ≤1 dependency
- Extensible formatter system (create custom formatters for your domain objects)
- 100% test coverage (TypeScript), 92% (Python)
- Identical output across all language implementations
```
The formatter system is particularly neat: you can override how any data type is serialized, making it easy to handle domain-specific objects or sensitive data.

GitHub: [https://github.com/zenbase-ai/llml](https://github.com/zenbase-ai/llml)

Would love to hear if others have faced similar prompt-engineering challenges and how you've solved them!
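The idea behind per-type formatters can be illustrated with a minimal type-keyed registry. Note this is a hypothetical sketch of the pattern, not LLML's actual formatter API; the `register`/`format_value` names and the `Secret` wrapper are inventions for illustration:

```python
from datetime import date

# Hypothetical registry mapping types to serializer functions
FORMATTERS = {}

def register(type_, fn):
    """Register a custom serializer for a given type."""
    FORMATTERS[type_] = fn

def format_value(value):
    """Serialize a value, preferring a registered formatter over str()."""
    for type_, fn in FORMATTERS.items():
        if isinstance(value, type_):
            return fn(value)
    return str(value)

# Domain object: render dates in ISO form
register(date, lambda d: d.isoformat())

# Sensitive data: redact anything wrapped in a Secret marker
class Secret:
    def __init__(self, value):
        self.value = value

register(Secret, lambda s: "[REDACTED]")

print(format_value(date(2024, 7, 1)))  # 2024-07-01
print(format_value(Secret("sk-abc")))  # [REDACTED]
```

A serializer that consults such a registry before falling back to defaults lets callers keep API keys out of prompts or normalize domain objects without touching the core library.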