Show HN: LLML: Data structures => prompts

Posted by knrz, 7 months ago
I've been building AI systems for a while and kept hitting the same wall - prompt engineering felt like string concatenation hell. Every complex prompt became a maintenance nightmare of f-strings and template literals.

So I built LLML - think of it as React for prompts. Just as React is data => UI, LLML is data => prompt.

The Problem:

```python
import json

# We've all written this...
prompt = f"Role: {role}\n"
prompt += f"Context: {json.dumps(context)}\n"
for i, rule in enumerate(rules):
    prompt += f"{i+1}. {rule}\n"

# The Solution:
from zenbase_llml import llml

# Compose prompts by composing data
context = get_user_context()
prompt = llml({
    "role": "Senior Engineer",
    "context": context,
    "rules": ["Never skip tests", "Always review deps"],
    "task": "Deploy the service safely"
})
```

Output:

```
<role>Senior Engineer</role>
<context>
...
</context>
<rules>
<rules-1>Never skip tests</rules-1>
<rules-2>Always review deps</rules-2>
</rules>
<task>Deploy the service safely</task>
```

Why XML-like? We found LLMs parse structured formats with clear boundaries (`<tag>content</tag>`) more reliably than JSON or YAML. The numbered lists (`<rules-1>`, `<rules-2>`) prevent ordering confusion.

Available in Python and TypeScript:

```bash
pip/poetry/uv/rye install zenbase-llml
npm/pnpm/yarn/bun install @zenbase/llml
```

Experimental Rust and Go implementations are also available for the adventurous :)

Key features:

- ≤1 dependency
- Extensible formatter system (create custom formatters for your domain objects)
- 100% test coverage (TypeScript), 92% (Python)
- Identical output across all language implementations

The formatter system is particularly neat - you can override how any data type is serialized, making it easy to handle domain-specific objects or sensitive data.

GitHub: https://github.com/zenbase-ai/llml

Would love to hear if others have faced similar prompt engineering challenges and how you've solved them!
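
For readers who want to see what the data => tagged-prompt convention boils down to, here is a minimal, self-contained sketch in plain Python. It is not the zenbase-llml implementation - `to_prompt` is a made-up name and it only mirrors the convention shown in the output above (dict keys become tags, lists become numbered child tags), without the library's formatter system, indentation rules, or cross-language guarantees.

```python
# Minimal sketch of the data => tagged-prompt convention described above.
# NOT the zenbase-llml implementation; `to_prompt` is a hypothetical name used
# only for illustration. Covers flat values, lists, and nested dicts.

def to_prompt(data: dict) -> str:
    lines = []
    for key, value in data.items():
        if isinstance(value, list):
            # Lists become numbered child tags: <rules-1>, <rules-2>, ...
            lines.append(f"<{key}>")
            for i, item in enumerate(value, start=1):
                lines.append(f"  <{key}-{i}>{item}</{key}-{i}>")
            lines.append(f"</{key}>")
        elif isinstance(value, dict):
            # Nested dicts recurse into their own tagged block.
            lines.append(f"<{key}>")
            lines.append(to_prompt(value))
            lines.append(f"</{key}>")
        else:
            lines.append(f"<{key}>{value}</{key}>")
    return "\n".join(lines)


print(to_prompt({
    "role": "Senior Engineer",
    "rules": ["Never skip tests", "Always review deps"],
    "task": "Deploy the service safely",
}))
```

The point of the sketch is the same as the React analogy in the post: the prompt is a pure function of your data, so composing prompts reduces to composing data structures.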