Show HN: Yori – Isolate AI logic into "Semantic Containers" (Docker for code)
Hi HN,
I've been a developer for some time now, and like many of you, I've been frustrated by the "all-or-nothing" problem with AI coding tools.
You ask an AI to fix a bug or implement a function, and it rewrites the whole file. It changes your imports, renames your variables, or deletes comments it deems unnecessary. It's like giving a junior developer (like me) root access to your production server just to change a config file.
So, 29 days ago, I started building Yori to solve the trust problem.
**The Concept: Semantic Containers**
Yori introduces a syntax that acts like a firewall for the AI. You define a `$${ ... }$$` block inside a text file.
Outside the block (the Host): your manual code, architecture, and structure. The AI cannot touch this.
Inside the block (the Container): you write your intent in natural language. The AI can only generate code here.
**Example: myutils.md**
```cpp
EXPORT: "myfile.cpp"
// My manual architecture - the AI cannot change this
#include "utils.h"
void process_data() {
    // Container: the AI is sandboxed here, but inherits the rest of the file as context
    $${
        Sort the incoming data vector using quicksort.
        Filter out negative numbers.
        Print the result.
    }$$
}
EXPORT: END
```
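For illustration, here is the kind of `myfile.cpp` an LLM might emit for that container (a hypothetical output, not a guaranteed one; I've inlined sample data in place of whatever `utils.h` would provide so the snippet compiles on its own):

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Stand-in for the data that utils.h would supply in the real file.
static std::vector<int> sample_data() { return {5, -3, 8, -1, 2}; }

// Sort ascending, then drop negatives - what the container prompt asks for.
// (std::sort is an introsort, i.e. a quicksort-family algorithm.)
std::vector<int> sort_and_filter(std::vector<int> data) {
    std::sort(data.begin(), data.end());
    data.erase(std::remove_if(data.begin(), data.end(),
                              [](int x) { return x < 0; }),
               data.end());
    return data;
}

void process_data() {
    for (int x : sort_and_filter(sample_data())) std::cout << x << ' ';
    std::cout << '\n';  // prints: 2 5 8
}
```

The point is that only the body of the container is generated; the surrounding signature and includes come through untouched from the host.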
**How it works:**
Yori is a C++ wrapper that parses these files. Whatever is inside the EXPORT block and outside the containers (`$${ }$$`) is copied verbatim. When you run `yori myutils.md -make -series`, it sends the prompts to a local (Ollama) or cloud LLM, generates the code, fills in the blocks, and compiles the result with your native toolchain (GCC/Clang/Python).
If compilation fails, it feeds the error back to the LLM in a retry loop (self-healing).
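The self-healing loop can be sketched roughly like this (a simplified illustration, not Yori's actual implementation; `ask_llm` and `compile_file` are hypothetical stand-ins, injected here so the logic is testable):

```cpp
#include <functional>
#include <stdexcept>
#include <string>

struct CompileResult { bool ok; std::string errors; };

// Ask the LLM for code, try to compile it, and feed compiler errors
// back into the prompt until it builds or we run out of attempts.
std::string generate_until_it_compiles(
    const std::string& host,
    const std::string& intent,
    const std::function<std::string(const std::string&)>& ask_llm,
    const std::function<CompileResult(const std::string&)>& compile_file,
    int max_retries = 3) {
    std::string prompt = host + "\n// TASK:\n" + intent;
    for (int attempt = 0; attempt <= max_retries; ++attempt) {
        std::string candidate = ask_llm(prompt);
        CompileResult r = compile_file(candidate);
        if (r.ok) return candidate;
        // Append the compiler output so the model can fix its own mistake.
        prompt += "\n// PREVIOUS ATTEMPT FAILED WITH:\n" + r.errors;
    }
    throw std::runtime_error("container never compiled");
}
```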
**Why I think this matters:**
1. Safety: you stop giving the AI "root access" to your files.
2. Intent as source: the prompt stays in the file. If you want to port your logic from C++ to Rust, you keep the prompts and just change the compile target.
3. Incremental builds (coming soon): named containers allow caching. If a prompt hasn't changed, you don't pay for an API call.
It's open source (MIT), written in C++17, and runs locally.
I'd love feedback on the "Semantic Container" concept. Is this the abstraction layer we've been missing for AI coding? Let me hear your ideas. Also, if you can't run yori.exe, tell me what went wrong and we'll see how to fix it; I've opened a GitHub issue for this. I'm also working on documentation for the project (GitHub wiki), so expect that soon.
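The caching idea behind point 3 can be sketched as follows (my own illustration of the planned feature, not shipped code; a real implementation would use a stable content hash such as SHA-256 persisted to disk rather than `std::hash`):

```cpp
#include <functional>
#include <string>
#include <unordered_map>

// Key each named container's generated code by a hash of its prompt text.
// Unchanged prompt => cache hit => no API call needed.
class ContainerCache {
 public:
    bool lookup(const std::string& prompt, std::string& code_out) const {
        auto it = cache_.find(std::hash<std::string>{}(prompt));
        if (it == cache_.end()) return false;
        code_out = it->second;
        return true;
    }
    void store(const std::string& prompt, const std::string& code) {
        cache_[std::hash<std::string>{}(prompt)] = code;
    }
 private:
    std::unordered_map<std::size_t, std::string> cache_;
};
```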
GitHub: [https://github.com/alonsovm44/yori](https://github.com/alonsovm44/yori)
Thanks!