1 point · by techbuilder4242 · about 1 month ago · original post
https://getreplay.app/

Hi builders! The idea for this project came to me as I was falling asleep, so I decided to vibecode it via Lovable to the best of my ability. It's a personal/inspirational project, but I've tried to make it as good as possible given the constraints.

Tech stack:

- Frontend: React 18 + Vite + Tailwind CSS

- AI: Gemini 2.5 Flash (via the Lovable AI Gateway) for low-latency, empathetic coaching, chosen to keep costs low and ship fast

- Infrastructure: Lovable Cloud / Supabase

TL;DR: An LLM-powered "What If" view of your potential. Built for fun, inspiration, and research.

I'd appreciate your feedback. Thanks in advance!

P.S. The initial post got bugged, so I had to recreate it without a hyperlink.
1 point · by ladraoHacker · about 1 month ago · original post
2 points · by beacon294 · about 1 month ago · original post
This is a declarative orchestration framework for stateless, short-lived LLM agents.

Agents are orchestrated by hierarchical state machines (HSMs). Both the HSMs and the agents are defined in YAML, with hooks and webhooks for additional functionality.

- FlatAgent: a single LLM call (model + prompts + output schema).

- FlatMachine: a state machine that orchestrates multiple agents, actions, and nested state machines.

Examples:

- https://github.com/memgrafter/flatagents/tree/main/sdk/python/examples

- https://github.com/memgrafter/research-crawler-flatagents

- https://github.com/memgrafter/claude-skills-flatagents
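The FlatAgent/FlatMachine split described above can be pictured with a minimal sketch. To be clear, this is not the FlatAgents API: the state functions, the context dict, and the `run_machine` helper are hypothetical stand-ins, and the LLM call is faked deterministically. The point is only the shape of the idea, a flat machine that routes between stateless single-call steps:

```python
# Minimal sketch (NOT the FlatAgents API): a flat state machine that
# routes between stateless "agent" steps, each of which would be a
# single LLM call in the real framework.

def classify(ctx):
    # Stand-in for a FlatAgent: one model call with a fixed prompt and
    # output schema. Here the "model's" answer is faked deterministically.
    ctx["label"] = "question" if ctx["text"].endswith("?") else "statement"
    # Each state returns the name of the next state to enter.
    return "answer" if ctx["label"] == "question" else "done"

def answer(ctx):
    ctx["reply"] = f"Answering: {ctx['text']}"
    return "done"

STATES = {"classify": classify, "answer": answer}

def run_machine(start, ctx):
    # Drive the machine until it reaches the terminal "done" state.
    state = start
    while state != "done":
        state = STATES[state](ctx)
    return ctx

result = run_machine("classify", {"text": "What is an HSM?"})
print(result["reply"])
```

In the real framework the state table and prompts would live in YAML rather than Python, but the control flow is the same: short-lived, stateless steps connected by explicit transitions.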
4 points · by raymondtana · about 1 month ago · original post
Raymond here from Butter.dev, an LLM response cache built as a chat-completions proxy. Today we're launching a key feature for the platform: the ability to generalize over dynamic, templated inputs.

Caching at the HTTP request level has an obvious generalizability problem: almost no two requests are identical, thanks to templated variables (like names) and metadata (like timestamps), so exact-match cache lookups rarely hit. Butter solves this by using LLMs to detect dynamic content in requests and derive its inter-relationships, storing each cache entry as a template + variables + deterministic code. Future requests can then carry different variable data and still be served from cache.

We've found this approach greatly improves cache hit rates, and we believe it could be useful for agents performing repetitive back-office tasks, computer use, or data transformations where the input data frequently has the same shape.

- Demo of pattern learning: https://www.youtube.com/watch?v=ORDfPnk9rCA

- More on the technical approach: https://blog.butter.dev/on-automatic-template-induction-for-response-caching

- Free to try: https://butter.dev/auth
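The template + variables + deterministic code idea can be sketched in a few lines. This is not Butter's implementation: `make_template`, `make_renderer`, and the example prompt are hypothetical, and in the real system the template induction is done by an LLM rather than by exact string substitution. The sketch only shows why a templated cache entry can hit on requests an exact-match cache would miss:

```python
import re

# Sketch (NOT Butter's implementation): store one observed
# request/response pair as a template + variables + deterministic
# rendering code, so later requests that differ only in the variable
# slots can still be served from cache.

def make_template(prompt, variables):
    # Turn the concrete prompt into a regex by replacing each known
    # variable value with a named capture group.
    pattern = re.escape(prompt)
    for name, value in variables.items():
        pattern = pattern.replace(re.escape(value), f"(?P<{name}>.+?)")
    return re.compile(f"^{pattern}$")

def make_renderer(response, variables):
    # "Deterministic code": rebuild the cached response from new
    # variable bindings, no model call needed.
    def render(bindings):
        out = response
        for name, value in variables.items():
            out = out.replace(value, bindings[name])
        return out
    return render

# Learn a cache entry from one observed request/response pair.
variables = {"user": "Alice", "topic": "billing"}
template = make_template("Summarize the ticket from Alice about billing.", variables)
render = make_renderer("Summary for Alice: billing issue resolved.", variables)

# A later request with different variable data still hits the cache.
m = template.match("Summarize the ticket from Bob about shipping.")
if m:
    print(render(m.groupdict()))  # served from cache, no LLM call
```

Real-world induction has to handle the hard parts this sketch skips: deciding which spans are actually variable, variables that appear in the response but not the request, and values whose relationship to the output is a transformation rather than a copy.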