It's vibe-coded slop and I haven't written a single line of code for this project, but I still think it's pretty impressive what current LLM models can churn out. I used Claude Opus 4.5 for everything. The point is, I described pretty much all the features I had in mind and it implemented them more or less correctly, in some instances even surpassing my expectations. I'm looking for ideas and suggestions on what could be improved and what features people would like to see.
I’ve been running longer AI agent tasks (mostly in Claude Code), and I kept running into the same problem:
the agent would finish or get stuck asking a question, and I wouldn't notice until much later because I wasn't watching the terminal.

So I built a small tool called Agent Reachout.

It lets an AI agent send me messages on Telegram when:
• it finishes a task
• it hits a blocker
• it needs a human decision to continue

I can reply directly from Telegram, and the agent continues where it left off.

This turned long-running agent work into something asynchronous: I don't have to babysit the CLI anymore.

What it does
• Simple Telegram bot integration
• One-way notifications or two-way conversations
• Designed for “human-in-the-loop” agent workflows
• Works today as a Claude Code plugin

Why I built it
Fully autonomous agents sound nice, but in practice I often want:
• approvals before destructive actions
• clarification on ambiguous decisions
• a quick "yes/no" without stopping my day

Telegram was already where I am, so I used that.

What it's not
• Not a general chatbot framework
• Not a workflow engine
• Just a small bridge between agents and humans

Repo
https://github.com/vibe-with-me-tools/agent-reachout

Would love feedback on:
• whether others hit this problem
• what notification channels would be useful next (Slack, WhatsApp, etc.)
• whether this should stay a plugin or evolve into something broader

Thanks!
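For readers curious about the mechanics of the two-way bridge described above, it maps naturally onto two Telegram Bot API methods: sendMessage for the notification and long-polled getUpdates for the human's reply. This is a hypothetical minimal sketch, not the plugin's actual code; the function names are my own.

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.telegram.org/bot{token}/{method}"


def build_request(token: str, method: str, params: dict) -> urllib.request.Request:
    """Build a POST request for a Telegram Bot API method call."""
    url = API_BASE.format(token=token, method=method)
    data = urllib.parse.urlencode(params).encode()
    return urllib.request.Request(url, data=data)  # attaching data makes it a POST


def notify(token: str, chat_id: int, text: str) -> dict:
    """One-way notification: e.g. 'task finished' or 'hit a blocker'."""
    req = build_request(token, "sendMessage", {"chat_id": chat_id, "text": text})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def wait_for_reply(token: str, offset: int = 0, timeout: int = 60):
    """Long-poll getUpdates; return (reply_text, next_offset) or (None, offset)."""
    req = build_request(token, "getUpdates", {"offset": offset, "timeout": timeout})
    with urllib.request.urlopen(req, timeout=timeout + 10) as resp:
        updates = json.load(resp).get("result", [])
    for update in updates:
        message = update.get("message", {})
        if "text" in message:
            return message["text"], update["update_id"] + 1
    return None, offset
```

An agent wrapper would call notify() when it blocks, then loop on wait_for_reply() and feed the returned text back into the session. sendMessage and getUpdates are real Bot API methods; how the actual plugin wires them together is an assumption here.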
I've been using Cursor and Claude Code daily for real work, not just experiments.

One thing that surprised me is how quickly code quality converges between tools once you plan clearly. At this point, I don't feel a meaningful difference in output quality itself.

What does feel different is the workflow mode each tool supports.

When I want many things moving at once, spawning parallel agents, delegating background tasks, or running async work, Claude Code feels more natural to me. The CLI and agent-first model fits that style well.

When I need to slow down, review plans, read diffs, understand context, and make careful changes, Cursor feels more friendly. It's easier for focused thinking and sense-making.

So for me, it's parallel mode vs. focus mode.

We're also starting to run Claude Code in CI/CD for well-scoped tasks like tests, refactors, and reproducible bug fixes. That background delegation is where CLI-first tools start to matter.

Curious how others are splitting work between these tools, or if you see it differently.
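For the CI/CD delegation mentioned above, a well-scoped task might look something like the following GitHub Actions step, assuming Claude Code's non-interactive print mode (claude -p). This is a hypothetical config fragment, not a verified setup: the step name, prompt, and secret name are placeholders, and flags should be checked against the current CLI docs.

```yaml
# Hypothetical CI step; adapt to your workflow and verify CLI flags.
- name: Fix failing tests with Claude Code
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
  run: |
    # -p runs a single prompt headlessly and exits when done
    claude -p "Run the test suite, diagnose any failures, and apply a minimal fix."
```

The appeal is that the task is reproducible and bounded: the agent runs once, in a sandboxed CI environment, against a concrete failure.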