OP here. I built RepoReaper to solve code context fragmentation in RAG.<p>Unlike standard chat-with-repo tools, it simulates a senior engineer's workflow: it parses Python AST for logic-aware chunking, uses a ReAct loop to JIT-fetch missing file dependencies from GitHub, and employs hybrid search (BM25+Vector). It also generates Mermaid diagrams for architecture visualization. The backend is fully async and persists state via ChromaDB.<p>Link: <a href="https://github.com/tzzp1224/RepoReaper" rel="nofollow">https://github.com/tzzp1224/RepoReaper</a>
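For readers unfamiliar with logic-aware chunking: the idea is to split source along AST boundaries (functions, classes) instead of fixed-size windows, so a chunk never cuts a definition in half. A minimal stdlib sketch of that idea (not RepoReaper's actual implementation) looks like this:

```python
import ast

def chunk_by_logic(source: str) -> list[str]:
    """Split Python source into logic-aware chunks: one chunk per
    top-level function or class, so no definition is cut in half
    the way fixed-size windows would cut it."""
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # lineno is 1-based and end_lineno is inclusive (Python 3.8+)
            chunks.append("\n".join(lines[node.lineno - 1 : node.end_lineno]))
    return chunks

example = '''
def add(a, b):
    return a + b

class Greeter:
    def hello(self):
        return "hi"
'''
print(len(chunk_by_logic(example)))  # → 2
```

Each chunk is then a semantically complete unit to embed or BM25-index, which is what makes the hybrid retrieval results coherent.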
I built a small web tool that lets people create Arabic calligraphy without needing design software. Most existing tools are either too complex or very limited, so I wanted something simple and accessible.<p>Features:
• Write Arabic directly or translate from English
• 11 classic calligraphy styles (Thuluth, Naskh, Kufi, Diwani, etc.)
• Adjust layout, colors, line height, stroke, and rotation
• Export as PNG, JPG, or SVG
• No signup required<p>I’d appreciate any feedback on performance, UI, or calligraphy accuracy. This is a solo side project and still evolving.<p>Site: <a href="https://arabiccalligraphygenerator.online" rel="nofollow">https://arabiccalligraphygenerator.online</a>
Finding open source issues is easy. Deciding which ones are worth your time is not.<p>I built Contrib.FYI as a simple web app to reduce that decision cost.<p>Instead of relying on static, curated lists, it uses live GitHub API data and shows issues in chronological order, so discovery stays fresh.<p>On top of that, it surfaces a few early signals (language, stars, no comments, no linked PRs) to help you avoid opening issues that are already being worked on.<p>The goal is not to find more issues, but to find better candidates to spend your time on.<p>Source code is available here: <a href="https://github.com/K-dash/contrib-fyi" rel="nofollow">https://github.com/K-dash/contrib-fyi</a><p>Feedback is welcome.
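To illustrate how early signals like these can be combined, here is a toy scoring function. The field names (`linked_prs`, `stars`) and weights are my own assumptions for the sketch, not the app's actual logic; in practice linked PRs and repo stars require extra API calls beyond the issue payload.

```python
def score_issue(issue: dict) -> int:
    """Rank an open issue by early signals: no comments and no
    linked PRs suggest nobody is working on it yet; repo stars
    suggest the project has some traction. Weights are illustrative."""
    score = 0
    if issue.get("comments", 0) == 0:
        score += 2  # nobody has engaged yet
    if not issue.get("linked_prs"):  # hypothetical field; a linked PR means work in progress
        score += 2
    if issue.get("stars", 0) >= 100:  # hypothetical field, from the repo, not the issue
        score += 1
    return score

issues = [
    {"title": "fix typo", "comments": 0, "linked_prs": [], "stars": 500},
    {"title": "refactor core", "comments": 12, "linked_prs": ["#42"], "stars": 500},
]
best = max(issues, key=score_issue)
print(best["title"])  # → fix typo
```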
Just pushed an update (v1.1) to Struxs.<p>We had some users asking for a way to constrain the scope of the visual perception. Specifically, they were processing scanned forms where a field like "Gender" or "Payment Mode" would return inconsistent raw text (e.g., "M", "Male", or just a checked box symbol) depending on the document layout.<p>To solve this, we added an Enum type to the builder.<p>You can now visually map a region and strictly define the allowed states (e.g., ["Male", "Female"] or ["Sedan", "SUV", "Truck"]). The engine will now force the visual signal into one of those pre-defined buckets instead of returning ambiguous strings.<p>It’s a small change, but it makes the JSON output deterministic and saves you from writing extra code to normalize the data downstream.<p>Happy to hear any feedback.
I built kprotect because I wanted a way to protect my sensitive files (SSH keys, env files) that went beyond just standard Linux permissions. Even if a process is running as root, it shouldn't be able to read my secrets unless it’s part of a trusted execution chain.<p>How it works: It uses BPF LSM (Linux Security Modules) to intercept file access at the kernel level. Instead of just checking the PID or the binary name, it looks at the entire lineage (the "Chain of Trust"). For example, cat is only allowed to read my SSH keys if the parent process is my-terminal and the grandparent is vscodium.<p>Key Tech:<p>Backend: Rust + Aya (for the eBPF bits).<p>Frontend: Tauri + React for the dashboard.<p>Security: Logs and configs are AES-encrypted to prevent tampering.<p>It’s currently in beta (0.1.0). It requires a kernel (5.10+) with BPF LSM enabled. I'd love to hear feedback on the "Chain of Trust" logic—specifically if anyone sees edge cases in how I'm verifying the process ancestors.
GitHub: <a href="https://github.com/khoinp1012/kprotect" rel="nofollow">https://github.com/khoinp1012/kprotect</a>
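The "Chain of Trust" check described above can be sketched as userspace logic (kprotect itself enforces this in-kernel via BPF LSM, so this is only the shape of the comparison, not the real implementation):

```python
def chain_allowed(ancestry: list[str], trusted_chain: list[str]) -> bool:
    """Check a process lineage against a trusted execution chain,
    e.g. cat <- my-terminal <- vscodium. ancestry[0] is the process
    touching the file, then its parent, grandparent, and so on.
    Illustrative only; kprotect does this in-kernel via BPF LSM."""
    if len(ancestry) < len(trusted_chain):
        return False
    # compare from the accessing process upward through its parents
    return all(actual == expected
               for actual, expected in zip(ancestry, trusted_chain))

print(chain_allowed(["cat", "my-terminal", "vscodium", "systemd"],
                    ["cat", "my-terminal", "vscodium"]))  # → True
print(chain_allowed(["cat", "bash", "sshd"],
                    ["cat", "my-terminal", "vscodium"]))  # → False
```

One edge case worth probing in the real design: name-based matching can be spoofed by renaming a binary, so verifying ancestors by executable path or hash rather than comm name seems important.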
I've been a freelance web dev in a niche (online maps) for about 8 years. I have noticed a large drop in project enquiries over the last 12 months or so, and I'm speculating that one cause is potential clients using AI to implement their own projects.<p>My typical projects are small, single-person full-stack sites: a bit of data preparation, a bit of specialist GIS knowledge, a Vue front-end, a Node backend, and deployment to a server provided by the client. I haven't tried, but I suspect modern LLMs would do a pretty good job building this kind of thing in the hands of a reasonable programmer.<p>In other words: my core value proposition (web-mapping expertise and experience) can now be provided by an AI.<p>I'm curious whether other freelancers are experiencing something similar, or whether people on the other side of that equation (working at companies that might typically hire freelancers) have any insight to offer.
I built a CLI tool that scans your Metabase instance to find which SQL questions reference a column or table you're about to drop/rename.<p>metabase-impact --metabase-url http://localhost:3000 --api-key "mb_xxx" --drop-column orders.user_id<p>It outputs affected questions with direct links so you can fix or archive them before deploying.<p>Built this after breaking dashboards one too many times. Uses sqlglot for SQL parsing (handles aliases and complex queries). Only works on native SQL questions, not MBQL/GUI queries.
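For a sense of what the impact check does, here is a deliberately naive, regex-based version. The actual tool uses sqlglot precisely because this kind of string matching falls over on aliases, CTEs, and quoting; this sketch only shows the idea.

```python
import re

def references_column(sql: str, table: str, column: str) -> bool:
    """Naive check for whether a query touches table.column.
    Matches either a qualified reference (orders.user_id) or a bare
    column name in a query that also mentions the table. The real
    tool uses sqlglot to handle aliases and complex queries."""
    qualified = rf"\b{re.escape(table)}\.{re.escape(column)}\b"
    bare = rf"\b{re.escape(column)}\b"
    mentions_table = re.search(rf"\b{re.escape(table)}\b", sql, re.I)
    return bool(re.search(qualified, sql, re.I)
                or (mentions_table and re.search(bare, sql, re.I)))

print(references_column(
    "SELECT user_id, total FROM orders WHERE total > 10",
    "orders", "user_id"))  # → True
print(references_column(
    "SELECT id FROM products", "orders", "user_id"))  # → False
```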
I built this because Cursor, Claude Code and other agentic AI tools kept giving me tests that looked fine but failed when I ran them. Or worse - I'd ask the agent to run them and it would start looping: fix tests, those fail, then it starts "fixing" my code so tests pass, or just deletes assertions so they "pass".<p>Out of that frustration I built KeelTest - a VS Code extension that generates pytest tests and executes them. I got hooked and decided to push this project forward... When tests fail, it tries to figure out why:<p>- Generation error: attempts to fix it automatically, then tries again<p>- Bug in your source code: flags it and explains what's wrong<p>How it works:<p>- Static analysis to map dependencies, patterns, and services to mock<p>- Generate a plan for each function and what edge cases to cover<p>- Generate those tests<p>- Execute in a "sandbox"<p>- Self-heal failures or flag source bugs<p>Python + pytest only for now. Alpha stage - not all codebases work reliably. But testing on personal projects and a few production apps at work, it's been consistently decent. Works best on simpler applications, sometimes glitches on monorepo setups. Supports Poetry/UV/plain pip setups.<p>Install from the VS Code marketplace: <a href="https://marketplace.visualstudio.com/items?itemName=KeelCode.keeltest" rel="nofollow">https://marketplace.visualstudio.com/items?itemName=KeelCode...</a><p>A more detailed writeup of how it works: <a href="https://keelcode.dev/blog/introducing-keeltest" rel="nofollow">https://keelcode.dev/blog/introducing-keeltest</a><p>Free tier is 7 test files/month (current limit is <=300 source LOC). 
To make it easier to try without signing up, I'm giving away a few API keys (they share a quota of ~30 generated test files):<p>KEY-1: tgai_jHOEgOfpMJ_mrtNgSQ6iKKKXFm1RQ7FJOkI0a7LJiWg<p>KEY-2: tgai_NlSZN-4yRYZ15g5SAbDb0V0DRMfVw-bcEIOuzbycip0<p>KEY-3: tgai_kiiSIikrBZothZYqQ76V6zNbb2Qv-o6qiZjYZjeaczc<p>KEY-4: tgai_JBfSV_4w-87bZHpJYX0zLQ8kJfFrzas4dzj0vu31K5E<p>Would love your honest feedback on where this could go next, and on which setups it failed and how; it has quite verbose debug output at this stage!
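The "self-heal or flag a source bug" decision can be sketched as a traceback heuristic: if the deepest frame of the failure is inside the generated test file, it is probably a generation error worth retrying; if it is in the code under test, flag a probable source bug. This is an illustrative heuristic, not KeelTest's actual classifier.

```python
import re

def triage_failure(traceback_text: str, test_file: str, source_file: str) -> str:
    """Classify a pytest failure by where its deepest frame lives.
    Illustrative heuristic only, not KeelTest's real classifier."""
    frames = re.findall(r'File "([^"]+)", line \d+', traceback_text)
    if not frames:
        return "unknown"
    deepest = frames[-1]
    if deepest.endswith(test_file):
        return "generation_error"   # retry: regenerate/fix the test
    if deepest.endswith(source_file):
        return "source_bug"         # don't "fix": surface it to the user
    return "unknown"

tb = '''Traceback (most recent call last):
  File "test_cart.py", line 12, in test_total
    assert cart.total() == 30
  File "cart.py", line 8, in total
    return sum(i.price for i in self.items)
TypeError: unsupported operand ...
'''
print(triage_failure(tb, "test_cart.py", "cart.py"))  # → source_bug
```

The important property is the asymmetry: only "generation_error" feeds back into the retry loop, which is what prevents the agent-style failure mode of rewriting application code until the tests go green.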
Just built a small tool that creates size comparisons of countries vs. planets. Greenland seems larger than I thought.<p>The tool lets you drag a country onto another planet to see its size there.