I've been a freelance web dev in a niche (online maps) for about 8 years. I have noticed a large drop in project enquiries over the last 12 months or so, and I'm speculating that one cause is potential clients using AI to implement their own projects.

My typical projects are small, single-person full-stack sites: a bit of data preparation, a bit of specialist GIS knowledge, a Vue front-end, a Node backend, and deployment to a server provided by the client. I haven't tried, but I suspect modern LLMs would do a pretty good job building this kind of thing in the hands of a reasonable programmer.

In other words: my core value proposition (web mapping expertise and experience) can be provided by an AI.

I'm curious whether other freelancers are experiencing something similar, or whether people on the other side of that equation (working at companies that might typically hire freelancers) have any insight to offer.
I built a CLI tool that scans your Metabase instance to find which SQL questions reference a column or table you're about to drop/rename.

    metabase-impact --metabase-url http://localhost:3000 --api-key "mb_xxx" --drop-column orders.user_id

It outputs affected questions with direct links so you can fix or archive them before deploying.

Built this after breaking dashboards one too many times. Uses sqlglot for SQL parsing (handles aliases and complex queries). Only works on native SQL questions, not MBQL/GUI queries.
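For anyone curious about the detection step, here is a minimal sketch of how sqlglot can be used to decide whether a query references a given column, including resolving table aliases. The function name, dialect, and example query are mine, not the tool's actual code.

    import sqlglot
    from sqlglot import exp

    def references_column(sql: str, table: str, column: str) -> bool:
        """Return True if the query references table.column, directly or via an alias."""
        tree = sqlglot.parse_one(sql, read="postgres")

        # Map aliases back to real table names, e.g. "o" -> "orders".
        aliases = {}
        for t in tree.find_all(exp.Table):
            aliases[(t.alias or t.name).lower()] = t.name.lower()

        for col in tree.find_all(exp.Column):
            if col.name.lower() != column.lower():
                continue
            # Unqualified columns count as potential matches; qualified ones
            # must resolve to the table we care about.
            if not col.table or aliases.get(col.table.lower()) == table.lower():
                return True
        return False

    # Flags the query because "o" aliases "orders".
    print(references_column(
        "SELECT o.user_id FROM orders AS o JOIN users u ON u.id = o.user_id",
        "orders",
        "user_id",
    ))

A real implementation would also need the Metabase API side (listing cards and pulling their native SQL), which this sketch leaves out.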
I built this because Cursor, Claude Code and other agentic AI tools kept giving me tests that looked fine but failed when I ran them. Or worse - I'd ask the agent to run them and it would start looping: fix tests, those fail, then it starts "fixing" my code so the tests pass, or just deletes assertions so they "pass".

Out of that frustration I built KeelTest - a VS Code extension that generates pytest tests and executes them. I got hooked and decided to push this project forward... When tests fail, it tries to figure out why:

- Generation error: attempts to fix it automatically, then tries again
- Bug in your source code: flags it and explains what's wrong

How it works:

- Static analysis to map dependencies, patterns, and services to mock
- Generate a plan for each function and the edge cases to cover
- Generate those tests
- Execute them in a "sandbox"
- Self-heal failures or flag source bugs

Python + pytest only for now. Alpha stage - not all codebases work reliably. But testing on personal projects and a few production apps at work, it's been consistently decent. Works best on simpler applications; it sometimes glitches on monorepo setups. Supports Poetry/UV/plain pip setups.

Install from the VS Code marketplace: https://marketplace.visualstudio.com/items?itemName=KeelCode.keeltest

A more detailed write-up of how it works: https://keelcode.dev/blog/introducing-keeltest

The free tier is 7 test files/month (current limit is <=300 source LOC). To make it easier to try without signing up, I'm giving away a few API keys (they share a quota of ~30 test file generations):

KEY-1: tgai_jHOEgOfpMJ_mrtNgSQ6iKKKXFm1RQ7FJOkI0a7LJiWg

KEY-2: tgai_NlSZN-4yRYZ15g5SAbDb0V0DRMfVw-bcEIOuzbycip0

KEY-3: tgai_kiiSIikrBZothZYqQ76V6zNbb2Qv-o6qiZjYZjeaczc

KEY-4: tgai_JBfSV_4w-87bZHpJYX0zLQ8kJfFrzas4dzj0vu31K5E

Would love your honest feedback on where this could go next, and on which setups it failed and how it failed - it has quite verbose debug output at this stage!
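To make the generate-then-execute loop concrete, here is a hypothetical example of the kind of test such a tool might emit for a small function with an external dependency. The module, function, and assertions below are invented for illustration and are not KeelTest's actual output.

    # Illustrative only: myapp.pricing and apply_discount(total, code) are hypothetical.
    import pytest
    from unittest.mock import patch

    from myapp.pricing import apply_discount


    def test_apply_discount_valid_code():
        # The external rate lookup is mocked so the test stays deterministic.
        with patch("myapp.pricing.fetch_discount_rate", return_value=0.10):
            assert apply_discount(100.0, "WELCOME10") == pytest.approx(90.0)


    def test_apply_discount_unknown_code_raises():
        with patch("myapp.pricing.fetch_discount_rate", return_value=None):
            with pytest.raises(ValueError):
                apply_discount(100.0, "NOPE")


    @pytest.mark.parametrize("total", [0.0, -5.0])
    def test_apply_discount_rejects_non_positive_total(total):
        # Edge cases from the plan: zero and negative totals.
        with patch("myapp.pricing.fetch_discount_rate", return_value=0.10):
            with pytest.raises(ValueError):
                apply_discount(total, "WELCOME10")

The tool then runs tests like these in the sandbox: if one fails because the generation assumed the wrong signature, that counts as a generation error to retry; if it fails because the discount math itself is wrong, that gets flagged as a likely source bug.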
Just built a small tool and created some comparisons of country sizes vs. planets. Greenland seems larger than I thought.

The tool allows you to drag a country onto another planet to see its size there.
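As a rough back-of-the-envelope version of the same comparison, you can compute what fraction of a planet's surface a country would cover. The areas and radii below are approximate public figures; the snippet only illustrates the idea, it isn't the tool's code.

    import math

    # Approximate values: country areas in km^2, mean planetary radii in km.
    COUNTRY_AREA_KM2 = {"Greenland": 2_166_000, "Australia": 7_692_000}
    RADIUS_KM = {"Earth": 6_371, "Mars": 3_390, "Moon": 1_737}

    def surface_fraction(country: str, body: str) -> float:
        """Fraction of the body's total surface the country's area would cover."""
        surface_area = 4 * math.pi * RADIUS_KM[body] ** 2
        return COUNTRY_AREA_KM2[country] / surface_area

    for body in RADIUS_KM:
        print(f"Greenland covers about {surface_fraction('Greenland', body):.2%} of {body}")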