Hi HN,<p>I built Layoffstoday, an open platform that tracks tech layoffs across ~6,500 companies.<p>What it does:<p>Aggregates layoff events from public news sources<p>Normalizes data by company, date, industry, and affected headcount<p>Shows historical patterns instead of isolated headlines<p>Why I built it:
During job transitions, I noticed people had to jump across news articles, spreadsheets, and social posts just to answer simple questions like “Has this company laid people off before?” or “Is this happening across the industry?”<p>This is an attempt to make that information structured, searchable, and accessible.<p>Would love feedback on:<p>Data accuracy / gaps<p>Signals that would actually help job seekers<p>Whether alerts or trend indicators are useful or noisy
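To make "normalizes data by company, date, industry, and affected headcount" concrete, a record in such a system might look roughly like this. This is a hypothetical schema sketched from the fields the post lists (the names `LayoffEvent`, `headcount`, etc. are illustrative, not the site's actual model):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LayoffEvent:
    # fields mirror the normalization described in the post (hypothetical names)
    company: str
    event_date: date
    industry: str
    headcount: int    # people affected, where reported
    source_url: str   # the news article the event was scraped from

events = [
    LayoffEvent("ExampleCorp", date(2023, 3, 2), "fintech", 80, "https://example.com/a"),
    LayoffEvent("ExampleCorp", date(2024, 1, 10), "fintech", 120, "https://example.com/b"),
]

# "Has this company laid people off before?" becomes a simple filter
prior = [e for e in events if e.company == "ExampleCorp"]
```

With records in this shape, the cross-industry question is just a group-by on `industry` over a date range.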
Chris Wiles showcased his setup for Claude Code and I thought it was sick. So I adapted it for Django projects, adding several skills that address common pain points in Django development.
I wanted to run markdown files like shell scripts. So I built an open source tool that lets you use a shebang to pipe them through Claude Code with full stdin/stdout support.<p>task.md:<p><pre><code> #!/usr/bin/env claude-run
Analyze this codebase and summarize the architecture.
</code></pre>
Then:<p><pre><code> chmod +x task.md
./task.md
</code></pre>
These aren't just prompts. Claude Code has tool use, so a markdown file can run shell commands, write scripts, read files, make API calls. The prompt orchestrates everything.<p>A script that runs your tests and reports results (`run_tests.md`):<p><pre><code> #!/usr/bin/env claude-run --permission-mode bypassPermissions
Run ./test/run_tests.sh and summarize what passed and failed.
</code></pre>
Because stdin/stdout work like any Unix program, you can chain them:<p><pre><code> cat data.json | ./analyze.md > results.txt
git log -10 | ./summarize.md
./generate.md | ./review.md > final.txt
</code></pre>
Or mix them with traditional shell scripts:<p><pre><code> for f in logs/*.txt; do
cat "$f" | ./analyze.md >> summary.txt
done
</code></pre>
This replaced a lot of Python glue code for us. Tasks that needed LLM orchestration libraries are now markdown files composed with standard Unix tools. Composable as building blocks, runnable as cron jobs, etc.<p>One thing we didn't expect is that these are more auditable (and shareable) than shell scripts. Install scripts like `curl -fsSL <a href="https://bun.com/install" rel="nofollow">https://bun.com/install</a> | bash` could become:<p><pre><code> curl -fsSL https://bun.com/install.md | claude-run
</code></pre>
Where install.md says something like "Detect my OS and architecture, download the right binary from GitHub releases, extract to ~/.local/bin, update my shell config." A normal human can actually read and verify that.<p>The (really cool) executable markdown idea and auditability examples are from Pete Koomen (@koomen on X). As Pete says: "Markdown feels increasingly important in a way I'm not sure most people have wrapped their heads around yet."<p>We implemented it and added Unix pipe semantics. Currently works with Claude Code - hoping to support other AI coding tools too. You can also route scripts through different cloud providers (AWS Bedrock, etc.) if you want separate billing for automated jobs.<p>GitHub: <a href="https://github.com/andisearch/claude-switcher" rel="nofollow">https://github.com/andisearch/claude-switcher</a><p>What workflows would you use this for?
Shape regularization is a technique used in computational geometry to clean up noisy or imprecise geometric data by aligning segments to common orientations and adjusting their positions to create cleaner, more regular shapes.<p>I needed a Python implementation, so I started with the examples implemented in CGAL and then added a couple more: snap and joint regularization, and metric regularization.
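As a toy illustration of the orientation-alignment idea (this is not the CGAL algorithm, which optimizes over all segments jointly; `snap_angle` is a hypothetical helper written for this sketch):

```python
import math

def snap_angle(p, q, step_deg=45.0):
    """Rotate segment (p, q) about its midpoint so its orientation
    snaps to the nearest multiple of step_deg. A per-segment toy;
    real regularizers trade off rotation against fidelity globally."""
    mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    half = math.hypot(q[0] - p[0], q[1] - p[1]) / 2
    theta = math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    snapped = math.radians(round(theta / step_deg) * step_deg)
    dx, dy = half * math.cos(snapped), half * math.sin(snapped)
    return (mx - dx, my - dy), (mx + dx, my + dy)

# a segment about 2 degrees off horizontal snaps to exactly horizontal
p, q = snap_angle((0.0, 0.0), (10.0, 0.35))
```

Offset (position) regularization then nudges the snapped segments so collinear ones share a common line.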