Hi HN!<p>"Never perfect. Perfection goal that changes. Never stops moving. Can chase, cannot catch." - Abathur (<a href="https://www.youtube.com/watch?v=pw_GN3v-0Ls" rel="nofollow">https://www.youtube.com/watch?v=pw_GN3v-0Ls</a>)<p>StarCraft 2 is one of the most balanced games ever - thanks to Blizzard’s pursuit of perfection. It has been over 15 years since the release of Wings of Liberty and over 10 years since the last installment, Legacy of the Void. Yet, balance updates continue to appear, changing how the game plays. Thanks to that, StarCraft is still alive and well!<p>I decided to create an interactive visualization of all balance changes, both by patch and by unit, with smooth transitions.<p>I had this idea quite a few years ago, yet LLMs made it possible - otherwise, I wouldn't have had the time to code or to collect all changes from hundreds of patches (not all have balance updates). It took way more time than expected - both dealing with parsing data and dealing with D3.js transitions.<p>Pretty much pure vibe coding with Claude Code and Opus 4.5 - while constantly using Playwright skills and consulting Gemini 3 Pro (<a href="https://github.com/stared/gemini-claude-skills" rel="nofollow">https://github.com/stared/gemini-claude-skills</a>). While Opus 4.5 was much better at executing, it was often essential to use Gemini to get insights, to get cleaner code, or to inspect screenshots. The difference in quality was huge.<p>Still, it was tricky, as LLMs do not know D3.js nearly as well as React. The D3.js transition part is a thing that sometimes I think would be better to do manually, and only use LLMs for details. But it was also a lesson.<p>Enjoy!<p>Source code is here: <a href="https://github.com/stared/sc2-balance-timeline" rel="nofollow">https://github.com/stared/sc2-balance-timeline</a>
I keep seeing two extreme futures discussed around AI.

One is techno-utopia: AI does everything, productivity explodes, humans are free to create and chill.

The other is collapse: AI replaces jobs, wealth concentrates, consumption dies, society implodes.

What I don't see discussed enough is the mechanism between those states.

If AI systems genuinely outperform humans at most economically valuable tasks, wages are no longer the primary distribution mechanism. But capitalism today assumes wages are how demand exists. No wages means no buyers. No buyers means even the owners of AI have no customers.

That feels less like a social problem and more like a systems contradiction.

Historically, automation shifted labor rather than deleting it. But AI is different in that it targets cognition itself, not just muscle or repetition. If the marginal cost of intelligence trends toward zero, markets built on selling human time start to behave strangely.

Some questions I keep circling:

- Who funds demand in a post-labor economy?
- Is UBI enough, or does ownership of productive models need to be broader?
- Do we end up with state-mediated consumption rather than market-mediated consumption?
- Does GDP even remain a meaningful metric when production is decoupled from employment?

I'm not arguing AI doom or AI salvation here. I'm trying to understand the transition dynamics: the part where things either adapt smoothly or break loudly.

Curious how others here model this in their heads, especially folks building or deploying these systems today.
I think LLMs are overused to summarise and underused to help us read deeper.
I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them.
I started out with a pipeline in stages, chaining together LLM calls to build up a context of the library.
I was mainly getting back the insight that I was baking into the prompts, and the results weren't particularly surprising.
On a whim, I gave CC access to my debug CLI tools and found that it wiped the floor with that approach.
It gave actually interesting results and required very little orchestration in comparison.
One of my favourite trails of excerpts goes from Jobs' reality distortion field to Theranos' fake demos, to Thiel on startup cults, to Hoffer on mass-movement charlatans (https://trails.pieterma.es/trail/useful-lies/).
A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault's Pendulum mindset.
Details:
* The books are picked from HN's favourites (which I collected before: https://hnbooks.pieterma.es/).
* Chunks are indexed by topic using Gemini Flash Lite. The whole library cost about £10.
* Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels. This gives a high-level sense of the themes.
* There are several ways to browse. The most useful are embedding similarity, topic tree siblings, and topics cooccurring within a chunk window.
* Everything is stored in SQLite and manipulated using a set of CLI tools (a rough sketch of what one such query might look like is at the end of this post).

I wrote more about the process here: https://pieterma.es/syntopic-reading-claude/

I'm curious if this way of reading resonates for anyone else - LLM-mediated or not.
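To make the "topics co-occurring within a chunk window" browsing mode concrete, here is a minimal sketch of the kind of query a small CLI tool could run against the SQLite store. The schema (chunks, chunk_topics), the column names, and the window logic are my assumptions for illustration, not the project's actual tables.

```typescript
// Hypothetical schema, not the project's actual one:
//   chunks(id, book_id, position)       - one row per chunk, ordered within a book
//   chunk_topics(chunk_id, topic_id)    - which topics were assigned to which chunk
// Given a topic, rank other topics that appear within +/- `window` chunks of it.
import Database from "better-sqlite3";

const db = new Database("library.db", { readonly: true });

function cooccurringTopics(topicId: number, window = 2, limit = 20) {
  return db
    .prepare(
      `SELECT ct2.topic_id AS topic_id, COUNT(*) AS cooccurrences
       FROM chunk_topics ct1
       JOIN chunks c1 ON c1.id = ct1.chunk_id
       JOIN chunks c2 ON c2.book_id = c1.book_id
                     AND ABS(c2.position - c1.position) <= ?
       JOIN chunk_topics ct2 ON ct2.chunk_id = c2.id
       WHERE ct1.topic_id = ? AND ct2.topic_id != ?
       GROUP BY ct2.topic_id
       ORDER BY cooccurrences DESC
       LIMIT ?`
    )
    .all(window, topicId, topicId, limit);
}

console.log(cooccurringTopics(42));
```

The appeal of this kind of setup is that the agent doesn't need a bespoke orchestration layer: a handful of small, composable query tools like this is enough for it to decide which trail to follow on its own.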
Finding a true "Remote from Anywhere" role is harder than it looks.

Many jobs are labeled "remote," but the fine print often ties them to a region, a time zone, or specific legal and tax requirements.

Here are practical checks that help you spot "remote anywhere" roles faster and avoid common red flags.

1) Read the location line

Start with the simplest signal: is there a geography attached?

- "US Remote," "Remote (EU)," "LATAM only," or "Remote within X countries" usually means location restrictions.
- If time zones are listed, that can also imply location limits, even when the role is technically remote.
- Look for explicit language like "Global remote," "Work from anywhere," "fully asynchronous," or "distributed team across multiple countries." These are not guarantees, but they are stronger indicators.

2) Treat salary as a clue

Pay ranges can indicate the target hiring market.

- A range like $100k to $250k often signals a US-centered market (not always, but often).

3) Watch the application form

Sometimes the job post is vague, but the ATS form tells the truth:

- Questions like "Which time zone can you work in?" can reveal the required overlap.
- If the location dropdown includes only a few regions (e.g., US, Canada, Europe, Other), it often indicates there are specific geographic requirements.
- Red flags that usually indicate US-only hiring include questions about US work authorization, a US tax ID, US-specific benefits, or requirements such as a security clearance.

4) Check the company on LinkedIn

If a company truly hires globally, you can usually see it in its team.

- Review employee locations. Even if LinkedIn shows only a few "top locations," individual profiles reveal the real spread.
- Search for your profession (e.g., Software Engineer) and check where they actually live.
- If you see people working from India, Asia, Africa, or other regions beyond the US and Europe, that is a strong sign the company can hire internationally.

5) Compare career pages and external job boards

Job descriptions are sometimes more detailed on the company website.

- Look for mentions of an asynchronous culture, a multinational team, or the number of nationalities in the company.
- Check LinkedIn job posts and external job boards. They sometimes include location constraints that are missing from the official posting.

"Remote anywhere" roles exist, but they are a narrower category than most people expect.

Companies balance time-zone collaboration, employment compliance, payroll, and security requirements.

Good luck with your remote job search!
Hi HN, I'm Abrar Nasir Jaffari, co-founder of HackLikeMe. We built an agentic CLI because we were tired of the context-switching between LLM web chats and the terminal when doing DevSecOps work. Most AI coding assistants are just wrappers for file editing. We've built 6 specialized agents (Coder, FullStack, Security, DevOps, Plan, Monitor) that have native terminal access.

It doesn't just suggest code; it can:

- Run nmap to audit your local network.
- Use tshark to analyze packet captures.
- Manage Docker containers and Kubernetes clusters via kubectl.

The 'Pause to Think' feature: before it executes a command, it generates a reasoning plan so you can see why it's about to run a specific script.

The beta offer: we launched yesterday and we're currently in beta. We are giving free Pro access to the first 100 HN users - no credit card required.

We're running on a mix of AWS and GCP (leveraging some credits we just landed), so we're able to offer some decent compute for the reasoning models during the beta.