URL: https://github.com/WozzHQ/wozz

Hi HN,

I built Wozz, an open-source CLI and GitHub Action to catch expensive Kubernetes configs before they merge.

The Motivation: I noticed that most cloud cost tools (like Kubecost) only show you the bill 30 days later. By then, the over-provisioned sidecar or massive Java heap is already in production. I wanted something that acts like a unit test for resource requests, blocking fat-finger mistakes in the PR rather than waiting for the bill.

How it works: Wozz runs in two modes.

In CI/CD (The Linter): It parses the git diff of your manifests (deployment.yaml, etc.), calculates the cost delta (requests × replicas), and posts a comment if the change exceeds a threshold (e.g., +$50/mo). It also checks HorizontalPodAutoscaler limits to flag worst-case scaling risks.

Locally (The Auditor): It scans your current kubecontext to compare reserved requests vs. actual live usage (kubectl top). This helps find the "Sleep Insurance" gap: devs request 4GB of RAM just to be safe, but the app only uses 200MB.

Implementation Details

Stack: TypeScript/Node.js.

Math: Instead of querying AWS Cost APIs (which require sensitive creds and are slow), it uses a configurable Blended Rate (e.g., $0.04/GB/hr) to estimate costs deterministically.

Privacy: It runs 100% locally or in your runner. No manifests or secrets are sent to any external server.

Repo: https://github.com/WozzHQ/wozz

Feedback: I'm currently using a static Blended Rate for the cost math to keep the tool fast and stateless. I'm curious whether this approximation is accurate enough for your team's guardrails, or whether you strictly require real-time Spot Instance pricing to trust a tool like this.
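For concreteness, here is a minimal sketch of the blended-rate delta math described above. The types, names, and threshold are illustrative assumptions, not Wozz's actual code.

```typescript
// Illustrative sketch of the "requests x replicas x blended rate" math.
// WorkloadSpec, monthlyCost, and costDelta are hypothetical names.

interface WorkloadSpec {
  memoryGB: number;  // memory request per pod, in GB
  replicas: number;  // desired replica count
}

const BLENDED_RATE_PER_GB_HOUR = 0.04; // configurable, e.g. $0.04/GB/hr
const HOURS_PER_MONTH = 730;

// Estimated monthly cost of the reserved memory for one workload.
function monthlyCost(spec: WorkloadSpec): number {
  return spec.memoryGB * spec.replicas * BLENDED_RATE_PER_GB_HOUR * HOURS_PER_MONTH;
}

// Delta between the base branch and the PR head; a CI step would comment
// on the PR when this exceeds a threshold such as +$50/mo.
function costDelta(base: WorkloadSpec, head: WorkloadSpec): number {
  return monthlyCost(head) - monthlyCost(base);
}

// Example: bumping a 1 GB x 3 replica deployment to 2 GB x 4 replicas.
console.log(costDelta({ memoryGB: 1, replicas: 3 }, { memoryGB: 2, replicas: 4 }));
// ~= $146/mo increase at the default rate
```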
I created a daily game where you get a random Bible verse and try to identify the book (e.g. "Psalms", "Genesis", "Luke") in as few guesses as possible.

I have absolutely no clue how I got the idea, other than the fact that I grew up in the Orthodox Church and all my other coding projects have been faith-related (a terrible mobile app (1) and a slightly broken Byzantine chant website (2)). I'm a relatively new developer and I've been hungry for a project to build that people will actually use and share around, so I hoped this would fit the bill.

Sure enough, friends and family have been making it part of their daily routine. When priests AND my nonreligious college friends started sending me their results every day, I knew I had *something*. It was really exciting.

------

When the idea popped into my head, I started working on it right away. I created the project at 1AM and had an MVP/SLC version done a few hours later. That was a few weeks ago.

I am using SvelteKit, no external APIs, and SQLite for the database. It's hosted on an Ubuntu machine in my living room. Coding agents like Roo/Kilo Code assisted heavily in the development, but only after I had already decided on the overall architecture and how I wanted things to work together.

The game is free, has no signup, and I'm not running any ads. I'm looking for any and all feedback, and especially suggestions for how I can make the game more interesting, fun, and/or educational.

Thank you HN!
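As an aside, one common way to serve a deterministic "verse of the day" from SQLite looks roughly like the sketch below. This is a hypothetical illustration; the schema, table, and column names are assumptions, not the author's implementation.

```typescript
// Hypothetical sketch: pick the same verse for every visitor on a given day.
// Table/column names (verses, book, text) are illustrative only.
import Database from 'better-sqlite3';

const db = new Database('verses.db');

function verseOfTheDay(date: Date) {
  // Days since the Unix epoch, so the pick changes once per calendar day.
  const dayIndex = Math.floor(date.getTime() / 86_400_000);
  const { count } = db
    .prepare('SELECT COUNT(*) AS count FROM verses')
    .get() as { count: number };
  // Simple modular mapping over a pre-shuffled table to avoid obvious ordering.
  return db
    .prepare('SELECT book, text FROM verses LIMIT 1 OFFSET ?')
    .get(dayIndex % count);
}
```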
Hey HN,
We all know the pain: The code is clean, the product is solid, but the landing page isn't converting.
I built Vect (vect.pro) to solve this. It’s an Autonomous Marketing OS, but the core feature is the Conversion Killer Detector.
Instead of just "generating text", it acts as a hostile auditor. It simulates a skeptical buyer's inner monologue to flag exactly where your copy is vague, passive, or confusing.
The Tech:
Frontend: React + TypeScript (Command Center UI).
Reasoning: Gemini 2.5 Flash for the audit logic.
Simulation: It runs your copy through 10 distinct "Skeptic" personas to find friction points.
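A rough sketch of what a persona-driven audit loop can look like is below. The persona texts, prompt wording, and the callModel parameter are assumptions for illustration, not Vect's internals; callModel would be wired to whatever client calls the reasoning model (e.g. Gemini 2.5 Flash).

```typescript
// Conceptual sketch of a persona-based copy audit; not Vect's actual code.

const SKEPTIC_PERSONAS = [
  'Budget-conscious CTO who distrusts vague ROI claims',
  'Senior engineer allergic to buzzwords',
  // ...remaining personas in the real set
];

// callModel is injected so this sketch stays provider-agnostic.
async function auditCopy(
  landingPageCopy: string,
  callModel: (prompt: string) => Promise<string>,
): Promise<string[]> {
  const findings: string[] = [];
  for (const persona of SKEPTIC_PERSONAS) {
    const prompt =
      `You are: ${persona}.\n` +
      `Read this landing page copy and list every sentence that is vague, ` +
      `passive, or confusing, with a one-line reason:\n\n${landingPageCopy}`;
    findings.push(await callModel(prompt));
  }
  return findings;
}
```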
It’s free to try the audit. I built this to help technical founders stop losing sales to bad copy.
Link: https://vect.pro/
AI coding agents can build iOS UI, but they can't verify it. They can update a screen, but the UI might drift or break, and nobody catches it until a human checks.

qckfx gives your agent a baseline. Record a simulator session once. Every tap, scroll, and network response gets captured. On replay, each screen is compared against the original.

With our MCP, your agent triggers the tests and gets back visual diffs of exactly what changed. Updating the baselines is one click.

Under the hood:

- Full network replay (HTTP & WebSocket)

- Initial disk & keychain state captured during recording and restored on every run

- Precise scroll positioning (built from scratch; XCUITest only exposes this on macOS and iPad)

- No AI in the loop at runtime, fast execution

No SDK or code changes needed. Nothing to commit to git. Just download the app and go.

Everything runs locally. Your data stays on your machine.

https://qckfx.com
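For readers unfamiliar with visual baseline testing in general: the core idea of comparing a replayed screen against its recorded baseline can be illustrated with the open-source pixelmatch and pngjs packages, as in the sketch below. This is a generic illustration of the technique, not how qckfx itself is implemented (qckfx needs no SDK and runs as a local app).

```typescript
// Generic illustration of diffing a replayed screenshot against a baseline.
// Uses pixelmatch + pngjs; not qckfx's implementation.
import fs from 'node:fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

function compareScreens(baselinePath: string, replayPath: string, diffPath: string): number {
  const baseline = PNG.sync.read(fs.readFileSync(baselinePath));
  const replay = PNG.sync.read(fs.readFileSync(replayPath));
  const { width, height } = baseline;
  const diff = new PNG({ width, height });

  // Returns the number of mismatched pixels; the diff image highlights them.
  const mismatched = pixelmatch(baseline.data, replay.data, diff.data, width, height, {
    threshold: 0.1, // tolerance for minor rendering/anti-aliasing noise
  });

  fs.writeFileSync(diffPath, PNG.sync.write(diff));
  return mismatched;
}
```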