I built an open-source desktop weather widget for Windows focused on short-term
rain accuracy using 15-minute nowcasting. Feedback welcome.
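For context on what a 15-minute nowcast feed looks like, here's a minimal sketch that pulls 15-minute precipitation data from the free Open-Meteo forecast API. This is only an illustration; the post doesn't say which data source the widget actually uses.<p><pre><code># Hypothetical sketch: fetch a 15-minute precipitation nowcast from Open-Meteo.
# The widget's real data source and response handling are not described above.
from datetime import datetime, timezone
import requests

def upcoming_rain(lat: float, lon: float, slots: int = 4) -> list[tuple[str, float]]:
    resp = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={"latitude": lat, "longitude": lon, "minutely_15": "precipitation"},
        timeout=10,
    )
    resp.raise_for_status()
    block = resp.json()["minutely_15"]  # parallel "time" and "precipitation" arrays, GMT
    now = datetime.now(timezone.utc)
    future = [
        (t, p)
        for t, p in zip(block["time"], block["precipitation"])
        if datetime.fromisoformat(t).replace(tzinfo=timezone.utc) >= now
    ]
    return future[:slots]

if __name__ == "__main__":
    for slot, mm in upcoming_rain(52.52, 13.41):
        print(f"{slot} UTC: {mm} mm")
</code></pre>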
I did something that might be incredibly stupid.
I made 25 specific predictions about what will happen in tech over the next 6 months — and published all of them with deadlines before I could chicken out.
Not vague "AI will grow" predictions. Specific, falsifiable claims like:<p>Medical AI will face mandatory safety requirements within 18 months (regulatory signals are screaming)
There's a ~6 month window in AI infrastructure before consolidation locks out new entrants
Browser agents hit mainstream faster than current discourse suggests<p>Each prediction has a confidence score, a hard deadline, and what would prove me wrong.
Why would I do this?
Because I'm tired of pundits making unfalsifiable claims and retroactively declaring victory. "I predicted crypto would struggle" — okay, when? By how much? What counts as struggling?
So I'm doing the opposite. Public predictions. Specific deadlines. No editing after the fact.
The first verification check runs January 24. I'll publish results whether they make me look smart or completely delusional.
A few already make me uncomfortable — some have confidence scores above 75%, which feels overconfident for 6-month horizons. But that's the point. If I'm not risking being wrong, I'm not actually predicting anything.
All 25: <a href="https://asof.app/alpha" rel="nofollow">https://asof.app/alpha</a>
What's your most contrarian take on what happens in tech this year? Curious what predictions HN would make with actual deadlines attached.
An easy-to-self-host social media platform designed to work without a central authority.<p>Each instance controls its own data, feeds are lightweight, and content can be verified cryptographically.
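The post doesn't say which signature scheme is used, so as a purely illustrative sketch, here's how per-instance content signing and verification could look with Ed25519 via the Python cryptography package (hypothetical field names, not the project's actual protocol):<p><pre><code># Illustration only: sign a post with an instance key, verify it on another instance.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each instance holds a long-lived keypair; the public key is published to peers.
instance_key = Ed25519PrivateKey.generate()
public_key = instance_key.public_key()

post = b'{"author": "alice@example.instance", "body": "hello"}'
signature = instance_key.sign(post)

# A peer with the public key can confirm the post wasn't altered in transit.
try:
    public_key.verify(signature, post)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
</code></pre>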
While working on a proof-of-concept project, I kept hitting Claude's token limit 30-60 minutes into its 5-hour sessions. The accumulating context from the codebase was eating through tokens fast. So I built a language designed to be generated by AI rather than written by humans.<p>GlyphLang<p>GlyphLang replaces verbose keywords with symbols that tokenize more efficiently:<p><pre><code> # Python
@app.route('/users/<id>')
def get_user(id):
    user = db.query("SELECT * FROM users WHERE id = ?", id)
    return jsonify(user)
# GlyphLang
@ GET /users/:id {
  $ user = db.query("SELECT * FROM users WHERE id = ?", id)
  > user
}
</code></pre>
@ = route, $ = variable, > = return. Initial benchmarks show ~45% fewer tokens than Python, ~63% fewer than Java.
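If you want to sanity-check the token numbers yourself, here's a rough way to do it with OpenAI's tiktoken package. Claude uses a different tokenizer, so treat the counts as approximate; the snippets are just the two examples above.<p><pre><code>import tiktoken  # pip install tiktoken

python_src = '''@app.route('/users/<id>')
def get_user(id):
    user = db.query("SELECT * FROM users WHERE id = ?", id)
    return jsonify(user)'''

glyph_src = '''@ GET /users/:id {
  $ user = db.query("SELECT * FROM users WHERE id = ?", id)
  > user
}'''

# cl100k_base is a stand-in tokenizer; Claude's own tokenizer will differ.
enc = tiktoken.get_encoding("cl100k_base")
for name, src in [("Python", python_src), ("GlyphLang", glyph_src)]:
    print(f"{name}: {len(enc.encode(src))} tokens")
</code></pre>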
In practice, that means more logic fits in context, and sessions stretch longer before hitting limits. The AI maintains a broader view of your codebase throughout.<p>Before anyone asks: no, this isn't APL with extra steps. APL, Perl, and Forth are symbol-heavy but optimized for mathematical notation, human terseness, or machine efficiency. GlyphLang is specifically optimized for how modern LLMs tokenize. It's designed to be generated by AI and reviewed by humans, not the other way around. That said, it's still readable enough to be written or tweaked by hand if the occasion requires.<p>It's still a work in progress, but it's a usable language with a bytecode compiler, JIT, LSP, VS Code extension, PostgreSQL support, WebSockets, async/await, and generics.<p>Docs: <a href="https://glyphlang.dev/docs" rel="nofollow">https://glyphlang.dev/docs</a><p>GitHub: <a href="https://github.com/GlyphLang/GlyphLang" rel="nofollow">https://github.com/GlyphLang/GlyphLang</a>
I've been using TheTabber primarily to repurpose my mom's TikTok content for other social media platforms. The available options are too expensive, too cluttered, and too confusing.<p>I thought, why not add more features with a clean and easy-to-use UI and publish it? So here I am.<p>I came up with the name TheTabber because the more tabs you have open for social media posting, the more of The Tabber you are--and this product is for you. LOL!<p>These are the available features so far:<p>- Connect 9+ social platforms
- Post or schedule image, carousel, video, or text content
- Repurpose content from other social media accounts
- View analytics of your posts
- Create UGC-style videos with AI help
- Create 2x2 image grid videos with AI help
- Generate captions, edit styles, and split long videos into segments with AI help
<i>TLDR:</i> Librario is a book metadata API that aggregates data from Google Books, ISBNDB, and Hardcover into a single response, solving the problem of no single source having complete book information. It's currently pre-alpha, AGPL-licensed, and available to try now[0].<p>My wife and I have a personal library with around 1,800 books. I started working on a library management tool for us, but I quickly realized I needed a source of data for book information, and none of the solutions available provided all the data I needed. One might provide the series, the other might provide genres, and another might provide a good cover, but none provided everything.<p>So I started working on Librario, a book metadata aggregation API written in Go. It fetches information about books from multiple sources (Google Books, ISBNDB, Hardcover. Working on Goodreads and Anna's Archive next.), merges everything, and saves it all to a PostgreSQL database for future lookups. The idea is that the database gets stronger over time as more books are queried.<p>You can see an example response here[1], or try it yourself:<p><pre><code> curl -s -H 'Authorization: Bearer librario_ARbmrp1fjBpDywzhvrQcByA4sZ9pn7D5HEk0kmS34eqRcaujyt0enCZ' \
'https://api.librario.dev/v1/book/9781328879943' | jq .
</code></pre>
This is pre-alpha and runs on a small VPS, so keep that in mind. I've never hit the rate limits of the third-party services, so depending on how this post goes, I may or may not find out how well the code handles that.<p>The merger is the heart of the service, and figuring out how to combine conflicting data from different sources was the hardest part. In the end I decided to use field-specific strategies that are quite naive but work for now.<p>Each extractor has a priority, and results are sorted by that priority before merging. But priority alone isn't enough, so different fields need different treatment.<p>For example:<p>- Titles use a scoring system. I penalize titles containing parentheses or brackets because sources sometimes shove subtitles into the main title field. Overly long titles (80+ chars) also get penalized since they often contain edition information or other metadata that belongs elsewhere.<p>- Covers collect all candidate URLs, then a separate fetcher downloads and scores them by dimensions and quality. The best one gets stored locally and served from the server.<p>For most other fields (publisher, language, page count), I just take the first non-empty value by priority. Simple, but it works. (There's a rough sketch of this merge approach at the end of this post.)<p>I recently added a caching layer[2], which sped things up nicely. I considered migrating from <i>net/http</i> to <i>fiber</i> at some point[3], but decided against it. Going outside the standard library felt wrong, and the migration didn't provide much in the end.<p>The database layer is being rewritten before v1.0[4]. I'll be honest: the original schema was written by AI, and while I tried to guide it in the right direction with SQLC[5] and good documentation, database design isn't my strong suit and I couldn't confidently vouch for the code. Rather than ship something I don't fully understand, I hired the developers from SourceHut[6] to rewrite it properly.<p>I've got a 5-month-old and we're still adjusting to their schedule, so development is slow. I've mentioned this project in a few HN threads before[7], so I'm pretty happy to finally have something people can try.<p>Code is AGPL and on SourceHut[8].<p>Feedback and patches[9] are very welcome :)<p>[0]: <a href="https://sr.ht/~pagina394/librario/" rel="nofollow">https://sr.ht/~pagina394/librario/</a><p>[1]: <a href="https://paste.sr.ht/~jamesponddotco/a6c3b1130133f384cffd25b33a8ab1bc3392093c" rel="nofollow">https://paste.sr.ht/~jamesponddotco/a6c3b1130133f384cffd25b3...</a><p>[2]: <a href="https://todo.sr.ht/~pagina394/librario/16" rel="nofollow">https://todo.sr.ht/~pagina394/librario/16</a><p>[3]: <a href="https://todo.sr.ht/~pagina394/librario/13" rel="nofollow">https://todo.sr.ht/~pagina394/librario/13</a><p>[4]: <a href="https://todo.sr.ht/~pagina394/librario/14" rel="nofollow">https://todo.sr.ht/~pagina394/librario/14</a><p>[5]: <a href="https://sqlc.dev" rel="nofollow">https://sqlc.dev</a><p>[6]: <a href="https://sourcehut.org/consultancy/" rel="nofollow">https://sourcehut.org/consultancy/</a><p>[7]: <a href="https://news.ycombinator.com/item?id=45419234">https://news.ycombinator.com/item?id=45419234</a><p>[8]: <a href="https://sr.ht/~pagina394/librario/" rel="nofollow">https://sr.ht/~pagina394/librario/</a><p>[9]: <a href="https://git.sr.ht/~pagina394/librario/tree/trunk/item/CONTRIBUTING.md" rel="nofollow">https://git.sr.ht/~pagina394/librario/tree/trunk/item/CONTRI...</a>
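To make the merge strategy above a bit more concrete, here's a minimal sketch of priority-ordered, field-specific merging with a title-scoring pass. Librario itself is written in Go; the names and types below are invented for illustration and are not the project's actual code.<p><pre><code># Illustrative only: not Librario's real types or functions.
from dataclasses import dataclass

@dataclass
class Record:
    priority: int          # lower number = more trusted extractor
    title: str = ""
    publisher: str = ""
    page_count: int = 0

def score_title(title: str) -> int:
    """Penalize titles that likely contain subtitles or edition metadata."""
    score = 100
    if "(" in title or "[" in title:
        score -= 30
    if len(title) > 80:
        score -= 30
    return score

def merge(records: list[Record]) -> Record:
    records = sorted(records, key=lambda r: r.priority)
    merged = Record(priority=0)

    # Titles: best-scoring candidate wins; ties fall back to extractor priority.
    titled = [r for r in records if r.title]
    if titled:
        merged.title = max(titled, key=lambda r: score_title(r.title)).title

    # Most other fields: first non-empty value in priority order.
    merged.publisher = next((r.publisher for r in records if r.publisher), "")
    merged.page_count = next((r.page_count for r in records if r.page_count), 0)
    return merged
</code></pre>
Cover handling would sit outside a merger like this, since it involves downloading and scoring image candidates as described above.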