2 · by JoseOSAF · 27 days ago · original post
I did something that might be incredibly stupid. I made 25 specific predictions about what will happen in tech over the next 6 months — and published all of them with deadlines before I could chicken out. Not vague "AI will grow" predictions. Specific, falsifiable claims like:

- Medical AI will face mandatory safety requirements within 18 months (regulatory signals are screaming)
- There's a ~6-month window in AI infrastructure before consolidation locks out new entrants
- Browser agents hit mainstream faster than current discourse suggests

Each prediction has a confidence score, a hard deadline, and what would prove me wrong.

Why would I do this? Because I'm tired of pundits making unfalsifiable claims and retroactively declaring victory. "I predicted crypto would struggle" — okay, when? By how much? What counts as struggling?

So I'm doing the opposite. Public predictions. Specific deadlines. No editing after the fact. The first verification check runs January 24. I'll publish results whether they make me look smart or completely delusional.

A few already make me uncomfortable — some have conviction scores above 75%, which feels overconfident for 6-month horizons. But that's the point. If I'm not risking being wrong, I'm not actually predicting anything.

All 25: https://asof.app/alpha

What's your most contrarian take on what happens in tech this year? Curious what predictions HN would make with actual deadlines attached.
1 · by thegoodduck · 27 days ago · original post
An easy-to-self-host social media platform designed to work without a central authority.

Each instance controls its own data, feeds are lightweight, and content can be verified cryptographically.
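The post doesn't say how the cryptographic verification works, but per-post signatures are one plausible approach. A minimal sketch using Ed25519 via Python's cryptography package — my assumption for illustration, not necessarily this project's actual scheme:

    # Hypothetical per-post signing/verification with Ed25519.
    # This is a guess at what "verified cryptographically" could mean here,
    # not the project's actual design.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    author_key = Ed25519PrivateKey.generate()   # kept by the author's instance
    public_key = author_key.public_key()        # published alongside the profile

    post = b'{"author": "alice@instance.example", "body": "hello, fediverse"}'
    signature = author_key.sign(post)           # attached to the post

    # Any other instance can verify the post against the author's public key.
    try:
        public_key.verify(signature, post)
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")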
1 · by goose0004 · 27 days ago · original post
While working on a proof-of-concept project, I kept hitting Claude's token limit 30-60 minutes into its 5-hour sessions. The accumulating context from the codebase was eating through tokens fast. So I built a language designed to be generated by AI rather than written by humans.

GlyphLang

GlyphLang replaces verbose keywords with symbols that tokenize more efficiently:

    # Python
    @app.route('/users/<id>')
    def get_user(id):
        user = db.query("SELECT * FROM users WHERE id = ?", id)
        return jsonify(user)

    # GlyphLang
    @ GET /users/:id {
        $ user = db.query("SELECT * FROM users WHERE id = ?", id)
        > user
    }

@ = route, $ = variable, > = return. Initial benchmarks show ~45% fewer tokens than Python, ~63% fewer than Java.

In practice, that means more logic fits in context, and sessions stretch longer before hitting limits. The AI maintains a broader view of your codebase throughout.

Before anyone asks: no, this isn't APL with extra steps. APL, Perl, and Forth are symbol-heavy but optimized for mathematical notation, human terseness, or machine efficiency. GlyphLang is specifically optimized for how modern LLMs tokenize. It's designed to be generated by AI and reviewed by humans, not the other way around. That said, it's still readable enough to be written or tweaked if the occasion requires.

It's still a work in progress, but it's a usable language with a bytecode compiler, JIT, LSP, VS Code extension, PostgreSQL support, WebSockets, async/await, and generics.

Docs: https://glyphlang.dev/docs

GitHub: https://github.com/GlyphLang/GlyphLang
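For readers who want to sanity-check token-count claims like these, here is a rough sketch using OpenAI's tiktoken tokenizer. This is my own illustration, not taken from the GlyphLang repo; Claude uses a different tokenizer, so exact counts will vary by model.

    # Rough token-count comparison of the two snippets from the post.
    # Illustrative only: tokenizers differ between models, and a real
    # benchmark would use a full corpus rather than one function.
    import tiktoken

    python_snippet = """@app.route('/users/<id>')
    def get_user(id):
        user = db.query("SELECT * FROM users WHERE id = ?", id)
        return jsonify(user)
    """

    glyphlang_snippet = """@ GET /users/:id {
        $ user = db.query("SELECT * FROM users WHERE id = ?", id)
        > user
    }
    """

    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era encoding
    py_tokens = len(enc.encode(python_snippet))
    gl_tokens = len(enc.encode(glyphlang_snippet))
    print(f"Python: {py_tokens} tokens, GlyphLang: {gl_tokens} tokens "
          f"({1 - gl_tokens / py_tokens:.0%} fewer)")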
1 · by dibasdauliya · 27 days ago · original post
I've been using TheTabber primarily to repurpose my mom's TikTok content for other social media platforms. The available options are too expensive, too cluttered, and confusing.

I thought, why not add more features with a clean and easy-to-use UI and publish it? So here I am.

I came up with the name TheTabber because the more tabs you have open for social media posting, the more of The Tabber you are -- and this product is for you. LOL!

These are the features available so far:

- Connect 9+ social platforms
- Post or schedule image, carousel, video, or text content
- Repurpose content from other social media accounts
- View analytics of your posts
- Create UGC-style videos with AI help
- Create 2x2 image grid videos with AI help
- Generate captions, edit styles, and split long videos into segments with AI help
5 · by jamesponddotco · 27 days ago · original post
TLDR: Librario is a book metadata API that aggregates data from Google Books, ISBNDB, and Hardcover into a single response, solving the problem of no single source having complete book information. It's currently pre-alpha, AGPL-licensed, and available to try now[0].

My wife and I have a personal library of around 1,800 books. I started working on a library management tool for us, but quickly realized I needed a source of book data, and none of the available solutions provided all the data I needed. One might provide the series, another the genres, and another a good cover, but none provided everything.

So I started working on Librario, a book metadata aggregation API written in Go. It fetches information about books from multiple sources (Google Books, ISBNDB, Hardcover; Goodreads and Anna's Archive are next), merges everything, and saves it all to a PostgreSQL database for future lookups. The idea is that the database gets stronger over time as more books are queried.

You can see an example response here[1], or try it yourself:

    curl -s -H 'Authorization: Bearer librario_ARbmrp1fjBpDywzhvrQcByA4sZ9pn7D5HEk0kmS34eqRcaujyt0enCZ' \
        'https://api.librario.dev/v1/book/9781328879943' | jq .

This is pre-alpha and runs on a small VPS, so keep that in mind. I've never hit the rate limits of the third-party services, so depending on how this post goes, I may or may not find out how well the code handles that.

The merger is the heart of the service, and figuring out how to combine conflicting data from different sources was the hardest part. In the end I settled on field-specific strategies that are fairly naive, but they work for now.

Each extractor has a priority, and results are sorted by that priority before merging. But priority alone isn't enough, so different fields need different treatment. For example:

- Titles use a scoring system. I penalize titles containing parentheses or brackets, because sources sometimes shove subtitles into the main title field. Overly long titles (80+ characters) also get penalized, since they often contain edition information or other metadata that belongs elsewhere.

- Covers collect all candidate URLs, then a separate fetcher downloads and scores them by dimensions and quality. The best one gets stored locally and served from the server.

For most other fields (publisher, language, page count), I just take the first non-empty value by priority. Simple, but it works.

I recently added a caching layer[2], which sped things up nicely. I considered migrating from net/http to fiber at some point[3], but decided against it. Going outside the standard library felt wrong, and the migration didn't provide much in the end.

The database layer is being rewritten before v1.0[4]. I'll be honest: the original schema was written by AI, and while I tried to guide it in the right direction with SQLC[5] and good documentation, database design isn't my strong suit and I couldn't confidently vouch for the code. Rather than ship something I don't fully understand, I hired the developers from SourceHut[6] to rewrite it properly.

I've got a 5-month-old and we're still adjusting to their schedule, so development is slow.
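To make the field-specific merge strategies described above concrete, here is a minimal sketch of the idea in Python. Librario itself is written in Go, and the function names, weights, and field list below are my own invention for illustration, not the actual implementation.

    # Hypothetical sketch of priority-sorted, field-specific merging.
    # Librario is written in Go; this Python version only illustrates the idea.
    def score_title(title: str) -> int:
        """Higher is better; penalize junk that belongs in other fields."""
        score = 100
        if "(" in title or "[" in title:
            score -= 30   # subtitles or series info shoved into the title
        if len(title) >= 80:
            score -= 20   # edition info and other stray metadata
        return score

    def merge(results: list[dict]) -> dict:
        """`results` is already sorted by extractor priority (best first)."""
        merged = {}
        # Titles: pick the best-scoring candidate across all sources.
        titles = [r["title"] for r in results if r.get("title")]
        if titles:
            merged["title"] = max(titles, key=score_title)
        # Most other fields: first non-empty value wins, by priority.
        for field in ("publisher", "language", "page_count"):
            merged[field] = next((r[field] for r in results if r.get(field)), None)
        return merged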
I've mentioned this project in a few HN threads before[7], so I'm pretty happy to finally have something people can try.

Code is AGPL and on SourceHut[8].

Feedback and patches[9] are very welcome :)

[0]: https://sr.ht/~pagina394/librario/

[1]: https://paste.sr.ht/~jamesponddotco/a6c3b1130133f384cffd25b33a8ab1bc3392093c

[2]: https://todo.sr.ht/~pagina394/librario/16

[3]: https://todo.sr.ht/~pagina394/librario/13

[4]: https://todo.sr.ht/~pagina394/librario/14

[5]: https://sqlc.dev

[6]: https://sourcehut.org/consultancy/

[7]: https://news.ycombinator.com/item?id=45419234

[8]: https://sr.ht/~pagina394/librario/

[9]: https://git.sr.ht/~pagina394/librario/tree/trunk/item/CONTRIBUTING.md