I built Zone, an iOS app blocker using Apple's Family Controls API. The differentiator is simple: it counts how many times you try to open blocked apps.
Most app blockers just block. But I found the attempt count more revealing than the block itself. Seeing "you tried to open Instagram 47 times today" was a wake-up call I didn't get from blocking alone.
Technical notes: Family Controls API is poorly documented but provides system-level blocking that actually works (unlike overlay approaches). Had to handle some edge cases around authorization persistence and count tracking across app restarts. The API requires Screen Time permissions which adds friction to onboarding but ensures reliable blocking.
One interesting discovery: users seem to prefer seeing raw attempt counts over gamified metrics (streaks, badges, etc). Less is more for this use case.
Built in SwiftUI, local storage only, no subscriptions. Took about 3 months part-time.
Curious if others have worked with Family Controls API and what challenges you faced. Also interested in thoughts on digital wellness apps in general - does tracking behavior change it, or just make you more aware without actual change?
Hi everyone. I find Zig an interesting language; I'm learning it and also building a transpiled high-level language on top of it, and I'd like some help developing the syntax.

My language has four kinds of variable declaration:

1) using the local keyword — these are allocated in the arena of the enclosing function.
2) using the let keyword — these live on the stack, but I'm still looking for a way to make strings easier here.
3) manual memory — but my transpiler automatically inserts defer statements, so allocations are safe and are freed once the block exits.
4) using unsafe — fully manual memory management, but my transpiler still won't let the code compile until every allocation is freed somewhere.
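To make the four forms concrete, here is how they might look side by side. This is hypothetical syntax sketched from the descriptions above; the keyword names, helper functions, and freeing rules are all assumptions, not a finalized design:

```
fn example() {
    local buf = read_file("data.txt")  // 1) lives in this function's arena, freed with it
    let n = 42                         // 2) stack value; strings here are still an open question
    s = make_string("hello")           // 3) manual: transpiler emits a Zig `defer` to free at block exit
    unsafe p = alloc(1024)             // 4) fully manual; refuses to compile unless freed somewhere
    free(p)
}
```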
AI memory systems often become a black box. When an LLM produces a wrong answer, it's unclear whether the issue comes from storage, retrieval, or the memory itself.

Most systems rely on RAG and vector storage, which makes memory opaque and hard to inspect, especially for temporal or multi-step reasoning.

An alternative is to make memory readable and structured: store it as files, preserve raw inputs, and allow the LLM to read memory directly instead of relying only on vector search.
I recently used Deepseek, and when I sent another request in "Thinking" mode, it initially showed a "reading" mode activating. I had sent a regular text request without any documents, so I don't know what that means. I suspect it's a deeper pass over the user prompt.
I scraped 1,576 HN snapshots and found 159 stories that hit the maximum score. Then I crawled the actual articles and ran sentiment analysis.

The results surprised me.

*The Numbers*

- Negative sentiment: 78 articles (49%)
- Positive sentiment: 45 articles (28%)
- Neutral: 36 articles (23%)

Negative content doesn't just perform well – it dominates.

*What "Negative" Actually Means*

The viral negative posts weren't toxic or mean. They were:

- Exposing problems ("Why I mass-deleted my Chrome extensions")
- Challenging giants ("OpenAI's real business model")
- Honest failures ("I wasted 3 years building the wrong thing")
- Uncomfortable truths ("Your SaaS metrics are lying to you")

The pattern: something is broken and here's proof.

*Title Patterns That Worked*

From the 159 viral posts, these structures appeared repeatedly:

1. [Authority] says [Controversial Thing] - 23 posts
2. Why [Common Belief] is Wrong - 19 posts
3. I [Did Thing] and [Unexpected Result] - 31 posts
4. [Company] is [Doing Bad Thing] - 18 posts

Average title length: 8.3 words. The sweet spot is 6-12 words.

*What Didn't Work*

Almost none of the viral posts were:

- Pure product launches
- "I'm excited to announce..."
- Listicles ("10 ways to...")
- Generic advice

*The Uncomfortable Implication*

If you want reach on HN, you're better off writing about what's broken than what you built.

This isn't cynicism – it's selection pressure. HN readers are skeptics. They've seen every pitch. What cuts through is useful criticism backed by evidence.

*For Founders*

Before your next launch post, ask: what problem am I exposing? What assumption am I challenging? What did I learn the hard way?

That's your hook.

---

Data: Built a tool that snapshots HN/GitHub/Reddit/ProductHunt every 30 minutes. Analyzed 1,576 snapshots, found 2,984 instances of score=100, deduped to 159 unique URLs, crawled 143 successfully, ran GPT-4 sentiment analysis on full article text.

Happy to share the raw data if anyone wants to dig deeper.
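The dedup-and-count step of the pipeline can be sketched as below. This is a minimal illustration, not the actual tool: the snapshot tuples and the naive keyword classifier are stand-ins for the real crawler and the GPT-4 sentiment call.

```python
from collections import Counter

# Each snapshot row: (url, score). A URL can appear in many snapshots,
# so first keep only rows that hit the score cap, then dedupe by URL.
snapshots = [
    ("https://a.example/post", 100),
    ("https://a.example/post", 100),   # same story, later snapshot
    ("https://b.example/fail", 100),
    ("https://c.example/launch", 42),  # never hit the cap
]

capped_urls = {url for url, score in snapshots if score >= 100}

# Stand-in classifier; the real pipeline ran GPT-4 over full article text.
NEGATIVE_CUES = ("fail", "wrong", "broken", "lying")

def classify(text: str) -> str:
    return "negative" if any(cue in text.lower() for cue in NEGATIVE_CUES) else "other"

counts = Counter(classify(url) for url in capped_urls)
print(len(capped_urls), dict(counts))
```

Deduping before classifying is what turns the 2,984 score=100 rows into 159 unique stories: each URL is counted once no matter how many snapshots caught it at the cap.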