2 · author: PEGHIN · 30 days ago · original post
My agency was bleeding $1,800/year on contractor Notion seats. The problem: I needed to give contractors access to specific data (CRM, project tracker) but couldn't let them see pricing, margins, or other clients' information.

Notion's native options don't work:

- Row-level filtering exists, but it's view-only (contractors can't edit)
- Column hiding doesn't exist
- Guest sharing is read-only

So you either pay $15/mo per seat or duplicate databases (a maintenance nightmare).

I built a permissions layer on Notion's OAuth API. It lets contractors see only specific rows and columns, and edit data, all without expensive seats.

How it works:

- Connect Notion via OAuth
- Define roles: "Sales reps see only leads where owner = them; hide the pricing column"
- Contractors access a clean portal
- They view/edit data in near real time (syncs every 5 minutes)
- You pay $59/mo flat for unlimited users

The math:

- 5 contractors × $15/mo = $900/year wasted
- 20 contractors × $15/mo = $3,600/year wasted
- 50 contractors × $15/mo = $9,000/year wasted

With this: all of them = $59/mo flat.

Technical:

- Frontend: React + TypeScript
- Backend: Supabase + PostgreSQL (RLS)
- Auth: Notion OAuth 2.0

Current state: 50 beta testers. The first 20 customers get $49/month locked in (launching at $79 after January).

Limitations:

- Only Notion databases (not pages)
- 5-minute sync (not instant)
- Requires role definition
- No team permissions yet (on the roadmap)

The ask: if this solves a problem you have, we'd love feedback. Are there permission use cases we're missing? What's your price sensitivity?

Free trial: notionportals.com
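The row/column role model described above can be sketched roughly as follows. This is a minimal illustration of the idea, assuming a role is "rows where an owner field matches the user, minus some hidden columns"; the `Role` shape and `applyRole` helper are hypothetical names, not the actual product's API:

```typescript
// Hypothetical sketch of per-role row and column filtering over Notion-style
// records. Names and shapes are illustrative assumptions, not the real product.
type NotionRecord = Record<string, unknown>;

interface Role {
  ownerField: string;      // only rows where record[ownerField] === userId are visible
  hiddenColumns: string[]; // columns stripped from every visible row
}

function applyRole(records: NotionRecord[], role: Role, userId: string): NotionRecord[] {
  return records
    .filter((r) => r[role.ownerField] === userId) // row-level filter
    .map((r) => {
      // Column-level filter: copy every field except the hidden ones.
      const visible: NotionRecord = {};
      for (const [key, value] of Object.entries(r)) {
        if (!role.hiddenColumns.includes(key)) visible[key] = value;
      }
      return visible;
    });
}
```

In practice the same constraint would presumably be enforced server-side as well (e.g. via Postgres RLS policies, matching the Supabase stack mentioned below), since client-side filtering alone is not a security boundary.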
1 · author: Rooster61 · 30 days ago · original post
First off, this thread is NOT a petition to rally against the moderation team. Considering the deluge of trash they deal with every day, I think they are doing a valiant job and are to be commended.

That said, it's becoming more obvious every day that a tremendous number of bots, and specifically AI agents, are bombarding HN. I worry about the integrity of the discourse here, and whether the ever-growing wave of slop will outpace the staff resources available to deal with it. Is it time to implement a captcha for HN? If so, should it be an off-the-shelf solution, or a new mechanism tailored to the security- and privacy-centric nature of the HN readership?
4 · author: throwawayround · 30 days ago · original post
Posting this because it took me way too long to figure out what was going on, and I wish I had seen a post like this earlier.

I just canceled two Cursor Ultra plans. My usage went from a steady ~$60–100/month to $500+ in a few days, projecting ~$1,600/month. Support told me this was "expected."

I did not suddenly start doing 10x more work.

Cursor shows a 200k context window and says content is summarized to stay within limits. Pricing is shown as $ per million tokens. Based on that, I monitored my call count and thought I was being careful.

What I did not realise:

- Cursor builds a very large hidden prompt state: conversation history, tool traces, agent state, extended reasoning, codebase context.
- That state is prompt-cached.
- On every call, the entire cached prefix is replayed.
- Anthropic bills cache read tokens for every replay.
- Cache reads are billed even if that content is later summarised or truncated before inference.

So the UI says "max 200k context", but billing says otherwise.

Concrete example from my usage:

- MAX mode: off
- Actual user input: ~4k tokens
- Cache read tokens: ~21 million
- Total tokens billed: ~22 million
- Cost for one call: about $12

Claude never attended to 21M tokens. I still paid for them.

This was not just Opus. It happened with Sonnet too.

Support explained that this is exactly how the API is billed, so there wasn't an error, and that I should simply use these models more carefully because they can consume a lot of tokens while thinking. But there is a limit to that, and what I was charged was far too high. There is ZERO transparency about how the cache is used. And the cache breakpoints are decided by Cursor, so I don't think it's fair to throw the ball to Anthropic here.

The dangerous part is that cost becomes decoupled from anything you can see or reason about as a user. You think you are operating inside a 200k window, but you are paying for a much larger hidden history being replayed over and over.

I am not claiming a bug in Anthropic's API. This is a product transparency issue. If a tool can silently turn a few hundred dollars of usage into four figures because of hidden caching behaviour, users need much better visibility and controls. Support suggested spend controls, but I am actually complaining about how my pre-paid package was consumed.

If you use Cursor with long-running chats, agents, or large codebases, check your cache read tokens carefully. The UI will not warn you. The only thing you will see, a few days into your subscription, is "You are projected to run out of your usage allowance in a few days."

I canceled and moved on, and am giving Claude Code a shot until this is fixed. Posting so others do not find out the hard way.
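The billing dynamic described above can be shown with a back-of-envelope model: each call pays for its fresh input plus a cache read of the entire accumulated prefix, so per-call cost tracks the hidden prefix size, not the visible input, and session cost grows quadratically with session length. The rates and growth numbers below are made-up assumptions for illustration only, not Anthropic's actual prices:

```typescript
// Back-of-envelope model of replayed prompt-cache reads dominating cost.
// Both rates are ASSUMED for illustration; substitute current model pricing.
const CACHE_READ_PER_MTOK = 0.5; // assumed $/million cached tokens read
const INPUT_PER_MTOK = 5.0;      // assumed $/million fresh input tokens

function callCost(freshInputTokens: number, cachedPrefixTokens: number): number {
  return (
    (freshInputTokens / 1e6) * INPUT_PER_MTOK +
    (cachedPrefixTokens / 1e6) * CACHE_READ_PER_MTOK
  );
}

// A long agent session: each call adds ~4k fresh tokens, while the entire
// accumulated prefix is replayed as a cache read on every call.
let prefix = 0;
let total = 0;
for (let call = 0; call < 100; call++) {
  total += callCost(4_000, prefix);
  prefix += 50_000; // assumed prefix growth: tool traces, history, code context
}
// Fresh input over the session is only 400k tokens, but the replayed prefix
// sums to ~247.5M tokens, so cache reads dominate the total bill.
```

Under these assumptions, a single call with a 21M-token cached prefix costs about $10.50 in cache reads against only $0.02 of visible input, which is the shape of the "about $12 per call" example above even though the exact rates here are invented.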