Anthropic's Claude Code Pro Test: A Pricing Warning for SMBs
For a few hours on April 21, 2026, Anthropic’s pricing page told new subscribers that the $20-a-month Pro plan no longer included Claude Code. The checkmark turned into an X. Then it turned back. Anthropic later confirmed it was an A/B test on roughly 2% of new prosumer signups, and existing Pro and Max subscribers were never affected.
The episode lasted long enough to rattle developer Twitter and ended quickly enough that most users never noticed. But for solo developers, small dev shops, and the growing number of small businesses that have quietly built workflows on top of Claude Code, the test is worth paying attention to. Premium AI coding tools are getting repriced in real time, and the cheapest tier is where the squeeze starts.
What Anthropic actually did
According to The Register’s reporting, Anthropic’s pricing page on Monday, April 20 stated that the Pro plan “includes Claude Code.” By Tuesday the 21st, that line was gone and the feature checkmark had been replaced with an X. Hacker News picked it up within hours, and Anthropic’s head of growth Amol Avasare confirmed the change in a public statement.
“For clarity, we’re running a small test on ~2 percent of new prosumer signups. Existing Pro and Max subscribers aren’t affected.”
Avasare’s explanation, captured in Simon Willison’s analysis of the confusion, pointed at usage. When the Max plan launched, Claude Code was not part of it and “agents that run for hours weren’t a thing.” Then Claude Code got bundled in, Opus 4 dropped, and heavy users started leaving long-running agentic sessions on overnight. “Usage has changed a lot and our current plans weren’t built for this.” The pricing page was rolled back later that day.
Key facts
- The test affected approximately 2% of new Pro signups, not existing subscribers
- Anthropic publicly framed it as an experiment, not a permanent change
- The trigger appears to be heavy agentic-workflow usage that wasn’t priced into the original Pro tier
- Claude Code remains included in Pro and Max as of this writing
Why this matters for small businesses
Most small businesses do not think of themselves as “AI coding tool customers.” But the line has gotten blurry. A solo bookkeeper using Claude Code to clean up CSV exports. A two-person agency stitching together a Shopify theme with Claude in their editor. A West Virginia general contractor using a developer cousin to build a custom job-tracking dashboard for $40/month in tools instead of $400/month in SaaS.
When the cheap tier of an AI coding tool moves, those projects feel it first.
The big picture
Premium AI tools are running a familiar playbook: launch with everything bundled to win the market, then unbundle the heavy-usage features once you know what they actually cost to serve. We saw it with cloud storage. We saw it with API rate limits. We’re seeing it now with agent runtime.
Anthropic’s own framing matters: the company didn’t say Claude Code is too expensive in general. It said the original Pro plan wasn’t built for users running agents for hours at a stretch. That’s a statement about heavy users at the cheapest tier, not about the product overall. But the heavy users at the cheapest tier are often exactly the small businesses and solo developers who got the most leverage from the original $20 deal.
What this signals about AI pricing in 2026
The Anthropic test fits a broader pattern. Reporting on AI pricing shifts has tracked how cloud providers, model labs, and tool vendors are all reshuffling tiers as compute costs become clearer. Some moves go up — premium features get pushed to higher plans. Some go down — the Gemini 3.1 Flash-Lite cuts brought input tokens to $0.25 per million.
Both directions hit the same underlying truth: AI vendor economics are still being figured out, and your monthly tool budget is the variable they will adjust first. We covered some of this in our piece on the AI inference cost crisis for small businesses, and the Claude Code test is a clean example of what that looks like in practice.
Our take
The rollback matters less than the signal. Anthropic doesn’t run pricing experiments on its public pricing page by accident, and the fact that they pulled back doesn’t mean they won’t revisit the same change with a different shape — a usage cap, a separate Code-only tier, an enterprise-tier nudge for heavy users.
The bottom line: If your business workflow depends on a single AI tool at a single price point, you have a single point of failure. That was true in April. It’s still true today.
The most underreported angle in the original coverage is what “heavy users” actually look like. It’s not just power-user developers running 10 simultaneous sessions. It’s increasingly small businesses using AI coding tools the way they use accounting software — quietly, every day, for unglamorous work. Those users are the ones most exposed to a pricing change because they don’t have the slack to absorb a 5x bill or rebuild their workflow on a different tool.
Questions that remain
- Will Anthropic introduce a “Code-only” tier between Pro ($20) and Max ($100+)?
- Do other model providers — OpenAI, Google, DeepSeek — follow with their own coding-tool repricing?
- How long until the enterprise plans pull more of the agent runtime budget into per-seat pricing?
What you should do
If your small business uses Claude Code, GitHub Copilot, Cursor, or any AI coding tool meaningfully, do three things this quarter:
- Audit your AI tool budget. Write down every paid AI tool, its tier, what your team actually uses it for, and how much of your workflow depends on it. If a single $20 plan supports more than a quarter of your operation, you have concentration risk.
- Test a backup. Spend an hour this month using a second tool — a different model, a different IDE plugin, a different price tier — for the same task. Knowing you can switch in a day is cheap insurance.
- Document your prompts and workflows in plain text. The most expensive part of switching tools isn’t the new subscription. It’s reconstructing the prompts, system messages, and project context you built up. Keep them in a notes file, not locked inside one vendor’s product.
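The audit step above is simple enough to sketch in a few lines. This is a minimal illustration of the "more than a quarter of your operation on one plan" rule of thumb — the tool names, costs, and workflow shares below are made-up placeholders, not recommendations or real prices:

```python
# Illustrative AI tool audit: list each paid tool, its monthly cost,
# and a rough estimate of how much of your workflow depends on it,
# then flag anything that crosses the concentration-risk threshold.
# All tools and numbers here are hypothetical examples.

tools = [
    {"name": "Primary coding assistant", "monthly_cost": 20, "workflow_share": 0.40},
    {"name": "IDE autocomplete plugin",  "monthly_cost": 10, "workflow_share": 0.15},
    {"name": "Pay-as-you-go API usage",  "monthly_cost": 35, "workflow_share": 0.10},
]

CONCENTRATION_THRESHOLD = 0.25  # more than a quarter of your operation

def audit(tool_list):
    total = sum(t["monthly_cost"] for t in tool_list)
    print(f"Total monthly AI spend: ${total}")
    for t in tool_list:
        flag = " <-- concentration risk" if t["workflow_share"] > CONCENTRATION_THRESHOLD else ""
        print(f"  {t['name']}: ${t['monthly_cost']}/mo, "
              f"{t['workflow_share']:.0%} of workflow{flag}")

audit(tools)
```

Any tool that gets flagged is the one where you should test a backup first.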
Watch for
- Anthropic announcing a new pricing tier or plan structure in the next 90 days
- OpenAI or Google quietly testing similar feature-level unbundling
- Your monthly AI bills jumping more than 20% without a new feature to show for it
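The 20% trigger in the last item is easy to check mechanically once you track your monthly totals. A minimal sketch, with made-up bill amounts:

```python
# Flag any month-over-month jump in total AI spend above 20%.
# The bill amounts here are illustrative placeholders.

ALERT_THRESHOLD = 0.20  # 20% month-over-month increase

def check_spend(previous_month: float, current_month: float) -> bool:
    """Return True if spend grew more than the alert threshold."""
    change = (current_month - previous_month) / previous_month
    if change > ALERT_THRESHOLD:
        print(f"AI spend up {change:.0%} month over month -- check what changed")
        return True
    return False

check_spend(62.0, 80.0)  # ~29% jump, triggers the alert
```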
Resources
- The Register: Anthropic tests reaction to yanking Claude Code from Pro
- Simon Willison: Is Claude Code going to cost $100/month?
- Our piece on AI coding productivity for small businesses
- Custom AI development services for businesses building durable workflows that don’t depend on any single tool’s pricing
The takeaway
Anthropic’s Claude Code test was a small move, well-explained and quickly rolled back. It’s also a preview. AI pricing in 2026 will keep shifting as vendors learn what users actually do with their tools, and small businesses on the cheapest tiers will see those shifts first. The fix isn’t paranoia — it’s a one-page tool audit and a working backup. The cheaper the AI tool, the more important it is to have one.
Building AI workflows that survive a pricing change? Get in touch — we help small businesses pick tools they won’t have to rip out in six months.