Anthropic Hits 40% of Enterprise AI Spend: What SMBs Gain
The enterprise AI market just flipped, and your AI bill is about to feel it
Two years ago, OpenAI controlled half of enterprise AI spending and everyone else fought for the scraps. That era is over. According to Menlo Ventures’ 2025 State of Generative AI in the Enterprise report, Anthropic now captures 40% of enterprise LLM API spend, up from 12% in 2023. OpenAI dropped to 27%. Google tripled its share from 7% to 21%.
For a chiropractor in Charleston or a bakery in Boone, those percentages read like Bloomberg trivia. They aren’t. Enterprise spending drives the compute, pricing, and model quality that eventually shows up in the $30-per-month AI tool you use to handle after-hours calls or draft social posts. When the enterprise market rearranges itself this fast, small business software follows within months.
What Menlo Ventures actually measured
The Menlo report, published December 9, 2025, combined a survey of 495 U.S. enterprise AI decision-makers with bottom-up market modeling. The headline numbers:
- Anthropic: 40% enterprise LLM API share (up from 24% last year, 12% in 2023)
- OpenAI: 27% (down from 50% in 2023)
- Google: 21% (up from 7% in 2023)
- Combined top three: ~88% of all enterprise LLM usage
- Anthropic’s coding share: roughly 54%, more than double OpenAI’s 21% in that category
Total enterprise AI investment hit $37 billion in 2025, more than triple the $11.5 billion spent in 2024. Applications — the tools built on top of foundation models — captured $19 billion of that. Fifty-plus AI products now generate over $100 million in annual recurring revenue. Ten have crossed $1 billion.
The shift tracked with real shipping velocity. Anthropic has led the coding benchmarks for roughly 18 months through consistent Claude model releases, while OpenAI's lead narrowed on general reasoning tasks and disappeared on code.
What this means for small business AI tools
Competition is finally real, and you benefit
Three labs with roughly balanced frontier capabilities — Anthropic, OpenAI, and Google — create genuine price pressure. The old pattern was one dominant provider setting the market; the new pattern is three companies undercutting each other every few months to hold or gain share.
That shows up fastest in the “middle tier” models most small business tools actually run on — Claude Haiku, GPT-5 mini, Gemini Flash. Those prices have fallen roughly 60-70% per year since 2024, and Menlo’s data suggests the trend continues as long as the top three keep scrapping. A workload that cost $500/month two years ago — real-time call transcription, multi-channel chat, automated review responses — now runs $20-50. We’re seeing that directly in our AI Employee agents, where the underlying model cost is a small fraction of what it would have been on 2024 pricing.
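As a quick sanity check on that arithmetic (the decline rates and starting cost here are the rough figures quoted above, not vendor price sheets), compounding a 60-70% annual price drop over two years looks like this:

```python
# Back-of-envelope check on middle-tier price declines. The $500/month
# starting cost and 60-70% annual decline are the article's rough figures,
# used here only to show how the compounding works.
def projected_cost(start_cost, annual_decline, years):
    """Compound an annual price decline over a number of years."""
    return start_cost * (1 - annual_decline) ** years

for decline in (0.60, 0.70):
    cost = projected_cost(500, decline, years=2)
    print(f"{decline:.0%}/yr decline: ${cost:,.0f}/month after 2 years")
    # 60%/yr -> $80/month; 70%/yr -> $45/month
```

The steeper end of that range lands inside the quoted $20-50 band; the shallower end suggests the workloads in question also moved to cheaper model tiers along the way, not just cheaper per-token prices.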
Coding dominance trickles down to every app you use
Anthropic’s 54% share in coding applications isn’t just a developer-tool story. Half of all developers now use AI daily, and the small business SaaS tools you use — from Shopify plugins to booking platforms to accounting software — ship faster and more reliably because their engineering teams are more productive.
This is the quiet beneficiary of AI’s enterprise boom. When the tool you rely on for customer intake or scheduling gets better every quarter instead of every year, that’s not magic. That’s the same coding AI enterprises are paying billions for, trickling down to the vendors you actually buy from.
The vendor landscape stabilized
76% of enterprise AI use cases are purchased rather than built in-house, up sharply from the roughly 50/50 split in 2024. That tells you the app layer matured — enterprises trust outside vendors enough to hand over real workloads. For small businesses, the same trust is justified. The AI tool you sign up for in 2026 is far less likely to vanish in a funding cliff than its 2024 equivalent, because its upstream providers are three public-or-about-to-be-public companies with durable revenue.
Our take
The shift from one dominant lab to three balanced ones is the best thing that’s happened to small business AI since ChatGPT shipped. A monoculture in AI infrastructure would have meant one company setting prices, shipping on its own timeline, and making decisions that prioritized its largest enterprise accounts. Three competitive labs means pricing discipline, shipping urgency, and downstream vendors who can route between providers instead of getting locked in.
The bottom line: Anthropic winning 40% of enterprise AI spend means small business tools get better, cheaper, and more reliable — because the infrastructure underneath them is now a real market, not a cartel.
The underreported angle in most of the coverage is that this matters most at the middle tier, not the frontier. Claude Opus 4.7 and GPT-5.4 dominate headlines, but most small business workloads run on models one to two tiers down. Those models drop in price every quarter precisely because the frontier fight keeps compute costs falling and competitive pressure high. Your tool doesn’t need the $15-per-million-token frontier model to handle a voicemail summary — and thanks to this shift, it won’t pay that price.
One question the report doesn’t answer: how long can Anthropic hold 40% when Google has the deepest pockets and most integrated distribution through Workspace and Android? The next Menlo survey, due December 2026, will be the one to watch.
What you should do this quarter
- Audit your AI tool stack for model flexibility. Ask each vendor: “If your primary LLM provider raised prices 50%, could you switch?” A vendor that can route between Claude, GPT, and Gemini is better positioned than one locked to a single API. If they can’t answer, that’s your signal.
- Don’t overpay for frontier capability you don’t need. The Claude Opus IPO coverage laid this out in detail: budget around middle-tier models for 90% of your work. Customer intake, review response, scheduling, lead qualification — these don’t need the flagship.
- Watch for bundles, not just subscriptions. As margins improve for the top three labs, vendors are starting to bundle — Claude + voice + image generation at one price — instead of charging à la carte. The bundled tiers usually offer better value once your usage grows past a basic level.
- Reassess tools you wrote off in 2024. The AI category churned enough that a tool that didn’t work 18 months ago may now be best-in-class. The underlying model quality improvement at the middle tier has been dramatic.
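For the technically inclined, the “model flexibility” question in the first step above can be sketched as a simple fallback router. Everything here is a hypothetical illustration — the provider call functions are stand-ins, not real SDK calls:

```python
# Sketch of provider fallback routing: try providers in priority order and
# fall back when one is unavailable. The call functions below are
# hypothetical stand-ins, not real Anthropic/Google SDK calls.
from typing import Callable

class ProviderUnavailable(Exception):
    """Raised by a provider call when it cannot serve the request."""

def route(prompt: str, providers: list[tuple[str, Callable[[str], str]]]) -> str:
    """Try each (name, call_fn) in order; return the first success."""
    errors = []
    for name, call_fn in providers:
        try:
            return call_fn(prompt)
        except ProviderUnavailable as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("All providers failed: " + "; ".join(errors))

# Hypothetical stand-ins for illustration only.
def call_claude(prompt: str) -> str:
    raise ProviderUnavailable("rate limited")

def call_gemini(prompt: str) -> str:
    return f"gemini says: {prompt}"

print(route("Summarize this voicemail",
            [("claude", call_claude), ("gemini", call_gemini)]))
# prints "gemini says: Summarize this voicemail"
```

A vendor built this way survives a 50% price hike from any single lab; a vendor hard-wired to one API does not. That is the whole audit question in about twenty lines.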
For small businesses sorting out which AI workloads to automate and which tools to pick, our consulting engagements are built exactly for that question — vendor-neutral, model-agnostic, and grounded in what’s actually shipping.
Where this goes next
The enterprise AI market spent 2025 picking its winners. The next 12 months will be about those winners competing to keep their share — which means more model releases, more price cuts at the middle tier, and more bundled offerings aimed at the small and mid-market segment. Small businesses that were priced out of serious AI automation in 2023 are now firmly in the addressable market. The only question left is which workflows to automate first.
Need help deciding which AI tools fit your budget and your business? Get in touch — we help Appalachian small businesses navigate AI market shifts without overpaying or getting locked in.