Meta Is Open-Sourcing New AI Models — With a Catch
Meta just changed the rules on open-source AI
For three years, Meta was the loudest voice in the room arguing that AI’s biggest models should be free. Llama 2, Llama 3, and Llama 4 all shipped with downloadable weights and a license generous enough that thousands of small companies built products on top of them. That era is winding down.
According to an Axios scoop on April 6, Meta is preparing two flagship models — codenamed Mango (image and video) and Avocado (text and coding) — for release in the first half of 2026. The catch: Meta plans to open-source only smaller versions of these models. The largest, most capable variants are expected to stay proprietary. Two days later, on April 8, Meta debuted Muse Spark — the model originally codenamed Avocado — as a closed offering, with the company saying it “hopes” to open-source future versions.
That shift matters more for small businesses than the codenames suggest.
What Meta is releasing and what it is keeping
The new models are coming out of Meta Superintelligence Labs, the AI organization Alexandr Wang took over after Meta paid roughly $14 billion to acquire his stake in Scale AI. Wang’s first move has been to reorganize Meta’s model strategy around two flagship lines, with a hybrid licensing approach.
Here is what is publicly known so far:
- Mango is a multimodal image and video generation model, aimed at competing with Google’s Veo and OpenAI’s Sora
- Avocado (now branded Muse Spark) is a text and coding model, designed to leapfrog the reasoning quality of the current Llama line
- Smaller, derivative versions of both are expected to ship under an open-source license — likely the Llama community license, not a true Apache or MIT license
- The largest versions will remain Meta-only, accessible through Meta’s own products and (possibly) a paid API
- Muse Spark itself launched proprietary, with open-sourcing framed as a future possibility, not a commitment
Wang has been blunt about the reasoning. The largest models are expensive to train and expensive to serve, and Meta wants to recoup that investment before competitors get the weights for free. The smaller open-source releases keep developer goodwill alive without giving away the crown jewels.
Why the hybrid open-source model matters
For most of the last two years, “open-source AI” had a specific meaning in small business circles: you or your vendor could download a Llama model, run it on rented hardware, and avoid paying per-token fees to OpenAI or Anthropic. That arbitrage was real. Companies like Mistral kept it alive on the European side, and the Allen Institute’s OLMo line pushed the efficiency frontier even further.
Meta partially stepping back changes the math in two ways. First, the quality ceiling of free downloadable models drops. If Mango’s biggest variant stays proprietary, the best image generator you can self-host is no longer Meta’s best image generator. Second, the signal to the rest of the industry shifts. When the most-quoted advocate for open weights pulls back, smaller labs notice — and so do investors funding the next generation of open models.
There is a counter-trend worth keeping in view. Google released Gemma 4 under Apache 2.0 earlier this month, and Anthropic, OpenAI, and xAI all face mounting pressure to release research artifacts of their own. Open-weight AI is not dying — but the economics are pushing the field toward a Linux-style split: free software for most use cases, paid commercial extensions on top.
What this means for businesses using Llama models
If your business already runs a Llama-based assistant — through a vendor or self-hosted — nothing breaks today. Llama 4 is still supported, still downloadable, and still useful. But three things are worth watching over the next 12 months:
- The quality gap will widen. The smaller open variants of Mango and Avocado will be good. The full versions will be better. If your competitors are willing to pay Meta for the larger models, expect their AI-generated marketing assets and customer-facing copy to be a notch sharper than what you can produce on free weights.
- License drift is real. Each new Llama-family release has come with slightly more restrictive terms. The Mango/Avocado releases may continue that trend. Read the fine print before you build a product around them — a license that allows research use but blocks commercial deployment is no good for a contractor running a marketing automation flow.
- Vendor lock-in moves to the model layer. When the best open model and the best closed model came from the same company, you had options. With a hybrid approach, vendors who promised “Llama-powered” tooling may quietly switch to Meta’s hosted API to access the better variants — and pass the cost through to you.
This is the same dynamic small businesses already navigate with productivity software. The free version gets you 80% of the way; the paid version gets you the last 20%. The new wrinkle is that the AI version of “the last 20%” can mean the difference between a customer service bot that handles your overflow and one that does not.
Open source vs proprietary — the small business calculus
The right way to think about this is not “open source good, proprietary bad.” It is “what am I optimizing for?” There are three honest answers, and they lead to three different decisions.
If you optimize for cost, open-weight models still win for most small business workloads. Drafting emails, summarizing reviews, classifying support tickets — none of that requires a frontier model. A self-hosted Llama 4 or Gemma 4 deployment, or a vendor that runs one for you, will hit your needs at a fraction of per-token API pricing. That equation does not change with Mango and Avocado.
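The cost comparison above comes down to simple break-even arithmetic. A rough sketch — every price below is an illustrative assumption, not a real vendor quote, so plug in your own numbers:

```python
# Back-of-the-envelope break-even between a hosted per-token API and a
# self-hosted open-weight model. All prices are illustrative assumptions,
# not real vendor quotes.

API_PRICE_PER_1K_TOKENS = 0.01   # assumed hosted API price, USD per 1K tokens
GPU_RENTAL_PER_MONTH = 600.0     # assumed rented GPU server cost, USD per month

def monthly_api_cost(tokens_per_month: int) -> float:
    """Cost of sending this month's traffic through the hosted API."""
    return tokens_per_month / 1000 * API_PRICE_PER_1K_TOKENS

def break_even_tokens() -> int:
    """Monthly token volume at which self-hosting starts to win."""
    return int(GPU_RENTAL_PER_MONTH / API_PRICE_PER_1K_TOKENS * 1000)

if __name__ == "__main__":
    volume = 5_000_000  # e.g. a support-ticket classifier's monthly traffic
    print(f"API cost at {volume:,} tokens/mo: ${monthly_api_cost(volume):,.2f}")
    print(f"Self-hosting break-even: {break_even_tokens():,} tokens/mo")
```

Which side of the break-even you sit on depends entirely on sustained volume: at low traffic the per-token API is cheaper, and the fixed cost of rented hardware only pays off once the model is busy most of the month.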
If you optimize for quality, the proprietary tier of Mango, Muse Spark, GPT-5-class models, and Claude Opus is where the ceiling lives. For high-stakes work — contracts, ad creative, marketing video — paying per-token to a hosted API beats wrestling with a self-hosted model that is one generation behind. That equation gets worse for open source under Meta’s new approach, because the open versions are deliberately the smaller, less capable ones.
If you optimize for control, open weights are still the only honest path. If you need to run AI inside a HIPAA boundary, on a network without internet egress, or on hardware you own, hosted APIs are not an option. That is a small slice of small business AI work, but it is a real one — and where our model fine-tuning and AI infrastructure services usually come in.
The mistake to avoid is picking a vendor based on the “open source” label without checking which tier of model they actually run. A platform that markets itself as “built on open Llama” but quietly upgrades paying customers to Meta’s hosted variants is not really giving you the open-source benefits. It is giving you a marketing line.
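One practical way to run that check: many hosted and self-hosted inference platforms expose an OpenAI-compatible endpoint whose model listing names the models actually being served. The JSON payload and model IDs below are made up for the example — the point is the parsing and the heuristic, not the specific endpoint:

```python
import json

# Illustrative response in the OpenAI-compatible "/v1/models" listing shape
# that many inference servers expose. The model IDs here are invented
# for the example.
sample_response = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "llama-4-8b-instruct", "object": "model"},
    {"id": "meta-hosted-premium", "object": "model"}
  ]
}
""")

def served_models(listing: dict) -> list:
    """Extract the model IDs a vendor's endpoint says it serves."""
    return [entry["id"] for entry in listing.get("data", [])]

def looks_open_weight(model_id: str) -> bool:
    """Crude heuristic: does the ID name a known open-weight family?"""
    open_families = ("llama", "gemma", "mistral", "olmo")
    return any(family in model_id.lower() for family in open_families)

for model_id in served_models(sample_response):
    tier = "open-weight family" if looks_open_weight(model_id) else "unclear / proprietary"
    print(f"{model_id}: {tier}")
```

A vendor that will not expose a model listing or name the model in its responses is giving you the opacity this section warns about — that refusal is itself an answer.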
The bottom line
Meta’s hybrid open-source strategy means the best free AI model and the best AI model are no longer the same thing.
For Appalachian small businesses, that is not a crisis. The open tier of Mango and Avocado will still be more capable than what existed two years ago, and the cost curve on hosted APIs keeps dropping. But the strategic choice — free model for most work, paid API for the high-stakes stuff — is now one you have to make deliberately, instead of getting both for free from a single download.
The vendors and consultants worth working with will tell you which model is running behind the curtain, and why. The ones that do not are the ones to question.
Trying to figure out where open-source AI fits in your stack? Get in touch — we help small businesses choose between open weights, hosted APIs, and custom fine-tuned models based on what you are actually trying to do.