AI Chatbot Laws Hit 27 States — What to Know
78 chatbot bills. 27 states. One question: does your business comply?
If you use an AI chatbot on your website, answer customer questions with an automated assistant, or run any kind of AI-powered intake tool, state legislators have you on their radar. As of early March 2026, 78 AI chatbot safety bills have been introduced across 27 states — and they’re advancing fast.
The bills focus on disclosure, safety, and youth protection. Most would require businesses to tell customers when they’re talking to AI instead of a human. Some go further, imposing penalties up to $1,000 per violation and creating new obligations around mental health and minors.
If you run a small business with any AI-facing customer tool, here’s what you need to know.
What happened
The wave started building in late 2025 when California passed SB 243, the first companion chatbot law with youth-specific protections. It took effect January 1, 2026. Other states saw the template and started drafting their own versions.
By February, chatbot bills had crossed chambers in multiple states and advanced through committees in several more. As of March 2, bills are moving in Arizona, Iowa, Georgia, Illinois, New York, Oregon, and Washington — with new filings in New Jersey, Louisiana, and Connecticut.
Key facts
- 78 bills introduced across 27 states as of March 2026
- California’s SB 243 is already law — effective January 1, 2026
- Oregon’s SB 1546 passed the Senate 26-1 with a private right of action and $1,000 statutory damages
- March 11, 2026 is a critical federal deadline — the Secretary of Commerce must identify state AI laws the administration considers “burdensome”
- 36 state attorneys general are opposing federal preemption of state AI laws
What these bills actually require
Not all 78 bills say the same thing, but most share three core requirements, which DLA Piper’s analysis sorts into clear categories.
AI disclosure
The most common requirement: tell the customer they’re interacting with AI, not a human. This applies when a “reasonable person would believe the chatbot is human.” In practice, that means your chatbot needs a clear label or initial message identifying itself as automated.
Oregon’s SB 1546 is the strictest here. If your chatbot could be mistaken for a human and you don’t disclose, you’re exposed to a private right of action — meaning individual customers can sue, not just the state attorney general.
Youth protections
Several states are adding rules specifically for AI interactions with minors. California’s SB 243 and Oregon’s SB 1546 both require reminders at least every three hours that the chatbot is not human when minors are using it. Utah’s HB 438 restricts advertising by chatbot operators targeting children.
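The three-hour cadence is concrete enough to sketch in code. Below is a minimal illustration of how a chatbot might track when a "not human" reminder is due, assuming the statutory interval described above; the function and constant names are illustrative, not from any statute or real SDK.

```typescript
// Minimum reminder cadence described for minors in California SB 243
// and Oregon SB 1546: at least every three hours.
const REMINDER_INTERVAL_MS = 3 * 60 * 60 * 1000;

// Returns true once three hours have elapsed since the last
// "you are talking to an AI, not a human" reminder was shown.
function reminderDue(lastReminderAtMs: number, nowMs: number): boolean {
  return nowMs - lastReminderAtMs >= REMINDER_INTERVAL_MS;
}
```

In practice a session would store the timestamp of the last reminder and call a check like this before rendering each response to a user flagged as a minor.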
Safety protocols
Bills in Oregon, Virginia, and California require chatbot providers to have procedures for detecting suicidal ideation and self-harm and responding with crisis resources. Virginia’s SB 796 also requires incident reporting.
The state-by-state breakdown
| State | Bill | Status | Key requirement |
|---|---|---|---|
| California | SB 243 | Law (Jan 1, 2026) | Disclosure, youth reminders, $1,000 per violation |
| Oregon | SB 1546 | Passed Senate 26-1 | Disclosure, private right of action, $1,000 damages |
| Utah | HB 438 | Passed House 68-1 | Data privacy, minor protections, ad restrictions |
| Virginia | SB 796 | Passed Senate 39-1 | Applies to 500K+ monthly users, incident reporting |
| Washington | HB 2225 | Passed House 69-28 | Companion chatbot regulation |
| Iowa | SF 2417 | Passed Senate unanimously | Chatbot safety provisions |
| Arizona | HB 2311 | Passed House (amended) | Chatbot disclosure requirements |
| New York | S 7263 | Voted out of committee | Liability for chatbot impersonation of licensed professionals |
Bills have also been introduced in Idaho, Oklahoma, Hawaii, New Jersey, Louisiana, Connecticut, Colorado, and Georgia.
How this affects small businesses using AI chatbots
Here’s the practical question: does a small business with a website chatbot need to worry about this?
The answer depends on the state and the law. Virginia’s SB 796 only applies to operators with 500,000 or more monthly active users — most small businesses are well below that threshold. But Oregon’s SB 1546 and California’s SB 243 have no such floor. If you operate an AI chatbot in those states, the law applies to you.
If you use a third-party chatbot tool
Most small businesses don’t build their own chatbots. You use a tool like Hollr, Intercom, Drift, or a chatbot builder. The good news: well-designed chatbot platforms already include disclosure features. The question is whether those features are turned on and configured properly.
Check your chatbot settings for:
- An opening message that identifies the assistant as AI-powered
- A visible label on the chat widget (like “AI Assistant” or “Virtual Assistant”)
- Age-gating if your chatbot could interact with minors
- Crisis response protocols if your chatbot handles sensitive topics
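If you want to turn that checklist into a repeatable self-audit, here is one way to sketch it. The settings shape and field names below are hypothetical, mirroring the four bullets above; they are not the actual schema of Hollr, Intercom, Drift, or any other real platform.

```typescript
// Hypothetical settings object mirroring the four checklist items.
interface ChatbotSettings {
  openingMessage: string;          // should identify the assistant as AI
  widgetLabel: string;             // visible label, e.g. "AI Assistant"
  ageGateEnabled: boolean;         // relevant if minors may interact
  crisisResourcesEnabled: boolean; // surface crisis resources on self-harm signals
}

// Flags which checklist items a configuration appears to be missing.
function auditSettings(s: ChatbotSettings): string[] {
  const issues: string[] = [];
  // Loose heuristic: the opening message should name itself as AI/automated.
  if (!/\bAI\b|artificial intelligence|automated/i.test(s.openingMessage)) {
    issues.push("opening message does not identify the assistant as AI");
  }
  if (!s.widgetLabel.trim()) issues.push("no visible widget label");
  if (!s.ageGateEnabled) issues.push("age gating disabled");
  if (!s.crisisResourcesEnabled) issues.push("crisis responses disabled");
  return issues;
}
```

An empty result means every checklist item is covered; each string in the result is a setting to go fix in your platform's dashboard.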
If you built your own chatbot
If you used a chatbot creator or custom integration, you’re responsible for compliance. That means adding disclosure language, implementing safety protocols if applicable, and staying current as laws change in each state you operate in.
The federal wildcard
On March 11, the Secretary of Commerce is required to identify state AI laws the administration considers inconsistent with federal policy. The Trump administration has signaled support for preempting state AI regulations, while 36 state attorneys general are pushing back. Whether federal preemption actually happens will shape the compliance landscape for the rest of 2026.
What to do now
You don’t need a lawyer to start. Here are three steps any small business can take today.
1. Audit your chatbot for AI disclosure
Open your website. Start a conversation with your chatbot. Does it clearly state that it’s AI-powered? If not, add a disclosure message. Something straightforward works: “Hi, I’m an AI assistant for [Your Business]. How can I help?”
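Wiring that template into a widget's first message can be as simple as the sketch below; the function name and parameter are placeholders for whatever your own setup uses.

```typescript
// Builds an opening message that discloses the AI up front.
// The business name is a placeholder for your own.
function openingDisclosure(businessName: string): string {
  return `Hi, I'm an AI assistant for ${businessName}. How can I help?`;
}
```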
2. Check your platform’s compliance settings
If you use a managed chatbot service, check whether it has built-in disclosure and safety features. Most modern platforms do. Make sure they’re enabled — especially if you serve customers in California, Oregon, or Utah where laws are already active or advancing quickly.
3. Watch the March 11 federal deadline
The federal government’s position on state AI laws could simplify or complicate the picture. If you operate in multiple states, this is the single most important date on the calendar. Follow Troutman Pepper’s weekly AI law tracker to stay current.
What to watch next
- Oregon’s SB 1546 moving to the House — its private right of action makes it the most consequential bill for small businesses
- New York’s S 7263 — could create liability for chatbots that impersonate licensed professionals (lawyers, doctors, accountants)
- March 11 federal deadline — could trigger preemption challenges against state laws
This is the same pattern we saw with state AI pricing laws earlier this year — states moving fast while the federal government debates whether to step in. If you use any form of AI in customer-facing interactions, basic disclosure isn’t just good practice anymore. In a growing number of states, it’s the law.
Need help making your AI tools compliant? Get in touch — we build chatbot and intake solutions with disclosure and safety baked in from the start.