AI and Data Privacy: What US State Privacy Laws Require When You Use Artificial Intelligence
Why AI and Data Privacy Are Colliding in 2026
If your business uses artificial intelligence — whether for customer recommendations, fraud detection, hiring tools, chatbots, or marketing analytics — you are almost certainly subject to provisions in US state privacy laws that specifically regulate AI and automated decision-making. As of March 2026, at least 15 comprehensive state privacy laws contain provisions addressing profiling, automated decision-making, or both, and enforcement is accelerating.
This guide explains exactly how US state privacy laws apply to AI, what compliance obligations they create, and what practical steps you should take today to reduce your risk. Whether you build AI tools or simply use them, these rules affect you.
How State Privacy Laws Define AI-Related Processing
State privacy laws don’t typically use the term “AI” directly. Instead, they regulate specific activities that AI systems perform, using two key legal concepts:
Profiling
Most state privacy laws define profiling as any form of automated processing of personal data to evaluate, analyze, or predict aspects of a person’s behavior, preferences, economic situation, health, reliability, or location. This definition captures nearly every machine learning model that processes personal data — from recommendation engines to credit scoring algorithms.
States with explicit profiling provisions include California, Colorado, Connecticut, Delaware, Indiana, Iowa, Kentucky, Maryland, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, Oregon, Rhode Island, Tennessee, Texas, Utah, and Virginia.
Automated Decision-Making
A narrower but more consequential concept, automated decision-making refers to decisions made by technology without meaningful human involvement that produce legal or similarly significant effects on a consumer. Examples include loan approvals, insurance pricing, employment screening, and housing decisions.
California’s CPRA regulations (currently being finalized by the CPPA) would create the most detailed automated decision-making rules in the country, including requirements for pre-use notices, access to the logic involved, and opt-out rights.
The Five Core AI Obligations Under State Privacy Laws
1. Data Protection Assessments for AI Processing
At least 15 state laws require businesses to conduct data protection assessments (also called privacy impact assessments) before engaging in processing that presents a heightened risk of harm. Profiling and automated decision-making are explicitly listed as triggers in most of these laws.
| State | DPA Required for Profiling? | DPA Required for Automated Decisions? | Notable Detail |
|---|---|---|---|
| California | Yes (proposed) | Yes (proposed) | CPPA rulemaking includes detailed ADMT regulations |
| Colorado | Yes | Yes | Must weigh benefits against potential risks to consumers |
| Connecticut | Yes | Yes | SB 4 amendments may expand requirements in 2026 |
| Maryland | Yes | Yes | MODPA enforcement began April 1, 2026; includes sensitive data focus |
| Virginia | Yes | Yes | Must make assessments available to AG upon request |
| Texas | Yes | Yes | AG can request assessments; applies to all but SBA-defined small businesses |
| Oregon | Yes | Yes | Covers nonprofit organizations too |
| Delaware | Yes | Yes | Effective January 1, 2025 |
| Montana | Yes | Yes | 2025 amendments lowered its applicability threshold to 25,000 consumers, among the lowest in the country |
| Minnesota | Yes | Yes | Effective July 31, 2025; includes profiling in employment decisions |
If you deploy any AI model that processes personal data, you very likely need a documented assessment before launch. Use our Privacy Law Calculator to determine which states’ laws apply to your business.
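Many teams track these assessments in machine-readable form so that a launch gate can verify one exists before a model ships. Below is a minimal sketch in TypeScript; the record shape, field names, and gate logic are our own illustration, not anything a statute prescribes.

```typescript
// Hypothetical, minimal record for tracking a data protection assessment.
// Field names are illustrative, not drawn from any statute.
interface DataProtectionAssessment {
  systemName: string;                  // e.g. "churn-prediction-model"
  processingPurposes: string[];        // disclosed purposes, e.g. ["fraud detection"]
  personalDataCategories: string[];
  involvesProfiling: boolean;
  producesSignificantEffects: boolean; // legal or similarly significant effects
  risksIdentified: string[];
  safeguards: string[];
  statesInScope: string[];             // e.g. ["CO", "CT", "VA"]
  completedOn: Date | null;            // null = not yet assessed
}

// A simple launch gate: block deployment of higher-risk AI systems
// until a documented assessment exists.
function canLaunch(a: DataProtectionAssessment): boolean {
  const higherRisk = a.involvesProfiling || a.producesSignificantEffects;
  return !higherRisk || a.completedOn !== null;
}
```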
2. Consumer Opt-Out Rights for Profiling
Nearly every comprehensive state privacy law grants consumers the right to opt out of profiling that furthers decisions producing legal or similarly significant effects. Several states go further:
- Colorado, Connecticut, and Virginia were the first to include explicit opt-out rights for profiling.
- Maryland’s MODPA (enforcement active since April 1, 2026) requires opt-out for profiling used in decisions about “access to or the cost of financial lending services, housing, insurance, education enrollment, and criminal justice.”
- California’s proposed ADMT regulations would require a pre-use notice before any significant automated decision and a separate opt-out mechanism.
- Minnesota requires opt-out for profiling in employment, lending, insurance, and housing contexts.
If your AI system influences consumer-facing decisions, you must provide a working opt-out mechanism. See our Opt-Out Link Generator for implementation guidance.
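On the web, one widely recognized opt-out channel is the Global Privacy Control (GPC) signal, which participating browsers send as a `Sec-GPC: 1` request header and expose to scripts as `navigator.globalPrivacyControl`. California and Colorado both treat universal opt-out signals like GPC as binding, so checking for the header before any profiling runs is a sensible default. A minimal server-side sketch, with an illustrative request shape and stubbed serving logic:

```typescript
// Minimal sketch: honor a Global Privacy Control signal before any
// profiling code runs. The request shape and serving stubs are illustrative.
interface IncomingRequest {
  headers: Record<string, string | undefined>; // header names lowercased
  userId: string;
}

// Browsers with GPC enabled send the "Sec-GPC: 1" request header.
function hasGpcSignal(req: IncomingRequest): boolean {
  return req.headers["sec-gpc"] === "1";
}

const optOutStore = new Set<string>(); // persisted opt-outs, keyed by user ID

function handleRequest(req: IncomingRequest): string {
  if (hasGpcSignal(req) || optOutStore.has(req.userId)) {
    return serveWithoutProfiling(req); // skip profiling for opted-out users
  }
  return serveWithProfiling(req);
}

function serveWithoutProfiling(req: IncomingRequest): string {
  return `generic content for ${req.userId}`;
}

function serveWithProfiling(req: IncomingRequest): string {
  return `personalized content for ${req.userId}`;
}
```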
3. Transparency and Notice Requirements
When you use AI to process personal data, state privacy laws require you to disclose this in your privacy policy. At minimum, you must describe:
- The categories of personal data processed by your AI systems
- The purposes of that processing (profiling, targeting, personalization, automated decisions)
- Whether personal data is sold or shared with third parties for AI training
- How consumers can exercise their opt-out rights
California’s proposed ADMT rules would add requirements to explain “the logic involved” in automated decisions and “the key parameters that are most influential in the decision.”
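If rules like these are finalized, one practical way to produce a "key parameters" disclosure is to translate a model's feature importances into plain-language notice text. A hedged sketch; the record shape, wording, and top-3 cutoff are illustrative choices of ours, not requirements from the draft regulations:

```typescript
// Illustrative only: turn model feature importances into the kind of
// plain-language "key parameters" summary a pre-use notice might contain.
interface FeatureImportance {
  feature: string;     // internal name, e.g. "payment_history_months"
  description: string; // consumer-friendly label, e.g. "length of payment history"
  weight: number;      // relative importance, 0..1
}

function keyParametersNotice(importances: FeatureImportance[], topN = 3): string {
  const top = [...importances]
    .sort((a, b) => b.weight - a.weight)
    .slice(0, topN)
    .map((f) => f.description);
  return `The most influential factors in this automated decision are: ${top.join(", ")}.`;
}

// keyParametersNotice([
//   { feature: "income", description: "reported income", weight: 0.40 },
//   { feature: "dti", description: "debt-to-income ratio", weight: 0.35 },
//   { feature: "zip", description: "region", weight: 0.10 },
// ])
// -> "The most influential factors in this automated decision are:
//     reported income, debt-to-income ratio, region."
```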
4. Sensitive Data and AI Training
If your AI models process or are trained on sensitive personal data (racial or ethnic origin, religious beliefs, health information, precise geolocation, biometric data, children’s data, sexual orientation), most state laws require you to obtain affirmative consent before processing. California instead grants consumers a right to limit the use of sensitive personal information, and Utah and Iowa require notice with an opportunity to opt out. In practice, this means:
- In most states, you cannot use sensitive data to train AI models without explicit consumer opt-in
- You cannot use AI to infer sensitive data categories without consent (some state AGs have signaled this interpretation)
- Maryland’s MODPA and Minnesota’s law go further: they restrict even the collection of certain sensitive data beyond what is reasonably necessary
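In engineering terms, this points to a consent gate inside the training pipeline itself, not just a checkbox at signup. A minimal sketch, with hypothetical category names and record shapes; which categories count as sensitive, and what consent suffices, varies by state:

```typescript
// Sketch of a consent gate applied before records enter a training set.
// Category names and the record shape are hypothetical.
const SENSITIVE_CATEGORIES = new Set([
  "health", "biometric", "precise_geolocation",
  "race_ethnicity", "religion", "sexual_orientation", "minor",
]);

interface TrainingRecord {
  userId: string;
  dataCategories: string[];         // categories present in this record
  consentedCategories: Set<string>; // categories the user opted in to
}

// Every sensitive category in a record must have explicit opt-in consent.
function eligibleForTraining(record: TrainingRecord): boolean {
  return record.dataCategories.every(
    (c) => !SENSITIVE_CATEGORIES.has(c) || record.consentedCategories.has(c)
  );
}

function buildTrainingSet(records: TrainingRecord[]): TrainingRecord[] {
  return records.filter(eligibleForTraining);
}
```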
5. Data Minimization Limits on AI Training
Multiple state laws now include data minimization requirements — businesses must limit personal data collection and use to what is “reasonably necessary” for the disclosed purpose. For AI, this creates a direct constraint: you cannot collect more data than needed to operate your stated service just because it would improve your model.
The PlayOn Sports enforcement action ($1.1M fine, March 2026) illustrates this — using tracking technology to collect student data for targeted advertising exceeded what was “reasonably necessary” for the ticketing service. Companies training AI models on customer data should take note.
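A common engineering pattern for enforcing minimization is a per-purpose field allowlist: each pipeline may only read the fields declared for its disclosed purpose. A sketch with illustrative purposes and field names:

```typescript
// Sketch: per-purpose allowlists so a pipeline only sees fields
// declared for its disclosed purpose. Purposes and fields are illustrative.
const PURPOSE_ALLOWLIST: Record<string, Set<string>> = {
  ticketing: new Set(["name", "email", "orderId"]),
  fraud_detection: new Set(["email", "ipAddress", "orderId"]),
};

function minimize(
  record: Record<string, unknown>,
  purpose: string
): Record<string, unknown> {
  const allowed = PURPOSE_ALLOWLIST[purpose];
  if (!allowed) throw new Error(`No disclosed purpose: ${purpose}`);
  return Object.fromEntries(
    Object.entries(record).filter(([field]) => allowed.has(field))
  );
}

// A training job scoped to "fraud_detection" never sees fields (such as
// browsing history) that were collected only for another purpose.
```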
Emerging AI-Specific Legislation in 2026
While existing state privacy laws already regulate AI through profiling and automated decision-making provisions, several states are advancing AI-specific bills in 2026:
- Washington HB 2225 — Passed the legislature in March 2026. Regulates AI companions, making Washington the second state to pass AI-companion-specific legislation.
- New York “One Fair Price Package” — Two bills would prohibit personalized algorithmic pricing based on consumer data and ban electronic shelf labels in large retailers. AG Letitia James is championing the legislation.
- Connecticut SB 4 — Would add algorithmic pricing disclosure requirements to the state’s existing privacy law, among other amendments.
- Colorado AI Act (SB 205) — Enacted in 2024, takes effect February 1, 2026. Requires “deployers” of “high-risk AI systems” to implement a risk management policy and complete impact assessments.
- California ADMT rulemaking — The CPPA continues developing detailed automated decision-making technology regulations, expected to be the most comprehensive AI-privacy framework in the country.
These bills signal a clear trend: states are moving from regulating AI indirectly (through privacy law profiling provisions) to regulating it directly. Businesses should prepare for a patchwork of AI-specific requirements layered on top of existing privacy obligations.
Real-World Enforcement: AI Privacy Violations
Enforcement agencies are already using existing privacy laws to target AI-related practices. Recent notable actions include:
- Disney $2.75M settlement (February 2026) — California AG secured the largest CCPA settlement in history for failure to properly honor opt-out requests across Disney’s streaming platforms. While not strictly an “AI case,” the Disney enforcement underscores that algorithmic ad targeting combined with poor opt-out implementation draws enforcement attention.
- PlayOn Sports $1.1M fine (March 2026) — CPPA fined PlayOn for tracking students via its ticketing platform for targeted advertising without adequate opt-out mechanisms. The case shows regulators view automated ad targeting of vulnerable populations as a priority.
- Tractor Supply $1.35M settlement (2025) — The California AG’s settlement addressed failures to honor opt-out preference signals such as Global Privacy Control, which browsers transmit automatically when interacting with algorithmic ad systems.
The pattern is clear: businesses that use automated systems to process personal data for advertising, profiling, or decision-making face heightened enforcement risk, especially when those systems touch children’s data or fail to honor opt-out requests. See our full enforcement penalties guide for more cases.
Compliance Checklist: AI and State Privacy Laws
Use this practical checklist to assess your AI-related privacy compliance:
- Inventory your AI systems — Identify every tool, model, or automated process that uses personal data. Include third-party AI services (chatbots, recommendation engines, analytics platforms). A machine-readable starting point is sketched after this checklist.
- Map data flows — For each AI system, document what personal data goes in, what decisions or outputs come out, and which third parties receive data.
- Determine applicable state laws — Use the Privacy Law Calculator to identify which state laws apply to your business. Check whether profiling and automated decision-making provisions are triggered.
- Conduct data protection assessments — For any AI system that performs profiling or makes decisions with legal or significant effects, complete a privacy impact assessment documenting risks, benefits, and safeguards.
- Implement opt-out mechanisms — Provide consumers with a clear, functional way to opt out of profiling and automated decision-making. Ensure it works across all platforms and devices (learn from Disney’s $2.75M mistake).
- Update your privacy policy — Disclose your use of AI/automated processing, the types of decisions made, the data involved, and consumer rights. Use our Privacy Policy Generator for state-compliant language.
- Get consent for sensitive data — If your AI processes sensitive personal data, obtain affirmative opt-in consent before processing.
- Apply data minimization — Limit AI training data to what is reasonably necessary for the disclosed purpose. Do not repurpose data collected for one service to train models for another without notice and consent.
- Honor DSAR deadlines — Consumers have the right to know what data your AI processes about them. Ensure you can respond to data subject access requests within state-mandated timeframes.
- Monitor the regulatory landscape — AI regulation is evolving rapidly. Subscribe to updates and revisit your compliance posture quarterly.
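For the first two checklist items, a lightweight machine-readable inventory makes everything downstream easier to audit. A sketch; the entry shape and gap checks are our own illustration, not a regulatory format:

```typescript
// Hypothetical inventory entry for one AI system: what goes in, what comes
// out, and who else touches the data. Field names are illustrative.
interface AiSystemInventoryEntry {
  name: string;                    // e.g. "support-chatbot"
  vendor: string | null;           // null if built in-house
  personalDataIn: string[];        // e.g. ["email", "chat transcripts"]
  outputs: string[];               // e.g. ["routing decision", "sentiment score"]
  producesSignificantEffects: boolean;
  thirdPartyRecipients: string[];  // processors or vendors receiving data
  dpaCompletedOn: Date | null;
  optOutSupported: boolean;
}

// Surface the gaps the rest of the checklist needs to close.
function complianceGaps(entries: AiSystemInventoryEntry[]): string[] {
  return entries.flatMap((e) => {
    const gaps: string[] = [];
    if (e.producesSignificantEffects && e.dpaCompletedOn === null) {
      gaps.push(`${e.name}: missing data protection assessment`);
    }
    if (e.producesSignificantEffects && !e.optOutSupported) {
      gaps.push(`${e.name}: no opt-out mechanism`);
    }
    return gaps;
  });
}
```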
Frequently Asked Questions
Do state privacy laws apply to AI tools we use but didn’t build?
Yes. If you deploy a third-party AI tool that processes your customers’ personal data, you are the “controller” under state privacy law and bear responsibility for compliance. You must ensure your vendor agreements include adequate data protection terms and that the tool respects opt-out signals. See our data processing agreements guide.
Is using AI for internal analytics covered by these laws?
Generally, internal analytics that do not produce decisions with legal or similarly significant effects on consumers are lower risk. However, if your analytics involve profiling (analyzing personal data to evaluate or predict behavior), data protection assessment requirements may still apply in states like Colorado, Connecticut, and Maryland.
How does the Colorado AI Act interact with state privacy laws?
The Colorado AI Act (effective February 1, 2026) adds requirements on top of the Colorado Privacy Act. Deployers of “high-risk AI systems” (those making consequential decisions in employment, education, financial services, healthcare, housing, insurance, or legal services) must implement risk management policies, complete impact assessments, disclose AI use to consumers, and allow consumers to appeal adverse decisions. This creates a dual compliance obligation for businesses operating in Colorado.
Can I use customer data to train AI models?
It depends on what you disclosed in your privacy policy and the nature of the data. Under data minimization principles, you can only use personal data for purposes compatible with what you disclosed at collection. Using customer data to train models for a different purpose likely requires additional notice and may require consent. Sensitive data categories require explicit consent in most states.
What happens if my AI makes a wrong decision about a consumer?
Several state laws grant consumers rights to access, correct, and appeal automated decisions. If your AI denies a service, changes pricing, or otherwise produces a significant adverse outcome, the affected consumer may have the right to understand why, correct inaccurate data, and request human review. California’s proposed ADMT regulations would make these rights particularly robust.
For a complete analysis of which state laws apply to your business and what they require, start with the Privacy Law Calculator and review your obligations with the state comparison tool.
Last updated: March 29, 2026.
Disclaimer: PrivacyLawMap provides general information about US state privacy laws for educational purposes only. This is NOT legal advice. Privacy laws are complex and frequently amended. Consult with a qualified privacy attorney for advice specific to your business. PrivacyLawMap makes no warranties about the accuracy or completeness of this information.