Compliance Guides · March 29, 2026 · 10 min read

Automated Decision-Making and Profiling Under US State Privacy Laws: What Businesses Must Know in 2026


As businesses accelerate their adoption of artificial intelligence and algorithmic systems, US state privacy laws are catching up. In 2026, a growing number of state privacy laws explicitly regulate automated decision-making and profiling — granting consumers the right to opt out, requiring businesses to conduct risk assessments, and mandating transparency about how algorithms affect people’s lives.

This guide explains what automated decision-making and profiling mean under state privacy laws, which states impose specific obligations, what businesses must do to comply, and how these requirements intersect with the broader AI governance conversation. Use our Privacy Law Calculator to determine which state laws apply to your business.

What Counts as “Profiling” and “Automated Decision-Making”?

State privacy laws define these terms with enough breadth to capture most AI and algorithmic systems used in business today.

Profiling is generally defined as any form of automated processing of personal data to evaluate, analyze, or predict aspects of a person’s behavior, preferences, economic situation, health, interests, reliability, location, or movements. If your system uses personal data to build a profile of a consumer — whether for ad targeting, credit scoring, pricing, or content personalization — it likely qualifies as profiling under state law.

Automated decision-making refers to decisions made by technological means without meaningful human involvement. This includes AI-driven hiring tools, algorithmic lending decisions, dynamic pricing engines, insurance underwriting models, and content recommendation systems that determine what consumers see or can access.

This distinction matters: most state laws tie the consumer opt-out to profiling in furtherance of decisions that produce legal or similarly significant effects, while a few (most notably California) go further and regulate automated decision-making technology itself.

Which States Regulate Automated Decision-Making?

As of March 2026, nearly every comprehensive state privacy law grants consumers some form of opt-out right related to profiling, though the scope and specificity vary significantly:

Tier 1: Explicit Automated Decision-Making Protections

These states go beyond basic profiling opt-outs to address automated decisions with significant effects:

  • California (CCPA/CPRA) — The CPPA has finalized regulations on Automated Decision-Making Technology (ADMT). Consumers have the right to opt out of ADMT used for decisions that produce legal or similarly significant effects. Businesses must provide pre-use notices and conduct risk assessments for ADMT processing. California’s framework is the most detailed in the US.
  • Colorado (CPA) — Requires controllers to conduct data protection assessments for profiling that presents a reasonably foreseeable risk of unfair or deceptive treatment, unlawful disparate impact, financial or physical injury, or intrusion on solitude or seclusion. Consumers can opt out of profiling in furtherance of decisions that produce legal or similarly significant effects.
  • Connecticut (CTDPA) — Mirrors Colorado: opt-out right for profiling that produces legal or similarly significant effects, plus data protection assessment requirement. The proposed SB 4 amendments would add algorithmic pricing transparency requirements.

Tier 2: Standard Profiling Opt-Out Rights

The majority of state privacy laws grant consumers the right to opt out of profiling as one of the core consumer rights, alongside opt-outs for targeted advertising and data sales. These include Virginia, Oregon, Montana, Delaware, New Hampshire, New Jersey, Nebraska, Maryland, and Rhode Island, among others.

Tier 3: Limited or No Explicit Profiling Provisions

  • Utah (UCPA) — Includes opt-out rights for targeted advertising and data sales but has the narrowest profiling provisions of any comprehensive state privacy law.
  • Iowa (ICDPA) — Opt-out rights limited to targeted advertising and data sales. No explicit profiling opt-out right.
  • Tennessee (TIPA) — Includes profiling opt-out but with a broad affirmative defense for good-faith compliance efforts.

Data Protection Assessments for Profiling

Several state privacy laws require businesses to conduct data protection assessments (DPAs) specifically for profiling activities. These assessments must weigh the benefits of the processing against the risks to consumer privacy.

States requiring DPAs for profiling include California, Colorado, Connecticut, Virginia, Oregon, Montana, Delaware, New Hampshire, New Jersey, Nebraska, Maryland, and Rhode Island. The common standard is that an assessment is required when profiling presents a “heightened risk of harm” to consumers.

A compliant DPA for profiling typically includes:

  1. Description of the processing activity — what data is collected, how the algorithm works, what decisions it informs
  2. Purpose and necessity — why profiling is needed and whether less invasive alternatives exist
  3. Benefits assessment — benefits to the controller, the consumer, and the public
  4. Risk assessment — potential harms including discrimination, financial injury, reputational damage, and intrusion on privacy
  5. Safeguards — what measures mitigate identified risks (human oversight, bias testing, accuracy audits, consumer notice)
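For teams that track assessments internally, the five elements above can be mirrored in a simple structured record. This is a minimal sketch using only the Python standard library; the field names and the completeness check are illustrative conventions, not a schema mandated by any state law.

```python
from dataclasses import dataclass, field

@dataclass
class ProfilingAssessment:
    """Internal record mirroring the five DPA elements above."""
    system_name: str
    processing_description: str   # 1. data collected, how the algorithm works
    purpose_and_necessity: str    # 2. why profiling is needed, alternatives considered
    benefits: list = field(default_factory=list)    # 3. to controller, consumer, public
    risks: list = field(default_factory=list)       # 4. discrimination, injury, intrusion
    safeguards: list = field(default_factory=list)  # 5. oversight, bias testing, notice

    def is_complete(self) -> bool:
        """Rough completeness check: every element must be filled in
        before the assessment can be considered done."""
        return all([
            self.processing_description.strip(),
            self.purpose_and_necessity.strip(),
            self.benefits, self.risks, self.safeguards,
        ])
```

A record like this also makes the annual-review cycle easier to enforce, since incomplete or stale assessments can be flagged programmatically.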

The California ADMT Framework: Setting the Standard

California’s CPPA has developed the most comprehensive automated decision-making regulations in the US. The finalized rules address three categories of ADMT use:

  1. Decisions producing legal or similarly significant effects — includes decisions about employment, housing, credit, insurance, education, and access to essential goods and services. Consumers have a right to opt out and to receive information about how the decision was made.
  2. Extensive profiling — systematic profiling that goes beyond what consumers would reasonably expect. Businesses must provide pre-use notice and offer an opt-out.
  3. Profiling in publicly accessible places — automated surveillance and behavioral tracking in physical spaces accessible to the public.

Businesses using ADMT in California must provide a “Pre-Use Notice” that explains the purpose of the technology, the types of data it processes, the logic involved, and the likely outcome. This goes further than any other state law currently requires.

Practical Compliance Steps

For businesses using AI, algorithms, or automated systems that process consumer personal data, here is a practical compliance framework:

1. Inventory Your Automated Systems

Start by cataloging every system that uses personal data to make or inform decisions about consumers. This includes marketing automation, recommendation engines, fraud detection, dynamic pricing, credit scoring, hiring tools, and insurance underwriting. For each system, document what personal data it processes, what decisions it influences, and whether those decisions produce legal or similarly significant effects.
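An inventory like this can be kept as structured data so that high-risk systems are flagged automatically. The sketch below is illustrative: the field names and the set of significant-effect categories are assumptions drawn from the decision areas state laws commonly enumerate, not a prescribed format.

```python
from dataclasses import dataclass

# Decision areas generally treated as producing "legal or similarly
# significant effects" under state privacy laws.
SIGNIFICANT_EFFECT_AREAS = {
    "employment", "housing", "credit", "insurance",
    "education", "essential_services",
}

@dataclass
class AutomatedSystem:
    """One row of the inventory; fields are illustrative, not prescribed."""
    name: str
    personal_data: list    # categories of personal data processed
    decision_areas: list   # decisions the system makes or informs

    def has_significant_effects(self) -> bool:
        """True if any decision area triggers heightened obligations."""
        return any(area in SIGNIFICANT_EFFECT_AREAS
                   for area in self.decision_areas)

inventory = [
    AutomatedSystem("resume-screener", ["employment history"], ["employment"]),
    AutomatedSystem("recs-engine", ["browsing history"], ["content_personalization"]),
]
high_risk = [s.name for s in inventory if s.has_significant_effects()]
# high_risk == ["resume-screener"]
```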

2. Map Applicable State Laws

Use our Privacy Law Calculator to determine which state laws apply based on your consumer base. Then check each applicable law’s profiling provisions. If you serve consumers in California, Colorado, or Connecticut, you face the most stringent requirements.

3. Implement Opt-Out Mechanisms

Provide consumers with clear, accessible methods to opt out of profiling. This should be separate from (or clearly identified within) your existing opt-out mechanisms for data sales and targeted advertising. Our Opt-Out Generator can help you create compliant opt-out pages and mechanisms.
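Because the profiling opt-out is a distinct right, application code should check it independently of the sale and advertising opt-outs. A minimal sketch, where the preference keys are illustrative assumptions rather than statutory terms:

```python
# The profiling opt-out is checked on its own: an opt-out of sales or
# targeted ads does not imply a profiling opt-out, and vice versa.

def may_profile(prefs: dict) -> bool:
    """Return True only if the consumer has not opted out of profiling."""
    return not prefs.get("profiling_opt_out", False)

prefs = {"sale_opt_out": True,
         "targeted_ads_opt_out": False,
         "profiling_opt_out": True}

mode = "profiled" if may_profile(prefs) else "non_profiled_default"
# mode == "non_profiled_default": this consumer opted out of profiling
# even though they did not opt out of targeted advertising.
```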

4. Conduct Data Protection Assessments

For any profiling activity that presents a heightened risk of harm, complete a DPA before deploying the system. Review and update assessments annually or when the processing materially changes. See our DPA guide for detailed instructions.

5. Provide Transparency

Update your privacy policy to disclose profiling and automated decision-making activities. At minimum, explain that you engage in profiling, identify the categories of profiling, and describe how consumers can exercise their opt-out rights. California requires more detailed pre-use notices for ADMT.

6. Build Human Oversight

For decisions with significant consumer impact, implement meaningful human review. Automated decisions about employment, credit, insurance, housing, or access to essential services should include a human-in-the-loop process — both because it reduces legal risk and because several state laws explicitly reference “solely automated” decisions as a trigger for heightened obligations.
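One way to make that oversight concrete is a routing gate: automated outputs in significant-effect areas always land in a review queue instead of taking effect directly. A minimal sketch, where the routing labels are illustrative assumptions and the decision areas mirror those named above:

```python
from dataclasses import dataclass

SIGNIFICANT_AREAS = {"employment", "credit", "insurance",
                     "housing", "essential_services"}

@dataclass
class AutomatedDecision:
    subject_id: str
    area: str           # e.g. "credit", "marketing"
    model_score: float  # raw model output; never applied directly
                        # for significant-effect areas

def route(decision: AutomatedDecision) -> str:
    """Significant-effect decisions go to a human reviewer, so they are
    never 'solely automated'; everything else may apply automatically."""
    if decision.area in SIGNIFICANT_AREAS:
        return "human_review_queue"
    return "auto_apply"

# route(AutomatedDecision("c-42", "credit", 0.91)) -> "human_review_queue"
```

Keeping the gate in one place also produces an audit trail showing that human review actually occurred, which is useful evidence in a data protection assessment.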

Emerging Trends: What’s Coming Next

Several developments signal that automated decision-making regulation will intensify in 2026 and beyond:

  • Illinois SB 2875 (introduced January 2026) — would add a right to contest adverse profiling decisions, going beyond any current state law.
  • Connecticut SB 4 — the proposed CTDPA amendments would require businesses to disclose when algorithmic pricing is used to set prices for consumers.
  • California ADMT enforcement — the CPPA has indicated that ADMT compliance will be an enforcement priority in 2026, building on the Disney and PlayOn settlements.
  • Colorado AI Act — Colorado has separately enacted SB 205 (2024), an AI-specific law targeting high-risk AI systems that takes effect in February 2026, creating additional obligations beyond the CPA’s profiling provisions.

The trend is clear: as AI adoption accelerates, state regulators are building the enforcement infrastructure to hold businesses accountable for how algorithms treat consumers.

Frequently Asked Questions

Does using AI for marketing count as “profiling” under state privacy laws?

Generally, yes. If you use personal data to build consumer profiles for ad targeting, content personalization, or marketing segmentation, that qualifies as profiling under most state privacy laws. Consumers in states with profiling opt-out rights can request that you stop this processing.

Do I need a data protection assessment for every algorithm?

Not necessarily. DPAs are typically required when profiling presents a “heightened risk of harm” to consumers. Low-risk profiling (like product recommendations) may not require a formal assessment, but high-risk uses (credit decisions, hiring, insurance) almost certainly do. When in doubt, conduct the assessment — it demonstrates good-faith compliance.

What are “legal or similarly significant effects”?

This phrase appears in most state laws but is interpreted differently. Generally, it includes decisions about employment, housing, credit, insurance, education, access to essential services, and pricing. It does not typically include basic personalization like product recommendations or content feeds, though California’s ADMT rules interpret this more broadly.

Can consumers opt out of all AI-driven processing?

No. The opt-out right generally applies to profiling in furtherance of decisions with legal or similarly significant effects, targeted advertising, and data sales. Businesses can still use AI for internal operations, fraud prevention, security, and other legitimate purposes that don’t trigger consumer opt-out rights.

How do state profiling rules interact with federal AI regulations?

As of 2026, there is no comprehensive federal AI privacy law. The FTC has enforcement authority over unfair and deceptive AI practices under Section 5, and sector-specific laws (like the Equal Credit Opportunity Act) apply to algorithmic decisions in their domains. State privacy laws fill the gap with consumer-facing rights. Businesses should comply with both state privacy profiling rules and any applicable federal requirements.

Last updated: March 29, 2026.

Disclaimer: PrivacyLawMap provides general information about US state privacy laws for educational purposes only. This is NOT legal advice. Privacy laws are complex and frequently amended. Consult with a qualified privacy attorney for advice specific to your business. PrivacyLawMap makes no warranties about the accuracy or completeness of this information.