AI Compliance Requirements by Industry — Complete Guide 2026
AI compliance in 2026 is not one law — it's a patchwork of federal guidance, state legislation, sector-specific rules, and international frameworks that apply differently depending on your industry. A healthcare operator using AI faces HIPAA-plus requirements. A financial services firm answers to SEC guidance and state money-transmission rules. An HR platform contends with EEOC guidance and state AI hiring laws. This guide maps the compliance landscape across verticals so you know what you're actually responsible for.
AI Compliance Requirement Matrix by Industry
| Industry | Primary AI Regulations | Key Deadlines / Status | Risk Level | Enforcement Body |
|---|---|---|---|---|
| Healthcare | HIPAA AI Guidance (2025), ONC Health Data Rules | OCR enforcement active | Critical | HHS OCR |
| Financial Services | SEC AI Exam Priorities, CFPB AI Fairness | Ongoing exam cycles | Critical | SEC, CFPB, OCC |
| HR/Recruiting | EEOC AI Bias Guidance, NYC Local Law 144, IL AI Video Act | NYC: active enforcement | High | EEOC, state AGs |
| Legal | ABA Formal Opinion 512 (AI competence), state bar rules | Varies by state bar | High | State bar associations |
| Insurance | NAIC AI Model Bulletin, state-level adoptions | 20+ states adopted | High | State DOI |
| Education (K-12) | FERPA + AI data use guidance, state AI policies | Evolving | Moderate | Dept. of Education, state education departments |
| Real Estate | Fair Housing Act + AI screening, HUD guidance | HUD guidance 2025 | Moderate | HUD, state agencies |
| Retail / E-commerce | FTC AI Guidance on deceptive practices, CCPA | FTC enforcement active | Moderate | FTC, state AGs |
| Manufacturing | OSHA AI in workplace safety guidance | Guidance, not rules yet | Low-Moderate | OSHA |
| Professional Services | General FTC unfair/deceptive standards | General FTC remit | Low | FTC |
Healthcare: The Strictest AI Compliance Environment
Healthcare AI sits at the intersection of HIPAA, FDA device regulation, and clinical liability. The compliance surface is wide.
**HIPAA and AI:** If an AI tool processes Protected Health Information (PHI) — which includes patient notes, diagnosis codes, scheduling data, and billing data — it must be covered by a Business Associate Agreement (BAA) with the AI vendor. HIPAA-eligible enterprise platforms (Microsoft Azure, Google Cloud Healthcare API, AWS) offer BAAs; consumer AI tools (standard ChatGPT, standard Claude) do not. Using patient data in a non-BAA AI tool is a HIPAA violation.
**FDA SaMD (Software as a Medical Device):** AI tools that influence clinical decision-making may be regulated as Software as a Medical Device. FDA's framework for AI/ML-based SaMD includes predetermined change control plans, which let authorized models be updated within pre-approved bounds. Clinical decision support software that analyzes patient-specific data to recommend treatment must comply with FDA oversight. Ambient documentation tools (Nuance DAX, Abridge) that only transcribe and format notes are generally outside this scope.
**ONC Health Data Rules (2024-2025):** Information blocking prohibitions apply. Healthcare organizations cannot use AI systems to obstruct patient access to their own data. Systems that use AI to generate patient-facing output must be auditable.
**Practical checklist for healthcare operators:**
- BAA signed with every AI vendor touching PHI ✔
- AI training data doesn't include real patient records without consent ✔
- Clinical AI decisions are logged and auditable (see the sketch after this list) ✔
- Staff trained on AI limitations (AI hallucination in clinical context = liability) ✔
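Audit-ready logging is easier to show than to describe. Here is a minimal Python sketch of what one logged clinical AI decision might capture; the `ClinicalAIDecisionLog` class and its field names are illustrative assumptions, not a HIPAA-mandated schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ClinicalAIDecisionLog:
    tool_name: str       # which AI tool produced the output
    model_version: str   # model version in production at decision time
    patient_ref: str     # internal reference only; keep raw PHI inside the BAA boundary
    input_summary: str   # categories of data the model saw, not the raw notes
    output: str          # what the AI recommended or generated
    reviewer: str        # clinician who reviewed before any action
    accepted: bool       # whether the output was acted on as-is
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example entry (all values hypothetical)
entry = ClinicalAIDecisionLog(
    tool_name="example-cds-tool",
    model_version="2.4.1",
    patient_ref="mrn-hash-0a1b",
    input_summary="recent labs + current medication list",
    output="flagged potential drug interaction",
    reviewer="dr_smith",
    accepted=True,
)
print(entry)
```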
Financial Services: SEC and CFPB Are Actively Looking
Financial services AI compliance is an active enforcement area, not future risk.
**SEC AI Examination Priorities (2025-2026):** The SEC has included AI use by registered investment advisers in its exam priorities. Specifically: (1) AI systems used for client recommendations must be explainable — "the AI decided" is not a compliant response to a client complaint. (2) AI-generated marketing content must comply with the same advertising rules as human-written content. (3) Cybersecurity around AI systems is in scope.
**CFPB AI Fairness:** The CFPB has issued guidance that AI-based credit decisions must comply with ECOA's adverse action notice requirements. If AI denies a loan application, the applicant is entitled to a specific explanation — an opaque ML score is not compliant. CFPB has taken enforcement actions on this.
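To make "specific explanation" concrete, one common approach is to map the model features that most hurt an applicant's score onto the plain-language reasons an ECOA adverse action notice needs. A minimal Python sketch; the feature names, reason wording, and attribution numbers are all hypothetical.

```python
# Map the features that most hurt a credit score to plain-language
# adverse action reasons. All names and wording below are hypothetical.
REASON_TEXT = {
    "debt_to_income": "Income insufficient for amount of credit requested",
    "delinquency_count": "Number of recent delinquent accounts",
    "credit_history_length": "Length of credit history",
}

def adverse_action_reasons(attributions: dict[str, float], top_n: int = 2) -> list[str]:
    """Return reasons for the features with the most negative contribution."""
    ranked = sorted(attributions.items(), key=lambda kv: kv[1])  # most negative first
    return [REASON_TEXT[name] for name, _ in ranked[:top_n] if name in REASON_TEXT]

print(adverse_action_reasons(
    {"debt_to_income": -0.42, "credit_history_length": -0.10, "delinquency_count": -0.31}
))
# ['Income insufficient for amount of credit requested',
#  'Number of recent delinquent accounts']
```

The point of the structure is traceability: every denial maps back to named inputs, not to an opaque score.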
**FINRA AI Guidance:** FINRA issued guidance in 2025 that broker-dealers using AI for supervisory functions (flagging unusual trades, compliance monitoring) must validate the AI's outputs with the same rigor as other compliance systems. AI-generated supervisory reports require human review before action.
**Bank Secrecy Act + AI:** FinCEN has issued guidance that AI-powered transaction monitoring (used for AML) must be tested for accuracy and disparate impact. Banks using AI for SAR filing decisions face examination on model governance.
**Practical requirement:** Document every AI system's purpose, data inputs, training methodology, and output review process. Regulators in financial services will ask for this.
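As a starting point, here is a minimal sketch of what one per-system documentation record might look like. The system name and every field value are hypothetical; the fields simply mirror the list above.

```python
# One documentation record per AI system; all values are hypothetical.
model_documentation = {
    "system": "trade-surveillance-flagger",
    "purpose": "flag unusual trades for supervisory review",
    "data_inputs": ["order flow", "account history"],
    "training_methodology": "gradient-boosted trees on 3 years of labeled alerts",
    "output_review": "compliance analyst reviews every flag before escalation",
    "last_validation": "2026-01-15",
}

for field_name, value in model_documentation.items():
    print(f"{field_name}: {value}")
```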
HR and Recruiting: NYC Local Law 144 Is the Leading Edge
Employment AI faces active enforcement, especially in hiring contexts.
**NYC Local Law 144 (Automated Employment Decision Tools):** The first law of its kind in the U.S., it requires employers using AI in hiring or promotion decisions affecting NYC residents to: (1) conduct an independent bias audit annually, (2) publish the audit results on their website, and (3) provide job candidates with notice that an AEDT is being used. Enforcement began in mid-2023. Penalties up to $1,500 per violation per day.
**Illinois AI Video Interview Act:** Illinois employers using AI to evaluate recorded video interviews must disclose AI use to candidates, explain how AI works in the evaluation, and get consent. Effective since 2020 but enforcement has increased.
**EEOC AI and Title VII:** The EEOC issued technical assistance guidance clarifying that employers are liable for discriminatory outcomes from AI hiring tools, even if the bias is unintentional and built into the vendor's model. "We used a third-party AI" is not a defense.
**What this means practically:** If your HR platform uses AI to screen resumes, rank candidates, or evaluate interviews, you need to know: (a) whether the vendor has conducted bias audits, (b) what the disparate impact data shows across protected classes (the sketch below shows the core calculation), and (c) whether you need local-law compliance (NYC, Illinois).
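For point (b), the central number in an LL144-style bias audit is the selection-rate impact ratio: each category's selection rate divided by the highest category's selection rate. A minimal Python sketch with synthetic counts:

```python
# Selection-rate impact ratio, the core LL144 bias-audit metric.
# The group names and counts below are synthetic illustration data.
def impact_ratios(selected: dict[str, int], applied: dict[str, int]) -> dict[str, float]:
    """Impact ratio = category selection rate / highest category selection rate."""
    rates = {group: selected[group] / applied[group] for group in applied}
    best = max(rates.values())
    return {group: round(rate / best, 3) for group, rate in rates.items()}

print(impact_ratios(
    selected={"group_a": 48, "group_b": 30},
    applied={"group_a": 120, "group_b": 100},
))
# {'group_a': 1.0, 'group_b': 0.75} -- 0.75 falls below the EEOC's
# four-fifths (0.8) rule of thumb, a red flag worth investigating
```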
Insurance: NAIC Model Bulletin Spreading State by State
The NAIC (National Association of Insurance Commissioners) adopted a Model Bulletin on the Use of AI Systems in 2023. By early 2026, more than 20 states have adopted it in full or substantial form.
**Core requirements under the NAIC bulletin:**
- Insurers are responsible for AI decisions even when using third-party AI systems
- AI cannot be used to make underwriting or claims decisions that violate existing anti-discrimination rules (including use of protected class proxies)
- Insurers must be able to explain AI-driven decisions to regulators and policyholders
- Written AI governance program required: inventory of AI systems, risk assessment, testing protocols
**States with active enforcement:** Colorado, Connecticut, Illinois, New York, and California have the most active enforcement postures on insurance AI. Colorado's SB21-169, which restricts insurers' use of external consumer data and algorithms, was the first state-level AI insurance law and remains the strictest.
**Practical implication:** Every insurer using AI for underwriting, pricing, or claims must have a documented governance program and be prepared to produce it on examination.
Legal and Professional Services: The Competence Question
Legal AI compliance is largely about professional competence standards rather than government regulation.
**ABA Formal Opinion 512 (2024):** The American Bar Association clarified that lawyers have a duty of competence that extends to the AI tools they use. This means: (1) lawyers cannot submit AI-generated legal work without verifying it, (2) AI must be used in a way that maintains confidentiality (putting client data into public AI models without consent is a potential ethics violation), and (3) billing clients for time the AI actually saved, rather than passing the efficiency gain through, raises fee-reasonableness concerns.
**State bar rules vary:** California, New York, Florida, and Texas have all issued guidance or are developing rules. California's State Bar has been most active, proposing specific AI use disclosure requirements. Check your state bar's AI guidance page.
**Confidentiality in AI tools:** Client data cannot be entered into AI systems that train on inputs (standard ChatGPT, standard consumer tools) without client consent. Enterprise AI agreements (Microsoft 365 Copilot, Harvey AI, Clio's legal AI tools) include data processing terms that address this. Know which tier of service you're using.
Universal AI Compliance Baseline for Any Business
Regardless of industry, every business using AI in customer-facing or decision-making contexts should implement these baseline practices in 2026:
**1. AI System Inventory.** List every AI tool in use, its purpose, what data it processes, and who the vendor is. You cannot govern what you haven't inventoried.
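A spreadsheet is enough to start. The sketch below emits a starter inventory as CSV; the columns and both example rows are assumptions that mirror the baseline questions in this section.

```python
# Emit a starter AI system inventory as CSV. Columns and rows are
# illustrative assumptions, not a required format.
import csv
import io

FIELDS = ["tool", "vendor", "purpose", "personal_data", "agreement", "human_review"]

inventory = [
    {"tool": "support-chat-assistant", "vendor": "ExampleAI",
     "purpose": "draft customer replies",
     "personal_data": "customer names, order history",
     "agreement": "DPA signed 2025-11",
     "human_review": "agent approves before send"},
    {"tool": "resume-screener", "vendor": "ExampleHR",
     "purpose": "rank applicants",
     "personal_data": "applicant resumes",
     "agreement": "DPA + LL144 bias audit",
     "human_review": "recruiter reviews all rankings"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(inventory)
print(buf.getvalue())
```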
**2. Data Processing Review.** For each AI tool: does it process personal data? Is a privacy notice disclosure required (CCPA, GDPR if you have EU users)? Does it require a DPA or BAA with the vendor?
**3. Human-in-the-Loop for High-Stakes Decisions.** Decisions that affect employment, credit, housing, healthcare, or insurance should have human review of AI outputs before action. Document the review process.
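Here is a minimal sketch of what one documented review might record; the `HumanReviewRecord` class and its fields are illustrative assumptions, not a regulatory schema.

```python
from dataclasses import dataclass

@dataclass
class HumanReviewRecord:
    system: str         # which AI system produced the output
    decision_type: str  # employment, credit, housing, healthcare, insurance
    ai_output: str      # what the model recommended
    reviewer: str       # the accountable human
    action_taken: str   # "accepted", "modified", or "overridden"
    rationale: str      # the reviewer's reasoning, in their own words

# Example record (all values hypothetical)
record = HumanReviewRecord(
    system="resume-screener",
    decision_type="employment",
    ai_output="rank candidate 14th of 200",
    reviewer="recruiter_jlee",
    action_taken="overridden",
    rationale="relevant experience missed by keyword matching",
)
print(record)
```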
**4. Vendor Due Diligence.** Before buying AI for any regulated use, ask: (a) Has a bias audit been conducted? (b) What data was the model trained on? (c) What are the data retention and deletion terms? (d) Do they have compliance documentation for your industry?
**5. Employee Training.** Staff using AI need to understand its limitations. AI hallucination in a legal brief, a clinical note, or a financial report creates direct liability. Training on "when not to trust AI output" is as important as training on how to use it.
FAQ
**Q: Is there a federal AI law in the U.S.?**
A: No comprehensive federal AI law existed as of early 2026. Compliance comes from sector-specific rules (HIPAA, ECOA, SEC regulations), FTC unfair/deceptive standards, and state-level laws (Colorado, California, Illinois, New York are the most active). Federal AI legislation remains in committee.
**Q: Does using AI mean I need to disclose it to customers?**
A: In regulated sectors (healthcare, insurance, HR), disclosure is increasingly required. For general business use, the FTC's standard is that you cannot deceive consumers about AI use. Claiming a human wrote something generated entirely by AI, or using AI to impersonate a human in customer interactions, creates FTC exposure.
**Q: Is the EU AI Act relevant to U.S. businesses?**
A: Yes, if you have customers or operations in the EU. The EU AI Act takes effect in stages through 2026-2027 and has extraterritorial reach similar to GDPR. High-risk AI applications (including HR, credit scoring, and biometric identification) face the strictest requirements.
**Q: What's the biggest AI compliance risk for small businesses?**
A: Using consumer AI tools (non-enterprise ChatGPT, free tiers of AI tools) with customer or employee personal data. These tools don't have the data processing agreements required for regulated data. The fix is straightforward: use enterprise tiers with data processing agreements, or don't put sensitive data in AI tools.