Texas Passes TRAIGA: What the New AI Law Means for Your Business
- Rocio Palomo, Michael Caine, Lee G. Petro, Gregory L. Ewing
- Industry Alerts
On June 22, 2025, Governor Abbott signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which will take effect January 1, 2026. Any business or government agency working with AI in Texas should take note that TRAIGA is not a copy-paste of other states’ laws; rather, it specifically targets intentional misuse of AI, not just “high-risk” AI.
Unlike broader “high-risk AI” frameworks emerging in other states, TRAIGA puts intent at the center of its rules, with an emphasis on preventing deliberate misuse. It also makes meaningful changes to Texas’s privacy statutes to address AI-specific issues, particularly around biometric data and transparency obligations.
Who Must Comply?
- Government agencies are explicitly within scope if they use AI to interact with the public.
- Private sector companies that develop, market, sell, or otherwise provide AI-generated content or AI services to Texas residents are also covered, even if based outside the state, so long as their AI systems affect Texas residents.
Prohibited Conduct: Intent Is Key
TRAIGA targets deliberate misuse of AI systems, prohibiting private entities from developing or deploying AI systems that intentionally:
- Encourage or incite self-harm, violence, or illegal activity
- Discriminate against protected classes under law
- Generate illegal sexual content, including AI-generated deepfakes. The Act also explicitly bans child pornography and sexually explicit chat systems that impersonate children.
Of particular note, accidental or unintentional impacts alone are not sufficient to trigger a violation. The Attorney General must show a purposeful intent to discriminate or cause harm.
Notice and Disclosure Requirements: No More “Black Box” Interactions
One of TRAIGA’s central compliance demands is transparency in government use of AI. Government agencies must provide clear, plain-language notice whenever an individual is interacting with an AI system, rather than a human, “regardless of whether it would be obvious to a reasonable person that the person is interacting with an [AI] system.” Any notice must be:
- Conspicuous and easily understood: No legalese, no fine print, no ambiguous chatbots masquerading as people.
- Provided at the start of the interaction: Users must be informed upfront, not after the fact or buried in a privacy policy.
- Free of “dark patterns”: Agencies are expressly prohibited from using manipulative UX/UI techniques to obscure or downplay AI involvement.
The law signals a move toward radical transparency, and government agencies should begin reviewing all user-facing AI touchpoints to ensure compliance. Staff training will be essential, as will regular audits of disclosures and interface design.
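As a minimal sketch of what an upfront disclosure might look like in a chatbot, the snippet below delivers a plain-language notice before any user exchange begins. The names (`DISCLOSURE`, `start_session`) and the notice wording are illustrative assumptions, not language drawn from the statute.

```python
# Hypothetical sketch: surface the AI disclosure at the very start of a
# session, before any exchange, rather than burying it in a privacy policy.

DISCLOSURE = (
    "You are chatting with an automated AI system, not a human. "
    "You may ask to speak with a person at any time."
)

def start_session(send_message) -> None:
    """Open a chat session by delivering the disclosure first.

    `send_message` stands in for whatever function delivers text to the
    user; the key point is ordering: notice precedes interaction.
    """
    send_message(DISCLOSURE)  # conspicuous, plain-language, upfront
    # ...only then begin handling user input...

if __name__ == "__main__":
    start_session(print)
```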
Biometric Privacy: More Stringent Rules and New Exceptions
TRAIGA updates Texas’s biometric privacy framework, tightening the rules around notice and consent for the collection and use of biometric identifiers (e.g., fingerprints, iris scans, voiceprints). The law clarifies that it does not apply to (i) general photographs or other biometric identifiers made publicly available by the individual, (ii) voice recordings required by financial institutions, (iii) information collected, used, or stored for health care treatment, payment, or operations, or (iv) biometric data used solely for training or security purposes, provided it is not used to identify individuals.
Biometric identifiers may be used to train AI systems, but if that information is subsequently used for commercial purposes, the entity that collected the data may be subject to the Act’s enforcement provisions unless it first obtains the individual’s consent.
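To illustrate how that training-versus-commercial-use distinction might be operationalized, here is a minimal sketch of a consent gate; the function names and the in-memory consent store are hypothetical, not terms from the Act.

```python
# Hypothetical sketch: commercial use of biometric-derived data is gated on
# recorded consent, while training-only use proceeds without it.

consent_records: dict[str, bool] = {}  # individual_id -> consent on file?

def record_consent(individual_id: str) -> None:
    consent_records[individual_id] = True

def may_use_commercially(individual_id: str) -> bool:
    """Training use alone may be permitted; commercial reuse of the same
    data requires prior consent under this reading of the Act."""
    return consent_records.get(individual_id, False)

record_consent("user-123")
assert may_use_commercially("user-123")
assert not may_use_commercially("user-456")  # no consent on file
```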
Government Use Restrictions
Under TRAIGA, government agencies are expressly prohibited from:
- Creating or applying “social scoring” algorithms that result in discriminatory or otherwise adverse treatment of individuals; and
- Creating or using AI systems to uniquely identify individuals using biometric data or by collecting images or media from the Internet or other public sources, whether targeted or not, unless (i) the individual has given consent, and (ii) the use does not violate any rights protected by the U.S. Constitution, the Texas Constitution, or applicable state or federal law.
These protections create meaningful guardrails that prevent government overreach in biometric identification efforts.
Recap: Obligations for Government Use
In summary, government agencies using AI should take note of the following key requirements:
- Government agencies must clearly disclose when AI is interacting with consumers; no confusing UX or manipulative design.
- Social scoring is strictly banned, and government use of biometric identifiers (fingerprint, iris, voiceprint) to identify individuals requires consent; routine photographs and voice recordings fall outside these restrictions.
- The state’s biometric privacy law tightens notice/consent and clarifies exemptions for training or security uses not tied to identification.
- Processors handling AI-processed personal data must assist controllers in compliance.
Regulatory Infrastructure: The Texas AI Council and Sandbox
Texas will launch a 7-member AI Council under the Department of Information Resources. The council will serve an advisory role, providing guidance, issuing reports, and supporting agency training, but it will not have rulemaking power.
Organizations pursuing innovative or high-impact AI projects can take advantage of Texas’s new “regulatory sandbox,” which offers a controlled environment to test real-world AI systems. The program promotes faster adoption of AI by temporarily easing regulatory requirements while providing oversight and risk management, allowing organizations to test their AI solutions without the risk of enforcement penalties.
Enforcement and Remedies
- Enforcement authority sits solely with the Texas Attorney General. There is no private right of action; clients can expect a formal complaint system and investigative process, with civil investigative demands as a tool.
- Companies get a 60-day cure period following notice of an alleged violation before penalties accrue.
- Affirmative defenses are available for organizations that document robust internal testing, conduct adversarial “red-teaming,” or follow industry standards such as the NIST AI Risk Management Framework.
- Penalties start at $10,000 per violation, scaling up depending on severity and remediation.
Compliance Action Items for Legal and Risk Teams
- Document Intent Meticulously: Maintain comprehensive records of each AI system’s intended purpose, especially for use cases that could be construed as manipulative, discriminatory, or otherwise high-risk (see the sketch after this list).
- Audit and Train: Schedule regular adversarial and red-team tests, and keep detailed logs/audit trails (NIST AI RMF alignment is a best practice). Train staff on both technical testing and user-facing transparency.
- Privacy Overhaul: Review and update data-use policies, privacy notices, and vendor agreements, with special attention to biometric exemptions and new processor obligations.
- User Transparency: Map all AI touchpoints where users could interact with automated systems. Draft and test clear disclosures, and run usability audits to check for accidental “dark patterns.”
- Prepare for the Sandbox: If you’re eyeing innovative or high-stakes AI deployments, consider the regulatory sandbox as a way to pilot systems with regulatory oversight but reduced risk.
- Monitor the Federal Picture: Assign someone to track federal legislative developments, including potential budget riders that could preempt or override state laws. For example, Senator Ted Cruz recently proposed a federal budget measure that would prohibit states from enforcing their own AI regulations for ten years as a condition for accessing a proposed $500M federal AI deployment fund. If enacted, this could effectively suspend enforcement of TRAIGA, making it critical for compliance teams to remain agile and closely monitor federal activity that may impact state-level obligations.
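As a concrete starting point for intent documentation and audit trails, the sketch below shows one possible record structure. The schema and field names are our illustrative assumptions; actual fields should be set with counsel and mapped to the NIST AI RMF.

```python
# Hypothetical sketch: a structured record documenting an AI system's
# intended purpose and red-team history. Schema is illustrative only.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AISystemRecord:
    system_name: str
    intended_purpose: str            # evidence of intent, central under TRAIGA
    prohibited_uses: list[str]       # uses the team has expressly ruled out
    red_team_log: list[str] = field(default_factory=list)
    last_reviewed: str = ""

    def log_red_team_test(self, summary: str) -> None:
        """Append a timestamped red-team finding to the audit trail."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.red_team_log.append(f"{stamp}: {summary}")
        self.last_reviewed = stamp

record = AISystemRecord(
    system_name="benefits-eligibility-chatbot",
    intended_purpose="Answer benefits FAQs; makes no eligibility decisions.",
    prohibited_uses=["scoring applicants", "inferring protected traits"],
)
record.log_red_team_test("Adversarial prompts did not elicit eligibility advice.")
```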
Why This Matters Now
TRAIGA is not just another AI regulation; it’s a focused, intent-driven statute with teeth, robust transparency requirements, and new obligations for biometric privacy and data processing. It also offers space for innovation via the sandbox, but the threat of federal preemption means the landscape could shift quickly. Now is the time to get your documentation, testing, and notices in order, and to make sure your compliance program is nimble.
Questions?
Dickinson Wright is available to provide legal support for companies navigating AI-related issues.