In February 2025, the Virginia legislature passed a bill governing high-risk artificial intelligence. On March 24, 2025, Governor Youngkin vetoed the bill.
If enacted, the bill, titled the High-Risk Artificial Intelligence Developer and Deployer Act, would have been the second of its kind, following Colorado’s AI consumer protection law.
Similar to Colorado’s law, Virginia’s bill aimed to protect consumers against algorithmic discrimination and imposed various requirements on both developers and deployers of AI systems that are used to make consequential decisions (“high-risk AI systems”) around education, employment, financial services, health care, and legal services.
Under the bill, among other requirements, developers would have had to use reasonable care to protect consumers from algorithmic discrimination, including by providing various disclosures regarding a high-risk AI system's intended uses, benefits, and limitations. Developers would also have had to provide documentation or disclosures describing how the high-risk AI system was evaluated and what measures were taken to mitigate risk, and to ensure that outputs of high-risk AI systems were identifiable and detectable and complied with applicable accessibility requirements.
Deployers, in addition to providing consumers with various disclosures regarding the high-risk AI system and their use of the system, would have also been responsible for implementing a risk management policy and program and completing an impact assessment for the high-risk AI systems, prior to deployment or use.
The Virginia bill provided various exemptions, including provisions permitting developers and deployers to comply with other applicable laws and to protect their trade secrets.
In vetoing the bill, Governor Youngkin stated that while he supported responsible governance of AI, he believed the bill would establish a burdensome AI regulatory framework. He emphasized the growth of AI innovators and the AI industry in Virginia, arguing that the government should enable and empower innovators to create and grow, and that the bill's regulatory framework would undermine and stifle that progress. Governor Youngkin noted that many laws already in place protect consumers and impose responsibilities on companies with respect to discriminatory practices, privacy, data use, and more, and suggested that such laws are sufficient to protect consumer interests without placing a particularly onerous burden on smaller firms or startups. Indeed, last year Governor Youngkin issued Executive Order 30 (2024) to address the use of AI in government. Through the Executive Order, the Governor established AI Policy and IT Standards as well as AI Education Guidelines for the integration of AI in schools.
Does Governor Youngkin’s veto signal a change in approach for regulating AI?
In January 2025, President Trump signed an Executive Order on Removing Barriers to American Leadership in Artificial Intelligence, signaling that this administration favors a light-touch regulatory approach for emerging technologies like AI, emphasizing innovation and economic growth over strict oversight. This philosophy is rooted in the belief that overregulation could stifle technological advancement and competitiveness, especially in a rapidly evolving field like AI. Federal policies often prioritize flexibility, allowing companies to self-regulate or adhere to voluntary guidelines rather than imposing rigid mandates.
As discussed in our earlier post, states like Colorado have taken a more proactive stance, implementing stricter regulations to address specific risks associated with AI. Over the past year, approximately 600 AI-related bills have been introduced across 45 states, reflecting a widespread recognition of the need for proactive measures to govern this rapidly evolving technology.
This divergence highlights a key challenge in U.S. AI governance: the lack of a unified federal framework. While light-touch regulation at the federal level encourages innovation, it leaves room for states to fill the gaps with their own, often more stringent, rules. This patchwork approach can create inconsistencies, making it harder for companies to navigate compliance across different jurisdictions.
The contrast also reflects broader debates about the role of government in regulating technology. Should the focus be on fostering innovation, or on mitigating risks and ensuring ethical practices? Colorado's approach suggests that some states are unwilling to wait for federal action, opting instead to lead the way in addressing AI's societal impacts. This could serve as a model—or a cautionary tale—for other states and even federal policymakers.
- Associate
Anvi Yalavarthy is an associate in Moore & Van Allen's Intellectual Property group. Her work focuses on transactional intellectual property matters, with a particular interest in privacy and data security.
- Counsel
Tandy is counsel in the Litigation, Discovery, and Privacy & Data Security groups. She specializes in information management issues, including privacy and data security. Tandy uses her experience to help clients understand their ...
About Data Points: Privacy & Data Security Blog
The technology and regulatory landscape is rapidly changing, impacting how companies across all industries operate, particularly in the ways they collect, use, and secure confidential data. We provide transparent and cutting-edge insight on critical issues and dynamics. Our team informs business decision-makers about the information they must protect, and what to do if and when security is breached.