In February 2025, the Virginia legislature passed a bill governing high-risk artificial intelligence. On March 24, 2025, Governor Youngkin vetoed it.
If enacted, the bill, titled the High-Risk Artificial Intelligence Developer and Deployer Act, would have been the second of its kind, following Colorado’s AI consumer protection law.
Similar to Colorado’s law, Virginia’s bill aimed to protect consumers against algorithmic discrimination and imposed various requirements on both developers and deployers of AI systems used to make consequential decisions (“high-risk AI systems”) in areas such as education, employment, financial services, health care, and legal services.
Under the bill, among other requirements, developers would have had to use reasonable care to protect consumers from algorithmic discrimination, including by providing various disclosures regarding the high-risk AI system’s intended uses, benefits, limitations, and other key elements. Developers also would have had to provide documentation or disclosures describing how the high-risk AI system was evaluated and the measures taken to mitigate risk, and to ensure that outputs of high-risk AI systems were identifiable and detectable and complied with applicable accessibility requirements.
Deployers, in addition to providing consumers with various disclosures regarding the high-risk AI system and their use of it, would have been responsible for implementing a risk management policy and program and completing an impact assessment for each high-risk AI system prior to deployment or use.
The Virginia bill provided for various exemptions, including exemptions allowing developers and deployers to comply with other applicable laws and to protect their trade secrets.
Governor Youngkin, when vetoing the bill, stated that while he supported responsible governance of AI, he believed the bill would establish a burdensome AI regulatory framework. He emphasized the growth of AI innovators and the AI industry in Virginia, stated that the government should enable and empower innovators to create and grow, and voiced concern that the bill’s regulatory framework would undermine and stifle that progress. Gov. Youngkin also asserted that many laws already in place protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, and more, and he suggested that those laws are sufficient to protect consumer interests without placing a particularly onerous burden on smaller firms or startups. Indeed, last year, Gov. Youngkin issued Executive Order 30 (2024) to address the use of AI in state government. Through the Executive Order, the Governor established AI Policy and IT Standards as well as AI Education Guidelines for the integration of AI in schools.
Does Governor Youngkin’s veto signal a change in approach for regulating AI?
Governor Youngkin’s stated AI philosophy closely resembles that of the Trump Administration. On January 23, 2025, President Trump signed Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence. EO 14179 signals that the Trump administration will take a light-touch regulatory approach to emerging technologies like AI, emphasizing innovation and economic growth over strict oversight. This philosophy is rooted in the belief that overregulation could stifle technological advancement and competitiveness, especially in a rapidly evolving field like AI, and it marks a significant departure from President Biden’s Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
As discussed in our earlier post, states like Colorado have taken a more proactive stance, implementing stricter regulations to address specific risks associated with AI. Over the past year, approximately 600 AI-related bills have been introduced across 45 states, reflecting a widespread recognition of the need for proactive measures to govern this rapidly evolving technology.
This divergence highlights a key challenge in U.S. AI governance: the lack of a unified federal framework. While light-touch regulation at the federal level encourages innovation, it leaves room for states to fill the gaps with their own, often more stringent, rules. This patchwork approach can create inconsistencies, making it harder for companies to navigate compliance across jurisdictions.
The contrast also reflects broader debates about the role of government in regulating technology. Should the focus be on fostering innovation, or on mitigating risks and ensuring ethical practices? Colorado's approach suggests that some states are unwilling to wait for federal action, opting instead to lead the way in addressing AI's societal impacts. Yet even Colorado’s governor expressed concerns about overbreadth. On the same day that he signed the Colorado AI Act, Governor Polis sent a letter to the Colorado legislature encouraging lawmakers to address the burdens of the law and its imposition of liability based on discrimination that is not intentional, while maintaining “guardrails” on the development and deployment of AI. How Colorado’s legislature responds could offer insight into how other states will calibrate their own AI laws.