Recent AI Laws and Regulation Updates

The legal landscape surrounding the creation, use, and governance of artificial intelligence (AI) is rapidly changing and growing, imposing significant obligations on businesses and creating new rights for individuals. In recent months, the US has seen new AI laws and regulations, both passed and proposed, at the federal and state levels. The following details some recent developments in the US. There is a clear trend toward requiring notice to consumers that they are interacting with AI and protecting individuals from the risks of AI, along with an emphasis on AI governance. But stay tuned: these laws are only the beginning of the wave.

State Laws

Colorado’s Artificial Intelligence Act

One of the most significant of the new laws is the recently passed Colorado Artificial Intelligence Act, a comprehensive law focused on protecting individuals from “algorithmic discrimination.”[1]

  • Coverage. The law, which applies to persons doing business in Colorado and protects individuals who are residents of Colorado, outlines various requirements for the “developers” and “deployers” of “high risk AI systems.”
    • Developers develop or intentionally and substantially modify an AI system.
    • Deployers use an AI system that makes, or is a substantial factor in making, a consequential decision (“high-risk AI system”).
    • A consequential decision is a decision that has a material legal or similarly significant effect on the provision or denial to a consumer of, or the cost or terms of, education or employment opportunities; financial, lending, healthcare, or essential government services; housing; insurance; or legal services.
    • There are exceptions from some requirements for deployers with fewer than 50 full-time equivalent employees who don’t use their own data to train the AI and who meet certain other requirements.
  • Protecting consumers from algorithmic discrimination. Developers and deployers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk AI system. Reasonable care is not defined, but there is a rebuttable presumption that the developer or deployer used reasonable care if it complied with certain notice, disclosure, and documentation requirements (detailed in Section 6-1-1702 of the Act) and any additional regulations issued by the Colorado Attorney General.
  • Disclosure that the consumer is interacting with AI. Developers and deployers that make available any AI system (whether high risk or not) must disclose to each consumer who interacts with the system that the consumer is interacting with an AI system.
  • Developer-specific disclosures to deployers. Developers must provide notice and documentation to deployers of the high-risk AI system to allow deployers to comply with the law themselves and to assist the deployers in understanding the outputs and monitoring the performance of the system for risks of algorithmic discrimination, including:
    • a statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the system;
    • high-level summaries of the type of data used to train the system;
    • known or reasonably foreseeable limitations of the system, including known or reasonably foreseeable risks of algorithmic discrimination;
    • the purpose, intended benefits and uses, and intended outputs of the system;
    • how the system was evaluated for performance and mitigation of algorithmic discrimination;
    • data governance measures used over training datasets and measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; and
    • measures taken to mitigate algorithmic discrimination.
  • Developer website notices and disclosures of algorithmic discrimination. Developers must also:
    • publish on their websites or in a public use case inventory the types of high-risk AI systems they have developed or modified and currently make available to deployers or other developers and how the developer manages risks of algorithmic discrimination; and
    • disclose to the Attorney General and to all known deployers of the system any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the system within 90 days of discovering, or receiving a credible report indicating, that the system has caused or is reasonably likely to cause algorithmic discrimination.
  • Deployer notices to consumers of use, adverse decisions and appeal and opt-out rights, and notice to the Colorado AG. Deployers of high-risk AI systems must do the following:
    • Notify the consumer, before a consequential decision is made, that a high-risk AI system has been deployed to make, or be a substantial factor in making, the decision.
      • Notice must include the purpose of the high-risk AI system, the nature of the consequential decision, the contact information for the deployer, a plain-language description of the system, and instructions on how to access the statement on the deployer’s website;
      • the deployer also must give the consumer information about the right to opt out of the processing of personal data for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects as required under the Colorado Privacy Act; and
      • if the consequential decision is adverse to the consumer, the deployer must provide to the consumer a statement disclosing the principal reason(s) for the consequential decision, the degree to which the high-risk AI system contributed to the decision, the type of data that was processed by the system, and the source(s) of the data. The consumer must also be given the opportunity to correct any incorrect personal data that the system processed in making the decision and an opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of the system, with human review where feasible.
    • Notify the Attorney General upon discovering that a high-risk AI system has caused algorithmic discrimination.
  • Risk management and impact assessments. In addition, deployers who do not fall within the exception described above (for small deployers that do not use their own data to train the high-risk AI system and that meet certain other requirements) must also:
    • implement and update a risk management policy and program to govern the deployment of the system that includes the principles, processes and personnel the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination;
    • complete an impact assessment for a deployed system at least once a year and within ninety days after any intentional and substantial modification to the system is made available;
    • review at least once a year the deployment of the system to ensure that it is not causing algorithmic discrimination; and
    • publish a statement on the deployer’s website that summarizes the types of high-risk AI systems that the deployer currently deploys, how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination, and the nature, source, and extent of the information collected and used by the deployer.
  • Exceptions. The Colorado Artificial Intelligence Act has exceptions similar to those seen in comprehensive privacy laws, including stating that the law is not intended to restrict a developer’s or deployer’s ability to comply with applicable law, cooperate with law enforcement, defend legal claims or take other specified action. Entity level exceptions generally require compliance with other regulations governing AI.
  • The Colorado AI law goes into effect February 1, 2026.

Utah’s Artificial Intelligence Policy Act

  • Impact of AI on Consumer Protection Law Violations. Utah’s Department of Commerce has a Division of Consumer Protection that administers and enforces several laws listed in Utah Code Ann. § 13-2-1, including Utah’s Consumer Privacy Act, Social Media Regulation Act, and Music Licensing Practices Act, among several others. Utah recently passed Artificial Intelligence Amendments[2] (AI Amendments) that went into effect May 1, 2024, and provide that it is not a defense to a violation of any of the laws under § 13-2-1 that generative AI violated, or was used in the violation of, the law. The AI Amendments essentially require users of generative AI to take responsibility for any consumer protection law violations caused by such use.
  • Disclosure that a person is interacting with AI. The AI Amendments also state that:
    • anyone who uses or causes generative AI to interact with another person in connection with any of the statutes under § 13-2-1 must disclose that the person is interacting with generative AI if asked or prompted by the person for such disclosure; and
    • anyone providing services of an occupation regulated by the Department of Commerce that requires a license or state certification to practice must disclose when a person is interacting with generative AI in the provision of such services; anyone providing services of a regulated occupation through generative AI must still meet the requirements of that occupation.
  • Creation of the Office of Artificial Intelligence Policy. The AI Amendments also enact the Artificial Intelligence Policy Act, which creates within the Utah Department of Commerce the Office of Artificial Intelligence Policy (the “Office”). The Office, in turn, is to create and administer an AI learning laboratory program to research and analyze the risks, benefits, impacts, and implications of AI technologies; encourage development of AI technologies; evaluate, with AI companies, the effectiveness of current, potential, or proposed regulation of AI technologies; and produce findings and recommendations for legislation and regulation of AI.

Tennessee’s ELVIS Act

  • Tennessee recently updated its existing right of publicity law with the Ensuring Likeness, Voice, and Image Security (ELVIS) Act.[3] The law was passed and signed by the Governor in March 2024 and includes a new AI provision.
  • Liability for unauthorized use of name, photograph, voice or likeness. The Act states that every individual has a property right in the use of that individual’s name, photograph, voice, or likeness in any medium in any manner. As such, the Act imposes civil liability on anyone who:
    • uses an individual’s name, photograph, voice, or likeness for advertising or fundraising without the individual’s consent;
    • publishes, performs, distributes, transmits, or otherwise makes available to the public an individual’s voice or likeness, knowing that the individual didn’t authorize the use; or
    • distributes, transmits, or otherwise makes available an algorithm, software, tool, or other technology, service, or device that has the primary purpose or function of producing an individual’s photograph, voice, or likeness without authorization from the individual.
  • Exception. To the extent any use of an individual’s name, photograph, voice, or likeness is protected by the First Amendment, such use is not a violation in certain circumstances, such as when the use is in connection with any news, public affairs, or sports broadcast or account, is for the purposes of comment, criticism, scholarship, satire or parody, or is fleeting or incidental.
  • The ELVIS Act is effective July 1, 2024.

Federal Law and Guidance

Federal Proposed Generative AI Copyright Disclosure Act

  • U.S. Representative Adam Schiff introduced the Generative AI Copyright Disclosure Act of 2024.
  • Coverage. If enacted, this Act would require anyone who creates or alters a training dataset that is used in building a generative AI system to provide a notice to the Register of Copyrights.
  • Notice requirements. Notice must contain a detailed summary of any copyrighted works used in such creation or alteration of the dataset and a link to the dataset if it is publicly available online at the time the notice is submitted. Notice would have to be submitted at least 30 days before the generative AI system is made available to consumers, if the system is first made available after the Act takes effect, or within 30 days after the Act takes effect if the system was made available before.
  • Penalty. Anyone who doesn’t comply would be subject to a civil penalty of at least $5,000.

OMB Policy

  • Requirements for executive agencies. The Office of Management and Budget (OMB) issued a memorandum, in line with the Biden Administration’s AI Executive Order issued in October 2023, to the heads of all executive agencies outlining requirements and recommendations that would apply to new and existing AI that is developed, used, or procured by or on behalf of covered agencies.
    • Each agency must designate a Chief AI Officer (CAIO) with the necessary authority to perform the responsibilities in the memo and submit plans to comply with the memo.
    • Each agency (except for the DoD and Intelligence Community) must at least annually submit to OMB and post publicly an inventory of each of its AI use cases. Where AI use cases are not required to be inventoried, agencies must still report aggregate metrics about such use cases.
    • Each agency listed in the Chief Financial Officers Act (“CFO Act”) must establish an AI Governance Board convening, at least on a semi-annual basis, relevant senior officials and chaired by the Deputy Secretary of the agency or equivalent and vice-chaired by the CAIO.
    • Each CFO Act agency must publicly release a strategy for identifying and removing barriers to the responsible use of AI, including current and planned uses of AI, an assessment of the agency’s AI maturity goals, and a plan to effectively govern its use of AI, among other things.
    • Agencies should ensure that they have access to adequate IT infrastructure and capacity to sufficiently share, curate and govern agency data to train, test, and operate AI.
    • Agencies should assess potential beneficial uses of generative AI.
    • Agencies should prioritize recruiting, hiring, developing, and retaining talent in AI, including by designating an AI Talent Lead and providing resources and training to develop AI talent internally.
    • Agencies should share their custom developed code unless prohibited by law or contract or unless such sharing would create a risk to national security, confidentiality of government information, privacy or the rights or safety of the public or to the agency mission, programs, or operations.
    • Agencies must implement minimum risk-management practices, including completing an AI impact assessment and testing and evaluating the AI. Agencies must also conduct ongoing monitoring of the impact of AI and regularly evaluate the risks from the use of AI.
    • Agencies must monitor for and mitigate algorithmic or AI-enabled discrimination.
    • Use of AI must be accompanied, where practicable, by timely human consideration and a potential remedy for any individual who would like to appeal or contest the negative impact of AI on them.
    • Agencies must provide and maintain a mechanism for individuals to conveniently opt out of the AI functionality in favor of a human alternative.

USPTO Guidance

  • Inventorship. The USPTO, pursuant to the AI Executive Order, published guidance in February 2024 regarding AI-assisted inventorship. The guidance stated that inventorship is limited to natural persons, but that a natural person’s use of an AI system would not preclude the person from qualifying as an inventor so long as the person significantly contributed to the invention.[4]
  • Use of AI tools. The USPTO published additional guidance in April 2024 on the use of AI-based tools in practicing before the USPTO. The guidance did not establish new rules with respect to AI, but rather discussed how the existing rules apply to AI.[5] Essentially, practitioners are not prohibited from using AI, but should be aware that they are ultimately responsible for any submissions and presentations to the USPTO and should keep in mind the risks of AI to ensure they otherwise meet their duties and responsibilities under the law. Specifically, under the April guidance:
    • Using AI to draft documents is not prohibited, but parties submitting the documents to the USPTO are responsible for their contents and should ensure all statements made are true to their own knowledge and all arguments are warranted by existing law.
    • There is no duty to inform the USPTO that AI was used to draft documents, unless specifically requested, but parties submitting the documents have a duty to review the documents and correct any errors.
    • Any documents, forms or correspondence requiring a signature must include the signature of a natural person.
    • The use of an AI tool in the invention creation process or practicing before the USPTO must be disclosed if such use is material to patentability.
    • If an AI tool is used to draft patent claims, the practitioner has a duty to modify those claims as needed to present them in a patentable form before submission, and to ensure technical accuracy.
    • Practitioners must make sure they avoid submitting any AI-generated trademark specimens, which do not show actual use of the trademark in commerce, or any other AI-generated evidence of things that do not actually exist in the marketplace.
    • AI-generated material that misstates facts or law, includes irrelevant material, or includes unnecessarily cumulative material, could be construed as being presented for an improper purpose.
    • AI systems may not obtain a USPTO.gov account.
    • Practitioners should ensure confidentiality is maintained, keeping in mind that any information used to train an AI system may filter into outputs from the AI system provided to others, and that AI tools may use servers located outside the US, where data may be stored.

More to Come

Various states are considering laws regulating AI. Some states have already enacted laws related to deepfake technology in the past few years, and more may follow. We can also expect more states to enact laws similar to those described above; California, for example, has proposed anti-discrimination rules concerning AI.

For more information, please connect with our Privacy & Data Security team. 

Authors: 

Anvi Yalavarthy is an associate in Moore & Van Allen's Intellectual Property group. Her work focuses on transactional intellectual property matters, with a particular interest in privacy and data security.

Karin McGinnis, Co-head of Privacy & Data Security, has two decades of experience as a practicing attorney, and is known as a true business partner when litigating and providing counsel to her clients. Karin has assisted clients with privacy and data security issues internationally, including compliance with GDPR, an international ethics hotline, international data transfers, and data breaches affecting consumers overseas. She also has experience negotiating vendor agreements related to AI. Karin is an AI Governance Professional certified through IAPP.

 [1] Algorithmic discrimination means any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under Colorado or federal law.

[2] 2024 Ut. SB 149.

[3] 2023 Tenn. HB 2091.

[4] Various factors are considered to determine significant contribution under existing law.

[5] “[T]he USPTO has determined that existing rules protect the USPTO’s ecosystem against [the] potential perils [of AI].”

