Despite media headlines that may lead one to believe there are no laws applicable to artificial intelligence (AI) in the US, existing federal and state laws apply, along with a series of frameworks issued by various federal agencies. Moreover, AI-specific laws exist in a number of states, and additional states are considering laws addressing various uses of AI-enabled technology. As the AI regulatory landscape evolves in the US, increased scrutiny by lawmakers and regulatory bodies continues to be a common theme.
FEDERAL REGULATION DEVELOPMENTS
Since our last update, there has been significant federal activity related to regulating AI, with the Federal Trade Commission (FTC) taking the lead. In February 2023, the FTC issued guidance regarding the applicability of its enforcement authority under Section 5 of the FTC Act to address unfair or deceptive AI advertising claims.
Specifically, the FTC warned that it will focus on whether marketers make false or unsubstantiated claims about AI-powered products, such as exaggerations of a product's technical capabilities, unproven assertions that an AI-powered product is superior to one without AI, or misrepresentations about whether a product uses AI at all.
The FTC guidance also warns that companies releasing AI-enabled products or services must possess sufficient understanding of the risks and limitations associated with such products or services, such as whether these products or services include inherent biases that may make these offerings inappropriate for certain uses.
Additionally, on April 25, 2023, the FTC and officials from three other federal agencies—the Civil Rights Division of the US Department of Justice, the Consumer Financial Protection Bureau, and the US Equal Employment Opportunity Commission—released a joint statement pledging to “uphold America’s commitment to the core principles of fairness, equality, and justice as emerging automated systems, including those sometimes marketed as ‘artificial intelligence’ or ‘AI,’ become increasingly common in our daily lives—impacting civil rights, fair competition, consumer protection, and equal opportunity.”
However, the FTC is far from the only federal agency that has been active in the AI regulatory space. On January 26, 2023, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework Version 1.0, a voluntary guide for organizations developing, designing, and using AI-related products and services to manage AI risks and promote trustworthy AI systems. The framework is divided into two parts: the first describes the characteristics of trustworthy AI systems, while the second details four categories of functions for addressing AI system risks.
Despite its voluntary nature, NIST's guidance could become an industry best practice, much as occurred with its Framework for Improving Critical Infrastructure Cybersecurity, which enjoys widespread use by a large majority of US companies, as well as many companies outside the United States, and serves as a model in several other countries. NIST anticipates release of an updated 2.0 version of the AI Risk Management Framework in Spring 2023.
STATE-LEVEL AI LEGISLATION AND REGULATIONS
In addition to movement at the federal level, states’ interest in regulating AI services and products continues to grow as evidenced by the 46% increase between 2021 and 2022 in the introduction of bills by state legislators addressing AI, as well as by the creation of state task forces examining the need for AI-centric regulations.
State legislative efforts address myriad topics, including predictive policing technologies; use of facial-recognition technologies by police departments; consumer-focused rights; employment-related issues; the use of automated decision-making by the financial and insurance industries; and healthcare-related issues.
Below is an overview of the current AI regulatory landscape among US states:
Enacted Legislation Addressing AI:
- Illinois’ Artificial Intelligence Video Interview Act applies to all employers and requires disclosure of the use of an AI tool to analyze video interviews of applicants for positions based in Illinois.
- New York City’s AI Law (Local Law 144) regulates employer use of automated employment decision tools in hiring and promotions.
- Vermont’s H.B. 410 created an Artificial Intelligence Commission.
- Washington’s S.B. 5693 appropriated funds for an automated decision-making working group.
Legislation Under Consideration Addressing AI:
- S.B. 313 creates an Office of Artificial Intelligence to oversee the use of AI among state agencies.
- California's AB No. 331 would require a “deployer” and a “developer” of an “automated decision tool” to perform an impact assessment initially and annually thereafter that includes, among other things, a statement of purpose.
- Colorado Division of Insurance's Proposed Algorithm and Predictive Model Governance Regulation includes a governance and risk management framework, documentation mandates, and reporting requirements.
- Connecticut Senate Bill No. 1103 proposes the establishment of an Office of Artificial Intelligence as well as a task force to study AI and develop an AI bill of rights.
- DC’s Stop Discrimination by Algorithms Act of 2023 (reintroduced) would prohibit users of algorithmic decision-making from making algorithmic eligibility determinations in a discriminatory manner.
- Texas HB 2060 would create an AI advisory council.
KEY CONSIDERATIONS WHEN EVALUATING AND IMPLEMENTING AI IN BUSINESS OPERATIONS
- Identify AI applications within operations. Companies should conduct a holistic survey to identify operational decisions or processes in which any part of the decision-making relies on automated systems.
- Conduct documented risk assessments. Once AI applications are identified, companies should assess the risks and mitigation possibilities of current and future AI tools. Companies should consider establishing an internal mechanism for analyzing the costs and benefits of implementing new technologies, assessing the technical, contractual, and organizational risks associated with such technologies, and implementing any relevant controls within the appropriate business units.
- Integrate structural compliance measures throughout the organization at the foundational level. Companies should draft or adapt policies governing the use of AI applications within their organization. Policies should cover common themes such as transparency, accountability, and fairness, including data set integrity, accuracy, foreseeable risks, and social impact. Policies should specifically address how organizations will assess and ensure that implementing AI-related technologies does not produce disparate and unfair outcomes.
- Proactively designate responsibility for accountability and governance. Depending on how pervasive or important AI technologies are to a company, consider appointing an individual responsible for developing, maintaining, and enforcing AI-related policies similar in function to a chief privacy officer.
- Employ existing compliance/risk management programs if applicable. An organization’s approach to AI could fall under its existing risk management practices. Depending on the prevalence of AI within one’s industry and/or organization, consider socializing governance policies with different business units to increase awareness and integrate risk mitigation into day-to-day operations.
For more information on US federal and state initiatives seeking to regulate AI and machine learning systems, view our presentation, “Artificial Intelligence Regulation in the United States,” as part of the firm’s Technology Marathon web series.
Source: JD Supra