AI has gone mainstream in all spheres of our lives, including business operations, with a myriad of use cases — from credit scoring in banking and product recommendations on e-commerce websites to content creation, and software coding with generative AI tools. Forrester predicts that the off-the-shelf AI software market will have a 22% CAGR from 2023 to 2030, reaching $227 billion, indicating that vendors are meeting ever-growing user demand.
Despite the benefits of leveraging AI-driven systems like productivity boost and decision-making support, concerns around the risks associated with AI increase. System output biases, AI hallucinations, and unconsented use of copyrighted material as training data for GenAI tools are just a few examples of issues drawing public attention and fueling discussion on responsible development and use of AI. In the legal field, these discussions transform into regulatory frameworks and bills.
In this article, we’ll discuss the concepts of ethical and responsible AI and explore examples of responsible AI legislation in the U.S. that address concerns around systems’ transparency, accountability, and security.
Clarifying Terms: Ethical AI vs. Responsible AI
Before delving into US regulations around this technology, it’s essential to understand what responsible AI is and how it differs from ethical AI.
Responsible AI is the set of practices that help businesses develop, deploy, and govern AI systems while complying with ethics and laws. It aims to ensure that the benefits of AI-driven solutions outweigh their potential negative impacts and risks.
Responsible AI aims to address three main ethical risks with AI — bias, the lack of explainability, and data privacy. Consequently, responsible AI is a broader concept than ethical AI, as the latter focuses primarily on the moral and ethical considerations related to AI, such as fairness, transparency, accountability, and respect for privacy and human rights. Meanwhile, responsible AI addresses a broader range of concerns, including security and legal compliance.
Potential Risks Posed by AI Systems
Understanding the economic, societal, and ethical risks of AI systems is crucial within the U.S. legislative context, as it illuminates the reasons behind policymakers’ efforts to develop mechanisms that minimize them.
Let’s now explore the key negative consequences driving these legislative initiatives.
Bias in system outputs: Bias is the phenomenon where a model produces unfair or subjective outputs for several reasons, including unbalanced (over-representative and under-representative) and incomplete datasets and mistakes in programming, when a developer might unfairly prioritize weighting factors in a model’s decision-making based on their own biases.
The black box problem: Solutions powered by black box models lack interoperability, meaning it’s impossible to explain to users and stakeholders why and how a model made certain decisions or predictions.
Privacy concerns: With its ability to extract, link, and act on sensitive personal information about individuals’ identities and interests, AI poses a great risk of exploiting and exposing personal data.
Misinformation using deepfakes: This risk entails spreading false or misleading information and damaging a public figure’s reputation with the help of synthetic media. Using political deepfakes is a relevant issue for the US, given upcoming presidential elections, with some experts noting that the 2024 race might become the first deepfake election.
Job displacement: AI automation may lead to job losses. According to the World Economic Forum’s Future of Jobs Report 2023, jobs that might become obsolete quickly due to AI include data entry clerks, bank tellers, cashiers, as well as administrative and executive secretaries.
Copyright infringement: There have been numerous lawsuits from content creators and artists against AI companies regarding their use of copyrighted materials for AI model training (Spoiler: the biggest AI companies don’t think they should pay for using copyrighted content.) Lawmakers are also raising questions on whether AI-generated content should be copyrighted and who owns such content.
Threats to national security: In a recently published Executive Order, the Biden Administration notes that irresponsible use of AI may pose risks to national security. It also mandates that tech companies assess their models for potential national security vulnerabilities and misuses (we’ll touch upon the legislation later.) In addition, the Department of Homeland Security claims that the widespread availability of emerging cyber and AI tools, including generative AI ones, will likely empower nation-state adversaries in running misinformation, disinformation, and malinformation campaigns against the country in 2024.
Overview of Responsible AI-Oriented Legislation in the U.S.
As AI technologies continue to advance rapidly, lawmakers in the U.S. are working to keep up with this swiftly-evolving technology. It’s time to look at pieces of legislation to understand better what measures government representatives consider essential to increase the trustworthiness of AI systems. This section will explore some examples of responsible AI frameworks and bills introduced in 2023 (the exception is only the Blueprint for an AI Bill of Rights, published in October 2022).
Blueprint for an AI Bill of Rights
The Blueprint for an AI Bill of Rights, developed by the White House Office of Science and Technology, introduces five principles that should guide the design, use, and deployment of automated systems to protect civil rights in the AI age:
-
Safe and effective systems
-
Algorithmic discrimination protections
-
Data privacy
-
Notice and explanation
-
Human alternatives, consideration, and fallback.
SAFE Innovation Framework for Artificial Intelligence
Senate Majority Leader Charles Schumer proposed the SAFE Innovation Framework to help guide Congress in developing future AI legislation. Its key objectives include:
-
Ensuring national security through responsible AI deployment
-
Fostering accountability to mitigate bias and misinformation
-
Defining information about these systems that the government and general public need to know
-
Supporting US-led innovation in the field
Bipartisan Framework for U.S. AI Act
The Bipartisan Framework for US AI Act by Senators Richard Blumenthal and Josh Hawley is a legislative blueprint for upcoming policies. It suggests establishing an independent oversight body, ensuring legal accountability for harms, defending national security, promoting transparency, and protecting consumers and children.
National AI Commission Act
In the National AI Commission Act, Members of the House of Representatives (Ted Lieu, Ken Buck, Anna Eshoo, and others) suggested creating a blue-ribbon commission that would make recommendations for a comprehensive AI regulatory framework.
AI Risk Management Framework (AI RMF 1.0)
AI RMF 1.0 was released by The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST), which created it in collaboration with the public and private sectors. The document is designed to help organizations and individuals manage AI risks and develop trustworthy AI solutions. It offers four functions to help organizations address the risks of AI tools in practice: govern, map, measure, and manage.
Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
The Executive Order, which was signed by President Biden on October 30, 2023, establishes an initial framework for regulating and governing the development and deployment of AI systems.
The legislation demands accountability from tech companies on the operation of their AI systems, requiring them to share the results of red-teaming tests with the government before deployment. A red teaming test is a cybersecurity assessment that simulates a real-world attack on an organization’s systems or network to uncover vulnerabilities and weaknesses in its security defenses. This legislation also suggests that federal agencies watermark synthetic content in their communications.
The order builds upon previous works, such as the Blueprint for an AI Bill of Rights and the AI Risk Management Framework.Numerous bills focusing on specific challenges and aspects were introduced apart from the frameworks. The law firm Covington classifies them into six major categories.
AI legislation focusing on specific subject matters | |
---|---|
Target topic |
Bill title |
Protection of national security |
AI and Biosecurity Risk Assessment Act |
Disclosure of AI-generated products |
|
Support of AI research and development |
|
Protection from deepfakes |
Protect Elections from Deceptive AI Act Require the Exposure of AI-Led (REAL) Political Advertisements Act |
AI’s impact on the workforce |
|
The federal government’s use of AI |
The listing isn’t fully inclusive, as the states are also working on AI regulation, introducing bills regarding data privacy protection, algorithmic discrimination, deceptive media and deepfakes, and increasing knowledge of AI among government entities. According to Ernst & Young, 32 states introduced 160 AI-focused bills as of fall 2023.
It provides practical solutions for building AI chatbots that protect sensitive data and adhere to industry regulations and ethical standards. Feel free to download ourwhitepaper to learn more about the framework.
The Future of AI Regulation in the U.S.
The issuance of the Executive Order by the Biden Administration is an essential step towards comprehensive federal regulation and legislation around AI, which the country currently lacks and which numerous state-level laws aim to compensate for.
The Executive Order will likely lead to the introduction of new AI laws in the U.S. in 2024 and beyond.
While lawmakers are struggling to keep pace with the evolution of technology, self-regulation by the tech industry is vital in minimizing risks associated with building and using AI-driven software. Akvelon has taken a proactive approach towards self-regulation by introducing and utilizing the Security and Compliance LLM Testing Framework.
We’ll keep tabs on the evolution of US AI governance to ensure our continued compliance with data privacy, security, and responsible AI laws in all of our custom AI-powered solutions. If you need an expert opinion on necessary changes to the software development cycle to be on the safe side or want to build a compliant AI software from scratch, contact us.