The regulatory landscape surrounding artificial intelligence (AI) in the United States is in a state of flux as the government grapples with how to address the rapid development and deployment of AI technologies.
Currently, there is no comprehensive federal legislation specifically regulating AI.
The absence of robust regulations has led to a permissive approach under President Donald Trump, who in January 2025 issued the executive order "Removing Barriers to American Leadership in Artificial Intelligence," rescinding the previous administration's efforts to establish rules focused on the safe and trustworthy development of AI.
This new directive calls for a reevaluation of existing policies to ensure they align with the goal of enhancing America’s competitive edge in AI technology.
While the Trump administration emphasizes innovation, the extent of changes to existing AI-related policies and regulations remains uncertain.
At the congressional level, lawmakers have been deliberating various bills addressing the myriad issues presented by AI technology.
However, with a Republican majority, it is unclear if AI will take precedence or if other priorities will dominate.
Most proposed legislation leans toward establishing voluntary guidelines and best practices rather than imposing strict mandates, reflecting a cautious regulatory approach intended to foster innovation without constraining growth.
Concerns over technological competition, particularly with countries like China, significantly influence this approach.
While existing federal laws offer some guidelines, their application to AI is limited.
Notable examples of current federal statutes that touch on AI include:
1. The Federal Aviation Administration Reauthorization Act, which mandates the review of AI applications in aviation.
2. The National Defense Authorization Act for Fiscal Year 2019, directing the Department of Defense to oversee AI activities.
3. The National AI Initiative Act of 2020, which aims to expand research and development in AI, establishing the National Artificial Intelligence Initiative Office tasked with implementing the U.S. national AI strategy.
In addition to these laws, various frameworks and guidelines are emerging, such as the White House Blueprint for an AI Bill of Rights, which asserts guiding principles for equitable access and the ethical use of AI technologies.
Although this framework was not officially revoked by the Removing Barriers EO, the focus may shift away from its principles under the current administration.
Notably, many leading AI companies, including Adobe, Amazon, and Google, have voluntarily committed to ensuring the safe and responsible development of AI technologies.
This includes internal and external testing of AI systems ahead of their release, sharing information on risk management, and investing in safeguards to ensure transparency and security.
The Federal Communications Commission (FCC) has already illustrated how existing law applies to AI, ruling that the Telephone Consumer Protection Act's restrictions on pre-recorded voice messages extend to AI-generated voices.
Under the Biden administration, the Federal Trade Commission (FTC) adopted a proactive stance on regulating AI, warning companies against using AI technologies that result in discriminatory practices or making unfounded claims.
The FTC has enforced actions against companies like Rite Aid for faulty AI implementations, setting a precedent for future regulatory actions in the AI domain.
On September 12, 2023, the U.S. Senate conducted public hearings to discuss the future of AI regulations, hinting at potential legislation that might require licensing and establish a federal regulatory agency.
These discussions reflect growing recognition of the need for oversight in AI development as lawmakers engage both AI developers and civil society groups to gather insights.
Among the numerous proposed federal laws are:
– SAFE Innovation AI Framework: A bipartisan initiative focusing on guidelines for AI developers and policymakers.
– REAL Political Advertisements Act: Aimed at regulating the use of generative AI in political advertising.
– Stop Spying Bosses Act: Intended to restrict employers from using AI to surveil employees.
– NO FAKES Act: Aims to protect individuals’ rights from unauthorized reproductions through generative AI technologies.
– AI Research Innovation and Accountability Act: Advocating for greater transparency and accountability in high-risk AI systems.
– American Privacy Rights Act: Proposing a unified consumer privacy framework with specific algorithm-related provisions.
In a recent legislative move, House Republicans included a provision in the "One Big Beautiful Bill Act" proposing a 10-year moratorium on state and local AI regulations.
This attempt to preempt state-level regulation met bipartisan opposition, and the Senate removed the provision, reflecting broad agreement that localized regulatory approaches are needed to address AI-related harms.
As federal legislation remains absent, states have begun to act independently to fill the regulatory gap.
Colorado became a frontrunner by enacting the comprehensive Colorado AI Act on May 17, 2024, imposing obligations on developers and deployers of high-risk AI systems without revenue thresholds for applicability.
The Act explicitly focuses on automated decision-making systems that materially affect significant domains, including education and healthcare, emphasizing the mitigation of bias and discrimination.
Following Colorado’s example, several other states like Connecticut, Massachusetts, and New York are considering similar proposals.
Progress continues in California, which in September 2024 enacted several AI laws, notably the Defending Democracy from Deepfake Deception Act, which requires large online platforms to identify and block deceptive election-related content.
Additionally, the California AI Transparency Act, effective January 1, 2026, imposes extensive disclosure requirements on providers of generative AI systems with more than one million monthly users, with penalties for non-compliance.
In a similar vein, the Generative AI: Training Data Transparency Act mandates that developers provide summaries of datasets for generative AI systems.
These developments across various states underscore a concerted effort to establish regulatory frameworks that aim to protect consumers and ensure ethical usage of AI technologies.
In May 2024, Utah also joined this movement when the Utah Artificial Intelligence Policy Act took effect, requiring disclosures regarding the use of generative AI in communications with consumers and distinguishing between requirements for regulated and non-regulated occupations.
With more than 40 state AI bills introduced in 2023, including significant measures in Connecticut and Texas, it is evident that states are keen to take charge of AI regulation.
In Texas, the Texas Responsible AI Governance Act was signed into law on June 22, 2025, focusing on restricting AI use in government contexts and limiting certain types of AI development related to harmful practices.
While the regulatory environment remains unpredictable, the obligations imposed by the aforementioned acts illustrate the growing insistence on accountability, transparency, and ethical considerations surrounding AI technologies.
Internationally, the United States took a step towards cooperation by signing the Council of Europe’s Framework Convention on AI in September 2024, although uncertainties linger about adherence under the Trump administration.
Pending state and federal legislation attempts to grapple with the nuanced complexities of AI regulation while also weighing ethical and societal implications.
As policymakers navigate these challenges, the need for comprehensive solutions that address safety, innovation, consumer protection, and equity in AI technology remains paramount.
In the absence of explicit regulations, companies and entities involved in developing and deploying AI technologies are advised to consult legal counsel to understand the liabilities that may arise under existing laws.
The regulatory approach must continue to evolve, balancing innovation with the critical need to manage risks associated with AI, including addressing issues related to bias, privacy, and civil liberties.
As the discussions on AI regulations progress, the dialogue between the government, industry stakeholders, and civil society will play a crucial role in shaping a sustainable and responsible framework for AI technologies in the U.S.