AI regulation in the United States has changed dramatically in recent years, evolving from a nascent policy area into a landscape of legal and enforcement systems. As AI becomes more capable and is used in nearly every part of the American economy, both federal and state governments have grappled with the challenge of encouraging innovation while making sure consumers, civil rights, and national security are protected. Today, the rules governing AI are a mix of executive-branch actions, congressional laws, federal agency guidance, state laws, and enforcement actions. Together, these elements determine how AI is created, used, and controlled across the country.
Federal Executive Leadership and Policy Framework
The federal approach to regulating AI has mainly come from executive orders and guidance from government agencies, rather than from laws passed by Congress. Under the Biden administration, the most important development was Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” signed on October 30, 2023. This was the most detailed government action on AI regulation so far. It set requirements for testing AI systems for safety, outlined oversight for federal agencies, and included protections for civil rights.
This order required developers of the most powerful AI systems to share their safety test results with the federal government. It set new standards for AI safety and security, directed federal agencies to examine how AI affects workers and civil rights, created Chief AI Officer positions within federal agencies, and established mechanisms to protect the public from AI-related harm, with particular attention to privacy and civil rights.
However, this approach changed sharply with the new administration. On January 20, 2025, President Trump rescinded Executive Order 14110 as part of his “Initial Rescissions of Harmful Executive Orders and Actions.” Three days later, on January 23, 2025, Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which took a very different approach, focusing on reducing regulation and promoting American AI leadership.
The Trump administration’s strategy highlights free-market innovation and less government interference. Executive Order 14179 told federal agencies to “revise or rescind all policies, directives, regulations, and other actions taken by the Biden administration” that don’t support maintaining American leadership in AI. The order required the development of an AI Action Plan within 180 days to keep the U.S. at the forefront of AI, focusing on human well-being, economic strength, and national security.
In July 2025, the Trump administration released “Winning the Race: America’s AI Action Plan,” which outlines three main goals: speeding up AI innovation, building American AI infrastructure, and leading in international AI diplomacy and security. This plan is a major shift from the previous administration’s focus on safety and civil rights, instead putting a stronger emphasis on competitiveness and reducing the regulatory burden on private companies.
Congressional Action and Federal Legislation
While executive orders have largely shaped federal AI policy, Congress has taken a smaller but growing role. The main federal law in this area is the National Artificial Intelligence Initiative Act of 2020, enacted as part of the National Defense Authorization Act for Fiscal Year 2021. The law established the National Artificial Intelligence Initiative, set up a National AI Initiative Office to coordinate federal AI efforts, and created an interagency committee to align AI programs across federal agencies.
The 2020 statute focused on supporting AI research, especially through the National Science Foundation, which helped create a network of National AI Research Institutes. It also formed a National AI Advisory Committee that released over 30 reports between 2022 and 2024. The statute mainly aimed to boost research and development rather than set strict rules, showing a preference for encouraging innovation before putting in place regulations.
A major recent development was the passage of the TAKE IT DOWN Act in 2025, among the first federal statutes to target a specific harmful use of AI. The law makes it a crime to publish non-consensual intimate images, including AI-generated deepfakes, and requires online platforms to remove such content within 48 hours of receiving a valid removal request. It was enacted in response to the spread of AI-generated non-consensual intimate imagery, which disproportionately affects women and young people.
The TAKE IT DOWN Act includes both criminal and civil penalties, requiring covered platforms to set up systems for reporting and removing harmful content quickly. This law shows a move toward more targeted regulation of AI applications that clearly harm society, and it reflects a shared agreement across political lines to protect people from abuse made possible by AI.
Federal Agency Regulatory Framework
The regulation of artificial intelligence in the United States has mainly been handled by federal agencies, each working within its own statutory authority and area of responsibility. The result is a complicated landscape of overlapping duties and regulatory methods that differ by sector and by how the AI is used.
National Institute of Standards and Technology (NIST)
NIST became a key player in AI governance through the development of the AI Risk Management Framework (AI RMF), released in January 2023. The framework gives organizations voluntary guidance for identifying, evaluating, and managing AI-related risks throughout the technology’s lifecycle. Built around four core functions (Govern, Map, Measure, and Manage), the AI RMF has gained wide adoption as a standard for responsible AI development.
The framework highlights important principles for trustworthy AI: transparency, fairness, accountability, and robustness. Although it is not mandatory, the NIST AI RMF has impacted both private sector activities and regulatory expectations across various agencies. However, under the Trump administration, agencies were asked to update the framework to remove references to misinformation, diversity, equity, inclusion, and climate-related issues.
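To make the framework’s structure concrete, the hypothetical Python sketch below shows one way an organization might record a single risk against the four RMF functions. The class, its field names, and the example values are invented for this illustration; NIST does not prescribe any particular data format.

```python
# Hypothetical illustration: a minimal risk-register entry an organization might
# keep while applying the AI RMF's four functions. The field names are invented
# for this sketch; NIST does not define a data format for the framework.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIRiskEntry:
    system_name: str            # the AI system under review
    accountable_owner: str      # Govern: who is responsible for this risk
    context_of_use: str         # Map: intended use and deployment context
    measurement: str            # Measure: how the risk is tested or quantified
    mitigation: str             # Manage: the treatment or control applied
    trustworthiness_goals: List[str] = field(default_factory=list)

entry = AIRiskEntry(
    system_name="resume-screening-model",
    accountable_owner="Chief AI Officer",
    context_of_use="Ranking applicants for interview invitations",
    measurement="Selection-rate disparity across demographic groups",
    mitigation="Threshold recalibration plus periodic human review",
    trustworthiness_goals=["fairness", "transparency", "accountability"],
)
print(entry)
```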
Federal Trade Commission (FTC)
The FTC has taken an increasingly active enforcement role, using its authority under Section 5 of the FTC Act to address unfair or deceptive practices involving AI. In September 2024, the agency launched “Operation AI Comply,” a campaign targeting companies that use AI to deceive or harm consumers. The initiative produced enforcement actions against five companies, including DoNotPay for making false claims about its “AI lawyer” service and three companies that used AI hype to market sham business opportunities. These actions signal that existing consumer protection law applies fully to AI applications, with Chair Lina Khan stating that “there is no AI exemption from the laws on the books.”
The FTC has also signaled interest in surveillance pricing, AI investments and company partnerships, and the risks of generative AI, including deepfakes and voice clones. The agency has issued requests for information to companies that use AI and consumer data to set individually targeted prices.
Consumer Financial Protection Bureau (CFPB)
The CFPB has focused heavily on the use of AI in financial services, highlighting that existing consumer financial laws apply fully to AI systems. The bureau has identified specific compliance risks in automated customer service, fraud detection models, and loan approval systems.
In response to a request from the Treasury Department about AI in financial services, the CFPB stressed that “there is no ‘fancy new technology’ exemption from existing consumer financial laws.” The bureau has raised concerns about AI chatbots providing incorrect information, algorithmic discrimination in lending decisions, and privacy risks linked to AI-driven financial services.
The CFPB emphasizes algorithmic transparency, requiring lenders to explain how AI decisions are made, and bias mitigation, requiring companies to detect and correct discrimination in their models. The bureau has indicated that courts have already ruled that using algorithmic decision-making tools can be considered discriminatory under disparate impact liability theories.
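As an illustration of the kind of disparity screen a bias-mitigation program might run, the sketch below computes a simple adverse impact ratio using the conventional “four-fifths” threshold drawn from employment-discrimination guidance. This is not a test the CFPB prescribes, and the data is fabricated; real fair-lending analysis relies on multiple metrics and statistical significance testing.

```python
# Illustrative sketch of one common fairness screen: the "four-fifths" adverse
# impact ratio. Regulators do not prescribe this exact test for lending models;
# it is shown only as an example of a disparity check a bias-mitigation program
# might run. All data below is made up.

def approval_rate(decisions):
    """Share of applicants approved; decisions is a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical model outcomes (1 = approved, 0 = denied).
reference_group = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approved
protected_group = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]   # 50% approved

ratio = adverse_impact_ratio(protected_group, reference_group)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the conventional four-fifths threshold
    print("Disparity flagged for further review and possible mitigation.")
```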
U.S. Patent and Trademark Office (USPTO)
The USPTO has come up with clear guidelines for patent applications related to artificial intelligence. These guidelines cover whether AI-related inventions can be patented and how AI tools are used during the patent process. In July 2024, the USPTO released updated rules that clarify AI inventions are not automatically considered abstract ideas that can’t be patented, especially if they involve specific hardware or are used in practical applications.
The guidelines state that inventions made with AI assistance are not automatically unpatentable, but a human inventor must have made a significant contribution. The USPTO also addressed the use of AI by patent practitioners: when AI is used in a way that is material to patentability, its use must be disclosed, and AI tools cannot be used to sign official patent documents.
State-Level Regulatory Innovation
In the absence of comprehensive federal AI legislation, states have begun to fill the gap with their own AI regulations. Colorado and California lead this effort, establishing binding legal rules that place direct obligations on private companies that develop or deploy AI.
Colorado Artificial Intelligence Act
Colorado enacted the first comprehensive state-level AI law in the US, the Colorado Artificial Intelligence Act (CAIA), through Senate Bill 24-205 in May 2024. The law takes a risk-based approach similar to the EU AI Act, focusing on “high-risk artificial intelligence systems” that influence consequential decisions in areas such as employment, housing, financial services, healthcare, insurance, education, and legal services.
CAIA makes developers and deployers of high-risk AI systems responsible for protecting consumers from algorithmic discrimination. Developers must provide information about a system’s purpose, intended uses, data sources, known limitations, and the steps taken to reduce bias. Deployers must conduct impact assessments, maintain risk management programs, and notify consumers when they are interacting with AI systems.
The law gives the Colorado Attorney General the power to enforce these rules, with fines up to $20,000 for each violation. However, private lawsuits are not allowed. The law’s original start date, February 1, 2026, was delayed to June 30, 2026, due to pressure from the industry and concerns from lawmakers.
This delay shows the ongoing tension between encouraging innovation and setting strong rules. Governor Jared Polis has expressed concern about unintended effects and wants to refine key definitions and the framework for compliance. The changes are expected to focus on narrowing which systems are considered high-risk while keeping consumer protections in place.
California AI Legislation
California has taken a more piecemeal but thorough approach. In September 2024, Governor Gavin Newsom signed two important laws: Senate Bill 942 (California AI Transparency Act) and Assembly Bill 2013, both effective January 1, 2026.
SB-942 requires “covered providers” – those with generative AI systems that have over a million monthly users and are publicly available in California – to offer free AI detection tools and clearly label content that is AI-generated. It applies to image, video, and audio content, with a penalty of $5,000 per violation per day if not followed.
AB-2013 focuses on data transparency during training. It requires developers of generative AI models to publicly share summaries of the data they used for training. This includes data sources, types of content, and any copyrighted materials in the training data.
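A hypothetical sketch of how a developer might organize such a summary in machine-readable form appears below. The field names are illustrative only and do not reproduce the statute’s exact list of required disclosures.

```python
# Hypothetical sketch of a public training-data summary of the kind AB-2013
# contemplates. Field names and values are illustrative; they are not the
# statute's required elements.
import json

training_data_summary = {
    "model": "example-genai-model-v1",   # assumed model name
    "data_sources": [
        {"name": "Public web crawl", "collection_period": "2021-2024"},
        {"name": "Licensed news archive", "collection_period": "2010-2023"},
    ],
    "content_types": ["text", "images"],
    "includes_copyrighted_material": True,
    "includes_personal_information": False,
    "synthetic_data_used": True,
}

# Publish alongside the human-readable summary on the developer's website.
print(json.dumps(training_data_summary, indent=2))
```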
California has also updated its consumer privacy law to include AI systems that can generate personal information. Additional laws have been passed dealing with deepfakes, election integrity, and the use of AI by the government.
Broader State Activity
According to the National Conference of State Legislatures, all 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C. introduced AI-related legislation during the 2025 legislative session, and thirty-eight states adopted roughly 100 measures. This volume of state activity reflects both the significance of AI and the absence of a comprehensive national law.
State laws address topics such as deepfake detection, algorithmic bias in employment and housing, transparency in government use of AI, and data privacy. The result is a patchwork of rules for companies operating across state lines, which could fuel pressure for federal standards or outright preemption.
Enforcement Mechanisms and Compliance Requirements
The enforcement of AI regulations in the United States involves multiple levels of government and different regulatory approaches, which makes it complicated for organizations using or developing AI systems to comply with the rules.
At the federal level, regulation happens mainly through existing sector-specific agencies that apply traditional legal principles to AI applications. The FTC’s Operation AI Comply shows how consumer protection agencies can use established laws about unfair and deceptive practices to deal with harm caused by AI. Similarly, the CFPB uses laws related to fair lending and consumer protection to watch how AI is used in financial services.
Federal enforcement has mainly targeted areas like false or misleading claims about AI’s abilities, using AI to support deceptive business practices, algorithmic bias in lending and employment, and AI tools that help in fraud or manipulation. Penalties can include money fines, court orders, being excluded from government contracts, and even criminal charges in serious cases.
State-level enforcement varies considerably but often carries broader authority and stiffer penalties. Colorado relies on Attorney General enforcement, with fines of up to $20,000 per violation and no private right of action. California’s various AI laws create multiple enforcement paths, including actions by the Attorney General, local governments, and, in some cases, private litigants.
The compliance burden depends heavily on which laws apply and which activities they cover. Federal rules mostly reach government contractors and specific regulated industries, while state laws can apply to any business operating within the state. Organizations must reconcile requirements from multiple agencies and jurisdictions, often with the help of specialized counsel and robust compliance programs.
International Comparisons and Global Context
The U.S. approach to AI regulation is quite different from other major regions, especially the European Union, which has created a comprehensive and binding regulatory framework through the EU AI Act. The EU focuses on protecting fundamental rights and has strict compliance rules with serious penalties, while the U.S. prefers a more decentralized, sector-specific approach that supports innovation and market-driven solutions.
The EU AI Act, which entered into force in August 2024, sets uniform rules across all member states and applies to companies operating in the EU. In contrast, the U.S. system produces a mix of federal guidance, state laws, and sector-specific regulations, which can lead to inconsistent requirements and compliance challenges.
The “Brussels Effect” could influence U.S. AI regulation as American companies operating globally must follow EU rules, which might create global standards. However, the Trump administration’s push to reduce regulatory barriers and promote competitiveness suggests the U.S. will continue to follow a different path than the EU.
Other regions are also creating their own AI regulatory frameworks, with many looking to the EU or U.S. models as examples. The lack of comprehensive federal U.S. legislation could give other regions the chance to set influential global standards, which might put American companies at a disadvantage if their regulatory environment becomes less predictable or thorough.
Future Outlook and Challenges
The future of AI regulation in the United States faces several major challenges and uncertainties that will shape the regulatory environment in the coming years. The key conflict between encouraging innovation and protecting consumers, civil rights, and national security keeps driving policy discussions and regulatory methods at all levels of government.
The current administration’s focus on reducing regulatory barriers and boosting American AI leadership suggests a continued preference for light federal regulation and self-governance by the industry. This might create space for more state-level action, leading to a more fragmented regulatory environment that could complicate compliance for companies operating in multiple regions.
Enforcement efforts are likely to grow as regulators gain more experience with AI applications and develop better methods for oversight. The FTC’s Operation AI Comply shows the start of structured enforcement, and other agencies are expected to build similar capabilities. State enforcement will also likely increase as laws like the Colorado AI Act take effect and attorneys general gain more experience with AI-related investigations.
The fast pace of technological development poses ongoing challenges for regulators who need to balance the need for clear, predictable rules with the flexibility to address new risks and uses. Generative AI, foundation models, and emerging areas like autonomous systems are evolving faster than regulatory frameworks can keep up, requiring continuous improvement and updating of existing methods.
International cooperation and concerns about competitiveness will also influence the development of U.S. AI regulations. As other regions, especially the EU, implement comprehensive frameworks with global reach, U.S. policymakers will face pressure to ensure American companies can compete effectively while maintaining proper protections for consumers and civil rights.
The role of Congress remains unclear: despite the many AI-related bills introduced in recent sessions, comprehensive federal legislation has yet to pass. Political dynamics and competing priorities may continue to limit federal legislative action, leaving enforcement to agencies and the states. However, growing awareness of AI’s importance and potential risks may eventually prompt more substantial congressional involvement.
Private sector compliance strategies must consider this evolving and fragmented regulatory environment. Companies developing or using AI systems need strong governance frameworks that can adapt to changing rules while keeping operations efficient and competitive. This includes creating solid risk management programs, keeping thorough records and audit capabilities, and building relationships with regulators and compliance professionals who understand the fast-changing legal landscape.
The success of the U.S. approach to AI regulation will depend on its ability to support innovation while effectively addressing the real risks that AI systems pose to individuals and society. As the technology continues to grow and its uses expand, the regulatory framework will need to evolve to ensure that American leadership in AI development translates into leadership in responsible and beneficial AI use that serves the public interest while maintaining competitive advantages in global markets.