Artificial intelligence (AI) is transforming every industry, from healthcare and finance to manufacturing and national security. As adoption accelerates, lawmakers are racing to keep pace. New AI legislation in 2025 aims to address growing concerns around privacy, bias, transparency, and accountability.
Organizations that leverage AI must now prepare for stricter AI compliance and regulatory requirements in the U.S. and abroad. Is your business ready for the next wave of AI legislation and enforcement?
Schedule a call to assess your readiness and stay ahead of regulatory changes.
The Landscape of AI Legislation and Regulations in 2025
Artificial intelligence (AI) continues to transform how organizations operate, driving efficiency and innovation across every sector. Automation speeds up repetitive tasks, while generative AI (GenAI) enhances creativity and decision-making at scale.
However, these advancements also raise new risks around data privacy, intellectual property (IP), and the ethical use of AI systems. In response, lawmakers are introducing comprehensive AI legislation and regulatory frameworks to manage these risks and ensure accountability.
To navigate this evolving regulatory landscape, it’s essential to understand:
- The current state of AI legislation in the United States
- The global AI regulatory landscape shaping compliance worldwide
- Emerging trends and what they mean for the future of AI governance
Partnering with an experienced AI compliance advisory firm can help your organization stay ahead, ensuring your AI development and usage align with both current and upcoming AI legislation in 2025 and beyond.
AI Legislation, Frameworks, and Guidelines in the United States
Unlike many other countries, the United States does not yet have a comprehensive federal AI law regulating the use of generative or agentic AI systems. While early efforts, such as the National Artificial Intelligence Initiative Act of 2020, laid groundwork for governance, no central authority currently enforces mandatory national standards for AI compliance.
That said, momentum toward federal AI legislation is growing. The National Institute of Standards and Technology (NIST) has published the AI Risk Management Framework (AI RMF), which serves as a key reference for organizations seeking to manage AI-related risks responsibly. Federal contractors and agencies are increasingly expected, and in some cases required, to align with NIST’s guidance.
For now, AI regulation in the U.S. is largely driven at the state level. The National Conference of State Legislatures (NCSL) tracks all AI-related bills across the 50 states, Washington D.C., and U.S. territories. As of 2025, nearly every state has introduced some form of AI legislation, with more than 100 measures enacted across 38 jurisdictions.
This growing patchwork of state and federal initiatives underscores the importance of proactive compliance: organizations must stay informed as the U.S. moves closer to a unified national AI regulatory framework.
State and Local AI Legislation in the U.S.
AI legislation in the United States is rapidly evolving, with most activity happening at the state and local levels. While many new laws focus on government and education, several directly impact how businesses use artificial intelligence across industries.
A recent Bryan Cave Leighton Paisner (BCLP) report provides a state-by-state snapshot of AI legislation across all 50 states. According to their findings:
- 8 states have fully enacted AI legislation.
- 18 states have both enacted and proposed new AI laws.
- 22 states currently have AI legislation in development.
- 3 states have no active or proposed AI laws impacting business use.
Summary of State AI Legislation Activity
- Enacted Legislation: Arizona, Delaware, Montana, New Hampshire, Oregon, South Dakota, Tennessee, Utah
- Enacted + Proposed: California, Colorado, Connecticut, Florida, Georgia, Illinois, Indiana, Maine, Maryland, Massachusetts, Michigan, Minnesota, New Jersey, New Mexico, New York, Texas, Virginia, Wisconsin
- Proposed Only: Alabama, Alaska, District of Columbia, Hawaii, Idaho, Iowa, Kentucky, Louisiana, Mississippi, Missouri, Nebraska, Nevada, North Carolina, North Dakota, Ohio, Oklahoma, Pennsylvania, Rhode Island, South Carolina, Vermont, Washington, West Virginia
- No Legislation Yet: Arkansas, Kansas, Wyoming
While state-level AI laws differ in scope, most align with broader cybersecurity and data privacy standards—focusing on integrity, transparency, and accountability.
One key concept gaining attention across many states is AI explainability, the ability to demonstrate exactly how and why AI systems make decisions. Organizations must be prepared to document, justify, and audit their AI systems to ensure transparency and compliance with emerging laws.
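What explainability obligations look like in practice varies by statute, but the hypothetical Python sketch below shows one common building block: logging each automated decision with its inputs and top contributing factors so it can later be documented and audited. The function name, record fields, and the credit-scoring example are illustrative assumptions, not requirements drawn from any particular state law.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_ai_decision(model_name: str, model_version: str,
                    inputs: dict, decision: str,
                    feature_contributions: dict, log_path: str) -> dict:
    """Append an auditable record of a single AI decision to a JSON-lines file.

    `feature_contributions` holds whatever attribution values your method produces
    (e.g., SHAP values or simple feature weights), keyed by feature name.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,
        # Hash the raw inputs so the record is tamper-evident without
        # storing sensitive values directly in the audit log.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
        # Keep the top three drivers of the decision for explainability reviews.
        "top_factors": sorted(feature_contributions.items(),
                              key=lambda kv: abs(kv[1]), reverse=True)[:3],
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record why a hypothetical credit model declined an application.
log_ai_decision(
    model_name="credit_risk",
    model_version="2025.03",
    inputs={"income": 42000, "debt_ratio": 0.61, "late_payments": 4},
    decision="decline",
    feature_contributions={"debt_ratio": -0.42, "late_payments": -0.31, "income": 0.12},
    log_path="ai_decision_audit.jsonl",
)
```

A simple append-only log like this gives auditors a concrete trail to review; production systems would add access controls, retention rules, and a documented attribution methodology.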
The NIST AI Risk Management Framework (AI RMF)
The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) is a comprehensive guideline developed in collaboration with public and private sector experts. Its purpose is to help organizations design, deploy, and manage AI systems responsibly, minimizing the security, ethical, and operational risks associated with artificial intelligence.
The framework identifies seven key risk areas organizations should address when developing or using AI:
- Validity and reliability: ensuring AI systems function as intended
- Safety: protecting users and data handled by AI systems
- Security and resilience: maintaining operational integrity against threats
- Accountability and transparency: ensuring decisions can be traced and justified
- Explainability and interpretability: understanding how AI reaches conclusions
- Privacy and confidentiality: safeguarding sensitive data
- Fairness and bias management: promoting equitable outcomes in AI processing
The AI RMF builds upon principles from the NIST Cybersecurity Framework (CSF) but applies them directly to AI-related risks. It includes four core pillars that create a continuous cycle of improvement (a minimal sketch of how these functions might be recorded follows the list):
- Map: Identify and contextualize AI risks within organizational objectives.
- Measure: Assess and monitor the impact of identified risks.
- Manage: Prioritize and mitigate risks based on their potential effect.
- Govern: Establish an organizational culture of responsible AI use and accountability.
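As a minimal sketch of how the four functions might be applied day to day, the hypothetical Python snippet below keeps a simple risk register with one field per function. The field names and the example entry are assumptions for illustration; they are not part of the AI RMF itself.

```python
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    """One row in a simple AI risk register, organized by the AI RMF functions."""
    system: str
    mapped_risk: str    # Map: the risk in its organizational context
    measurement: str    # Measure: how the risk's impact is assessed
    mitigation: str     # Manage: the prioritized response
    owner: str          # Govern: who is accountable for the risk
    status: str = "open"

register: list[AIRiskEntry] = [
    AIRiskEntry(
        system="resume-screening model",
        mapped_risk="potential bias against protected groups in hiring recommendations",
        measurement="quarterly disparate-impact analysis on screening outcomes",
        mitigation="human review of all auto-rejected candidates",
        owner="HR analytics lead",
    ),
]

# A simple governance check: flag any risk without a named accountable owner.
unowned = [entry for entry in register if not entry.owner]
print(f"{len(register)} risks tracked, {len(unowned)} missing an accountable owner")
```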
While NIST AI RMF compliance is not yet a legal requirement, it’s increasingly recognized as a best practice framework for aligning with emerging AI legislation and ethical standards. Public and private sector organizations adopting the framework position themselves ahead of future regulations and strengthen stakeholder trust.
Global AI Legislation and Governance in 2025
Unlike the United States, many countries have already established comprehensive AI legislation and governance frameworks, some at national and others at international levels. While not all of these laws apply directly to U.S.-based organizations, they can still have far-reaching effects, especially for businesses that handle data from international users or operate across borders.
Two of the most influential frameworks shaping the global AI regulatory landscape are:
- ISO/IEC 42001:2023: A global AI management system standard that guides organizations in developing responsible, transparent, and trustworthy AI systems. Although it’s not legally binding, ISO/IEC 42001 is influencing how lawmakers worldwide are drafting and refining AI legislation.
- The EU Artificial Intelligence Act (EU AI Act): A landmark piece of AI legislation from the European Union that establishes risk-based requirements for AI deployment. Similar to the impact of the GDPR, the EU AI Act affects U.S. companies offering products or services to EU residents or processing their data.
Together, these frameworks illustrate the growing momentum toward global AI compliance and accountability. Organizations that understand and align with these standards can minimize legal risk and stay ahead of future AI governance requirements.
The ISO/IEC 42001:2023 Framework and Its Role in Global AI Legislation
One of the most influential global AI governance frameworks is ISO/IEC 42001:2023, a joint publication by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Unlike national AI legislation, ISO 42001 is not a law; it is a global standard that helps organizations establish effective Artificial Intelligence Management Systems (AIMS).
Many countries are using ISO 42001 as a foundation for developing their own AI regulations, making it a valuable reference for businesses seeking to future-proof compliance.
The framework follows a structure similar to other well-known ISO standards, such as ISO/IEC 27001. It includes 10 core clauses, grouped into key focus areas that guide organizations in building trustworthy and transparent AI systems (a brief self-assessment sketch follows the list):
- Context and Leadership: Define your organization’s AI objectives, scope, and leadership roles.
- Planning and Support: Identify AI-related risks, set objectives, and provide the resources and training needed to meet them.
- Operational Controls: Manage day-to-day AI system risks, data integrity, and performance monitoring.
- Evaluation and Improvement: Continuously assess, audit, and improve AI governance practices.
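To make the focus areas concrete, the hypothetical Python sketch below frames them as a lightweight self-assessment checklist. The questions are paraphrased illustrations, not the official ISO/IEC 42001 clause text, and real certification readiness requires a full gap assessment.

```python
# Hypothetical AIMS readiness checklist grouped by the focus areas above.
checklist = {
    "Context and Leadership": [
        "Have we defined the scope and objectives of our AI management system?",
        "Is a named executive accountable for AI governance?",
    ],
    "Planning and Support": [
        "Do we maintain a current inventory of AI systems and their risks?",
        "Are staff trained on responsible AI use?",
    ],
    "Operational Controls": [
        "Are data quality and provenance checks applied before model use?",
        "Is AI system performance monitored against defined thresholds?",
    ],
    "Evaluation and Improvement": [
        "Are internal AI audits scheduled and documented?",
        "Is there a process to act on audit findings?",
    ],
}

def readiness_summary(answers: dict[str, list[bool]]) -> None:
    """Print a per-area completion rate given yes/no answers to each question."""
    for area, questions in checklist.items():
        done = sum(answers.get(area, []))
        print(f"{area}: {done}/{len(questions)} controls in place")

readiness_summary({"Context and Leadership": [True, False]})
```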
While ISO/IEC 42001 is not legally mandated under any global AI legislation, its principles strongly influence emerging laws and regulations across regions such as the European Union, United Kingdom, and Asia-Pacific.
By adopting ISO 42001, organizations can strengthen their AI compliance posture, demonstrate accountability, and stay aligned with evolving international AI governance standards.
The EU Artificial Intelligence Act: A Global Model for AI Legislation
The European Union Artificial Intelligence Act (EU AI Act) is one of the world’s most comprehensive and influential pieces of AI legislation. The act was first proposed in 2021, reached political agreement in late 2023, and was formally adopted in 2024; its first restrictions, which cover prohibited AI practices, began applying in February 2025, with additional provisions rolling out in phases.
The EU AI Act uses a risk-based approach to regulate artificial intelligence systems, classifying them into four categories: unacceptable risk (prohibited), high risk (strictly regulated), limited risk, and minimal risk. The act primarily impacts AI developers, distributors, and organizations operating within or serving users in the EU.
Prohibited AI Practices
The EU AI Act bans any AI use cases deemed to pose an unacceptable risk to human rights or public safety, including:
- Deceptive or manipulative AI behavior (e.g., subliminal messaging)
- Exploiting vulnerable populations through coercive communication
- Social scoring and personality-based profiling
- Real-time facial recognition or biometric identification in public spaces
High-Risk AI Applications
High-risk AI systems, such as those used in biometric identification, critical infrastructure, employment, education, public administration, and law enforcement, must comply with strict requirements, including the following (a minimal human-oversight sketch appears after the list):
- Establishing an AI quality management and risk program
- Implementing data governance and validation processes
- Maintaining technical documentation proving compliance
- Ensuring human oversight, accuracy, and cybersecurity by design
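As one hedged illustration of the human-oversight requirement, the hypothetical Python sketch below routes low-confidence or high-impact model outputs to a human reviewer before they take effect. The domains, threshold, and function names are assumptions chosen for the example, not language from the EU AI Act.

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    decision: str
    confidence: float   # the model's own confidence estimate, 0.0 to 1.0
    impact: str         # e.g., "hiring", "credit", "routine"

HIGH_IMPACT_DOMAINS = {"hiring", "credit", "law_enforcement"}
CONFIDENCE_THRESHOLD = 0.85  # illustrative value, to be set by organizational policy

def requires_human_review(output: ModelOutput) -> bool:
    """Route high-impact or low-confidence outputs to a human before acting."""
    return output.impact in HIGH_IMPACT_DOMAINS or output.confidence < CONFIDENCE_THRESHOLD

def apply_decision(output: ModelOutput) -> str:
    if requires_human_review(output):
        # In a real system this would enqueue the case for a qualified reviewer
        # and record the final human determination in the technical documentation.
        return "pending_human_review"
    return output.decision

print(apply_decision(ModelOutput(decision="reject", confidence=0.92, impact="hiring")))
# -> pending_human_review (hiring is treated as high impact in this example)
```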
Aligning with ISO/IEC 42001 for Compliance
While the EU AI Act is distinct from ISO/IEC 42001:2023, both share core principles of accountability, transparency, and ethical AI governance. Implementing the ISO 42001 framework can help organizations align with EU AI legislation requirements and prepare for future global standards.
Together, these measures set a new global benchmark for responsible AI regulation, one that other regions, including the U.S., may follow in the coming years.
Emerging AI-related Compliance Considerations
While AI legislation continues to evolve globally, many existing industry regulations are also adapting to the use of artificial intelligence. Organizations must consider how established frameworks like HIPAA and PCI DSS address AI to ensure ongoing compliance and data protection.
In healthcare, the Health Insurance Portability and Accountability Act (HIPAA) doesn’t yet include detailed AI-specific requirements. However, experts are increasingly concerned about how AI tools interact with protected health information (PHI) and patient privacy. Partnering with a HIPAA compliance advisor who understands AI governance can help organizations manage these emerging risks and maintain regulatory compliance.
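As a hypothetical example of limiting PHI exposure when AI tools process free text, the Python sketch below masks a few common identifiers before the text would be sent to any external model. The patterns shown are illustrative only; they are not a complete de-identification method and do not satisfy HIPAA's Safe Harbor requirements on their own.

```python
import re

# Illustrative patterns only; real de-identification needs far broader coverage
# (all 18 HIPAA identifier categories) and clinical and legal review.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_phi(text: str) -> str:
    """Replace matched identifiers with labeled placeholders before AI processing."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Patient called 555-867-5309 on 04/12/2025 regarding lab results."
print(redact_phi(note))
# -> "Patient called [PHONE REDACTED] on [DATE REDACTED] regarding lab results."
```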
Similarly, the Payment Card Industry Data Security Standard (PCI DSS) sets requirements for securing cardholder data (CHD). As businesses integrate AI-driven systems for fraud detection or payment automation, maintaining PCI compliance becomes more complex. Working with a PCI compliance expert familiar with AI-related challenges helps organizations safeguard payment data while innovating responsibly.
Prepare for Future AI Compliance Today
The global landscape of AI legislation is expanding rapidly, with new laws and standards emerging across the U.S. and worldwide. While there’s still no comprehensive federal AI law in the United States, several states have introduced their own rules to address AI ethics, transparency, and responsible use. On a global level, frameworks such as ISO/IEC 42001:2023 and the EU AI Act are setting the foundation for trustworthy and compliant AI operations.
As AI regulations continue to evolve, organizations must stay proactive to remain compliant and competitive. Partnering with a trusted cybersecurity and compliance firm like RSI Security helps your team anticipate upcoming requirements, align with global frameworks, and implement effective AI governance strategies.
At RSI Security, we believe that discipline drives innovation. By building a strong compliance foundation today, your organization can embrace AI confidently and securely tomorrow.
Contact us to learn how our experts can help you prepare for future AI compliance challenges.
Download Our NIST Datasheet