Generative AI Regulation in the U.S. & EU
FuturePoint Digital's ongoing segment on AI legal & regulatory updates
The current state of U.S. generative AI regulation is characterized by a notable increase in legislative activity over the past year. As of April 2024, 200 AI-related bills have been introduced at the federal level, up from the 181 proposed in 2023. This reflects a continued focus on AI governance and the need to address both the challenges and the opportunities presented by AI technologies.
At the state level, the trend of active involvement in AI regulation continues, with over 450 AI-related bills introduced across more than 40 states. Utah's AI Policy Act remains a notable example of state-level regulation, and other states are following suit with their own AI-related legislation.
The U.S. government has also been actively engaging with international partners to develop a shared understanding of AI governance and to promote the responsible development and deployment of AI technologies. The U.S. has participated in several international AI governance forums and has been a key player in shaping global AI norms and standards.
In response to the rapid growth of AI technologies and the increasing number of regulations, industry stakeholders have been actively engaging with policymakers to ensure that regulations are informed by industry expertise and best practices. This collaboration is aimed at striking a balance between promoting innovation and ensuring the responsible use of AI technologies.
Overall, the U.S. regulatory landscape for AI in 2024 continues to evolve, with both the federal and state governments taking a proactive approach to AI governance. This trend reflects the growing importance of AI technologies and the need for a comprehensive and coordinated approach to managing their development and deployment.
U.S. vs. Europe
As of April 2024, the European Union (EU) has passed the AI Act, approved by the European Parliament in March of this year, with its obligations set to phase in over the following months and years. The AI Act introduces a comprehensive legal framework to regulate AI technologies in Europe, setting a high bar for AI regulation with a focus on transparency, fairness, and human oversight. The Act categorizes AI applications into risk tiers, with more stringent requirements for high-risk applications in areas such as biometric identification, education, worker management, the administration of justice, and law enforcement. It also mandates that artificially generated content be disclosed as such and that providers of large language models publish summaries of their training data.
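To make the tiered structure concrete, here is a minimal sketch in Python of how a compliance team might triage systems against the Act's four tiers (unacceptable, high, limited, minimal). The use-case lists and obligation summaries below are deliberately abbreviated illustrations of the Act's approach, not its full Annex III taxonomy, and nothing here is legal advice.

```python
# Illustrative sketch only: a simplified triage of AI use cases into the
# EU AI Act's four risk tiers. The tier names reflect the Act; the
# use-case sets and obligation notes are abbreviated examples.

HIGH_RISK_DOMAINS = {
    "biometric_identification",
    "education",
    "worker_management",
    "administration_of_justice",
    "law_enforcement",
}

PROHIBITED_PRACTICES = {
    "social_scoring",           # examples of practices the Act bans outright
    "subliminal_manipulation",
}

def classify_risk(use_case: str, is_generative: bool = False) -> str:
    """Return an approximate EU AI Act risk tier for a use case."""
    if use_case in PROHIBITED_PRACTICES:
        return "unacceptable (prohibited)"
    if use_case in HIGH_RISK_DOMAINS:
        return "high (conformity assessment, human oversight, logging)"
    if is_generative:
        # Generated content must be disclosed as such; LLM providers
        # must also publish training-data summaries.
        return "limited (transparency obligations)"
    return "minimal (no additional obligations)"

print(classify_risk("worker_management"))            # high (...)
print(classify_risk("chatbot", is_generative=True))  # limited (...)
```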
In the United States, the federal government has continued to encourage industry standards and guidelines through executive orders and the involvement of federal agencies, such as the National AI Initiative Office. However, the U.S. approach still allows for state-level regulations, resulting in a potentially fragmented regulatory landscape.
In addition to the federal and state legislative efforts noted above, the U.S. government continues to engage international partners on AI governance. This collaborative approach aims to ensure that AI technologies are developed and deployed in a manner that respects human rights, privacy, and democratic values, while also fostering innovation and economic growth.
The key difference between the U.S. and EU approaches to AI regulation remains the degree to which the rules are binding. The EU's AI Act enforces strict penalties for violations, whereas the U.S. approach continues to rely more on industry self-regulation and non-binding guidelines. This divergence reflects the two regions' different regulatory philosophies: the EU prioritizes comprehensive legislation, while the U.S. leans toward a more flexible, industry-led approach.
Overall, the U.S. and EU regulatory landscapes for AI in 2024 continue to evolve, with both regions taking steps to address the challenges and opportunities presented by AI technologies.
Anticipated U.S. Future Direction
The anticipated future direction of the U.S. in AI regulation appears to be a combination of regulatory and non-regulatory approaches, emphasizing industry self-regulation and best practices while also setting some specific standards and guidelines. This approach is characterized by a focus on protecting privacy, promoting innovation, and ensuring AI safety and security.
Regulation through Executive Orders: President Joe Biden's executive order on AI (Executive Order 14110), issued in October 2023, aims to address both the potential harms and benefits of AI. It includes directives to protect Americans' privacy, develop AI principles and best practices, and accelerate the hiring of AI professionals in government. The order invokes the Defense Production Act to require companies to notify the federal government when training large AI models above a compute threshold and to share the results of safety tests.
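As a rough illustration of how that reporting trigger works: Executive Order 14110 sets the threshold at models trained with more than 10^26 integer or floating-point operations. The sketch below estimates training compute with the common 6 × parameters × tokens rule of thumb for dense transformers; the model size in the example is hypothetical, and the approximation is our assumption, not part of the order.

```python
# Back-of-envelope check against the reporting threshold in EO 14110
# (more than 1e26 integer or floating-point training operations).
# The 6 * N * D estimate is a common rule of thumb for dense
# transformer training compute, assumed here for illustration.

REPORTING_THRESHOLD_OPS = 1e26

def training_compute(params: float, tokens: float) -> float:
    """Approximate training operations as 6 * parameters * tokens."""
    return 6 * params * tokens

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
ops = training_compute(70e9, 15e12)
print(f"{ops:.1e} ops -> must report: {ops > REPORTING_THRESHOLD_OPS}")
# ~6.3e24 ops, well below the 1e26 threshold
```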
Non-Regulatory Measures: The U.S. approach also involves non-regulatory measures such as guidelines and standards issued by agencies like NIST. For example, NIST released its Artificial Intelligence Risk Management Framework 1.0 in January 2023. This framework is a voluntary, non-sector-specific, use-case-agnostic guide for technology companies designing, developing, deploying, or using AI systems.
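The RMF is a prose framework with no reference implementation, but its four core functions (Govern, Map, Measure, Manage) lend themselves to a simple internal risk register. The sketch below is one hypothetical way a team might track those functions per AI system; the field names and example risks are our own assumptions, not NIST's.

```python
# Hypothetical sketch: tracking the NIST AI RMF's four core functions
# (Govern, Map, Measure, Manage) for one AI system in a risk register.

from dataclasses import dataclass, field

@dataclass
class RMFEntry:
    system_name: str
    # Govern: accountability for this system's policies and oversight
    governance_owner: str = "unassigned"
    # Map: risks identified in the system's context of use
    identified_risks: list[str] = field(default_factory=list)
    # Measure: the metric used to assess each identified risk
    metrics: dict[str, str] = field(default_factory=dict)
    # Manage: mitigation decisions taken so far
    mitigations: list[str] = field(default_factory=list)

    def outstanding_risks(self) -> list[str]:
        """Risks identified during Map that have no metric from Measure."""
        return [r for r in self.identified_risks if r not in self.metrics]

entry = RMFEntry("resume-screening-model", governance_owner="AI risk board")
entry.identified_risks += ["disparate impact", "data drift"]
entry.metrics["disparate impact"] = "selection-rate ratio across groups"
print(entry.outstanding_risks())  # ['data drift']
```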
Collaboration with Industry: The U.S. approach encourages collaboration with industry to develop AI standards and best practices. This includes engaging the AI industry in discussions and workshops to understand the technology better and to formulate appropriate regulatory responses.
State-Level Regulations: At the state level, there is a wide variety of AI-related bills in various stages of development, showing a proactive approach by individual states to regulate AI within their jurisdictions.
Federal Legislation: At the federal level, there is a growing number of AI-related bills, indicating an interest in more comprehensive AI regulation. However, the specific direction and content of these bills can vary widely.
In a nutshell, the anticipated future U.S. direction in AI regulation is a mix of executive orders, agency guidelines, state regulations, and potentially federal legislation. This approach is characterized by a focus on collaboration with industry, protection of privacy and safety, and the promotion of AI innovation.
Implications for U.S. & European Businesses
The implications of AI regulation for most business entities in the U.S. and Europe can be quite significant, particularly given the different approaches taken by each region.
The U.S. mix of executive orders, agency guidelines, and potentially federal legislation emphasizes collaboration with industry to develop AI standards and best practices, protection of privacy and safety, and the promotion of AI innovation. The U.S. approach is more distributed and relies on non-regulatory infrastructure, with a focus on self-regulation and industry best practices. This could give businesses a more flexible environment in which to innovate with AI technologies, but it may also create uncertainty and variation in compliance requirements across different states and industries.
In contrast, the EU's approach is more comprehensive and proactive. The EU AI Act, passed in March 2024, sets binding, comprehensive rules that apply directly to businesses throughout the AI value chain. It categorizes AI applications into different risk levels and imposes stringent requirements on high-risk applications. This approach could create a more uniform regulatory environment across the EU, but it could also place heavier compliance burdens on businesses.
For most business entities, these regulatory developments could have several implications:
Increased Compliance Costs: Both the U.S. and EU approaches could lead to increased compliance costs for businesses. In the U.S., businesses may need to navigate a more complex and varied regulatory landscape, while in the EU, they will need to meet the specific requirements of the AI Act.
Differentiation in Business Strategies: Given the different regulatory approaches in the U.S. and EU, businesses may need to adopt different strategies for operating in these regions. This could include tailoring their AI applications to meet the specific requirements of each jurisdiction.
Impact on Innovation: The regulatory environment could impact the pace and direction of AI innovation. In the U.S., the emphasis on self-regulation and industry best practices could allow for more rapid innovation, but could also lead to concerns about safety and privacy. In the EU, the more stringent regulatory approach could slow the pace of innovation, but could also lead to greater confidence in the safety and fairness of AI applications.
Competitive Advantages and Disadvantages: The different regulatory environments could create competitive advantages and disadvantages for businesses operating in the U.S. and EU. For example, businesses in the U.S. may have more freedom to innovate, but could also face more uncertainty and variation in compliance requirements. In the EU, businesses may face more stringent compliance requirements, but could also benefit from a more uniform regulatory environment.
Impact on International Trade: The differing regulatory approaches could also impact international trade. Businesses operating in both the U.S. and EU may need to navigate the different requirements of each jurisdiction, which could increase the complexity and cost of doing business internationally.
In summary, the implications of AI regulation for most business entities in the U.S. and Europe are significant and could have a profound impact on their operations, strategies, and competitive position.
How might FuturePoint Digital help your organization reimagine the art of the possible with respect to new ways of working, doing, thinking, and communicating via emerging technology? Follow us at: www.futurepointdigital.com
About the Author: David Ragland is a former senior technology executive and an adjunct professor of management. He serves as a partner at FuturePoint Digital, a research-based technology consultancy specializing in strategy, advisory, and educational services for global clients. David earned his Doctorate in Business Administration from IE University in Madrid, Spain, and a Master of Science in Information and Telecommunications Systems from Johns Hopkins University. He also holds an undergraduate degree in Psychology from James Madison University and completed a certificate in Artificial Intelligence and Business Strategy at MIT. His research focuses on the intersection of emerging technology with organizational and societal dynamics.