Upgrading the Regulatory System for Emerging Industries

Regulatory and Policy Systems for
Emerging Industries in Major Economies
and Policy Directions for Korea

Global regulatory frameworks are evolving into comprehensive systems that span both innovation and safety, moving away from simply imposing restrictions. As major economies move promptly to establish AI Safety Institutes (AISIs), they aim to take the lead in setting global standards. This article examines national regulatory and policy systems for emerging industries in the United States, the United Kingdom, France, Canada, Germany, Japan and Singapore. Based on this analysis, an assessment of Korea’s current position will be presented, followed by an exploration of policy directions for building an effective and integrated regulatory system.

By Sung-won Ahn, Head of the AI Policy Research Division, Software Policy and Research Institute

1. Background

Physical AI and data-driven industries are characterized by rapid technological advancement and far-reaching cross-industry convergence. This makes it difficult for businesses to adapt to change within traditional frameworks focused mainly on restriction. Physical AI applications, such as robotics, autonomous driving and drones, are increasingly being integrated across industries, reshaping industrial ecosystems. At the same time, complexity is increasing with the rise of data-driven sectors such as AI training data, cloud computing, and industries involving data transactions and personal information.
These emerging industries are characterized by the rapid evolution of technologies and services, the emergence of new types of risk and the cross-border nature of global supply chains and platforms. As a result, the objective of regulation is being redefined from a restriction focus to the simultaneous pursuit of innovation and the safeguarding of the public interest.
With the rise of advanced AI technologies, including generative AI and agentic AI, AI safety and trustworthiness have moved to the forefront of the policy agenda. In response, major economies are establishing dedicated national governance structures for AI models, data and algorithms, on top of existing industrial regulations. In this context, AISIs are being established to conduct evaluation, testing and verification of AI systems, as well as to develop standard frameworks.
Accordingly, regulation today is expanding beyond legal provisions into broader regulatory systems that encompass standard-setting, infrastructure for model evaluation and certification, public testbeds, and mechanisms for incident investigation and accountability.

2. Trends in Regulatory and Policy Systems for Emerging Industries in Major Economies

2.1 The United States

The United States has traditionally taken a market-oriented approach that emphasizes innovation through competition. Emerging-industry regulation is notably decentralized: the federal government sets principles and guidelines, while state governments and individual agencies are responsible for implementation. For autonomous vehicles, federal safety standards are combined with state-level operational permits, which encourages local-level experimentation.
Rather than adopting a comprehensive framework act for AI, the U.S. employs risk management frameworks, standards and procurement requirements. This approach was reinforced by Executive Order 14110 on the safe, secure and trustworthy development and use of artificial intelligence, issued in 2023. The order was rescinded on January 20, 2025, however, highlighting policy shifts across administrations and raising concerns over continuity.
Nevertheless, the U.S. continues to build capabilities in AI safety and reliability evaluation, with the National Institute of Standards and Technology (NIST) playing a central role. In November 2023, the institute launched its own AISI, forming a national system for evaluating and testing frontier AI models and conducting safety research.
In terms of data regulation, personal data protection has developed primarily through state-level legislation rather than a comprehensive federal law, which may complicate compliance for businesses. This can, however, encourage policy innovation through experimentation, enabling businesses to deploy different projects in various test-bed states. Inter-state competition may also drive further policy evolution.



2.2 The United Kingdom

The United Kingdom has favored a decentralized approach that addresses AI issues within the existing mandates of regulatory authorities, including those responsible for competition, data protection and safety, rather than introducing a comprehensive framework act on AI. The core objective is to secure “predictable flexibility” by setting principles such as safety, accountability, transparency and fairness, strengthening the technical capabilities of regulators and maintaining continuous engagement with industry.
Following the AI Safety Summit in November 2023, the UK AISI was launched with the aim of building national capabilities for the evaluation and testing of frontier AI models. As international cooperation has grown, the UK AISI’s functional scope has broadened from AI safety to include AI security.
In addition, the Automated Vehicles Act 2024 has clarified the regulatory framework for commercialization. The UK government indicated that autonomous vehicles could be permitted on UK roads starting in 2026, on the condition of compliance with safety standards and the establishment of clear accountability systems. The UK government also projected that the autonomous driving industry could create 38,000 jobs and generate approximately GBP 42 billion, or about KRW 81 trillion, in economic value by 2035, reinforcing the economic rationale for policy implementation.



2.3 France

As a member of the European Union, France is directly influenced by the phased implementation schedule and regulatory provisions of the EU AI Act. The EU is applying the AI Act in stages, with full implementation expected around August 2, 2027, while certain early provisions, including those related to prohibited AI practices, began taking effect in February 2025 as part of a gradual implementation roadmap.
In January 2025, the French government announced the establishment of the Institut National pour l’Évaluation et la Sécurité de l’Intelligence Artificielle (INESIA), a national institution for AI evaluation and security. INESIA’s main tasks include evaluation, security and safety functions and its structure suggests that it aims to support both compliance with EU regulations and industrial innovation by consolidating national capabilities in evaluation and governance.
France’s approach combines the EU’s strong normative and regulatory framework with INESIA’s institutional execution of practical evaluation and verification capabilities. These efforts can be interpreted as a move to enhance effectiveness, by offsetting the rigidity of rule-based regulation with evaluation infrastructure and standards.



2.4 Canada

Canada is pursuing a strategy that institutionalizes AI safety research at the national level through the Canadian Artificial Intelligence Safety Institute (CAISI), integrating national safety research with the country’s AI research and talent ecosystem to build a foundation of trust for emerging industries.
In May 2024, the Canadian government announced plans to establish the Canadian AISI, along with an initial investment of CAD 50 million, or about KRW 52.8 billion, over five years. In November 2024, the institute was formally launched, creating a policy framework that combines institutional establishment with budgetary support to ensure stable national-level capabilities in AI safety research and coordination. The institute aims to advance the understanding of risks associated with frontier AI, develop mitigation measures for potential adverse effects and support research on sectoral application. It also seeks to strengthen testing and evaluation capabilities through international cooperation, including collaboration with AISIs in other countries.



2.5 Germany

Germany, like France, is directly subject to the EU AI Act and is in the process of establishing national acts to support domestic implementation, including designating supervisory authorities, procedures, powers and penalties. While building a framework to implement the EU AI Act, Germany is also strengthening AI safety and security capabilities through public research institutions such as the German Aerospace Center, seeking to link regulatory effectiveness with industrial application.
The Institute for AI Safety and Security under the German Aerospace Center has accumulated research capabilities covering the operational safety of AI-based solutions, as well as resilience against attacks and broader security issues. The institute aims to build a trustworthy AI ecosystem across sectors including transport, energy, aerospace and digital innovation areas related to Industry 4.0. Through this approach, Germany is pursuing a strategy that connects compliance with EU regulations to practical industrial applications in areas such as mobility, energy and security.



2.6 Japan

Japan has traditionally relied on guidelines, standards and industry collaboration to promote technological innovation. Rather than utilizing legal frameworks that focus on comprehensive restrictions, Japan is emphasizing assessments to ensure AI safety and trust, as well as the promotion of industrial applications through standards and public-private cooperation.
In February 2024, Japan’s Ministry of Economy, Trade and Industry (METI) announced the establishment of an AISI. Japan’s AI Safety Institute is designed as a central hub for research on AI safety evaluation methodologies, inter-ministerial coordination and participation in international initiatives, and is positioned as a core institution for AI safety within Japan’s integrated innovation strategy. By anchoring evaluation and testing in this institute while advancing guideline development, standard-setting and international cooperation, Japan aims to strengthen trust in AI safety.



2.7 Singapore

Singapore has promoted demonstration projects in emerging industries through strong administrative coordination and the use of urban-scale testbeds. In 2024, the Infocomm Media Development Authority (IMDA) designated the Digital Trust Centre (DTC) as Singapore’s AISI, assigning it responsibility for evaluation and governance to support the safe development and deployment of AI. In addition, in May 2022 Singapore released AI Verify, the world’s first official testing tool for validating AI trustworthiness, which remains in active use today.
Singapore’s approach combines regulatory frameworks for demonstration project infrastructure, including testbeds and sandboxes, with safety and trust frameworks to pursue both rapid commercialization and systematic risk management.

3. The Current Status and Challenges Facing Korea’s Regulatory System for Emerging Industries

The Korean government has supported demonstration projects in emerging industries through regulatory sandboxes, regulation-free zones and regulatory exceptions for demonstration projects. However, with the expansion of frontier AI, including generative AI, there is growing demand to move beyond support for demonstration projects and toward enhancing frameworks for safety, trust and accountability on an institutional level. In this light, the Korean regulatory system must balance the push for demonstration and commercialization with the need to uphold safety, ethics and security, while aligning with global standards.
In November 2024, the government launched the Korean AISI and outlined a strategy that focuses on strengthening evaluation, research and international networks. This approach is built on cooperation with AISI initiatives in major economies, and reflects the need to complement the limitations of legal regulation with enhanced evaluation and verification capabilities.
Korea has also: enacted the Framework Act on the Development of Artificial Intelligence and the Creation of a Foundation for Trust; established the National AI Strategy Committee as the central control tower; defined the role of the AISI; introduced various support measures for expanding AI adoption; set out obligations related to AI safety and trustworthiness; and provided comprehensive measures and specified details on defining and assessing high-impact AI through enforcement decrees. This positions Korea alongside global regulatory developments, including the EU AI Act.
A key issue will be designing and implementing systems under the AI Framework Act while minimizing excessive regulatory costs and ensuring both accountability for high-impact AI and effective industrial promotion. In particular, advancing the AI regulatory system will require strengthening functions across evaluation, testing, standard-setting and policy linkage.
In evaluation, we need to develop methodologies for assessing risks associated with frontier and general-purpose AI. For testing, we should set up capable red teams and establish the infrastructure necessary to test model and algorithm security and robustness. For standard-setting, we must engage continuously and proactively in international AISI networks to bolster linkages to domestic and global standards. In terms of policy linkage, it is important to design systems with high relevance to guidelines, certification schemes and procurement requirements.
At present, policies on physical AI, autonomous vehicles, drones and the data economy tend to be arranged in parallel across government ministries. Going forward, system integration will be required to establish a unified framework that links AI model safety, physical safety and liability for accidents. It is also necessary to create mechanisms through which outcomes from regulatory sandboxes and special zones are reflected in standards and certification systems on a regular basis. In parallel, efforts should be made to refine trust-based data governance to enable integration and utilization of personal, industrial and public data.

4. Policy Recommendations

First, it is necessary to expand beyond a focus on legal provisions and move toward a regulatory system that encompasses standards, evaluation and testing, and accountability. Notably in the AI domain, capabilities in evaluation, testing and verification are directly linked to the effectiveness of regulation. Thus, dedicated institutions such as AISIs should function not merely as research bodies, but as central pillars of the regulatory system, supported by testing infrastructure and linked to standard certification, coordination mechanisms and incident response frameworks.
Second, demonstration exemptions should be elevated to structured learning mechanisms. In Singapore, for example, regulatory sandboxes are viewed strategically not as exceptions to the rule, but as tools for institutional learning and data accumulation that inform regulatory improvement. A similar approach is needed in Korea, whereby demonstration outcomes are systematically accumulated and standardized, and continuously fed back into formal regulatory frameworks.
Third, we need to find an optimal balance for strengthening accountability and transparency in high-impact areas while minimizing compliance costs. A review of the approaches in the EU and the UK reveals that the core principle of regulation is to impose risk-based obligations focused on high-impact areas, instead of imposing uniform restrictions. At the same time, compliance costs for startups and small and medium-sized enterprises should be minimized. Supporting measures, including standardized templates, shared testing infrastructure and government-backed certification and evaluation vouchers, should be developed in parallel.
Finally, global and working-level cooperation needs to be strengthened. International collaboration should involve concrete measures such as joint benchmarks, the sharing of red-teaming results and the standardization of safety reporting formats. Through dedicated institutions, Korea should reinforce its role as an Asia-Pacific regional hub and establish a systematic cooperation framework aimed at promoting interoperability with the EU AI Act and the regulatory frameworks of other major economies.