The regulatory and policy landscape for AI is an emerging issue in regional and national jurisdictions globally, for example in the European Union and Russia. Since early 2016, many national, regional and international authorities have begun adopting strategies, action plans and policy papers on AI. These documents cover a wide range of topics such as regulation and governance, as well as industrial strategy, research, talent and infrastructure. Different countries have approached the problem in different ways. Regarding the three largest economies, it has been said that "the United States is following a market-driven approach, China is advancing a state-driven approach, and the EU is pursuing a rights-driven approach."
African Union The African Union has become increasingly active in this field. Most importantly, the African Commission on Human and Peoples' Rights published a study on AI and human rights and advocated for an African Framework Convention on AI and Human Rights. This initiative builds on earlier debates within the AU, particularly discussions on lethal autonomous weapons, which African representatives had previously raised in international forums.
Australia In October 2023, the
Australian Computer Society,
Business Council of Australia,
Australian Chamber of Commerce and Industry,
Ai Group (aka Australian Industry Group), Council of Small Business Organisations Australia, and Tech Council of Australia jointly published an open letter calling for a national approach to AI strategy. The letter backs the federal government establishing a whole-of-government AI taskforce. In September 2024, a bill was introduced that would grant the
Australian Communications and Media Authority powers to regulate AI-generated misinformation. Several agencies, including the ACMA,
ACCC, and
Office of the Australian Information Commissioner, are expected to play roles in future AI regulation.
Brazil On September 30, 2021, the Brazilian Chamber of Deputies (Câmara dos Deputados) approved the Brazilian Legal Framework for Artificial Intelligence (Marco Legal da Inteligência Artificial). This legislation aimed to regulate AI development and usage while promoting research and innovation in ethical AI solutions that prioritize culture, justice, fairness, and accountability. The 10-article bill established several key objectives: developing ethical principles for AI, promoting sustained research investment, and removing barriers to innovation. Article 4 specifically emphasized preventing discriminatory AI solutions, ensuring plurality, and protecting human rights. When the bill was first released to the public, it faced substantial criticism over several of its core provisions. The underlying issue is that the bill failed to thoroughly and carefully address the principles of accountability, transparency, and inclusivity. Article VI establishes subjective liability, meaning that any individual harmed by an AI system who wishes to receive compensation must identify the responsible stakeholder and prove that there was an error in the machine's life cycle. Scholars emphasize that it is legally untenable to place the burden of proving algorithmic errors on individuals, given the high degree of autonomy, unpredictability, and complexity of AI systems. Critics also pointed to ongoing problems with facial recognition systems in Brazil that have led to unjust arrests by the police; under the bill, affected individuals would have to prove and justify these machine errors themselves. The main controversy of the draft bill centered on three proposed principles. First, the non-discrimination principle suggests that AI must be developed and used in a way that merely mitigates the possibility of abusive and discriminatory practices.
Secondly, the pursuit-of-neutrality principle lists recommendations for stakeholders to mitigate biases, but imposes no obligation to achieve this goal. Lastly, the transparency principle states that a system's transparency is necessary only when there is a high risk of violating fundamental rights. The Brazilian Legal Framework for Artificial Intelligence thus lacks binding, obligatory clauses and instead consists largely of relaxed guidelines. Indeed, experts emphasize that the bill may make accountability for discriminatory AI biases even harder to achieve. Compared with the EU's proposal of extensive risk-based regulations, the Brazilian bill's 10 articles offer vague and generic recommendations. The Brazilian AI bill also lacks the diverse perspectives that characterized earlier Brazilian internet legislation. When Brazil drafted the Marco Civil da Internet (Brazilian Internet Bill of Rights) in the 2000s, it used a
multistakeholder approach that brought together various groups—including government, civil society, academia, and industry—to participate in dialogue, decision-making, and implementation. This collaborative process helps capture different viewpoints and trade-offs among stakeholders with varying interests, ultimately improving transparency and effectiveness in AI regulation.
Canada The
Pan-Canadian Artificial Intelligence Strategy (2017) is supported by federal funding of Can $125 million, with the objectives of increasing the number of outstanding AI researchers and skilled graduates in Canada, establishing nodes of scientific excellence at the three major AI centres, developing 'global thought leadership' on the economic, ethical, policy and legal implications of AI advances, and supporting a national research community working on AI. In November 2022, Canada introduced the Digital Charter Implementation Act (Bill C-27), which proposes three acts that have been described as a holistic package of legislation for trust and privacy: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA). In September 2023, the Canadian Government introduced a Voluntary Code of Conduct for the Responsible Development and Management of Advanced
Generative AI Systems. The code, initially based on public consultations, seeks to provide interim guidance to Canadian companies on responsible AI practices. Ultimately, it is intended to serve as a stopgap until formal legislation, such as the Artificial Intelligence and Data Act (AIDA), is enacted. In November 2024, the Canadian government announced the creation of the Canadian Artificial Intelligence Safety Institute (CAISI) as part of a 2.4 billion CAD federal AI investment package. This includes 2 billion CAD to support a new AI Sovereign Computing Strategy and the AI Computing Access Fund, which aims to bolster Canada's advanced computing infrastructure. Further funding includes 700 million CAD for domestic AI development, 1 billion CAD for public supercomputing infrastructure, and 300 million CAD to assist companies in accessing new AI resources.
China In 2021, China published ethical guidelines for the use of AI which state that researchers must ensure that AI abides by shared human values, is always under human control, and is not endangering public safety. In 2023, China introduced
Interim Measures for the Management of Generative AI Services. On August 15, 2023, China's first generative AI measures officially came into force, becoming one of the first comprehensive national regulatory frameworks for generative AI. The measures apply to all providers offering generative AI services to the Chinese public, including foreign entities, ultimately setting the rules related to data protection, transparency, and algorithmic accountability. In April 2026, China's
National Development and Reform Commission ordered
Meta to unwind its $2 billion acquisition of
Manus, an AI startup that had relocated from China to Singapore in 2025. This marks a rare move to unwind a completed cross-border tech deal. The decision signals Beijing's effort to prevent foreign firms from acquiring Chinese AI talent and intellectual property, establishing national security review as a regular condition for cross-border technology transactions involving Chinese assets, founders, or technology.
Cyberspace Administration of China (CAC). Additionally, in October 2023, China implemented a set of Ethics Review Measures for science and technology, mandating ethical assessments of AI projects deemed socially sensitive or capable of negatively influencing public opinion.
Colombia Although Colombia has not issued specific AI laws, this does not mean there is a lack of frameworks or initiatives to govern it. In fact, there are numerous instruments issued for that purpose, including national policies, ethical frameworks, roadmaps, rulings, and guidelines. In addition, there are other existing regulations applicable to AI systems, such as data protection, intellectual property, consumer laws, and civil liability rules. One of the first specific instruments issued was the CONPES 3920 of 2019, the National Policy on Exploitation of Data (Big Data). The main purpose of this policy was to leverage data in Colombia by creating the conditions to handle it as an asset to generate social and economic value. Another milestone occurred in 2021, when the National Government published the Ethical Framework for AI in Colombia. It was a soft law guide for public entities, offering recommendations to consider in the management of AI-related projects. An additional framework for AI was adopted by Colombia in 2022: the Recommendation on the Ethics of Artificial Intelligence by UNESCO. 2024 was a prolific year in governing AI in Colombia. A roadmap for an ethical and sustainable AI adoption was launched by the National Government. The Superintendence of Industry and Commerce issued a guide on the processing of personal data in AI systems. The Judiciary Council published a guideline for the use of AI in the judicial sector. In the global context, the OECD principles were updated, the Global Digital Compact by the United Nations was published, and the UN adopted Resolution A/78/L.49 on safe, trustworthy, and reliable AI systems for sustainable development. In 2025, a new national policy on AI was issued by the National Government, contained in CONPES 4144. The ruling T-067/25 by the Constitutional Court provided some rules for access to public information and transparency of algorithms. 
Until Congress issues AI regulations, these soft-law documents can guide the design, development, and use of AI systems in Colombia.
Council of Europe The
Council of Europe (CoE) is an international organization that promotes human rights, democracy and the rule of law. It comprises 46 member states, including all 29 Signatories of the European Union's 2018 Declaration of Cooperation on Artificial Intelligence. The CoE has created a common legal space in which the members have a legal obligation to guarantee rights as set out in the
European Convention on Human Rights. Specifically in relation to AI, "The Council of Europe's aim is to identify intersecting areas between AI and our standards on human rights, democracy and rule of law, and to develop relevant standard setting or capacity-building solutions". The many relevant documents identified by the CoE include guidelines, charters, papers, reports and strategies. The authoring bodies of these AI regulation documents are not confined to one sector of society and include organizations, companies, bodies and nation-states.
Czech Republic The Czech Republic adopted a National AI Strategy in 2019 and updated it in 2024 with the National AI Strategy of the Czech Republic 2030. The updated strategy includes a provision to ensure effective legislation, to create codes of ethics for developers and users, to establish supervisory bodies and to promote the ethical use of AI.
European Union The EU is one of the largest jurisdictions in the world and plays an active role in the global regulation of digital technology through the
GDPR,
Digital Services Act, and the
Digital Markets Act. For AI in particular, the
Artificial Intelligence Act was regarded in 2023 as the most far-reaching regulation of AI worldwide. Most European Union (EU) countries have their own national strategies towards regulating AI, but these are largely convergent and are supported by a High-Level Expert Group on Artificial Intelligence. In April 2019, the
European Commission published its
Ethics Guidelines for Trustworthy Artificial Intelligence (AI), following this with its
Policy and investment recommendations for trustworthy Artificial Intelligence in June 2019. The EU Commission's High-Level Expert Group on Artificial Intelligence carries out work on Trustworthy AI, and the Commission has issued reports on the Safety and Liability Aspects of AI and on the Ethics of Automated Vehicles. In 2020, the EU Commission sought views on a proposal for AI-specific legislation through its White Paper on Artificial Intelligence, and that process is ongoing. The White Paper consists of two main building blocks, an 'ecosystem of excellence' and an 'ecosystem of trust'. The 'ecosystem of trust' outlines the EU's approach to a regulatory framework for AI. In its proposed approach, the Commission distinguishes AI applications based on whether they are 'high-risk' or not. Only high-risk AI applications should be in the scope of a future EU regulatory framework. An AI application is considered high-risk if it operates in a risky sector (such as healthcare, transport or energy) and is "used in such a manner that significant risks are likely to arise". For high-risk AI applications, the requirements mainly concern "training data", "data and record-keeping", "information to be provided", "robustness and accuracy", and "human oversight". There are also requirements specific to certain usages such as remote biometric identification. AI applications that do not qualify as 'high-risk' could be governed by a voluntary labeling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments, which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI, in the form of a framework for cooperation of national competent authorities, could facilitate the implementation of the regulatory framework.
A January 2021 draft was leaked online on April 14, 2021, before the Commission presented their official "Proposal for a Regulation laying down harmonised rules on artificial intelligence" a week later. Shortly after, the
Artificial Intelligence Act (also known as the AI Act) was formally proposed on this basis. This proposal includes a refinement of the 2020 risk-based approach with, this time, 4 risk categories: "minimal", "limited", "high" and "unacceptable". The proposal has been severely critiqued in the public debate. Academics have expressed concerns about various unclear elements in the proposal – such as the broad definition of what constitutes AI – and feared unintended legal implications, especially for vulnerable groups such as patients and migrants. The risk category "general-purpose AI" was added to the AI Act to account for versatile models like
ChatGPT, which did not fit the application-based regulation framework. Unlike for other risk categories, general-purpose AI models can be regulated based on their capabilities, not just their uses. Weaker general-purpose AI models are subject to transparency requirements, while those considered to pose "systemic risks" (notably those trained using computational capabilities exceeding 10^25
FLOPS) must also undergo a thorough evaluation process. A subsequent version of the AI Act was finally adopted in May 2024. The AI Act will be progressively enforced.
Recognition of emotions and real-time remote
biometric identification will be prohibited, with some exemptions, such as for law enforcement. The European Union's AI Act has created a regulatory framework with significant global implications. This legislation introduces a risk-based approach to categorizing AI systems, focusing on high-risk applications like healthcare, education, and public safety. It requires organizations to ensure transparency, data governance, and human oversight in their AI solutions. While this aims to foster ethical AI use, the stringent requirements could increase overhead and compliance costs, delaying certain AI designs and deployments. Observers have expressed concerns about the multiplication of legislative proposals under the
von der Leyen Commission. The speed of the legislative initiatives is partially driven by the EU's political ambitions and could put at risk the digital rights of European citizens, including rights to privacy, especially given uncertain guarantees of data protection through cybersecurity, as well as the concept of digital sovereignty. On May 29, 2024, the
European Court of Auditors published a report stating that EU measures were not well coordinated with those of EU countries; that the monitoring of investments was not systematic; and that stronger governance was needed. The EU's Artificial Intelligence Act (Regulation (EU) 2024/1689)
entered into force on 1 August 2024, creating a risk-based legal framework for AI systems, including special provisions for general-purpose AI models enforceable by 2 August 2025.
Finland Finland has appointed a working group to evaluate what national legislation is required by the EU
Artificial Intelligence Act, and to prepare a legislative proposal on its national implementation. The working group began its evaluation on April 29, 2024, and is expected to conclude by June 30, 2026.
Germany In November 2020,
DIN,
DKE and the German
Federal Ministry for Economic Affairs and Energy published the first edition of the
"German Standardization Roadmap for Artificial Intelligence" (NRM KI) and presented it to the public at the Digital Summit of the Federal Government of Germany. NRM KI describes requirements for future regulations and standards in the context of AI. Implementing its recommendations for action is intended to strengthen the German economy and science in international competition in the field of artificial intelligence and to create innovation-friendly conditions for this
emerging technology. The first edition is a 200-page document written by 300 experts. The second edition of the NRM KI was published to coincide with the German government's Digital Summit on December 9, 2022. DIN coordinated more than 570 participating experts from a wide range of fields across science, industry, civil society and the public sector. The second edition is a 450-page document. NRM KI covers focus topics in terms of applications (e.g. medicine, mobility, energy & environment, financial services, industrial automation) and fundamental issues (e.g. AI classification, security, certifiability, socio-technical systems, ethics).
Israel On October 30, 2022, pursuant to government resolution 212 of August 2021, the
Israeli Ministry of Innovation, Science and Technology released its "Principles of Policy, Regulation and Ethics in AI" white paper for public consultation. By December 2023, the Ministry of Innovation and the
Ministry of Justice published a joint AI regulation and ethics policy paper, outlining several AI ethical principles and a set of recommendations, including opting for sector-based regulation, a risk-based approach, a preference for "soft" regulatory tools such as AI sandboxes, and maintaining consistency with existing global regulatory approaches to AI. In December 2023, Israel unveiled its first comprehensive national AI policy, developed jointly through inter-ministerial collaboration and stakeholder consultation. The policy outlines ethical principles aligned with current
OECD guidelines and recommends a sector-based, risk-driven regulatory framework focusing on areas like transparency and accountability. The policy proposes the creation of a national AI Policy Coordination Center to support regulators and to further develop the tools necessary for responsible AI deployment. In addition to domestic policy development, Israel, alongside 56 other nations, signed the world's first binding international treaty on artificial intelligence in March 2024. The treaty, led by the
Council of Europe, obliges signatories to ensure that AI systems uphold democratic values, human rights, and the rule of law.
Italy In October 2023, the Italian privacy authority approved a regulation that provides three principles for therapeutic decisions taken by automated systems: transparency of decision-making processes, human supervision of automated decisions and algorithmic non-discrimination. In March 2024, the President of the Italian Data Protection Authority reaffirmed their agency's readiness to implement the European Union's newly introduced
Artificial Intelligence Act, praising the framework of institutional competence and independence. Italy has continued to develop guidance on AI applications through existing legal frameworks, including recent innovations in areas such as facial recognition for law enforcement, AI in healthcare,
deepfakes, and
smart assistants. The Italian government's National AI Strategy (2022–2024) emphasizes responsible innovation and outlines goals for talent development, public and private sector adoption, and regulatory clarity, particularly in coordination with EU-level initiatives.
Morocco In recent years, Morocco has made efforts to advance the use of artificial intelligence in its legal sector, particularly through AI tools that assist with judicial prediction and document analysis, helping to streamline case law research and support legal practitioners with more complex tasks. Alongside efforts to establish a national AI agency, AI is being gradually introduced into
legislative and judicial processes in Morocco, with ongoing discussions emphasizing the benefits as well as the potential risks of these technologies. Generally speaking, Morocco's broader
digital policy includes robust
data governance measures including the 2009 Personal Data Protection Law and the 2020 Cybersecurity Law, which establish requirements in areas such as privacy, breach notification, and data localization.
New Zealand In 2020, the
New Zealand Government sponsored a
World Economic Forum pilot project titled "Reimagining Regulation for the Age of AI", aimed at creating regulatory frameworks around AI. The same year, the Privacy Act was updated to regulate the use of New Zealanders' personal information in AI. In 2023, the
Privacy Commissioner released guidance on using AI in accordance with information privacy principles. In February 2024, the
Attorney-General and Technology Minister announced the formation of a Parliamentary cross-party AI
caucus, and that a framework for the Government's use of AI was being developed. She also announced that no extra regulation was planned at that stage.
Philippines In 2023, a bill was filed in the Philippine
House of Representatives which proposed the establishment of the Artificial Intelligence Development Authority (AIDA) which would oversee the development and research of artificial intelligence. AIDA was also proposed to be a watchdog against crimes using AI. The
Commission on Elections has also considered in 2024 banning the use of AI and deepfakes for campaigning. It looks to implement regulations that would apply as early as the 2025 general elections.
Spain In 2018, the Spanish
Ministry of Science, Innovation and Universities approved an R&D Strategy on Artificial Intelligence.
Switzerland Switzerland currently has no specific AI legislation, but on 12 February 2025, the
Federal Council announced plans to ratify the
Council of Europe's AI Convention and incorporate it into Swiss law. A draft bill and implementation plan are to be prepared by the end of 2026. The approach includes sector-specific regulation, limited cross-sector rules, such as data protection, and non-binding measures such as industry agreements. The goals are to support innovation, protect fundamental rights, and build public trust in AI.
United Kingdom The UK supported the application and development of AI in business via the
Digital Economy Strategy 2015–2018. In the public sector, the
Department for Digital, Culture, Media and Sport advised on data ethics and the
Alan Turing Institute provided guidance on responsible design and implementation of AI systems. In terms of cyber security, in 2020 the
National Cyber Security Centre issued guidance on 'Intelligent Security Tools'. The following year, the
UK published its 10-year National AI Strategy, which describes actions to assess long-term AI risks, including AGI-related catastrophic risks. In March 2023, the UK released the
white paper A pro-innovation approach to AI regulation. This white paper presents general AI principles, but leaves significant flexibility to existing regulators in how they adapt these principles to specific areas such as transport or financial markets. In November 2023, the UK hosted the first
AI safety summit, with the prime minister
Rishi Sunak aiming to position the UK as a leader in
AI safety regulation. During the summit, the UK created an
AI Safety Institute, as an evolution of the
Frontier AI Taskforce led by
Ian Hogarth. The institute was notably assigned the responsibility of advancing the safety evaluations of the world's most advanced AI models, also called
frontier AI models. The UK government indicated its reluctance to legislate early, arguing that it may reduce the sector's growth and that laws might be rendered obsolete by further technological progress.
United States Discussions on regulation of AI in the United States have included topics such as the timeliness of regulating AI, the nature of the federal regulatory framework to govern and promote AI, including what agency should lead, the regulatory and governing powers of that agency, and how to update regulations in the face of rapidly changing technology, as well as the roles of state governments and courts. In the United States, the October 2023 executive order on AI safety and security presented federal requirements on AI development and deployment. The order also requires individuals who develop the most capable AI systems to share any safety testing results with government agencies before releasing them to the public. On 12 December 2025, President Trump signed an executive order directing federal agencies to develop a unified national approach to AI policy, evaluate state AI laws for potential conflicts, challenge them through legal action, and condition certain federal funding on state compliance, while exempting state laws related to child safety, data center infrastructure, and state government procurement. "This is an executive order that orders aspects of your administration to take decisive action to ensure that AI can operate within a single national framework in this country, as opposed to being subject to state level regulation that could potentially cripple the industry", White House aide
Will Scharf said in the Oval Office, commenting on the executive order. Walter Donway, writing for a publication of the
American Institute for Economic Research, criticized the order, saying, "The premise behind it is philosophically wrong: that a central authority can foresee the risks of an emergent technology better than the distributed knowledge of millions of actors operating within a free market. That premise was wrong when applied to railroads, radio, electricity, telephony, airlines, and nuclear power. It is even more disastrously wrong when applied to artificial intelligence."
== Regulation of fully autonomous weapons ==