BLAIR INSTITUTE FOR GLOBAL CHANGE: How Leaders in the Global South Can Devise AI Regulation That Enables Innovation


https://institute.global/insights/tech-and-digitalisation/how-leaders-in...

 

Artificial intelligence has the potential to usher in a new era of economic growth and human flourishing. Already AI systems help doctors to improve medical diagnostics, engineers to optimise energy consumption and scientists to uncover exciting discoveries. Indeed, AI is projected to contribute approximately $15.7 trillion to the global economy by 2030, enhancing productivity and fostering new product innovations.

Yet the AI revolution is only beginning. And if governments are willing and able to invest in the right data infrastructure, compute capacity and skills, while also providing incentives for AI research and adoption, they will be important drivers of that revolution.

Governments also need to recognise that technological innovation alone is not enough: realising AI’s economic and social potential also requires political leadership and good governance. Contrary to the commonly held assumption about a direct conflict or necessary trade-off between innovation and regulation, the two go hand in hand; businesses need some degree of regulatory certainty, allied to technical standards, to thrive. In addition, without careful design and testing, AI systems could cause harm that slows down their adoption. A pro-innovation approach ensures a safe environment and keeps overly restrictive regulations at bay.

A parallel can be made between the need for AI regulation and the need for market regulation. Markets drive economic growth but need contract law to function and regulations to address market failures. Similarly, while AI systems are powerful tools for improving efficiency and solving problems, regulation is needed to address safety concerns and ensure widely distributed benefits. As with markets, the question is not whether to regulate AI but how to do so effectively, promoting growth while improving social outcomes.

Political leaders worldwide face a common challenge: harnessing AI’s potential while managing its legal, social, environmental and security risks. Regulatory responses have varied. In 2024 the European Council approved the EU AI Act, South Korea’s parliament passed the Basic AI Act, Brazil’s senate passed the Brazilian AI Bill and California rejected a state-level proposal for regulation. These initiatives have sparked much debate about how to regulate rapidly evolving general-purpose technologies such as AI.

More than 37 countries have proposed AI-related legal frameworks, but the AI regulation debate has so far focused on the Global North. Without a more inclusive approach, there is a real risk that regulatory asymmetries will widen the global AI divide, leaving emerging economies at a disadvantage and deepening existing economic disparities. The fact that technology makers and takers have fundamentally different interests in shaping AI regulations has thus far been largely overlooked. Furthermore, AI risks vary across cultural contexts and evolve over time, as do laws and norms. AI regulations can therefore neither be copied and pasted from one jurisdiction to another, nor remain static over time.

In this report, the Tony Blair Institute for Global Change offers practical guidance to political leaders in the Global South on how to design and implement effective, proportionate AI regulations. Drawing on TBI’s first-hand experience of working with leaders in more than 45 countries, the report builds on previous work on global AI governance and incorporates insights from academic experts, industry practitioners and policymakers across diverse geographies.

The report provides two key frameworks.

First, it outlines a five-step process for designing and implementing AI regulations. While these steps mirror well-established approaches to regulation, our report provides specific insights into their application within the unique contexts of regulating AI in the Global South.

  • Define regulatory objectives.

  • Establish AI principles and ethical guidelines.

  • Define a regulatory posture.

  • Design comprehensive interventions.

  • Commit to continuous adaptation and learning.

 

Position: Co-Founder of ENGAGE, a new social venture for the promotion of volunteerism and service, and Ideator of Sharing4Good