https://www.ids.ac.uk/opinions/ten-reasons-not-to-use-ai-for-development...
Here I outline ten reasons not to use generative AI, like ChatGPT, in international development or humanitarian work, and explore how emerging work on Responsible AI can address these failings and offer remedies. I urge politicians and practitioners to defer use of these potentially harmful technologies until this work is complete.
Mark Zuckerberg coined the phrase ‘move fast and break things’, which remained Facebook’s motto until 2014. The backlash against the resulting harms of disinformation and surveillance led Harvard Business Review to declare in 2019 that the era of ‘move fast and break things’ was over. Yet today Keir Starmer is trying to position himself as a ‘Tech Bro’, suggesting that AI is a drug we cannot get enough of. This is despite mounting evidence that the uncritical use of AI is damaging the environment, violating human rights, and failing to deliver economic return on investment.
In international development and humanitarian work we are ethically committed to the precautionary principle of “do no harm” and need – at the very least – to take a more considered and reflective approach, such as Responsible AI in development.
What is Artificial Intelligence?
The term Artificial Intelligence (AI) is more than 70 years old and applies to a wide range of different technologies. It is really a marketing term, empty of content: the technology is neither ‘artificial’ nor does it involve any ‘intelligence’. AI often relies on number crunching: statistical modelling of big data sets to identify patterns used for prediction – of everything from which Netflix movie you might like to whether a person should get access to social protection or parole. The technology is prone to bias and error, creating risks of harm.
There are many types of AI, but the resurgence of interest has been largely due to a specific form of ‘machine learning’ AI that generates natural-language outputs, like the chatbot ChatGPT. Billions of dollars of investment have poured into ventures including OpenAI, Google Gemini, Amazon Lex, Anthropic and Cohere, most of them headquartered in the United States.
The underlying large language models were produced by copying data from the internet without obtaining consent or copyright permissions. The way these AI systems are built and operated presents serious challenges in relation to humanitarian principles.
Given the known harms of AI, it is difficult to reconcile the use of AI for international development (AI4D) with the precautionary principle of ‘do no harm’, and morally dubious to experiment with it on vulnerable populations such as refugees.
Ten reasons not to use AI in international development
- Stolen Data. Much of the big data that AI is based upon is stolen, having been scraped from the web without permission or otherwise procured without the consent of its producers.
- Biased Data. Much of that data is biased in ways that reflect and reproduce historical patterns of racial and gender prejudice.
- Labour Exploitation. Once collected, the data is often labelled using the labour of exploited women in Kenya and other low-income, low labour-protection locations.
- No Transparency. Once labelled, the data is processed using opaque and proprietary algorithms, so it is not known on what basis decisions are being made.
- Biased Algorithms. It is not only the data that can be biased: the algorithms that operate on the big data can also be biased and problematic (a minimal sketch after this list shows one way such bias can be surfaced).
- Prone to Error and Lies. We now know that generative AI (like ChatGPT) is not only biased but error-prone – regularly producing outputs that are incorrect or fabricated (sometimes called AI hallucinations), ranging from small inaccuracies to glaring errors.
- No Accountability. Because the inner workings of AI algorithms are unknowable ‘black boxes’, it is practically impossible for citizens to correct errors or obtain redress.
- Dehumanising. A core objective or outcome of many AI applications is disintermediation and/or automation, i.e. removing human elements of processes and replacing them with machine calculation. This is hard to reconcile with humanitarian participatory principles, or commitments to human-centred processes.
- Colonialism. Many AI applications extract data from Africa and Asia to US big tech or big finance corporations, in a way that scholars call the algorithmic colonisation of Africa, the new imperialism in the Global South, or simply data colonialism.
- Climate Impact. The ballooning carbon emissions of AI are hard to reconcile with sustainable development commitments. Evidence shows that AI is exploitative not only of African bodies in sweatshop labour but also of global water and mineral resources. Researchers estimate that training just one large AI model can produce nearly five times the lifetime carbon emissions of an average car.
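To make the bias concern concrete, here is a minimal sketch of the kind of check an algorithmic audit might run: comparing a model's approval rates across demographic groups. It is illustrative only – the data and group names are invented, not drawn from any real system.

```python
# Hypothetical audit sketch: surface outcome bias by comparing a model's
# approval rates across demographic groups. All data and names are invented.
from collections import defaultdict

# (applicant_group, approved?) pairs from an imagined automated
# social-protection screening system.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, ok in decisions:
    total[group] += 1
    approved[group] += ok  # True counts as 1

rates = {g: approved[g] / total[g] for g in total}
for g, r in sorted(rates.items()):
    print(f"{g}: approval rate {r:.0%}")

# A large demographic-parity gap is a red flag that the system is
# reproducing historical patterns of discrimination.
gap = max(rates.values()) - min(rates.values())
print(f"demographic-parity gap: {gap:.0%}")
```

Run on the invented data above, the sketch reports a 75% approval rate for one group, 25% for the other, and a 50-point gap – exactly the kind of disparity a human rights audit would need to investigate.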
Ten routes to responsible use of AI for international development
In recognition of the harm and discrimination caused by AI, an emerging body of work is now underway to address and avoid these injustices. Canada’s International Development Research Centre (IDRC) is leading the way by researching and developing forms of “Responsible AI”, which aim to use AI without violating human rights or causing injustice or gender or racial discrimination.
- Data integrity. Rather than drawing on data sets ‘stolen’ by Big Tech corporations, Responsible AI examines the use of in-house big data sets held by government departments and agencies.
- Debiasing data. Responsible AI practitioners are experimenting with a range of methods for removing bias, so that historical patterns of gender and racial prejudice are not reproduced – although some scientists argue the need to go beyond debiasing the data.
- Decent work. Responsible AI practitioners need to remove worker exploitation from the AI supply chain and ensure that fair work practices are in place.
- Transparency. Responsible AI practitioners can make their processes ‘open’ or otherwise transparent and subject to forms of oversight and accountability.
- Algorithmic audit. Responsible AI practitioners can submit to algorithmic accountability and subject themselves to human rights audits of AI systems.
- Do no harm. Many Responsible AI initiatives use the precautionary principle ‘do no harm’ and go slowly by design – adopting and implementing guidelines for how they use AI.
- Humans in the Loop. Remote and automated processes can result in dehumanised development experiences, bias and discrimination. One form of Responsible AI is to always retain human judgement and human interfaces in development processes (a minimal sketch of this pattern follows this list).
- Participation and Inclusion. Many Responsible AI initiatives focus on developing AI solutions with and by affected populations, aiming to ensure the influential participation of affected populations at each stage of the project cycle.
- Decolonising AI. This new area is relatively under-developed. Efforts are being made to build AI-hubs in the majority world and to expand training and education in AI in underserved communities. However, the critique of AI coloniality goes beyond numbers and there remains a great deal to do in this space.
- Climate Impact. Some initiatives present AI as the solution to building climate resilience, but AI can also accelerate global warming. Much more thought needs to go into reducing the net carbon emissions of the AI industry, and into the relationship between AI and the environment more generally.
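As a complement to the list above, here is a minimal sketch of the ‘humans in the loop’ pattern: the model only recommends, and a human reviewer must record the final decision. It is hypothetical and simplified – the class, function names and review flow are assumptions for illustration, not a description of any real system.

```python
# Hypothetical human-in-the-loop sketch: the model never decides alone;
# every case passes through a human reviewer before taking effect.
from dataclasses import dataclass

@dataclass
class Recommendation:
    applicant_id: str
    model_score: float      # imagined model output in [0, 1]
    suggested_action: str   # e.g. "approve" or "refer"

def human_review(rec: Recommendation) -> str:
    """Stand-in for a real review interface: the human sees the model's
    suggestion and records their own decision."""
    print(f"Applicant {rec.applicant_id}: model suggests "
          f"'{rec.suggested_action}' (score {rec.model_score:.2f})")
    return input("Reviewer decision (approve/deny/refer): ").strip().lower()

def decide(rec: Recommendation) -> str:
    decision = human_review(rec)  # automation assists; a person decides
    print(f"Final decision for {rec.applicant_id}: {decision} (human-confirmed)")
    return decision

if __name__ == "__main__":
    decide(Recommendation("A-001", model_score=0.42, suggested_action="refer"))
```

The design point is that the automated score is advisory input to a human judgement, preserving an accountable human interface rather than replacing it with machine calculation.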
Use technologies known to do no harm
Although the harms of using AI have already been experienced and well documented, the work to mitigate and overcome AI harms and injustice is in its infancy. To avoid causing harm and reputational damage, development funders and humanitarian agencies should apply precautionary principles. They should not experiment with AI on vulnerable populations and marginalised communities.
Until the ten problems identified above have been addressed, funders and agencies should concentrate their efforts on advancing the pathways to Responsible AI, or only use technologies known to ‘do no harm’.
If you are involved in the development of digital development practices, policies and strategies and are interested in exploring the role of AI, Tony Roberts is co-delivering the Inclusive Digital Transformation in International Development short course at IDS in March.