Note: no AI tools were used to assist the writing of this article.

We are already in a world where 27% of UK charities use AI (source: Charity Digital Skills report 2023). It’s a percentage that’s rising all the time. Without strong governance, rationale and openness about the role AI plays in the work charities do, the train may leave the platform before anyone can get on board.

This article pulls together in-depth research on the perceived impacts AI may have on the sector, and on public opinion about charities using AI: what they use it for, how they use it, and how transparent they are about that usage.

The main resources used for exploring the issues in this article are:

  • The Charities Aid Foundation report (2023)
  • The Charity AI – The Impact of AI In The Charity Sector report
  • The Alan Turing Institute – Understanding Public Attitudes to AI report

Support for AI is broad, but not total

The public holds differing but broadly balanced views about the risks and opportunities surrounding AI. Although there are cohorts whose views can be described as strongly pro- or anti-AI, no study surveyed here found evidence that the public felt AI should not be used by charities under any circumstances.

Most people see AI as a technology with varying degrees of risk and benefit. One study in particular (the Charities Aid Foundation report from 2023) found that, on balance, respondents saw more opportunities than risks, while 34% felt the benefits and risks were about equal.

The Alan Turing Institute’s Understanding Public Attitudes to AI report found that the public see clear benefits in many different uses of AI, particularly technologies relating to health, science and security.

However, risks, like opportunities, are seen in relative rather than absolute terms: some risks are deemed far greater than others. Studies into charities and AI are admittedly still limited in number, but comparisons and shared themes can be found between them.

Risks

The Charities Aid Foundation report, the most comprehensive I have found to date, stated that of the seven identified risks posed by AI, the top three were:

  1. Reduction of workforce (27%)
  2. Risk of a data breach (23%)
  3. Making biased decisions (15%)

Other notable concerns included ‘making unethical decisions’ (10%) and ‘becoming disconnected from their cause’ (11%).

Losing connection to the cause, losing trust in the organisation’s decisions, and the impact on the workforce are three areas where people raised particular concerns about AI technology.

AI could make charities more impactful and responsive

In terms of opportunities, clear themes could also be identified in the Charities Aid Foundation report. The top three were:

  1. Faster response to disasters (28%)
  2. Helping more people (25%)
  3. Being more efficient (16%)

In essence, being more responsive and more efficient were identified as the greatest benefits AI could offer charities. It is interesting to note that the positive characteristics most often identified by charities are not necessarily valued equally by the public.

Charities ought to consider how closely their AI priorities align with public opinion and, if a gap exists between the two, why that might be.

People who think the public don’t care about AI are kidding themselves

Transparency about how AI technology is used, and why, is a very important issue. The public know and accept that AI is going to play a part in their lives, and by extension in the work of the causes they support. Put simply, demonstrating the tangible benefits of using AI, and aligning those with real-world examples, is a powerful way to build trust and confidence in how organisations are using this technology.

Conversely, there are obvious risks to not being transparent about how AI is being used. The various reports highlighted the potential consequences: an erosion of confidence and trust, and ultimately of support, for organisations using AI.

Image: The Enigma Machine at Bletchley Park, England (Alex Donohue)

It is also naïve to assume that how AI technology is being used within charities is not an issue of concern to the public. It is, and there is sound evidence that people care about this issue very much: the Charities Aid Foundation report found only 13% of people would “pay not much or no attention” to what a charity said publicly about how it used AI.

The Charity AI – The Impact of AI In The Charity Sector report identified a number of common issues raised by people working within the sector. These can be summarised as:

  • Governance and legal issues / pace of change
  • Issues around authenticity and understanding how the technology works
  • The existential threat AI poses to jobs and career opportunities
  • Concerns about the technology making unfair and biased decisions
  • The danger that AI is going to widen the gap between the haves and the have-nots within the charity sector

Public attitudes towards AI, and in particular AI usage among charities, are complex and nuanced. But that isn’t to say there aren’t trends and common themes in people’s opinions – whether they identify as generally pro-AI or not. It is true that charities are likely to come under closer scrutiny than commercial organisations over their AI usage, largely because the public expect charities to operate to a higher ethical code.

Ultimately, the issue comes down to trust, authenticity and staying true to your values. Notably absent from the reports is any real discussion of charities using AI-generated imagery to represent their people, products, or services. This seems an obvious vulnerability given the reputational damage misleading or false imagery can do. Policies around AI usage must cover not only words but imagery, animations, and video footage too.

What the reports had to say

Charities Aid Foundation report (highlights)

The full Charities Aid Foundation report is available here.

Methodology: around 6,000 people were interviewed across 10 countries, followed by two focus groups in the UK to dig deeper into opinions. Participants were shown seven opportunities that AI presented to charities, and then seven potential risks. To avoid ordering bias, half the sample saw the risks first, while the other half saw the opportunities first.

Overall, the majority thought the opportunities outweighed the risks. However, low- and middle-income countries tended to be much more positive about AI, with Kenya the most receptive of all.

Benefits of using AI

The seven opportunities presented were:

  • Managing volunteers better (7% positive)
  • Measuring success (7%)
  • Making informed decisions (12%)
  • Being more efficient (16%)
  • Personalising communications (5%)
  • Faster response to disasters (28%)
  • Helping more people (25%)

When respondents were forced to prioritise, the two most popular opportunities were those with the most direct human benefit: ‘faster disaster response’ and ‘helping more people’.

‘Increased efficiency’ was also popular, as it could allow charity workers to spend more time helping their communities.

Risks of using AI

The most likely risk was felt to be charities reducing their workforce, as certain tasks become automated.

The seven risks presented were:

  • Risking a data breach (23%)
  • Becoming disconnected from their cause (11%)
  • Losing support from donors (6%)
  • Making biased decisions (15%)
  • Making unethical decisions (10%)
  • Reducing their workforce (27%)
  • Reducing accessibility (8%)

When asked to prioritise, respondents saw ‘reducing their workforce’ and ‘a data breach’ as the two most likely risks for charities. ‘Making biased decisions’ also came up strongly in the focus groups.

In the focus groups, it also became clear that even those who see the risks as being dominant are not completely averse to charities using AI. The key issue was maintaining what makes charities special.

Charities need to communicate carefully about how they use AI. People are watching and expect transparency about how AI is being used. Just 13% of people would pay not much or no attention to what a charity they supported said publicly about how they were using AI.

The challenge for charities is that AI is a tricky subject to grasp without real world examples of how it’s used. Discussions in the focus groups showed that people don’t really know how AI works, so charities need to consider bringing it to life for their audiences and being example-led. People often cite examples of bad technology as bad AI, such as a frustrating phone-tree or a supermarket self-checkout, and these perceptions can dominate when AI is discussed at a high level without tangible examples.

Fundamentally, the public want to know that a charity is not losing sight of what matters most: the connection between the organisation and the cause it supports. That’s why it’s the positive human benefits – helping more people, disaster relief, making better decisions – that are the easiest for people to grasp and, therefore, the most supported.

Conversely, reactions would be extremely negative if a charity were seen to be using AI to drastically slim down its workforce. However, other major risks such as a data breach need to be taken in context: while rated the second most likely risk, people do not see the use of AI as making a data breach any more likely than it already is.

Losing sense of your values comes at a heavy price

While scepticism about AI usage does exist, there was no evidence of people saying charities should not use AI at all. Broadly, people see both the benefits and the risks. Their concerns naturally centre on how charities use the technology, and what they use it for.

Illustrating the positive impact helps tackle those concerns, but threats remain even here. For example, being more responsive and being able to help more people are viewed as big positives. But a charity losing connection with its cause, or replacing workers with AI technology, would be seen as highly negative and would likely damage public perception of the charity.

When discussing AI usage, we should always aim to bring things back to the cause. We can help more people. We can be more responsive. Doing so demonstrates the true benefits AI can offer.

Charity AI impact (highlights)

The Charity AI – The Impact of AI In The Charity Sector article considers a range of topics around AI usage at charities.

Several important issues are raised:

  • Lack of transparency about how AI works
  • Danger of bias, inaccuracy, and unfairness creeping into processes
  • Growth of AI technology rapidly outpacing legal protections, particularly with Intellectual Property
  • Potential for AI to widen gaps in the charity sector between the haves and have nots
  • The risk of AI taking on work previously done by people – though this needs to be put in context, as the same argument has been made about the internet, PCs, Windows and so on

This is notable because the same themes are prevalent in other studies, reinforcing the point that these are all important challenges for charities.

Alan Turing Institute – Understanding Public Attitudes to AI (highlights)

The full Alan Turing Institute – Understanding Public Attitudes to AI report is available here.

Highlights from the survey results analysis

The survey found that the public see clear benefits for many uses of AI, particularly technologies relating to health, science, and security.

For example, when offered 17 examples of AI technologies to consider, respondents thought the benefits outweighed concerns for 10 of these:

  • 88% of the public said that AI is beneficial for assessing the risk of cancer
  • 76% saw the benefit of virtual reality in education
  • 74% think climate research simulations could be advanced using the technology

The survey also showed that people often think speed, efficiency and improving accessibility are the main advantages of AI. For example, 82% think that earlier detection is a benefit of using AI with cancer scans and 70% feel speeding up border control is a benefit of facial recognition technology.

However, attitudes do vary across different technologies. Almost two thirds (64%) were concerned that workplaces would rely too heavily on AI for recruitment, rather than using professional judgement, and 61% are concerned that AI will be less able than employers and recruiters to take account of individual circumstances.

When asked what would make them more comfortable with the use of AI, almost two thirds (62%) chose ‘laws and regulations that prohibit certain uses of technologies and guide the use of all AI technologies’. A further 59% chose ‘clear procedures for appealing to a human against an AI decision’.

Conclusions of the Alan Turing report

  • Policymakers and developers of AI systems must work to support public awareness and enhance transparency surrounding the use of less visible applications of AI used in the public domain.
  • The findings show that the public expect many AI technologies to bring improvements to their lives, particularly around speed, efficiency and accessibility.
  • While people are positive about some of the perceived benefits of AI, they also express concerns, particularly around transparency, accountability, and loss of human judgement.
  • People call for regulation of AI and would like to see an independent regulator in place, along with clear procedures for appealing against AI decisions.
  • People in older age groups are particularly concerned about the explainability of AI decisions and the lack of human involvement in decision making.
  • Lastly, policymakers must acknowledge that the public have complex and nuanced views about uses of AI, depending on what the technology is used for.

As with any human relationship, trust is earned over time. It can be lost fairly easily if the wrong decisions are made, and once lost it may be hard – or impossible – to win back. Openness about AI is too important an issue to gamble on. If charities don’t provide the answers their supporters are looking for, those supporters might start to wonder why, or go looking for the answers themselves.

Alex Donohue (May 2024)