AI in the Philanthropic Sector:

An Unmissable Opportunity or an Ethical Dilemma?

11 July 2024

Juliet Valdinger, Philanthropy Consultant


Before we start, let’s be clear about what Artificial Intelligence (AI) is. AI is more than ‘technology’. It does more than collate, process, analyse and provide static data. It assembles and evaluates data at a scale far beyond what the human mind can process. It identifies trends, patterns, and insights. It can create pictures, films, music, and so much more. And, although even its developers cannot fully explain how, it can make decisions.

So, what does it mean for the philanthropic sector?

An ever-increasing number of funders, charities, socially driven organisations, and enterprises are considering what the role of AI could be in the philanthropic sector. Can it improve decision-making processes and make them more equitable? Can AI-powered software provide a platform for collective intelligence and level the playing field? Can it increase efficiency by providing data-driven insights, predictive analytics, and automation? Can it improve service delivery by increasing our understanding of society’s most pressing needs?

Despite all these potential benefits, there is not yet a cohesive approach to using AI. Many see opportunities for a sector that needs to evolve quickly and respond to various crises. Others are worried about the ethics of using AI and the uncontrollable speed of change it brings with it. It is worth noting that although ‘philanthropy’ is used as an umbrella term for the entire third sector, different stakeholders hold different views on using AI: most charities are trying to use it, whereas most funders remain reluctant to do so.

In this blog post, I will look at the opportunities AI offers to charities, reflecting on some case studies. I will also explore its challenges and how it can be used ethically.

AI’s Opportunities for the Philanthropic Sector

AI can primarily be used in two ways in the philanthropic sector:

  1. It can be applied to improve and streamline the grant-making processes for charities and funders alike.
  2. It can be used to better target how societal crises are addressed and managed and, in doing so, improve service delivery, promoting a proactive rather than reactive approach.

A few examples of how it can do so include:

  • Providing insights (e.g. scoping the landscape, or identifying needs and opportunities);
  • Analysing data (e.g. identifying trends, patterns, and insights, deciding where to allocate resources, or measuring impact);
  • Automating routine tasks (e.g. completing and managing applications, tracking performance, or managing portfolios); and
  • Communicating with all stakeholders (e.g. delivering tailored and/or group outreach messaging and distributing materials).

Case Studies

Let’s look at several examples of how AI is currently being used in the philanthropic sector.

Since its inception in 2012, the Danish start-up Be My Eyes has been at the forefront of creating technology to support the global community of over 250 million people who are blind or have low vision. Its work has been instrumental in connecting people with visual impairments to volunteers who can help with various daily tasks, ranging from identifying products to navigating complex environments like airports. With the introduction of its Virtual Volunteer tool, powered by the GPT-4 AI model, users can now receive this assistance in natural, human-like language. Because the Virtual Volunteer has an understanding of discourse comparable to that of a human volunteer, it can better assist those who are blind or have low vision.

Langland Conservation is a UK-registered charity that provides intelligence support to conservation partners. It is currently developing an AI model to map and track habitat degradation in a key Giant Pangolin habitat in Kenya, a region experiencing rapid habitat decline due to widespread forest loss and the mass conversion of viable habitat to farmland. The charity uses AI techniques to collate data and shares this information with its local partners. Langland is conducting this AI project as part of its wider research into deep learning and remote sensing for conservation.

In Sierra Leone, a ChatGPT-like AI chatbot called TheTeacher.AI was piloted for teachers, tailored to the local curriculum and operable over unreliable and irregular internet connections. At first, teachers struggled to use the platform; one issue they faced was being unable to frame their queries in a way that obtained useful responses. However, with time, frequent usage, and training, teachers were able to explain concepts, plan lessons, and create materials using the platform. This demonstrates that using AI software is not always intuitive, but with appropriate training, experimentation, and the sharing of helpful applications, it can benefit the education sector.

The UK is also experimenting with using AI in education. The Government has invested £2 million in Oak National Academy to support teachers in using AI in their teaching methodology.

The Challenges with AI: Many Unanswered Questions

Unfortunately, there are a great number of hurdles and obstacles the philanthropic sector has encountered, and will continue to encounter, when it comes to using AI.

AI technology and platforms are being rolled out without appropriate consideration of their short and long-term impacts. Moreover, there are still numerous unanswered, and potentially problematic, questions about how AI will be used:

  • Who will design the structure of the technologies, and how will the data be kept accurate and up to date?
  • How will the machine decide what is important, what the priorities are, and what the right solutions are?
  • How will every member of staff receive training in how to use the AI systems?
  • What will be the carbon impact of its usage?
  • Is the implementation of AI financially viable?
  • How will AI initiatives be scaled across different countries and languages?
  • Will every country have the appropriate infrastructure and data regulation? Will it create more inequality?
  • Problems and solutions for social and environmental challenges change minute by minute, and AI is not yet able to pick up the nuances of each stakeholder’s independent desires, knowledge, goals, and successes, or whether and where these fit on a moral compass. And whose moral compass is the right one to follow?

Without answering these questions, we leave ourselves open to future problems and to implementing solutions that only create further inequity and inequality.

But one of the biggest challenges with AI is how it can be implemented and used ethically.

Considering the Ethics of Using AI

Although ethics must be considered when using AI in the philanthropic sector, this concern is not unique to it. Scientists, think tanks and civil society organisations have raised the alarm over the potential unethical use of AI across all industries.

One of the key concerns is that there is no single entity overseeing its ethical implementation and application, setting over-arching regulations, or holding its various stakeholders accountable.

The European Commission has published the Ethics Guidelines for Trustworthy AI, but these Guidelines are not legally enforceable.

The Guidelines propose that trustworthy AI should be:
1. Lawful – respecting all applicable laws and regulations;
2. Ethical – respecting ethical principles and values; and
3. Robust – both from a technical perspective and with regard to its social environment.

The Guidelines also include seven key requirements that AI systems should meet to be deemed trustworthy:

1. Human agency and oversight;
2. Technical robustness and safety;
3. Privacy and data governance;
4. Transparency;
5. Diversity, non-discrimination and fairness;
6. Societal and environmental well-being; and
7. Accountability.

In practice, this means that those using AI must regularly assess and monitor their data to ensure the accuracy of the information and predictions that are generated. This requires time and money and, without enforcement, is unlikely to happen rigorously.

Furthermore, the three most critical ethical dilemmas identified with using AI are:

1. Privacy – the state of not being disturbed by others or being free from public attention. Key areas of privacy have been recognised by the European Convention on Human Rights and the Universal Declaration of Human Rights, and privacy is a running thread through the GDPR. However, many AI systems rely on large datasets, which may include the personal information of donors, beneficiaries, or other stakeholders. Without proper safeguards and consent mechanisms in place, this data risks being misused, breached, or exploited.
2. Bias – AI algorithms can perpetuate and amplify societal biases present in their training data, which can lead to discriminatory outcomes or unfair treatment of certain groups. Biases are endemic across most AI datasets, which raises concerns, especially for the vulnerable populations served by philanthropic organisations.
3. Transparency – many AI systems operate as ‘black boxes’, making it difficult to understand how decisions are made and how personal data is being used. This lack of transparency can undermine trust and raise concerns among stakeholders, particularly in the criminal justice, finance, and healthcare industries.

Ultimately, protecting privacy should be a core consideration as the philanthropic sector embraces AI technologies, ensuring that the benefits of AI are realised without compromising the rights and dignity of the individuals and communities it serves.

An independent Legal and Ethical Review Board could be set up to hold organisations accountable, set regulations, ensure compliance, and advise the philanthropic sector on how to use AI ethically.


Conclusion

AI is poised to revolutionise philanthropy thanks to its powerful data analysis capabilities and the collaborative and delivery tools it can support. But to ensure AI has a positive impact on the philanthropic sector, more time is required to ensure its ethical development, implementation, and application.

Stakeholders in the philanthropic sector must work together to unlock AI’s potential and ensure its benefits are shared equally across the sector. AI’s potential is great, but its capacity to exacerbate existing inequalities reminds us that it must be considered as carefully as any other advanced technology. Its potential impact on humankind is enormous and could be very exciting.

Tools and Further Resources:

If you wish to learn more about AI or are looking for AI-powered tools:

1. One of the leading players in the philanthropy sector is Zoe Amar. Zoe has built a portfolio of tools that are in some ways more appropriate for charities than for funders, but the majority of her tools can be used by both groups.
2. Through her work at We and AI, Tania Duarte and her community have put together resources to teach people about AI and its impact on society. These include blogs, toolkits, workshops, quizzes and a free image library with more inclusive AI pictures.
3. Rachel Coldicutt, Executive Director of Careful Industries, is well versed in the social impact of new and emerging technologies. Amongst other things, her organisation provides training so others can understand the impact their use of AI is making on society and the environment.
4. From July 2024 until the end of August 2024, The Conduit Academy’s AI for Social Good online course will be available to the public. It is very informative and provides a useful introduction to what AI is, before offering theory and case studies on Responsible AI, AI and Health, AI and Education, AI and the Energy Transition, AI and Conservation, and AI and Charities. If you use the code ATHENA10 when you sign up, you will get a 10% discount.