Charities risk losing public trust if they use artificial intelligence (AI) to influence decisions about who receives help, according to new CharityTracker research into public attitudes towards AI use within the charity sector.
The December 2025 study of 3,000 nationally representative UK adults* finds that public opinion on charity use of AI is cautious, conditional, and highly context-dependent, rather than uniformly negative. Just over a third of people (36%) feel positive about charities using AI, while a similar proportion (37%) are unsure, and 27% feel negative.
Age and familiarity with AI shape confidence and attitudes around how it can be used. CharityTracker’s survey found that while 39% of adults have used chat or writing assistants and 37% have used voice assistants in the past 12 months, a substantial minority, 32%, report no personal AI use at all. Among adults aged 65 and over, this rises to 55%.
Attitudes on the acceptability of charities using AI vary sharply depending on how the technology is applied. Using AI to help decide who receives support is the least acceptable use tested: more people feel it is unacceptable for charities to use AI in this way than feel it is acceptable (38% vs. 33%). Discomfort is strongest among older adults and those with limited personal experience of AI. Nearly half of people aged over 55 (46%) say this use is unacceptable, alongside 49% of those who have never used AI.
CharityTracker analysis suggests this concern reflects unease about AI influencing high-stakes judgements, rather than opposition to the technology itself. Even where people recognise potential efficiency gains, there is a clear expectation that decisions affecting access to help remain human-led, accountable, and transparent.
By contrast, the research identifies clear permission spaces where AI use is widely supported. Nearly two-thirds of the public (64%) say it is acceptable for charities to use AI to detect fraud or scams, and a majority (53%) are comfortable with back-office productivity uses such as scheduling or financial planning. Support for these applications is high across age groups and particularly strong among those who have used AI personally in the past year.
Taken together, the findings point to what researchers describe as a conditional licence to operate. The public is open to AI where it protects funds, improves efficiency, or supports staff, but far less comfortable when AI appears to replace human judgement or frontline care.
The research also highlights synthetic media as an area of potential reputational risk. The use of AI-generated images or videos divides opinion, with 40% finding it acceptable and 31% unacceptable. Acceptance is significantly lower among older audiences, where concerns about authenticity and trust are strongest. These concerns matter because visual storytelling plays such a central role in charity communications and appeals.
Human-facing uses of AI are similarly sensitive. Just 41% of people find it acceptable for charities to use AI to answer enquiries by chat or email, compared with 30% who find this unacceptable. Alongside this, 44% say it is important to have an easy way to speak to a human instead. CharityTracker analysis suggests this reflects concern about losing access to human contact, particularly where AI may be perceived as a barrier to support or advice.
Across all use cases, transparency and safeguards consistently outweigh innovation in shaping public confidence. Half of respondents (50%) say charities should clearly tell people when AI is being used, while 38% want to know what data is used and how. Older adults place particularly high value on visibility, reassurance, and the ability to opt for human contact.
While people recognise potential benefits of AI, including saving staff time (29%), improving services (27%), and helping charities use money more efficiently (27%), perceived risks remain prominent. Data security (36%), loss of the human factor (35%), and the risk of serious mistakes (31%) are the most common concerns. Only 13% of people are comfortable with sensitive personal data being used in AI systems, underlining strong expectations around data minimisation and consent.
These findings highlight a significant opportunity for charities to embrace AI in ways that strengthen impact and public confidence. When used transparently and with clear human oversight, the technology has the potential to help organisations do more with limited resources, protect funds from fraud, and free up staff time for frontline work. For charities under growing financial and operational pressure, responsible and visible use of AI offers a practical route to innovation that aligns with public expectations and charitable values.
Speaking about the findings, Ashley Rowthorn, Executive Director at CharityTracker, said: “Charities are rightly exploring AI to manage pressure on services and use their resources more effectively, but this research shows how easy it is to get this wrong.
“The public is not rejecting AI outright. Where it supports people, protects funds, or improves efficiency, there is real permission. But when it starts to replace human judgement in decisions about who receives help, trust quickly falls away. Familiarity with AI also plays a role. Those who have used AI personally in the past year are more comfortable with its use by charities, while non-users are more cautious. Strong governance, transparency, and human accountability are essential to maintaining public confidence.”