Why non-profits can’t afford to get AI wrong

Leadership

New trust-in-AI research from KPMG and the University of Melbourne was applied to the non-profit sector this week. Here’s how the sector can use the technology as a surgical tool to have the greatest impact.
AI should be deployed as a sharp tool, not a blunt instrument, according to Emma Crichton, AutogenAI’s APAC CEO. Image: Getty

In a room full of non-profit leaders at The Langham in Melbourne’s Southbank, a recent panel discussion explored how the sector can strategically integrate artificial intelligence into its operations.

The conversation featured the CEO of Good Shepherd ANZ, the APAC CEO of AutogenAI, and the Chair of Trust at the University of Melbourne, who recently co-authored a global AI report. That report, Trust, attitudes and use of artificial intelligence: A global study, recommends four factors be considered when implementing AI: engaging trust, boosting AI literacy, strengthening governance, and exercising transformational leadership.

Trust study insights for not-for-profits

While the report doesn’t have a dedicated section for charities, its core findings on public trust, risk, and the “benevolent” use of AI are highly relevant and offer insights that can be applied to the non-profit sector.

According to the research, spearheaded by Professor Nicole Gillespie, public trust in AI remains low despite widespread use. This is largely due to concerns about risks such as misinformation, job displacement, and data security. The report reveals that people are more willing to trust AI when its purpose is explicitly benevolent – a key principle that the not-for-profit sector can leverage.

“I think people lean in a lot more, and are a lot more forgiving when the whole purpose of the AI is to do good,” Dr Gillespie noted on the panel.

Nicole Gillespie, University of Melbourne Professor; Stella Avramopoulos, Good Shepherd ANZ CEO; and Emma Crichton, AutogenAI APAC CEO, discuss using AI in not-for-profit organisations. Image: AutogenAI

“It is benevolent purpose which really underpins the trust.”

A central theme of both the report and this week’s panel discussion is the importance of taking a ‘risk-stratified approach’ to AI, classifying AI applications based on their potential for harm. Low-risk applications can be trialled and adopted quickly, the report advises, while high-risk uses require caution, additional governance, and a “human in the loop.”

Emma Crichton, the APAC CEO of AutogenAI, highlighted the ineffectiveness of a one-size-fits-all approach to AI, stating that “AI for everyone is AI for no one.”

“The true value comes from using AI as a surgical tool to solve a specific problem, not as a blunt instrument applied universally.”

Emma Crichton

Crichton cautions organisations against “acquiring a new AI tool and trying to use it everywhere.”

Good Shepherd’s path forward

Panellist Stella Avramopoulos, the CEO of Good Shepherd ANZ, agreed, stating that she implements a risk-stratified approach at her organisation.

Good Shepherd oversees 15 family violence refuges, operates a 24-hour crisis hotline, facilitates the No Interest Loan Scheme, and is the largest provider of financial wellbeing programs in the country.

The not-for-profit manages more than 100,000 calls for financial assistance a year, and is now trialling Copilot to explore how AI can support its mission.

The panellists advise that non-profits adopt AI proactively. They note that waiting for external grants to fund innovation is often no longer sufficient, and that organisations need to leverage their balance sheets to invest in the future.

Trust and governance

Dr Gillespie shared several key takeaways from the research, which was based on a global survey of 48,000 people across 47 countries.

It found that building stakeholder trust requires moving slowly and ensuring human involvement in consequential decisions. Gillespie revealed that a problem-led approach to AI drives trust, whereby AI is used to solve a clear, specific issue rather than being applied indiscriminately.

The ‘human-in-the-loop’ construct posits that human input is required at key points in an automated system. Image: Getty

The conversation also addressed the historical reasons why the non-profit sector has been slower than others to adopt new technologies. A central factor is the inherent risk aversion of boards and leaders, who must prioritise the safety of their clients above all else. With limited resources, non-profit leadership can be hesitant to make large investments in unproven technology, especially when the compliance and assurance costs of their work continue to rise.

Additionally, the complexity of human services work – where clients are not “nice linear to-do lists” – makes it difficult to automate processes. This cautious culture, combined with a lack of dedicated tech funding and expertise, has been a significant barrier to change.

The essential ‘human-in-the-loop’

The discussion stressed the need for a human to be involved in high-stakes decisions made by AI. The panellists note that this principle is intended to mitigate the risk of errors and ensure accountability, which is particularly important in a sector dealing with vulnerable populations.

In a sector where a caseworker’s ability to read a client’s subtle body language is crucial, AI should serve as an augmentative tool rather than a replacement. The ultimate goal, it was concluded, is a synthesis of human expertise and technological capability.



Shivaune Field
Business Journalist