Your Not for Profit staff are likely using AI today. But do you know the 5 risks of AI adoption in Not for Profits?

It was about two years ago that ChatGPT went live, changing everyone’s perspective on a little-understood capability called Artificial Intelligence, or AI.

Since then, the average office worker has learned how to be more productive with these tools, and vendors are selling their wares harder than ever to recoup their investments.

But like any emerging technology, the value of these tools comes with risk, especially for Not for Profits, where privacy, compliance, and trust are critical.

Here are five key risks your organisation should consider before adopting AI tools like ChatGPT, Microsoft Copilot, or other generative AI platforms.

1) Privacy and Confidential Data Leaks

There have already been cases in Australia where privacy rules were breached by uploading sensitive data into AI tools.

Furthermore, poor data management in locations like SharePoint can easily expose confidential or private data through an AI tool like Copilot, which can surface any content a user’s permissions allow.

Any use of AI must consider how the tool may reuse your information and what restrictions apply to your data. It’s important for NFPs to have clear policies about what information can be entered into AI tools, and to understand how that data might be stored, reused, or shared.

2) Reliance on False Information

Most people are aware that AI can generate false information, or “hallucinate”, rather than stating, “I don’t know.”

While this may be a minor risk for most queries, if it occurs while analysing revenue projections, answering a WHS question, or creating a case summary for a client, there could be significant repercussions.

Human oversight of these tools is imperative to mitigate this risk.

3) Exponential Costs

The free versions of AI tools like ChatGPT and Gemini have led many to believe that AI is a free or low-cost tool.

However, when AI is enabled within business applications, the potential costs can be extraordinary and hard to budget for.

As an example, Salesforce currently charges $2+ per conversation handled by an “Agent.” At that rate, a chatbot managing just 1,000 conversations a month would cost more than $2,000 each month.

4) Lack of Skills

Some early adopters of AI tools are now questioning the value of these investments.

This can be partially blamed on a lack of the skills and knowledge needed to utilise these tools effectively.

This applies not just to end users but also to administrators, who in more advanced use cases must ensure that workflows and processes continue to function correctly and remain relevant.

Therefore, ongoing training is essential.

5) Dirty or Incomplete Data

AI runs on data. If your existing systems have gaps, inconsistencies, or poor-quality data, then the AI outputs will also be flawed.

Before jumping into AI, it’s worth reviewing your data governance, quality, and structure. Clean data isn’t optional; it’s foundational for getting a solid return on your AI investment.

Summary

Your staff are already using AI, whether your organisation has formally adopted it or not. But with opportunity comes risk.

It’s important that your NFP understands the 5 risks of AI adoption: data privacy breaches, false or misleading outputs, hidden costs, lack of user skills, and poor data quality.

With the right governance and guardrails, you can harness the benefits of AI while protecting what matters most: your people, your data, and your mission.

I regularly help Not for Profits make IT investment decisions and manage the corresponding risks. Let me know if you need some help.


Tammy Ven Dange is a former charity CEO, Association President, Not for Profit Board Member and IT Executive. Today, she helps NFPs with strategic IT decisions, especially around investments.