Is your Not for Profit practising safe AI?
In a recent webinar I presented, 49% of attendees said they knew their staff were using AI (Artificial Intelligence) tools like ChatGPT. Yet the same people also told me that only a fraction of their organisations had completed cybersecurity training in the last twelve months.
That combination worries me, because it means these organisations are not practising safe AI.
How AI has changed the workplace
Much has changed since ChatGPT was released to the public in November 2022, with wide-scale adoption this year.
Everywhere you turn, other AI tools are being developed, released or discussed. And there is a good reason for it – AI could improve productivity.
You and your staff may already use it for search, content development and automating processes. Software developers are also using it to write or improve code.
The cybersecurity risks with AI
Such widespread use of new technology also brings new cybersecurity risks. Because these tools are typically trained on the information fed into them, anything you supply may be absorbed into the model.
One well-known example of staff leaking sensitive information into ChatGPT occurred at Samsung, where engineers uploaded confidential source code into the tool while trying to improve it, exposing trade secrets in the process.
While you may not have staff developing code, they do work with other sensitive information about your stakeholders, whether donors, members or clients, and that presents similar risks.
For example, some of your staff may have already found that AI tools are great at organising and cleansing data. That is a big risk if it is personal information covered by the Privacy Act.
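If your team does want help from an AI tool with exported data, a safer habit is to strip out the personal details first. Below is a minimal sketch in Python of that idea, assuming your CRM can export a CSV file with a header row. The file names, column names and redaction patterns are my own placeholders rather than part of any particular product, and a real de-identification process would need more care than this.

```python
import csv
import re

# Columns assumed to contain personal information -- adjust to match your own export.
SENSITIVE_COLUMNS = {"name", "email", "phone", "address"}

# Simple patterns for stray emails and phone numbers in free-text columns.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s\-]{7,}\d")

def redact(value: str) -> str:
    """Mask emails and phone numbers found inside a text value."""
    if not value:
        return ""
    value = EMAIL_RE.sub("[EMAIL REMOVED]", value)
    value = PHONE_RE.sub("[PHONE REMOVED]", value)
    return value

def redact_csv(src_path: str, dst_path: str) -> None:
    """Write a copy of the export with personal details stripped out."""
    with open(src_path, newline="", encoding="utf-8") as src, \
         open(dst_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            cleaned = {}
            for column, value in row.items():
                if column.lower() in SENSITIVE_COLUMNS:
                    cleaned[column] = "[REDACTED]"        # drop the whole field
                else:
                    cleaned[column] = redact(value)       # catch stray emails/phones
            writer.writerow(cleaned)

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    redact_csv("donor_export.csv", "donor_export_redacted.csv")
```

Even with a step like this in place, the safest default for most Not for Profits is simply to keep stakeholder data out of public AI tools altogether.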
Ways of practising safe AI
Fortunately, more AI tools are coming online that promise to keep your data secure. Microsoft is a significant investor in OpenAI, and while we are still waiting for its Copilot additions to the Microsoft 365 suite, it has already released Bing Chat Enterprise. This chat-based search tool is very similar to ChatGPT, but with commercial data protection, meaning your prompts and responses are not saved or used to train the underlying models.
Check whether your Microsoft licence already includes this Enterprise edition at no extra cost. Your administrator may need to activate it.
Other vendors, particularly CRM providers, are also planning to release AI functionality soon. The first wave is focussed on predictive analytics and reporting, i.e. spotting trends in your data early to help you make decisions. Automating repeatable tasks and customer service are also strong contenders for these improvements.
Regardless of the application, these systems will offer safer options over time. In the meantime, it’s important NOT to put sensitive information into public AI models.
Creating a policy for safe AI use
One of the main preventative steps your organisation should take is to add an Acceptable Use of AI policy to your existing policies. It may sit within your Data Privacy Policy or Acceptable Use of Data Policy. Regardless of what it’s called, your staff need to know what is appropriate when using AI – especially since AI use is so difficult to detect right now.
As a minimum, I would recommend prohibiting staff from submitting personal or confidential information to any AI tool that has not been approved by the organisation.
Here’s an example of such a clause, actually prepared by Bing Chat Enterprise itself. I used the prompt “Write me a sample clause for the acceptable use of AI to be added to our staff data privacy policy.”

This is not a perfect clause, but it is a good first draft. It also shows, once again, how these tools can improve staff productivity in your organisation.
Once the policy is written, it should be incorporated into the cybersecurity training you hopefully run every six months.
Summary of practising safe AI
AI tools have the potential to greatly increase staff productivity. However, the cybersecurity risks must be addressed if they are used.
So, Not for Profits should:
- Recognise that their staff likely already use AI tools;
- Create an Acceptable Use Policy and incorporate this information into regular cybersecurity training;
- Only use tools that safeguard organisational information; and
- Never put sensitive information into public AI tools like ChatGPT.
I regularly help Not for Profits with strategic IT decisions, including identifying cybersecurity risks. Let me know if you need some help.
P.S. If you found this article helpful, you might want to read these ones too:
- Why I can’t wait until AI is incorporated into Microsoft Office
- How should Not for Profits use AI?
- What is AI or Artificial Intelligence?
Tammy Ven Dange is a former charity CEO, Association President, Not for Profit Board Member and IT Executive. Today she helps NFPs with strategic IT decisions, especially around investments.