It is difficult to work in the nonprofit industry and avoid working with data. You have data from constituents, beneficiaries, donors, members, volunteers, partners, and on and on.
The use of data has become an essential part of our work as a sector. Likewise, protecting and securing that data should also be an essential part of our DNA.
Artificial intelligence (AI) will bring positive change and help transform nonprofits. As leaders, we need to encourage adoption, because AI will become a big part of our work in the years ahead. However, we must also be careful with the data entrusted to us, or we’ll miss the opportunity and introduce new challenges instead.
The Risks of Sharing Data With AI
Consider this: If a stranger walked up to you on the street and asked to look at your data, you wouldn’t simply hand over your files to them. You wouldn’t even let them sneak a peek.
But when you share your data with an AI agent, that is exactly what you’re doing — handing it over to an unknown entity.
It’s actually even worse than letting that stranger on the street look at your data. That’s because human brains have imperfect memories. That stranger might scan through your data and retain a few details, but forget most of it.
Think about it: You can remember the lyrics to that Salt-N-Pepa song from 30 years ago but fail to recall the name of the person you met 30 minutes ago.
AI memories won’t have these same challenges.
When you share your data with an AI agent, there is no mechanism that forces the AI to forget what you shared. This stands in stark contrast to most databases today, which record communication preferences and consent to receive communication.
Successful adoption will require trust and confidence. Your team members will need clear boundaries or rules of engagement for AI use.
Blazing a Path Forward
Responsible AI starts with responsible data.
Wherever you are in your AI journey, Fundraising.AI’s responsible AI framework is a must-read. This group of nonprofit leaders is helping to ensure that our industry can reap the rewards of AI without harming itself.
For anyone thinking the framework only applies to tech giants and AI researchers, please consider how the data we share with AI will impact our industry. What can or can’t be shared?
You want your staff to take advantage of AI in a safe and secure way. That requires a careful and measured approach.
Consider these three scenarios:
- When you share data through ChatGPT’s consumer interface, you are, by default, allowing OpenAI to use your conversations to train and improve its models.
- When you share data through one of OpenAI’s APIs, OpenAI does not use that data for model training by default. Still, verify the current terms rather than assuming your data is safe.
- When you use someone else’s AI agent built on top of a model like OpenAI’s GPT or Meta’s Llama, you might be granting that vendor its own rights to your data.
Slight variances in how you upload data to an AI agent or model can have a big impact on who has access to it.
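To make the second scenario concrete, here is a minimal Python sketch using OpenAI’s official SDK. The model name and prompt are placeholder assumptions, and you should confirm the vendor’s current data-use terms before sending anything real.

```python
# Minimal sketch of scenario two, using OpenAI's official SDK
# (pip install openai). Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Per OpenAI's stated policy, data sent through the API is not used
# to train models by default, while the same text pasted into the
# consumer ChatGPT interface may be used unless you opt out.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize our spring appeal results."}],
)
print(response.choices[0].message.content)
```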
4 Steps Your Organization Can Take to Protect Its Data
It’s long been said that if something is free, then you’re the product. This has never been truer.
AI models need data to learn and improve, so be wary of free services. Assume they are selling your data or using it to train future AI models.
What do you need to do now to protect your data?
1. Review the Fundraising.AI Responsible AI Framework
And consider signing the framework.
The more public support the framework receives, the more it moves from suggested best practices to industry standards. You should also sign up to attend the Fundraising.AI Virtual Global Summit in October.
2. Develop an Acceptable Use Policy for AI Agents
This policy should require your team to vet any AI agent’s terms of service. You don’t have to avoid AI agents that share your data with their AI research projects, but you do need to know which AI agents will do that. By default, this policy should prevent your team from sharing personally identifiable information (PII) or sensitive data with AI agents.
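To give such a policy teeth, a default-deny check can screen prompts for obvious PII before they leave your systems. Here is a minimal Python sketch; the patterns are illustrative assumptions and no substitute for a vetted data-loss-prevention tool.

```python
import re

# Illustrative patterns only; production screening should rely on a
# vetted data-loss-prevention tool, not a handful of regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def screen_for_pii(text: str) -> list[str]:
    """Return the PII types detected; an empty list means clear to send."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Draft a thank-you note for jane.doe@example.org, 555-867-5309."
found = screen_for_pii(prompt)
if found:
    raise ValueError(f"Blocked by AI acceptable use policy: prompt contains {found}")
```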
3. Seek Partners and Tools That Help You Secure and Protect PII Data
Your partners should be providing solutions that help mask or anonymize your data. For example, your CRM ID can be sent with non-PII data. You should avoid sharing first name, last name, street address, phone number or email address with an AI agent.
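To illustrate, a simple pseudonymization step can strip identifying fields and keep only the CRM ID plus non-PII attributes. A minimal sketch follows; the field names are hypothetical stand-ins for your own CRM’s schema.

```python
# Hypothetical field names; map these to your CRM's actual schema.
PII_FIELDS = {"first_name", "last_name", "street_address", "phone", "email"}

def mask_record(record: dict) -> dict:
    """Drop identifying fields, keeping the CRM ID and non-PII data."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

donor = {
    "crm_id": "A-10293",
    "first_name": "Jane",
    "last_name": "Doe",
    "email": "jane.doe@example.org",
    "giving_tier": "mid-level",
    "last_gift_amount": 150,
}

safe_payload = mask_record(donor)
# -> {"crm_id": "A-10293", "giving_tier": "mid-level", "last_gift_amount": 150}
# Only safe_payload leaves your systems; the CRM ID lets you join the
# AI agent's output back to the full record on your side.
```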
4. Update Your Contracts
Make sure all contracts include language that prevents your organization’s PII data from being shared with AI agents and AI research projects.
We all have a role in securing and protecting the data we work with every day. We need to be intentional about what we will and will not share with AI agents.
Your donors trust you with their data. They aren’t expecting their private information to end up in a silicon brain.
Charles Lehosit has been described as an entrepreneur, solutions architect, strategist, technologist and futurist. As vice president of technology at RKD Group, Charles excels at developing solutions that answer clients’ business needs. Charles understands what it takes to deliver successful projects, and he has done so for clients like Coca-Cola, the U.S. Army and General Motors before moving into the nonprofit industry.
Charles brings deep digital experience to clients, having performed virtually every job an IT professional can hold. He is a leading expert in application development, API integration, CMS, CRM/eCRM, email marketing, lead generation, mobile development, SDLC and website development.