The Risks and Rewards of Using Artificial Intelligence Within Your Nonprofit
The National Eating Disorders Association disabled its helpline chatbot after it recommended tactics that can encourage eating disorders, such as restricting calories and measuring skin folds. Vanderbilt University issued an apology after releasing a ChatGPT-written statement on the Michigan State University mass shooting that lacked empathy.
Rebekah Tweed, executive director at All Tech Is Human, cited these instances when she spoke about the risks and rewards of artificial intelligence (AI) in her session, “Striking the Balance: AI Efficiency and Donor Trust,” at the inaugural Fundraising.AI Summit held virtually this week.
“There are very real risks to relying primarily on AI-driven technical tools when you are dealing with sensitive and high-risk topics,” she said. “Especially harmful for nonprofits is this loss of trust. So for nonprofits, building trust with your donors is at the core of fundraising. You cannot do your work or fulfill your mission without that.”
The rewards, by this point, may be more familiar. Nonprofits often find ways to streamline workflows with generative AI: these tools can increase efficiency by automating laborious tasks, and small nonprofits with limited budgets can gain access to many of them for little to no cost.
Risks of Artificial Intelligence for Nonprofits
Regardless of an organization’s size, it’s critical to have responsible AI guidelines in place to mitigate harm. Your organization has spent years building relationships with donors, and a careless mistake can erode that trust. Here are four risks of using artificial intelligence carelessly.
1. Hallucinations
Generative AI can produce what have become known as hallucinations, or “inaccurate but plausible-sounding information,” Tweed said. So, it’s important that your team fully understands the limitations of these tools. Fact-checking outputs is essential even for internal use, and the responsibility is greater for anything shared externally, where errors carry real repercussions.
“If someone relies on that advice and is injured as a result, you could be liable,” Tweed said. “And beyond legal liability, you don't want reputational damage — like what happened with [National Eating Disorders Association].”
2. Data Privacy and Proprietary Information
Tweed recommended nonprofits avoid inputting sensitive data. Some generative AI platforms allow users to opt out of sharing their content, she noted, but that is not the default setting.
“Especially for some of these smaller orgs that are maybe not using some of those enterprise products, it's a good idea to be on the safe side and not input anything that is proprietary if you're not sure,” she said.
Data to avoid could include names, email addresses, phone numbers, home addresses, exact dollar amounts and foundation names, she said. Instead, a person should add that information back in after using a generative AI tool. The same applies to uploading a spreadsheet of donor data: if the data isn’t anonymized, the generative AI tool may be trained on that personal information. One way to swap details out and back in is sketched below.
“It could potentially be spitting that data back out to other people in the future in future iterations of that tool,” Tweed said. “This is just not a safe way to handle the data that's been entrusted to you. I think there are a number of things that could go wrong with that. But, ultimately, I think that if you're putting in personal data, things that you would not want publicly out there, it's not a great idea.”
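For teams that want to put this advice into practice, here is a minimal sketch of that approach in Python: sensitive values are replaced with placeholders before any text goes to a generative AI tool, then restored locally once a draft comes back. The column names, placeholder format and the donors.csv file are illustrative assumptions, not part of any particular platform’s API.

```python
# A minimal sketch (not a specific product's API) of scrubbing donor
# details before prompting a generative AI tool, then restoring them
# afterward. Column names and file name are illustrative assumptions.
import csv

SENSITIVE_COLUMNS = ["name", "email", "phone", "address", "gift_amount"]

def anonymize_rows(rows):
    """Replace sensitive values with numbered placeholders; keep a map to restore them."""
    restore_map = {}
    for i, row in enumerate(rows):
        for col in SENSITIVE_COLUMNS:
            if row.get(col):
                placeholder = f"[{col.upper()}_{i}]"
                restore_map[placeholder] = row[col]
                row[col] = placeholder
    return rows, restore_map

def restore_text(text, restore_map):
    """Swap the real values back into the AI-generated draft, locally."""
    for placeholder, value in restore_map.items():
        text = text.replace(placeholder, value)
    return text

with open("donors.csv", newline="") as f:
    rows, restore_map = anonymize_rows(list(csv.DictReader(f)))

# rows now contain placeholders like "[NAME_0]" and can safely appear in a
# prompt; once a draft comes back, restore_text() re-inserts the real
# details on your own machine, so personal data never reaches the tool.
```

Because the real values are re-inserted only after the AI-generated draft returns, the personal data entrusted to your organization never leaves your systems.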
3. Biased and Toxic Outputs
Tweed noted that large language models are commonly trained on massive, low-quality datasets sourced from the internet.
“AI companies do implement some guardrails against that bias, which takes the edge off, but this is not enough to offset the degree of built-in bias within the training data,” she said.
4. Plagiarism and Attribution
There isn’t clear guidance on this yet, but Tweed insisted a human must take ownership of anything a donor might see from your organization.
“Generative AI is great at getting you off the starting block,” she said. “Generative AI can really address that blank page problem at the start of a task. It can adjust tone. It's really great at modifying length, but a human should be taking the baton from there.”
Mitigate Artificial Intelligence Risks to Retain Donors’ Trust
Though there are risks, the rewards can easily outweigh them with proper education and oversight. Here are three ways to mitigate the risks and maintain your donors’ trust.
1. AI Literacy
Though nonprofit staff members don’t need to understand all of the gritty, technical aspects of generative AI, they should have a basic understanding of how it works. More importantly, they should have a strong grasp of what AI is actually capable of doing well, as well as its limitations.
“So technologists know that generative AI is not thinking or reasoning or understanding or inferring or creating in any kind of human way, and they understand why generative AI outputs are not necessarily accurate,” Tweed said. “But non-technical users are understandably swayed by the powerful and convincing appearance of intelligence in these generative AI tools, and often will subsequently put misplaced trust in these tools and in the accuracy of these outputs.”
2. Human Accountability
Again, generative AI is a tool that can make a workflow more efficient, but someone still needs to review its outputs for accuracy and add detail, such as personalization.
“It's a good idea to start small in your use of AI and to stay human-centered through the process,” Tweed said. “So AI should most often play a supporting role — something to empower and augment your team's existing capabilities.”
3. AI Policies
Create a governance model that accounts for educating employees and provides guidelines for the use of artificial intelligence by your nonprofit’s staff, advisers, volunteers and others.
“You also want to be upfront about these policies and post them publicly to show donors that you're taking this seriously,” Tweed said. “You're not getting left behind.”
Related story: 5 Ways Nonprofits Can Use AI for Better Fundraising