"…in this new age of AI, many time-honored assumptions about strategy and leadership no longer apply." — From the book, Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World by Marco Iansiti and Karim R. Lakhanti
I started the introduction of this book with a quote from Shawn Olds. So far, it's the most succinct explanation I've found of what's happening in the fundraising world because of the power of artificial intelligence. It's an exciting time, but it's also a period of enormous anxiety. People understand that jobs and how we work are on the line. The philanthropic sector is not immune, and neither are fundraisers.
But while it's a time of uncertainty, we have to remain positive and optimistic that things will work out — they always do. If you begin to research and learn about the issues, you inevitably adapt to the changes and move with the times. Ultimately, that places you in a much better position than others to experience the inevitable evolution.
There was a time when nonprofits hired a team of fundraisers. The team leader was a development director, supported by a major gifts officer, an events person and a grant writer. Today, nonprofits can accomplish in a couple of minutes what once took the director of development and the major gifts officer days — or even months.
The development team's work can also get completed at a fraction of the cost, with better results, which is why nonprofits are seeking to invest in artificial intelligence. In the book from which I took the quote for the opening of this chapter, the authors identify three essential ingredients that allow organizations to harness AI's power for growth. They offer insights about how any company, and yes, even nonprofits, can take advantage of artificial intelligence's benefits through 1) scale, 2) scope and 3) learning.
In short, technology has the power to wholly disrupt any business or industry. Because of the power of artificial intelligence and machine learning, organizations can grow in scale, scope and learning beyond what they ever imagined. Scale is the growth of the work done by an organization.
Scope means expanding beyond what you currently do into other areas because of new technical abilities. And finally, learning is the knowledge that comes from data and technology to inform business and nonprofit leaders. For instance, because of AI's power to scour the entire internet and spot patterns, it can deliver results that allow managers to predict when and how to approach their customers.
The silos that existed within companies and nonprofits, which limited growth, are coming down. Even more fascinating is the reality that business leaders no longer have to restrict themselves to one particular industry.
As we know, the world experienced a pandemic in 2020. Pfizer, AstraZeneca and Moderna, among others, emerged as leaders in the production of vaccines. Typically, vaccines would take years of research, trials and testing. However, the world watched these companies move toward producing vaccines in less than one year. That is truly remarkable and a first for the world. It wouldn't have been possible without the technology we have in the 21st century.
Moderna: A Different Kind of Biotech Company
In the book, Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World, by Marco Iansiti and Karim R. Lakhani, the authors wrote about Moderna, noting that the company "…is purpose-built for this kind of rapid response and exponential impact" in a situation like the COVID pandemic. According to the authors, the CEO, Stéphane Bancel, says they are a technology company that does biology.
In essence, the company's software determines what proteins the human body needs to produce to fight a specific disease, and its pharmaceuticals get created accordingly. As Iansiti and Lakhani explained, the company is an "AI factory," which uses data and technology in every aspect of the business. AI factories create solutions by combining artificial intelligence with business intelligence that is data-driven and no longer relies on intuition, hunches or even experience.
As a result of these technological advances, we have moved wholly from the industrial age headlong into the digital age. Now companies, and nonprofits, can do what was once considered impossible. For instance, several companies created a pandemic vaccine that went from research to market in less than a year — previously undoable. It was possible because data, artificial intelligence and machine learning informed what humans wanted and needed to get done.
Artificial Intelligence in the Nonprofit Sector
In a discussion I had with Shawn about AI's future in the nonprofit sector, he stressed that the human/machine team powers the most potent form of AI. He also said we are not at the point of replacing human development directors; rather, with AI, we can empower them to be more productive and efficient and to focus on what they are good at: the art of fundraising.
As you know, I'm a fundraiser, and something professional fundraisers have said for a long time is that what we do is both an art and a science. The idea that, as AI improves, the "science" gets handled in a fraction of the time so I can focus on what I do best, relationship building, is attractive. I'm sure that many other fundraisers feel the same.
I appreciate Shawn's thoughts that AI's strength is the human/machine team. I'm also mindful of the changes coming to the sector and the evolution of how fundraisers and other professionals within nonprofit groups work. For instance, I could easily see a world where program officers predict when they need to increase food supplies at a food bank because of data and AI learning from patterns.
Perhaps there's more need during the holidays or economic downturns. Still, AI has the power to spot patterns across varied data sets, including, say, the unemployment rate in a given community, and predict for program officers how many meals will be necessary that month. AI is redefining processes and how things get accomplished.
Through software such as boodleAI, fundraisers can now get incredible predictive intelligence about opportunities for giving. boodleAI, for instance, takes nonprofit databases and checks them against more than 500 demographic, behavioral and other attributes belonging to 220 million American adults to offer fundraisers "intelligent fundraising." In other words, the guesswork — or the art, some would say — is getting taken out of fundraising so that fundraisers get better targets, plus donor scoring for prioritization and segmentation.
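To make "donor scoring" concrete, here is a minimal sketch of how a predictive score might be produced from an enriched donor file. Everything here is an assumption for illustration: the attribute names, the tiny dataset and the simple logistic-regression model are stand-ins, not boodleAI's actual method.

```python
# Hypothetical sketch of predictive donor scoring (not boodleAI's actual method).
# Assumes a CRM export already enriched with third-party attributes.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy enriched donor records: the features are illustrative stand-ins for the
# hundreds of demographic/behavioral attributes a real platform would match.
donors = pd.DataFrame({
    "age": [34, 58, 45, 29, 62],
    "prior_gifts": [0, 4, 1, 0, 7],
    "email_opens_90d": [2, 11, 5, 1, 14],
    "gave_last_year": [0, 1, 0, 0, 1],  # training label
})

X = donors[["age", "prior_gifts", "email_opens_90d"]]
y = donors["gave_last_year"]

model = LogisticRegression().fit(X, y)

# Score every record: higher = more likely to give, used to prioritize outreach
# and segment the file for different appeals.
donors["give_score"] = model.predict_proba(X)[:, 1]
print(donors.sort_values("give_score", ascending=False))
```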
Going back to the introduction and Shawn's quote: “AI fundraising platform boodleAI has helped nonprofits acquire a $1,000 new donor in 10 minutes, raise $3,600 in new donors in one hour, raise $10,000 in new donations in one week and achieve email donation rates 100 times the average.” It's not something that will happen. It's not in the future. It's all happening now.
boodleAI is one example of a company seeking to disrupt the nonprofit sector by making it much more efficient and powerful. And the reality is that donors and the public want this sort of efficiency. It's up to individual nonprofits, and the sector as a whole, to determine how they will move forward in the age of artificial intelligence. For our purposes, the best place to begin is by understanding what artificial intelligence is and how it's integrating itself into the philanthropic sector.
What Is Artificial Intelligence?
To understand AI's power, you have to know a bit about what it is and how it works. In essence, artificial intelligence is intelligent computing: today's computers are small but incredibly powerful, and by definition, intelligent computing means that machines can learn — for themselves. Whereas in the past, programmers fed computers information for each new thing they needed to learn, today's technology can learn for itself based on the knowledge and information it acquires as it accomplishes tasks.
For example, we know that millions of people spend time on social media, and billions have Facebook accounts. You've probably heard a lot about algorithms. Those algorithms are artificial intelligence. So, in the simplest terms, let's say you have 100 friends on Facebook. Every day, you get on Facebook, and you like or comment on your closest friends' posts. In that bunch, you also have a friend or two with whom, for whatever reason, you don't engage when you're scrolling through your timeline.
Over time, Facebook's AI learns that you must not be too interested in those two friends, so eventually, although they remain Facebook friends, you won't see many of their future posts. In other words, Facebook's artificial intelligence learned and predicted that you don't want to engage with those particular people.
As the years pass, the chances are that AI will learn a lot more about us, our lives and what we need. So it's crucial, particularly as AI enters our homes and offices, to understand a bit about how these systems work. By doing so, when the time comes to hire a technology vendor for your charity, you'll appreciate a few high-level concepts.
As I described in my example about Facebook, artificial intelligence can learn your preferences and make predictions about how you will behave. The ability of artificial intelligence to learn is called "machine learning." In fact, in computer science, some programmers spend their time writing the programs that allow AI to learn independently. The programming done by humans creates artificial intelligence that then does machine learning. It is that learning that allows AI to predict who you want to see in your Facebook feed or which films you want to watch on Netflix or Hulu.
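As a toy illustration of the Facebook example above, here is what engagement-based learning and prediction can look like in a few lines of Python. The friends, the engagement log and the ranking rule are all invented for the example; real feed-ranking systems are vastly more complex.

```python
# Toy feed ranking: learn from past engagement, then predict what to show.
# Names, data and the ranking rule are invented for illustration only.
from collections import Counter

# Observed behavior: each entry is a friend whose post you liked or commented on.
engagement_log = ["Ana", "Ana", "Ben", "Ana", "Cruz", "Ben", "Ana"]

# "Learning": turn raw behavior into per-friend affinity scores.
affinity = Counter(engagement_log)  # e.g. Ana: 4, Ben: 2, Cruz: 1

# New posts waiting to be ranked; Dee has never been engaged with.
new_posts = ["Dee", "Ana", "Cruz", "Ben"]

# "Prediction": sort posts by affinity, so low-engagement friends sink.
ranked = sorted(new_posts, key=lambda friend: affinity[friend], reverse=True)
print(ranked)  # ['Ana', 'Ben', 'Cruz', 'Dee']: Dee effectively sinks to the bottom
```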
The Ethics of Artificial Intelligence
As artificial intelligence enters into every aspect of our collective existence, there's a critical topic for nonprofits to understand, primarily since they exist to help society — ethics. Nonprofits and social enterprises have a special responsibility for ethical behavior. Also, the public demands it, which is why it is quick to take to task a group that does not behave ethically.
For instance, we know of cases where human biases have been introduced into AI algorithms. Thus, we have situations where these biases have adversely affected people seeking employment or justice in the courts, for example. Therefore, while technology is awesome, and we could use it, for instance, to cure diseases, we have to be mindful of the biases. Consider, as an example, bias in recruiting software.
Many organizations, including small nonprofits, are jumping on the bandwagon and purchasing software for recruiting. Platforms such as SmartRecruiters or HigherMe.com allow groups to rank and score applicants for job openings. The recruiting process is convenient for both hiring managers and applicants. Managers get reporting that scores each candidate based on their resume, the job requirements and additional factors, such as the candidate's availability for work hours or the distance an applicant lives from the office (the closer the commute, the higher the score).
For the applicant, artificial intelligence can answer questions without the wait for office hours, and candidates can even apply by text. Also, interview scheduling can happen quickly when an employer reaches out to a candidate; the applicant simply selects available timeslots. Gone are the days of back-and-forth emails or coordination calls. Still, these platforms have the potential for problems.
First, as I mentioned, you could obtain — without your knowledge — a recruiting platform that has human bias already baked into its DNA. So, you can end up with a platform that discriminates based on gender or other variables. You can also "set it and forget it" and never really have human oversight of your organization's artificial intelligence. In other words, human eyes, supervision and review of everything that artificial intelligence processes are essential, as the sketch below illustrates.
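To see how easily such bias can hide inside a reasonable-looking system, here is a hypothetical candidate-scoring function of the general kind these platforms describe. The weights and the distance factor are inventions for this example, not any vendor's actual formula; the point is that a seemingly neutral input like commute distance can quietly disadvantage applicants from certain neighborhoods.

```python
# Hypothetical candidate scoring (all weights invented for illustration).
# A commute-distance penalty looks neutral, but if housing patterns correlate
# with race or income, it can bake demographic bias into the ranking.
def score_candidate(skills_match: float, hours_available: int,
                    miles_from_office: float) -> float:
    score = 60 * skills_match          # 0.0-1.0 fit against job requirements
    score += 2 * min(hours_available, 10)
    score -= 1.5 * miles_from_office   # the quietly biased factor
    return score

# Two equally qualified candidates; only the commute differs.
print(score_candidate(0.9, 10, miles_from_office=2))   # 71.0
print(score_candidate(0.9, 10, miles_from_office=18))  # 47.0
```

Both candidates are equally qualified; the 24-point gap comes entirely from where they live, which is exactly the kind of pattern that human review exists to catch.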
PwC published a report that addressed what organizations have to do when using artificial intelligence, and it behooves your group to realize that artificial intelligence comes with the obligation to ensure ethical behavior. The following are essential questions that PwC identified as necessary elements of responsible artificial intelligence:
- Fairness: Are we minimizing bias in our data and AI models? Are we addressing bias when we use AI?
- Interpretability: Can we explain how an AI model makes decisions? Can we ensure those decisions are accurate?
- Robustness and security: Can we rely on an AI system’s performance? Are our AI systems vulnerable to attack?
- Governance: Who is accountable for AI systems? Do we have the proper controls in place?
- System ethics: Do our AI systems comply with regulations? How will they impact our employees and customers?
Nonprofit leaders and even donors are in unique positions to push technology companies to act responsibly and ethically. And the way to be able to encourage vendors to ensure ethical behavior is to understand the issues concerning artificial intelligence. To that end, an essential argument about how we should view artificial intelligence was made in an op-ed for The New York Times by Dr. Stuart Russell, a professor of computer science at the University of California, Berkeley.
There are two fundamental ideas that Dr. Russell noted in his op-ed. The first is that the "standard model" in AI, borrowed from philosophical and economic notions of rational behavior, looks like this: "Machines are intelligent to the extent that their actions can be expected to achieve their objectives." What Dr. Russell means by that is that, at present, machines don't have any objectives that they determine on their own. Artificial intelligence does not achieve its own purposes — it realizes those of humans.
However, Dr. Russell argues that this approach to artificial intelligence is erroneous. If we are wrong about our objectives, and AI becomes smarter than us, then we may have serious problems. As he goes on to argue, social media is an example: programmers manipulate human preferences, and we end up with issues for democracy. As another example, he asks a chilling question. As Dr. Russell wrote in the op-ed, "The effects of a super-intelligent algorithm operating on a global scale could be far more severe. What if a super-intelligent climate control system, given the job of restoring carbon dioxide concentrations to preindustrial levels, believes the solution is to reduce the human population to zero?"
Think of how terrifying that could be for everyone on our planet. Dr. Russell also argues that "pulling the plug" might not be possible; AI might prevent that from happening (remember, it is much smarter than we are and predicts human behavior). But before you lose all hope, there is another path that Dr. Russell offers us. He continues, “The solution, then, is to change the way we think about AI. Instead of building machines that exist to achieve their objectives, we want a model that looks like this: ‘Machines are beneficial to the extent that their actions can be expected to achieve our objectives.’”
With that understanding, let's go back to the example of artificial intelligence and recruiting. As Dr. Russell explained, we want artificial intelligence to help us achieve our objectives, not its goals. Therefore, to ensure that we do not end up in a situation where AI does what it wants for its purposes — and not ours — nonprofit leaders have a special obligation to consider AI's ethical issues. It's not just in recruiting, but also in fundraising, operations, and programs.
Finally, on this point, I want to mention that some wealthy donors understand the challenges humans face as we integrate artificial intelligence fully into all areas of our lives. It goes to my earlier point that donors drive the priorities in the nonprofit sector. Because they understand what’s at stake, they're leading the way on artificial intelligence.
According to The Chronicle of Philanthropy, nine donors have given $585.5 million to nonprofits developing AI tools and studying the impact of artificial intelligence on human lives. Those individuals include the late Paul Allen, who co-founded Microsoft; Reid Hoffman, co-founder of LinkedIn; and Elon Musk, co-founder of PayPal and Tesla Motors. Financier Stephen Schwarzman donated $350 million to MIT to work on "ethical applications of artificial intelligence."
Again, donors drive the discussion, and nonprofit groups can address one of the most significant issues related to artificial intelligence — ethics. Now that we've dealt with artificial intelligence's ethical use, let's focus on its power. Nothing else in human history will change our world, in ways we have yet to know, as much as artificial intelligence will. Sure, humanity has dealt with famine, war and disease, and experienced the Renaissance and the Industrial Revolution. However, with artificial intelligence, humans have, for once, met something that supersedes our intellect and abilities.
How Technology Is Achieving What Was Once Impossible
As I wrote this book, I came across an article in The New York Times. The article showed how artificial intelligence creates images that look like people you might know, pass on the street or see in your social timeline. All of the faces are friendly, but there's one problem: not a single one of them belongs to a human who exists. Furthermore, you can create such a computer-generated image for less than $3.
The technology is called a generative adversarial network. In the process, you feed computers pictures of real people, and then it's up to the software to create new images of "people." Imagine a world where, on social media, we start to follow accounts of influencers who are fake people with fake children and fake lives. I could see a reality where these accounts become influencers, and advertisers show images of "people" having a grand time sipping champagne at the top of Mount Everest.
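For readers curious about the mechanics, here is a heavily simplified sketch of the adversarial idea: two networks, a generator and a discriminator, trained against each other. This toy uses PyTorch on one-dimensional numbers instead of face images, and every architecture and hyperparameter choice here is an assumption for illustration; real face-generating GANs are enormously larger.

```python
# Minimal GAN sketch on 1-D toy data (illustrative; real face GANs are far larger).
import torch
import torch.nn as nn

# Generator: turns random noise into fake "samples".
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: guesses whether a sample is real (1) or generated (0).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data clustered around 3.0
    fake = G(torch.randn(64, 8))            # generator's current forgeries

    # Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster near 3.0, like the real data.
print(G(torch.randn(5, 8)).detach().squeeze())
```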
However, while you might wonder about some uses of technology, there are undoubtedly many areas where tech is fantastic. For instance, we have the real-life example of vaccines created in record time (what once took years now takes months) so that the world could start to inoculate people against COVID-19.
Virtual reality is another form of technology that will bring lots of new experiences to people. Imagine someone who can no longer travel, or who is sick in bed, touring the world through virtual reality programs. Augmented reality could assist young surgeons in their training and preparations. In short, technology is a truly transformative force unlike anything we have ever experienced, and it's slipping into our day-to-day lives in many ways.
Think about your phone and your digital assistants. I have Alexa at home and prefer not having to type into my cellphone when all I want to know is whether it's going to rain. All I do is ask her, and she tells me how I should prepare to greet the day. In short, many of us have allowed technology to enter our lives, and it's making our lives easier. If you get lost in a new city, you don't have to walk around clueless or ask strangers for directions. You can just pull out your phone and have your digital voice assistant direct you.
Technology is permeating our lives in ways large and small. As we move to smart homes, in a few short years, millions will use voice commands to lower the thermostat, turn on the stove to warm up the leftovers they put there the evening before, or turn on the lights before they walk in the door. Already, millions make use of that technology, and of other tools that keep their homes safe while they are away on vacation. Remember asking your neighbor to drop by your place or leaving lights on (and wasting energy)? That’s no longer necessary.
However, there is a balance we have to strike as a society. Because technology, including artificial intelligence and machine learning, is so incredibly powerful, the public, including consumers, businesses and nonprofits, has to ensure we have a check on it somehow. Legislation around data protection has started to get created, which you will read about in Chapter 3, but the chances are we will have to go further. I'm not sure anyone knows exactly what technology will look like in five, 10 or 20 years. One thing's for sure: it is going to be unlike anything we've seen.
Because of its immense power, we have to enjoy the benefits of technology, but also rein it in when necessary, and always push for the ethical use of the enormous capabilities we now have at our disposal. In short, all of us have to be informed citizens and understand what it means to live responsibly in a world where you can’t always believe that what you see is real.
What You See Is Only What It’s Predicted You Want to See
Artificial intelligence is powerful. There's no question about it. Moreover, it could be used to accomplish many great things that improve the lives of people around the world. However, as with everything else in life, there can certainly be a downside to this immense power. One of those areas is social media.
Social media has torn down borders. The moment you post something in one part of the world, you could have someone on the other side of the planet responding. For nonprofits, social media has been an excellent tool because they have been able to share their stories and attract supporters and donors to their causes at a much lower cost.
However, there's a darker side to social media, which you'll see in the discussion of data privacy later in the book. As a global society, we also have other issues to deal with concerning giant social media platforms. For one, we have to concern ourselves with the threat social media poses to democratic societies.
Look, I'm not a big social networking user, but I understand that AI learns what you want to see, and then it predicts what you want to see — to the exclusion of other ideas and voices. It doesn't matter what side of the political spectrum you find yourself on. If you don't listen to or read ideas and thoughts from others who hold differing opinions, all you do is confirm what you believe to be true. Ultimately, you have a society where people view the world through entirely different frameworks, which threatens democracies. For instance, millions of people genuinely believe that elections are rigged.
There's a construct in psychology called confirmation bias. It means that people tend to filter and interpret information in ways that confirm what they already believe or value. It's not something they may be aware of; subconsciously, they gravitate toward information that confirms their views. Moreover, these confirmation biases are strongest when the information touches on issues that are deeply emotional for us.
Now consider AI within social media. Let's say you're a big sports fan, and as a New Yorker, you love the Mets. It doesn't take long before you start seeing groups and people related to the Mets in your timeline. If you click on those links, you'll start seeing sponsored ads from companies selling Mets gear. Let's say the Mets play in the World Series but lose, and some betting conglomerate spreads a rumor that the Mets lost because the game was rigged.
Being such a Mets fan, you start to see posts claiming the game was rigged, which are likely not true. Nevertheless, something profound inside of you, because you are emotionally attached to the Mets, starts to believe the Mets lost because the game was rigged. The algorithms continue to feed this content to you, and every like and comment you make reinforces, through your confirmation bias, that this is the content you want to see and ultimately share, as the small simulation below shows.
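Here is a toy simulation of that feedback loop. Every number in it, the click probabilities, the boost factor and the topics, is invented purely to illustrate the dynamic, not drawn from any real platform.

```python
# Toy feedback loop: engagement boosts a topic's share of the feed,
# which invites more engagement. All numbers are invented for illustration.
import random

random.seed(1)
weights = {"rigged-game rumor": 1.0, "trade news": 1.0, "highlights": 1.0}

for day in range(30):
    total = sum(weights.values())
    # The feed shows topics in proportion to their current weight.
    shown = random.choices(list(weights), [w / total for w in weights.values()])[0]
    # An emotionally attached fan is far more likely to click the rumor.
    click_prob = 0.9 if shown == "rigged-game rumor" else 0.3
    if random.random() < click_prob:
        weights[shown] *= 1.5   # every click teaches the feed to show more of it

total = sum(weights.values())
for topic, w in weights.items():
    print(f"{topic}: {w / total:.0%} of the feed")
```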
Now take this same idea about confirmation bias, and anything could be true — or not. That's how conspiracy theories spread like wildfire. It's how bullies can spread rumors about classmates that get promoted beyond the school and end up viewed by the entire community and even the world. Unfortunately, teen suicide is increasing, and researchers are exploring a correlation between it and social media.
When anything can be shared and spread, it's easy to see how democracies themselves could be under threat. If you happen to be a passionate supporter of a particular party or candidate, and they lose, and your social media timeline starts to show how the election was rigged or votes went uncounted, what do you begin to believe if that's all you see?
Organizations such as the Center for Humane Technology exist to help prevent the shadier side of technology. The nonprofit sounds the alarm about tech addiction (there's something to be said for keeping your notifications off, because the human mind can't help but look when it sees them). Furthermore, it works to ensure that tech companies, the public and governments use technology to improve democracies, relationships, human well-being and information.
As you will read throughout the book, artificial intelligence and technology are profoundly changing our lives. Moreover, because of AI's ability to immensely broaden scale and scope, neither the nonprofit sector nor anything else will ever be quite the same again. Technology is now infused in everything from blockchain and impact investing to fundraising technology. As you will see in the coming chapters, all of it is dismantling traditional philanthropy in the pursuit of social good.
Editor's Note: This complimentary first chapter has been provided to NonProfit PRO. To read more from The Future of Fundraising: How Philanthropy’s Future Is Here With Donors Dictating the Terms by Paul D'Alessandro, click here.
Paul D’Alessandro, J.D., CFRE, is a vice president at Innovest Portfolio Solutions. He is also the founder of High Impact Nonprofit Advisors (HNA) and D’Alessandro Inc. (DAI), a fundraising and strategic management consulting company. With more than 30 years of experience in the philanthropic sector, he’s the author of “The Future of Fundraising: How Philanthropy’s Future is Here with Donors Dictating the Terms.”
He has worked with hundreds of nonprofits and raised more than $1 billion for his clients in the U.S. and abroad. In addition, as a nonprofit and business expert — who is also a practicing attorney — Paul has worked with high-level global philanthropists, vetting and negotiating their strategic gifts to charitable causes. Paul understands that today’s environment requires innovation and fresh thinking, which is why he launched HNA to train and coach leaders who want to make a difference in the world.