As we move into 2020, artificial intelligence usage is becoming commonplace throughout society. In many cases, AI is creating a positive impact in the world, for instance helping to identify breast cancer more accurately, track elephant populations and potentially even find a cure for COVID-19. At the same time, AI requires an exceptional amount of data, forcing individuals, organizations and governments to wrestle with major ethical questions — and with how ethical behavior can be measured or enforced.
Citizens must accept (sometimes unknowingly) that devices such as Google Assistant and Amazon's Alexa listen to everything. While these companies find value in the aggregate data, there are real downsides for the individual (e.g. the personal privacy implications of adding your face to a database of 100 million faces). On top of that, large companies such as Facebook, Google and Marriott are losing data, suffering breaches or struggling with ethical data and privacy issues on a daily basis. Suffice it to say, we are facing massive societal data issues with both short-term and long-term implications.
In “The State of Artificial Intelligence in the Nonprofit Sector” report, we noted that nonprofits serve an economic area unfulfilled by government or business. Nonprofits represent underserved or underrepresented individuals and communities. At a time when 81% of U.S. adults believe they have little or no control over their personal data, AI in the nonprofit community is distinct from the world of business or government. In this article, we have teamed up with the experts at the Futurus Group to dive deep into the topic of AI ethics.
When we conducted our research on AI, we found 83% of nonprofit professionals believe there needs to be an ethical framework in place before wider adoption of AI (Figure 1). We also found that 52% of nonprofit practitioners are scared of AI. The nonprofit sector is not alone in that belief; significant ethical concerns have been brought up by notable thought leaders, such as Stephen Hawking, Bill Gates and (most recently) Elon Musk.
Current Ethical Issues in Technology
AI ethics is a big topic. To understand AI ethics, it’s first important to understand what is happening more broadly in technology ethics. Benedict Evans, in his 2020 tech predictions, proposed that technology will soon become a “regulated” industry much like utilities, health care and banking. His rationale for this line of reasoning includes the following:
- Major tech companies are now the biggest companies in the world (Figure 2).
- Tech problems and their solutions create tradeoffs among societal, individual and economic interests (Figure 3).
- The major issues with tech companies are not simple (Figure 4).
Below are three graphs from his study demonstrating his points:
These issues, among others, clearly highlight that ethics in technology is a hot topic, multifaceted and without an easy answer. Currently, 11 states are working on data privacy laws. AI ethical issues align with these primarily because many of the issues Benedict highlights are based on AI technologies. For example, the nonprofit sector highly respects the premise that open platforms lead to greater innovation, but if that open platform is providing open access to millions of user data points, there is a significant opportunity for privacy breaches, as Andrew Trask notes in his talk on AI privacy issues.
AI Ethics in Nonprofits: What Is Most Important?
For nonprofits, is the line between what is ethical and what is required to generate funds drawn differently than in the private sector? Should the nonprofit sector hold itself to a higher bar than the private sector? Are diversity and inclusion more important than organizational efficiency? These are incredibly difficult questions to answer, and the answers may vary by individual and organization. It’s a problem that is being furiously researched, according to the Stanford AI Index (Figure 5).
While significant research denotes a strong interest in the topic, there is no single, clear ethical framework today. Instead, there are myriad competing frameworks on the topic (Figure 6).
The positive news is that with competing frameworks, there comes the opportunity to provide meta-analysis on a few major ethical AI principles. The most mentioned principles are the following:
- Interpretability and explainability: 95% of association papers and 92% of government papers
- Fairness: 100% of technology company papers and 89% of association papers
- Transparency: 88% of industry and consultancy papers, and 81% of technology company papers
- Accountability: 72% of technology company papers
- Human control: 88% of think tanks and academic papers
- Data privacy and security: 75% of industry and consultancy papers
While each principle is important, it is possible all six principles are necessary for ethical AI to be successfully developed. However, not every principle can be treated equally, so nonprofits need to create "principle stacks", or a prioritized set of principles, to enable efficient decision-making.
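To make the "principle stack" idea concrete, here is a minimal sketch of how an organization might encode a prioritized list of principles and use it to rank options (e.g. competing vendors). The principles come from the meta-analysis above, but the ordering, weights and vendor names are purely illustrative assumptions, not recommendations.

```python
# A "principle stack": an ordered list of ethical AI principles, where
# earlier entries carry more weight in decision-making. The ordering
# below is hypothetical -- each organization would set its own.
PRINCIPLE_STACK = [
    "data privacy and security",
    "fairness",
    "transparency",
    "interpretability and explainability",
    "accountability",
    "human control",
]

def rank_options(options):
    """Rank options by the priority of the principles each satisfies.

    `options` maps an option name to the set of principles it satisfies.
    Principles higher in the stack contribute larger weights.
    """
    weights = {p: len(PRINCIPLE_STACK) - i for i, p in enumerate(PRINCIPLE_STACK)}
    scores = {
        name: sum(weights[p] for p in satisfied if p in weights)
        for name, satisfied in options.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical vendor comparison: which principles does each satisfy?
vendors = {
    "vendor_a": {"fairness", "transparency"},
    "vendor_b": {"data privacy and security", "accountability"},
}
print(rank_options(vendors))
```

Because "fairness" and "transparency" sit near the top of this particular stack, vendor_a outranks vendor_b here; reordering the stack can flip the result, which is exactly the point of making priorities explicit.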
In the nonprofit sector, unfortunately, no research (including our SAINS research) has asked nonprofits what needs to be involved in building an ethical framework for the nonprofit sector. We have proxies from organizations, such as the Partnership on AI, that have created their own frameworks but none that take into account the nonprofit industry's perspective. Partnership on AI’s framework does create a nice starting place, with the following principles (these are paraphrased):
- Empowerment
- Stakeholder engagement and education
- Privacy and security
- Fairness
- Social responsibility
- Security
- Understandability and interpretability
- Cooperation between science, industry and society
These principles are a great place to start, but require additional research based on variances in organizational size, focus and discipline.
AI Ethics in Nonprofits: How Do We Get There?
Creating an ethical framework for AI is going to require significant collaboration between the private sector tech industry, government entities and nonprofit organizations. It is also going to require each of us to define our own frameworks that align with our belief systems. In terms of collaboration, our two teams (PwrdBy and the Futurus Group) have been thinking through how to best approach that collaboration. We’ve started a special interest group to focus on data and ethics with The Nonprofit Alliance and are in talks about creating more focused AI ethical groups.
If your nonprofit is thinking about building an ethical AI framework, we recommend taking a materiality approach. If you are unfamiliar with it, this approach involves asking your stakeholders (internal and external) a series of questions to gauge the importance of critical issues. Here are some questions you could use to determine your organization’s ethical AI framework:
- Is the issue (e.g. AI understandability) important to your organization? (yes or no)
- If it is important, how important? (on a one-to-10 scale, with 10 being very important)
- What is your organization's current level of performance? (on a one-to-10 scale)
- What is the minimum acceptable performance? (on a one-to-10 scale)
This framework helps chart what is most important to your organization and where to draw the line on ethics. Once your team has created a framework, you can start navigating the waters of ethical AI. Next time you speak to an AI technology provider, it may also be useful to ask them about their ethical framework. It’s more than just data security — how do they approach the topics that matter most to your organization?
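The questionnaire above can be tallied into a simple materiality view. Here is a minimal sketch, assuming hypothetical issue names and scores: for each issue rated important, we compute the shortfall between minimum acceptable and current performance, then list underperforming issues with the most important first.

```python
# Hypothetical stakeholder responses to the four materiality questions:
# (issue, important?, importance 1-10, current performance 1-10,
#  minimum acceptable performance 1-10)
responses = [
    ("AI understandability", True, 9, 4, 7),
    ("Data privacy",         True, 10, 8, 8),
    ("Fairness",             True, 8, 5, 6),
    ("Vendor transparency",  False, 0, 0, 0),
]

def materiality_gaps(responses):
    """Return (issue, importance, shortfall) for issues below their
    acceptable floor, sorted with the most important issues first."""
    gaps = [
        (issue, importance, minimum - current)
        for issue, important, importance, current, minimum in responses
        if important and current < minimum
    ]
    return sorted(gaps, key=lambda g: g[1], reverse=True)

for issue, importance, shortfall in materiality_gaps(responses):
    print(f"{issue}: importance {importance}, shortfall {shortfall}")
```

In this example, "Data privacy" drops out because current performance already meets the minimum, while "AI understandability" surfaces as the top priority — which is where an organization would focus its ethical AI work first.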
Where Do We Go From Here?
We recognize there is a lot in this deep dive, and there could be even more if we dug another layer deeper. For the sake of brevity, we want to end with a few predictions on where things could be going ethically for nonprofits. Here are some things we have found in our research:
- People will increasingly raise concerns and demands for regulations around preventing AI-assisted surveillance from violating privacy and civil liberties.
- Protecting data privacy will become increasingly important, especially among disadvantaged communities.
- Nonprofits will play a greater role in AI ethics and security, both by number of organizations and by the voice they play.
- AI security will become increasingly necessary as more sensitive donor data is used to build and deliver AI.
- Nonprofit organizations will need to decide how much data is too much when it comes to understanding a person's proclivity toward supporting a nonprofit.
- Nonprofits will have to determine whether the responsible use of AI should be measured in short-term revenue, long-term relationships, or both.
Facial recognition, deep fakes and other photo/video-related AI developments are at the forefront of ethical considerations. As shown below (Figure 7), we now can make up fake people online.
Free Resource: Have you downloaded the SAINS report yet? It’s the most comprehensive analysis of AI for social good available.
Jared Sheehan is the CEO of PwrdBy. He started PwrdBy in 2015 after leaving Deloitte Consulting, where he was a senior consultant in their supply chain sustainability practice and worked with clients such as TOMS Shoes, Starwood hotels and Panasonic.
Jared is a Lean Six Sigma Black Belt and has built numerous successful products in partnership with clients, including the Children’s Miracle Network Hospitals fundraising application, In Flight, which helps manage 75K+ corporate partners and raise $400M annually. Jared is the creator of the Amelia and NeonMoves mobile apps.
Jared graduated Summa Cum Laude from Miami University with a double major in accounting and environmental science. Jared is an iron athlete, mountaineer and has cycled across the U.S.
Nathan Chappell is the president of the Futurus Group, a firm dedicated to the responsible use of AI technologies to support net increases in philanthropic giving. Over the past 20 years Nathan has served in leadership positions within several large and complex nonprofit organizations, leading fundraising teams that have generated more than $1 billion in philanthropic revenue. In 2019, Nathan presented the first TEDx on the topic of artificial intelligence and the future of generosity and was listed as one of the Top 100 Influencers in Philanthropy.