Jared: Thank you for chatting with me today, Michael. To give readers a little background, Michael Tjalve is a principal nonprofit architect in the Tech for Social Impact group in Microsoft Philanthropies, as well as an assistant professor in the Linguistics Department at the University of Washington. This is going to be a really exciting topic. As you know, I’ve been researching artificial intelligence in the nonprofit sector for the last few years, which pales in comparison to your impressive work! Thank you again.
Michael: Thank you for the opportunity to chat with you. It’s a pleasure to be here.
Jared: I have a few questions for you, but this can really go in any direction. First question: What overall trends are you seeing in the nonprofit AI space?
Michael: There are a few major currents that have aligned over the past few years which make it a truly exciting time to be working at the intersection of AI and the nonprofit sector.
First off, the underlying technology has matured significantly, boosted by greater access to data, better algorithms and scalable cloud computing. These advances in fundamental research and applied science have led to both more robust AI capabilities and to a much broader range of applications of AI.
Second, and to a large degree due to these advances, we’ve experienced an increased readiness within the nonprofit sector to explore the use of AI to help address the challenges they're working on. Some of these capabilities are rapidly gaining traction, and we are starting to see nonprofits apply them to their operations.
Jared: Wow! That is really exciting.
Michael: It really is. We're working closely with the sector on identifying the applications of AI that can enable the most impact for nonprofit and humanitarian initiatives. We join forces in workshops and brainstorming sessions, and the nonprofit organizations bring examples of challenges where lack of appropriate technology is holding them back today. We then get really crisp on specific use cases and try to identify potential technology solutions together. Sometimes building a more traditional system works well enough. Other times, AI will leapfrog existing technology paradigms, enabling new experiences that were hard to imagine just a few years ago.
This partnership is mutually beneficial by design. Our nonprofit partners benefit from efficiency improvements that may free up constrained resources (e.g., improving program design or delivery). We, as technology partners, learn from their unique insights about what works for the sector and where we need to go back to the drawing board to improve on an existing technology component (or to build an entirely new one). And the beneficiaries of the work done by nonprofits can enjoy the new services available to them.
Jared: Well, that was a wonderful and comprehensive answer. Currently, do you see the largest limitations in AI with people, process, or the technology itself?
Michael: That's a great question. Honestly, the underlying technology is pretty mature now and perfectly capable of providing real value in many nonprofit use cases. However, I think there are two key areas where we still have work to do. The first is awareness and training: making sure that people in the sector are aware of the capabilities available to them, understand how those capabilities can help address the challenges they’re working on, and receive training on the use of the technology.
The other major limitation is the need to reduce the barriers to adoption. The broad adoption of some of these advanced capabilities typically hinges on having the right kind of technical resources available to work on the implementation. However, there's a growing trend in the industry that is addressing this issue: empowering citizen developers (i.e., people without a deep technical background) to create software solutions. At Microsoft, the Power Platform was created for this specific purpose, and we are seeing many nonprofit organizations leverage the platform's low-code/no-code capabilities for things like program design and delivery. We have ongoing projects with nonprofit partners who are building fairly sophisticated chatbot experiences on Power Virtual Agents to engage more directly with their audiences, without depending on data scientists or needing to write a single line of code. It's part of a broader objective to democratize AI and to make its capabilities accessible to a much broader audience in an equitable manner.
Jared: That’s why I love writing about AI in the nonprofit sector. I am a huge fan of low-code and no-code solutions and agree they will be a game changer. For my next question, how does AI in the nonprofit sector compare to other industries?
Michael: The nonprofit sector as a category is immensely diverse, spanning everything from after-school learning programs to art preservation and disaster response. However, most nonprofit organizations have some notion of fundraising and management of donors or volunteers. This direct dependency on outside resources means that internal investments, particularly around technology, are often heavily scrutinized, which leaves little room for onboarding unproven technology solutions. The sector has consequently often run a few years behind other sectors in adopting cutting-edge technology. This has been the case for AI as well.
Jared: That makes sense. When you are speaking with nonprofits, what is their primary concern? Or maybe said differently, what are you most worried about in nonprofit AI?
Michael: One concern that nonprofits raise is doubt around whether AI can provide real value (i.e., whether the tech can live up to the hype). It's a valid concern, and it's what's driving the deeper explorations we have with the sector on matching use cases to the most suitable technologies. Sometimes that’s AI. Sometimes it isn’t.
I also think it’s important to keep in mind that AI is not perfect and never will be, and to make sure that expectations around the capabilities of AI are set appropriately. Integrators and users should understand both how AI makes decisions and how it makes mistakes so that potential risks can be mitigated before implementation.
This includes, for example, not blindly trusting the AI output. Machine-learned approaches can reinforce biases that exist in the data used to train the models. Having a human in the loop as part of the decision-making process can help avoid or counter this bias. This is particularly important for the many nonprofits working with data from at-risk populations.
Jared: Yes, agreed. Ethics in AI, and especially reducing AI bias, is an incredibly important topic that we brought up in our State of AI in the Nonprofit Sector research. It’s also an area where I think nonprofits have a unique voice — as you mentioned. That brings me to my next question, should we have an ethical AI framework?
Michael: Absolutely yes. As with any groundbreaking technology, there's potential for both positive and negative outcomes of AI. I believe that there are valid concerns related to the use of AI, and I think it's important to have an open discussion across the community around those concerns. For example, AI will have a direct impact on the job market of the future. Some jobs will disappear, some will evolve, and new jobs will be created. There's nothing new about this as such, but what is new is the scale and the pace of the change. In fact, McKinsey estimates that by the year 2030, as many as 800 million jobs could be displaced by automation. This transition will be challenging for some people, and as new industries emerge and new skills are needed, it's important that we provide accessible training opportunities so that more people can remain in the workforce.
Fortunately, there's a growing body of work in the area of ethical AI frameworks. The AI field is, in relative terms, very open to collaboration. Consider, for example, the creation of the Partnership on AI, where direct competitors in the field of AI are working together to establish best practices for building and using AI systems. This is something you don't see very often in the industry, and I think it's an acknowledgement of the influence that AI is going to have on society.
At Microsoft, we have published our Responsible AI principles, which we comply with internally and recommend externally. There are many aspects to consider as you think about the big picture in which the AI solution is used. Think about the users and their context. Think about the underlying technology and how it's built. Think about the potential impact of AI and how to make sure that ethical and equitable use of AI is considered throughout.
As we design AI-powered experiences, it's key to design with a human focus at the core of everything we build. AI is good at some things, humans at others. I think that the best outcomes of AI in society will be based on complementarity (i.e., connection points of human-AI collaboration) where AI can enhance human creativity.
Jared: I see. I think there is probably another discussion on that topic alone. Let’s switch topics a bit. What is the most interesting application of AI you’ve seen recently?
Michael: We see interest in AI across a vast range of use cases: image-based tooling to assist frontline workers with disease detection, speech-to-speech translation for field staff, satellite and drone imagery of post-disaster regions to assist with damage assessment and recovery efforts, and advanced analytics with proactive insights from complex data sets to better measure and understand mission impact. Plenty of incredibly inspiring use cases!
There has also been a lot of interest in conversational AI and specifically language understanding and chatbots. Chatbots provide a rather unique interface for engaging with your audience. Available 24/7, they can handle basic tasks and answer routine questions, which in turn can free up time for the nonprofit staff to focus on tasks where their skills are better applied.
This past year, for obvious reasons, has been different in so many ways. It has accelerated tech adoption in some areas. During the COVID-19 lockdown, we have seen some very creative and impactful applications of chatbots, often driven by urgent need and strained resources. The Government of India, for example, built a COVID-19 health assistant to help offload the surge in questions they’re getting about COVID-19-related issues from citizens throughout India. And Swansea Council in Wales built a chatbot to help address the significant increase in cases of domestic abuse during the lockdown. The chatbot provides safety advice and emotional support to its users and guides victims and people at risk of domestic abuse to the services that are relevant to their individual situation.
Jared: I agree. COVID-19 has accelerated the pace of AI innovation and adoption significantly. Well, those are all my questions. Is there anything else you want to bring up?
Michael: Well, we've covered a lot. I'm always interested in learning about new and challenging use cases and thinking about how technology may be able to help. I see a continuous need for collaboration across stakeholders in the nonprofit sector, particularly around better use of existing data. Data often lives in silos, making it nearly impossible to leverage outside the narrow use cases it was collected for. However, we've seen a lot of progress in this area over the past few years with the introduction and adoption of the Nonprofit Common Data Model. It's a standard for representing data that was built with and for the sector to encourage interoperability across platforms and solutions.
I'm very excited about the potential for synergy when you make data sharing across organizations easy so that collaborating partners within the community can build on top of existing data assets. I think that this will be one of the key drivers for sector innovation and impact going forward.
Jared: Yes, agreed. I am very supportive of the need to adopt the Nonprofit Common Data Model. Michael, thank you so much. I really appreciate you taking the time. This has been incredibly informative for me and, hopefully, our readers. Let’s do this again.
Jared Sheehan is the CEO of PwrdBy. He started PwrdBy in 2015 after leaving Deloitte Consulting, where he was a senior consultant in their supply chain sustainability practice and worked with clients such as TOMS Shoes, Starwood Hotels and Panasonic.
Jared is a Lean Six Sigma Black Belt and has built numerous successful products in partnership with clients, including the Children’s Miracle Network Hospitals fundraising application, In Flight, which helps manage 75K+ corporate partners and raise $400M annually. Jared is the creator of the Amelia and NeonMoves mobile apps.
Jared graduated summa cum laude from Miami University with a double major in accounting and environmental science. Jared is an Ironman athlete and mountaineer and has cycled across the U.S.