
5 Questions about Philanthropy and AI

Posted on June 13, 2024 by Andrew Spector, TPF Fellow 2023/24

On April 23, I attended a Technology Association of Grantmakers (TAG) webinar, "AI in Grantmaking: How to Fund Initiatives that Drive Social Impact." The webinar featured a panel moderated by TAG Executive Director Jean Westrick that included Michael Belinsky, Director at Schmidt Futures; Shannon Farley, Co-Founder & Executive Director at Fast Forward; and Laura Maher, Head of External Engagement and Senior Program Manager at Siegel Family Endowment.

One of my biggest takeaways from the webinar was that AI is moving so fast that it's nearly impossible to keep up. That's a problem, because the stakes with AI are high.

According to GZERO Media:

In November 2022, the US Department of State commissioned a comprehensive report on the risks of artificial intelligence. The government turned to Gladstone AI, a four-person firm founded the year before, to write such reports and brief government officials on AI safety. Gladstone AI interviewed more than 200 people working in and around AI about what risks keep them up at night. Their report, titled "Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI," was released on March 11. The short version? It's pretty dire: "The recent explosion of progress in advanced artificial intelligence has brought great opportunities, but it is also creating entirely new categories of weapons of mass destruction-like and WMD-enabling catastrophic risks." Next to the words "catastrophic risks" is a particularly worrying footnote: "By catastrophic risks, we mean risks of catastrophic events up to and including events that would lead to human extinction."

Of course, on the flip side, AI has the potential to accelerate significant advancements in human capability, including improvements to medical technologies and efficiencies across industries. Regardless of where one stands on the pros and cons of AI, what seems to be objectively true is that AI is here to stay. It's growing, and it's going to change how we live and work.

Philanthropy has a role in all this. During the webinar, Laura, Shannon, and Michael said the role of philanthropy in AI includes:

  • Leveraging philanthropy's relative freedom from accountability pressures to take calculated AI risks and fail forward, particularly by providing unrestricted capital for experimentation.
  • Supporting communities and partnering with others for ethical and equitable AI adoption.
  • Operating with a balanced perspective somewhere between the "boomers" and "doomers," including convening folks from different perspectives.
  • Intentionally backing AI builders who are addressing inequity and ensuring no one is left out.

As I reflect on philanthropy and AI, five key questions are emerging:
  1. What are the unique opportunities and threats of AI to Charlotte, DeSoto, Manatee, and Sarasota counties?

  2. What are the opportunities and threats of AI to our country?

  3. What are the opportunities and threats of AI to our species and planet?

  4. Should philanthropy use AI internally and/or externally to do our work more efficiently and effectively? If so, how and when?

  5. Can we support other organizations in driving more impact by using AI? If so, how and when?

In answering each question, it's important that we right-size our level of urgency and internalize that no person or organization has to respond alone. What decisions should we make, and by when, to best mitigate the threats and maximize the opportunities? Who else cares about this, and how can we work with them?
