The AI Boom: What it Means for the Nonprofit World

By: Darwin Morales-Ortiz

Artificial intelligence (AI) is changing the way organizations across most industries operate – and nonprofits are no exception. As AI tools become more accessible, many nonprofits are finding ways to incorporate them into their work, from streamlining operations to enhancing communications. Now, with more than 80% of nonprofits reporting using AI to some degree, it’s important for organizations to understand both its potential and its limitations.

Ways AI Is Supporting Nonprofit Work

AI has the ability to build capacity. Platforms like ChatGPT can assist with tasks such as content creation, donor outreach and messaging. This support is especially valuable for organizations operating with limited budgets. 

Nonprofits can also benefit from AI’s ability to assist with grant writing by reviewing guidelines, checking eligibility requirements and drafting materials. Additionally, it can help generate ideas for fundraising events, allowing organizations to have more time to focus on their mission-driven work. Tools like Google Gemini are examples of platforms already supporting nonprofits with these tasks. 

Many organizations use AI for data analysis, including donor segmentation, streamlining reporting and automating data entry. According to Nonprofit Pro, 64% of nonprofits are “using AI to analyze end user data to understand their needs and pain points.”
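Donor segmentation of the kind described above can start very simply, even before any AI tooling is involved. The Python sketch below groups hypothetical donor records into outreach segments by gift size and recency; the field names, dollar thresholds and segment labels are illustrative assumptions, not a standard.

```python
# A minimal sketch of rule-based donor segmentation. The fields
# "last_gift_days_ago" and "total_given" and the thresholds below are
# hypothetical; adapt them to your own donor database.

def segment_donor(donor):
    """Assign a donor record to a simple outreach segment."""
    if donor["total_given"] >= 1000:
        return "major"
    if donor["last_gift_days_ago"] <= 365:
        return "active"
    return "lapsed"

donors = [
    {"name": "A. Rivera", "last_gift_days_ago": 30, "total_given": 1500},
    {"name": "B. Chen", "last_gift_days_ago": 200, "total_given": 250},
    {"name": "C. Okafor", "last_gift_days_ago": 800, "total_given": 100},
]

segments = {d["name"]: segment_donor(d) for d in donors}
print(segments)
```

A rules-first approach like this also gives staff a transparent baseline to compare against any AI-driven segmentation before trusting its output.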

The growing presence of AI is allowing organizations, especially smaller ones, to operate in a crowded sector, with DonorSearch reporting that 68% of nonprofits already use AI for data analysis. By streamlining time-consuming tasks like content creation and data analysis, these tools help nonprofits work more effectively and focus on the areas they consider most important. Access to this technology has the potential to create equal footing, allowing under-resourced nonprofits to compete with larger, better-funded organizations.

Challenges and Concerns Around AI Use

While AI can be a powerful tool, it also raises concerns around misinformation and credibility. According to the Columbia Journalism Review article, “AI Search Has A Citation Problem,” generative tools often provide users with inaccurate information because they rarely decline to answer questions they cannot answer reliably. Further, these tools sometimes fail to link back to original publishers or cite the wrong sources altogether. This not only contributes to the spread of misinformation, but also risks damaging the reputation of organizations that rely on AI tools’ output. 

Beyond potential accuracy issues, AI tools may fail to capture an organization’s voice or use language that aligns with its values. Without careful oversight, AI tools can create language that feels generic and disconnected from the people nonprofits work to serve. The lack of a human touch can lead to challenges for nonprofits hoping to communicate with communities in an authentic, personal way that builds trust. 

Ethical considerations also come into play, from ensuring AI-generated content aligns with organizations’ missions and values to transparency regarding their use of AI. Despite these concerns, fewer than 10% of nonprofits have official policies around AI use, according to Nonprofit Quarterly. Establishing guidelines can help organizations implement AI into their work while preserving their integrity. 

While some see AI as a tool for greater efficiency, broader reach and funding competitiveness, others believe it raises concerns around trust and authenticity – especially in a sector built on relationships. As this technology evolves, nonprofits will need to work intentionally and responsibly when using AI to ensure it doesn’t weaken their connection to the communities they serve.

Final Thoughts

AI offers nonprofits opportunities to increase capacity and streamline tasks. From content creation to grant writing, smaller teams can use these tools to work more efficiently. However, AI is not without limitations. Misinformation, lack of personalization and ethical considerations mean that AI-generated content requires consistent, careful vetting and editing to ensure it is accurate and authentic. With proper oversight, organizations can use AI to their advantage while staying true to their mission and values.

Survival Guide for Deepfakes and AI From a PR Perspective

Every couple of months a new AI feature goes viral. From image generators, personal assistants, note-taking and audio transcription services to workout plan generators, the options were ever-growing in 2023. 

While these tools and applications are designed to make our lives easier, they come with a few side effects to watch out for when incorporating them into our everyday lives. One of them is our diminishing ability to distinguish between what information is real and what’s false. 

AI’s Contribution to Disinformation

AI tools have become some of the main contributors to deepfakes and the spread of false news and misinformation. Merriam-Webster defines deepfake as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” We’ve seen deepfakes overtake social media, especially when the world is watching major news or political events unfold. 

Likewise, AI chatbots have raised concerns as they have the tendency to give users misleading information. The main issue is that AI chatbots can’t clearly distinguish between fact and fiction. They’re simply aggregators of information available online – and as we all know, information online is not always reputable. Users run into trouble when they fully trust and rely on these chatbots as sources of information. 

So what can we do to avoid falling for misinformation? 

How to Manage the Rise of AI-Generated Content

1. Review the quality of the video or image

AI still has kinks and quirks that prevent it from seamlessly creating a video or image that looks 100% real. However, AI platforms are working to improve image generation, and soon we may not be able to tell what’s real from what’s fake. In the meantime, look for some key giveaways, including: 

  1. Little to no blinking in videos of people
  2. Patchy skin tones
  3. Extra fingers or teeth on people or animals
  4. Poor lip-synchronization with audio
  5. Strange lighting effects with light coming from multiple directions

2. Approach information on social media with a critical eye 

Not everything you see on the internet is true. Instead of simply believing what you see, back up the information you consume on social platforms with research of your own from reputable news sources and journals. Remember, we’re living in the age of disinformation, and social media is at the forefront of inflamed sentiment around the world. If you read something inflammatory on social media, whether or not it appears to be AI-generated, always double-check that it’s real before you disseminate it. 

3. Double-check information chatbots provide

When using AI chatbots for research, it’s important to ask the chatbot where it sourced the information it’s providing. This allows you to double-check whether the source is reliable and gives you greater confidence in the response. 

When you make a query and are satisfied with the response provided, ask the chatbot, “What source did you get this information from?” Even if the chatbot can only name the source rather than link to it directly, you can then independently find and vet that information. 
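For teams doing this regularly, even a tiny script can help track which chatbot-provided claims still need vetting. The Python sketch below flags answers whose claimed source isn’t on a personal list of outlets you’ve already verified; the outlet names and example records are purely illustrative assumptions.

```python
# A minimal sketch for flagging chatbot answers whose claimed source has
# not yet been independently verified. All names below are illustrative.

verified_outlets = {"Associated Press", "Reuters", "Columbia Journalism Review"}

answers = [
    {"claim": "Statistic on nonprofit AI adoption", "source": "Reuters"},
    {"claim": "Quote about deepfake detection", "source": "SomeRandomBlog"},
]

# Any claim cited to an outlet outside the verified set still needs a human check.
needs_review = [a["claim"] for a in answers if a["source"] not in verified_outlets]
print(needs_review)
```

The point is not to automate trust, but to make the “vet it yourself” step hard to skip.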

4. Protect your data

Remember that while AI can be a powerful tool, many privacy and security issues stem from broader online practices. Being mindful of your digital footprint and adopting good cybersecurity habits can go a long way in protecting your data from various threats, including potential misuse by AI.  

  1. Read privacy policies before engaging with AI technology to understand how your data will be stored and used.
  2. Avoid sharing private or personal information. If possible, use generic or anonymous information instead of providing personally identifiable details. This can help maintain a level of privacy while still allowing you to interact with the chatbot.
  3. Ensure that the platform hosting the AI chatbot uses secure and encrypted connections (look for “https” in the URL). This helps protect your data during transmission.
  4. Consider using the incognito or private browsing mode of your web browser when interacting with chatbots. This mode often limits tracking and data storage.
  5. Be aware of suspicious activity. If the chatbot is asking for personal information that is unrelated to its services, make sure to verify the legitimacy of the platform. 
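The “look for https” check in point 3 of the list above can even be automated if you interact with chatbot platforms programmatically. This Python sketch, using only the standard library, checks that a URL uses an encrypted connection before you send it anything; the URLs shown are hypothetical.

```python
# A minimal sketch of the "look for https" tip: refuse to send data to a
# chatbot platform unless its URL uses an encrypted https connection.
from urllib.parse import urlparse

def uses_encrypted_connection(url: str) -> bool:
    """Return True only when the URL's scheme is https."""
    return urlparse(url).scheme == "https"

print(uses_encrypted_connection("https://chat.example.com"))  # True
print(uses_encrypted_connection("http://chat.example.com"))   # False
```

Note that https protects data in transit only; it says nothing about how the platform stores or uses your data, which is why reading the privacy policy still matters.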

Ultimately, Remain Vigilant

As AI technology advances, these telltale giveaways could be resolved in the near future, making deepfake media more believable and therefore a bigger threat. The best way to avoid falling victim to AI-generated misinformation and deepfakes is to do your own research and ensure the information you’re given comes from a reliable source. In the current age of digital media, you can never be too careful. 
