Why Most Nonprofits Are Getting AI Wrong (And How the 4D Framework Fixes It)

Most nonprofits I talk to are stuck in one of two places with AI.

Either they're paralyzed: worried about misinformation, unsure about security, or simply not knowing where to start. Or they're rushing in without guardrails, letting staff use whatever tool they found on LinkedIn, with no sense of what could go wrong.

Neither approach works.

The paralyzed organizations fall behind. The reckless ones create liability. And both groups miss what AI could actually do for them: amplify their mission without replacing the humans who make that mission real.

After working with AI across commercial real estate, health tech, and cancer advocacy for the past two years, I've found a framework that cuts through the chaos. It's called the 4D Framework for AI Fluency, developed by Professor Rick Dakan at Ringling College of Art and Design and Professor Joseph Feller at University College Cork.

What makes it useful isn't its academic pedigree. It's that it's practical. The framework defines AI fluency as the ability to work effectively, efficiently, ethically, and safely with AI systems. Not using AI. Working with it.

That distinction matters.

The framework breaks AI fluency into four core competencies. Each one addresses a different aspect of working with AI responsibly.

Here's how they work together.

Delegation: Deciding What Gets Done By Whom

This is where you figure out which tasks benefit from AI and which still need human judgment.

Most nonprofits skip this step. They jump straight to tools without asking whether AI is even the right answer for the problem they're trying to solve.

Delegation slows you down just enough to make smarter choices.

It's not about dumping tasks on AI. It's about understanding your work well enough to know what AI can handle, what it can't, and where humans still matter most.

This breaks into three parts:

Problem Awareness
Before you hand anything to AI, you need to be clear about what you're actually trying to accomplish. Not "we need better social media" but "we need consistent posting that frees up our comms director to focus on donor storytelling."

If you can't name the problem clearly, AI won't solve it. It'll just automate confusion.

Platform Awareness
You need to know what your tools can and can't do. ChatGPT is good at certain things. Claude is better at others. Some tools are built for creativity. Some are built for analysis. Some leak data if you're not careful.

Platform awareness means you're not just using AI. You understand its capabilities and limitations well enough to choose the right tool for the job.

Task Delegation
This is where you actually assign the work. But it's not binary. AI doesn't just "do" or "not do" something. The question is: what parts of this task benefit from AI, and what parts still need a human?

Drafting a donor thank-you note? AI can give you a strong first pass. But a human needs to add the personal details that make it feel real. Planning a year-end campaign? AI can help with structure and messaging angles. But strategy - understanding your community, your timing, and what will land - is still yours.

Description: How You Communicate With AI

This is what most people call prompt engineering. But it's more than just writing better prompts.

Description is about being precise with what you want and how you want AI to work.

Most people treat AI like a search engine. They type a vague question, get a mediocre answer, and assume that's all it can do. But AI doesn't read your mind. It responds to what you give it.

Product Description
This is where you define what you want. Not "write a blog post" but "write a 600-word blog post for nonprofit executives about avoiding burnout during year-end giving season, with a conversational tone and three actionable takeaways."

The clearer your product description, the closer AI gets to something you can actually use.

Process Description
This is how you guide the approach. Do you want AI to brainstorm first, then draft? Do you want it to reference specific examples? Should it avoid certain phrases or jargon?

Process description is where you shape how AI works, not just what it produces.

Performance Description
This is where you define the behavior you want from AI during collaboration. Should it ask clarifying questions? Challenge your assumptions? Stay neutral or take a stance?

You can literally tell AI: "Push back if my framing seems off" or "Keep responses concise and skip the preamble." And it will.

Performance description turns AI into a better collaborator.
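For teams with someone technical on staff, the three description types can be treated as a reusable template rather than rewritten from scratch each time. The sketch below is illustrative only; the `build_prompt` helper and its field names are my own assumptions, not part of the 4D course materials.

```python
# Hypothetical helper: assemble product, process, and performance
# descriptions into a single structured prompt for an AI assistant.

def build_prompt(product: str, process: str, performance: str, task: str) -> str:
    """Combine the three description types (4D Framework) with the task."""
    return (
        f"What I need (product): {product}\n"
        f"How to approach it (process): {process}\n"
        f"How to behave (performance): {performance}\n\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    product=("A 600-word blog post for nonprofit executives about avoiding "
             "burnout during year-end giving season, conversational tone, "
             "three actionable takeaways."),
    process="Brainstorm angles first, then draft. Avoid jargon.",
    performance=("Ask clarifying questions before drafting. Push back if "
                 "my framing seems off."),
    task="Write the blog post described above.",
)
print(prompt)
```

The payoff of a template like this is consistency: everyone on the team supplies all three descriptions every time, instead of remembering to on a good day.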

Discernment: Evaluating What AI Gives You

This is where critical thinking comes in. And it's non-negotiable.

Discernment means you're not just accepting AI outputs at face value. You're evaluating quality, checking for hallucinations, and making sure the work actually serves your goals.

Product Discernment
Is the output good? Does it match what you asked for? Is the tone right? Are there errors, hallucinations, or claims that need fact-checking?

Product discernment is basic quality control. You wouldn't publish a blog post without proofreading it. Don't publish AI output without the same scrutiny.

Process Discernment
How did AI arrive at this answer? Can you trace its reasoning? Does the approach make sense, or did it take shortcuts that undermine the result?

Process discernment helps you understand whether AI's logic is sound or whether it just strung together plausible-sounding sentences.

Performance Discernment
How did AI behave? Was it helpful? Did it follow your instructions? Did it handle sensitive topics appropriately?

Performance discernment tells you whether the tool is working the way you need it to - or whether you need to adjust how you're using it.

Diligence: Acting Responsibly and Ethically

This is what keeps AI from creating problems you didn't anticipate.

Diligence means you're being intentional about which tools you use, transparent about AI's role, and accountable for what you deploy.

Creation Diligence
This is about being thoughtful on the front end. Which tool are you using? Why? What data are you feeding it? Are you using a free tool that sells your input data, or a paid version with better privacy controls?

Creation diligence means you're not just grabbing the first AI that pops up. You're choosing tools that align with your values and your risk tolerance.

Transparency Diligence
If AI helped create something, say so. Not in a defensive way. Just honestly.

"We used AI to draft this" or "AI helped us analyze survey responses" isn't a disclaimer. It's context. And in nonprofit work, where trust is currency, transparency diligence protects that trust.

It also protects you legally. If something goes wrong and you weren't transparent about AI's role, that's a bigger problem than the mistake itself.

Deployment Diligence
Before you send it, post it, or publish it - verify it.

AI makes mistakes. It invents statistics. It misunderstands tone. It confidently generates things that are just wrong.

Deployment diligence means a human reviews, edits, and takes responsibility for what goes out the door. You don't outsource accountability to a machine.
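If your team builds any internal tooling around AI drafts, this rule can even be enforced in code. Below is a minimal sketch, using nothing beyond the standard library, of a deployment-diligence gate: an AI-assisted draft can't be marked ready to send until a named human signs off. The `Draft` class and its field names are hypothetical, not part of the 4D framework.

```python
# Hypothetical deployment-diligence gate: AI-assisted drafts are blocked
# from sending until a named human has reviewed and taken responsibility.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    content: str
    ai_assisted: bool = True
    reviewed_by: Optional[str] = None  # name of the accountable human

    def approve(self, reviewer: str) -> None:
        """Record the human who reviewed and signed off on this draft."""
        self.reviewed_by = reviewer

    def ready_to_send(self) -> bool:
        # AI-assisted drafts require a human sign-off before going out;
        # fully human-written drafts pass through.
        return (not self.ai_assisted) or (self.reviewed_by is not None)

draft = Draft(content="Thank you for your first gift to our mission...")
assert not draft.ready_to_send()   # blocked: no human has signed off yet
draft.approve(reviewer="Comms Director")
assert draft.ready_to_send()       # sign-off recorded, safe to send
```

The point of recording a reviewer's name, rather than a simple yes/no flag, is accountability: someone specific owns what goes out the door.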

Why This Framework Matters for Nonprofits

Nonprofits don't have margin for error. You're accountable to donors, boards, communities, and beneficiaries. A careless AI mistake doesn't just hurt efficiency. It damages trust.

The 4D Framework doesn't eliminate risk. But it gives you a structure to manage it.

It helps you decide what AI should and shouldn't touch. It makes you better at communicating what you need. It keeps you critical about what you get back. And it ensures you're acting responsibly throughout the process.

You don't need to be an AI expert to use this framework. You just need to be thoughtful.

Ask the right questions. Verify what matters. Be transparent about your process. Take responsibility for what you deploy.

That's not complicated. It's just diligent.

What This Looks Like in Practice

Let's say you're running year-end fundraising and want AI to help with donor emails.

Delegation:
You start by defining the problem clearly: you need personalized thank-you emails that feel human, not templated. You evaluate tools and their capabilities, choosing one that doesn't sell donor data. You decide AI will draft the structure and tone, but a human will add personalized details for each donor.

Description:
You tell AI exactly what you want: "Write a 150-word thank-you email to first-time donors, warm and personal, acknowledging their gift and explaining how it will be used." You guide the approach: "Reference our mission without sounding corporate." You set behavioral expectations: "Keep it conversational and avoid nonprofit jargon."

Discernment:
When AI delivers the draft, you evaluate it critically. Is this warm enough? Does it sound like us? You check the reasoning behind word choices and structure. You assess whether it followed your instructions or reverted to generic nonprofit language.

Diligence:
You're using a paid tool with privacy controls, not a free version that could expose donor information. You document internally that AI was used in the drafting process. Before sending anything, a staff member reviews each email for accuracy, adds personal touches, and takes responsibility for what goes to donors.

That's the framework in action. It's not flashy. It's just deliberate.

The Bottom Line

AI isn't going away. The question isn't whether your nonprofit should use it. It's whether you'll use it well.

The 4D Framework won't make you an AI expert overnight. But it will make you more intentional, more responsible, and a lot less likely to create problems you didn't see coming.

Delegation helps you choose the right work for AI.
Description helps you communicate what you need.
Discernment keeps you critical about what you get.
Diligence ensures you're acting responsibly throughout.

Together, they turn AI from a risk into a tool that actually serves your mission.

And that's the whole point.

Learn More About the 4D Framework

The 4D Framework for AI Fluency was developed by Professor Rick Dakan at Ringling College of Art and Design and Professor Joseph Feller at University College Cork through ongoing research into human-AI interaction in creative and business processes.

They collaborated with Anthropic to create a free online course based on this framework. You can access the course at AI Fluency for Nonprofits.

Whether you're looking for structured training for your team or just want to understand the framework more deeply, it's worth exploring.

Kenny Kane

Kenny Kane is an entrepreneur, writer, and nonprofit innovator with 15+ years of experience leading organizations at the intersection of business, technology, and social impact. He is the CEO of Firmspace, CEO of the Testicular Cancer Foundation, and CTO/co-founder of Gryt Health.

A co-founder of Stupid Cancer, Kenny has built national awareness campaigns and scaled teams across nonprofits, health tech, and real estate. As an author, he writes about leadership, resilience, and building mission-driven organizations.
