AI Ethics for Small Nonprofits: Beyond the Buzzwords

Most AI ethics frameworks were written for tech companies trying to look responsible after the fact. They’re abstract enough to apply to anything and concrete about nothing in particular.

Small nonprofits face different ethical questions than tech companies do. Different stakes (participants who depend on your services, not customers who can switch). Different power dynamics (your participants often can’t opt out). Different accountability (your board and your funders, not shareholders). What follows is a working framework for the actual questions small nonprofits face.

Question 1: Are you using AI to do something to participants they didn’t consent to?

This is the hardest question and the most important one. Examples of using AI to do something to participants: transcribing or summarizing their conversations with staff, scoring or predicting their outcomes, and generating communications sent to them in your organization’s name.

None of these are categorically wrong. Some have legitimate program design purposes. But all of them require explicit, informed consent — not buried in a sign-up form, not assumed because the participant is in your program. Consent means the participant could meaningfully decline and still receive services.

If you can’t honestly say the participant could decline without consequence, you don’t have consent. You have compliance with a checkbox.

Question 2: Are you using AI to make decisions that affect participants’ access to services or benefits?

The line: AI helping a human make a decision is different from AI making the decision. The first is augmentation; the second is automation.

For participant-affecting decisions — eligibility, service tier, prioritization, exit — the rule is human in the loop, with the human authoritative and accountable. AI can surface considerations. Humans decide. Decisions have a documented review path the participant can use.

If your AI tool makes the decision and a human rubber-stamps it, you have automation with a human-shaped fig leaf. That’s worse than honest automation because it diffuses accountability.

Question 3: Are you publishing what you do, including what doesn’t work?

The nonprofit sector has a transparency norm that the broader tech world doesn’t share. Funders, peers, and the communities you serve all benefit when your organization is honest about what AI tools you use, how you use them, and what’s worked or not.

Transparency is also a check against your own drift. Knowing you’ll publish what you’re doing makes you slow down at the decisions you might otherwise rush. Field reports — even short ones — are an ethical practice, not just a content marketing strategy.

Strategical AI runs on this premise; we publish what we try, including what didn’t work. We recommend the same to peer organizations.

Question 4: Are you protecting the people whose work makes the AI work?

The AI tools your organization uses were trained on labor. Some of that labor came from content creators whose work was scraped without consent; some from paid annotators in low-wage geographies; some was problematic in ways the vendors don’t disclose.

You won’t fix this single-handedly. But you can take it seriously, starting with how you choose vendors and what you disclose about the tools you use.

What’s NOT actually an ethical question (despite the marketing)

“Bias.” Bias is real and important, but “AI is biased,” treated as a categorical statement, is a substitute for engaging with the specific decisions your specific use is shaping. Specificity is ethics; generalities are theater.

“Hallucinations.” AI making up facts is a quality and accuracy problem, not primarily an ethics one. It becomes an ethics problem when the made-up output reaches a participant or funder as fact, and the answer is human review, not theology.

“AI sentience.” Not a current organizational ethics question for any small nonprofit. Save it for the conferences.

The bottom line

Ethical AI use at a small nonprofit isn’t a framework you adopt. It’s a practice: who decides; who’s accountable; who knows; who can opt out; and what you tell the people whose lives are affected.

The Nonprofit AI Use Policy: Staff Handbook is built around this practice. The three-tier framework, the human-review requirement on Restricted-tier uses, and the prohibited-use list aren’t abstract values — they’re the operational expression of these questions.


Need an ethics review for a specific AI integration, or facilitated discussion with your board? Visit strategicalai.net.
