AI for Nonprofits: A Beginner’s Roadmap

If you run a small nonprofit and you’ve been hearing about AI from your board, your funders, or your peers, you’ve probably had the same instinct: it would be smart to do something with this, and it would be reckless to do too much. Both are right. Here’s a roadmap that lets you do something useful without overcommitting.

Decision 1: Decide what AI is FOR at your organization

The single biggest mistake small nonprofits make with AI is treating it as a thing to adopt instead of a tool to point at a problem. Before any tool conversation, get clear on which one or two organizational problems AI might actually solve.

For most small nonprofits, the candidate problems look like this: too many grant proposals to write in the time available; a backlog of internal documentation that no one has time to summarize or organize; outreach materials that need to be produced in multiple formats or languages; reporting work that eats program-staff time. AI does well on writing-adjacent and summarization-adjacent tasks. It does poorly on anything that requires deep organizational context or human judgment about specific people.

Write down two problems. Anything you adopt later should serve one of those two. If a tool doesn’t, you don’t need it.

Decision 2: Establish a written policy before you scale

You don’t need a 30-page policy. You need a one-page statement that says: which uses are permitted without approval, which require approval, and which are off-limits. The Nonprofit AI Use Policy: Board Brief is exactly this and is designed for board adoption in one meeting.

Why this matters before you adopt tools: without a policy, staff make their own decisions about what to paste into which tool, and the decisions are inconsistent. With a policy, your team works inside guardrails that protect participants and protect you.

Decision 3: Name a responsible person

Someone in your organization needs to own AI decisions — not as a full-time role, but as a clear point of accountability. For most small nonprofits, this is the executive director, with the board chair as the escalation point. The person doesn’t need to be technical. They need to be the place where “can we use this tool for this thing?” gets answered.

If no one owns it, AI decisions become whatever individual staff members feel comfortable with. That's not governance; it's drift.

Decision 4: Pick one small first use

The first AI use at your organization should be small enough to evaluate and structured enough to learn from. Three patterns that work well for first uses: drafting first-pass grant-proposal language from your own prior proposals; summarizing internal documentation that no one has time to read; converting existing outreach content into other formats or languages.

Avoid first uses that touch participant data, that automate decisions about people, or that depend on AI for outputs you’d send to a funder. Build the muscle on safer ground first.

What you don’t need yet

Almost everything else. You don’t need an enterprise AI platform. You don’t need to retrain anyone. You don’t need a vendor contract longer than 12 months. You don’t need to issue a public statement about AI. You don’t need an AI strategy with a five-year horizon.

You’ll need some of these eventually. But the order matters. A policy, a named person, and a small first use will teach you more about what your organization actually needs than any vendor demo or strategy document.

Where to go from here

Once you’ve made the four decisions, the AI Readiness Self-Assessment will surface the gaps you should address next. The articles in this category will work through specific implementations — prompt patterns for program staff, workflows for case managers, ways to talk to your board about AI risk — as the field reports come in.

If you’re a federally funded program, also read the Federal Compliance articles before adopting anything; the rules are stricter than the general nonprofit baseline.


Need a custom roadmap for your specific organization, including which tools to adopt and in what sequence? Visit strategicalai.net.
