Most AI case studies you read in the nonprofit sector are sanitized. The pilot worked. The participants loved it. The metrics moved. The vendor was great. There’s a quote from an executive director that could have come from any of fifty other case studies.
These case studies are useless. Worse than useless: they shape funder expectations and peer-organization decisions about what AI can do, and they’re written by people whose job depends on the answer being yes.
Field reports from Strategical AI will look different. This article explains the format, the principles, and why the failures matter at least as much as the successes.
What a field report covers
Each field report has five sections, in this order:
- The goal. What were we trying to do? Who is “we” — which organizations participated, in what roles? What was the program context?
- The setup. What tool(s) did we use? What was the workflow? What staff time and budget went in? What did we explicitly choose NOT to do?
- What happened. Outputs produced, participants affected (anonymized), staff time used, money spent. Numbers, not impressions.
- What worked, what didn’t. Honest. If half the workflow failed and half succeeded, we’ll say so and split the analysis.
- What we’d do differently. The lessons that should affect anyone else trying something similar.
Why the failures matter at least as much
Three reasons the failures get equal weight:
Failures are denser with information. A successful pilot tells you one path worked. A failed pilot can tell you five things that don't work and three things to think through before trying anything similar. Per word, failure reports are richer than success stories.
Other organizations are going to try what you tried. Without the failures in print, every nonprofit experimenting with AI is rediscovering the same mistakes. That’s a sector-wide tax on participants who didn’t ask to be in the test cohort.
Funders are getting better at this. The funder landscape has shifted in the last several years toward valuing organizations that share what didn’t work. Field reports are now an asset in grant applications, not a liability. Publishing failure builds credibility in places that matter.
What we won’t do in field reports
We won’t put participant identifiers in reports. Examples will be composites or anonymized to the point that re-identification is implausible.
We won’t soft-pedal vendor problems. If a tool’s data-handling commitment turns out to be weaker than advertised, we’ll name the tool and quote the specific failure. Vendors who don’t want to be named in negative reports should have built better products.
We won’t run sponsored field reports. No vendor pays for inclusion. No funder shapes the conclusions. If a funder underwrites one of our pilots, we say so, and that funder reads the report at the same time as everyone else.
We won’t write reports we don’t have evidence for. A field report needs participation numbers, time logs, and either qualitative interviews or quantitative outcomes. “We think this went well” is not a field report; it’s an opinion.
What’s coming
Strategical AI is in the process of standing up its first grant-funded pilots. Field reports will appear in this category as those pilots complete, beginning with shorter pieces (one workflow, one cohort) and growing toward longer multi-site reports.
If you’re a nonprofit or workforce program running an AI pilot that you’d be willing to write up as a field report for this site — with editorial control on your side, with us providing structure and a publication channel — reach out. Honest field reports are a public good. We want to help more of them exist.
Interested in commissioning a field report on your organization’s AI work? Custom field-report development is handled at strategicalai.net.