Before Your Team Uses ChatGPT: The Board-Ready AI Policy Checklist

Updated: Jan 24

How nonprofit boards can set safe, practical guardrails before staff adopt AI tools.


Nonprofit staff are already experimenting with tools like ChatGPT - often informally - while public trust and donor confidence remain fragile. In Canada, AI chatbot use has moved into the mainstream, and concerns about misinformation are high.


Before AI becomes "just part of how we work," boards and leadership teams need simple guardrails: what data can't be used, what use-cases are allowed, what requires human review, how vendor risk is assessed, and what happens when something goes wrong.


The goal is good governance so people can use AI safely, ethically, and with confidence.


Why This Is Urgent

Here's what I'm seeing across the sector:

Your staff are changing their behavior. Many people now treat AI tools like search engines and personal assistants, which shifts expectations about how quickly they can get answers and support.


Trust is on the line. Concerns about AI misinformation in Canada are extremely high, which means a single public misstep can land harder than it used to. 1 in 3 donors say undisclosed AI use would erode trust.


Nonprofits are catching up fast. Canadian nonprofit AI adoption has been low historically, but interest is rising quickly.


And one more truth that boards don't always hear directly: even when the organization hasn't "adopted AI," staff may already be using it, usually with good intentions (drafting, summarizing, brainstorming) but without shared boundaries.


Governance is often seen as bureaucracy. But I invite us to think of good governance as care.



A Quick Frame


AI is the current tool - but the deeper question is: what kind of workplace are we building?


For nonprofits, "human-first" means:

  • Protecting the privacy and dignity of community members

  • Preserving donor trust

  • Supporting staff capacity without increasing risk

  • Making equity a design requirement, not an afterthought


With that frame, the checklist becomes much simpler.



The Board-Ready AI Policy Checklist


Use this as a board conversation agenda, a policy skeleton, or a leadership team working session. Each point should be short enough to be remembered, and specific enough to follow.


1) Purpose + Boundaries

Define what AI is for in your organization (and what it's not for). Tie it to mission and values, not novelty.

Think: "Where would AI free capacity without weakening trust?"


2) People Impacted + Equity Check

Name who could be helped and who could be harmed by AI use in your context (staff, communities served, donors, partners). Build in a representation check so "efficiency" doesn't override dignity, accessibility, or lived experience.


3) Human Review + Owner

Be explicit about where humans must make the final call - and who is accountable. AI can draft, summarize, and suggest; humans decide, approve, and take responsibility for outcomes.




4) Incident Plan (Reporting + Escalation)

Decide what happens when something goes wrong: sensitive info is pasted into a tool, an output is harmful, or something inaccurate goes public. Staff should know who to tell, what to document, what gets paused, and how you communicate if needed.


5) Vendor Safety (Procurement + Terms)

If you're paying for tools or integrating them into workflows, treat vendor decisions as governance. Ask about data retention, whether prompts are used for training, admin controls, audit logs, storage location, and exit terms.


6) Allowed vs Not Allowed (Use-Cases)

Write this in plain language. Identify a few low-risk, high-benefit uses (drafting internal comms, summarizing public info) and clearly prohibit high-risk uses (client-facing advice without safeguards, entering identifiable donor/client data, generating unsupervised legal/financial guidance).


7) Never-Enter Data List (Privacy Boundary)

Create a short "never-enter" list that staff can remember:

  • Identifiable client/community member data (PII)

  • Donor lists, giving history, wealth screening data

  • HR/employee records

  • Passwords, access links, internal security details

  • Confidential case notes, medical/health details, legal advice in progress


8) Fact-Check Rules (Hallucinations + Misinformation)

Assume AI can be confidently wrong. Require verification for anything public, anything financial, anything legal, and anything that could affect services or trust. If you can't verify it, don't publish it.


9) Staff Basics Training (Safe Use Baseline)

Training doesn't need to be complex, but it does need to be consistent. Make sure everyone understands the boundaries, the approved uses, how to protect sensitive data, and what to do when they're unsure.


10) Review + Update Rhythm (Living Policy)

AI tools and norms change; your policy should too. Review at least annually, and update sooner when you adopt a new tool, shift a use-case, or learn from an incident.



What Belongs in a Policy vs. Staff Guidelines?

Here's a clean separation:


Board/organizational policy (stable):

  • Principles and purpose

  • Data boundaries

  • Permitted/prohibited use-cases

  • Accountability roles

  • Vendor/procurement expectations

  • Incident response

  • Review cadence


Staff guidelines (practical, update often):

  • Examples of safe prompts

  • Do/don't screenshots

  • "Red flag" scenarios

  • What to do when unsure

  • Template language for disclosure (if you choose to disclose AI use publicly)


That separation helps boards govern without micromanaging, and helps staff act without guessing.



Three Common Nonprofit Scenarios


Scenario 1: Communications and Storytelling


Your communications lead uses AI to draft a newsletter story about impact.


Checklist:

  • Only public, non-sensitive info in the prompt

  • Bias and tone review (especially when writing about communities)

  • Fact-check any stats or claims

  • A human editor owns the final voice


Scenario 2: Fundraising and Donor Engagement


A fundraiser wants to use AI to personalize donor outreach.


Checklist:

  • Donor data boundaries (what can't be used)

  • Fairness and consent considerations (especially with segmentation)

  • Transparency choices (what you will/won't disclose)

  • Procurement questions if a tool touches your CRM


Scenario 3: Client-Facing Information or Advice


A program team considers a chatbot on the website for service navigation.


Checklist:

  • Strict human-in-the-loop design for any advice-like content

  • Clear limitations and escalation to a person

  • Incident response plan before launch

  • Equity testing (language, accessibility, bias, safety)

If you serve vulnerable communities, client-facing AI calls for your strongest governance, not your fastest launch.



Frequently Asked Questions


What should a nonprofit board decide before staff use ChatGPT?

Purpose, boundaries, accountability, and risk appetite. Staff can experiment inside guardrails. Boards set the guardrails.


What data should never go into an AI chatbot?

If it would harm someone (or the organization) if exposed, it doesn't go in. Start with identifiable client info, donor data, HR info, passwords, and confidential internal documents.


Is ChatGPT confidential for work?

Not by default. Treat public AI tools like a public hallway conversation: useful for brainstorming, risky for sensitive details. Your policy should spell this out clearly.


How do we handle hallucinations and misinformation risk?

Assume AI can be wrong. Require verification for anything public, anything financial, anything legal, and anything that affects services. Use AI to draft - not to decide.


What belongs in an AI policy vs. staff guidelines?

Policy = principles, boundaries, roles, and incident response. Guidelines = examples and how-to usage that changes as tools change.


Do we need an AI committee or champion?

Not always. But you do need someone accountable for policy upkeep, training coordination, and vendor review. In small orgs, that might be one person with a clear mandate.


How often should we review our AI policy?

At least annually, and sooner when you adopt a new tool, change how you use data, or experience an AI-related incident.


What if we're very small or have no time or budget?

Start even smaller.

"We can use AI for: drafting (not publishing), research summaries, brainstorming.

We never put in: client info, donor data, passwords.

Rule: Check with [name/role] before anything public or new."

That's governance. You can grow it later.



In Closing


Here's what I know: your staff are already experimenting. AI tools are becoming more accessible, not less. And trust, once broken, takes years to rebuild.


So the question isn't whether staff will use AI. It's: can we build guardrails that match how we actually work?


The best AI policies aren't about controlling technology or people. They're about protecting relationships - between your organization and the communities you serve, between leadership and staff, between efficiency and dignity.


This board-ready AI policy checklist is your starting point. Make it yours. Adjust it as you learn. And when you need support turning policy into practice, reach out.


If your board is asking "are we behind?", try reframing: "What do we need in place so our people can use these tools without gambling trust?"


Answer that, and you're exactly where you need to be.


If you want help developing an AI policy for your nonprofit, I work with boards, executive directors, and leadership teams to build clear, practical guardrails and staff guidelines. Contact Sarah Downey Consulting.


About Sarah Downey


Sarah Downey is a Canada-based consultant helping nonprofits adopt AI safely, ethically, and confidently through governance clarity and policy development.
