
Policy Design in the AI Era

Updated: Feb 28

A guide for leaders who know AI governance matters but aren’t sure where to begin


[Illustration: a hand touching a geometric prism that refracts golden light, representing how thoughtful policy design brings clarity to complexity.]

We’re redesigning how work happens.


Most workplace policies were written for a world where tools changed slowly and jobs stayed recognizable for years at a time.


That world is gone.


AI is a shift on the level of the printing press. It changes what counts as knowledge, what counts as skill, and what counts as work. And the tricky part is this: when you’re inside the shift, it’s hard to see the size of it. It’s like trying to describe the whole ocean while you’re swimming under the waves.


So if you’re holding an AI policy that was drafted with pre-AI assumptions, I’m not here to critique it. I’m here to name the mismatch, and help you build something that actually protects your people, your mission, and your trust in the years ahead.


1. Good policy looks further ahead than feels comfortable.


[Illustration: a copper telescope projecting a golden beam across the sky, symbolizing long-range vision.]

Not in a reckless way. In a leadership way.


Because policy-making puts us in a strange position right now: we’re building something that has to support people in the present while also staying relevant as the ground continues to move. The future is already here, in the form of tools showing up in everyday workflows. And at the same time, the bigger societal impact is still unfolding, and we don’t yet have language for all of it.


So this is the posture: we look ahead on purpose. We name what’s changing. We build guardrails that can hold through uncertainty. And we give ourselves permission to revise often, because the landscape is still forming.


2. This is a change-management moment, whether we admit it or not.


[Illustration: a figure stepping upward across platforms that shift from teal to gold, symbolizing gradual, supported learning.]

AI policy is governance. It’s also a leadership challenge. And right now, every leader is a change leader. People are being asked to adapt how they write, plan, research, communicate, and make decisions, often while they’re already stretched thin. That’s a lot to hold.


So a responsible AI policy has to be designed like a transition plan, not a compliance memo. It has to acknowledge that humans learn in phases: curiosity, resistance, experimentation, confidence, overconfidence, correction, integration. The policy needs to create a runway (time to play, time to practice, time to build judgment) while still protecting what matters: privacy, trust, quality, and mission.


A good policy doesn’t demand instant mastery. It builds a culture where learning is expected, supported, and resourced, because careers and futures are on the line, not just workflows.


3. Policy can’t be “one and done” anymore.


[Illustration: three segments resting on a solid foundation bar, with arrows suggesting iterative updates built on a stable base.]

The fast-moving tech sector trained the world to work iteratively: we test, we learn, we update. That's not always comfortable in nonprofits, where stability matters. But AI policy design for nonprofits has to account for the fact that the tools, risks, and opportunities are still shifting.


So an AI policy needs two layers.

Layer One: the foundation (static). Board-approved. Mission-aligned. Values-forward. This layer doesn’t chase tools. It names the organization’s principles, the non-negotiables, who is accountable, and how often the policy is reviewed.


Layer Two: the operational layer (adaptive). This is where approved tools live. This is where incident reporting lives. This is where “what changed in the last six months?” gets handled without rewriting your entire governance foundation.


And then you create a third expression: a plain-language one-pager for staff. Because if people can’t understand the policy quickly, they won’t follow it, and they’ll hide their use.


4. AI disrupts identity. Expertise is how you apply knowledge now, not just what you know.


[Illustration: a hand holding a magnifying glass between scattered symbols and ordered dots, symbolizing human critical thinking turning AI output into accountable decisions.]

If someone can retrieve in five seconds what another person spent five years studying, that raises difficult questions about identity, expertise, status, and how people understand their roles at work.


Workplaces are going to feel that.


So policy has to do more than be a compliance document. It has to name, clearly, which decisions belong to humans and why. The policy should spell out what always requires human oversight, what can never be delegated to AI, and how AI-generated work gets validated against mission, values, and real-world impact.


AI can draft. Humans decide.


Critical thinking is the new expertise. Questioning the output. Checking for accuracy. Catching bias before it travels downstream. Making sure the final product aligns not just with a prompt, but with what the organization actually stands for. That’s the human work that matters now.


5. If AI frees capacity, the real question becomes: what do we do with our reclaimed time?


[Illustration: a clock and a figure arranging small golden stones, symbolizing intentional use of reclaimed time.]

If a small organization can save 5 to 15 hours a week through better tooling, that time doesn’t automatically turn into wellbeing. It can also turn into more output pressure.

So this is where policy becomes strategic.


A human-centered AI policy should explicitly say: we are using AI to uplevel people, not replace them. That means naming where the freed capacity goes:


  • deeper stakeholder relationships,

  • community engagement,

  • creative problem solving,

  • strategic thinking,

  • better service and care.


In that way, the policy becomes a declaration of culture, not just a compliance document.


6. Uncertainty creates fear. Data governance is how we protect trust.


[Illustration: a gold stream flowing between two parallel lines, symbolizing clear guardrails that protect trust.]

Let’s be honest: most people don’t fully understand where their data is going when they use AI tools, and that uncertainty fuels fear.


Your policy should reduce that fear with clarity.


In Canada, that means being explicit about privacy expectations and data stewardship practices, including alignment with PIPEDA (the Personal Information Protection and Electronic Documents Act). Practically, that looks like:


  • vendor due diligence (what happens to our data, where is it stored, what are the terms),

  • an approved tools list with security considerations,

  • clear rules about what information can never be entered (client data, sensitive internal info, personally identifiable information unless explicitly governed),

  • and training so staff can recognize boundaries without needing to be technical experts.


A policy is only as strong as the moments when people are tired and moving fast. That’s why the guardrails have to be simple and real.


7. In Canada, ethical AI means Indigenous data sovereignty belongs in your policy.


[Illustration: a hand pausing above a laptop keyboard, symbolizing deliberation before entering sensitive data.]

This one doesn’t live in the footnotes.


If your organization works with Indigenous communities, serves Indigenous clients, or handles data that touches Indigenous Peoples and knowledge, your AI policy needs to name that explicitly.


General-purpose data governance principles don’t go far enough here. Indigenous data sovereignty means Indigenous communities have the right to own, control, access, and possess data about their peoples, lands, and cultures. An AI tool that processes that data without explicit authority and direction from the community isn’t just a privacy risk. It’s a violation of sovereignty.


I’m not the authority on this. The First Nations Information Governance Centre is. Their OCAP principles (Ownership, Control, Access, Possession), along with the CARE principles (Collective Benefit, Authority to Control, Responsibility, Ethics) developed by the Global Indigenous Data Alliance, are the places to start.


What I can say with confidence is this: before any AI tool touches work that implicates Indigenous Peoples or knowledge, the right question is not “is this allowed under our privacy policy?” It’s “are we operating under the direction of the people this data belongs to?”


Naming that question in your policy is the beginning of respect. Your obligation doesn’t end there.


8. AI is a mirror, and “algorithmic bias” is not a strong enough phrase.


[Illustration: a silhouette reflected with slight distortion in a mirror, symbolizing protective oversight correcting bias.]

AI reflects society back to us: racism, sexism, classism, ableism. It also carries risks of its own: misinformation, privacy breaches, and reputational harm.


If the policy treats this like a footnote, you’re setting staff up to stumble into harm while thinking they’re being “innovative.” And that harm doesn’t stay internal. It reaches the clients, communities, and people your organization exists to serve.


Here’s what I recommend organizations put in writing:


  • baseline staff training in AI ethics,

  • equity checks on high-impact outputs,

  • and clear escalation pathways when something feels off.


Not punitive. Protective.


9. Pretending AI is optional can do real harm.


[Illustration: a bridge and an open hand holding a small sprout, symbolizing thoughtful adoption and duty of care.]

Saying “we don’t use AI here” might feel like a values stance. But over time, it can create a career-limiting environment for staff, especially as AI becomes woven into how work happens across industries.


This is where we need to be honest and humane. We are in a transition period. For some organizations, that’s one year. For others, it’s four. For others, it’s ten. But the direction is clear: AI is becoming part of the baseline toolkit of modern work. And if an organization refuses to engage with it entirely, it risks doing employees a disservice, not because people must become “AI people,” but because their future options will be shaped by what they’ve had the chance to learn.


The policy stance: adopt thoughtfully, with a spine. Clear guardrails. Real training. Room to practice. And leadership that treats capability-building as part of its duty of care.


10. Trust dies in secrecy. Policy is how we stop covert use.


[Illustration: two figures seated at a shared laptop, symbolizing open, transparent AI use supported by policy.]

Right now, many workers hide their AI use because they don’t know what’s allowed, or they’re afraid they’ll be judged.


That’s a governance gap, not a “staff problem.”


Your policy should create psychological safety by clearly defining:


  • what’s permitted,

  • what’s encouraged,

  • what’s prohibited,

  • and what people should do when they’re unsure.


And yes: a version of the policy should live on your website. Because transparency builds trust with funders, communities, and partners, and it sets a tone internally that says: we’re doing this thoughtfully, out loud, together.



In Closing

The printing press didn’t just change how books were made. It changed who got to speak, who got to know things, and who held power. It took generations to understand the full size of that shift, and the people living inside it couldn’t see it clearly either. They were swimming under the waves too.


AI is doing the same thing. The world where policies were written once and filed away is gone. The world where expertise meant what you knew, rather than what you could do with what you knew, is gone. The world where trust was built slowly, in private, without anyone watching how decisions got made, that world is gone too.


What remains is a choice. Every organization navigating this moment gets to decide what kind of institution it wants to be on the other side of this shift. A policy won’t make the waves stop. But it will tell your people, your communities, and your funders exactly where you stand in the water, and that you didn’t wait for someone else to figure it out first.


You can wait for regulation to tell you what to do. Or you can decide now, on your own terms, what your organization stands for. That’s the kind of leadership the AI era needs.


[Illustration: a faceted prism casting structured beams over calm water, with a small figure facing the horizon, symbolizing clarity and steady leadership after transformation.]

About Sarah Downey

Sarah Downey is a Canada-based consultant helping nonprofits adopt AI safely, ethically, and confidently through governance clarity and policy development.



I’m a white settler, grateful to be living on the traditional, ancestral, and unceded lands of the Songhees, Esquimalt (Lək̓ʷəŋən), and WSÁNEĆ peoples. Unceded means these lands were never signed over through treaty and still rightfully belong to the nations who have stewarded them since time immemorial.

I recognize the ongoing impacts of colonialism and commit to using my voice and work to contribute to truth, repair, and meaningful change.
