Nobody Trusts Anyone to Govern AI. The Nonprofit Sector Should Change That.
- Sarah Downey
This piece was originally written as an op-ed in response to SFU research reported in the Vancouver Sun in March 2026.

Something shifted in British Columbia last year.
Researchers at Simon Fraser University surveyed BC residents about artificial intelligence in late 2024, then surveyed them again in early 2026. What they found wasn't what the tech industry tends to predict. The more people learned about AI, the more concerned they became. Curiosity faded. Anxiety took its place.
This wasn't simply backlash from people who don't understand the technology. Respondents reported using AI more often and knowing more about it than in the first survey. Their skepticism grew alongside their familiarity.
The concerns they named are real: losing the human element in decisions, bias built into automated systems, governments and companies using opaque algorithms to affect people's lives, personal data being used without meaningful consent. These risks are no longer hypothetical. They are already happening.
Here is the finding that stayed with me: most BC residents believe governments should regulate AI. And more than half don't trust government to do it in the public's interest. Trust in tech companies is even lower. The only institutions that maintain broad public confidence are universities and researchers.
That gap is a problem. It's also an opening.
The institutions the public still trusts have something in common with the nonprofit sector. They aren't profit-driven. They're accountable to communities, not shareholders. They hold values that predate AI and that won't be abandoned for a competitive advantage.
Nonprofits have spent decades building exactly the kind of trust that's now in shortest supply.
And most of them have no AI governance framework at all.
This is the part that troubles me and excites me in equal measure. Because if nonprofits are willing to step into this gap rather than wait for government to set the rules, they have something rare to offer: a values-based approach to AI governance that this country desperately needs. Organizations that have spent decades making decisions through an ethics lens, with community trust at the centre, are exactly the voices that should be shaping how AI gets governed, not just inside their own walls but in the broader national conversation.
The nonprofit sector has a long history of getting left behind when new technology arrives. Resources flow to corporate adoption. Attention goes to industry use cases. Tools get built for sectors with budgets. Nonprofits catch up years later, without the infrastructure or guidance they need.
AI is following that pattern right now. I understand why many nonprofit leaders are watching from the sidelines. The technology moves faster than most organizations' capacity to keep up. The stakes feel high and the guidance feels thin.
But there is a rare opening here, and it must not be missed. The public has already withdrawn trust from the institutions that would normally step in. That is not a reason to stay on the sidelines. It's an invitation to step off them.
The public is not asking governments or tech companies to lead on AI governance. They've already signaled they don't trust them to. What the public is asking for is transparency, human oversight, protection of personal data, and accountability for automated decisions. That is language the nonprofit sector lives in. These are its foundational values.
Nonprofits that develop strong AI governance are doing more than protecting their own organizations. They are modeling what responsible adoption looks like for every sector watching.
That's a different kind of leadership than the nonprofit sector is used to. Not leadership through a funding competition or a policy brief or a seat at someone else's table. Leadership by doing. By building governance that reflects your values. By refusing to treat AI policy as a compliance exercise and owning it instead as an expression of what your organization actually stands for. By being open about how you're doing it. That visible action is the leadership.
This requires clarity about values, honesty about risk, and someone willing to make decisions before a crisis forces them to. Boards are asking about AI. Funders are starting to ask. Communities are already asking. Organizations that have genuinely thought this through will answer those questions with confidence. That confidence earns a different kind of credibility.
British Columbians are paying attention. They're learning, using the technology, and thinking seriously about what it means. They want guardrails. They want institutions they can trust to hold those guardrails in place.
The nonprofit sector can be that institution.
But only if it decides to show up.
Sarah Downey is a Canada-based consultant who helps nonprofits adopt AI ethically through governance clarity, training, and AI policy development.