Government & Public Services
Engage with governments and regulatory bodies to further public-sector readiness.
Governments are being asked to govern something they barely understand.
Governments at every level are making the biggest decisions about AI in public life without proven frameworks, shared standards, or meaningful input from the people they serve.
Most public officials making decisions about AI procurement, deployment, and oversight have had no training in how these systems work, what risks they carry, or what questions to ask vendors. They're expected to move fast, but they have almost no playbook for doing it well.

What We're Seeing
The challenges governments are facing now
There's no shortage of principles. There's a shortage of practice. Over 190 countries have endorsed the UNESCO Recommendation on AI Ethics. Dozens of frameworks, indexes, and guidelines exist. But when a city agency needs to decide whether to deploy a predictive tool in child welfare or criminal sentencing, there is very little practical guidance on how to connect those principles to that moment.
The global approach is fractured. The EU has binding law. The U.S. relies on a patchwork of executive orders and agency-level guidance. Most countries in the Global South have no comprehensive AI policy at all, and a 40-point readiness gap separates high-income nations from middle-income ones. AI doesn't respect borders, but governance still does.
Citizens are almost entirely absent from the conversation. The people most affected by AI (those navigating benefits systems, healthcare, immigration, and the justice system) often have no voice in how these tools are designed, deployed, or overseen. Most don't even know AI is involved. In recent surveys, 37% of people said they simply don't know how AI is affecting the government services they depend on.
The organizations building AI are often the same ones advising governments on how to regulate it. This creates a tension that erodes public trust before governance has even had a chance to take root. Meanwhile, civil society groups, researchers, and advocacy organizations with deep expertise are under-resourced and under-consulted.
Everyone agrees AI governance matters. Almost no one agrees on what it should look like or who should lead. The result is a global landscape where urgency is high, coordination is low, and the window to get this right is narrowing.
What We Do
What working with us actually looks like
We start by finding what's missing. Before anything else, we work with governments and public-sector organizations to identify gaps in policy, practice, capacity, and public trust.
We bring the right people to the table, people who might not normally be in the same room. Our network spans policymakers, standards bodies, civic tech organizations, researchers, labor groups, and privacy advocates across dozens of countries. We connect these organizations around specific projects, not abstract conversations. A government trying to build an ethical AI procurement process shouldn't have to figure it out alone when someone across the world has already done the hard work.
We help build the plan, not just the vision. A lot of organizations can tell you what ethical AI should look like. We focus on what it takes to actually get there: the implementation roadmap, the partnerships, the practical toolkits, the pilot programs. We work alongside governments and organizations to co-develop strategies they can realistically execute with the resources and capacity they have.
We stay focused on what's measurable. Every engagement produces something concrete: a readiness assessment, an implementation framework, a set of policy recommendations with clear benchmarks. Organizations we work with leave with a plan they can act on, track progress against, and hold themselves accountable to.
We don't replace what's already working. We fill in what's not. Where strong organizations and frameworks already exist, we partner with them and help extend their reach. Where critical work isn't happening, or isn't happening fast enough, we step in to close the gap. The goal is always the same: ensure that AI governance, and AI's use in government, are shaped to benefit humanity.
What Changes Because of This
The outcomes people can expect
Governments stop guessing and start governing AI with intention. Before, decisions about AI in public services were reactive, driven by vendor pitches or political pressure. After, there's a clear framework grounded in ethics, evidence, and the needs of the people being served.
The gap between principles and practice closes. Organizations already committed to ethical AI on paper finally have a concrete path to put it into action with implementation strategies, timelines, and benchmarks that reflect what's actually achievable.
People who were working in isolation find each other. Policymakers in one country discover that a civic tech group on another continent has already solved the problem they're struggling with. Researchers connect with the government agencies that need their expertise most. Work that was duplicated or siloed becomes collaborative, accelerating impact and extending reach.
Public servants gain confidence in decisions they used to avoid. When a procurement officer is asked to evaluate an AI system, they have a framework for knowing what to look for and what to push back on. When a legislator is drafting oversight policy, they're drawing on real-world precedent rather than starting from scratch.
Citizens move from invisible to included. Communities that were never consulted about the AI systems shaping their access to benefits, justice, and services begin to have a structured voice in how those systems are designed and deployed.
Trust becomes possible. Not blind trust but earned trust. When governments can show how AI decisions are made, who's accountable, and how people can challenge outcomes, the relationship between institutions and the public starts to repair. That's the foundation everything else depends on.
Who We Work With
Who this is for
Government agencies navigating AI for the first time or trying to do it better. Whether it's a national ministry developing its first AI strategy or a city agency rethinking how it procures algorithmic tools, we want to work with public-sector teams at every stage of readiness. If you're responsible for AI decisions that affect how people access services, justice, or opportunity, we would like to partner with you.
International and multilateral bodies setting the rules. We partner with the intergovernmental organizations and treaty bodies that shape the frameworks countries adopt, helping move policy from principle to practice and closing the gap between what's been agreed to and what's actually being implemented.
Standards and safety organizations building the technical foundation. We work with the institutions developing evaluation methods, risk frameworks, and testing protocols that make trustworthy AI measurable and connect their work to the governments and organizations that need it most.
Civic tech and democracy organizations keeping citizens in the conversation. We collaborate with groups working to ensure AI governance isn't just about institutions but about people, helping them bring the tools, methods, and networks to embed citizen voice into foundational policy by design, not as an afterthought.
Think tanks and academic research centers providing the evidence base. We work with the institutions producing the research that informs credible policy, and help translate their findings into actionable guidance governments can use.
Privacy, labor, and environmental organizations connecting AI to its real-world consequences. We partner with groups focused on data rights, workforce disruption, and environmental sustainability: perspectives that AI governance can't afford to leave out.
Private-sector organizations committed to governing AI responsibly. We work with companies that recognize their AI systems touch people's lives through public services, hiring, lending, healthcare, and more, and that want to get governance right before it's mandated, whether that means developing internal ethical frameworks, aligning with emerging global standards, or ensuring the communities affected by their products have a voice in how they're designed. We partner with companies willing to hold themselves to a higher standard than the market currently requires.
Key Issues
- Surveillance overreach and chilling of speech and association
- Safety-critical automation errors in public systems and emergency response
- Bias and disparate impacts in eligibility/safety decisions (including wrongful flags/arrests)
- Opaque risk scoring with limited contestability and due-process gaps (notice, explanation, appeal)
- Vendor opacity that hinders accountability and effective remediation
- Uneven impacts across communities or protected classes
Key Objectives
- Governmental bodies are AI-ready and informed
- Governmental incentives exist to reskill workers whose jobs are displaced by AI
- Governments worldwide have prepared their citizens for an AI-driven world
