Communication & Media
Communicate with the global public to raise awareness, build trust, and inspire ethical thinking around AI.
The information ecosystem is now programmable—and trust is the scarce resource.
AI can generate content at scale, tailor messaging to individuals, and optimize what people see in real time. That changes how truth spreads, how communities form beliefs, and how influence works—often faster than our institutions can respond.
At the same time, the same systems that enable creativity and access can also amplify manipulation, harassment, and exclusion. When people can’t tell what’s real, who made it, or why it’s being shown to them, the foundations of public trust start to crack.
Humanity in AI focuses on strengthening the conditions for healthy communication: transparency, safety, rights, and a public that can navigate AI-shaped media with clarity.

What We're Seeing
AI is accelerating persuasion, flooding attention, and weakening shared reality.
Across platforms, communities, and news ecosystems, the same patterns keep showing up:
- Misinformation and deepfakes spread faster and cheaper, especially during high-stakes moments.
- Targeted persuasion and dark-pattern nudges become more personalized and harder to detect.
- Algorithmic bias and exclusion show up in moderation, discovery, and recommendation—often harming vulnerable groups most.
- Privacy and civil liberties erode through face/voice ID, cross-platform tracking, and surveillance-by-default.
- Online safety threats increase—harassment, abuse, and child-safety risks scale with automation.
- Creator and IP pressure intensifies as provenance, consent, and compensation lag behind training and distribution.
The result: people feel manipulated, creators feel exploited, and institutions lose credibility—even when they’re trying to do the right thing.
What We Do
We turn ethical principles into practical playbooks for truth, safety, and rights online.
We help organizations move from “we know this is a problem” to “here’s what we’re implementing.” In practice, that means:
- Building public-facing explainers that translate AI risks into everyday impacts—clear, non-technical, and culturally relevant.
- Convening cross-sector partners to create rapid-response toolkits for deepfakes, influence ops, and coordinated manipulation.
- Advancing provenance and transparency norms (labeling, disclosure, audit trails) that work in the real world.
- Supporting bias-aware moderation and safety frameworks that protect vulnerable communities without silencing legitimate speech.
- Helping the marketing and advertising ecosystem adopt ethical AI guidelines and measurable safeguards.
- Partnering with creators and rights holders on consent, licensing, and attribution pathways that are workable at scale.
What Changes Because of This
People regain the ability to trust what they see—and to challenge what harms them.
When communication systems are strengthened:
- Disinformation becomes easier to detect, slower to spread, and harder to profit from.
- Platforms and institutions have clear accountability paths—who owns what, who fixes what, and how users get redress.
- Marginalized communities experience fewer disproportionate harms from moderation and amplification systems.
- Creators gain clearer rights and fairer economic participation in an AI-shaped media economy.
- The public becomes more resilient—less panic, less manipulation, and more informed civic engagement.
Who We Work With
We partner with the people shaping public attention—so safety and trust aren’t optional.
We collaborate with organizations working at the intersection of:
- Platforms and product teams (social, search, messaging, creator tools)
- Journalism, fact-checking, and media integrity organizations
- Civil liberties, privacy, and digital rights groups
- Child safety, anti-harassment, and online trust & safety experts
- Creators, publishers, and IP/rights organizations
- Election integrity and civic resilience partners
- Accessibility and inclusion organizations (low-resource languages, disability access)
Key Issues
- Misinformation, deepfakes, and influence operations (including election contexts); targeted persuasion and dark-pattern nudges
- Bias, discrimination, and exclusion in algorithms/moderation, with disproportionate harms to vulnerable groups
- Privacy and civil liberties erosion and mass surveillance (face/voice ID, cross-platform tracking), with resulting chilling effects on speech, press, protest, and association
- Pressures on the creative economy and IP (training data, provenance clarity)
- Harassment, abuse, and online safety risks (including child safety)
- Access and inclusion gaps (accessibility, low-resource languages)
- Governance and user-rights shortfalls: data rights/consent, transparency/explainability, and effective redress/accountability
Key Objectives
- Collaborate with educators, influencers, and artists to advance ethical AI
- Educate the marketing and advertising ecosystem on frameworks for ethical AI
- Combat disinformation, misinformation, and fear-based narratives
- Provide education on the distinction between implicit and explicit bias in algorithmic systems
