We've reached more than 4,000,000 young people with free, evidence-based digital mental health interventions, delivered on the platforms they already use.
Rooted in ethical commitments, third-party review, and a readiness to adapt, Koko identifies at-risk users and redirects them to free helplines, peer support, and self-help courses.
We work directly with online platforms to provide insights, custom tools, and technology that protect their users. Our projects and partnerships vary in scope and complexity. If your company is interested in tools, technology, and consulting to help keep young people safe online, we'd love to connect.
We build AI tools to identify harmful mental health content. We use LLMs and keyword libraries, categorized and evaluated by psychologists. Our API is developer-friendly and privacy-focused: no data is passed to our platform to make classifications. Our models are updated regularly, reviewed by psychologists, and based on our team's experience developing AI systems and content filtering for Airbnb, Pinterest, and Twitch.
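As a rough illustration of what this kind of privacy-focused integration can look like, the Python sketch below matches text against a locally stored keyword library, so user content never has to leave the platform. The category names, phrases, and classify function are illustrative placeholders under assumed conventions, not Koko's actual API or keyword library.

import re

# Illustrative sketch only: these categories and phrases are placeholders,
# not Koko's keyword library. Matching runs locally on the platform's own
# infrastructure, so no user content is sent to an external service.
KEYWORD_LIBRARY = {
    "self_harm": [r"\bhurt myself\b", r"\bwant to disappear\b"],
    "disordered_eating": [r"\bthinspo\b", r"\bstopped eating\b"],
}

def classify(text):
    """Return the risk categories whose patterns match the given text."""
    lowered = text.lower()
    return [
        category
        for category, patterns in KEYWORD_LIBRARY.items()
        if any(re.search(pattern, lowered) for pattern in patterns)
    ]

print(classify("lately i just want to disappear"))  # ['self_harm']

In a real deployment, a flagged message would then trigger a referral flow rather than a print statement; the point of the sketch is only the local, send-nothing-out classification step.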
Contact us to get started today.
Users in distress need help immediately. We can help you refer them to over 300 helplines, peer support, or free self-guided courses. Our referral flows are easy to install and have been evaluated in peer-reviewed research with Harvard and Stony Brook University.
Hundreds of online platforms provide one-click access to our services. Our free interventions are co-designed with top-tier universities, such as Harvard, Stanford, and Northwestern, and are built with feedback from our users. They target depression, anxiety, self-harm, and disordered eating. Our research guides how we design interventions that positively impact young people.
Online communities need mental health support. Over 20,000 Discord servers use Koko Bot, which offers risk detection as well as a safe place for users to vent and access resources. You can install Koko on your Discord server here.
Our team has adapted interventions for a variety of languages, such as Chinese and Hindi, ensuring mental health support reaches those in need, no matter the language.
"Koko has been the leader for over a decade in helping trust and safety teams detect harmful mental health content and build strategies that ensure users get the right support."
— Tom Siegel, Co-Founder and CEO, Trust & Safety Laboratory Inc
Founded Google’s first Trust & Safety team
Koko’s proprietary algorithm detects high-risk search terms and phrases on various platforms and online communities.
Koko outperforms traditional filters, helping you capture 20-40% more dangerous content while continuously adapting to new trends and slang.
We provide diverse support options for at-risk users. Developed in collaboration with Stanford, Harvard, and other institutions, all our services are evidence-based and freely accessible.
Let us create a safety solution tailored to your platform's needs.
Koko is a registered 501(c)(3) nonprofit | EIN: 85-3604591
© 2020 kokocares.org. All rights reserved. | Based in San Francisco, CA