Conversational AI Policies on Safe Use, Accountability, and Moderation
AI audio helps overcome language and communication barriers, paving the way for a more connected, creative, and productive world. It can also attract bad actors. Our mission is to build and deploy the best audio AI products while continuously improving safeguards to prevent their misuse.
“AI safety is inseparable from innovation at Blue Marloc. Ensuring our systems are developed, deployed, and used safely remains at the core of our strategy.”
Our mission is to make content accessible in any language and in any voice.
We rely on a trusted AI audio provider that serves millions of users around the world, as well as leading publishing and media companies.
Safety in practice
We are guided by three principles to manage risk while ensuring AI audio benefits people worldwide: moderation, accountability, and provenance.
Moderation
We actively monitor content generated with our technology.
Automated moderation. Our automated systems scan generated content for violations of our policies, blocking clear violations outright or flagging borderline cases for review (a minimal sketch of this kind of pipeline follows this list).
Human moderation. A growing team of moderators reviews flagged content and helps us ensure that our policies are applied consistently.
No-go voices. While our policies prohibit impersonations, we use an additional safety tool to detect and prevent the creation of content with voices deemed especially high-risk.
voiceCAPTCHA. Our proprietary voice verification technology minimizes unauthorized use of voice cloning tools by ensuring that users of our high-fidelity voice cloning tool can clone only their own voice.
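To make the automated moderation item above concrete, here is a minimal, purely illustrative sketch of a block-or-flag pipeline. Every name, function, and threshold below is hypothetical and does not describe Blue Marloc's actual implementation, which would use trained models over audio and transcript features rather than a keyword check.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    FLAG = "flag"    # queued for human review
    BLOCK = "block"  # rejected outright


@dataclass
class ModerationResult:
    decision: Decision
    score: float


# Hypothetical thresholds; a real system would tune these per policy category.
FLAG_THRESHOLD = 0.5
BLOCK_THRESHOLD = 0.9


def policy_violation_score(transcript: str) -> float:
    """Stand-in for a policy classifier; returns a violation score in [0, 1].

    This trivial keyword check is only a placeholder for a trained model.
    """
    banned_phrases = {"example banned phrase"}
    return 1.0 if any(p in transcript.lower() for p in banned_phrases) else 0.0


def moderate(transcript: str) -> ModerationResult:
    """Block clear violations outright; flag borderline content for review."""
    score = policy_violation_score(transcript)
    if score >= BLOCK_THRESHOLD:
        return ModerationResult(Decision.BLOCK, score)
    if score >= FLAG_THRESHOLD:
        return ModerationResult(Decision.FLAG, score)
    return ModerationResult(Decision.ALLOW, score)
```

The key design point is the two-tier threshold: automation removes unambiguous violations immediately, while uncertain cases are routed to the human moderation team described above.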
Accountability
We believe misuse must have consequences.
Traceability. When a bad actor misuses our tools, we want to know who they are. Our systems let us trace generated content back to the originating account, and our voice cloning tools are available only to users who have verified their accounts with billing details (a rough sketch of this kind of record-keeping follows this list).
Bans. We want bad actors to know that they have no place on our platform. We permanently ban users who violate our policies.
Partnering with law enforcement. We will cooperate with the authorities and, in appropriate cases, report or disclose information about illegal content or activity.
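As a rough illustration of the traceability item above, a generation service can persist a record keyed by a content fingerprint so that an audio file can later be mapped back to the account that produced it. This sketch is an assumption for illustration only; all names are hypothetical, and a production system would use a robust perceptual audio hash rather than a plain SHA-256 of the bytes.

```python
import hashlib
import sqlite3
import time


def fingerprint(audio_bytes: bytes) -> str:
    """Content fingerprint; a real system would use a robust audio hash."""
    return hashlib.sha256(audio_bytes).hexdigest()


def record_generation(db: sqlite3.Connection, audio_bytes: bytes, account_id: str) -> None:
    """Store a generation record at the moment the audio is created."""
    db.execute(
        "INSERT INTO generations (fingerprint, account_id, created_at) VALUES (?, ?, ?)",
        (fingerprint(audio_bytes), account_id, int(time.time())),
    )


def trace(db: sqlite3.Connection, audio_bytes: bytes) -> str | None:
    """Map an audio file back to the originating account, if we generated it."""
    row = db.execute(
        "SELECT account_id FROM generations WHERE fingerprint = ?",
        (fingerprint(audio_bytes),),
    ).fetchone()
    return row[0] if row else None


# Usage sketch with an in-memory database:
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE generations (fingerprint TEXT PRIMARY KEY, account_id TEXT, created_at INTEGER)"
)
audio = b"...generated audio bytes..."
record_generation(db, audio, account_id="acct_123")
print(trace(db, audio))  # -> "acct_123"
```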
Provenance
We believe that you should know if audio is AI-generated.
AI Speech Classifier. Our vendor maintains a highly accurate detection tool that lets anyone check whether an audio file could have been generated with our technology; on unmodified samples it achieves 99% precision and 80% recall (a short worked example of these metrics follows this list).
AI Detection Standards. We believe that downstream AI detection tools, such as metadata, watermarking, and fingerprinting solutions, are essential. We support the widespread adoption of industry standards for provenance through the C2PA.
Collaboration. We invite fellow AI companies, academia, and policymakers to work together on developing industry-wide methods for AI content detection. Our vendor is part of the Content Authenticity Initiative and partners with content distributors and civil society to establish AI content transparency. It also supports governmental efforts on AI safety and is a member of the U.S. National Institute of Standards and Technology's (NIST) AI Safety Institute Consortium.
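For readers unfamiliar with the metrics cited for the AI Speech Classifier: precision is the share of positive detections that are correct, and recall is the share of AI-generated samples that the tool actually catches. The sketch below shows how such figures are computed from a labeled evaluation set; the confusion counts are invented purely to reproduce the quoted numbers and are not real evaluation data.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)


# Illustrative counts for 1,000 AI-generated samples: the classifier
# detects 800 of them (true positives), misses 200 (false negatives),
# and wrongly flags 8 genuine recordings (false positives).
p, r = precision_recall(tp=800, fp=8, fn=200)
print(f"precision={p:.2%}, recall={r:.2%}")  # precision=99.01%, recall=80.00%
```

In plain terms, at these rates roughly 1 in 100 files the tool flags as AI-generated is a false alarm, and it catches about 4 in 5 unmodified files that our technology produced.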