We recognize that generating speech that resembles people's voices carries serious risks, which are particularly acute during elections. We are collaborating with U.S. and international partners across government, media, entertainment, education, civil society and beyond to ensure we incorporate their feedback as we build.
Partners testing Voice Engine today have agreed to our usage policies, which prohibit the impersonation of another person or organization without consent or legal right. In addition, our terms with these partners require explicit, informed consent from the original speaker, and we do not allow developers to build ways for individual users to create their own voices. Partners must also clearly disclose to their audiences that the voices they hear are AI-generated. Finally, we have implemented a set of safety measures, including watermarking to trace the origin of any audio generated by Voice Engine, as well as proactive monitoring of how it is being used.
We believe that any broad deployment of synthetic voice technology should be accompanied by voice authentication experiences that verify the original speaker is knowingly adding their voice to the service, and by a banned-voice list that detects and prevents the creation of voices too similar to those of prominent figures.