At Synthesia, we want to empower creators, businesses, and individuals to harness the power of AI and video in ways that are meaningful and responsible. As AI-generated content continues to become an important part of how we communicate online, we believe it is essential to build systems that promote transparency, accountability, and integrity.
Today, we are announcing an update to our platform’s policies regarding the creation of political content in line with our previous efforts to protect elections and the democratic process. Effective immediately, only users with an enterprise account and a custom AI avatar will be allowed to create and distribute political content on Synthesia. This policy shift is designed to ensure that political content is created with the highest degree of transparency and accountability, reinforcing our commitment to a trusted and reliable information ecosystem.
Why we’re making this change
The world of political communication has become increasingly complex, with more political campaigns and organizations taking advantage of AI to create content. As AI-generated content becomes more prevalent, we have a responsibility to establish policies that both protect the platform and foster trust among our users and the broader public.
Here are the key reasons behind this policy update:
- Promoting transparency in political communication: Political messaging can have a profound impact on public opinion. To ensure that the source of such messaging is clear and accountable, we are requiring that any political content generated on our platform come from enterprise accounts. Enterprise users undergo a rigorous verification process, and the creation of custom avatars helps reinforce the transparency of political communications by clearly identifying the organization or individual behind the message.
- Safeguarding against deceptive content and misuse: Our platform is built on the principle that AI can be a force for good. By limiting political content creation to enterprise users, we add an additional layer of accountability and oversight, reducing the risk of anonymous actors using our technology to spread misinformation or manipulate public discourse.
- Ensuring the integrity of our platform: We believe that protecting the integrity of our platform is critical to maintaining a trusted space for users to create and share content. This new policy aligns with our broader strategy to safeguard against malicious actors and to create an environment where AI-generated content enhances—not erodes—public trust. By working closely with our enterprise partners, we can ensure that political messaging on our platform meets high standards of transparency and ethical responsibility.
How the new policy works
Synthesia users interested in creating political content must have an enterprise account, which includes a verification process that confirms the identity of the organization or individual. Once a user is onboarded onto our platform, there are two further requirements they must meet if they wish to create political content:
- Using a custom AI avatar: Political content can only be created using custom AI avatars (Studio or Personal Avatars), which are built from the voice and likeness of an enterprise user, with their consent, and which further reinforce the accountability of the content creator.
- Following our content policies: As part of our commitment to maintaining a safe and trusted platform, all political content will be subject to review to ensure compliance with our content policies. The chart below provides guidance on which kinds of political content are restricted and which are prohibited.
A step toward a more responsible digital future
This policy change is part of our ongoing effort to build a platform where AI is used responsibly and in ways that serve the public good. While we are excited about the creative potential AI offers, we are equally focused on ensuring that its use supports healthy, transparent, and constructive discourse—especially in the political realm.
We understand that this may be a significant shift for some of our users, but we believe it is a necessary one to ensure the long-term trust and integrity of our platform. We will continue to engage with our community and gather feedback to ensure that our policies reflect the evolving needs of both our users and the broader digital ecosystem. For example, later this month we'll be participating in a red-team test of our content moderation policies and systems organized by NIST in partnership with Humane Intelligence, a non-profit led by Dr. Rumman Chowdhury.
Thank you for being part of our community as we take this important step toward greater transparency and responsibility in AI-powered content creation.