Preparing for the global elections cycle of 2024 and beyond

Written by Alexandru Voica
Published on December 2, 2024

Elections are the cornerstone of democracy and we all have a role to play in safeguarding public trust and election integrity. With the 2024 election cycle in full swing, we are more committed than ever to fostering the responsible use of AI.

Synthesia is the leading enterprise AI video creation platform, trusted by more than 55,000 businesses of all sizes, across diverse industries and geographies. Since its inception in 2017, Synthesia has demonstrated its commitment to the responsible innovation of artificial intelligence by adhering to its guiding principles of consent, control and collaboration.

Watch the video below (generated with Synthesia) and read on to learn how our responsible AI principles are applied in practice, along with the other concrete steps we are taking to prepare for elections in 2024.


Consent, control and collaboration in practice

The world is seeing a rise in AI-propagated misinformation and disinformation. Because our video platform can generate avatars that resemble the voices and likenesses of real people, it’s important for us to channel our responsible AI efforts in ways that can make a difference:

Consent. Synthesia offers stock and custom avatars, neither of which can be created without the explicit consent of the individual whose likeness is used to generate the avatar.

Stock avatars are created using footage of real actors who have consented to the process and have been compensated for their contributions. We have explicit rules in place around what these avatars can and cannot be used for, including a ban on the creation of content that interferes with the integrity of the democratic process or that reports on current events.

Custom avatars are created following a thorough Know Your Customer-style procedure so that custom avatar owners, once verified, have greater flexibility to decide who uses their avatar and for what purposes. However, custom avatars remain subject to comprehensive content policies. For example, custom avatar owners can make politically related educational content based on facts, or use their avatars to express political opinions, as long as they do not violate our policies.
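
To make the consent-first rule concrete, here is a minimal sketch in Python of what gating avatar creation on a verified consent record could look like. The record fields, verification flag and function names are illustrative assumptions, not Synthesia's actual data model or process.

```python
# Sketch: requiring an explicit, verified consent record before an avatar
# can be created (illustrative; field names are assumptions, not
# Synthesia's actual data model).

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    subject_id: str            # the person whose likeness is used
    signed_at: datetime        # when consent was given
    identity_verified: bool    # e.g. via a KYC-style check
    scope: str                 # "stock" or "custom"

def create_avatar(footage_path: str, consent: ConsentRecord | None) -> str:
    """Refuse to build an avatar without explicit, verified consent."""
    if consent is None or not consent.identity_verified:
        raise PermissionError("explicit verified consent is required")
    return f"<avatar built from {footage_path} under {consent.scope} scope>"

if __name__ == "__main__":
    rec = ConsentRecord("actor-123", datetime.now(timezone.utc), True, "stock")
    print(create_avatar("studio_footage.mp4", rec))
```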

Control. AI is a powerful technology. Our strategy has been to control the output of our models by implementing content moderation at the point of creation. We are pioneers of this approach, which is new to the creative software industry. Historically, online content could be created without restriction and was moderated only once it was shared on a distribution service such as a social media platform, which would then analyze the content and decide whether to remove it. By that point, the content could already have received widespread distribution, making the process largely reactive and often ineffective.

By moderating content at the point of creation (content that does not meet our standards is simply never produced), we can better ensure that harmful content is not generated and that our policies are adhered to, making our approach proactive.

Our content policies include specific measures designed to safeguard elections: the restriction or prohibition of certain types of political content, a complete ban on the creation of news content for non-corporate customers, and robust measures to prevent other types of misinformation (health, financial, legal, and more).
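
To illustrate what moderation at the point of creation means in practice, here is a minimal sketch in Python of a generation gate: a policy check runs before any video is rendered, so a script that violates policy is rejected rather than produced and later taken down. The policy categories, keyword scan and function names are illustrative assumptions, not Synthesia's actual pipeline.

```python
# Minimal sketch of point-of-creation moderation (illustrative only; not
# Synthesia's actual pipeline). The policy check runs BEFORE rendering,
# so a violating script is never turned into a video.

from dataclasses import dataclass

# Hypothetical policy categories, loosely mirroring the policies
# described above.
BLOCKED_CATEGORIES = {
    "election_interference",
    "news_reporting_non_corporate",
    "health_misinformation",
    "financial_misinformation",
}

@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""

def classify(script: str) -> set[str]:
    """Stand-in for a real classifier (ML models plus human review).
    Here, a trivial keyword scan for illustration only."""
    flags: set[str] = set()
    if "breaking news" in script.lower():
        flags.add("news_reporting_non_corporate")
    return flags

def moderate(script: str) -> ModerationResult:
    hits = classify(script) & BLOCKED_CATEGORIES
    if hits:
        return ModerationResult(False, f"blocked categories: {sorted(hits)}")
    return ModerationResult(True)

def render_video(script: str) -> str:
    result = moderate(script)  # the gate sits before generation, not after
    if not result.allowed:
        raise PermissionError(result.reason)
    return f"<rendered video for a {len(script)}-character script>"

if __name__ == "__main__":
    print(render_video("Welcome to today's compliance training."))  # allowed
```

The design point the sketch captures is that the rendering step cannot be reached without passing the moderation gate, which is what makes the approach proactive rather than reactive.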

Collaboration. We understand that safeguarding elections requires collaboration with external stakeholders such as industry peers, civic leaders, the media and academia. By working together, we can be more effective at identifying potential risks and combating threats.

That’s why we became founding partners of the Partnership on AI’s program on Responsible Practices for Synthetic Media, the first industry-wide framework for the ethical and responsible development, creation, and sharing of synthetic media. 

We are also members of the Content Authenticity Initiative, a community of key stakeholders, including civil society organizations and academics, that promotes an open industry standard for content authenticity.

Additional concrete and actionable steps

Finally, we are taking additional concrete steps to help protect elections:

Resources and personnel. Firstly, we are growing our industry-leading team of trust and safety experts who are responsible for building out technology solutions, developing and updating policies, and doing the difficult but necessary content moderation work that helps to ensure our platform is not being used to manipulate election outcomes. This team already represents 10% of the company’s headcount and is composed of individuals with extensive experience in the fields of internet safety and integrity.

Content credentials and watermarking. Next, we were one of the first generative AI companies to support the Coalition for Content Provenance and Authenticity (C2PA) content credentials for AI videos, and we have begun experimenting with the technology on our platform, with encouraging early results. The video below was generated with Synthesia and includes C2PA content credentials. To analyze and display the authenticity of those credentials, we are using a tool developed by the Content Authenticity Initiative.
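
For readers who want to inspect credentials themselves, here is a hedged sketch in Python that shells out to c2patool, the open-source C2PA command-line tool, and reads the manifest report it returns. The file name is a placeholder, and the JSON field names can vary between tool versions, so treat them as assumptions.

```python
# Sketch: inspecting C2PA content credentials on a video file using the
# open-source c2patool CLI. Assumes c2patool is installed and on PATH;
# the report's JSON field names may differ across tool versions, so
# treat them as illustrative.

import json
import subprocess

def read_manifest(path: str) -> dict:
    """Run c2patool on a file and parse its JSON manifest report."""
    proc = subprocess.run(
        ["c2patool", path],
        capture_output=True,
        text=True,
        check=True,  # raises if the tool fails or finds no credentials
    )
    return json.loads(proc.stdout)

if __name__ == "__main__":
    report = read_manifest("synthesia_video.mp4")  # placeholder file name
    # Print the claim generator (the tool that signed the content), if present.
    active = report.get("active_manifest")
    manifests = report.get("manifests", {})
    if active and active in manifests:
        print("Signed by:", manifests[active].get("claim_generator", "<unknown>"))
    else:
        print("No active C2PA manifest found")
```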

We are encouraging the rest of the industry to adopt C2PA so we can build a complete ecosystem for video creation and distribution, including more video players (such as this open source one) that can authenticate C2PA content credentials and, further down the line, invisible watermarks. Additionally, for videos created on our Free tier, we apply a watermark that is visible to everyone. While there is no perfect solution for preventing the proliferation of misleading content, watermarking contributes to broader AI transparency and can help researchers identify whether a video has been AI-generated or AI-modified.
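
Visible watermarking of the kind we apply on the Free tier can be illustrated with standard tooling. The sketch below drives ffmpeg's overlay filter from Python to stamp a PNG badge onto every frame; it is a generic illustration with placeholder file names, not Synthesia's actual watermarking system.

```python
# Sketch: burning a visible watermark into a video with ffmpeg's overlay
# filter. Generic illustration with placeholder file names, not
# Synthesia's actual watermarking. Requires ffmpeg on PATH.

import subprocess

def add_visible_watermark(src: str, badge: str, dst: str) -> None:
    """Overlay `badge` (a PNG) in the bottom-right corner of `src`,
    10 px from each edge, writing the re-encoded result to `dst`."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src,
            "-i", badge,
            # main_w/main_h: video dimensions; overlay_w/overlay_h: badge size.
            "-filter_complex",
            "overlay=main_w-overlay_w-10:main_h-overlay_h-10",
            "-c:a", "copy",  # keep the audio stream untouched
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    add_visible_watermark("input.mp4", "ai_badge.png", "watermarked.mp4")
```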

Standards and frameworks. We are monitoring emerging compliance standards and frameworks and evaluating how Synthesia can be an early adopter in order to enhance the trust and credibility of our platform. For example, we plan to adopt ISO/IEC 42001, the international standard designed to guide the responsible development and use of AI systems.

Early detection. While we have invested heavily in detecting abuse at the point of content generation, we are also committed to catching abuse earlier and preventing bad actors from using our platform in the first place. To do this, we are investing in more sophisticated, AI-powered detection systems for harmful content. We will also implement further checks and screens at the account level for our Free, Starter and Creator tiers, giving us more visibility into the identity and intentions of users seeking access to our platform.
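
As a rough illustration of what account-level screening can look like, the sketch below scores a signup against a few simple risk signals before granting access to video generation. The signals, thresholds and routing are invented for illustration and are not Synthesia's actual checks.

```python
# Sketch of account-level risk screening at signup (illustrative only;
# the signals and thresholds are invented, not Synthesia's actual checks).

from dataclasses import dataclass

@dataclass
class Signup:
    email_domain: str
    email_verified: bool
    payment_verified: bool
    prior_policy_strikes: int

def risk_score(s: Signup) -> int:
    """Accumulate simple risk points; higher means riskier."""
    score = 0
    if not s.email_verified:
        score += 2
    if not s.payment_verified:
        score += 1
    if s.email_domain in {"example-disposable.com"}:  # placeholder denylist
        score += 3
    score += 2 * s.prior_policy_strikes
    return score

def screen(s: Signup) -> str:
    """Route a signup: allow, send to manual review, or block."""
    score = risk_score(s)
    if score >= 5:
        return "block"
    if score >= 3:
        return "manual_review"
    return "allow"

if __name__ == "__main__":
    print(screen(Signup("gmail.com", True, True, 0)))   # -> allow
    print(screen(Signup("example-disposable.com", False, False, 1)))  # -> block
```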

We are committed to playing our role in ensuring that our platform is not used to spread misinformation. We intend to keep updating this article with the latest Synthesia-related developments, measures and policies, so please check back to learn more about our elections-related trust and safety work.

About the author

Alexandru Voica is the Head of Corporate Affairs and Policy at Synthesia.
