AI Governance Practices
Overview
Synthesia is committed to the responsible, transparent, and secure development of artificial intelligence technologies as part of its Services (herein referred to as “AI Components”). This page outlines Synthesia’s approach to AI governance and compliance, structured around its guiding principles and key areas of impact. Synthesia aligns its practices with international standards, such as ISO/IEC 42001:2023, to help ensure that its AI Components are trustworthy, compliant, and suitable for the needs of its customers and society at large.
These policies and practices may change as the Services and industry evolve, so please check back regularly for updates. Capitalized terms used below but not defined in this policy have the meaning set forth in the governing agreement.
Guiding Principles
Responsible Innovation
Synthesia is guided by a framework of responsible artificial intelligence innovation whereby it adheres to its founding principles of Consent, Control and Collaboration throughout its AI supply chain. This means Synthesia requires that each person consent before their voice or likeness is cloned for an avatar, whether for a Stock Avatar available to all customers or a Custom Avatar (e.g., a Personal Avatar) created by and on behalf of a specific customer. Synthesia’s platform and policies are also integrated with a trust and safety layer that is designed to help prevent the generation and distribution of harmful content. This includes a prescriptive Acceptable Use Policy that is supported by a dedicated Trust and Safety Team and further operationalized by automated and manual verification and content moderation tools and filters. Lastly, Synthesia partners with regulatory bodies, media organizations, and research institutions to develop best practices and educate its customers and the industry on the responsible use of AI. For example, Synthesia is a launch partner of the Partnership on AI’s (PAI) Responsible Practices for Synthetic Media framework and a founding member of the Content Authenticity Initiative.
Rights of Customers
Synthesia is committed to respecting the rights of organizations and businesses that use its Services by communicating transparently and empowering them with choice. The Services, and Synthesia’s related practices, incorporate measures designed to respect customers’ intellectual property rights, protect their data, and maintain confidentiality. As further described below in the section titled ‘Governing Agreements,’ Customers own the content they generate using the Services and are not responsible for Synthesia’s separate R&D and development decisions. Synthesia handles customer data in accordance with its customers’ written instructions while adhering to rigorous security and confidentiality safeguards and standards.
Rights of Individuals
Synthesia is deeply committed to upholding the rights of individuals and to protecting the public from harmful content and misuse of AI technologies. Synthesia’s Acceptable Use Policy prohibits the use of Services for activities that infringe on individual rights, such as creating defamatory, inciting, abusive, or discriminatory content. Synthesia enforces these restrictions to help ensure that the AI Components are used responsibly, aligning with the broader goal of safeguarding privacy, promoting freedom of expression within ethical bounds, and preventing discrimination. Furthermore, Synthesia’s Content Integrity Policy offers individuals a streamlined process to report copyright and privacy concerns regarding content generated by users.
Key Areas of Impact
Accountability & Explainability
Synthesia and its customers have a shared responsibility to prevent abuse and mitigate harm. To foster clarity and accountability between the parties in this regard, Synthesia integrates a structured approach to defining roles and reinforcing responsibilities along the entire AI supply chain, beginning with R&D and model creation and extending through to customer deployment.
Roles under Regulations
Synthesia’s role and responsibilities, as well as those of its customers, shift based on the stage of the AI supply chain and the applicable legal framework. When creating and pre-training AI Components, Synthesia serves as the “controller” under privacy frameworks like GDPR and the UK GDPR, and when making these components available to customers as part of the Services, it serves as a “provider” under AI frameworks like the EU AI Act. However, once customers choose to use the Services, they take on the role of “deployer” under the EU AI Act, and under privacy law, they assume the responsibilities of the controller (as they determine the purposes and means of processing), while Synthesia transitions to the role of “processor” (as it processes the data on behalf of the customer). To the best of Synthesia’s knowledge, its AI Components, when used as intended and in accordance with the Acceptable Use Policy, are not classified as High-Risk AI Systems under the EU AI Act.
Customer Choice
The Services are intended for use by businesses of all sizes, across diverse industries and geographies. As a processor and provider of the Services, Synthesia offers features and controls to help customers meet their unique compliance obligations; however, it is ultimately the customers’ responsibility to deploy and use the Services in accordance with the laws that apply to them. For instance, paying customers can elect to use Synthesia’s script or editor features to embed audio or visual markers in their videos for transparency purposes under the EU AI Act, though all freemium videos already include watermarks by default. Please review the Help Center for more information about other features and controls available to customers, for example, how to provision or deprovision access to videos, enable or disable third-party integrations, and manage permissions, retention and export settings.
Internal Governance & Policies
Synthesia’s internal governance and policies are designed to ensure that roles and responsibilities are defined and enforced for AI Component development and monitoring. The following are examples of roles designed to oversee AI-related decisions and their impacts:
- AI Governance Council: This committee is responsible for reviewing AI-related decisions, particularly those that may have ethical implications, such as the development of new AI features, to ensure they align with Synthesia’s responsible AI standards, further described in the section below titled ‘Responsible AI Development.’
- Data Protection Officer: This role involves overseeing data management practices, including ensuring data quality, provenance, and compliance with data protection regulations.
- Security & Compliance Teams: Synthesia’s formal information security program, as detailed in Synthesia’s SOC 2 Type II audit report (available in the Synthesia Trust Portal), includes clearly defined information security roles, responsibilities, and accountability. These individuals help ensure that the AI Components are secure and comply with relevant regulations.
- Engineering and Research Leads: These individuals are responsible for the design and development of AI Components and follow “Secure Development Lifecycle” processes to ensure they meet technical, ethical, and legal requirements.
- Customer Support Liaison: This role is dedicated to managing customer interactions related to AI Components, ensuring transparency and providing clear explanations.
Governing Agreements
Synthesia’s governing agreements with its customers are designed to promote accountability by allocating the rights, responsibilities and remedies of a party based upon their relative ability to exercise control and influence over the means and purposes of processing. For example, Synthesia offers an IP indemnification to its customers for the content and technology Synthesia provides because, as between the parties, Synthesia unilaterally makes its own R&D decisions, is more incentivized to protect that technology, and is in a better position to avoid infringement claims and navigate defenses. Similarly, Synthesia believes it’s appropriate for a customer to indemnify it for the content that they generate because Synthesia is a passive service provider – the customer has agency to write its own scripts, to direct avatars to speak and perform, to generate final videos and to distribute and post them.
Explainability
Synthesia is committed to making its AI Components understandable and explainable to customers and their users. The inherent transparency of the Services, where AI outputs are directly tied to customer-provided inputs, ensures that discrepancies between expected and actual outputs are immediately apparent. This clear linkage between input (scripts, voice, likeness) and output (avatar performance) provides a straightforward explanation, allowing customers not only to trust and effectively interact with its systems, but also to anticipate harmful scenarios and mitigate abuse.
Fairness & Transparency
Synthesia’s AI Components are designed and operated to help ensure equitable outcomes for its customers and their users. Synthesia implements bias mitigation strategies at multiple stages of the AI lifecycle, from performance data collection to model development, that are designed to avoid discrimination based on race, gender, age, and other protected characteristics. Regular audits and bias assessments are conducted to monitor and correct emerging biases.
Inclusive Sourcing
Synthesia prioritizes inclusive sourcing, production and design principles to ensure that its AI Components are clear, accessible, customizable and beneficial to a diverse range of customers, across a variety of industries and geographies. This includes the careful selection of actors, models and performances for the creation of stock avatars, the implementation of fairness constraints during model training, and the ongoing evaluation of AI Component outputs to ensure they do not stereotype or disadvantage any group.
Transparency-by-Design
Transparency is inherent to the Services given their nature. Once customers script and prompt a performance, they can generate a video and assess the outputs (e.g., the avatars performed the scenes and delivered the content in a photorealistic and engaging manner, as intended). Transparency is also fundamental to Synthesia’s AI practices. In its governing agreements, as supplemented by this page, Synthesia provides its customers with information about its data processing practices, the measures it takes to help ensure fairness and accountability (e.g., enforcement of the Acceptable Use Policy via its Content Moderation Guidelines), the responsible AI considerations integrated into the AI development lifecycle, and its commitment to explainable AI. Further, Synthesia operates Research & Development initiatives whereby it collaborates with academic institutions and consortia to publish research and white papers and to open source certain information.
Training, Data and Provenance
Synthesia specializes in developing AI Components that generate photorealistic performances of avatars, tailored for use within its enterprise video creation tools. Synthesia does not develop or offer standalone or general-purpose AI models. Accordingly, Synthesia’s approach to AI development is focused on identifying and understanding the physical and human elements of a performance, so that it can learn how their sequences, correlations and interactions come together to make movement, expression and emotion seem natural and engaging. To infer the subtleties that make avatar performances appear realistic and compelling, Synthesia pre-trains its AI Components using performances by paid actors and models that Synthesia commissions, publicly available performances, and performances that Synthesia licenses.
No Customer Data
Synthesia does not use Customer Data, including inputs or outputs, to pre-train its AI Components. Any processing of Customer Data that requires, or results in, the fine-tuning of AI Components is subject to the written instruction of the customer, in accordance with their governing agreement.
Documented Practices
Synthesia establishes and maintains a documented process for tracking the origin and history of performance data used to develop its AI Components; however, the specifics of the datasets and their preparation processes are proprietary and confidential, and are only disclosed in certain cases, such as where Synthesia is required by law to do so. Synthesia does disclose and publish certain datasets it develops through academic partnerships or open-source initiatives, such as the ActorsHQ dataset.
Security and Privacy
Synthesia employs appropriate data protection and privacy measures designed to ensure that its AI Components are developed in a privacy-centric and compliant manner, and that their use by customers, including any processing of Customer Data, is safeguarded and secure.
Synthesia adheres to the principles of data minimization, encryption, and secure storage. Security by design and by default is integral to Synthesia’s AI development process. Synthesia implements industry-leading security practices, including secure coding, regular penetration testing, and continuous vulnerability assessments. Synthesia’s AI Components are designed to be resilient to both accidental failures and malicious attacks, in order to help protect the integrity and confidentiality of AI-generated outputs.
Synthesia is SOC 2 Type II compliant; more information about Synthesia’s security practices can be found in its Security Practices Page.
Safety and Health
Synthesia recognizes the potential impact of AI Components on the physical and mental well-being of individuals. Accordingly, Synthesia designs its AI Components with safety as a primary concern, implementing safeguards to prevent harm, for example, by requiring that all customers and users abide by the Acceptable Use Policy, and moderating content at the point of generation, using algorithmic and manual means.
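As an illustration only (this is not Synthesia’s actual implementation), a point-of-generation moderation gate can be sketched as a filter that screens a script before any video is rendered, blocking clear violations and escalating borderline topics to human review. The `BLOCKED_TERMS`, `REVIEW_TERMS`, and `ModerationResult` names below are hypothetical placeholders; a production system would rely on trained classifiers and policy-specific rules rather than a static keyword list:

```python
from dataclasses import dataclass, field

# Hypothetical example lists; real moderation uses trained classifiers,
# not static keywords.
BLOCKED_TERMS = {"fraud scheme", "incite violence"}
REVIEW_TERMS = {"election", "medical advice"}

@dataclass
class ModerationResult:
    allowed: bool               # may the video be generated?
    needs_human_review: bool    # escalate to a trust and safety team?
    reasons: list = field(default_factory=list)

def moderate_script(script: str) -> ModerationResult:
    """Screen a script at the point of generation: reject clear
    violations, flag borderline topics for manual review, allow the rest."""
    text = script.lower()
    blocked = [t for t in BLOCKED_TERMS if t in text]
    flagged = [t for t in REVIEW_TERMS if t in text]
    if blocked:
        return ModerationResult(allowed=False, needs_human_review=False, reasons=blocked)
    if flagged:
        return ModerationResult(allowed=True, needs_human_review=True, reasons=flagged)
    return ModerationResult(allowed=True, needs_human_review=False)
```

In this sketch a blocked script is rejected outright, while a flagged one may be generated but is queued for manual review, mirroring the combination of algorithmic and manual moderation described above.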
Red Teaming
Synthesia regularly engages independent experts to test the robustness of its trust and safety systems. The findings of practices such as red teaming content moderation are then used to further improve its safeguards.
Ethical Impact Assessments
Synthesia’s AI development process includes comprehensive ethical impact assessments by its internal AI Governance Council that are conducted prior to launching any new AI feature or update. These assessments are designed to evaluate potential psychological and emotional impacts on users and the broader public. For example, Synthesia requires that avatars that replicate a person’s voice or likeness are used in a manner that respects the dignity and autonomy of the individuals involved.
Financial and Economic Impact
Synthesia assesses the economic impact of its AI Components on individuals and groups, and implements moderation processes designed to prevent financial harm by way of impersonation, fraud or misinformation campaigns. Synthesia continuously evaluates potential scenarios where AI outputs could influence financial decisions and iteratively improves its moderation practices and filters to stay ahead of emerging trends, tactics and potential for abuse.
Accessibility
Synthesia is committed to making its AI Components and Services accessible, regardless of ability. This includes support for assistive technologies, adherence to accessibility standards, and continuous improvement based on customer and user feedback. To date, Synthesia incorporates a range of accessibility features into its platform. For example, Synthesia offers customizable contrast ratios, and text-to-speech and text-to-video options to accommodate users with certain impairments. Further, auto-generated subtitles can be included in videos by default or added when a user downloads a video for distribution.
Ongoing Improvements & Audits
Synthesia is committed to the continuous monitoring and improvement of its AI Components. It regularly monitors AI performance, incorporates customer feedback, and adapts to new developments in AI technology and regulation.
Further, Synthesia has a robust audit system in place designed to continuously monitor its AI Components for compliance with the principles and tenets described in this document. Auditing is performed by internal parties as well as respected and accredited external firms. Synthesia undergoes a periodic ISO/IEC 42001:2023 audit and publishes its certificate on the Synthesia Trust Portal, when available. Further, Synthesia has performed an artificial intelligence impact assessment regarding its AI Components, which customers may access at https://security.synthesia.io/documents.
Need more help?
If you need assistance with Synthesia or have a question about our products or services, please contact our customer support team.