Today, the Partnership on AI (PAI), a nonprofit community of academic, civil society, industry, and media organizations focused on the future of AI, released the first batch of in-depth case studies based on PAI's Synthetic Media Framework, launched one year ago. The case studies show how risks related to synthetic media can be mitigated effectively.
You can read Synthesia’s case study on PAI’s website and watch the video below (generated with Synthesia) for a summary of how our responsible AI principles connect with the Responsible Practices for Synthetic Media framework.
I encourage you to read the entire collection of case studies from the BBC, OpenAI, and TikTok to learn about the unique challenges these organizations have faced in their respective industries and the strategies they've implemented, based on PAI's guidance, to ensure transparency and digital dignity.
The Responsible Practices for Synthetic Media is a framework for collective action, backed by ten launch partners: Adobe, BBC, Bumble, CBC/Radio-Canada, D-ID, OpenAI, Respeecher, Synthesia, TikTok, and WITNESS. Given the increasing accessibility of tools for creating AI-generated audio and video content, PAI's framework represents a significant step forward in promoting transparency, accountability, and responsible innovation in generative AI.
In particular, the case studies show how we can work together to build trust, protect against misuse, and ensure that these technologies are leveraged for positive societal impact. Synthesia is committed to upholding the highest standards in the development and deployment of generative AI video technologies.
By being a launch member of this Framework, we reaffirm our dedication to promoting the responsible development of synthetic media and contributing to a safer and more trustworthy digital landscape.