Reality Defender
Comprehensive deepfake scanning with actionable results

In today’s world, “what you see” is not necessarily “what you get.”

Reality Defender is building an enterprise solution to address the growing challenge of deepfakes across multiple industries.

For those not familiar with the term, deepfakes are most often associated with synthetically created media in which a person in an existing image or video is replaced with someone else's likeness. The topic has come to the forefront through malicious, and sometimes entertaining, impersonations of public figures. These early experiments presage a looming misinformation challenge for governments, media companies and global corporations. But deepfakes also affect a wide variety of industries, enabling insurance, financial and identity fraud, and they span media types beyond video, including photos, audio and other communication media. Interestingly, although the headlines around deepfakes usually involve video, one of the most immediately pressing issues is fake audio.

While the act of faking content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content with a high potential to deceive. The main methods used to create deepfakes are based on deep learning, most notably autoencoders and generative adversarial networks (GANs). The widespread availability of affordable and powerful computing, coupled with the democratization of artificial intelligence, has led to the rapid growth of synthetic media, that is, media created or modified using AI. Like any new technology, synthetic media can be used either beneficially or maliciously.
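
To make that concrete, the sketch below shows a toy generative adversarial network in PyTorch, the adversarial training pattern that underlies much synthetic media. Everything here, from the layer sizes to the randomly generated "real" samples, is an illustrative assumption, not a description of any production system.

```python
# Illustrative sketch only: a toy GAN, the kind of deep-learning setup
# commonly behind synthetic media. All dimensions and data are hypothetical.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy sizes

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, data_dim), nn.Tanh())

# Discriminator: scores whether a sample looks real or generated.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, data_dim) * 2 - 1   # stand-in for real media samples
noise = torch.randn(32, latent_dim)

# Discriminator step: learn to separate real samples from generated ones.
fake = G(noise).detach()
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: learn to produce samples the discriminator calls real.
fake = G(torch.randn(32, latent_dim))
loss_g = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Repeated over many iterations on real training media, this adversarial loop is what pushes generated content toward the level of realism described above.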

Reality Defender was initially incubated within AI Foundation. The work was done as a non-partisan, non-commercial public service to help reporters and campaigns uphold truth and ethical standards during the 2020 US presidential campaign. Much of it was done in partnership with academic research organizations and corporations, including Microsoft.

Techniques that generate and manipulate multimedia content can now achieve a remarkable level of realism; the boundary between real and synthetic media has become very thin. On one hand, this opens the door to exciting applications in fields such as creative arts, advertising, film production and video games. On the other hand, it poses serious security threats and undermines confidence in the authenticity of media. Freely available software packages allow individuals, often without special skills, to create highly realistic fake images, audio tracks and videos. These deepfakes can be used to manipulate public opinion during elections, commit fraud, or discredit and blackmail people. There is therefore an urgent need for automated tools capable of detecting false media content.
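
As a rough illustration of what such an automated tool involves at its core, the sketch below frames detection as a binary real-versus-fake classifier over video frames, again in PyTorch. The architecture, input sizes and random stand-in frames are assumptions made for the example; this is not a description of Reality Defender's actual detection pipeline.

```python
# Illustrative sketch only: deepfake detection posed as a binary classifier
# over image frames. Model, sizes and data are hypothetical placeholders.
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    """Scores a batch of RGB frames; higher output = more likely synthetic."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, frames):
        x = self.features(frames).flatten(1)
        return self.head(x)  # raw logits; apply sigmoid for a probability

detector = FrameDetector()
frames = torch.rand(4, 3, 224, 224)       # stand-in for decoded video frames
scores = torch.sigmoid(detector(frames))  # per-frame "likely fake" probability
verdict = scores.mean().item()            # crude video-level aggregate
print(f"estimated probability of manipulation: {verdict:.2f}")
```

In practice, such a classifier would be trained on labeled real and synthetic media, combined with audio and metadata signals, and its per-frame scores aggregated into the kind of actionable report an enterprise product aims to deliver.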

We believe global corporations, institutions and governments will increasingly demand tools to assess such risks. The Reality Defender team is poised to develop and bring to market a compelling solution designed to evolve as the technology in this field advances.