No Silver Bullet for Deep Fakes
The IFIP IP3 Global Industry Council (GIC) serves as the principal forum for employers and educators to engage with IP3 and shape the global ICT profession. Each month, the GIC features relevant and insightful ideas in IFIP Insights. This month, GIC director Robin Raskin shares her recent WIPO talk on Deep Fakes, in which she discusses the scope of the problem, whether it’s fake news, false images, election tampering or harassment, and looks at potential solutions.
I recently delivered a talk on Deep Fakes as part of the World Intellectual Property Organization’s (WIPO) cybersecurity month. I gave a similar talk at the ITU AI for Good Summit this past summer. We all know the scope of the problem, whether it’s fake news, false images, election tampering or harassment. The talk looked at potential solutions.
The upshot? Don’t expect a silver bullet that determines whether an image, video or other media was generated by an AI program or is “real”. Instead, we’ll likely rely on a multi-pronged strategy combining the following:
Observation and Education
Much as we did with that infamous email from the Nigerian prince who needed help moving his money, we adapt and learn to question. We are learning to anticipate fake images and question the veracity of everything. Many of us have gotten pretty good at spotting the “six-fingered lady”, “the chest that doesn’t breathe” or the “uncannily dead irises.” The problem is that GenAI image programs are learning, faster than we are, how to make more lifelike images without the telltale signs of fakes.
Deepfake Detectors
Deepfake detectors analyze images and video, examining everything from the under-the-skin blood flow visible in a video’s facial imagery to manipulated pixels, to judge whether an image is real. Again, as GenAI technology continues to improve, this will remain a cat-and-mouse game between the detectors and the fakers.
Intel’s FakeCatcher, for example, claims to detect deepfakes in video with a high degree of certainty by analyzing blood-flow signals in facial pixels; the sketch below illustrates the general idea.
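To make the blood-flow idea concrete, here is a minimal sketch of the photoplethysmography (rPPG) signal such detectors look for: real faces show a faint, periodic green-channel fluctuation from the pulse, which many synthetic faces lack. This illustrates the general technique only, not Intel’s actual pipeline; the file name, fixed face region and threshold are assumptions for the example.

```python
# Toy rPPG check: does a video's face region show a pulse-like rhythm?
import cv2
import numpy as np

def green_signal(video_path, roi=(100, 100, 200, 200), max_frames=300):
    """Mean green-channel intensity inside a fixed face region, per frame."""
    cap = cv2.VideoCapture(video_path)
    x, y, w, h = roi
    values = []
    while len(values) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        values.append(frame[y:y + h, x:x + w, 1].mean())  # OpenCV frames are BGR
    cap.release()
    return np.array(values)

def pulse_band_dominates(values, fps=30.0):
    """True if a frequency in the human pulse band (0.7-4 Hz) stands out."""
    values = values - values.mean()
    spectrum = np.abs(np.fft.rfft(values))
    freqs = np.fft.rfftfreq(len(values), d=1.0 / fps)
    in_band = (freqs >= 0.7) & (freqs <= 4.0)
    # Crude heuristic: the strongest pulse-band peak should clearly
    # exceed the typical energy elsewhere in the spectrum.
    return spectrum[in_band].max() > 2.0 * np.median(spectrum[1:])

sig = green_signal("suspect_clip.mp4")  # hypothetical input file
print("pulse-like periodicity:", pulse_band_dominates(sig))
```

A production detector would track the face rather than use a fixed region, and would combine many such cues; this is only the signal-level intuition.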
Watermarking
Digital watermarking, a way of embedding imperceptible digital meta-identifiers, can establish the origin of an image, video, or voice recording. But once an image is manipulated, say converted to a different format or changed in pixel resolution, the watermark is often rendered ineffective, as the toy example below shows.
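Here is a toy least-significant-bit (LSB) watermark that demonstrates the fragility: one routine lossy re-encode recomputes every pixel and the hidden bits vanish. This is a deliberately naive scheme for illustration, not any vendor’s watermarking method.

```python
# Toy LSB watermark: survives a pristine copy, dies on one JPEG re-encode.
import io
import numpy as np
from PIL import Image

def embed_mark(img, bits):
    """Hide bits in the least significant bit of the first row's red channel."""
    arr = np.array(img, dtype=np.uint8)
    arr[0, :len(bits), 0] = (arr[0, :len(bits), 0] & 0xFE) | bits
    return Image.fromarray(arr)

def read_mark(img, n):
    return np.array(img, dtype=np.uint8)[0, :n, 0] & 1

bits = np.random.randint(0, 2, 64, dtype=np.uint8)
marked = embed_mark(Image.new("RGB", (256, 256), "gray"), bits)
print("pristine copy :", np.array_equal(read_mark(marked, 64), bits))   # True

# A single format change (JPEG re-encode) recomputes pixel values,
# destroying the LSBs -- and the watermark with them.
buf = io.BytesIO()
marked.save(buf, format="JPEG", quality=90)
buf.seek(0)
reloaded = Image.open(buf).convert("RGB")
print("after re-encode:", np.array_equal(read_mark(reloaded, 64), bits))  # False
```

More robust schemes embed the mark in the frequency domain so it survives some transformations, but the same cat-and-mouse dynamic applies.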
Collaboration
Countries across the globe are beginning to regulate the use of AI images, including how they should be labelled. Corporations, media outlets and other organizations that value IP and want to avoid liability are also setting internal rules for disclosing how content is generated. And groups like the Coalition for Content Provenance and Authenticity (C2PA) hope the industry can self-regulate by creating a body of technical standards across different types of media; the sketch below shows the core idea those standards build on.
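The provenance idea behind such standards is to bind metadata cryptographically to a hash of the asset, so any edit to the bytes invalidates the claim. Real C2PA manifests are embedded structures signed with X.509 certificates; the shared key and field names below are stand-in assumptions for a self-contained illustration.

```python
# Toy provenance manifest: metadata bound to a content hash and signed.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-a-real-certificate"  # stand-in for PKI signing

def make_manifest(asset_bytes, generator):
    claim = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "generator": generator,          # e.g. which GenAI tool made this
        "actions": ["created"],
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return claim

def verify_manifest(asset_bytes, manifest):
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SIGNING_KEY, payload, "sha256").hexdigest())
    untouched = claim["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
    return sig_ok and untouched

image = b"...image bytes..."
m = make_manifest(image, "Acme GenAI v2")        # hypothetical generator name
print(verify_manifest(image, m))                 # True
print(verify_manifest(image + b"x", m))          # False: content was altered
```

The design choice is the key point: provenance travels with the content, so the question shifts from “does this look fake?” to “can this prove where it came from?”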
Regulation and Legislation
Different geographic regions are working, though not yet in unison, to establish legal frameworks for disclosing the ownership of images and for imposing punitive damages on those who abuse the medium. But the efforts are localized, some by country and some, as in America, state by state. One of the best ways to track this topic is to subscribe to Caro Robson’s blog on AI ethics.
Good Uses of Deep Fakes
Finally, it’s important to note that not all deep fakes are malicious, and that the use of digital replicas is quickly becoming an area of interest for the media and entertainment industries, along with others. One of the best experiments to date is ReidAI, a digital twin of Reid Hoffman, a co-founder of LinkedIn and venture capitalist at Greylock Partners. ReidAI has been trained on everything Reid has ever written and is now capable of holding a full conversation with the “Real Reid” or appearing on its own at speaking engagements. Ultimately, we will see systems where ownership of a person’s work is traceable and where compensation for these sorts of works is commonplace.
What’s next: built-in deep fake detection
In my talks, I liken deep fake detection to cybersecurity and fraud detection: we used to buy lots of antivirus software, and consumers had to fend for themselves. Today, though, protection is largely built in, from biometrics on your mobile device to your service provider’s cloud monitoring, so you feel pretty safe.
I think that’s where we’ll wind up with digital fakes: they’ll be detected by a combination of smart AIs and search engines built into our devices, though it may take a while and there may be some fallout in the interim.