How has counterfeit detection technology evolved?

Outspoken ... posthuman: Deepfake Technology - Identity in Crisis

Have you seen the 2016 film Rogue One: A Star Wars Story? If so, do you remember Grand Moff Tarkin and the young Princess Leia? Both performances are convincing, but neither is "real". The filmmakers used face-swapping and video-synthesis technology to reconstruct the characters' faces and superimpose these artificial doubles onto living actors. Nobody was able to see through the trick. In the meantime, this technology poses challenges for human society that so far appear unsolvable.

Computers are learning to forge better and better

The technology that generates images and sound with the help of artificial intelligence is called “deepfake”, a blend of the terms “deep learning” and “fake”. This magic trick has been widespread in the entertainment industry, as well as in pornography, for years. Now it goes one step further: using a person's facial expressions, taken from images or videos, the artificial intelligence can synthesize a completely new model of that person. It can even build a face model from just a single image, even if the results are not yet very convincing. Deepfakes are developed on open-source platforms, where they are revised and improved daily.
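
To make the idea a bit more concrete, here is a minimal sketch of the autoencoder approach behind classic face-swap deepfakes: a shared encoder learns a common face representation, and a separate decoder per identity reconstructs that person's face from it. The PyTorch framework, layer sizes and tensor shapes are illustrative assumptions, not the pipeline of any particular tool.

```python
# Minimal sketch of the autoencoder idea behind classic face-swap deepfakes:
# one shared encoder learns a common "face code", and one decoder per identity
# learns to reconstruct that person's face from the code. A swap is produced by
# encoding person A's frame and decoding it with person B's decoder.
# All layer sizes are illustrative, not taken from any particular tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a = Decoder()  # would be trained on faces of person A
decoder_b = Decoder()  # would be trained on faces of person B

# After training, the "swap" itself is a single line:
face_of_a = torch.rand(1, 3, 64, 64)        # stand-in for a real video frame
swapped = decoder_b(encoder(face_of_a))     # person A's expression, person B's face
print(swapped.shape)                        # torch.Size([1, 3, 64, 64])
```

The point of the sketch is only to show why so little data is needed once the encoder has been trained: the swap itself is just a change of decoder.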

Fake nude pictures and porn possible

People such as politicians and celebrities, of whom a great deal of image material is publicly available, are among the easiest targets of deepfake technology. Their faces can be used in fake news or fake porn videos. And as more and more people post selfies and short videos of themselves online, the technology can be used maliciously against them too. Such cases are becoming more and more common. The US state of Virginia was recently forced to amend its "revenge porn" law so that it now also covers deepfakes. This is meant to deter vengeful people from using the technology to harm an ex-spouse or former lover. Recently, another app called DeepNude appeared on the Internet; it can be used to create fake nude pictures of women.

Is that presidential candidate really in favor of the death penalty?

Losing one's identity to deepfakes is worrying enough for individuals. Even more worrying, however, is the possibility that deepfakes could be used to produce fake news. Their targeted use for political purposes could shake the foundations of democracy and, with it, the mechanisms by which society reaches consensus. Did that presidential candidate really advocate the death penalty in a private speech? Nobody can prove whether it actually happened.

Because artificial intelligences learn extremely quickly, there is unfortunately still no technology that can reliably expose deepfakes in the long run. As soon as a fake video is recognized as unnatural in a certain respect, for example because the person does not blink often enough, an algorithm is used to improve exactly that aspect.
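
As a toy illustration of this cat-and-mouse game, the blink heuristic mentioned above can be written down in a few lines. The sketch assumes that an eye-aspect-ratio (EAR) value has already been extracted for each frame by a facial-landmark detector; the thresholds are made up for illustration, and newer fakes are trained precisely to pass tests like this one.

```python
# Toy version of the blink heuristic: early deepfakes often blinked too rarely,
# so an implausibly low blink rate was one signal worth checking.
# Assumes per-frame eye-aspect-ratio (EAR) values have already been extracted;
# all thresholds below are illustrative, not validated detection parameters.

def count_blinks(ear_values, closed_threshold=0.2, min_closed_frames=2):
    """Count blinks as runs of frames where the eye aspect ratio drops below a threshold."""
    blinks, closed_run = 0, 0
    for ear in ear_values:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:
        blinks += 1
    return blinks

def looks_suspicious(ear_values, fps=30, min_blinks_per_minute=6):
    """Flag a clip whose blink rate is implausibly low for a real person."""
    minutes = len(ear_values) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(ear_values) / minutes < min_blinks_per_minute

# Example: 30 seconds of video in which the subject never closes their eyes.
frames = [0.3] * (30 * 30)
print(looks_suspicious(frames))  # True -> worth a closer look, not proof of a fake
```

The catch described above applies directly: once a test like this becomes known, the next generation of fakes is simply trained to blink at a natural rate.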

While researchers and lawmakers look for ways to prevent the misuse of deepfakes, individuals can help by raising awareness of the technology. “Seeing is believing” is no longer a reliable guideline. We need to develop more sophisticated ways of determining what is true.
 

"Pronounced ..."

In our column series “Outspoken …”, Liwen Qin, Maximilian Buddenbohm, Dominic Otiang’a and Gerasimos Bekas take turns writing every week. In “Outspoken … posthuman”, Liwen Qin observes technological progress and how it affects our lives and society: in the car, in the office and at the supermarket checkout.


 

Author

Liwen Qin is the founder and CEO of Trends Eurasia GmbH, a Berlin consulting company that connects the digital markets between Germany and China.

Translation: Sabine Bode
Copyright: Goethe-Institut e. V., online editing
July 2019

Do you have any questions about this article? Write to us!