Pronounced … posthuman
Deepfake Technology - Identity in Crisis
Computers are learning to forge better and better

The technology that generates images and sound with the help of artificial intelligence is called “deepfake”, a combination of the terms “deep learning” and “fake”. This magic trick has been widespread in the entertainment industry, as well as in pornography, for years. Now it goes a step further: using a person's facial expressions, taken from images or videos, the artificial intelligence can synthesize a completely new model of that person. It can even build a face model from just a single image, although the results are not very convincing. Deepfakes are developed on open-source platforms, where they are revised and improved daily.
Fake nude pictures and porn possible

People such as politicians and celebrities, of whom plenty of image data is publicly available, are among the easier targets of deepfake technology. Their faces can be used in fake news or fake porn videos. As more and more people post selfies and short videos of themselves online, the technology can also be used maliciously against them. Such cases are becoming more and more common. The US state of Virginia was recently forced to amend its "revenge porn" law so that it now also covers deepfakes. This is meant to deter vengeful people from using the technology to harm an ex-spouse or lover. Recently another app, called DeepNude, appeared on the Internet; it can be used to create fake nude pictures of women.
Is that presidential candidate really in favor of the death penalty?

Losing your identity to deepfakes is worrying enough for individuals. Even more worrying, however, is the possibility that deepfakes could be used to produce fake news. Their targeted use for political purposes could shake the foundations of democracy, and with it the mechanism for creating social consensus. Did that presidential candidate really advocate the death penalty in a private speech? Nobody can prove whether it actually happened.
Since artificial intelligences learn extremely quickly, there is unfortunately still no technology that can expose deepfakes in the long run. As soon as a fake video is found to be unnatural in some respect - for example, if the person doesn't blink enough - an algorithm is used to improve that very aspect. While science and lawmakers seek preventive measures against the misuse of deepfakes, individuals can help raise awareness of the technology. “Seeing is believing” is no longer a reliable guideline. We need to develop more sophisticated methods of knowing what is true.
In our column series “Pronounced …”, Liwen Qin, Maximilian Buddenbohm, Dominic Otiang’a and Gerasimos Bekas take turns writing each week. In “Pronounced … posthuman”, Liwen Qin observes technical progress and how it affects our lives and society: in the car, in the office and at the supermarket checkout.
Liwen Qin is the founder and CEO of Trends Eurasia GmbH, a Berlin consulting company that connects the digital markets between Germany and China.
Translation: Sabine Bode
Copyright: Goethe-Institut e. V., online editing
Do you have any questions about this article? Write to us!