The Internet is full of photos and videos. When pictures accompany a text, people often take that as proof that the story is true. Unfortunately, photos and videos can be deceptive or even entirely fabricated. Such forgeries are known as deep fakes. Because they look very convincing, they make it even easier to spread disinformation.
Artificial intelligence, in other words very sophisticated computer programs, can be used to falsify or completely fabricate audio and video recordings. Deep fake creators can, for example, put any statement into a person’s mouth or make them appear to do things they never did in real life. The software analyzes recordings of a person and “learns” their facial expressions and gestures. Afterwards, arbitrary sentences can be inserted and the recording manipulated so that it looks as if the person said them.
This software can now be downloaded free of charge from the Internet. There are even relatively easy-to-use apps, so almost anyone can create and distribute deep fakes.
Fake videos are dangerous precisely because they look so convincing. People view information in text form more critically than information that is seemingly backed up by photos or videos. Many know that photos can be faked, but fewer are aware that the same applies to videos. Children and younger adolescents in particular are at great risk of falling for this deception, because their media literacy is not yet well developed.
Many deep fakes are created for fun, for example to alter well-known movie scenes and entertain an audience. However, it is becoming increasingly common for fakes to be produced with malicious intent. Fake news spread to influence political opinion becomes more credible, and therefore more dangerous, in this way. Faked video and audio recordings can also be used to commit fraud, for instance by initiating money transfers under a false identity. Often the goal of a deep fake is to harm a specific person: besides politicians and celebrities, private individuals also become victims time and again.
When teenagers fall for deep fakes because they don’t recognize them as fakes, it is harmless in most cases. If the trick is revealed afterwards, as in a video by a famous German YouTuber, it can even be an educational experience.
It becomes problematic when deep fakes manipulate young people into revealing certain information or into putting themselves in inappropriate or dangerous situations.
It can also happen that teenagers themselves become the target of a deep fake that exposes them. This can be a very embarrassing and traumatic experience for those affected.
New technical possibilities are always attractive to young people, and your child may try to create deep fakes themselves. So far there are no laws specifically governing deep fakes; nevertheless, they can be legally problematic. For example, using protected video recordings risks copyright infringement, and videos that are insulting or defamatory can violate personal rights.
Deep fakes are a relatively new and rapidly evolving phenomenon. Even if the technical details are not always easy to understand, it is important to talk to your child about the topic. The following resources can help you get started:
“logo!”, ZDF’s children’s news program, explains how to recognize deep fakes.
“Reporter”, a YouTube channel run by the public broadcasters, takes a closer look at how deep fakes are created.
“Deutschlandfunk Nova”, the young information program from Deutschlandfunk, explains in a youth-friendly way how to recognize manipulated videos.