Chatting via Messenger, playing games together or taking part in social media trends – media enable us to be in contact with others. Children and young people face many challenges when communicating online. On Elternguide.online, we explain how you and your family can deal safely and competently with communication risks online.
When we write messages via messenger apps, we don’t just use letters – we also like to use emojis. However, care should be taken to avoid misunderstandings. Chatting, posting and gaming are fun. However, being constantly available can overwhelm children and young people and lead to digital stress and the fear of missing out (FOMO). Be aware of your function as a role model and, if necessary, make technical adjustments together to regulate media use.
Whether through online gaming, video chats or social media – it’s easy to meet new people on the internet. Contact with strangers can be risky because we don’t know their intentions or who is actually communicating with us. Is it really a gaming friend of the same age? When paedophile criminals write to children or young people to initiate sexual contact, this is known as cybergrooming. If supposedly private images such as nude photos are used to blackmail someone, this is called sextortion.
Sometimes communication with friends and acquaintances can also become problematic. Among young people there is a risk of cyberbullying, for example via chat groups. Sexting, the sending of revealing messages and images, can become problematic in relationships. It helps to agree on rules for dealing with messenger chats. Discuss this with other parents and your child’s teachers. Talk to your child about being careful with their own data, such as nude images. Explain how to deal with insults and nasty comments, and make your child aware of reporting points.
AI applications have long since arrived in the everyday lives of children and young people and automatically accompany them when they use search engines, messengers and social media. They chat with chatbots such as MyAI on Snapchat, enter into intimate relationships with AI contacts or use programs such as ChatGPT or MetaAI to collect ideas or find solutions. In doing so, they encounter challenges such as misinformation, problematic content and data misuse, as well as the difficulty of distinguishing between human and machine communication. Talk to your child about the opportunities and risks of AI tools and configure safety settings in the apps together. Promote your child’s critical thinking and encourage them to question answers from chatbots, check information and understand AI as a tool – not as a substitute for their own work or real friendships.
The internet is not always a friendly place. Trolls and haters launch attacks under the guise of anonymity and deliberately provoke people in comment columns. Online hate speech can spoil the fun of posting videos and photos online. Thinking carefully about what you post or share is the first step to a safe browsing experience.
Forming their own opinion is one of the developmental tasks of children and young people. During the orientation phase, they can be susceptible to simple answers and radical positions from extremists. Whether on social media, in forums, chats or in online games – children and young people can come across extreme opinions and conspiracy myths everywhere online. Make it clear to your child why they should not trust all content online. Show your child how they can check information and familiarize them with the various reporting points on the internet.
Many gamers play games together, even if they are sitting in different places. When gaming, communication takes place via a headset or the chat function within a game. It is not always clear who is talking to you on the other end. If possible, players should block contact attempts from strangers. Gamers sometimes use harsh language, known as trash talk. If insults and conflicts escalate, this can lead to hate among gamers. Keep in touch with your child about their favorite games and use technical youth media protection solutions.
You can learn even more about communication risks and how to deal with them in these posts:
False reports, “fake news”, alternative facts or conspiracy myths – you hear these terms again and again when it comes to news and information on the internet. But they don’t necessarily mean the same thing. We explain the differences between the terms – and what you should look out for as a parent.
Disinformation is content that is demonstrably false or misleading – and is spread deliberately. It often appears credible at first glance because it is packaged in a story, contains individual true facts or is presented in a highly emotionalized way.
The aim of disinformation is to unsettle people, influence certain opinions or create a targeted mood – for example against individual groups or political decisions. It is often also about economic interests, for example through advertising revenue on dubious websites. Disinformation becomes particularly problematic when it undermines trust in science, the media or democratic processes. It can therefore pose a serious threat to democracy.
Especially in times of crisis, such as during the coronavirus pandemic or in connection with the war against Ukraine, disinformation plays a major role.
“Fake news” is a term that is often used in everyday life – usually as a synonym for disinformation. Literally, it means “fabricated news”.
However, the term is also deliberately used to denigrate critical reporting in serious media or to discredit political opponents.
It is therefore important to check carefully whether it really is a deliberately manipulated message – or whether the term is only being used to devalue another opinion.
Tip: When children or young people talk about “fake news”, ask what exactly they mean by this – and look at the source together.
A good introduction to the topic is the “Inform” module from the “Genial Digital” material of the German Children’s Fund. Here, children learn in a playful way how to better assess and question information on the internet.
False information is content that is not correct – but is passed on without intention. For example, because someone has misunderstood something or shared outdated information.
Mistakes can also occur in the media, for example in research or translation. In the past, this was sometimes called a “newspaper hoax”. It is important that such errors are corrected later.
Satire is an artistic form used to exaggerate social or political issues, for example in the heute-show, the Postillon or in memes.
Sometimes satire works with similar means as disinformation – such as exaggeration or simplification. However, it is not intended to deliberately deceive people.
The aim of satire is to criticize and make people think. Children and young people sometimes need help to classify satire correctly – talk about it together.
Propaganda means that information and messages are disseminated in a targeted manner in order to steer public opinion in a certain direction.
This can happen through language, images, music or even misinformation. Propaganda is often used in political conflicts – in the past on posters or on the radio, today also via social media and messenger services.
Conspiracy myths claim that secret groups or powers are behind major events. These stories offer simple explanations for complex relationships – without providing any scientific evidence or proof.
Such myths divide the world into “good” and “evil” – and often make certain groups responsible for everything. It becomes dangerous when they stir up hatred and mistrust or undermine faith in science and democracy.
These stories are not scientific theories, but are based on unsubstantiated claims. This is why experts deliberately refer to them as conspiracy myths or conspiracy narratives – and not as “conspiracy theories”.
The annual Safer Internet Day, which is coordinated in Germany by the EU initiative klicksafe, will take place on February 11, 2025. Under the motto “No likes for lies! Recognize extremism, populism and deepfakes online”, children and young people are to be encouraged to deal critically with online content. On Elternguide.online we answer the most important questions on this topic.
Disinformation refers to the deliberate dissemination of false or misleading information with the aim of deceiving or manipulating people. The aim is to deliberately create a certain opinion or mood, for example against certain groups of people or political decisions. We explain everything about this topic in the article Fake news, conspiracies and disinformation – what does it actually mean? The text False information on the internet explains the background in plain language.
“Fake news” is invented or distorted news that relies on strong emotions to attract attention and spread quickly. They can distort public opinion and promote false beliefs. You can find out more about this in our parents’ guide article Fake news – dealing with disinformation and false reports on the internet.
Deepfakes are videos or audio recordings that have been faked with the help of artificial intelligence. They look real even though they are not. People are shown as if they were saying or doing things that never actually happened. Deepfakes can be used to spread false information or make someone look bad. Read the article Deep fakes – deceptively real fakes to find out what you can watch out for as a parent.
Simple answers to difficult questions – conspiracy narratives are often behind this. These complex narratives explain events or situations with secret plans or powers. Such myths can quickly spread online, fuel mistrust of official bodies and lead to unreasonable behavior. You can find out more about this in our article Conspiracy myths on the internet.
Whether on social media, messengers or in online games – children and young people can come across extremist propaganda anywhere online. Extremist groups use the internet to spread their ideologies and recruit new followers. They often use manipulative content and misleading disinformation to achieve their goals. Our article Extremism online explains more about the background and how you can protect your child.
Some symbols, such as the swastika, are prohibited due to their association with unconstitutional organizations or ideologies. The dissemination of such symbols can contribute to the spread of extremist views and have consequences under criminal law. Our article Prohibited symbols on the internet sheds light on the dangers for children and where parents can get information.
The term dark social refers to the dissemination of content via private channels such as messenger services or emails that are not publicly visible. Such distribution channels make it more difficult to track disinformation and can increase its reach. If you would like to find out more, read the article Dark Social – the dark side of the internet.
Chatting, posting, liking – online communication is an important part of children and young people’s media use. However, it is associated with a number of challenges. Contact with strangers harbors risks such as hate speech, cybergrooming or sextortion. Conflicts are also possible among friends, for example through cyberbullying. Problems can arise in gaming through anonymous communication and trash talk. In our article, we explain how your family can deal with communication risks safely and confidently.
The ability to connect with others online and develop their own opinions is an important part of children and young people’s development. However, during the orientation phase, they can be susceptible to easy solutions and radical views from extremists. Keep talking to your child about their media use, keep an open mind and listen. Explain to your child why they should not trust all content on the internet. Show them how to critically question and check information and give your child access to age-appropriate news formats. There are numerous online resources and tools that can help to recognize disinformation and deal with communication risks. klicksafe, for example, offers materials and explanatory videos that are specially designed for young people.
The internet is full of photos and videos. Images are often seen as proof that a report is true. However, photos and videos can also be manipulated or even faked. Deep fakes are just such forgeries. Because they look so convincing, they make disinformation even easier to spread.
Thanks to artificial intelligence, sound or video recordings can be falsified or even completely recreated. Developers of deep fakes can, for example, put any statement in a person’s mouth or make them do things that they did not do in real life. The software analyzes recordings of a person and “learns” their facial expressions and gestures. After that, any sentences can be spoken and the recording manipulated to make it look as if the person said it themselves.
Such software can now be downloaded free of charge from the Internet. There are even relatively easy-to-use apps so that almost anyone can create and distribute deep fakes. As technology is constantly improving, counterfeits are becoming increasingly difficult to detect.
Many young people encounter deep fakes in the form of humorous clips or parodies. If they fall for funny deep fakes, this is harmless in many cases. If the trick is explained afterwards, as in the video by a famous German YouTuber, it can even be an educational experience.
It becomes problematic when young people allow themselves to be manipulated by deep fakes or are targeted themselves. In other words, when a deep fake is created that exposes them. This can put a heavy burden on those affected and lead to serious consequences.
Fake videos are dangerous because they look so convincing. Children and young people in particular must first learn to question content critically.
Although the use of third-party images is generally regulated by the right to one’s own image, deep fakes have long been a legal gray area. The Federal Council has been dealing with the issue since July 2024. The new law on the “violation of personal rights through digital falsification” provides for penalties of up to two years’ imprisonment, and up to five years in serious cases. The German government is also planning programs to improve the detection and regulation of deep fakes.
Deep fakes are a rapidly growing phenomenon. While it’s not always easy to understand the technical details, it’s important that you talk to your child about these issues. Here are a few tips:
No longer a child, but not yet an adult: teenagers are in an exciting phase. They are shaping their identity and their opinions, looking for their place in the world and for people they feel comfortable with. In the process, they ask many questions and are curious and open to many things. This opens up opportunities for new ideas – but it can also be a gateway for extremist worldviews.
Extremism comes in many forms – whether right-wing or left-wing extremism, religious extremism or conspiracy myths. What all these variants have in common is that they work with simple but misleading answers that can only be exposed as false at second glance, that they present people with supposed scapegoats for every problem, and that they stir up hatred.
Extremists try to reach young people via online channels with highly simplified content and short statements. They rely on current online trends such as entertaining videos and images, post memes and seemingly funny pictures, or spread false reports in order to disseminate their messages inconspicuously and memorably. They are active on all major networks – be it YouTube, WhatsApp or Facebook.
TikTok in particular is a preferred medium. It is used by more than half of all 12- to 19-year-olds and offers enormous reach through its algorithm. With catchy and seemingly harmless clips, extremists ensure that their content is pushed into many feeds by the algorithm – and then benefit from the fact that content on the network spreads like a snowball as soon as users interact with it. The young people who watch and share the clips often do not initially realize that they are supporting extremist ideas, because the messages are at first only hidden and subtly embedded in seemingly harmless songs or clips. These videos are often shared before young people recognize their problematic origin.
Networks such as Discord, which are actually used for gaming, are also often exploited by extremists. They join gaming groups, establish contact via a shared game and then spread their ideology. To do this, they often use images or well-known stories from games and reinterpret them to fit their own message. For young people, the line between the actual game and the extremist message can quickly blur.
Right-wing extremists and Islamists are particularly active on the internet. When their propaganda is clearly recognizable, the platform operators can usually delete it quickly. That is why, after initial contact, extremists like to switch to less well-known and less strictly controlled online services, such as the Russian platform vk.com. A large part of the communication does not take place publicly at all, but in closed groups, for example on Telegram or Facebook. Young people who were contacted via public portals are lured there – and that is where they encounter the real, sometimes brutal or disturbing propaganda.
Both right-wing extremists and Islamists like to see themselves in the role of victims. They claim to be oppressed by their own or other states. Right-wing extremists in Germany often speak of the so-called “Lügenpresse” (“lying press”). They accuse the media of being controlled by the government, which is supposedly why right-wing extremist opinions have no voice. Both groups also voice criticism of capitalism. This is particularly dangerous, because aspects of this criticism are entirely legitimate and shared by many young people. Extremists use this to win them over to their cause. More recently, antisemitic conspiracy myths (i.e. directed against Jews) have increasingly been spread again by both camps. These conspiracy myths now also appear among musicians who are popular with young people – for example in some songs by the German rapper Kollegah.
That is why it is particularly important that you talk to your child about what is on their mind. Which topics are hotly debated among their friends? Which images and videos do they watch and share with others? Also discuss the goals that certain groups pursue when they post content with extreme political statements online.
Make your child aware that anyone can express their opinion online – including people with bad intentions. That is why you should not simply trust all content, but question it. On knowyourmeme.com, well-known memes can be looked up along with their history (unfortunately, the site is only available in English). Mimikama.org is a good place to check reports from social networks such as Facebook for their truthfulness.
If you or your child come across clearly extremist content, you can report it directly to the platform operators. On large services such as Facebook and YouTube, this takes only a few clicks. You can also use the general internet complaints office (Internetbeschwerdestelle). In particularly serious cases, it may make sense to contact the police directly. In most German federal states, this can now be done online via a so-called “Onlinewache” (online police station).
In addition, use technical youth protection settings. These can help to restrict, block or mute contact attempts by strangers. While they do not offer complete protection against extremist content, they can give your child additional security. A helpful platform for this is medien-kindersicher.de, which offers instructions for youth protection settings for various networks – including Discord.
There are many good resources for strengthening young people against extremism:
Artificial intelligence (AI) has long since found its way into our everyday lives. Where flying cars and robots were once seen as symbols of AI, the reality today is more diverse, but no less fascinating. We take a look at where we already encounter artificial intelligence in everyday life and what significance this has for media education.
Artificial intelligence, or AI for short, is a very broad term that describes machines or computer systems that can imitate human intelligence. To do this, they are fed information until they can apply it independently to solve tasks. This also means that they can learn from mistakes and thus constantly improve. For example, if a computer is fed a very large number of photos of human faces, at some point it will be able to tell for sure whether or not a photo has a human face in it. In this case, it is a so-called “weak AI” because it is intelligent only in relation to a specific subject. Research is also being conducted on a “strong AI” that could have the intellectual capabilities of a human, e.g., think logically or plan ahead. However, the strong AI does not yet exist. And if it should exist one day – it will probably not have feelings and thus will be fundamentally different from us humans.
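The idea of a “weak AI” that learns only from examples can be illustrated with a small sketch. This is a deliberately simplified toy (a nearest-neighbor classifier on made-up numbers, not a real face detector, and the feature values are invented for illustration): it is “trained” by storing labeled examples and then labels new inputs by finding the most similar known example – intelligent for this one task only, just as the paragraph above describes.

```python
# Toy illustration of "weak AI": a 1-nearest-neighbor classifier.
# Training here simply means storing labeled examples; classifying
# means finding the stored example most similar to the new input.

def train(examples):
    """Store labeled examples; each is (features, label)."""
    return list(examples)

def classify(model, features):
    """Label new features by the closest stored example."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(model, key=lambda example: distance(example[0], features))
    return label

# Hypothetical "face vs. no face" photos, reduced to two made-up
# numeric features each (purely illustrative values).
model = train([
    ((0.9, 0.8), "face"),
    ((0.8, 0.9), "face"),
    ((0.1, 0.2), "no face"),
    ((0.2, 0.1), "no face"),
])

print(classify(model, (0.85, 0.75)))  # prints "face"
```

A real face-recognition system works on millions of photos and far richer features, but the principle is the same: the more varied examples the system has seen, the more reliably it can handle new inputs.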
The areas of application for AI in family life are diverse. Facial recognition technologies unlock smartphones, voice assistants such as Alexa and Siri fulfill our commands, and streaming services such as Netflix suggest films that match our preferences. Algorithms also play a role in this. Toys (smart toys) can also actively interact with children with the help of AI. For example, an intelligent cuddly toy can search the internet for answers to a child’s questions and read them out. Chatbots such as ChatGPT can provide support with school tasks.
Artificial intelligence can make our lives easier in many situations. But there are also risks associated with the use of AI. For example, so-called deep fakes can be used to create deceptively real images or videos that support the spread of fake news. If AI is used at home, for example via a voice assistant or smart toys, it is also important to look at the manufacturer’s data protection measures and use existing security settings. If the data is not stored on the device itself, but in a cloud, there is a risk that third parties can access and misuse the data. There are also many legal questions for which there is no conclusive solution at the present time: For example, who should be liable in the future if a decision made by an AI causes damage? This is one reason why the use of self-driving cars, for example, is not yet readily possible.
In order to promote a better understanding of AI, it is important that children and young people are familiarized with the concept at an early age. It is important that they understand what AI is and how it works. Younger children often find it difficult at first to distinguish between an object activated by AI and a real living being. Age-appropriate explanatory videos and articles are suitable for teaching children and young people about artificial intelligence. There are also games in which you can train an AI yourself and thus learn to understand how it works in a playful way.
We have put together a few offers for you:
Open communication: Talk openly with your child about AI and explain how it is used in their everyday life. Encourage them to ask questions and take time to discuss any concerns.
Critical media literacy: Help your child develop a critical attitude towards the information they find online. Show them how to recognize false information and encourage them to check sources.
Data protection: Discuss the importance of data protection with your child and encourage them to handle personal data responsibly. Explain what information can and cannot be shared safely.
Self-determination: Encourage your child to decide for themselves which technologies they want to use. Help them to set their own boundaries and feel comfortable saying no when they feel uncomfortable.
Joint activities: Take the opportunity to play games or do activities together with your child that provide a better understanding of AI. Discuss how AI-based technologies work and let your child gain their own experience.
The latest news, preparation for a paper or the weather forecast – check TikTok right away. Teenagers and young adults in particular are frequent users of social media platforms such as TikTok and YouTube as search engines. This can work, but it also brings its own unique challenges.
It was taken for granted for a long time – if you want to find something on the Internet, you “Google” it. But that seems to be faltering. Young people are increasingly starting their online searches on social media platforms such as YouTube, TikTok and the like. In some statistics, YouTube even appears as the second largest search engine after Google – and the trend is rising.
Why? That’s quite simple: social media is the digital home of many young people anyway. That’s where they know their way around, that’s where they feel comfortable – and that’s why they have great confidence in the search results. When young people search here for products, events or places, the results are mostly (seemingly) personal recommendations and experiences from celebrities or from the community, instead of rather impersonal and complicated web links. This makes a credible and approachable impression on young people. In addition, videos and images are easier and more entertaining than endlessly clicking through walls of text.
Platforms like TikTok and YouTube are responding to young people’s need to be able to search content easily. TikTok, for example, has made the search field significantly larger and more prominent, and now offers a widget for smartphones that can be used to operate the TikTok search directly from the home screen. The term “widget” is a blend of “window” and “gadget” and refers to a type of interactive window.
But how can children and young people distinguish trustworthy from dubious information on social media? Is everything there really as authentic as it sometimes seems?
Of course not: influencers are not always just the nice buddies next door – they earn a lot of money with their appearances and recommendations. So if a restaurant is praised with particularly warm words, it may well be that there is simply a particularly lucrative advertising contract behind it.
In addition, classic advertisements also appear on social networks. The algorithm also still has a say and constantly presents us with similar results – just like other search engines. And caution is also called for in other respects: In addition to serious information, fake news or even deliberate propaganda from various interest groups can also be found on the networks. Social media platforms often collect and collate at least as much data as traditional search engines.
As a parent, you should think carefully with your child about how to use the search function of social media services safely:
Show interest in your child’s media use and their favorite offerings on TikTok and Co. Encourage your child to use social media platforms critically. Only if your child knows the possibilities as well as the advantages and disadvantages of different services can they choose consciously and purposefully.