The ongoing debate about hate and agitation on the Internet shows that the protection of children and young people online still faces major challenges. In the age of real-time content, social media and apps, the question inevitably arises as to what contribution technical youth media protection can make. One opportunity, for example, lies in modern youth protection software pre-installed on mobile devices; after all, the smartphone has long been the media centre of young people's lives. Current developments also indicate that artificial intelligence (AI) has potential for the protection of minors from harmful media: major providers already use machine learning, a branch of AI, to automatically identify problematic content and accounts on their platforms. Yet what offers opportunities on the one hand raises problems on the other. This prompts the question of how automated recognition mechanisms can be reconciled with classical procedures for youth media protection.
Society must decide what roles should be played by providers of content and platforms, by automated processes, and by classic supervisory procedures with case-by-case decisions taken by bodies independent of the state. This will be the subject of a panel discussion organised by the Kommission für Jugendmedienschutz (KJM) [Commission on Youth Media Protection] within the framework of the event series "KJM im Dialog" [KJM in Dialogue] on 7 November 2018 at the Permanent Representation of Rhineland-Palatinate, In den Ministergärten 6, 10117 Berlin.