Why should online platforms take more responsibility?
Online platforms are important drivers of innovation and growth in the digital economy. They have enabled unprecedented access to information and exchanges as well as new market opportunities, notably for small and medium-sized enterprises (SMEs). With the surge of illegal content online, including online terrorist propaganda and xenophobic and racist speech inciting violence and hatred, online platforms carry an increasing societal responsibility in terms of protecting users and society at large and preventing criminals from exploiting the online space.
The Commission has been encouraging voluntary action by the industry to remove illegal content online through initiatives such as the Code of Conduct on Illegal Hate Speech Online and the EU Internet Forum. However, the sheer scale at which illegal content can be uploaded, and therefore accessed, online raises serious concerns that call for forceful and effective responses.
To this end, the Commission is today presenting a set of guidelines and principles for online platforms to step up the fight against illegal content online in cooperation with national authorities, Member States and other relevant stakeholders.
Following up on the European Council conclusions of June 2017, echoed by G7 and G20 leaders, the proposed measures constitute a first element of the Anti-Terrorism package announced by President Juncker. The measures also feed into the Digital Single Market Strategy and deliver on the actions announced in the Online Platform Communication of May 2017.
How quickly and effectively is illegal content taken down?
According to the latest reports, the removal rate for illegal hate speech has increased from 28% to 59%. While the speed of removal has improved, 28% of removals still took place only after a week[1]. The Commission has agreed a specific Code of Conduct with major online platforms; however, important differences across those platforms remain[2]. In the framework of the EU Internet Forum tackling terrorist content, approximately 80-90% of content flagged by Europol has been removed since the Forum's inception. For child sexual abuse material, the INHOPE network of hotlines already reported in 2015 that 91% of content was removed within 72 hours, with 1 out of 3 items removed within 24 hours. This shows that a non-regulatory approach can produce results, in particular when flanked by measures that facilitate cooperation between all the operators concerned.
Why is the Commission proposing new guidelines?
Currently, a harmonised and coherent approach to the removal of illegal content does not exist in the EU. A more aligned approach however would make the fight against illegal content more effective. It would also benefit the development of the Digital Single Market and reduce the cost of compliance with a multitude of rules for online platforms, including for new entrants.
Today's Communication therefore provides a set of guidelines and principles for online platforms on the ways in which they can live up to their responsibility as regards tackling the illegal content they host. It also aims to mainstream good procedural practices across different forms of illegal content, to promote closer cooperation between platforms and competent authorities. As such it outlines a European approach to address illegal content for online platforms, combining the need for fast and effective removal of illegal content and prevention and prosecution of crimes with safeguarding the right to free speech online. This guidance will complement and reinforce the ongoing sector-specific dialogues.
What are the main actions expected from the online platforms?
The Communication invites online platforms to step up their efforts to remove illegal content online and proposes a number of practical measures to ensure faster detection and removal of illegal content online:
- Establish easily accessible mechanisms that allow users to flag illegal content, and invest in automatic detection technologies, including to prevent the re-appearance of illegal content online;
- Cooperate with law enforcement and other competent authorities, including by sharing evidence;
- Grant a privileged relationship to trusted flaggers, i.e. specialised entities with specific expertise in identifying illegal content, and to dedicated structures for detecting and identifying such content online, while ensuring sufficient standards of training, quality assurance and safeguards;
- Take voluntary, proactive measures to detect and remove illegal content, and step up cooperation and the use of automatic detection technologies;
- Take measures against repeat infringers;
- Develop and use automatic technologies to prevent the re-appearance of illegal content online.
The Communication also calls for broader transparency measures (including on the number and speed of take-downs), as well as complaint mechanisms and other safeguards to prevent the over-removal of content.
Furthermore, exchanges and dialogues with online platforms and other relevant stakeholders such as trusted flaggers, civil rights and consumer associations will continue.
Is the Commission planning on taking any legislative steps?
Over the next six months, the Commission will:
- Continue exchanges and dialogues with online platforms and other relevant stakeholders and evaluate the progress made under the various sector-specific initiatives, such as the application of the Code of Conduct on countering illegal hate speech online and the results of the EU Internet Forum;
- Monitor progress and assess whether additional measures are needed, in order to ensure the swift and proactive detection and removal of illegal content online, including possible legislative measures to complement the existing regulatory framework.
This will be completed by May 2018.
What makes content illegal at EU level?
Illegality is determined by specific legislation at EU level, as well as by national law – the Communication does not change anything in this respect. There is a vast regulatory framework at national and European level to determine what is illegal. It covers material such as incitement to terrorism, illegal hate speech and child sexual abuse material.
When it comes to content which is objectionable but not necessarily illegal, the EU Audiovisual Media Services Directive, for instance, asks video-sharing platforms to protect minors from harmful content. It does not, however, ask online platforms to remove this content, but rather to create tools that allow users to identify such content and to prevent minors from being exposed to it.
The guidance does not provide measures to be taken in respect of fake news, which is not necessarily illegal. The problem of fake news will be addressed separately.
What are the current EU rules on removing illegal content online?
The e-Commerce Directive requires online platforms to act "expeditiously" to remove illegal content after they have obtained knowledge of it; it does not define what this means in practical terms. In the current legal environment, this usually has to be decided on a case-by-case basis depending on the specific circumstances, in particular the type of illegal content, the accuracy of the notice and the potential damage caused. Today's Communication calls for faster action where serious harm is at stake, for instance in cases of incitement to commit terrorist acts. On the basis of the information provided by online platforms (for instance through transparency reports), the Commission will explore the possibility of fixing specific timeframes for removal.
How do the new guidelines complement other measures under the Digital Single Market Strategy?
This Communication is a non-binding measure that encourages online platforms to take certain measures to deal swiftly with illegal content online. It covers all types of illegal content, and as such it complements sector-specific binding measures such as the EU Audiovisual Media Services Directive and the Copyright Directive proposal. In particular:
- The Copyright Directive proposal requires (Article 13) that services that store and give access to a large amount of works uploaded by their users take measures to prevent the availability on their services of works identified by rights holders, or, in case of agreements between rights holders and services, to ensure the functioning of such agreements. As recalled in the Communication, rights holders also have the possibility of notifying infringing content to the services.
- In parallel, Article 28a of the proposal for the revision of the Audiovisual Media Services Directive requires video-sharing platforms to take measures to protect minors from harmful audiovisual content (such as pornography and violence) and to protect all citizens from incitement to hatred. Detailed measures include tools for users to report and flag harmful content, age verification and parental control systems. The proposal is currently being discussed by the EU co-legislators.
What is the Code of Conduct on countering illegal online hate speech and how is it complemented by this Communication?
Illegal hate speech is defined in EU law (Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law) as the public incitement to violence or hatred on the basis of certain characteristics, including race, colour, religion, descent and national or ethnic origin.
The Code of Conduct on countering illegal online hate speech is a series of commitments by Facebook, Twitter, YouTube and Microsoft to combat the spread of hate content in Europe. It was adopted on 31 May 2016.
Each of the IT companies that signed this Code of Conduct is committed to countering the spread of illegal hate speech online, and to having rules that ban the promotion of violence and hatred. When they receive a request to remove content from their online platform, the IT companies assess the request against their rules and community guidelines and, where applicable, national laws on combating racism and xenophobia. They then decide whether the content constitutes illegal online hate speech and whether it needs to be removed. The aim of the Code is to make sure that requests to remove content are dealt with speedily. The companies have committed to reviewing the majority of these requests in less than 24 hours and to removing the content if necessary. To ensure the Code of Conduct is having the intended impact, NGOs and public bodies from across the EU provide data on how quickly such illegal content was removed. The European Commission has so far published two such evaluations, the latest of which showed some encouraging results (see IP/17/1471).
What is the EU Internet Forum and how does it contribute to the fight against illegal content online?
The EU Internet Forum, launched in December 2015, is one of the key commitments made in the European Agenda on Security. The Forum brings together EU Interior Ministers, high-level representatives of major internet companies, Europol, the EU Counter Terrorism Co-ordinator and the European Parliament.
The EU Internet Forum has two key objectives: to reduce accessibility to terrorist content online and to empower civil society partners to increase the volume of effective alternative narratives online. These two objectives have materialised into a referral mechanism with the participation of Europol to remove internet content; the creation of a prototype "database of hashes" developed by the internet industry to create a shared database to help identify potential terrorist content on social media and prevent its re-appearance on other platforms; and the establishment of a Civil Society Empowerment Programme, launched by the European Commission in March 2017.
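The "database of hashes" described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the industry's actual system: production deployments rely on perceptual hashing that survives re-encoding and cropping, whereas the exact SHA-256 match below only catches byte-identical copies. All names here are illustrative.

```python
import hashlib

class HashDatabase:
    """Illustrative shared database of content fingerprints.

    Platforms register hashes of content already removed as terrorist
    material; any participating platform can then check uploads against
    the database without the content itself ever being exchanged.
    """

    def __init__(self):
        self._known = set()

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # Real systems use perceptual hashes (robust to re-encoding);
        # SHA-256 here keeps the sketch simple and only matches exact copies.
        return hashlib.sha256(content).hexdigest()

    def register(self, content: bytes) -> None:
        """Called when a platform removes content identified as illegal."""
        self._known.add(self.fingerprint(content))

    def is_known(self, content: bytes) -> bool:
        """Upload-time check performed by any participating platform."""
        return self.fingerprint(content) in self._known

db = HashDatabase()
db.register(b"bytes of a removed propaganda video")
print(db.is_known(b"bytes of a removed propaganda video"))  # re-upload detected
print(db.is_known(b"bytes of an unrelated holiday video"))  # legitimate content passes
```

The design point the sketch captures is that only fingerprints are shared between platforms, which is what makes cross-platform cooperation possible without redistributing the illegal material itself.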
Since the launch of the EU Internet Forum, concrete steps have been taken to stop the abuse of the internet by international terrorist groups, with measurable outcomes. Approximately 80-90% of content referred to the internet companies by Europol has been removed.
How does the Communication feed into the work of the EU Internet Forum?
The EU Internet Forum set out an ambitious Action Plan to combat terrorist content online in July 2017. This includes measures to step up the automated detection of illegal terrorist content online, share related technology and tools with smaller companies, achieve the full implementation and use of the 'database of hashes' and empower civil society on alternative narratives. The Communication encourages industry to enhance its efforts in all these areas; in this way the Communication and the Action Plan mutually reinforce one another.
The Commission has convened the second meeting of the Senior Officials of the EU Internet Forum to take stock of the implementation of the Action Plan. At the high-level meeting of the Forum on 6 December 2017, the Commission will draw the first conclusions from the results achieved and assess how to move forward. Lessons and recommendations will feed into the progress measured under the present Communication.
(Source: European Commission)