The European Parliament has approved new legislation aimed at "stopping the spread of terrorist content on the Internet." It will force Internet platforms such as Google, Twitter and Facebook to remove terrorism-related content within one hour of being notified, without the decision having to go through a judge. Failure to comply will result in penalties.
This raises two major questions: what counts as terrorist content under the new European Union legislation, and who will be responsible for filtering this information.
What content will have to be removed
The law covers "texts, images, audio or video recordings, including live broadcasts, that incite, solicit or contribute to the commission of terrorist offences; provide instructions for doing so; or encourage participation in terrorist groups." This includes information on the manufacture of firearms or explosives.
That said, each country will have to decide exactly what qualifies as terrorist content. Each country will also designate a competent authority to monitor such content and to notify Internet companies of material that qualifies as terrorist. Once notified, companies will have at most one hour to remove it. Moreover, the block must be applied in all European Union countries, not only in the country of origin.
On the other hand, content about terrorism published on a social network or website will not be considered terrorist content if it serves "educational, journalistic, artistic or research purposes, or is used to raise awareness."
If the content is not removed, companies will face penalties calculated "taking into account the nature of the infringement and the size of the company." Companies are not required to filter all the content they host, nor are they obliged to use automated monitoring tools.
Speaking about the Digital Services Act (DSA) last year, the European Commissioner for the Internal Market, Thierry Breton, suggested that one way to curb harmful content on the Internet would be for companies to hire more moderators, giving them greater control over what is published.