In November 2020, following the terrorist attacks in France, Germany, and Austria, the European Council[i] stated that “access to digital information is becoming ever more crucial and the mobility of this data demands effective cross-border instruments, because otherwise terrorist networks will in many cases be a step ahead of the investigating authorities … access to the digital information, that is essential for preventing and eliminating terrorist action must be ensured and boosted”.[ii]
It is well known that, in the fields of online propaganda, radicalization,[iii] and terrorist financing, weak regulation of social media and end-to-end encrypted chats can become a barrier to an effective counter-terrorism strategy. Here, the dilemma of trading some privacy for more security, that is, of enhancing access to online communication, also touches the economics of these instruments. In this respect, Wojciech Wiewiórowski, the European Data Protection Supervisor, claimed: “encryption is as critical to the digital world, as is the physical lock to the physical world”. Stressing the need to differentiate the approach to lawful-access requests across different technologies and means of communication, he argued that it is useless to frame the debate as a strict dichotomy between “confidentiality of communications can never be restricted” and “law enforcement will be unable to protect the public unless it can obtain access to all encrypted data”. To satisfy the requirement of proportionality, legislation must lay down clear and precise rules governing the scope and application of these measures and ensure that the people whose personal data are affected have sufficient guarantees that their data will be effectively protected against the risk of abuse.[iv]
Besides the ethical dilemmas, it has to be taken into account that an institutional man-in-the-middle approach in these sectors may directly affect the core business of hosting service providers, with impacts on investments, users’ behavior, and government budget spending. Even granting that a 100% level of security is utopian, with the technologies and human resources currently available and the EU’s and member states’ ongoing plans and regulations, is overall control over social media still possible? And even if it were, would it translate directly into better prevention of radicalization and terrorist attacks? To answer these questions, it is useful to analyze what has been done so far to control content on open social media platforms in both the private and the public sector.
Facebook and the multifaceted cost of content monitoring
End-to-end encryption is a security technique used by some apps and services (e.g. WhatsApp,[v] Signal,[vi] and Telegram[vii]) to provide a greater level of privacy and communication security: messages are encrypted before they leave the sender’s device, and only the device they are sent to can decrypt them. This makes providers’ servers act as blind routers, passing messages on without being able to read them, and it protects messages from being read if they are intercepted in transit by a hacker or a government agency.[viii]
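To make the mechanism concrete, the following is a minimal sketch in Python using the PyNaCl library. It illustrates the principle only: real messengers such as WhatsApp and Signal implement the far more elaborate Signal protocol, with session handshakes and continuously rotating keys.

```python
# Minimal sketch of end-to-end encryption with PyNaCl (pip install pynacl).
# Illustrative only: production messengers use the Signal protocol.
from nacl.public import PrivateKey, Box

# Each device generates its own key pair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"See you at noon")

# The provider's server only ever relays `ciphertext` -- a blind router.
# Only Bob's device, which holds his private key, can decrypt it.
receiving_box = Box(bob_key, alice_key.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"See you at noon"
```

The point visible in the sketch is structural: whatever sits between the encrypt and decrypt calls, including the provider’s own servers, handles only ciphertext.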
So far, institutions have bypassed encryption barriers by injecting state-sponsored malware onto target devices: for example, the Italian Legislative Decree n. 216/2017 introduced the use of trojan software during investigations.[ix] There are also older, costlier methods of fighting increasingly sophisticated crime, such as the $900,000 the FBI spent to hack the San Bernardino shooter’s $350 iPhone 5C.[x] Another example is the London attacker, Khalid Masood, who used WhatsApp, Facebook’s fully encrypted chat service, to declare that he was waging jihad in revenge for Western military action in Muslim countries in the Middle East. Detecting the message was possible only because Masood’s mobile telephone was recovered after he was shot dead.[xi] Discovering Masood’s last recorded thoughts was a key part of the investigation into what lay behind the assault, a result brought about by human and technical intelligence rather than by end-to-end chat monitoring.[xii]
Cases like this have generated increasing pressure from institutions on the private sector to regulate content spread on social media. Indeed, from the chat providers’ point of view, end-to-end encryption represents not only a move towards users’ right to privacy but also a discharge of responsibility, since providers are no longer bound to create backdoor access to users’ messages.[xiii] In this regard, Facebook, in contrast to a business built around the monetization of user data, plans to make all messages on the app fully end-to-end encrypted by default.[xiv] This change, which imposes a complex and long-lasting re-architecture of the entire product and an expensive rebuilding of every feature of Facebook Messenger,[xv] is likely to leave the company physically unable to moderate a large part of the encrypted content in user chats.[xvi]
Despite the costs of changing the messaging infrastructure and of losing visibility into over 2.7 billion monthly active users’[xvii] private conversations, Facebook’s priority seems to be that, with end-to-end encryption, the company will no longer have backdoor access to users’ messages, and thus cannot be forced to comply with requests from law enforcement agencies to access data.
According to researchers and journalists, this move seems driven less by the legitimate requests of privacy advocates than by the growing pressure, backed by the threat of sanctions, that Australia, the US, the EU, and the UK are applying on Facebook to moderate user content.[xviii] Indeed, content moderation is becoming an ever-growing issue for the company.
In 2017, Facebook had more than 7,000 content moderators.[xix] They earned roughly $15 per hour,[xx] a fraction of what full-time employees earn (the median annual salary at Facebook was $240,000 in 2017), and, after only a two-week training course,[xxi] they started deciding whether to remove or escalate terrorist content, flagged either by users or by algorithms, by looking at the captions as well as the images themselves.[xxii]
In May 2020, Facebook agreed to pay $52mn to current and former moderators to compensate them for PTSD[xxiii] developed on the job.[xxiv] Beyond the relatively negligible cost for the company, the episode highlighted its lack of awareness of how delicate an issue content moderation is.
With global IP traffic predicted to grow at a compound annual growth rate[xxv] of 20% from 2018 to 2023,[xxvi] the number of Facebook content moderators has already doubled (roughly 15,000, at 20 sites globally, speaking over 50 languages combined), most of them outsourced from companies like Accenture, Cognizant,[xxvii] Arvato, and Genpact.[xxviii] Moreover, as the number of moderators doubled in only two years, working conditions deteriorated and training was cut back.[xxix] These conditions inevitably led to a 10% error rate in flagging posts, as Facebook itself has admitted.[xxx] Given that reviewers have to wade through three million posts per day, that equates to 300,000 mistakes daily.[xxxi]
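The arithmetic behind these figures is simple enough to verify; the short Python sketch below reproduces it (the 20% CAGR and the three-million-posts volume come from the sources cited above).

```python
# Back-of-the-envelope arithmetic behind the figures above.

# Global IP traffic at a 20% compound annual growth rate, 2018-2023:
# ending = beginning * (1 + rate) ** years
growth_factor = (1 + 0.20) ** 5
print(f"Traffic multiple over five years: {growth_factor:.2f}x")  # ~2.49x

# A 10% flagging error rate over three million reviewed posts per day:
posts_per_day = 3_000_000
error_rate = 0.10
print(f"Mistaken decisions per day: {int(posts_per_day * error_rate):,}")  # 300,000
```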
Nevertheless, in the second quarter of 2020 alone Facebook removed about 8.7 million pieces of terrorist content (under the company’s definition: content from non-state actors that engage in or advocate violence to achieve political, religious, or ideological aims).[xxxii] But researchers, now as in the past,[xxxiii] argue it is still impossible to gauge just how many posts escape the dragnets on a platform so large.[xxxiv] In this respect, automated systems using AI and machine learning, notably invoked by Facebook’s CEO as the future solution to the company’s current political problems, are certainly helping with moderation. AI classifies user-generated content based on either matching or prediction, leading to a decision outcome (e.g. removal, blocking, account takedown),[xxxv] in theory making suspect content quicker for human moderators to process at a later stage.[xxxvi]
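As a rough illustration of that matching-or-prediction flow, the sketch below routes one piece of content to an outcome. The thresholds and outcome labels are hypothetical illustrations, not Facebook’s actual policy values.

```python
# Schematic sketch of the "matching or prediction -> decision outcome" flow.
# Thresholds and outcome names are hypothetical, not Facebook's real values.
def decide(matched_known_content: bool, violation_probability: float) -> str:
    if matched_known_content:             # matching: re-upload of known material
        return "remove"
    if violation_probability >= 0.90:     # prediction: high-confidence violation
        return "remove"
    if violation_probability >= 0.50:     # uncertain: queue for human moderators
        return "escalate_to_human_review"
    return "allow"

print(decide(False, 0.72))  # -> escalate_to_human_review
```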
Using a technique called Whole Post Integrity Embeddings (WPIE), Facebook’s systems ingest deluges of information: images, videos, text titles and bodies (with models that can translate between 100 languages),[xxxvii] comments, text in images from optical character recognition, transcribed text from audio recordings, user profiles, interactions between users, external context from the web, and knowledge-base information. Fusion models then combine these representations into millions of embeddings, which are used to train learning models that flag content for each category of violation.[xxxviii] In early January 2020, the company also released software that turns speech into text in real time, opening up the possibility of better captioning of live video.[xxxix] Nonetheless, not all content can be classified, even by humans. Some posts carry many shades of meaning or are heavily context-dependent, making it crucial to find the right balance between technology and human expertise.
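A toy NumPy sketch of the fusion idea follows. The dimensions and weights are random placeholders, since WPIE’s real architecture and parameters are not public; the sketch only shows the shape of the computation, per-modality embeddings fused into one vector that is scored per violation category.

```python
# Toy sketch of "whole post" fusion: per-modality embeddings are combined
# into one vector that a classifier scores per violation category.
# All weights are random placeholders, not Facebook's actual models.
import numpy as np

rng = np.random.default_rng(0)
D = 16  # embedding width per modality (illustrative)

# Pretend these came from image, text, and audio-transcript encoders.
image_emb = rng.normal(size=D)
text_emb = rng.normal(size=D)
audio_emb = rng.normal(size=D)

# Fusion: concatenate modalities, project to a joint post embedding.
fused = np.concatenate([image_emb, text_emb, audio_emb])  # shape (3D,)
W_fuse = rng.normal(size=(D, 3 * D))
post_embedding = np.tanh(W_fuse @ fused)                  # shape (D,)

# One sigmoid score per violation category.
categories = ["terrorism", "hate_speech", "spam"]
W_cls = rng.normal(size=(len(categories), D))
scores = 1 / (1 + np.exp(-(W_cls @ post_embedding)))
print(dict(zip(categories, scores.round(3))))
```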
In 2018, when Facebook stated that 99% of terrorist content on the platform was deleted, the Counter Extremism Project found that some of the most prolific Islamist extremists remained active on Facebook.[xl] Today, for instance, the Islamist preacher who reportedly played a role in radicalizing the Bataclan suicide bomber[xli] Omar Mostefai, through sermons at a Paris mosque, continues, at the time of writing, to have an active presence online, including on his official Facebook page. The same goes for Yusuf al-Qaradawi, banned from entering the United States, the United Kingdom, and France for his declared support for suicide bombings and incitement of Islamist violence, who still keeps, at the time of writing, his official Facebook page as well as a few Facebook fan accounts.
Preventing social media exploitation: public sector plans
The Impact Assessment[xlii] of the “Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online”[xliii] states that terrorist content online is a multifaceted security challenge due to a complex legal framework at the member state level. This situation is complicated by the fact that Article 3 of the EU’s 2000 e-commerce directive, created before the advent of peer-to-peer internet technology and social media[xliv], establishes the principle of the country of origin, which ensures that providers of online services are subject to the law of the member state in which they are established and not the law of the member states where the service is accessible.[xlv] However, the Directive on electronic commerce does not preclude a court of a Member State from ordering a hosting provider, such as Facebook, to remove identical and, in certain circumstances, equivalent comments previously declared to be illegal.[xlvi]
Still, state monitoring and flagging of illegal content online are fraught with difficulties. In France, for instance, the most important element in the fight against online radicalization and terrorist propaganda is the PHAROS system (platform for harmonization, analysis, cross-checking, and orientation of reports).[xlvii] The platform, which now counts 28 investigators (police and gendarmes), was established in 2009 with an initial investment of €100,000,[xlviii] recently proposed to be increased to €500,000.[xlix] It sits within the central office for the fight against crime linked to information and communication technologies (OCLCTIC), under the sub-directorate for the fight against cybercrime of the central directorate of the judicial police. Investigators at PHAROS monitor various information and communication services in France and produced more than 228,000 reports in 2019.[l] Moreover, as part of a European Union-wide testing campaign, the unit notified Twitter, Facebook, and YouTube of 796 pieces of content, of which 512 were withdrawn. Unfortunately, the murder of Samuel Paty on 16 October exposed many of the drawbacks of French and social media platforms’ online counter-terrorism efforts. A student’s parent had expressed via Facebook and WhatsApp his disapproval of Paty’s teaching methods and produced a video against him. The content was quickly disseminated online but not flagged immediately,[li] even though Paty had filed a complaint with the police after being made aware of threats on social media[lii] and an NGO had reported the attacker’s Twitter account to the authorities in July 2020.[liii]
In Austria, in the wake of the January 2015[liv] Charlie Hebdo attacks in Paris, the government announced a €290mn plan to fight jihadist terror: €126mn went into hiring new personnel with special skills, including specialists in cybersecurity, crime-fighting, and forensics; €34mn targeted special IT upgrades, such as the Schengen Information System database and evidence-collection software; and €12mn was allocated to online and offline deradicalization efforts, including awareness education.[lv] In December 2020, the National Council passed a comprehensive legislative package, including the Communications Platforms Act and the Hate-on-the-Net Fight Act, which had already passed in autumn 2020, to curb hate speech, threats, and other illegal content on large social media platforms such as Facebook. Most of the package takes effect on January 1, 2021, with platform operators having until the end of March 2021 to implement the new protection measures.[lvi] In particular, the Austrian law is based on Germany’s Network Enforcement Act (NetzDG), under which users report potentially illegal content and platforms must then decide whether it is illegal and, if so, delete it within 24 hours of the report. Under NetzDG, online platforms face fines of up to €50 million for systemic failure to delete illegal content.[lvii] Despite five years of such measures and investments, and given the sheer volume of content, there are no plans for preventive government control; courts will only be able to check afterward whether a platform has acted illegally.[lviii]
A multidisciplinary, long term, and cooperative strategy
As in most areas of counter-terrorism, a multidisciplinary approach is the only way to understand the online extremist environment, effectively counter the spread of jihadist propaganda, and detect dangerous subjects through social media. Cooperation, with responsibilities accepted by both the public and the private sector, is the best method to counter the spread of terrorism online and create a resilient environment. And to date, not all projects have been frustrated by a lack of factual data.
At the EU level, providers of online services, together with Europol, developed a database of hashes that allows content identified as harmful to be tagged electronically, preventing it from reappearing. The database contains over 300,000 unique hashes of known terrorist videos and images.[lix] This made possible Check-the-Web (CtW), accessible only to law enforcement: an electronic reference library of jihadist terrorist online propaganda. It contains structured information on original statements, publications, videos, and audio produced by jihadi terrorist groups and their supporters: an operational tool for identifying not only new content, groups, or media outlets but also new trends and patterns in terrorist propaganda, as well as operational leads for attributing crimes to perpetrators.[lx]
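In essence, such a database supports two operations: registering a hash of known harmful media and checking new uploads against the set. The sketch below uses plain SHA-256, which only catches byte-identical re-uploads; production systems rely instead on perceptual hashes that survive re-encoding and cropping.

```python
# Minimal sketch of a shared hash database used to keep known terrorist
# media from reappearing. SHA-256 is used here for simplicity; real
# deployments use perceptual hashing robust to re-encoding.
import hashlib

known_hashes = set()

def register_violating_media(data: bytes) -> None:
    """Tag content identified as harmful so re-uploads can be blocked."""
    known_hashes.add(hashlib.sha256(data).hexdigest())

def is_known_violation(data: bytes) -> bool:
    """Check an upload against the shared database before it goes live."""
    return hashlib.sha256(data).hexdigest() in known_hashes

video = b"<bytes of a known propaganda video>"
register_violating_media(video)
assert is_known_violation(video)          # identical re-upload is caught
assert not is_known_violation(b"other")   # novel content goes to other checks
```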
With an annual budget[lxi] of about €150mn, an increase of over €62mn since 2010,[lxii] of which roughly €1mn is spent on research and development projects and €700,000 on maintenance of Europol’s decryption platform,[lxiii] Europol is succeeding in countering extremism online through repressive operations, analysis of the jihadist online environment, and cooperation with the private sector. One instance is the 16th Referral Action Day, an operation joined by nine online service providers, including Telegram, Google, Files.fm, Twitter, and Instagram, which pushed a significant portion of key actors within the Daesh network off Telegram and, most importantly, established further cooperation with global private firms operating in the social media environment.[lxiv]
Europol’s example also stresses that, in terms of terrorist attack prevention, adding more data to databases does not in every case help to find potential attackers. A lot of work can still be done, at the public and private levels, in understanding the online environment and all of its communication aspects, improving technology, investing in public awareness so that terrorist content is reported to the authorities or to online service providers, and investing in the recruitment and training of content moderators. In this framework, no single actor can be held accountable alone. Each one could, and still can, do something more by renouncing a little bit of ego to counter a widespread and still not effectively assessed threat.
[i] European Council. EU’s response to the terrorist threat. https://www.consilium.europa.eu/en/policies/fight-against-terrorism/
[ii] European Council (November 13, 2020) Joint statement by the EU home affairs ministers on the recent terrorist attacks in Europe. https://www.consilium.europa.eu/en/press/press-releases/2020/11/13/joint-statement-by-the-eu-home-affairs-ministers-on-the-recent-terrorist-attacks-in-europe/#
[iii] I. von Behr, A.Reding, C. Edwards, L. Gribbon (2013) Radicalisation in the digital era – The use of the internet in 15 cases of terrorism and extremism. RAND https://www.rand.org/content/dam/rand/pubs/research_reports/RR400/RR453/RAND_RR453.pdf
[iv] W. Wiewiórowski (November 19, 2020) The Future of Encryption in the EU. ISOC 2020 Webinar. https://edps.europa.eu/sites/edp/files/publication/2020-19-11-the_future_of_encryption_eu_en.pdf
[v] https://www.whatsapp.com/security/?lang=en
[vii] https://core.telegram.org/api/end-to-end
[viii] A. Greenberg (October 10, 2020) Facebook Says Encrypting Messenger by Default Will Take Years. Wired. https://www.wired.com/story/facebook-messenger-end-to-end-encryption-default/
[ix] Gazzetta Ufficiale (January 11, 2018) DECRETO LEGISLATIVO 29 dicembre 2017, n. 216. https://www.gazzettaufficiale.it/eli/id/2018/01/11/18G00002/sg
[x] CNBC (May 5, 2017) Senator reveals that the FBI paid $900,000 to hack into San Bernardino killer’s iPhone. https://www.cnbc.com/2017/05/05/dianne-feinstein-reveals-fbi-paid-900000-to-hack-into-killers-iphone.html
[xi] (April 5, 2018) CEP To Facebook: Zuckerberg Must Explain Failure To Remove Extremist Content. Counter Extremism Project. https://www.counterextremism.com/press/cep-facebook-zuckerberg-must-explain-failure-remove-extremist-content
[xii] K. Sengupta (April 27, 2017) Last message left by Westminster attacker Khalid Masood uncovered by security agencies. The Independent. https://www.independent.co.uk/news/uk/crime/last-message-left-westminster-attacker-khalid-masood-uncovered-security-agencies-a7706561.html
[xiii] R. Musotto, D.S. Wall (December 16, 2020) Facebook’s push for end-to-end encryption is good news for user privacy, as well as terrorists and paedophiles. The Conversation. https://theconversation.com/facebooks-push-for-end-to-end-encryption-is-good-news-for-user-privacy-as-well-as-terrorists-and-paedophiles-128782
[xiv] M.Zuckerberg (March 6, 2019) A Privacy-Focused Vision for Social Networking. https://www.facebook.com/notes/mark-zuckerberg/a-privacy-focused-vision-for-social-networking/10156700570096634/
[xv] I. Metha (October 31, 2019) Facebook is testing end-to-end encryption for secret Messenger calls. TNW. https://thenextweb.com/facebook/2019/10/31/facebook-is-testing-end-to-end-encryption-for-secret-messenger-calls/
[xvi] Z. Doffman (October 6, 2019) Here Is What Facebook Won’t Tell You About Message Encryption. Forbes. https://www.forbes.com/sites/zakdoffman/2019/10/06/is-facebooks-new-encryption-fight-hiding-a-ruthless-secret-agenda/#6ec67b3b5699
[xvii] J. Clement (November 24, 2020) Facebook: number of monthly active users worldwide 2008-2020. Statista. https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/
[xviii] H. Abelson, R. Anderson, S. M. Bellovin, J. Benaloh, M. Blaze, W. Diffie, J. Gilmore, M. Green, S. Landau, P.G. Neumann, R.L. Rivest, J.I. Schiller, B. Schneier, M. Specter, D.J. Weitzner (July 7, 2015) Keys Under Doormats: mandating insecurity by requiring government access to all data and communications. https://www.schneier.com/wp-content/uploads/2016/09/paper-keys-under-doormats-CSAIL.pdf
[xix] M. Zuckerberg (May 3, 2017) https://www.facebook.com/zuck/posts/10103695315624661
[xx] O. Solon (May 25, 2017) Underpaid and overburdened: the life of a Facebook moderator. The Guardian. https://www.theguardian.com/news/2017/may/25/facebook-moderator-underpaid-overburdened-extreme-content
[xxi] (May 24, 2017) How Facebook guides moderators on terrorist content. The Guardian. https://www.theguardian.com/news/gallery/2017/may/24/how-facebook-guides-moderators-on-terrorist-content
[xxii] P.M. Barret (June 2020) Who Moderates the Social Media Giants? A Call to End Outsourcing. NYU Stern. https://bhr.stern.nyu.edu/tech-content-moderation-june-2020
[xxiii] S.E. Garcia (September 25, 2018) Ex-Content Moderator Sues Facebook, Saying Violent Images Caused Her PTSD. The New York Times. https://www.nytimes.com/2018/09/25/technology/facebook-moderator-job-ptsd-lawsuit.html
[xxiv] C. Newton (May 12, 2020) Facebook will pay $52 million in settlement with moderators who developed PTSD on the job. The Verge. https://www.theverge.com/2020/5/12/21255870/facebook-content-moderator-settlement-scola-ptsd-mental-health
[xxv] Compound annual growth rate (CAGR) is the net gain or loss of an investment over a specified time period that would be required for an investment to grow from its beginning balance to its ending balance, assuming the profits were reinvested at the end of each year of the investment’s lifespan. https://www.investopedia.com/terms/c/cagr.asp
[xxvi] Cisco Annual Internet Report (March 9, 2020) https://www.cisco.com/c/en/us/solutions/collateral/executive-perspectives/annual-internet-report/white-paper-c11-741490.html
[xxvii] E. Dwoskin, N. Tiku (March 24, 2020) Facebook sent home thousands of human moderators due to the coronavirus. Now the algorithms are in charge. The Washington Post. https://www.washingtonpost.com/technology/2020/03/23/facebook-moderators-coronavirus/
[xxviii] Q. Wong (June 19, 2019) Facebook content moderation is an ugly business. Here’s who does it. CNet. https://www.cnet.com/news/facebook-content-moderation-is-an-ugly-business-heres-who-does-it/
[xxix] D. Gilbert (January 9, 2020) Facebook Is Forcing Its Moderators to Log Every Second of Their Days. Vice News. https://www.vice.com/en/article/z3beea/facebook-moderators-lawsuit-ptsd-trauma-tracking-bathroom-breaks
[xxx] Cambridge Consultants (2019) USE OF AI IN ONLINE CONTENT MODERATION. Ofcom. https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf
[xxxi] C. Jee (June 8, 2020) Facebook needs 30,000 of its own content moderators, says a new report. MIT Technology Review. https://www.technologyreview.com/2020/06/08/1002894/facebook-needs-30000-of-its-own-content-moderators-says-a-new-report/
[xxxii] R. Levy (August 11, 2020) Facebook Removed Nearly 40% More Terrorist Content in Second Quarter. The Wall Street Journal. https://www.wsj.com/articles/facebook-removed-nearly-40-more-terrorist-content-in-second-quarter-11597162013
[xxxiii] CEP Staff (October 12, 2020) Updated: Tracking Facebook’s Policy Changes. Counter Extremism Project. https://www.counterextremism.com/blog/updated-tracking-facebook%E2%80%99s-policy-changes
[xxxiv] D. Uberti (July 9, 2020) Why Some Hate Speech Continues to Elude Facebook’s AI Machinery. The Wall Street Journal. https://www.wsj.com/articles/facebooks-artificial-intelligence-doesnt-eliminate-objectionable-content-report-finds-11594287000
[xxxv] R. Gorwa, R. Binns, C. Katzenbach (February 28, 2020) Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Sage Journals. https://journals.sagepub.com/doi/full/10.1177/2053951719897945
[xxxvi] J. Vincent (February 27, 2019) AI won’t relieve the misery of Facebook’s human moderators. The Verge. https://www.theverge.com/2019/2/27/18242724/facebook-moderation-ai-artificial-intelligence-platforms
[xxxvii] J. Khan (November 19, 2020) Facebook’s A.I. is getting better at finding malicious content—but it won’t solve the company’s problems. Fortune. https://fortune.com/2020/11/19/facebook-ai-content-problems-artificial-intelligence/
[xxxviii] K. Wiggers (November 13, 2020) Facebook’s redoubled AI efforts won’t stop the spread of harmful content. Venture beat. https://venturebeat.com/2020/11/13/facebooks-redoubled-ai-efforts-wont-stop-the-spread-of-harmful-content/
[xxxix] Facebook AI (January 13, 2020) Online speech recognition with wav2letter@anywhere. https://ai.facebook.com/blog/online-speech-recognition-with-wav2letteranywhere/
[xl] (April 5, 2018) CEP To Facebook: Zuckerberg Must Explain Failure To Remove Extremist Content. Counter Extremism Project. https://www.counterextremism.com/press/cep-facebook-zuckerberg-must-explain-failure-remove-extremist-content
[xli] A. Robertson (June 27, 2017) Terror suspect arrested in Birmingham and facing extradition to Spain is imam father-of-eight who preached to Bataclan bomber before Paris attacks. The Daily Mail. https://www.dailymail.co.uk/news/article-4646058/Police-arrest-ISIS-supporter-Birmingham.html
[xlii] European Commission (September 12, 2018) COMMISSION STAFF WORKING DOCUMENT IMPACT ASSESSMENT Accompanying the document Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content onlinehttps://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=SWD:2018:0408:FIN:EN:PDF
[xliii] F. Théron (March 2020) Terrorist content online Tackling online terrorist propaganda. European Parliamentary Research Service (EPRS) https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/649326/EPRS_BRI(2020)649326_EN.pdf
[xliv] Organization for Security and Co-operation in Europe Office of the Representative on Freedom of the Media (October 15, 2020) LEGAL REVIEW OF THE AUSTRIAN FEDERAL ACT ON MEASURES TO PROTECT USERS ON COMMUNICATIONS PLATFORMS [KOMMUNIKATIONSPLATTFORMEN-GESETZ – KOPI-G]. OSCE. https://www.osce.org/files/f/documents/7/8/467292_1.pdf
[xlv] European Commission. E-Commerce Directive. https://ec.europa.eu/digital-single-market/en/e-commerce-directive
[xlvi] Court of Justice of the European Union (October 3, 2019) PRESS RELEASE No 128/19. https://curia.europa.eu/jcms/upload/docs/application/pdf/2019-10/cp190128en.pdf
[xlvii] (04/02/2020) Lutte contre terrorisme – Moyens de l’OCLCTIC. Assemblée nationale. https://questions.assemblee-nationale.fr/q15/15-26385QE.htm
[xlviii] J.V. Placé (October 22, 2013) Police, gendarmerie: what investment strategy?. Sénat. https://www.senat.fr/rap/r13-091/r13-091_mono.html
[xlix] Session of December 3, 2020. Sénat. https://www.senat.fr/basile/visio.do?id=d48936220201203_20&idtable=d48936220201203_20|d48936220201119_6&_c=pharos&rch=ds&de=20191229&au=20201229&dp=1+an&radio=dp&aff=65702&tri=p&off=0&afd=ppr&afd=ppl&afd=pjl&afd=cvn
[l] B. Saragerova (November 29, 2020) France: Towards stronger counter-terrorism regulation online. Global Risk Insights. https://globalriskinsights.com/2020/11/france-towards-stronger-counter-terrorism-regulation-online/
[li] E. Braun, L. Kayali (October 19, 2020) French terror attack highlights social media policing gaps. Politico. https://www.politico.eu/article/french-terror-attack-sheds-new-light-on-social-media-policing-gaps/?utm_source=Tech+Against+Terrorism&utm_campaign=32d761c344-EMAIL_CAMPAIGN_2019_03_24_07_51_COPY_01&utm_medium=email&utm_term=0_cb464fdb7d-32d761c344-162374915
[lii] LCI (October 18, 2020) Pourquoi Samuel Paty n’a-t-il pas fait l’objet d’une protection policière? https://www.lci.fr/police/professeur-decapite-pourquoi-samuel-paty-n-a-t-il-pas-fait-l-objet-d-une-protection-policiere-2167627.html
[liii] A. Zemouri (October 17, 2020) Le père qui avait diffusé la vidéo hostile au professeur d’histoire en garde à vue. Le Point. https://www.lepoint.fr/societe/le-pere-qui-avait-diffuse-la-video-hostile-au-professeur-d-histoire-en-garde-a-vue-17-10-2020-2396817_23.php#
[liv] Parlamentskorrespondenz Nr. 152 (February 02, 2015) Nationalrat beschließt neues Islamgesetz. Österreichisches Parlament. https://www.parlament.gv.at/PAKT/PR/JAHR_2015/PK0152/index.shtml
[lv] (January 21, 2020) Austria’s €290m plan to fight terror. The Local. https://www.thelocal.at/20150121/austrias-290m-plan-to-fight-terror
[lvi] Counter Extremism Project. Austria: Extremism & Counter-Extremism. https://www.counterextremism.com/countries/austria
[lvii] CEPS Project. The Impact of the German NetzdG law. https://www.ceps.eu/ceps-projects/the-impact-of-the-german-netzdg-law/
[lviii] P. Grüll (July 4, 2020) Austria’s online hate speech law prompts question marks about ‘overblocking’. EURACTIV. https://www.euractiv.com/section/data-protection/news/austrias-law-against-online-hate-speech-question-marks-in-the-home-stretch/
[lix] European Commission. A Counter-Terrorism Agenda for the EU and a stronger mandate for Europol: Questions and Answers.https://ec.europa.eu/commission/presscorner/detail/en/qanda_20_2325
[lx] Europol (October 13, 2020) EU IRU TRANSPARENCY REPORT 2019. https://www.europol.europa.eu/publications-documents/eu-iru-transparency-report-2019
[lxi] EU Budget 2020 – Europol Position Paper. https://www.europarl.europa.eu/cmsdata/186846/7-Europol-Paper-EU-Budget-2020-original.pdf
[lxii] D. Clark (October 12, 2020) Annual budget of Europol in the European Union from 2010 to 2020. Statista. https://www.statista.com/statistics/1178070/europol-budget/
[lxiii] STATEMENT OF REVENUE AND EXPENDITURE OF THE EUROPEAN UNION AGENCY FOR LAW ENFORCEMENT COOPERATION FOR THE FINANCIAL YEAR 2020 – AMENDING BUDGET NO 2. https://www.europol.europa.eu/about-europol/finance-budget
[lxiv] (November 22, 2019) REFERRAL ACTION DAY AGAINST ISLAMIC STATE ONLINE TERRORIST PROPAGANDA. Europol. https://www.europol.europa.eu/newsroom/news/referral-action-day-against-islamic-state-online-terrorist-propaganda