Article by Dr Adrian Venables: Anti-social media – the rise in online censorship
Social media has been one of the defining technologies of the 21st century. Previously, Internet users had primarily been consumers of information, but these websites and applications enabled everybody to become content producers. The introduction of the iPhone in 2007 contributed to social media's growth by providing mobile Internet access and freeing users from the constraints of desktop and laptop computers. Combined with the launch of software optimised for mobile devices, this has driven a steady increase in user numbers, which are expected to exceed 3 billion by 2021.
The first decade of social media was a halcyon period in which the medium was regarded as harmless entertainment and was mostly free from state interference. However, as its power to inform, influence and alter behaviour became increasingly apparent, governments began to take a closer interest in the online behaviour of their populations. Although its exact contribution is still debated, social media played a role in the ‘Arab Spring’ of 2011. Popular with the younger generation, it enabled protesters to organise gatherings and spread news of events to a wider global audience. Initially caught unawares, affected governments responded swiftly by blocking access to social media sites and even temporarily severing Internet connections.
Whereas the results of mobilising populations in the Arab Spring were immediate, obvious and sometimes violent, another, subtler effect was seen later in the decade. The 2016 Trump election victory is now infamous for the widespread and systematic online information campaign directed at the US population. This included what became known as fake news - outrageous news headlines designed to tempt readers to click on the stories and their embedded advertisements to generate revenue for their creators. More insidious, though, was the Russian government interference in the election, described by the 2019 Mueller Report as ‘sweeping and systematic’. This was conducted through the St Petersburg-based Internet Research Agency, which combined a range of techniques to generate sentiment favourable to the Trump campaign. In addition to conventional paid advertisements, the Russians generated fake accounts purporting to be from US citizens. These online personas, termed sockpuppets, were used to comment on, promote or defend an issue. This was achieved by posing as a leader of a reputable group, a reliable news source or a trusted individual, which simulated grassroots support for Trump - a process termed astroturfing. These were supplemented with trolls: accounts set up to create disruption and division by posting provocative, misleading or pointless comments. In addition to the human operators, automated programmes termed spambots were also used to open accounts and generate traffic to develop online engagement. By clever use of hashtags, it was possible to manipulate the algorithms used by social media to dominate online discussion.
Although social media is a technology-based medium, it harnesses some very human characteristics, which can be employed to manipulate and influence users. These were utilised in the 2016 US election and are now being widely employed to censor and control behaviour. The first of these is homophily, which is the tendency for people to have ties with those of similar beliefs. Social media users tend to self-censor by only associating with those of similar views and accessing news outlets that promote stories that do not challenge existing viewpoints. In time, this leads users to only be exposed to opinions that coincide with their own. This reinforces their opinions as being the ‘right’ ones or those of the majority and does not allow alternative ideas to be considered. This comfort zone is termed an echo chamber and can result in a very narrow perspective on an issue, with social media and some search engines contributing to the process. First termed the filter bubble by Internet activist Eli Pariser, this is the situation that occurs when website algorithms selectively provide information based on past browsing behaviour. This can be illustrated by comparing the results returned from similar inputs to different Internet search engines, and by how Facebook’s personalised news stream differs from other news sources. As beliefs become more entrenched, confirmation bias can emerge. This is the situation in which individuals favour information that aligns with their preconceived knowledge, even if flawed, and choose to disregard alternative opinions.
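The filter-bubble mechanism described above can be sketched with a toy ranking example (all story titles, tags and the scoring rule are invented for illustration and do not represent any real platform's algorithm): if a feed ranks stories by their overlap with a user's click history, each click narrows what the user sees next.

```python
# Toy illustration of a filter bubble: a feed that ranks stories by
# overlap with the user's click history. Items and scores are made up;
# real platforms use far richer signals than tag overlap.

def similarity(story_tags, history_tags):
    """Fraction of a story's tags already present in the click history."""
    if not story_tags:
        return 0.0
    return len(story_tags & history_tags) / len(story_tags)

def rank_feed(stories, history_tags, top_k=2):
    """Return the top_k stories most similar to past clicks."""
    return sorted(stories,
                  key=lambda s: similarity(s["tags"], history_tags),
                  reverse=True)[:top_k]

stories = [
    {"title": "Candidate A rally draws crowds", "tags": {"candidate_a", "rally"}},
    {"title": "Candidate A policy praised",     "tags": {"candidate_a", "policy"}},
    {"title": "Candidate B policy explained",   "tags": {"candidate_b", "policy"}},
    {"title": "Neutral fact-check roundup",     "tags": {"fact_check"}},
]

# The user has only ever clicked Candidate A items...
history = {"candidate_a"}
for story in rank_feed(stories, history):
    print(story["title"])
# ...so both recommendations reinforce the existing interest, while the
# fact-check and opposing-view stories never surface.
```

Running the sketch shows only the two Candidate A stories being recommended; as the user clicks those, the history narrows further, which is the self-reinforcing loop the paragraph describes.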
Homophily, echo chambers and confirmation bias are human traits, and yet even for those seeking alternative perspectives, online censorship and manipulation are increasingly preventing access to some opinions. Recent research by Northwestern University in the US highlighted a potential bias in Google’s search algorithm that favoured predominantly left-leaning news organisations in its rankings. This is particularly significant as online news sources are gaining prominence over traditional media organisations. Moreover, social media now often pushes news items to users, who may not actively seek other information sources. Combined with the prevalence, ease of access and convenience of online resources, consumers may be subject to unconscious bias and censorship without their knowledge.
For those who do wish to access alternative views and form their own opinions, the range of online resources available may be increasingly limited. Following the role that fake news played in the 2016 elections, governments and news organisations are increasingly citing it as the reason for censoring and removing material. Whereas attempts to verify and confirm the factual content of stories are admirable, there is a danger that their definition of fake news will spread to unpopular or divisive news. Countries such as China are well known for their authoritarian control over the Internet within their borders, but others are also seeking to control what may be posted. Russia has recently introduced a new law that could effectively disconnect its Internet from the rest of the world. Justified as ensuring resilience in case of a cyberattack from abroad, the law directs all traffic through centrally controlled routers, which could also enable the monitoring and filtering of information originating outside Russia. The EU’s new copyright law, which applies to social media companies, has also raised concerns. With 19 nations voting in favour and 9, including Estonia, voting against or abstaining, the law is intended to bring existing regulations into the online age by making Internet platforms liable for content uploaded to their sites. Licences must be obtained from rights holders for copyrighted works to be hosted, with filtering used to remove unauthorised material. Critics have stated that this is impracticable and unworkable and will result in online expression and free speech being curtailed. Faced with prosecution, some believe that Internet companies will take the safe course of action and remove the majority of images and media currently available online.
Freedom of expression and online free speech are becoming closely entwined with censorship and are increasingly contentious issues in western democracies. Opinions and views vary, but with the Internet’s infrastructure and websites owned by either governments or private companies, consumers have little influence over how it operates. In May 2019, Facebook removed a number of prominent conservative figures from its platform, labelling them as ‘dangerous’. Critics were quick to point out that several far-left activist groups openly advocating violence remained active. European governments are also active in policing online content. The UK is very active in this area, with specialist units devoted to monitoring social media. Under the justification of investigating ‘hate crimes’, social media activity can be sufficient to attract police interest if a post causes someone to be offended. This has effectively muted many forms of debate and criticism on contentious issues, including gender and religion. Restricting freedom of expression and free speech may lead to what is termed a spiral of silence. Proposed by political scientist Elisabeth Noelle-Neumann in 1974, the term relates to the tendency of people to remain silent on an issue when they feel that their views are in opposition to the majority. Today, an individual may remain silent, feeling prevented from expressing an opinion online for fear of being accused of a ‘hate crime’. This in turn deters others from stating a similar view and leads to the views of a silent majority being suppressed by a vocal minority.
At the opening of the Estonian Riigikogu (Parliament) on 25 April 2019, President Kersti Kaljulaid wore a sweatshirt with the slogan 'Sõna on vaba' (the word is free). This commitment to free speech was again emphasised during a meeting with the European Federation of Journalists a month later. Freedom of speech is binary – you either have it, or you do not. With complete freedom of expression comes the acceptance that those you disagree with, including extremists of all persuasions, will have a free platform. Once limitations are imposed, the challenge is deciding where to draw the line and accepting the risk that the restrictions may increase over time. It will be for future generations to debate whether the pre-2016 Internet and its social media applications, with their permissive approach to free speech, were better than what they subsequently became. That is, of course, if they will be allowed to debate such issues online.
Author: Dr Adrian Venables, TalTech Centre for Digital Forensics and Cyber Security senior researcher
The article was published in Edasi.org.
PhD Thesis Defence: Bernhards Blumbergs
On Monday, May 27th, 2019 at 9:00 AM, Bernhards Blumbergs, PhD student of the Department of Software Science and the Centre for Digital Forensics and Cyber Security workgroup (supervisors Prof. Rain Ottis and Dr. Risto Vaarandi), will defend his PhD thesis „Specialized Cyber Red Team Responsive Computer Network Operations“. The PhD defence will take place at the TalTech ICT building (Akadeemia tee 15a) in room ICT-315. Find the thesis in the digital library: https://digi.lib.ttu.ee/i/?12015.
Opponents:
- Professor Dr. Hiroki Takakura, National Institute of Informatics, Tokyo, Japan
- Fregattenkapitän PD Dr. Dr. habil. Robert Koch, Bundeswehr University of Munich, Munich, Germany
Dr. Hayretdin Bahsi named Professor
We are happy to announce that Dr Hayretdin Bahsi from the TalTech Centre for Digital Forensics and Cyber Security has been named Professor!
Dr. Hayretdin Bahşi received his PhD from Sabancı University (Turkey) in 2010. He was involved in many R&D and consultancy projects on cyber security as a researcher, consultant, trainer, project manager and program coordinator at the Informatics and Information Security Research Centre of the Scientific and Technological Research Council of Turkey between 2000 and 2014.
His research interests include critical information infrastructure security and cyber situational awareness systems.
TalTech CyberCentre partner in the European Commission’s ECHO project
TalTech Centre for Digital Forensics and Cyber Security started collaboration as a partner in the ECHO project (European network of Cybersecurity centres and competence Hub for innovation and Operations).
The ECHO project is one of four Pilot projects, launched by the European Commission, to establish and operate a Cybersecurity Competence Network. The project was officially launched at the Conference Hall of the Royal Military Academy of Belgium, on February 25th, 2019.
The ECHO project will deliver an organized and coordinated approach to strengthen proactive cyber defence in the European Union, through effective and efficient multi-sector collaboration. The Partners will execute a 48-month work plan to develop, model and demonstrate a network of cyber research and competence centres, with a centre of research and competence at the hub. To make this vision a concrete reality in Europe, ECHO comprises 30 partners from 15 EU countries plus Ukraine, representing 14 industrial partners and 16 research institutes and academic organisations covering 13 cybersecurity disciplines. The project is funded by the European Union’s Horizon 2020 Research and Innovation Programme.
The press release about the kick-off of the project can be read here: www.echonetwork.eu/downloads/press-releases/press-release-kick-off/
For more information about the project please visit:
Alejandro Guerra Manzanares awarded at the Estonian Research Council student thesis competition
Alejandro Guerra Manzanares has been awarded at the Estonian Research Council student thesis competition in the category of science and engineering.
Our centre's PhD student and Early Stage Researcher Alejandro Guerra Manzanares was awarded third prize for his master's thesis “Application of full machine learning workflow for malware detection in Android on the basis of system calls and permissions” (supervised by Dr Hayretdin Bahsi and Dr Sven Nõmm).
Congratulations to Alejandro and to both of his supervisors!
Student Brief 2019
We’re offering a unique opportunity to meet the team of TalTech Centre for Digital Forensics and Cyber Security, learn more about the research interests of our academic staff, discuss internship options and introduce potential thesis supervisors and topics, as well as get detailed insight into the new Cyber Security Research Excellence Course.
Cyber Security MSc students are invited to join the Cyber Security Student Briefing on 5 November 2018, 15:00–17:00. The briefing will take place in auditorium U01-202 in the TalTech main building, Ehitajate tee 5 (located near the assembly hall).
- 14:45 Gathering
- 15:00 Welcome and introduction of the Centre by Prof. Rain Ottis
- Opening words for the Cyber Security Research Excellence Course by Prof. Olaf Maennel and Prof. Matthew Sorell
- 15:15 Introduction of members of the Centre. Presenting research interests of supervisors and thesis topics. Q&A.
- 16:30 Official launch and detailed insight into Cyber Security Research Excellence Course. Introduction of the objective, topics, and timeline. Q&A. Prof. Olaf Maennel and Prof. Matthew Sorell
People who might be interested in applying for the Cyber Security MSc program in the future are also welcome to join.
If you have any questions regarding the event, please contact kristi dot ainen at taltech dot ee.
Join us for the 5-year anniversary of ICR! Since 2015, the Tallinn University of Technology Centre for Digital Forensics and Cyber Security has been co-hosting the annual Interdisciplinary Cyber Research (ICR) workshop, which takes place at the Tallinn University of Technology.
The event brings together hundreds of participants from various academic backgrounds to share their research related to information and communication technologies. The ICR format is particularly appealing since the workshop promotes interdisciplinarity and thereby strives for synergy between technical and other (such as law, political science, psychology) research domains. Presentations for the event are carefully chosen via a double-blind peer-review process, and the extended abstracts are published in the ICR proceedings.
You can participate as a speaker (submitting an abstract and delivering a presentation) or simply join our wonderful audience. Speakers are requested to submit a 1,000-word abstract. Abstracts should explain the relevance of the research and outline the principal research questions, the expected or achieved results, and the research methods. In addition to young researchers and scholars, we welcome student submissions based on Master's or PhD thesis research (and bachelor-level students are very welcome to join as audience). All authors will receive feedback from our distinguished peer reviewers, and selected authors are invited to present their ideas at the workshop. All selected abstracts will be published as workshop proceedings by Tallinn University of Technology (with an ISBN). Selected authors are also invited to submit their research as academic articles to established academic journals, subject to an additional review process.
- ICR2019 on the 29 June 2019
- Call for abstracts deadline: 15 April 2019
- Notification of authors: 6 May 2019
- Registration open until: 25 June 2019
From Battlewatch to civvy street: keeping your people safe from attack
There’s no such thing as cyber security, just security – and it’s everybody’s problem, says Kieren Niĉolas Lovell, keynote speaker at the Jisc Security Conference. After a career spent battling pirates of the watery kind, he sets out what university IT teams can learn from the navy’s approach to security.
What do extinguishing a fire on a naval warship and tackling a security breach at a university have in common? Quite a lot, actually, according to Kieren Niĉolas Lovell. He should know. While Lovell is currently incident management specialist at Tallinn University of Technology in Estonia and spent three years as head of the computer emergency response team (CERT) at the University of Cambridge, in a previous life he was a Nato Battlewatch captain, charged with leading five warships against the pirate threat in Somali waters.
“If we were ever practising a fire aboard a ship, if somebody were to turn up with a fire extinguisher within two minutes of that fire starting, the fire was dead. Ship saved, no harm done. If they take more than two minutes then that small fire becomes a complete inferno. Time is of the essence. Dealing with a fire quickly and firmly is how you get it under control,” says Lovell. In contrast, universities tend to take the opposite approach to cyber attacks, with security teams practising scenarios in which a small incident happens and slowly gets bigger over three or four hours, until there is a big crescendo and the exercise stops.
“That sounds logical unless you’ve ever done an incident,” says Lovell. “It’s actually the other way round. It starts off as a little incident but quickly gets massively huge and chaotic before becoming smaller and more manageable as you deal with it. If you practise it the first way, with the gradual incline, you don’t manage the chaos – you’re slowly getting yourselves organised just as the incident is ramping up rather than quickly taking control and reducing it.”
At Cambridge, Lovell introduced the idea that – contrary to the university norm that experts are called in one by one as needed – the military approach is taken and everybody is called in at once and then sent away again if not needed. It reduces process and bureaucracy and ensures that the emergency team are all in place at the most critical time.
The progress of incidents is not the only similarity between the military and academia. Both sectors are drowning in too much information and that, says Lovell, means that crucial command, control and communication – those fundamental leadership and communication skills – are getting worse.
“Every university, every college, every department, every research group, all the staff, researchers and students are generating so much information – on Facebook, on Twitter, on every other network – all day every day and the divide between personal and work life is non-existent,” argues Lovell. “It provides an excellent baseline for launching personally targeted attacks, for emotional attacks.”
He gives the example of the “sexploitation emails” many universities have experienced. The emails, sent to staff and students, were along the lines of “you were on YouPorn last night at 9pm, I hacked into your webcam and I recorded it. If you don’t pay me one bitcoin I will publish the photos online”. The emails were completely fake and they didn’t have much of an impact. But then the attackers changed one thing. Using databases that had been leaked online in various breaches, such as LinkedIn and MySpace, they sent the same emails but included the user’s leaked usernames and password in each case. The attackers’ revenues went through the roof, according to the evidence of the Bitcoin stack.
“We’re seeing more and more of these social engineering attacks, which do not require any actual hacking because it’s now a lot harder to do a technical attack,” says Lovell. “Organisations have detection systems and firewalls. But when it comes to the individual we really don’t help them at all. We may have firewalls on our university network but 90% of people are using laptops, tablets, phones – they are not always at the office. People are always working from home, airports, everywhere and none of these tools really help unless you’re helping to protect the individual. That’s what we need to change our mindset to – help the individual to protect their own data so that, collectively, our organisation is better protected.”
End-user education is, of course, the first line of defence – if it is done in the right way. Lovell suggests emphasising that it is a human problem, not a technical problem, and encouraging users to understand and research what information they have put online that is still out there – all those abandoned accounts, from MySpace to Friends Reunited, that may well contain embarrassing conversations and photos. At Tallinn, Lovell also shows teams of researchers how easy it is to use the same intelligence-gathering techniques against naval warships. While the actual cyber security on a ship is quite high, the exercise shows how you can achieve full compromise of an entire warship and track ship movements just by using Twitter, Facebook and Snapchat.
“When I went on a nine-month deployment in the navy it was much easier because you didn’t have so much connection on a phone – I had a phone to make phone calls, that was it really. But now your entire life is on there and you communicate entirely through Facebook and WhatsApp. It’s against policy but it happens – you can’t expect sailors not to have that connection any more. But in doing that, because they are not entirely sure how this data can be used against them or against an armed force, they don’t know that they are sometimes unwittingly putting themselves and their fellow sailors at risk. It’s exactly the same issue we have in universities and organisations and blue chip companies,” warns Lovell.
His second solution to the human problem draws, again, on his naval experience: to get universities to share when things go wrong and not to be embarrassed by it.
“There’s a sentence within the IT security industry that is stolen from the military: the ‘need to know principle’. Unfortunately, that’s not the military principle at all – it’s half the sentence. The full military one is ‘need to know, responsibility to share’. That completely changes the whole dynamic. Yes, people should know and secure data and look after it but if anything goes wrong you have a responsibility to share with your industry partners, your friends, your colleagues, even your competitors, that this is going on,” says Lovell, offering a good example of what happens when such information is not shared.
“Around three years ago at the University of Cambridge we had a payday fraud. About six or seven months later I was at a conference in London and I was talking about this fraud. I could see faces dropping as other universities said, ‘we’ve had that’. Analysing the data it was as clear as day that it was the exact same people and the exact same approach but because we hadn’t told anybody about it, and they hadn’t told us, the attackers were just burning through from one university to the next and the next, stealing thousands of pounds.”
Lovell commends the work that Jisc has been doing with the community in this area and believes that, as a fear of loss of reputation is a key factor in the secrecy, “the only way I see us fixing it is having a safe space established within the Jisc community – and even within the international community as well in the university sector – to share information to better protect and better share from our collective experiences. It could be as simple as a Jisc web page where you report an incident that’s ongoing but you don’t actually say who you are. To be honest, I don’t really care who you are, I care who the attacker is and how they are doing it. That might be a way of getting over the political barrier and that mindset of ‘we can’t tell everybody that we’ve made a mistake’.”
This same fear of discovery is also frequently the attacker’s friend in social engineering scams such as the sexploitation emails or dating fraud. Even when victims do get up the courage to inform authorities what has been happening, the crime is often not taken seriously because it is ‘cyber’ crime, which Lovell finds aggravating. For him, there is no such thing as cyber security, only security.
“We like to add the word ‘cyber’ to everything and it’s annoying – it’s just stupidity. For example, if you were mugged while walking on a London street and somebody steals £100 out of your wallet at knifepoint you would go to the police station, report the crime and it would be treated seriously. If I steal £1000 out of your bank account you’ll report it to Action Fraud and you’ll get an email in two days’ time. The effect is just the same, you still go through the same emotional issues, the breach of trust, the loss of money but we’ve added the word ‘cyber’ to it and taken it less seriously. But it’s not cyber money, it’s money. It’s not cyber crime, it’s crime.
“We try to hide behind it being an IT problem, but it’s everybody’s problem.”