by Mark Skilton
This text was written by my friend and colleague, Professor Mark Skilton, as an introduction to our book "Navigating New Cyber Risks". On Saturday, May 16, 2020, Mark passed away after a short but brave battle with cancer. This article is published in memory of Mark and his work. Mark's family is collecting funds to support cancer research. Please donate here if you can. Every donation (no matter how small) will be greatly appreciated!
Cybersecurity and Trust
This is a story about trust.
We live in an era that might be described as a “post-Snowden” world, in which long-held assumptions about trust and anonymity are being challenged by the convergence of new information technologies: what constitutes privacy and personal rights, what counts as classified information, and how information is communicated as fact or opinion.
Edward Snowden first exposed the bombshell story, through the journalist Glenn Greenwald and the Guardian newspaper in 2013, of how the National Security Agency (NSA) had collected millions of domestic phone records of unsuspecting citizens under a top-secret court order issued in April of that year. The facts which surfaced as a result of this story formed what was later described as a “treasure trove” of information that, at the time, produced a steady trickle of revelations: government wiretapping; spying on friendly and hostile politicians alike; tracking Google, Facebook, Microsoft, Apple, and many others; and the exposure of expert hacking techniques and of attempts to crack encryption and undermine the security of the entire Internet. Much of this was later contested and counter-argued, but the genie was out and Pandora’s box was open. Another innocence had been torn down. Having lived through the 9/11 attacks, the rise of international terrorism, and the 2008 financial crisis, humanity was now confronted with a new challenge: how to uncover and understand the underlying truths of what was real and what was hiding behind the façade of everyday living. Snowden, who was a contractor at the time he obtained the controversial materials from the NSA, disclosed his identity in the media. From his point of view, as a self-appointed “whistle-blower”, he had to raise a moral objection to surveillance practices hitherto unknown to the public at large, and he could not do so while remaining anonymous.
This in itself was not new: several years earlier, in 2010, Julian Assange had introduced a new term into the lexicon of several generations. That term was “WikiLeaks”, the name of an international non-profit organisation committed to publishing secret information, news leaks, and classified media provided by anonymous sources. Even though WikiLeaks was established in 2006, it was the disclosure in 2010 of 750,000 classified and sensitive military and governmental documents by Chelsea Manning, including war logs from the Iraq and Afghanistan conflicts, private cables from the State Department, and assessments of Guantánamo prisoners, that made the organization the most talked about in decades. The WikiLeaks publications and their subsequent media footprint revealed that information, and digital access to information, were and continue to be weapons as well as targets and valuable commodities for a large number of interested parties. In essence, information became the new valuable and monetizable asset, obtained through and exchanged by a variety of agents: individuals, groups, organizations, activists, and even nation states. It would seem that we live in the age of the liar’s paradox. The classical definition of the paradox refers to a situation in which a liar declares that he or she is lying. By making such a declaration the liar is telling the truth about his or her lying; yet the declaration does not change the fact that the liar is, in fact, lying.
Therefore, when we say that in the modern digitized world “all people lie”, what does this mean? If a particular person is a liar, is this a classic tautological contradiction, or is it a false statement that becomes invalid as soon as we can identify at least one person who is not a liar? How do you verify what is true and what is false? Whom do you trust? It is clear now that, with the development of digital technology, understanding what a “safe place” is, and learning how to verify and validate what is “true” and “safe”, have become a whole lot more complicated. Considering that we are struggling to define disclosure and protection in the context of new digital communication, it is hardly surprising that the terms privacy and safety become ever more obscure and context-dependent.
In 2013, 3 billion Yahoo! user account details were compromised in one of the largest data breaches in history. The stolen information included names, email addresses, telephone numbers, birth dates, encrypted passwords and, in some cases, security questions. It is believed that a “state-sponsored actor” was behind the act. Yahoo! allegedly remained unaware of the breach for three years and only disclosed it to the general public in 2016. Company breaches like the one which affected Yahoo! are, as such, not new, as innovative forms of insidious and panoptical attacks emerge on a daily if not hourly basis worldwide. Consider, for example, the 2015 attack in which malware lurked in the background of the computer systems of some of the world’s most successful banks and allowed a team of Russian-based hackers to steal €1 billion globally. Note that this is just the amount which was publicly disclosed. In reality, the figure could have been significantly higher, since the malware might have been in place at these banks for months or even years. We may also recall another global cyberattack in 2017, when the WannaCry malware was deployed in 110 countries worldwide. Its targets included large corporations as well as public entities: Telefónica, one of Spain’s largest telecommunications operators; Renault, the French automotive manufacturer; the German railway operator Deutsche Bahn; several ministries of the Russian government; FedEx; and the British National Health Service.
Cyber Crime, Its Motivation and Creativity
These examples lead to one simple conjecture: the scope and scale of modern cyberattacks, together with the fact that even the most resourceful and carefully designed cybersecurity systems can be bypassed by highly motivated adversaries, show that any individual, group, organization, or even a state as a whole can become a target. But the targets in our living and working spaces are not just people and organizations; they are also objects, buildings, vehicles, malls, highways, and infrastructure networks enabled with the Internet of Things (IoT), machine learning (ML) and artificial intelligence (AI) algorithms, blockchain technologies (BCT), 3D printing, and immersive new experiences built on virtual, augmented, mixed, and extended reality (VR, AR, MR, XR). Many of these new technologies are fusing into our products, services, and living spaces. Yet, despite all the advantages these technologies bring, they also make our lives more vulnerable to new cyberattacks and risks.
Exploiting security flaws in the IoT, and the detrimental consequences for urban infrastructure, were demonstrated in a case study of road traffic lights conducted by researchers at the University of Michigan in 2014. They showed that the unencrypted wireless connections widely used in traffic control systems, as well as default usernames and passwords which could be found online, made for easy attack targets. It soon became apparent that automotive vehicles and even aircraft could be compromised in a similar fashion. Moreover, the widespread nature of such cyberattacks delivered a simple message to individuals, businesses, and cities: the cyberthreats are real, and attacks are happening here and now, be it switching all the traffic lights simultaneously to green at a particular city junction, taking control of and disabling the engine of a Jeep Cherokee on a highway, or claiming to have bypassed the onboard flight engine control systems of an aircraft. In the aviation sector alone, over 1000 attacks per month were reported in 2016 by Strategy and Safety Management at the European Aviation Safety Agency (EASA). It is also clear that cybercriminals sharpen their sophistication not only by penetrating multiple sectors and systems; they also enhance their tools by carefully studying the cybersecurity safeguard mechanisms and algorithms used by various organizations. For example, a recent study of cybersecurity tools stolen from the CIA and the NSA showed that they were subsequently used by hackers in over half of all cyberattacks on the healthcare sector in 2017. That same year, the American Food and Drug Administration (FDA) requested the recall of nearly 500,000 heart pacemakers over a hacking risk, so that a firmware update could patch security vulnerabilities in the devices.
Apart from major attacks on personal devices and on organizational and urban infrastructure, cybercriminals are also targeting critical national infrastructure. For example, in 2018, the US Department of Homeland Security reported that Russian hackers had allegedly breached the control rooms of US utility networks via trusted vendors, gaining access to confidential information about equipment and to data on how the utility networks were configured. This created a real risk of adversarially controlled blackouts.
Whether the target of a particular cybercriminal is a heart pacemaker or a power station, the risk of a cyberattack threatens human lives rather than just human data. As a result, cybersecurity is progressively becoming a matter of national safety and security. The knowledge and intelligence required to carry out cyberattacks, and to design effective cyber defense mechanisms, are also no longer just the domain of human intelligence, human skills, and human experience. The rise of ML and AI is moving cyberattack vectors to a new level. In early 2017, it was reported that the commercial cybersecurity firm Darktrace had spotted a new type of attack on a client company in India. The cyberattack software used a sophisticated ML algorithm to observe and learn the patterns of normal user behavior inside the victim company’s computer network. The software then began to mimic user behavior, and as a result the malicious algorithm was almost undetectable by the company’s security architecture. In 2018, MIT researchers reported the rise of the weaponization of AI as a new AI-driven arms race with a broad set of manifestations, including the cyber-physical attacks on Ukraine’s national electricity infrastructure that plunged large parts of the country into darkness in December 2015 and 2016. The use of AI is also rapidly growing in both detecting and committing financial fraud. We also observe the rise of automated social media “bot farms”, which plant fake news and fake “likes” and generally imitate user traffic.
The Future of Cyber Defence
So, what is the future of cyberdefense systems?
The technology of cybersecurity attack and defense is constantly evolving. The keynote discussion at the annual Arm TechCon in 2017 hosted a debate about contemporary trends in security, and especially about the way in which cybersecurity systems should be built in order to safeguard and manage all areas of computing, from microchips to clouds. The ubiquity of the IoT era means that building security principles into the whole ecosystem of computing, from end use down to the chip-level architecture, is extremely important. Today, manufacturers, service providers, and even individual users need to move to a new culture of “security-by-design” and “security-in-use”, treating everything as potentially untrustworthy and subject to constant verification and validation. One manifestation of this new approach to cybersecurity is the set of so-called “zero-trust” systems and principles. This culture is necessary to protect users and enterprises, as well as the wider society. Yet it is also recognized that technology, including technology based on “zero-trust”, is not a silver bullet. Equally, AI-enabled technology cannot fully protect individuals and organizations: while we can use AI to monitor and detect patterns in human and machine behavior and activity much faster, 24 hours a day, 7 days a week, new threats may evolve that the AI has never encountered. AI-enabled security suffers from a number of other problems. For example, an AI algorithm may accrue bias (due to imperfections in the training sets which inform it) that degrades its performance, making it vulnerable to manipulation or prone to errors. Equally, legal frameworks, government policy, and the regulatory landscape need to catch up with rapidly evolving technological advances. If AI can increasingly mimic humans, operate at machine speeds, and multiply across vast attack surfaces, then technical analysis alone will not be enough.
The response needs to incorporate technical, cultural, psychological, legal, sociological, and policy aspects. Let us not forget that while individuals, organizations, and states have access to AI technology, so do cybercriminals, who can also make use of smart algorithmic solutions. It is now clear that the response to the cybersecurity risks of the future should be based on effective communication, information, knowledge, and intelligence shared among the individuals and organizations which collect and store valuable data. By exchanging information about cyberattacks, their features, and their patterns, these individuals and organizations will ensure that existing and future cybersecurity threats are easier to detect and alleviate. After all, individuals, businesses, and governments are facing cybercriminals who excel in communication and information sharing, and we now need to develop equivalent, if not more advanced, communication mechanisms and channels.
Business strategies must evolve not only to handle new kinds of cyberattacks, but also to meet rising expectations about compliance and personal data protection, as reflected in the new laws which have recently emerged in many countries worldwide: the European Union’s 2018 General Data Protection Regulation (GDPR); the 2002 US Homeland Security Act; industry data regulations in telecommunications, financial services, and healthcare; and the development of novel social media regulations.
Threats evolve, and governments and regulators formulate new laws to control digital expansion, privacy, and human digital rights. In 2016 the US government issued a cyberdefense readiness condition (DEFCON) scale, a cyber incident severity measure ranging from one (high risk of harm) to five (low risk of harm), which allowed individuals and organizations to gauge the severity of an imminent cyberthreat in different contexts.
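One subtlety of such a scale is that its ordering is inverted: lower numbers denote greater severity. As a purely illustrative sketch (the level names and the escalation threshold below are invented for this example and are not part of any official schema), a severity scale of this shape might be encoded as:

```python
from enum import IntEnum

class ThreatLevel(IntEnum):
    """Hypothetical severity levels; smaller value = more severe."""
    CRITICAL = 1  # high risk of harm
    SEVERE = 2
    ELEVATED = 3
    GUARDED = 4
    LOW = 5       # low risk of harm

def requires_escalation(level: ThreatLevel,
                        threshold: ThreatLevel = ThreatLevel.SEVERE) -> bool:
    # Because lower numbers mean greater severity, we escalate when the
    # level is at or BELOW the threshold, not above it.
    return level <= threshold

print(requires_escalation(ThreatLevel.CRITICAL))  # True
print(requires_escalation(ThreatLevel.LOW))       # False
```

The point of the sketch is simply that any tooling built on top of such a scale must respect the inverted ordering; comparing levels the "intuitive" way round would silently ignore the most serious incidents.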
Being ready, understanding threats and vulnerabilities, and managing consequences: all these components are necessary to respond adequately and appropriately to threats and exploits. While the DEFCON scale attempts to provide a useful and simple risk management tool, it is not clear whether, and to what extent, it is applicable in practice when addressing a live attack in real time.
It is also clear that cybersecurity is a complex and constantly evolving issue. In fact, the cybersecurity space is developing and changing so rapidly that by the time this is in print, many of our examples may seem dated: in a matter of weeks, days, or even hours, new attacks may occur which dwarf the events described above. In this regard, two very recent examples come to mind. In September 2018, Facebook reported a data breach in which up to 50 million account credentials were stolen, highlighting the importance and evolving nature of cybersecurity threats. What was worrying was not only the scale of the harmful impact, even though the incident was quickly reported to the Irish data regulator, but also the fact that Facebook had significantly underestimated the potential risk of such an attack when, in July 2017, 14 months prior to the breach, it introduced a new “update feature” into a product through which cybercriminals were able to infiltrate the system and gain access to user data. In a similar fashion, another digital giant, Google, revealed its decision in 2018 to shut down its social media product, Google+. Initial suspicions that this was due to low user uptake were overshadowed by the company’s admission that the network was being shut down over cybersecurity concerns. Google revealed that a vulnerability had been discovered in the system which put over 500,000 user profiles at risk. Even though Google stated that it had no reason to believe the discovered vulnerability was ever exploited by cybercriminals, time will tell whether Google+ data will surface somewhere on the Dark Web in the future.
These examples clearly show that even the largest digital companies, which earn their living by handling, analyzing, and packaging data into a variety of products, do not have the capacity to ensure safety across the enormous attack surface of the current, ever-expanding digital environment.
Considering all this, the main question is the following: what are the new cyber risks and how do we plan, build, and manage safe spaces in the digital age?