A computer chip represents a basic building block of digital technology. Chips are used in almost every area of human life, and their nature is such that we (as a society) manufacture billions of them and embed them in a multitude of gadgets and devices. This has direct implications for our security: if an attack works on one chip, the same type of attack will potentially work on billions of other chips. Such attacks are usually classified as class-level attacks, in that a whole group of technology can be affected if one instance is compromised. In that regard, chip design flaws (no matter how small) can have a major impact, turning into cyber-security vulnerabilities that then become lucrative exploit points for an attack, or even multiple attack vectors.
Why Chip Security Is Not Cheap at All
Consider, for example, the Spanish identity card case. In 2017, the Spanish identity card, built on an Infineon chip, was found to have the 'ROCA' flaw in Infineon's key pair-generation algorithm, which made it possible to discover a target's private key from the public key alone. Research later found that "...the Spanish identity card, which provide[d] enough information to hire online products such as mortgages or loans, was updated to incorporate a near-field communication chip as electronic passports do. This contactless interface [brought] a new attack vector for criminals, who [were able] to take advantage of the radio-frequency identification communication to virtually steal personal information." Essentially, exploiting the flaw could even allow attackers to revert or invalidate contracts that people had signed (in part because the Spanish protocols did not use timestamps for signatures in the past).
The card, called Documento Nacional de Identidad electrónico (or DNIe), had a chip containing two certificates, one for identification and one for electronic signatures. Identity cards rely on high-grade cryptographic keys, but the flawed generation algorithm (a cost-saving optimisation) left enough structure in the public key for the private factors to be recovered. Once the factors were known, the theoretical time to break a key dropped from the lifetime of the universe to roughly 20,000 computer hours, which is trivial for today's computing power; adversaries could thus orchestrate a very powerful attack quickly and efficiently, since the potential economic cost of the attack to a cybercriminal is low. A fix required all affected cards to be updated. On Infineon's disclosure of the vulnerability, the Spanish authorities revoked all certificates and stopped letting people sign documents with the card at the self-service terminals found at many police stations. That decision affected every card, not only those with the flaw. However, people could still digitally sign documents online, using a small card reader connected to their PCs.
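One notable property of the ROCA flaw is that it was detectable from the public key alone: vulnerable moduli carry a statistical fingerprint. The Python sketch below illustrates the published fingerprinting idea; the prime list and function names are illustrative assumptions, not Infineon's code or the official detection tool.

```python
# Sketch of a ROCA-style fingerprint test (illustrative subset of primes).
SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97]

def powers_of_65537_mod(p):
    """Return the multiplicative subgroup generated by 65537 mod p."""
    seen, x = set(), 1
    while x not in seen:
        seen.add(x)
        x = (x * 65537) % p
    return seen

def looks_roca_vulnerable(n):
    # Moduli from the flawed generator have the form 65537^a mod M
    # (M a product of small primes), so n mod p is always a power of
    # 65537 mod p. A random modulus almost never passes this test for
    # every prime simultaneously.
    return all(n % p in powers_of_65537_mod(p) for p in SMALL_PRIMES)
```

Because the test needs only the public modulus, researchers could scan public key repositories and estimate how many deployed keys were affected without touching any card.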
A similar situation occurred in the same year in Estonia, where a security flaw was discovered in the national eResident ID card. Approximately 750,000 cards were affected; the mobile-phone app version could still be used, but the certificates on the card required a fix. Other well-known chip design flaws, found in 2018, were the speculative-execution vulnerabilities Spectre and Meltdown. Meltdown is a hardware vulnerability affecting Intel x86 microprocessors, IBM POWER processors, and some ARM-based microprocessors; it allows a rogue process to read memory it should not be able to access, which is considered catastrophic. Spectre exploits the branch-prediction machinery of modern chip designs: predictions can be misdirected so that private data in the chip's memory becomes observable to attackers. Both Meltdown and Spectre are mitigated through hardware design changes and software patches.
Many tech companies (e.g., Apple and Microsoft) have since shipped counter-measures in their chipsets. Another chip-level threat is the backdoor. Backdoors are usually defined as "typically covert methods of bypassing normal authentication or encryption in a computer, product, embedded device, or its embodiment. Backdoors are most often used for securing remote access to a computer, or obtaining access to plaintext in cryptographic systems." In other words, backdoors aim to defeat the authentication/encryption of a device in order to gain access to it as a legitimate user would (or at a deeper level, e.g., through its working memory or storage).
The use of this technique may even include digital spying. A well-known case was the 'Weeping Angel' backdoor, implemented by the CIA's Center for Cyber Intelligence and disclosed to the public through WikiLeaks. The project used a backdoor to turn a particular model (the 2013 model) of Samsung smart TV into a remote listening device. The method included disabling the LED indicators (which show that the TV is on) and preventing the TV from disabling its Wi-Fi interface while the exploit ran, so that the set appeared to be switched off. In reality, the potential threat from this backdoor was probably not very serious: it could be completely disarmed by simply turning the TV's power off or removing the USB drive that had to be physically inserted into the TV. The greater danger in this case was not so much the spying itself as the disclosure of its mechanism: once the attack methods were released into the public domain, they could be picked up by hackers and re-tasked to orchestrate other types of attacks.
A high-profile example of an information-disclosure dilemma followed the terrorist attack in San Bernardino, California, in December 2015, which killed 14 people and injured 22. One of the two terrorists' phones, recovered in the aftermath of the attack, turned out to be an Apple iPhone 5C, locked with a four-digit passcode that would delete all data after ten failed attempts (i.e., the phone was cryptographically protected). The FBI, through the district courts, tried to compel Apple to develop backdoor software to bypass the phone's security (so that future attacks of this sort could be investigated or prevented). Apple declined, stating that the mere existence of such backdoor software would create vulnerabilities in all iPhones, putting all Apple customers at risk. Apple also called for a public discussion of the wider issues of personal privacy.
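The passcode mechanics described above make the defence easy to quantify. The worked numbers below assume a uniformly random four-digit passcode and the reported erase-after-ten-failures policy:

```python
# Four-digit passcode: 10^4 equally likely codes.
keyspace = 10 ** 4
attempts_before_wipe = 10

# Best case for a guessing attacker: ten distinct guesses before the
# device erases itself, i.e. a 0.1% chance of success. This is why the
# FBI wanted the retry/erase logic bypassed in software rather than
# attempting to guess the code directly.
p_success = attempts_before_wipe / keyspace
print(f"success probability before wipe: {p_success:.1%}")  # 0.1%
```

The small keyspace is only protective because of the retry limit; remove the limit (which is exactly what a backdoor would do) and all 10,000 codes can be tried in minutes.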
Securing the Billions of Connected Things
Clearly, any sensor, phone, PC, or other Internet-connected device is subject to attacks that can pull back critical information. While from an individual's perspective it may not matter much if someone can see the emails on their phone, the harm (or potential harm) depends heavily on the type of information being compromised as well as the magnitude of the attack. For example, if adversaries can access your information while you are logged into your online banking, or when a million people or more are doing their online banking, it becomes a big problem.
The solution to this type of class attack is to make every chip truly unique. This can be done through cryptographic innovation, mass customisation, changes at the physical silicon level, creating islands of isolation, and so on. For example, even if an attacker can enter a given PC through the USB port and reach the main processor, a separate (unconnected) security domain means the important information will not be compromised: the rest of the PC might be attacked, but this area remains isolated. This helps users (and organisations) to identify, remediate, and recover. Fortunately, the cost of such solutions is becoming more and more affordable. For example, organisations can now plan to improve security against class-level attacks by:
making chip sets unique
giving connected things identity
giving connected things a level of robustness
managing the ownership of connected things
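Giving each connected thing a unique identity can be as simple as deriving a per-device key from a factory master secret, so that no two chips share key material. The sketch below is a generic HMAC-based key-diversification example; the names and the scheme are illustrative assumptions, not any vendor's actual provisioning process.

```python
import hashlib
import hmac

def derive_device_key(master_key: bytes, device_id: bytes) -> bytes:
    """Derive a per-device key so each chip holds unique material.

    The master key would live only in the factory's hardware security
    module; each chip is provisioned with just its own derived key.
    """
    return hmac.new(master_key, device_id, hashlib.sha256).digest()

# Two devices provisioned from the same master get unrelated keys,
# so extracting one device's key does not break the whole class.
k1 = derive_device_key(b"factory-master-secret", b"device-0001")
k2 = derive_device_key(b"factory-master-secret", b"device-0002")
assert k1 != k2
```

The design choice here is the point of the section: an attack that recovers `k1` from one device yields nothing about `k2`, turning a class-level attack back into a per-device attack.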
Connected Things Threats
Traditional thinking assumes that when someone buys a PC or laptop, it can be kept secure because they will always be its only user, will never sell it, never dispose of it, and its data will never leak. This simply does not work. In the IoT world, many issues arise when we consider (unique) identities, such as:
How do you manage identity in ownership swap situations?
How do you manage identity in the connected devices?
In terms of IoT ownership and data privacy, who owns the data?
Consider connected cars. In the case of Jaguar Land Rover, if you sell your car privately, the buyer cannot take control of it until it has been zeroed at a dealership; if you buy through a dealership, the dealership will blank it. This is about clear asset ownership and the exchange of goods and services.
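The owner-binding behaviour described above can be sketched as a tiny state machine: pairing only succeeds on a blank device, and only the current owner (or a dealership acting for them) can zero it. This is a hypothetical toy model, not Jaguar Land Rover's actual protocol.

```python
class ConnectedCar:
    """Toy model of owner binding and ownership transfer."""

    def __init__(self):
        self.owner_key = None  # a blank ("zeroed") device has no owner

    def pair(self, new_owner_key: str) -> bool:
        # Pairing only succeeds on a blank device, so a private buyer
        # cannot take control of a car the seller never released.
        if self.owner_key is not None:
            return False
        self.owner_key = new_owner_key
        return True

    def factory_reset(self, current_owner_key: str) -> bool:
        # Only the current owner can zero the device, enabling a clean
        # hand-over of the asset.
        if current_owner_key != self.owner_key:
            return False
        self.owner_key = None
        return True
```

Usage mirrors the resale scenario: the buyer's `pair` call fails until the seller (or dealership) performs `factory_reset`, after which the buyer can pair normally.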
The question of who owns the data can be very obscure in a multi-level system with many actors and corporations involved directly or indirectly. You put your data on Facebook, which mines it to sell you services. There is an inherent transaction: you get access to the platform "for free", but you agree to terms and conditions that allow the platform to mine your data and serve you advertising that you may or may not want. People may be fine with this until they find that enough data has accumulated for the platform to start nudging their perspectives and preferences. This can work in unintended ways, from the reported generation of fake news, to influencing political elections in several countries, to the misuse of data that third-party companies obtain from Facebook without direct user consent (e.g., as seen in the Cambridge Analytica case). More recently, Facebook also came under fire for requesting users' telephone numbers for two-factor-authentication purposes and then exploiting that information for customer tracking and profiling.
Our digital footprints are huge, and the consequence is that these online data can remain out there for another decade or more without us knowing what will come of them. A University of Auckland (New Zealand) study reported that an average New Zealand citizen may appear in about 40 different databases, while an American citizen appears in about 200; this is information such as your age, date of birth, marital status, and where you live. Much of it cannot be changed (or does not change much) and can easily be used against you if seen by the "wrong" pair of eyes.
In the world of connected things, data is both a threat and an opportunity, and data ownership is key. If I own my data, then a whole range of services can be made available to me; this is the potential of the IoT business model. If I am getting lighting-as-a-service, do I want the lights to come on when I walk into the house? AI based on my data in my house (if it is under my control) is great. If it is all going back to a cloud and a third party (who may do whatever they see fit with my data), I do not want the service. Sure enough, parts of my data I may be quite happy to share with the utility provider. I may want to send data back to my insurance company. I may even opt for services such as an alarm system trained on my behavioural data: if an intruder enters my house and behaves in a way very different from the usual behavioural pattern, this might be used to trigger a break-in alert. Yet all this user convenience comes with a big IF: I only want these services IF my data is protected and, ultimately, owned by me. I do not want to give my data away for free, although I might be convinced to lease it in exchange for services. To sum up, we are still far from solving the security issues of connected things. To address them carefully, we will need to resolve many questions in the future, including, but not limited to:
Who owns the data?
How do we protect the data?
How do we ensure that the system operates correctly on the data it receives as input?