According to Wikipedia, Zero Trust refers to "an information security framework which states that organisations should not trust any entity inside or outside of their perimeter at any time. It provides the visibility and IT controls needed to secure, manage and monitor every device, user, app and network being used to access business data. It also involves on-device detection and remediation of threats." In other words, Zero Trust represents a "never trust, always verify" philosophy. We have previously pointed out the complexity and problematic nature of Zero Trust as a concept. Yet, Zero Trust is problematic not only because it is based on paradoxical principles; it also creates many issues "in use". This post looks into Zero Trust applications in more detail.
Zero Trust = Zero Creative Work?
Zero Trust systems are currently utilised by many businesses worldwide, which generally consider them a model for more effective and "usable" security. Yet, how "usable" are they exactly? If we look behind the scenes, Zero Trust is, essentially, a combination of governance procedures, compliance policies, corporate codes of conduct and technology, which may manifest itself in such tools as identity and access management, multi-factor authentication, encryption, file system permissions, scoring, automation and orchestration, as well as "on-the-fly" and "ex post" threat analytics. All these tools, simultaneously or in some combination, are usually applied in order to:
(i) identify the user who is trying to connect to the system;
(ii) determine the endpoint where the connection attempt is coming from; and
(iii) establish this endpoint's security status.
Armed with (i), (ii) and (iii), a Zero Trust system then applies a conditional policy, which prescribes which user/endpoint combinations can be "trusted", i.e., can be granted access to different parts of the system.
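To make steps (i)–(iii) and the conditional policy concrete, here is a minimal, purely illustrative sketch; all user names, device names and rules are hypothetical, and a real Zero Trust product would evaluate far richer signals:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str               # (i) the user trying to connect
    endpoint: str           # (ii) the endpoint the attempt comes from
    endpoint_healthy: bool  # (iii) the endpoint's security status

# Hypothetical conditional policy: which user/endpoint combinations
# are "trusted" for which segment of the system.
POLICY = {
    "crm": {("alice", "managed-laptop-01"), ("bob", "call-centre-pc-07")},
}

def is_allowed(request: AccessRequest, segment: str) -> bool:
    """Grant access only if the endpoint passes its security check AND
    the user/endpoint pair is explicitly trusted for that segment."""
    if not request.endpoint_healthy:
        return False
    return (request.user, request.endpoint) in POLICY.get(segment, set())
```

Note how the default is denial: any user, endpoint or segment not explicitly listed in the policy is refused, which is exactly what makes ad hoc, unforeseen work so hard under such a system.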
Zero Trust systems may seem attractive at first sight: they allow cybersecurity teams within organisations to apply a very manageable approach to tackling cyber threats by, for example, micro-segmenting parts of their environments and then defining access rights for each segment. Yet, one of the main issues with this approach is that it only works for well-established, repetitive, and well-defined business tasks, essentially freezing any "out-of-the-box" creative tasks. In other words, if the nature of your work can be described in detail, and you know that today you are going to do exactly what you did yesterday and that tomorrow you will do the same tasks as today, then Zero Trust might work for you. For example, if you work at a call centre talking to clients day in and day out, accessing client information from the same endpoint under the same conditions every day, some form of Zero Trust architecture might make sense. Even then, however, we need to emphasise that Zero Trust does not prevent security breaches from happening (see the previous post by Boris Taratine for more details). If you crack the protocol once, you can do it again. One can think of a Zero Trust system as a secure facility where everyone has a picture ID. Yet, if your ID is stolen, adversaries can (and most probably will) gain unauthorised access to the facility using your ID, and the security team will be very unlikely to find out.
However, if the nature of your business is creative, it is highly probable that Zero Trust will simply mean Zero Work for you. In many organisations, data science teams, development teams, behavioural insights teams, and other creative teams and departments engaged in non-standardised, innovative tasks face daily challenges when they have to work under Zero Trust conditions. Such teams often require access to multiple segments of organisational systems, e.g., to complete quick ad hoc analytical (or other creative) assignments; and the reality of Zero Trust is often such that these teams wait for weeks, if not months, to be granted access to the data and tools they need in order to be productive.
The Inner Xenomorph
Essentially, one of the main issues with Zero Trust systems and creative work is that such systems are built on the philosophy that any user should only have the "minimum access level rights" necessary for their role. We can think of this restriction as very similar to the principles applied in shepherding or cattle management, where the system provides fencing and guiding rules and does not tolerate any deviation from them. Yet, businesses often need creative, "out-of-the-box" inputs to survive.
[Image source: 20th Century Fox, 2010]
To give you a specific example: with my team, I often do consultancy projects for companies across many different industries. Most of the time, these projects involve building innovative analytics or AI tools. Yet, what can you possibly do if you are restricted to local corporate PCs or laptops that can only run Excel, and even within Excel the functionality is extremely limited? I am not even going to describe the difficulties in getting access to the necessary data variables... Of course, we do find a way to get Python or R working on these machines, but this usually takes many weeks, if not months, of constant negotiations. And my team is not an exception here. Security is important, of course, but very often what ends up happening is that instead of delivering the outputs that are needed, creative teams have to deliver outputs conditional on the data and resources they can access under Zero Trust. And, as you can imagine, the insights from such outputs are often far from ideal. Like the Xenomorphs of Ridley Scott's "Alien", which are mutated hybrids of their own species and the host species, such "Zero-Trust-restricted" outputs often represent a mutated hybrid of what a creative team needs to do and the restrictive Zero Trust organisational mindset.
In various organisations and in many contexts, Zero Trust systems limit creativity. So what is a potential solution to this problem? I am a realist, and I understand that Zero Trust systems will continue to be used, simply because they give businesses a psychological perception of increased security. Yet, even though such systems will remain in use, creative work within organisations can potentially be enabled using "safe sandpits": safe places where data and access are "de-risked". For example, in the case of customer data, datasets can be stripped of all identifying (and otherwise compromising) information, and then simplified access policies (which do not bend security standards and yet make access easier) can be developed to allow creative teams to work with these datasets. In 2019, we already saw a trend towards creating such sandpits, which is very encouraging. This, of course, is not the only possible solution. Yet, in order to support creativity within organisations, we do need to defeat their "Inner Xenomorphs", which emerge as a result of Zero Trust implementations.
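The "de-risking" step of such a sandpit could be sketched as follows; this is a toy illustration under my own assumptions (field names and the choice of a truncated hash as a pseudonymous ID are hypothetical), not a recipe for production-grade anonymisation:

```python
import hashlib

# Hypothetical customer records; "name" and "email" are directly identifying,
# while "spend" is the analytical variable a creative team actually needs.
customers = [
    {"name": "Jane Doe", "email": "jane@example.com", "spend": 120.5},
    {"name": "John Roe", "email": "john@example.com", "spend": 87.0},
]

IDENTIFYING_FIELDS = {"name", "email"}

def de_risk(record: dict) -> dict:
    """Drop directly identifying fields, replacing them with a one-way
    pseudonymous ID so that records can still be linked across tables."""
    token = hashlib.sha256(record["email"].encode()).hexdigest()[:12]
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    cleaned["customer_id"] = token
    return cleaned

safe_dataset = [de_risk(r) for r in customers]
```

The resulting dataset keeps its analytical value but no longer exposes who the customers are, which is what makes a simplified access policy for the sandpit defensible without bending security standards.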