The modern paradigm of hyperconnectivity has softened the boundary between the vital networks underpinning our critical infrastructure and the external networks in general use. Notions of a secure perimeter, even with respect to supposedly air-gapped systems that have no external communication path to the untrusted environment, have been proven flawed by numerous cyberattacks.
At the network edge, someone, or something, will always provide a means to penetrate critical infrastructure networks. Within this context, network permeability is a significant concern. Unauthorized access can allow a threat actor to steal information that offers high-quality intelligence or the opportunity for financial gain, or to compromise the operation of physical assets. In the latter case, the potential for harm and loss of life resulting from a system malfunction should not be underestimated.
Modern control systems are used to monitor and control our power, water, fuel distribution, agricultural and industrial processes, and transportation services. They are a complex mesh of distributed sensors, actuators, and programmable logic controllers (PLCs), with a human machine interface (HMI) at the heart of the supervisory control and data acquisition (SCADA) system. Every day, the cyber-physical architectures built on these components supply the critical products and services that enable the modern economy. But why might these systems fail in the first place, and what can go wrong when they do?
Perhaps the most notorious example of how the cyber domain can produce real consequences in the physical environment is the Stuxnet cyberattack, first reported in 2010. Sophisticated malware, probably introduced from the edge by external contractors during periodic maintenance, crossed into an air-gapped control network and targeted the centrifuge arrays used for the enrichment of uranium at the Natanz nuclear facility in Iran. By infiltrating the PLCs that controlled the centrifuges' operating parameters and overriding fail-safe procedures, the malware caused catastrophic damage at the Natanz installation, temporarily disrupting the output of enriched uranium.
The Stuxnet attack led to indiscriminate infection of global industrial control systems, highlighting the global risk posed by any similar cyberattack prosecuted by a nation-state or advanced persistent threat (APT). More recently, we witnessed the Industroyer attack against the Ukrainian power grid in 2016 and the Triton attack in 2017, directed against the safety instrumented systems (SIS) of industrial control systems. In July 2020, a joint alert issued by the US National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA) recommended immediate steps be taken to protect critical infrastructure.
In May 2021, cybercriminals attacked the Colonial Pipeline company using DarkSide ransomware, and the associated data loss precipitated a precautionary total shutdown of the largest fuel pipeline in the United States. Distribution of 100 million gallons of fuel per day was disrupted, and a $4.4 million ransom was paid in cryptocurrency to enable operations to restart. In view of the disruption to supply and the price volatility that ensued, the CEO of Colonial Pipeline decided to pay the ransom on the basis that “it was the right thing to do for the country.” The economic and strategic value of critical infrastructure systems, coupled with an upward trend in the number of attempts to disrupt these vital assets, emphasizes their vulnerability to cyberattacks and the need for a new approach to data and application security.
Although techniques such as network segmentation can help limit an intruder's lateral movement, an attacker may still gain access to an area of the network where cyber-physical systems can be compromised. Countermeasures are required that support the integrity and resilience of critical systems even when the host network has been infiltrated. A new technology with features that enable robust protection of sensitive applications and data is confidential computing, as defined by the Confidential Computing Consortium project of the Linux Foundation.
An important characteristic of confidential computing is the property of attestation. Application code and data are secured within a protected memory region, referred to as a trusted execution environment (TEE), and this protected memory region is provided with a set of cryptographic signatures. The signatures allow for mutual and remote verification of the hardware platform and integrity validation of deployed software. Access to the secure memory region is restricted to prevent malicious applications or privileged users from recovering or altering either the application code or the data in use. Consequently, dependent operations can remain resilient to network compromise, and the provenance of data and applications can be verified using the cryptographic identity of the TEE, in accordance with the requirements of zero-trust architecture.
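The attestation flow described above can be sketched in a simplified form: the TEE produces a measurement (a hash of the loaded code), binds it to a verifier-supplied nonce for freshness, and signs the result; the verifier checks the signature and compares the measurement against an expected value. This is an illustrative sketch only. Real TEEs (for example, Intel SGX) sign quotes with a hardware-rooted asymmetric key and a vendor attestation service; here an HMAC with a shared key stands in for that signature so the example is self-contained, and all names are hypothetical.

```python
import hashlib
import hmac
import os

# Stand-in for the hardware root of trust; in practice this is an
# asymmetric key fused into the platform, never a shared secret.
PLATFORM_KEY = os.urandom(32)

def measure(code: bytes) -> bytes:
    """Measurement: a cryptographic hash of the code loaded into the TEE."""
    return hashlib.sha256(code).digest()

def generate_quote(code: bytes, nonce: bytes):
    """Enclave side: bind the measurement to the verifier's nonce and sign it."""
    m = measure(code)
    sig = hmac.new(PLATFORM_KEY, m + nonce, hashlib.sha256).digest()
    return m, sig

def verify_quote(m: bytes, sig: bytes, nonce: bytes, expected: bytes) -> bool:
    """Verifier side: check signature validity, freshness, and the expected
    measurement before trusting the remote workload."""
    good_sig = hmac.compare_digest(
        sig, hmac.new(PLATFORM_KEY, m + nonce, hashlib.sha256).digest())
    return good_sig and hmac.compare_digest(m, expected)

# Example: a supervisory application proves its integrity to a remote verifier.
code = b"PLC supervisory logic v1.0"   # hypothetical workload
nonce = os.urandom(16)                 # verifier's freshness challenge
m, sig = generate_quote(code, nonce)
print(verify_quote(m, sig, nonce, measure(code)))        # genuine code verifies
print(verify_quote(m, sig, nonce, measure(b"tampered"))) # altered code is rejected
```

The key design point is that trust is anchored in the platform key and the measurement, not in the surrounding network: even on a compromised host, a tampered workload cannot produce a quote that matches the expected measurement.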
As confidential computing becomes increasingly available across the cloud infrastructure and the hardware platforms underpinning our critical infrastructure, the enhanced data and application security provided by this nascent technology looks set to play a key role in protecting the safe and secure function of our most vital services. Whether it is deployed within the network perimeter to protect SCADA systems, or at the edge, where sensors and PLCs interface with physical assets, confidential computing offers additional defense in depth for the industrial control systems upon which we all depend.