
As cybersecurity struggles to keep pace, particularly in the world of cloud security, the next step in securing sensitive data is data that can defend itself, including self-protecting digital identities.

Throughout much of history, safeguarding data privacy has centered on two things: controlling access to private information and encrypting the information itself. Encoding information was a simple yet effective strategy. However, as electronic communications began to dominate in the 20th century, a paradigm shift occurred. The focus moved almost exclusively toward securing transmission channels, because the sheer volume of communications made uniquely encrypting individual messages unmanageable. The concept hinged on the belief that as long as those communications channels were impenetrable, so was the data inside.

On a completely parallel track, as computers started taking over more and more functions in the 1960s, data storage became a problem to overcome. The industry went from punch cards to digital files stored on reel-to-reel tapes, hard drives, and successive generations of media, culminating in today’s cloud storage mechanisms. Unfortunately, security wasn’t an initial concern when the transition from punch cards to digital files occurred. Since then, the critical cybersecurity question has been “How do we protect the mechanisms that control digital files?” rather than “How do we protect the digital file itself?”

The trajectories of digital files and cryptography first collided with the realization that computers needed to pass information between themselves to further computing capabilities. There is little to no technical difference between computer-to-computer transmissions and other digital signals. Hence, computer networking (and the entire internet) was built on telecommunications infrastructure that was fundamentally the same as it had been in the 1970s and ‘80s — all the way back to the early modems that first brought the internet to millions of homes through services like Prodigy, CompuServe, and AOL.

The complexity of the data security puzzle is well illustrated by Amazon’s 1995 debut as an online bookstore, which introduced the world to “e-commerce.” While secure online data transmission mattered long before online shopping, sending payment information over the internet pushed the safety of these transmission channels into the public consciousness as never before.

As these needs changed, firewalls evolved from guarding a single network perimeter to enforcing a variety of micro-perimeters for more effective and secure transmission networks. Today, advanced firewalls classify traffic, control access, and tie application usage to IP addresses and user identities.

However, this approach, though it has served us well for decades, is increasingly showing its limitations. Even the most secure transmission channels and data storage locations are vulnerable to sophisticated cyberattacks. The threat landscape is not static; it is dynamic and increasingly menacing, particularly with the advent of AI and quantum computing. These technologies, with their immense computational power, have the potential to unravel even the most robust network-based security protocols.

Within this context, I believe the time is ripe for a “back to basics” approach that encompasses not only data security but the very basics of digital data storage. This re-imagined approach means encoding and securing the data files directly rather than relying solely on the security provided by the devices or constructs acting as perimeters in a Defense-in-Depth model. If security is embedded directly into the data (or data file), it remains protected irrespective of the channel through which it travels or where it is stored.

This approach is not just a nod to the past but a strategic adaptation for the future. Cloud computing tests the older paradigm as it increases the rate at which data moves through communications channels — traversing from on-premises infrastructure to cloud servers, between various cloud services, and back to end users. Each of those movements exposes the data to numerous threat vectors. A back-to-basics approach can potentially obviate swaths of threat vectors rather than attempting to address them one at a time. 

Data security at the file level can transform each piece of data or file into a self-protecting entity capable of defending itself in a landscape where traditional perimeters are increasingly irrelevant. This method not only provides a more robust defense against current threats but also makes data far more resilient against emerging threat vectors like quantum computing, which renders newer asymmetric encryption methods (such as RSA and elliptic-curve cryptography) largely ineffective while leaving traditional symmetric methods relatively sound.
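The asymmetry of the quantum threat can be seen in effective key strength: Shor’s algorithm breaks RSA and elliptic-curve schemes outright regardless of key size, while Grover’s algorithm gives only a quadratic speedup on brute-force search, roughly halving a symmetric key’s effective length. The sketch below encodes that standard back-of-the-envelope estimate; it is a rule of thumb, not a formal security analysis.

```go
package main

import "fmt"

// effectiveSymmetricBits estimates post-quantum symmetric strength:
// Grover's quadratic speedup roughly halves the effective key length.
func effectiveSymmetricBits(keyBits int) int {
	return keyBits / 2
}

func main() {
	// Asymmetric schemes (RSA, ECC) fall to Shor's algorithm outright,
	// so they have no comparable post-quantum margin.
	fmt.Println(effectiveSymmetricBits(128)) // AES-128 → ~64-bit margin, considered weak
	fmt.Println(effectiveSymmetricBits(256)) // AES-256 → ~128-bit margin, still sound
}
```

This is why AES-256 is commonly cited as quantum-resilient while RSA and ECC are not.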

It is crucial to continue innovating and advancing our cybersecurity technologies and practices, but we must simultaneously question, and keep sight of, the foundational principles that have long governed secure information. In the current and foreseeable landscape, where threats evolve at an unprecedented pace, the status quo has proven unable to keep up. Embracing data-level security represents both a return to basics and a leap forward into the future of cybersecurity.

About Rich Streeter

Richard Streeter is the Operations Director of Sertainty Federal. Previously, he spent almost three decades in the intelligence community between the US Navy and the private sector.