POST WRITTEN BY
Gaurav Banga
Founder and CEO of Balbix. Serves on the board of several companies.
We live in a world where it appears to be a matter of when, not if, an enterprise is breached. Billions of dollars have been spent on beefing up cybersecurity, but the bad guys keep winning. Securing even a small organization seems quite hard.
How did we get here? Is there hope?
The computer industry has had to fight against evolving sophistication in threats throughout its history. I have been working in cybersecurity since the early ’90s, beginning in college when cybersecurity and the internet were still in their infancy, and in 2015 I started a third-generation cybersecurity company where I currently serve as CEO. We can point to two historical shifts in cybersecurity market conditions that led to a step up in the complexity and scope of attacks and consequently fueled fantastic innovations. With both innovative waves, there were early adopters and laggards — with consequences for who got breached. We are right in the middle of a third major transition in cybersecurity, and by understanding the changes happening, you can be better prepared to protect your organization.
The Early Days (Through 2005)
Up until 1995, the internet was quite small and most of the valuable information online consisted of research papers. Computer threats were typically in the form of hackers trying to gain access to U.S. military computers, often by targeting university research programs linked to the government (e.g., the case chronicled in The Cuckoo’s Egg).
In 1988, a Cornell graduate student named Robert Morris created a computer worm that crippled a large fraction of the then-small internet. This spurred the creation of the firewall as a mechanism to restrict outside access to internal network resources and led to the establishment of the CERT Coordination Center (CERT/CC) at Carnegie Mellon University as a central point for coordinating responses to these types of emergencies.
As the IBM PC gained popularity, computer viruses, which spread over infected floppy disks, became an issue. Anti-virus software was invented to scan executable files and the boot sectors of floppy disks and hard drives for patterns of code and data (“signatures”) that were known to be malicious.
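To make the mechanism concrete, here is a minimal sketch of signature-based scanning in Python. The signature database, byte patterns and file names are invented placeholders, not real malware indicators; actual scanners of that era also checked boot sectors and carried far larger signature sets.

```python
# Minimal sketch of signature-based scanning, in the spirit of early anti-virus
# tools. The "signatures" below are invented placeholders, not real indicators.
from pathlib import Path

SIGNATURES = {
    "Example.FileInfector": b"\x90\x90\xeb\xfe",            # hypothetical byte pattern
    "Example.BootVirus":    bytes.fromhex("deadbeef4d5a"),   # hypothetical byte pattern
}

def scan_file(path: Path) -> list[str]:
    """Return names of any known-malicious patterns found in the file."""
    data = path.read_bytes()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

if __name__ == "__main__":
    for target in Path(".").glob("*.exe"):
        matches = scan_file(target)
        status = f"INFECTED ({', '.join(matches)})" if matches else "clean"
        print(f"{target}: {status}")
```

The approach is simple and fast, which is why it worked well early on, but it can only catch malware whose pattern is already in the database.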
During those early days, anti-virus software and firewalls were quite effective, but not everyone adopted them. As internet usage grew, viruses began to spread online. In 2000, the ILOVEYOU worm infected tens of millions of PCs, causing email systems across the globe to fail as it stole network passwords and sent them to remote locations.
The Rise Of Mobile And Cloud (2005–2015)
Attackers continued to develop increasingly sophisticated techniques to bypass cyber defenses. The web protocol HTTP, usually whitelisted by firewalls, became a favorite method of delivering malware. Polymorphic techniques made it difficult to create anti-virus signatures; by 2008, we were seeing nearly 500,000 new pieces of malware monthly. The rise of mobile and cloud computing created hundreds of new entry points for organizations to worry about. Attacks became multi-staged and were designed to avoid detection, which led to the coining of the term “advanced persistent threat” (APT). Traditional firewalls and anti-virus systems were woefully inadequate against the cybersecurity challenges coming to the fore.
These developments led to the emergence of next-generation firewalls, which attempt to detect and block undesirable HTTP content, and new endpoint solutions that look for malicious behavior instead of relying on signatures. There was a move toward centralized visibility of assets with the evolution of security information and event management systems (SIEMs). To improve cloud and mobile security, cloud access security brokers (CASBs) and mobile security platforms were developed. While effective, these new tools were slow to be adopted.
A 2013 breach at Yahoo! exposed the credentials of 3 billion users — a massive compromise on a scale previously unheard of. In 2015, the U.S. Government’s Office of Personnel Management (OPM) announced that sensitive records of nearly 25 million people, including fingerprints, had been compromised. Both incidents highlighted how breaches can cost hundreds of millions of dollars, and underscored the difficulties in securing large networks in spite of significant investments.
The Era Of Infinite Data, AI And IoT (2015–Present)
A typical enterprise network today consists of a bewildering variety of assets: traditional infrastructure, applications, managed and unmanaged endpoints (fixed and mobile), internet of things (IoT) devices and cloud services. Each element can be attacked via numerous methods. Users can be phished, and supply chain trust relationships can be leveraged to launch attacks. The enterprise attack surface is massive and growing rapidly.
Unfortunately, not all organizations understand their security posture: the number, type and business value of their assets, the applicable vulnerabilities and threats, and the (in)effectiveness of their security controls. As a result, the right decisions don’t get made, and the correct actions don’t get prioritized, leaving the enterprise wide open to attack and compromise. With both the Equifax breach and WannaCry, leading indicators of the vulnerabilities later exploited by attackers were buried in a sea of unprioritized security data and were not acted upon in a timely fashion.
The challenges of a practically infinite attack surface have given rise to a new wave of innovation. In this third wave, innovators are beginning to act upon the asymmetric nature of cybersecurity by becoming proactive. By leveraging deep learning and specialized artificial intelligence (AI) techniques, we can continuously discover and analyze the massive attack surface. The goal is to have comprehensive, predictive assessments of breach risk that prescribe and prioritize the correct mitigating steps to avoid breaches. We are also creating intelligent playbooks to automate and streamline tasks such as vulnerability management, incident response and compliance.
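As an illustration of what risk-based prioritization can look like, here is a minimal sketch in Python. The assets, vulnerability identifiers, likelihoods and impact weights are entirely hypothetical; in practice these inputs would be derived continuously from observation of the environment rather than hard-coded.

```python
# Minimal sketch of prioritizing mitigations by breach risk.
# Every asset, identifier and score below is a hypothetical example.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str                  # affected asset
    vuln: str                   # vulnerability identifier (placeholder)
    exploit_likelihood: float   # 0..1 estimated chance of exploitation
    business_impact: float      # 0..1 relative value of what is at risk

def breach_risk(f: Finding) -> float:
    """Simple model: risk = likelihood of exploitation times business impact."""
    return f.exploit_likelihood * f.business_impact

findings = [
    Finding("payroll-db",    "VULN-0001", exploit_likelihood=0.9, business_impact=0.95),
    Finding("dev-laptop-17", "VULN-0002", exploit_likelihood=0.7, business_impact=0.30),
    Finding("public-web-01", "VULN-0003", exploit_likelihood=0.5, business_impact=0.80),
]

# Mitigate the highest-risk findings first.
for f in sorted(findings, key=breach_risk, reverse=True):
    print(f"{breach_risk(f):.2f}  {f.asset:<15} {f.vuln}")
```

The point is not the particular formula but the ordering: with a ranked list, scarce remediation effort goes to the exposures most likely to cause a damaging breach.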
Third-wave cybersecurity innovators also realize that all software is inherently fragile and subject to human error, which can easily be exploited. Therefore, at scale, some compromised systems are inevitable. The focus is on improving cyber-resilience using zero-trust techniques that limit the impact of cyber incidents in both time and space.
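Here is a sketch of the zero-trust idea under a toy policy: every request is checked on its own merits (identity, device health, an explicit least-privilege grant) and credentials expire quickly, so a single compromised system or stolen token has a bounded blast radius. The roles, resources and timeout values below are hypothetical.

```python
# Minimal sketch of a zero-trust style access decision. All policy values,
# roles and resources here are hypothetical examples.
import time

MAX_TOKEN_AGE_SECONDS = 15 * 60  # short-lived credentials bound impact in time

# Explicit least-privilege allow list; anything not listed is denied.
ALLOWED = {
    ("payroll-admin", "payroll-db"),
    ("web-service", "orders-api"),
}

def allow_request(role: str, resource: str, token_issued_at: float, device_healthy: bool) -> bool:
    """Deny by default; allow only fresh, healthy, explicitly permitted requests."""
    if time.time() - token_issued_at > MAX_TOKEN_AGE_SECONDS:
        return False                        # stale token: force re-authentication
    if not device_healthy:
        return False                        # device posture check failed
    return (role, resource) in ALLOWED      # no implicit trust from network location

# Fresh token, healthy device, explicitly permitted pairing: allowed.
print(allow_request("payroll-admin", "payroll-db", time.time(), device_healthy=True))
# Same identity asking for a resource it was never granted: denied.
print(allow_request("payroll-admin", "orders-api", time.time(), device_healthy=True))
```

Because nothing is trusted by default, an attacker who lands on one machine cannot quietly roam the rest of the network, which is what "limiting impact in space" means in practice.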
Last but not least, board members, CEOs and CFOs are scrutinizing security spend. Cybersecurity is shifting from being project-oriented to outcome-oriented. New investments must yield a measurable reduction in breach risk.
We have entered an era where cybersecurity is no longer a human-scale problem. It will require a collaboration between humans and innovative machine learning techniques to meet the challenge of the latest cyber threats.