Google’s Historic Cybersecurity Breakthrough

VISITING EXPERT. Cyber threats are evolving rapidly, but a significant discovery by Google’s artificial intelligence is changing the game for vulnerability detection.

Before going further into the story, here are some important definitions.

A vulnerability is a flaw in a system or piece of software that attackers can exploit to access or damage the system or its data. The term “zero-day” refers to the fact that the developer’s or manufacturer’s security teams have had “zero days” to correct the vulnerability since discovering it. In other words, attackers can exploit the flaw immediately because the developer has not yet had a chance to create a patch.

On November 3, Google announced a historic breakthrough in the fight against these threats: Big Sleep, its tool built on an advanced language model, had discovered a zero-day vulnerability in the open-source SQLite database engine. The discovery was made possible by a collaboration between DeepMind, Google’s AI research lab, and Project Zero, Google’s team of cybersecurity researchers.

This breakthrough represents an important step for defensive cybersecurity, but it also raises serious concerns about how cybercriminals and states could use similar tools to attack critical infrastructure. In this article, we explore the innovation, its benefits for cybersecurity and the challenges of an AI-powered digital arms race.

A revolution in vulnerability detection

Big Sleep was designed to go beyond traditional vulnerability detection methods, such as fuzzing, which involves testing software with random inputs to discover errors. While fuzzing is effective at flushing out simple errors, it falls short against complex vulnerabilities that require deeper analysis of the code’s logic. This is where Big Sleep shines: it can analyze and understand the logical structures of code much as a human researcher would, but with far greater speed and precision.
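To make the contrast concrete, here is a minimal sketch of what naive fuzzing looks like in C. Everything in it is invented for illustration: the parse_input function stands in for real target code (a database engine’s parser, say) and contains a deliberate short-input bug for the fuzzer to stumble onto.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Hypothetical function under test, standing in for a real parser.
     * It has a deliberate bug: it reads a 4-byte header even when the
     * input is shorter, an out-of-bounds read a sanitizer will catch. */
    static int parse_input(const unsigned char *buf, size_t len) {
        if (len == 0)
            return -1;
        char header[4];
        memcpy(header, buf, 4);                  /* BUG when len < 4 */
        return memcmp(header, "SQL:", 4) == 0;
    }

    int main(void) {
        srand((unsigned)time(NULL));

        /* Throw random inputs at the target. Compile with
         * -fsanitize=address so memory errors surface as crash
         * reports the fuzzing harness can observe. */
        for (long iter = 0; iter < 100000; iter++) {
            size_t len = 1 + (size_t)(rand() % 64);
            unsigned char *buf = malloc(len);
            if (buf == NULL)
                return 1;
            for (size_t i = 0; i < len; i++)
                buf[i] = (unsigned char)rand();
            parse_input(buf, len);
            free(buf);
        }
        puts("run finished; a sanitizer report along the way means a bug was found");
        return 0;
    }

Random inputs like these will quickly trip the shallow header bug, but they would rarely satisfy the long chains of conditions guarding a deep logic flaw, and that is exactly the gap a tool like Big Sleep aims to close.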

In the case of SQLite, Big Sleep detected a flaw in how the engine handles the special “ROWID” column: a crafted input could produce a negative index, causing a stack buffer underflow (a write below the bounds of a buffer in memory) that could potentially have been exploited to execute malicious code. The SQLite team corrected the flaw as soon as it was reported, which demonstrates Big Sleep’s potential to detect and eliminate vulnerabilities before software is even released.
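To illustrate this bug class only (this is not SQLite’s actual code; the names mark_column_used and ROWID_SENTINEL and the surrounding structure are invented), here is a simplified C sketch of how an unchecked negative sentinel index can write below a buffer, and how a simple bounds check eliminates the problem.

    #include <stdio.h>

    #define ROWID_SENTINEL (-1)   /* hypothetical: "use the rowid, not a named column" */
    #define NUM_COLUMNS 4

    /* BUG: nothing filters out the ROWID sentinel, so an index of -1
     * writes one slot below the array: a stack buffer underflow that
     * silently corrupts whatever sits just before it in memory. */
    static void mark_column_used(int usage_flags[], int col_index) {
        usage_flags[col_index] = 1;
    }

    /* FIX: reject the sentinel (and any out-of-range index) before it
     * is ever used as an array index. */
    static void mark_column_used_fixed(int usage_flags[], int col_index) {
        if (col_index < 0 || col_index >= NUM_COLUMNS)
            return;
        usage_flags[col_index] = 1;
    }

    int main(void) {
        int usage_flags[NUM_COLUMNS] = {0};

        /* Attacker-influenced input selects the rowid "column". */
        int requested = ROWID_SENTINEL;

        mark_column_used_fixed(usage_flags, requested); /* safe: sentinel rejected */
        mark_column_used(usage_flags, requested);       /* compile with -fsanitize=address
                                                           to see the underflow reported */
        return 0;
    }

Bugs of this kind are hard for random fuzzing to reach, because triggering them requires input that satisfies the program’s own logic; reasoning about that logic is precisely what a language model brings to the table.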

The positive aspects of this innovation

This Google innovation promises to strengthen software robustness by reducing the number of zero-day vulnerabilities, provided that companies take the necessary steps to secure their code before going into production. Here are some positive aspects.

  1. Speed and efficiency: Unlike traditional methods, which require considerable human time and resources to analyze code, AI can review code quickly and thoroughly. Automating vulnerability detection shortens response times and helps manufacturers correct flaws before they are exploited.
  2. Increased accuracy: Advanced language models like Big Sleep can identify vulnerabilities that might escape human detection. In addition, AI can reduce false positives, which waste cybersecurity teams’ time.
  3. Preempting attacks: Finding zero-day vulnerabilities before attackers use them creates a more secure environment. This puts defenders in a less reactive position, allowing them to get ahead of cybercriminals and maintain resilient systems.
  4. Toward democratized cybersecurity: Google plans to share its research and improve the availability of this tool. This could allow other cybersecurity actors, including SMEs, to access advanced vulnerability detection technologies, creating a stronger collective defence.

The less positive factors

Unfortunately, this advance raises critical questions about what such powerful technology could do in the wrong hands. Cybercriminals and states engaged in digital warfare will also leverage AI to improve their offensive techniques.

  1. A double-edged sword in the hands of cybercriminals

Cybercriminals are constantly looking for ways to identify exploitable computer system vulnerabilities. They can automate their search for flaws on a much larger scale with AI-based tools.

Example: Imagine a group of cybercriminals using a language model similar to Big Sleep, aiming to identify flaws in software used by critical infrastructure such as hospitals, transportation networks or utilities. By scanning the source code of widely deployed open-source components such as SQLite, the AI could accelerate the discovery of unpatched vulnerabilities, letting the group find exploitable flaws at scale and launch more powerful, potentially more lucrative attacks.

  2. State exploitation and geopolitical threats

The risk that malicious states will use AI to intensify their cyber-espionage and sabotage activities is even more worrisome. Many countries invest heavily in technologies to monitor, influence and destabilize their geopolitical rivals. Advanced vulnerability-detection AI could enable these states to discover weaknesses in their adversaries’ systems, access sensitive information or disrupt critical infrastructure.

For example, a hostile government could use AI to detect flaws in a rival country’s energy supply systems, allowing attacks that disrupt the economy or endanger critical infrastructure.

Toward a future of cybersecurity regulation?

As AI technologies continue to advance, the cybersecurity sector stands at a crossroads. On the one hand, tools like Big Sleep give well-equipped defenders new power to secure their systems and head off potential attacks. On the other hand, these same tools pose a threat because of their potential for offensive use.

States and companies are already seeking to develop regulatory measures to limit the risks associated with AI. Threat-intelligence-sharing initiatives, ethical standards for AI development and international cooperation programs will be needed to limit the misuse of these tools.

All of this sounds promising, but many companies are still struggling to implement the patches provided by software manufacturers. It is not uncommon for even wealthy organizations to delay security updates indefinitely, increasing their exposure to attacks. If software manufacturers begin using AI seriously to eliminate more pre-production vulnerabilities, this could theoretically reduce the need for future patches, thus simplifying vulnerability management for their customers.

However, I have my doubts; this will not solve everything. Vulnerabilities tied to specific configurations, weaknesses left exploitable by a lack of skills or due diligence, and risky choices made to avoid user friction will remain weak points. A good AI can identify these weaknesses rapidly, perhaps even faster than a good cybercriminal can.

Can regulations be implemented that encourage companies to develop more secure code and strengthen their security controls? That pressure could come from cybersecurity insurers, from investors keen to protect their interests or from other influential stakeholders. Undoubtedly, AI innovations will accelerate change in the cybersecurity industry and could significantly transform its practices.

Widespread adoption needed

Google’s Big Sleep announcement is an important step forward in cybersecurity. The AI detects zero-day vulnerabilities more effectively than ever before, giving defenders new ways to fend off cyberattacks while complicating the task for attackers.

However, its potential for malicious use means that software manufacturers, and indeed every organization that deploys technology, must adopt such powerful tools widely. Malicious actors will try to leverage these advances to exploit flaws, so defenders need to integrate the technology into their security practices on a massive scale.

These new capabilities will undoubtedly drive profound industry change, leading to a large-scale transformation of cybersecurity practices. The next few months will be fascinating to watch as we see who gets involved and how the new rules of the game are set.