The arrival of artificial intelligence (AI) in many cybersecurity products can't come too soon, according to the founder of prominent cybersecurity vendor Palo Alto Networks, who sees the spiraling threat landscape as too complex to be managed by human efforts alone.
"They are going to try a million ways to get in," said Nir Zuk, the chief technologist and co-founder of Palo Alto Networks, regarding malicious actors.
As for the threat hunters, he said: "You can't be correct a million out of a million times -- that doesn't scale."
That's where AI comes in. Zuk and Palo Alto's Chief Product Officer, Lee Klarich, sat down with ZDNET recently to discuss how AI is changing cybersecurity.
Palo Alto began almost 18 years ago as a network security vendor competing with numerous firewall specialists and intrusion detection and prevention companies, and it eventually moved into cloud security and managed services.
"If I can detect the abnormal within the organization, which humans cannot, and AI can in a scalable way, it gives me an advantage," said Zuk.
Zuk, a mathematician by training, has a long history running technology for cybersecurity outfits, having previously served as CTO at Juniper Networks, and before that founding cybersecurity startup OneSecure (later sold to NetScreen Technologies, which was sold to Juniper).
Klarich was previously director of product management for Juniper, and head of firewall technology at NetScreen before that.
The flash point for AI and security, said Zuk, is the security operations center, or SOC, which watches what happens on the network and tries to detect and stop malicious behavior.
The chief information security officer (CISO) and their team are outgunned. "If you look at the numbers for respond, recover, remediate" -- the main things a CISO does following a breach -- "those numbers are horrible," said Zuk.
"When the SEC [US Securities and Exchange Commission] announced that it expects public companies to report within four days about a major breach, everybody had an, 'Oh, crap' moment," he said. He noted the security team can't even close routine IT tickets from that day: "They're looking for a needle in a haystack."
Because there aren't enough engineers, or hours in the day, "the idea of AI in the SOC is to do the things that humans do," but in "the most scalable way and faster," said Zuk, to reduce the "mean time to detect" a breach to minutes.
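For readers unfamiliar with the metric, mean time to detect is simply the average gap between when an intrusion begins and when the security team first spots it. The sketch below is purely illustrative, using made-up incident timestamps rather than anything from Palo Alto's products:

```python
from datetime import datetime

# Hypothetical incident records: (time the intrusion began, time it was detected)
incidents = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 3, 14, 30)),
    (datetime(2024, 5, 7, 22, 15), datetime(2024, 5, 8, 6, 45)),
    (datetime(2024, 5, 12, 3, 5), datetime(2024, 5, 12, 3, 20)),
]

# Mean time to detect (MTTD): average gap between compromise and detection
gaps = [(detected - began).total_seconds() for began, detected in incidents]
mttd_minutes = sum(gaps) / len(gaps) / 60
print(f"Mean time to detect: {mttd_minutes:.0f} minutes")
```

Zuk's goal, in other words, is to drive that average down from days or hours to minutes.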
"I think that there's an opportunity where AI effectively automates a majority of how cybersecurity is deployed, configured, and operationalized," said Zuk, because, "it's become so complex for people to do."
Automation is a broad term, but the specific aim of using AI in the SOC is for the model to discover what "normal" means. Today, the CISO and their teams spend their time hunting for traces of suspicious behavior, said Zuk, an effort that takes hours, days, and weeks.
It would be better if the machine could find what normal looks like in the enterprise, said Zuk, so that anything malicious stands out.
"The more data sources [AI] has, the more accurate the picture of normal is going to be," said Klarich.
"Let's use AI to learn what's not normal in the organization, irrespective of which attack technique posted," said Zuk. "I don't care how they broke in and I don't care how they move laterally and so on; if I can detect the abnormal within the organization, which humans cannot, and AI can in a scalable way, it gives me an advantage that they don't have today."
Zuk and Klarich see an advantage in the breadth of their software when it comes to finding that baseline of normal. Both training an AI model and generating predictions with it require integrating sensor data from many sources.
"You can't collect data into data silos and then expect to run AI on it. It works much better when the sensors and the AI come from the same vendor," said Klarich.
"The more data sources it has, the more accurate the picture of normal is going to be in order to be able to determine what unusual activity looks like."
That need to concentrate data is why Palo Alto believes AI may fuel consolidation in the cybersecurity industry, which is classically fragmented across vendors.
"Cybersecurity is largely toward one end of the extreme in terms of having a huge number of smaller point product vendors," said Klarich.
"It's not that you need to go from a hundred different security solutions to one, it's that you need to go from a hundred to a lot less. You can't expect to collect data into silos, and then expect to run AI on it."
Complexity is rising, of course, as attacks from malicious actors become automated.
"We do assume, in terms of how we think about our technologies, that there will be new attack techniques that they will come up with, and, increasingly, automated attacks," said Zuk.
"That dramatically changes the scale with which attacks can be carried out because they'll no longer be limited by their human resources in terms of their capacity, but rather they'll also be able to use AI to carry out attacks in parallel."
The legitimate use of AI also increases the "threat surface," according to Zuk and Klarich. A programmer who uses a programming "copilot" to write code exposes more of their company's source files to a remote service.
"That's intellectual property that just left the control of the enterprise, right?" said Zuk. "And that's one of hundreds of new AI applications that exist that run the same risk."
The good news: Zuk said he believes the forces of good can win out in the AI battle.
"I personally think that AI is going to help the defenders more than it's going to help the attackers," he said.