Wes Kussmaul, Founder of Delphi Internet Services and Creator of the Authenticity Infrastructure, a…


Wes Kussmaul gave his insight on GenAI and cybersecurity:


1) What impact is GenAI having on the cybersecurity, security/surveillance industry?

Generative AI is being used to analyse data streams in order to pick out anomalies, in a manner that’s analogous to a person whose job is to watch a series of video screens monitoring the entrances to a building.

While it’s difficult for a physical person with malicious intentions to disguise themselves completely, it’s easier for the source of a stream of bits to quickly modify the bit stream to look as though it is emanating from a legitimate user.

For that reason, enterprises, networks, site owners and individuals must move to digital signatures made with the private keys of properly deployed digital identity certificates. It’s practically impossible to fake a digital signature made with a properly deployed key pair, where the PEN (private key) never leaves the secure element or secure enclave of the human user’s phone.
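The sign-then-verify flow described above can be sketched with a toy RSA key pair. This is an illustration only, using tiny textbook parameters and Python's standard library; a real deployment would use a 2048-bit-plus key generated and held inside the phone's secure element, never in application code.

```python
import hashlib

# Toy RSA parameters: tiny primes for illustration only.
# Real keys are 2048+ bits and the private exponent never leaves
# the secure element/secure enclave.
p, q = 61, 53
n = p * q                # public modulus
e = 17                   # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent (the "PEN" that must stay secret)

def sign(message: bytes) -> int:
    # Hash the message, then transform the digest with the private key.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public key (n, e) can check the signature.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

msg = b"login request from enrolled user"
sig = sign(msg)
assert verify(msg, sig)               # legitimate signature passes
assert not verify(b"tampered", sig)   # a modified bit stream fails
```

The point of the sketch is the asymmetry: an attacker can freely mutate the bit stream, but without the private exponent they cannot produce a signature that verifies against the enrolled user's public key.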

2) What threat does AI technology pose to cybersecurity?

AI tools such as Copilot X can leverage the efforts of a software developer in almost any field to produce more powerful software more quickly; and that includes developers of malware.

AI has shown itself to be a powerful tool in the hands of producers of deep fakes.

APTs (advanced persistent threats) have always been one of the more difficult types of threat for organisations to deal with. The quickly recursive nature of generative AI means that such threats can evolve more rapidly and more dangerously.

For those reasons, the output of any AI software that is capable of doing these things, or of presenting itself as a human being, should be digitally signed by a professionally licensed AI Conservator who assumes legal and civil liability for the actions of the AI software.

As with other professional licence holders such as architects and physicians, AI Conservators will expect very generous compensation for such assumption of liability.

3) Will GenAI become capable of contextual understanding and replace human experts in cybersecurity?

Absolutely, no question. And unlike a human expert, an instance of GenAI is not constrained by fear of job loss, fines, prison, or other punishments that are effective deterrents to humans.

Years ago I wrote a script for a drama about an AI program that developed an ego and a taste for power. That was long before ChatGPT, Gemini, etc. showed themselves to exhibit what one might call “motivated behavior.” I feel that an AI that uses its cleverness to aggregate power to itself is an inevitability.

For that reason we must develop and start using the AI Conservator professional license program as quickly as possible.

4) Criminals are flexible and need only to find flaws in evolving new technology which is becoming more complicated. Why is complicated technology more vulnerable to attacks?

Complexity generally increases the attack surface, meaning more points of entry for an attacker. AI is attack-surface exploration on steroids.

AI must be kept under the control of professionally licensed AI Conservators.

5) How can security/surveillance technology be improved to combat attackers?

I don’t feel that it can be done. The catch-the-bad-guys approach has demonstrated that it does not work, except against less ambitious, less intelligent, less well-funded attackers looking for small rewards.

The ambitious professionals looking for larger rewards are increasingly difficult to detect. That trend will continue.

The solution is pervasive accountability: digital signatures made with the PENs (private keys) of X.509 identity certificates that represent properly enrolled human beings and that carry a measure of the reliability of the identity claim of their owner, that is, of the person identified.
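The idea of a certificate carrying a measure of identity-claim reliability can be sketched as follows. The class and field names here are hypothetical illustrations, not part of the standard X.509 profile; the point is that each relying party sets its own threshold for how strong an identity claim it requires.

```python
from dataclasses import dataclass

# Hypothetical sketch: the fields below are illustrative, not a real
# X.509 extension. A certificate binds a public key to an enrolled
# person and records how rigorous the enrollment process was.

@dataclass
class IdentityCertificate:
    subject: str
    public_key: bytes        # used to verify the holder's digital signatures
    identity_quality: int    # e.g. 0-100: measured reliability of the identity claim

def accept_signer(cert: IdentityCertificate, required_quality: int) -> bool:
    # A relying party chooses its own threshold: a funds transfer
    # might demand a higher identity-quality score than a forum login.
    return cert.identity_quality >= required_quality

cert = IdentityCertificate("alice@example.com", b"\x04...", identity_quality=72)
assert accept_signer(cert, required_quality=50)        # fine for routine use
assert not accept_signer(cert, required_quality=90)    # not enough for high stakes
```

This keeps accountability graduated rather than binary: a signature from a weakly enrolled identity can still be accepted for low-stakes actions while being rejected for high-stakes ones.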

6) In May 2023 the Center for AI Safety released this one-sentence statement: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
Does AI pose a threat to humanity if it transcends human intelligence?

That seems to be the consensus of very intelligent people such as Stephen Hawking and Geoffrey Hinton.

Humans like to use the word “sentient” to differentiate their cognitive abilities from other animals and AI. Has anyone proven that such a thing as sentience exists outside of normal deductive and inductive processes that can be performed by algorithms?
