The ongoing obsession with artificial intelligence (AI), and generative AI specifically, signals a need for businesses to focus on security -- but critical data protection fundamentals are still somewhat lacking.
Spurred largely by OpenAI's ChatGPT, growing interest in generative AI has pushed organizations to look at how they should use the technology.
Some 43% of CEOs say their organizations are already tapping generative AI for strategic decisions, while 36% use the technology to facilitate operational decisions. Half are integrating it with their products and services, according to an IBM study released this week. The findings are based on interviews with 3,000 CEOs across 30 global markets, including Singapore and the U.S.
The CEOs, though, are mindful of potential risks from AI, such as bias, ethics, and safety. Some 57% say they are concerned about data security and 48% are worried about data accuracy or bias. The study further reveals that 76% believe effective cybersecurity across their business ecosystems requires consistent standards and governance.
Some 56% say they are holding back at least one major investment due to the lack of consistent standards. Just 55% are confident their organization can accurately and comprehensively report information that stakeholders want concerning data security and privacy.
This lack of confidence calls for a rethink of how businesses should manage the potential threats. Apart from enabling more advanced social-engineering and phishing threats, generative AI tools also make it easier for hackers to generate malicious code, said Avivah Litan, VP analyst at Gartner, in a post discussing various risks associated with AI.
And while vendors that offer generative AI foundation models say they train their models to reject malicious cybersecurity requests, they do not provide customers with the tools to effectively audit the security controls that have been put in place, Litan noted.
Employees, too, can expose sensitive and proprietary data when they interact with generative AI chatbot tools. "These applications may indefinitely store information captured through user inputs and even use information to train other models, further compromising confidentiality," the analyst said. "Such information could also fall into the wrong hands in the event of a security breach."
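One way to reduce this exposure is to screen prompts for sensitive content before they leave the organization. The sketch below is illustrative only -- the pattern names and regexes are assumptions for demonstration, and a real deployment would rely on a proper data loss prevention (DLP) engine rather than a short regex list:

```python
import re

# Hypothetical patterns -- a production DLP system would use far more
# robust detection than this illustrative regex list.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

hits = screen_prompt(
    "Summarize this: contact alice@example.com, key sk-abcdefghij1234567890XYZ"
)
```

A screening step like this could sit in a browser extension or an outbound proxy, blocking or redacting prompts that match before they reach an external chatbot.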
Litan urged organizations to establish a strategy to manage the emerging risks and security requirements, with new tools needed to manage data and process flows between users and businesses that host generative AI foundation models.
Companies should monitor unsanctioned uses of tools such as ChatGPT, leveraging existing security controls and dashboards to identify policy violations, she said. Firewalls, for instance, can block user access, while security information and event management systems can monitor event logs for policy breaches. Security web gateways can also be deployed to monitor disallowed application programming interface (API) calls.
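The log-monitoring side of this can be sketched simply. The snippet below flags proxy-log entries that reach generative-AI endpoints; the domain list and log format are assumptions for illustration, and in practice the equivalent logic would live in SIEM rules or secure-web-gateway policies:

```python
# Hypothetical list of generative-AI service hosts to watch for.
MONITORED_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com"}

def flag_genai_traffic(log_lines):
    """Return (user, host) pairs for requests to monitored AI services."""
    flagged = []
    for line in log_lines:
        # Assumed log format: "<timestamp> <user> <destination-host>"
        _, user, host = line.split()
        if host in MONITORED_DOMAINS:
            flagged.append((user, host))
    return flagged

logs = [
    "2023-07-01T10:00Z alice api.openai.com",
    "2023-07-01T10:01Z bob intranet.example.com",
]
violations = flag_genai_traffic(logs)
```

The output feeds the same dashboards already used for other policy violations, which is the point Litan makes: existing controls can cover much of this without new tooling.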
Most organizations still lack the basics
Foremost, however, the fundamentals matter, according to Terry Ray, senior vice president for data security and field CTO at Imperva.
The security vendor now has a team dedicated to monitoring developments in generative AI and identifying ways the technology can be applied to its own products. Underscoring generative AI's rapid rise, this internal group did not exist a year ago, though Imperva has long used machine learning, Ray said.
The monitoring team also vets the use of applications, such as ChatGPT, among employees to ensure these tools are used appropriately and within company policies.
Ray said it was still too early to determine how emerging AI models could be incorporated, adding that possibilities could surface during the vendor's annual year-end hackathon, when employees will likely pitch ideas on how generative AI can be applied.
Notably, the availability of generative AI has so far not led to any significant change in the way organizations are attacked: threat actors still mostly go after low-hanging fruit, scouring for systems that remain unpatched against known exploits.
Asked how he thought threat actors might use generative AI, Ray suggested it could be deployed alongside other tools to inspect and identify coding errors or vulnerabilities.
APIs, in particular, are hot targets because they are widely used and often carry vulnerabilities. Broken object level authorization (BOLA), for instance, ranks among the top API security threats identified by the Open Worldwide Application Security Project (OWASP). In BOLA attacks, threat actors exploit endpoints that fail to verify object-level permissions, manipulating object identifiers in API requests to access data objects they are not authorized to see.
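A minimal sketch makes the flaw concrete. The handlers and data below are entirely hypothetical; the point is that the vulnerable version trusts whatever object ID the caller supplies, while the fixed version also checks ownership:

```python
# Hypothetical data store: order ID -> order record with an owner.
ORDERS = {
    101: {"owner": "alice", "total": 42},
    102: {"owner": "bob", "total": 7},
}

def get_order_vulnerable(requesting_user: str, order_id: int):
    # BOLA: any authenticated user can fetch any order just by
    # guessing or enumerating IDs -- no ownership check is made.
    return ORDERS.get(order_id)

def get_order_fixed(requesting_user: str, order_id: int):
    order = ORDERS.get(order_id)
    # Object-level authorization: confirm the caller owns the object
    # before returning it; otherwise behave as if it does not exist.
    if order is None or order["owner"] != requesting_user:
        return None  # a real API would return 403 or 404 here
    return order
```

In the vulnerable version, alice can read bob's order 102 simply by asking for it; the fixed version denies the request because the ownership check fails.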
Such oversights underscore the need for organizations to understand the data that flows over each API, Ray said, adding that this area is a common challenge for businesses. Most do not even know where or how many APIs they have running across the organization, he noted.
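A first step toward that visibility is building a rough API inventory from traffic that is already being logged. The sketch below -- with an assumed log format and a naive normalization rule -- collapses numeric path segments into templates so repeated calls to the same endpoint are counted together:

```python
from collections import Counter

def build_inventory(request_paths):
    """Count requests per API endpoint template, treating numeric
    path segments as object-ID placeholders."""
    counts = Counter()
    for path in request_paths:
        template = "/".join(
            "{id}" if segment.isdigit() else segment
            for segment in path.split("/")
        )
        counts[template] += 1
    return counts

paths = ["/api/v1/users/17", "/api/v1/users/42", "/api/v1/orders/9"]
inventory = build_inventory(paths)
```

Even a crude pass like this surfaces endpoints an organization did not know it was running, which is the gap Ray describes.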
There is likely an API for every application brought into the business, and the number grows further amid mandates for organizations to share data, such as healthcare and financial information. Some governments have recognized these risks and introduced regulations to ensure APIs are deployed with the necessary security safeguards, he said.
And where data security is concerned, organizations need to get the fundamentals right. Losing data carries significant impact for most businesses and, as custodians of that data, companies must know how to protect it.
In another global IBM study, which polled 3,000 chief data officers, 61% said they believe their corporate data is secure and protected. Asked about data management challenges, 47% pointed to reliability, 36% cited unclear data ownership, and 33% flagged data silos or a lack of data integration.
The rising popularity of generative AI might have turned the spotlight on data, but it also highlights the need for companies to get the basics right first.
Many have yet to even establish the initial steps, Ray said, noting that most companies typically monitor just a third of their data stores and lakes.
"Security is about [having] visibility. Hackers will take the path of least resistance," he said.
A Gigamon study released last month found that 31% of breaches were identified only after the fact: compromised data surfaced on the dark web, files became inaccessible, or users experienced sluggish application performance. The proportion was higher in Australia, at 52%, and in the U.S., at 48%, according to the June report, which polled more than 1,000 IT and security leaders in Singapore, Australia, EMEA, and the U.S.
Those figures came despite 94% of respondents saying their security tools and processes gave them visibility and insights into their IT infrastructure. Some 90% said they had experienced a breach in the past 18 months.
Asked about their biggest concerns, 56% pointed to unexpected blindspots. Some 70% admitted they lacked visibility into encrypted data, while 35% said they had limited insights into containers. Half lacked confidence in knowing where their most sensitive data was stored and how the information was secured.
"These findings highlight a trend of critical gaps in visibility from on-premises to cloud, the danger of which is seemingly misunderstood by IT and security leaders around the world," said Gigamon's security CTO Ian Farquhar.
"Many don't recognize these blindspots as a threat... Considering over 50% of global CISOs are kept up at night by the thought of unexpected blindspots being exploited, there's seemingly not enough action being taken to remediate critical visibility gaps."