Generative artificial intelligence (AI) is being championed as essential for organizations to ensure their market relevance, but some remain hesitant to take the plunge over concerns about data and trust.
These issues are especially pertinent for businesses that operate in sectors with stringent data governance rules and large customer bases, pushing them to bide their time in adopting generative AI tools despite their touted benefits.
The ability to generate sales reports via a prompt, for instance, instead of manually fiddling with spreadsheets, is an appealing prospect for generative AI tools such as Salesforce's Einstein Copilot, said Sarwar Faruque, head of development operations at Jollibee Foods Corporation. The Philippine restaurant chain operator uses Salesforce's Heroku to build its applications and Mulesoft as the middleware to connect its applications, including ERP and order management systems.
Jollibee has 15,000 employees and operates almost 4,000 stores across 34 countries. Its applications run predominantly on the cloud, so it does not maintain its own data centers, with the exception of a small intranet.
Faruque also sees potential for AI in manufacturing, where it can drive efficiencies in Jollibee's production pipeline and assembly. For instance, AI can help monitor food quality and forecast demand.
His interest in the potential use of AI, however, remains limited to backend operations. Faruque is adamant about keeping generative AI away from customer interactions and customer-facing operations -- for now, at least.
With the technology still in its infancy, much remains to be understood and worked through, he noted.
"We see the output [and responses] it generates, but we don't really understand how [it got to the answer]," he said. "There's this black box...it needs to be demystified. I want to know how it works, how it arrived at its response, and whether this answer is repeatable [every time the question is asked]."
Currently, this is not the case, he said, adding that the risk of hallucination also is a concern. And in the absence of a security incident, little is known about whether there are inherent cybersecurity issues that need to be resolved, he noted.
"Right now, there's just a lot of marketing [hype]," Faruque said, adding that it was not enough to simply talk about "trust" without providing details about what exactly that meant.
He urged AI vendors to explain how their large language models are formed, what data they consume, and what exactly they do to generate responses. "They need to stop acting like it's magic [when] there's a code running it and there's science behind it," he said. "Help us understand it [because] we don't like adopting a technology that we don't have a solid understanding of."
He underscored the need for accountability and transparency, alongside guarantees that customers' data used to train AI models will not be made public. This is critical, especially for organizations that need to comply with data privacy regulations in their local jurisdiction.
Until these issues are ironed out, he said he is not willing to put his own customers' data at risk.
Trust also is something Singapore's Ministry of Trade and Industry (MTI) takes seriously, specifically in terms of data privacy and security. Ten government agencies sit under the ministry, including EDB and the Singapore Tourism Board.
In particular, the ministry's data must be retained in Singapore, and this is proving to be a big hurdle in ensuring data security and governance, said MTI's ministry family CIO Sharon Ng. It means any AI and large language models it uses should be hosted in its own environment, even those run by US vendors such as Salesforce's Einstein Copilot platform.
Like Faruque, Ng also stressed the need for transparency, in particular the details of how the security layer operates, including what kind of encryption is used and whether data is retained.
Her team currently is exploring how generative AI tools, including Salesforce's, can benefit the ministry, which remains open to using different AI and large language models that are available in the market. This would be less costly than building its own models and would shorten the time to market, she said.
The use of any AI model, however, still would be subject to trust and security considerations, she noted. MTI currently is running generative AI pilots that aim to improve operational efficiencies and ease work tasks across its agencies.
For Singapore telco M1, delivering better customer service is the clear KPI for generative AI. Like MTI and Jollibee, though, data compliance and trust are critical, said Jan Morgenthal, chief digital officer of M1. The telco currently is running proofs of concept to assess how generative AI can enhance its chatbot's interactions with customers and whether it can support languages other than English.
This means working with vendors to figure out the parameters and understand where the large language and AI models are deployed, Morgenthal said. Similar to MTI and Jollibee, M1 also has to comply with regulations that require some of its data, including those hosted on cloud platforms, to reside in its local market.
This necessitates the training of AI models to be carried out in M1's network environment, he said.
The Singapore telco also needs to be careful about the data used to train the models and the responses generated, which should be tested and validated, he said. These not only need to be checked against guidelines stipulated by the vendor, such as Salesforce's Trust Layer, but also against the guardrails that M1's parent company Keppel has in place.
Addressing the generative AI trust gap
Such efforts will prove critical amid falling trust in the use of AI.
Both organizations and consumers now are less open to the use of AI than they were before, according to a Salesforce survey released last month. Some 73% of business buyers and 51% of consumers are receptive to the technology being used to improve their experiences, a drop from 82% and 65%, respectively, in 2022.
And while 76% of customers trust businesses to make honest claims about their products and services, only 57% trust them to use AI ethically. Another 68% believe AI advancements have made it more important for companies to be trustworthy.
The trust gap is a significant issue and concern for organizations, said Tim Dillon, founder and director of Tech Research Asia, pointing to the backlash Zoom experienced when it changed its Terms of Service, giving it the right to use its users' video, audio, and chat data to train its AI models.
Generative AI vendors would want to avoid a similar scenario, Dillon said in an interview with ZDNET, on the sidelines of Dreamforce 2023 held in San Francisco this week. Market players such as Salesforce and Microsoft have made efforts to plug the trust gap, which he noted was a positive step forward.
Apart from addressing trust issues, organizations planning to adopt generative AI also should look at implementing change management, noted Phil Hassey, CEO and founder of research firm CapioIT.
This is an area that often is left out of the discussion, Hassey told ZDNET. Organizations have to figure out the costs involved, the skillsets they need to acquire, and the roles that have to be reskilled as a result of rolling out generative AI.
A proper change management strategy is key to ensuring a smooth transition and retaining talent, he said.
Based in Singapore, Eileen Yu reported for ZDNET from Dreamforce 2023 in San Francisco, at the invitation of Salesforce.com.