Businesses in all sectors want to gain a competitive advantage through artificial intelligence (AI), and senior executives are tasked with ensuring their organizations have the rules and regulations in place to use new technologies safely and effectively.
Keith Woolley, chief digital and information officer at the University of Bristol, is a digital leader helping his organization, one of the UK's leading academic institutions, embrace AI across campus.
Bristol is a pioneer in emerging technologies, including being home to Isambard-AI, the UK's fastest supercomputer. However, the propagation of AI tools across the organization is not just led by senior executives overseeing world-leading research activities.
Day-to-day professionals use AI
While the institution recognizes the power of using AI to drive innovations in all academic areas, Woolley told ZDNET how people in day-to-day roles in teaching, administration, and research are also using emerging technologies.
Like the rollout of cloud services in the past, Woolley said professionals want to make their own choices on emerging technology in a movement called Bring Your Own AI (BYOAI).
"It's happening," said Woolley, who suggested the broad acceptance of the cloud and a rush by providers to bolt on AI-enabled services means emerging technologies can enter organizations without the knowledge of IT teams.
"I'm seeing it already, where departments are now building or bringing tools into the institution because every supplier that provides you with a SaaS system is sticking AI into it."
Bring Your Own AI is a growing trend
Other experts also point to BYOAI as a growing trend. A survey by the MIT Center for Information Systems Research suggests the trend occurs when employees use unvetted, publicly available Gen AI tools for work.
Woolley acknowledged that the stealthy introduction of AI, whether driven by users or vendors, raises significant concerns for his team and the university's executives.
"Bring your own AI is a challenge," he said. "It's like when you used to see storage appearing on the network from Dropbox and other cloud providers. People thought they could get a credit card and start sharing things, which isn't great."
MIT's research suggests Woolley is justified in expressing concerns. Although generative AI tools offer promising productivity gains, they also introduce risks, including data loss, intellectual property leakage, copyright violations, and security breaches.
Woolley said Bristol's key concern is a potential loss of control over how AI-enabled SaaS services communicate and what data sources they share.
"The system could be taking our data, which we think is in a secure SaaS environment, and running this information in a public AI model."
Banning Gen AI to mitigate risks
So, how should organizations deal with the rise of BYOAI? One approach would be for executives to ban Gen AI.
However, MIT's research suggests business leaders should stay open to Gen AI and provide strong guidance to turn BYOAI into an innovation source.
That sentiment resonates with Woolley, who said tight control of application boundaries is the best way to manage BYOAI.
"The enforcement of policies is a discussion we're having inside our organization. We're getting guardrails out to people for what they can and can't do," he said.
He said the starting point is an approved set of tools that will aim to control the creeping introduction of AI.
Students want to use AI
To establish the context for these guardrails, senior executives at Bristol spent time with students and asked them for their views on using Gen AI in education.
"The conversation went from how you would use AI for learning to enrolments, marking, and everything else," said Woolley.
"What was surprising was how much students wanted us to use AI. One of the things that came out clearly from our students was that if we don't allow them to use AI, they will be disadvantaged in the marketplace against others that offer the opportunity."
Woolley likened the introduction of Gen AI to the nascent use of calculators in the classroom. Decades ago, people were concerned that using calculators was a form of cheating.
Today, every math student uses a calculator. Woolley expects a similar journey with the adoption of Gen AI.
"We're going to have to rethink our curriculum and the capability to learn using that technology," he said.
"We'll have to teach the next generation of students to differentiate information provided through AI. Once we can get that piece cracked, we'll be fine."
Woolley said Bristol aims to make Gen AI a carefully controlled element of the education process: "We've been clear that AI is about assisting the workforce, the students, and our researchers, and where practical and possible, automating services."
Key considerations
However, identifying potential uses of AI is just a starting point. Woolley described the cost curve for emerging technology as a hockey stick: invest too heavily, or let users bring their own AI tools without strict rules and regulations, and costs can climb steeply.
Woolley said the institution's senior executives are focused on some key considerations.
"The first question is, 'How much failure do we want?' Because AI is a guessing engine at the moment, and it's one of those situations where it will make assumptions based upon the information it's got. If that information is slightly flawed, you'll get a slightly flawed answer," he said.
"So, we are looking at what services we can offer. We've put policy and process around it, but that's a living document because everything's changing so fast. We are trying to drive change through carefully."
In the longer term, Woolley expects the institution to adopt one of three approaches: consume generative AI as part of the education system, feed data into existing models, or develop its own language models as a form of competitive differentiation.
"That's the debate we're having," he said. "Then, once the right approach is chosen, I can create policies based on how we use AI."
That approach chimes with Roger Joys, vice president of enterprise cloud platforms at Alaskan telecoms firm GCI. Like MIT and Woolley, he said policy and process are crucial to introducing Gen AI without risks.
Reviewed and approved AI models
"I would like to see our data scientists have a curated list of models that have been reviewed and approved," he said, reflecting on the rise of BYOAI. "Then you can say, 'You can select from these models,' rather than them just going out and using whatever they like or find."
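In practice, the curated-list idea Joys describes often boils down to a simple allowlist checked before any model request is dispatched. The sketch below illustrates that pattern; the registry contents and the names `APPROVED_MODELS` and `select_model` are illustrative assumptions, not GCI's actual tooling.

```python
# A minimal sketch of a curated model allowlist: models that have passed
# review are registered with metadata, and anything unvetted is rejected.

APPROVED_MODELS = {
    # model name -> review metadata (illustrative entries)
    "model-a": {"reviewed": "2025-01", "data_classification": "internal"},
    "model-b": {"reviewed": "2025-02", "data_classification": "public"},
}

def select_model(name: str) -> dict:
    """Return metadata for an approved model, or raise if it is unvetted."""
    if name not in APPROVED_MODELS:
        raise PermissionError(
            f"Model '{name}' is not on the approved list; request a review."
        )
    return APPROVED_MODELS[name]
```

A gate like this gives governance teams one place to record reviews and revoke access, which is the control Woolley and Joys both say BYOAI currently lacks.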
Joys said successful executives cut through the hype and create an acceptable use policy that allows people to overcome challenges.
"Find the business cases," he said. "Move methodically, not necessarily slowly, but toward a known target, and let's show the value of AI."