A continuously improving set of artificial intelligence (AI) resources over the next decade is set to have a huge impact on businesses and the human workforce.
Initially, AI will have a broadly augmentative effect, taking over low-value tasks and empowering humans to focus their efforts on more strategic and creative work. However, the agent-first AI enterprise is evolving at unprecedented speed, and in unexpected directions.
Also: How your business can best exploit AI: Tell your board these 4 things
What we describe as the six levels of autonomous work refers to the maturity spectrum running from AI assistants to AI agents. AI agents are made possible by the emergence of large language models (LLMs), which enable deep language understanding, reasoning, and decision-making.
Yet some limitations need to be addressed before agents can be widely adopted in the enterprise, including a lack of access to private data and the absence of a built-in ability to take action.
Also: Time for businesses to move past generative AI hype and find real value
Agents can have different levels of autonomy. Assistive agents (sometimes called copilots) collaborate with humans, enhancing capabilities rather than acting alone. Copilots often require human input and feedback to refine suggestions or actions.
Autonomous agents operate independently without direct human supervision. A hybrid version of these agents -- unlike other fully autonomous agents -- can seamlessly hand off tasks to humans as needed. Appropriate guardrails are crucial to ensure reliability, adherence to business practices, and data security and privacy, as well as to prevent hallucinations, toxicity, and harmful content.
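To make those levels concrete, here is a minimal sketch, in Python, of how the routing between agent and human might be wired up. It is illustrative only: the autonomy levels follow the description above, but every name, threshold, and guardrail check is a hypothetical placeholder rather than any vendor's actual API.

```python
# A minimal sketch (not any vendor's implementation) of the autonomy spectrum:
# an assistive agent proposes and a human approves; a hybrid agent acts alone
# but hands off to a human when confidence is low or a guardrail trips.

from dataclasses import dataclass
from enum import Enum


class Autonomy(Enum):
    ASSISTIVE = "assistive"    # copilot: human reviews every action
    HYBRID = "hybrid"          # acts alone, escalates uncertain cases
    AUTONOMOUS = "autonomous"  # acts alone within guardrails


@dataclass
class AgentDecision:
    action: str
    confidence: float
    violations: list[str]  # guardrail checks that failed


def check_guardrails(action: str) -> list[str]:
    """Placeholder policy checks: privacy, toxicity, business rules."""
    banned = {"delete_account", "share_pii"}
    return [f"blocked: {action}"] if action in banned else []


def dispatch(decision: AgentDecision, mode: Autonomy) -> str:
    """Route an agent decision to execution or to a human, per autonomy level."""
    if decision.violations:
        return "handoff_to_human"  # guardrail tripped: never execute
    if mode is Autonomy.ASSISTIVE:
        return "await_human_approval"  # copilot: human stays in control
    if mode is Autonomy.HYBRID and decision.confidence < 0.8:
        return "handoff_to_human"  # hybrid: escalate uncertain cases
    return "execute"


if __name__ == "__main__":
    d = AgentDecision(action="send_order_update", confidence=0.65, violations=[])
    d.violations = check_guardrails(d.action)
    print(dispatch(d, Autonomy.HYBRID))  # -> handoff_to_human (low confidence)
```

The design point is the order of the checks: guardrail violations always route to a human regardless of autonomy level, and only then does the autonomy mode decide whether the agent may act alone.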
I spoke with two technology and innovation analysts and enterprise market strategists to better understand the business adoption of AI agents, the challenges and opportunities, and forecasts for mainstream implementation.
Michael Maoz is senior vice president of innovation strategy at Salesforce. Before joining Salesforce, Maoz was a research vice president and distinguished analyst at analyst Gartner, serving as the research leader for the customer service and support strategies area.
Also: When's the right time to invest in AI? 4 ways to help you decide
Ed Thompson is a senior vice president of market strategy at Salesforce. Before joining Salesforce, Thompson was a research vice president and distinguished analyst at Gartner, covering customer experience (CX), and CRM strategy and implementation. Maoz and Thompson shared their points of view on the future of AI agents in the enterprise.
AI agents are upon us, but it's early days. When do you think they'll go mainstream and where?
Ed Thompson (ET): I think getting to mainstream will take longer than people expect -- not because of technical limitations, but because of adoption and changing habits. Crudely, agents can either assist employees or replace them. In the first case, they are like PAs, and they're already here, if not yet in perfect form. Within the next five years, I'd argue, almost every white-collar worker and some blue-collar staff will find that the applications they use have assistants built in that reduce the time spent on laborious tasks -- but the employee will still be there, and still in control.
In the second case, they replace employees. Replacing a whole job is a tough ask -- that will take a long time, unless the job is hated and highly repetitive. I'm going to bet it's 10 years before we see that happen in the mainstream. And I'm going to bet it's startups that make it happen -- companies that can redesign work from scratch around lots of digital employees, rather than existing businesses. I'm thinking of travel brokers or insurance brokers or financial advisors who have only, say, two human employees but a dozen digital agent employees, and appear to be a 20-employee company.
Michael Maoz (MM): The question of which use cases will come first is related to, but distinct from, the question of when we will see broad adoption. I agree that the early use of AI agents like the Salesforce Agentforce Service Agent will be for the abundant lower-risk, lower-complexity use cases, such as automatically gathering the information that a customer service agent requires to handle a chat or phone call and displaying that information on the screen.
The AI agent will use a knowledge graph to present the targeted content that the human agent needs to help the customer. Another AI agent will formulate an email or text follow-up for the human agent to check and approve. At the end of the interaction, yet another AI agent will summarize the conversation.
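As a rough illustration of that division of labor, the sketch below stubs the three agents out as plain Python functions. The knowledge-graph lookup and the LLM calls are placeholders invented for this example, not the Agentforce API.

```python
# A hedged sketch of the service workflow described above: one agent gathers
# context for the human rep, another drafts a follow-up that a human must
# approve, and a third summarizes the interaction at the end.

def gather_context(customer_id: str, knowledge_graph: dict) -> dict:
    """Agent 1: pull the targeted content the human rep needs on screen."""
    return knowledge_graph.get(customer_id, {"history": [], "open_cases": []})


def draft_followup(transcript: list[str]) -> str:
    """Agent 2: propose a follow-up email; a human approves before sending."""
    # In practice this would be an LLM call grounded on the transcript.
    return f"DRAFT (pending human approval): recap of {len(transcript)} messages"


def summarize(transcript: list[str]) -> str:
    """Agent 3: write the wrap-up summary when the interaction ends."""
    return f"Summary: {len(transcript)}-message conversation, resolved."


if __name__ == "__main__":
    kg = {"cust-42": {"history": ["billing question"], "open_cases": ["#881"]}}
    chat = ["Hi, my invoice looks wrong.", "Let me check case #881 for you."]
    print(gather_context("cust-42", kg))
    print(draft_followup(chat))
    print(summarize(chat))
```

Note that the human stays in the loop at the one step with an external side effect: the drafted email is only ever a proposal.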
Another set of use cases will be the next generation of chatbots. Current chatbots have rigid knowledge bases, try to guess the customer's intent, and are poor at handling multimodal interactions involving images and other media. The emerging AI agent bots have intelligent knowledge answers, by which we mean that they not only answer questions but also carry out actions. They are based on LLMs, sure, but the prompts are much richer in several ways. Here are four very cool characteristics of the new prompts:
- Prompts know the role of the person asking (or the permissions of another AI agent that is asking) and can permit access to certain information for the answer and not other information.
- Prompts use advanced natural language modeling, are multimodal, and can focus on the knowledge relevant to the specific context, answering with a combination of text, audio, and images.
- Prompts can execute a set of actions, such as 'pull up the claims form' or 'retrieve order status', or analyze an attached photo.
- Prompts can act based on rules about privacy, compliance, or any industry regulation.

A valuable side effect of this filtering is that the required compute is greatly reduced, which is good for the environment.
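A minimal sketch of what the first, third, and fourth characteristics might look like in code follows. The roles, actions, and knowledge tiers are invented for illustration and do not reflect any real product's permission model; multimodality is omitted for brevity.

```python
# Hypothetical role-aware prompt handling: the handler exposes only the
# actions a role is allowed to call and answers only from the knowledge
# tier that role is permitted to see.

KNOWLEDGE = {
    "public": {"order_status": "Shipped three days ago."},
    "internal": {"claims_form": "Form CL-7, available on the internal portal."},
}

ACTIONS = {
    "retrieve_order_status": {"roles": {"customer", "rep"}},
    "pull_claims_form": {"roles": {"rep"}},
}


def handle_prompt(role: str, action: str) -> str:
    """Answer a prompt using only the actions and knowledge permitted to `role`."""
    allowed = ACTIONS.get(action, {}).get("roles", set())
    if role not in allowed:
        # Role-aware permissioning: refuse rather than leak restricted content.
        return "Denied: this role cannot perform that action."
    # Context filtering: answer only from the tier this role can see.
    tier = "internal" if role == "rep" else "public"
    if action == "retrieve_order_status":
        return KNOWLEDGE["public"]["order_status"]
    return KNOWLEDGE[tier]["claims_form"]


print(handle_prompt("customer", "pull_claims_form"))       # Denied: ...
print(handle_prompt("rep", "pull_claims_form"))            # Form CL-7, ...
print(handle_prompt("customer", "retrieve_order_status"))  # Shipped three days ago.
```

The compute side effect Maoz mentions falls out naturally: because the handler narrows the candidate knowledge before any model is invoked, far less context needs to be processed per request.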
Though I mentioned customer service, there are dozens of other uses such as crafting sales follow-up emails, exploring a group of phone calls, creating dynamic marketing segments and the right message for each segment, and for coders: translating natural language to code.
Also: 4 ways to help your organization overcome AI inertia
Those examples were all about the 'what'. To predict when AI agents will become mainstream, we can look at this in terms of Geoffrey Moore's Crossing the Chasm. He suggests that there are technology innovators -- usually in the low single digits as a percentage of all IT leaders -- who run ahead and embrace new technologies. Behind these innovative shock troops come the early adopters, who see their innovative peers and want to copy their successes.
At a high level, at some point over the next few quarters, the fascination with AI agents that drove the innovators will give way to a broader conversation among early adopters about budgets and innovation bandwidth. Most companies have a very limited ability to reallocate resources to new IT projects that require new processes and new supplier relationships. They will do so when there is the promise of dramatically new capabilities, such as new business models with a high probability of revenue growth or operational efficiency.
Also: Do AI tools make it easier to start a new business? 5 factors to consider
Unless an economic slowdown hampers the technology innovation cycle, we should see the early adopters start to roll out plans for scaled AI agent projects by the end of 2024. With the success stories better understood across industries, we can expect the second half of 2025 to bring more widespread uptake of scaled, coordinated AI agent programs across multiple departments and lines of business.
That leaves us with the majority of buyers -- in excess of 80% -- who sit back until the implications of the IT change are better known, the business benefits are clearer, and the true costs can be more dependably planned for.
Generative AI (Gen AI) has been with us for 18 months, but many businesses have tried and many have failed. Some may call that process experimentation, as only 9% have scaled up use cases to large numbers of employees. What's causing the scaling-up problem?
ET: Well, there are issues of security, bias, toxicity, governance guardrails, compliance with regulations, copyright and data provenance, the cost of the tools, and, more recently, the energy use of LLMs and its impact on sustainability goals. But the big one is, obviously, inaccuracy in the responses to prompts, caused by the data on which the models are grounded. Data sources and data quality are the primary culprits.
I've talked to companies getting 40% accurate answers in first testing, and they've found that, as the models learn and as they strip out poor-quality data and add better sources, accuracy improves by about 5% per week. Employees aren't 100% accurate either, but you need to at least match the accuracy employees achieve. The result is that lots of employees see the first set of suggested answers or emails or summaries, conclude it's rubbish, and refuse to adopt. So the challenge for practitioners is often whether to cut and run -- move to another use case more likely to yield benefits -- or to give the model time to learn and feed it new sources of data.
MM: There are a few realities businesses need to deal with in Gen AI. The first is the need to de-risk every Gen AI project. That requires good data governance, so that the data feeding the AI can be trusted. Then you need to be able to audit that data. Next, the project has to pass the 'ethical use' test, so that biases are not baked into results. A privacy layer has to exist. And for a business, unlike with external Gen AI tools, the Gen AI must be 'zero copy', meaning it does not store any of the data. Unless you can do all that, you might run foul of existing or emerging regulations, such as the EU's AI Act.
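Compressed into code, that gating logic might look like the minimal sketch below. The gate names mirror Maoz's list, but the boolean flags are a deliberate oversimplification: in reality each gate is a governance process with human sign-off, not a flag.

```python
# Hypothetical pre-launch gate for a Gen AI project: every de-risking check
# from the list above must pass before the project is allowed to ship.

GATES = ("data_governance", "auditability", "ethical_use", "privacy_layer", "zero_copy")


def ready_to_launch(project: dict[str, bool]) -> bool:
    """Block launch unless every de-risking gate has been signed off."""
    failed = [gate for gate in GATES if not project.get(gate, False)]
    if failed:
        print("Blocked pending:", ", ".join(failed))
        return False
    return True


# Example: a project that has done everything except verify zero-copy handling.
ready_to_launch({
    "data_governance": True,
    "auditability": True,
    "ethical_use": True,
    "privacy_layer": True,
    "zero_copy": False,
})  # prints "Blocked pending: zero_copy" and returns False
```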
Also: AI 'won't replace' creative skills, study finds
The second factor is that humans prefer humans, even when AI is more accurate. Consumers prefer a judge over an algorithm, even when the data shows that a judge is less consistent than the AI. The same goes for self-driving vehicles: most people say they prefer a car driven by a human, even if the driverless vehicle would be safer.
The final factor might be called 'the human touch'. In interactions ranging from a question about an invoice to medical advice to technical support, people are looking for empathy, concern, transparency, understanding, and trust. These traits are difficult to capture in software in a cost-effective way.
The technologies for the new world of connected customers have arrived. What one 'soft' factor might slow down adoption?
MM: You're right -- we've covered two of the three elements of change, technology and process, and the open question is, "Is management ready to embrace change?" Global HR surveys consistently show that employees say the worst day of any month is the day they have to speak with their manager. When you dig into the reasons, there is insecurity and a lack of transparency around the metrics that matter, low wages or poor conditions, inadequate training, a feeling that accountability only cuts one way, and a feeling that the manager does not trust them. Sadly, these complaints are more real than imagined. I tend to recommend looking at companies with the happiest employees and asking, "Why them, and does it pay off?"
In part, the happiest employees are also at the most successful companies, and we have to wonder: are they happy because it is more fun to be on the winning team, or are they on the winning team because working for that company is more satisfying? ADP, Apple, Ferrari, Costco, BMW, Cisco, Airbus, Rossmann, Samsung, and Salesforce are all among the top rated for employee satisfaction, and all are successful companies. They span six different industries, so industry isn't the explanation. They are well-managed workforces.
There is that word again: manager. A manager -- a good manager -- needs to be a leader. There is plenty of research from Harvard Business Review and others on this point. But they also need to be the type of entrepreneur who keeps an open mind about how to do what Toyota calls 'Better, Better, Never Best'.
They are equal parts leader and entrepreneur for their team. They take reasonable risks to improve, and they are not in the game primarily for themselves, but for the good of the company, the customer, and the employee. They also tend to look at the big picture when making decisions, and they make those decisions in collaboration with their team to the extent possible.
Also: A third of all generative AI projects will be abandoned, says Gartner
Finally, they care about the success of their direct reports. Just as they are quick to praise success, they are unafraid to help an employee who cannot perform at the required level find other opportunities inside or outside the business. It is this type of leader who will boldly lead their team in embracing AI agents as a new part of the team, dedicated to making every member more effective and successful.
ET: Management -- if I blend that with the rollout of agents, it's a really interesting question. Agent technology is about to pose a big challenge for managers. Not so much when agents act as assistants to employees, but when they replace employees, things will change for managers. The limited evidence we have so far is that when agents assist employees by offloading boring, mundane work, it's a great way for a manager to improve the low performers on the team; the benefits are far smaller for the highest performers. In many ways it's a boon for managers, although performance reviews become harder when the lower performers now look a lot like the high performers.
But we haven't seen much of agents acting as full digital employees yet. That changes the manager-employee relationship entirely. Now the manager has to decide whether the human or the digital agent can do the job better. Imagine a manager with five human employees and five digital agents on the team. That situation certainly sounds like more friction, and it will change the definition of a good manager.
But then it depends on which jobs are replaced. Many jobs and roles are disliked. Often early in a career, we're all given the least-liked tasks. Will anyone mourn those jobs going to agents? Likewise, many jobs are performed by contractors or outsourcers who are not managed day-to-day by internal managers.
Also: Make room for RAG: How Gen AI's balance of power is shifting
I suspect gig, temporary, contract, and outsourced workers will be among the first roles where agent employees are tried. But what if agent employees take the jobs everyone aspires to, leapfrogging the people hoping to be promoted into those roles? Then the manager's job becomes very painful.
In my view, the impact agents have on managers and employees depends on the speed of introduction. If a company chooses to replace 50% of its employees in less than two years, like Klarna, then it's likely to be painful for employees and managers, even if it's great for investors and executives. If that change takes place over a decade, it's very different. No one questions self-checkout in supermarkets now -- but it took a decade to roll out. So I'd expect managers' satisfaction or dissatisfaction to depend on the speed of implementation.
This article was co-authored by Ed Thompson, who is a senior vice president of market strategy at Salesforce, and Michael Maoz, who is senior vice president of innovation strategy at Salesforce.