10 key reasons AI went mainstream overnight - and what happens next


This AI thing has taken off really fast, hasn't it? It's almost like we mined some crashed alien spacecraft for advanced technology, and this is what we got. I know, I've been watching too much *Stargate*.

But the hyper-speed, crossing-the-chasm effects of generative AI are real. Generative AI, led by tools like ChatGPT, hit the world hard in early 2023. All of a sudden, vendors everywhere are incorporating AI features into their products, and our workflow patterns have changed considerably.

Also: The best AI for coding in 2025 (and what not to use - including DeepSeek R1)

How did this happen so quickly, essentially transforming the entire information technology industry overnight? What made this possible, and why is it moving so quickly?

In this article, I look at ten key factors that contributed to the overwhelmingly rapid advancement of generative AI and its adoption into our technology stacks and workday practices.

As I see it, the rapid rise of AI tools like ChatGPT and their widespread integration came in two main phases. Let's start with Phase I.

Phase I: Fundamental innovations

Researchers have been working with AI for decades. I did one of my thesis projects on AI more than 20 years ago, launched AI products in the 1990s, and have worked with AI languages for as long as I've been coding.

Also: 15 ways AI saved me time at work in 2024 - and how I plan to use it in 2025

But while all of that was AI, it was incredibly limited compared to what ChatGPT can do. As much as I've worked with AI throughout my educational and professional career, I was rocked back on my heels by ChatGPT and its brethren.

That's Phase I. The 2020s marked an era of fundamental AI innovation that took AI from solving specific problems with the ability to work in very narrow domains to the ability to work on almost anything. There are three key factors in this phase.

1. Advancements in transformer models

While AI has been researched and used for decades, for most of that time, it had some profound limitations. Most AIs had to be pre-trained with specific materials to create expertise.

In the early 1990s, for example, I shipped an expert system-based product called *House Plant Clinic* that had been specifically trained on house plant maladies and remedies. It was very helpful as long as the plant and its related malady were in the training data. Any situation that fell outside that data was a blank to the system.

Also: How to run DeepSeek AI locally to protect your privacy - 2 easy ways

AIs also relied on neural networks that processed words one at a time, which made it hard for a model to understand the difference between "a bank of the river" and "a bank in the center of town."

But in 2017, Google researchers published a paper called "Attention Is All You Need." In it, they proposed the transformer architecture, built around a mechanism called "self-attention" that lets a model focus on the words it identifies as important, allowing it to process entire sentences and thoughts at once. This attention mechanism enabled AIs to understand context (like whether the "bank" in a sentence refers to the side of a river or a building that holds money).
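
To make the idea concrete, here's a minimal sketch of self-attention in Python with NumPy. It's a toy, not the real thing: actual transformers use learned query/key/value projections and many attention heads, and the word vectors below are made up for illustration.

```python
import numpy as np

def self_attention(X):
    """Toy single-head self-attention with no learned weights:
    each word's output becomes a blend of every word in the
    sentence, weighted by how relevant the words are to each other."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)          # pairwise relevance scores
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)     # softmax: each row sums to 1
    return w @ X                           # context-mixed word vectors

# A made-up 3-word "sentence"; each row is one word's 4-number embedding
X = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 2.0],
              [1.0, 1.0, 1.0, 1.0]])

out = self_attention(X)
print(out.shape)  # one context-aware vector per word: (3, 4)
```

Because every word attends to every other word at once, the whole sentence informs each word's meaning, which is exactly what the one-word-at-a-time approach couldn't do.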

2. Widely-trained foundation models

The transformer approach gave researchers a way to train AIs on broad collections of information and determine context from the information itself.

That meant that AIs could scale to train on almost anything, which enabled models like OpenAI's GPT-3.5 and GPT-4 to be trained on corpora encompassing virtually the entire public Internet plus vast collections of books and printed materials.

Also: What is sparsity? DeepSeek AI's secret, revealed by Apple researchers

This makes them almost infinitely adaptable and able to draw on vast arrays of real-world information. That meant that AIs could be used for nearly any application, not just ones specifically built to solve individual problems. While we spent months training *House Plant Clinic* on plant data, ChatGPT, Google Gemini, and Microsoft Copilot can all diagnose house plant problems (and so much more) without specialized training.

The one gotcha has been the question of who owns all that training data. There are numerous lawsuits currently underway against AI vendors for training on (and using) data from copyrighted sources. This could restrict the data available to large language models and reduce their usefulness.

Another issue with the sort of infinitely scaled training data being used is that much of that information isn't vetted. I know this comes as a surprise to all of you, but information published on the Internet isn't always accurate, appropriate, or even sane. Vendors are working to strengthen guardrails, but we humans aren't even sure what is considered appropriate. Just ask two people with wildly divergent perspectives what the truth is, and you'll see what I mean.

3. Breakthroughs in hardware (GPUs and TPUs)

By the early 2020s, a number of companies and research teams developed software systems based on the transformer model and world-scale training datasets. But all of those sentence-wide transformation calculations required enormous computing capability.

Also: AI data centers are becoming 'mind-blowingly large'

It wasn't just the need to perform massively parallel matrix operations at high speed; it was also the need to do so while keeping power and cooling costs at a vaguely practical level.

Early on, it turned out that NVIDIA's gaming GPUs were capable of the matrix operations needed by AI (gaming rendering is also heavily matrix-based). But then, NVIDIA developed its Ampere and Hopper series chips, which substantially improved both performance and power utilization.

Also: 5 reasons why Google's Trillium could transform AI and cloud computing - and 2 obstacles

Likewise, Google developed its TPUs (Tensor Processing Units), which were specifically designed to handle AI workloads. Microsoft and Amazon also developed custom AI chips (Maia for Microsoft; Trainium and Inferentia for Amazon) to help them build out their AI data centers.

There were three major impacts from these huge AI-chip-driven data centers:

- World-scale training became affordable, at least to the biggest players.
- AI capabilities could be metered and sold via a SaaS model, making AI accessible to most businesses.
- AI processing speeds increased rapidly, allowing for the beginning of real-time and near real-time AI analysis of data (which has proven to be mission-critical for self-driving cars).

Phase II: Market forces drive adoption

Okay, so now we have working technology. What of it? I mean, how many times has an engineering team produced a product or capability it thought was revolutionary, only to have their work output die due to lack of practicality or market acceptance?

But here, now, with generative AI, the market forces are what are driving the real change. Let's dig into seven more key factors.

4. ChatGPT for everyone, and API access

And then came ChatGPT. It's a funny name, and it took a while for most of us to learn it. The name literally describes what it is: a chat program built on a generative, pre-trained transformer. But despite a name that only a geek could love, in early 2023, ChatGPT became the fastest-growing app of all time.

OpenAI made ChatGPT free for everyone to use. Sure, there were usage limitations in the free version. But it was as easy to use as a Google search, or easier. All you had to do was open the site and type in your prompt. That's it. And because of the three innovations we discussed earlier, ChatGPT's quality of response was breathtaking. Everyone who tried it suddenly realized they were touching the future.

Also: Are ChatGPT Plus or Pro worth it? Here's how they compare to the free version

Then, OpenAI opened the ChatGPT models to other programmers through an API. All any programmer needed was a weekend of learning and a credit card number to add world-changing AI to any application. The cost per API call wasn't much more than for other commercial APIs, which suddenly meant that AI was a very high-profile, easy addition that could expand a company's product line with a super-hot new income-producing service.
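
To give a rough sense of how low that barrier was: the request a developer sends to a chat-completions-style endpoint is just a small JSON payload. The endpoint URL, model name, and field names below follow OpenAI's chat API conventions as a sketch, so treat them as an illustration and check the vendor's current documentation before shipping anything.

```python
import json

# Assumed chat-completions endpoint (OpenAI-style); confirm against current docs.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt, model="gpt-4o-mini"):
    """Assemble the JSON body for a single-turn chat request."""
    return {
        "model": model,  # hypothetical default; pick any model the vendor offers
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

body = build_chat_request("Why are my house plant's leaves turning yellow?")
print(json.dumps(body, indent=2))
# Sending it is one HTTPS POST with an "Authorization: Bearer <API key>" header.
```

That's the whole integration surface: a payload like this out, a JSON response with the model's reply back in.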

Barrier to entry? What barrier to entry?

5. Open source acceleration

While vendor-supported APIs like those from OpenAI can reduce time to market considerably, they can also lead to vendor lock-in. To prevent total reliance on proprietary technologies, the open-source community has embraced AI in a big way.

Open-source models (Llama, Stable Diffusion, Falcon, BLOOM, T5, etc.) provide non-proprietary, self-hosted AI capabilities without relying on big technology monopolies. Open source also democratizes AI by allowing developers to create AI solutions for areas outside the guardrails the big model providers are required to keep in operation.

Also: The best open-source AI models: All your free-to-use options explained

Platforms like those from Hugging Face provide easy-to-use and easy-to-test tools that allow developers of varying skill levels to integrate AI into their projects quickly.
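
For a sense of "easy to use," here's what the Hugging Face pipeline pattern looks like. This sketch assumes the transformers library is installed (pip install transformers) and will download a default sentiment model on first run; the default model and its output labels can change between library versions.

```python
def run_sentiment(texts):
    """Classify sentiment using the transformers library's default
    pretrained model for the "sentiment-analysis" task."""
    from transformers import pipeline  # pip install transformers
    classifier = pipeline("sentiment-analysis")
    return classifier(texts)

if __name__ == "__main__":
    try:
        print(run_sentiment(["Generative AI saved me hours this week."]))
    except ImportError:
        print("Install the transformers library to try this sketch.")
```

Three lines of working code to get a pretrained model making predictions is a big part of why adoption spread so far beyond the big labs.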

Then, of course, there are the classic benefits of open source: large-scale collaboration, continuous improvements, community-generated and validated optimizations, and the introduction of new features, including some too obscure to be profitable for a big vendor but necessary for certain projects.

All of this gives businesses of all sizes, researchers, and even nights-and-weekends developers the opportunity to add AI into their projects, which, in turn, is accelerating AI adoption across a wide range of application uses.

6. Consumer and enterprise demand

The thing was, generative AI wasn't just hype. It worked and provided value. Separate from help with writing (which ZDNET policy prohibits for its writers), I documented 15 different ways AI tangibly helped me in 2024 alone.

Also: The work tasks people use Claude AI for most, according to Anthropic

These uses ranged from programming and debugging help, to fixing photos, to running sentiment analysis, to creating album covers, to generating monthly images for my wife's e-commerce store, to creating moving masks in video clips, to cleaning up bad audio, to tracking me during filming, to doing project research, and so much more.

And I'm not alone. Small and large businesses alike, as well as students and individual contributors, all noticed that generative AI could help, for real. Not only were the valuations of the AI companies skyrocketing, but consumers actually bought -- and really used -- the AI tools that suddenly became available.

7. Virality and network effects

For years, decades really, AI was far from mainstream. Sure, there were limited AIs in video games. Expert systems were built that helped solve specific problems for some companies. There was a lot of promise and research. But when it came to "Show me the money," there was never the overwhelming return that vulture capitalists and their ilk required from tech investments.

Also: From zero to millions? How regular people are cashing in on AI

Then, all of a sudden, Aunt Marge was talking about ChatGPT during family gatherings. AI was a thing, it was astonishing, and oh-my-gosh, the things it could do. Did you know you could make it talk like a pirate? Did you know you could get it to write a *Star Trek* story? Did you know it could analyze your siloed business data and give you sentiment analysis in minutes without a bit of programming? And did you know it could write code that worked?

Within a few months, ChatGPT hit 100 million active users. A year later, that doubled to 200 million active users.

8. Competitive market pressure

Suddenly, AI was a headliner rather than the personality quirk of the geeky neighbor you ask over to fix your PC, but would really prefer went away once the PC was working again and they'd been paid in fresh-baked cookies.

Oddly specific analogies about my geeky past aside, AI was clearly an opportunity. OpenAI was suddenly worth billions, and it seemed like Google, Microsoft, Meta, Amazon, Apple, and all the rest had been left behind.

Investment and licensing deals were everywhere, and AI was being baked into mainstream products either as a bonus feature or (far more often) as a very nice upsell to a monthly annuity. Microsoft had Copilot, Google had Gemini, Meta had Meta AI, Amazon had Q, and Apple… eventually had Apple Intelligence (for whatever that's worth).

9. Legislative and regulatory lag

This new AI boom took on characteristics of the wild, wild west. Governments were just trying to get their heads around what it all was, and whether this was an enormous economic opportunity or an existential threat. Hint: it's both.

The US government set up some plans for AI oversight, but they were tepid at best. AI vendors warned of catastrophe if AI weren't regulated. Lawsuits over copyright issues complicated matters. Then, the new administration changed the game, with a focus on substantially reduced regulation.

All this opens the door for AI companies and businesses using AI to innovate and introduce new capabilities. This is great for rapid growth and innovation, but it also means the technology is running without guardrails. It definitely fuels the mainstreaming of AI technology, but it could also be very, very baaaaaad.

10. Continuous innovation and investment

So, then we get to the rinse-wash-repeat phase of our discussion. AI isn't going anywhere. All of the self-fulfilling prophecies are fueling new innovation because they actually work. Major companies are continuing to not only make billion-dollar bets on the technology, but are also offering compelling products and services that can provide real value to their customers.

More and more companies and individuals are investing in AI startups and ongoing services. We're seeing breakthroughs like multimodal AI with text/images/video/audio, autonomous agents, and even AIs used to code AIs.

Also: What is Perplexity Deep Research, and how do you use it?

The closest example I can think of to this virtuous cycle was the app economy of the late 2000s. Data speeds became fast enough and affordable enough for phones to always be connected to the Internet, startups offered app services that proved to be tangibly valuable, those companies grew huge and continued to offer services, and more and more investment into mobile-first computing paid off for both consumers and producers.

It's very likely that a virtuous cycle is also driving AI innovation and production, pushing generative AI and other AI-based services very much into the mainstream, where it's unlikely to ever go away.

Phase III: The future

When I went to college in the 1980s and majored in computer science, my mom said that all she wanted from me was a computer that would vacuum her floors. Now, we have a wide range of little robots that go forth and vacuum. This morning, while having coffee, I tapped "Vac and mop bedroom," and Wally the Narwal did just that.

My dream is to be able to say, "Alexa, bring me coffee," and have a device actually make me a cup of coffee and bring it to me while I'm sitting here writing. Don't laugh. Whether it's Tesla, Apple, or Meta, real work is being done right now on humanoid robots.

Given how many times my Alexa screws up and how many times ChatGPT makes up stuff to save face, I'm not exactly sure that having a romping, stomping robot in my living room or office is a good idea. But I do want my coffee.

Also: What is DeepSeek AI? Is it safe? Here's everything you need to know

Stay tuned. The past two years have been a wild ride, and I suspect we've only just seen the beginning.

What do you think has been the most significant factor in AI's rapid adoption? Have you incorporated AI tools like ChatGPT into your daily workflow? If so, how have they changed the way you work or create?

Do you see AI as a long-term game-changer, or do you think we're in the midst of a hype cycle that will eventually stabilize? And what about the ethical and regulatory concerns? Do you think AI development is moving too fast for proper oversight? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
