Generative AI adoption outpacing all other forms of AI
Just a few years ago, enterprise adoption of generative AI was insignificant.
When Gartner conducted the 2021 edition of its “AI in the Enterprise Survey,” machine learning, natural language processing (NLP) and graph technology were the types of AI enterprises were using to inform decisions.
Generative AI (GenAI), meanwhile, was not even a consideration.
But when the research and advisory firm surveyed over 600 respondents in late 2023 for the 2024 edition of its report on AI adoption, generative AI was not only a consideration for many enterprises but also the most popular form of AI being deployed.
The November 2022 release of ChatGPT by AI vendor OpenAI was a significant advance in generative AI technology, enabling NLP and content generation in ways no previous platform had.
Following its launch, and the subsequent releases of platforms such as Bard from Google — now Gemini — and Claude from Anthropic, enterprises quickly began using ChatGPT and other large language models (LLMs) together with proprietary data to apply generative AI to their own businesses.
By embedding generative AI assistants in existing work applications and developing domain-specific models trained on proprietary data, organizations can enable employees who lack skills such as coding and data literacy to use data to inform decisions. In addition, by reducing repetitive tasks such as writing routine code, generative AI tools can make trained experts more efficient.
As a result, organizations are quickly increasing their adoption of generative AI capabilities, Gartner’s survey found.
Recently, Gartner analyst Leinar Ramos discussed the findings of the survey, including the different ways organizations are using generative AI, common traits shared by organizations seeing the most success with the technology and the barriers to adoption as they attempt to make generative AI a significant means of informing decisions.
In addition, Ramos spoke about potential disillusionment with generative AI as hype evolves into reality and the rising need for AI governance as generative AI sparks increasing interest in all types of AI.
Gartner’s survey found that generative AI is now the most frequently deployed form of AI in the enterprise. Before generative AI, what was the most popular form of enterprise AI adoption?
Leinar Ramos: In 2021, the number one technique deployed was more traditional machine learning, such as machine learning for predictions. Natural language processing, optimization techniques, graph techniques and rule-based systems were the other options. This year was the first year we introduced generative AI as one of the options, and it was already the top answer, with 29% of organizations saying generative AI is deployed or in use today, ahead of all of those other techniques.
That doesn’t mean generative AI is taking over the entire AI space. Some of those other techniques are better fits for particular use cases, so it is important for organizations to recognize that generative AI is not the right tool for every use case.
What is an example of a use case for which generative AI might be best and an example of a use case for which a different type of AI such as machine learning might be best?
Ramos: We did an analysis to find the use-case families where generative AI is a better fit than other techniques. The three use-case families we believe generative AI is really good at are content generation; knowledge discovery, such as creating Q&A systems and enabling enterprise search; and conversational interfaces. Those are areas where it is a really good fit. But there are use cases that are not the right fit for generative AI, though this can change because the technology is evolving very quickly.
We identified four use-case families where generative AI is not the right fit. The first is forecasting, such as inventory predictions and predictive maintenance. These are use cases where you’re trying to make a prediction from a set of historical data, and there are much better AI techniques for this than generative AI, particularly predictive machine learning. The second is planning and optimization; at least for now, generative AI models tend to be notoriously bad at planning ahead. The third family is decision engineering. The final one is autonomous systems, such as automated trading.
For generative AI, you typically need a human in the loop. There is a push toward generative AI agents that will be able to operate more autonomously, but for now, it is not the best fit.
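To make the forecasting contrast concrete, here is a minimal sketch of the kind of predictive machine learning Ramos points to: fitting a regression model to historical demand and projecting it forward, something a generative model is not built to do. It uses scikit-learn, and the data, variable names and feature choice are illustrative assumptions, not details from the survey.

```python
# Minimal sketch: predictive machine learning for inventory forecasting,
# the kind of use case Ramos says fits traditional ML better than GenAI.
# The data and feature choice are illustrative, not from the survey.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical weekly demand history: week index -> units sold.
weeks = np.arange(1, 13).reshape(-1, 1)            # feature: week number
demand = np.array([120, 132, 128, 140, 151, 149,
                   160, 158, 171, 175, 182, 190])  # target: units sold

model = LinearRegression().fit(weeks, demand)

# Project the fitted trend four weeks ahead.
future_weeks = np.arange(13, 17).reshape(-1, 1)
forecast = model.predict(future_weeks).round(1).tolist()
print(dict(zip(range(13, 17), forecast)))  # e.g. {13: 195.4, 14: ...}
```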
Why has generative AI adoption grown so quickly, surpassing all other forms of AI after not even being a consideration just a few years ago?
Ramos: A big driver is how generative AI is consumed in organizations. The main way organizations are using generative AI is through applications. It’s not by building things from scratch or customizing models. We found that organizations that have already deployed generative AI tend to do so by utilizing GenAI that is embedded in existing applications. According to the survey, 34% of respondents say that this is their primary way of using generative AI. Customizing models is 25%, and training and fine-tuning models is 21%.
As generative AI features are embedded into many different applications, that really infuses generative AI across the organization because the surface area of those applications is quite large.
What does embedding generative AI into an application look like for the end user?
Ramos: There’s a wide variety of ways it can be surfaced. A conversational interface could be a big part of that. Content generation can be a use case as well. And knowledge discovery, where you can ask questions about the documents you have in your [applications], is another use case.
In addition to embedding, what are some of the more common ways enterprise generative AI adoption is taking place?
Ramos: The second-most common is the customization of generative AI models with techniques like prompt engineering, which includes retrieval-augmented generation [RAG]. The third is fine-tuning or retraining models. The difference between that and customization is that with fine-tuning, organizations are changing the model itself: you start with a pre-trained model and use your own data to continue training it, whereas with RAG, you’re not changing the model. The fourth is using standalone generative AI tools like ChatGPT or Google Gemini.
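To illustrate the distinction Ramos draws, here is a minimal sketch of the RAG pattern: retrieve relevant text and inject it into the prompt, leaving the model itself untouched. The toy keyword retriever, the documents and the call_llm stub are all illustrative assumptions, not any specific vendor's API; fine-tuning, by contrast, would update the model's weights with proprietary data rather than the prompt.

```python
# Minimal RAG sketch: augment the prompt with retrieved context instead of
# changing the model. The retriever, documents and call_llm stub are
# illustrative assumptions, not a specific vendor's API.
DOCS = [
    "Customers may return items within 30 days with a receipt.",
    "Standard shipping takes 3 to 5 business days.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Toy retriever: rank documents by keyword overlap with the question.
    Production systems would use a vector database instead."""
    words = set(question.lower().split())
    ranked = sorted(DOCS, key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Stub standing in for whatever model endpoint an organization uses."""
    return f"[model response to: {prompt[:40]}...]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (f"Answer using only this context:\n{context}\n\n"
              f"Question: {question}")
    return call_llm(prompt)

# With fine-tuning, the prompt would stay plain and the model's weights
# would instead be updated on proprietary data.
print(answer("How long do customers have to return items?"))
```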
How would someone use ChatGPT or Google Gemini as a standalone tool in the enterprise?
Ramos: It goes back to some of the use cases we discussed earlier, such as content generation. To some extent, you could also use it for knowledge discovery … and as an assistant to carry out certain tasks while you’re working on something else. It could be used to help improve productivity.
What are the biggest challenges to AI adoption and, in particular, more widespread use of generative AI?
Ramos: Within our survey, we focused on a subset of 9% of organizations that are more advanced in AI than others and have deployed generative AI. The top three barriers were technical challenges; the cost of running generative AI initiatives, which is a big concern; and difficulty getting the required talent.
Technical challenges are a broad category that includes everything from how to create a good RAG system, including components such as vector databases and prompt templates, to the guardrails that need to be put in place to make those systems resilient. There is a wide set of potential challenges of a technical nature.
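As one concrete illustration of such a guardrail, an output check can be layered onto a RAG pipeline to reject answers that aren't grounded in the retrieved context. This is a minimal sketch: the word-overlap heuristic, threshold and fallback message are illustrative assumptions, and production guardrails typically rely on stronger checks such as citation verification or a separate validation model.

```python
# Minimal guardrail sketch: reject model answers that are not grounded in
# the retrieved context. The overlap heuristic and threshold are
# illustrative; production guardrails use stronger checks.
import string

FALLBACK = "I can't answer that from the available documents."

def _words(text: str) -> set[str]:
    """Lowercased words with punctuation stripped, short words dropped."""
    return {w.strip(string.punctuation).lower()
            for w in text.split() if len(w.strip(string.punctuation)) > 3}

def grounded(answer: str, context: str, threshold: float = 0.5) -> bool:
    """Crude grounding check: share of answer words found in the context."""
    answer_words = _words(answer)
    if not answer_words:
        return False
    return len(answer_words & _words(context)) / len(answer_words) >= threshold

def guarded_answer(raw_answer: str, context: str) -> str:
    return raw_answer if grounded(raw_answer, context) else FALLBACK

# Example: an unsupported answer is replaced by the fallback.
ctx = "Customers may return items within 30 days with a receipt."
print(guarded_answer("Items can be returned within 30 days.", ctx))
print(guarded_answer("We offer lifetime warranties on everything.", ctx))
```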
What roles do accuracy, trust and security play in potentially slowing generative AI adoption? Are they a barrier to more widespread deployment?
Ramos: We did ask about this in the survey, and trust appears as the fourth most common barrier. One of the challenges is governing generative AI, and that is very much related to trust and accuracy.
Interestingly, we didn’t find that generative AI implementation suffers from challenges around cultural resistance or obtaining sponsorship. Those were at the bottom of our list of barriers. My view is that the popularity of generative AI has actually decreased resistance, and there is a good window of opportunity to drive adoption.
But those other concerns are real. We definitely hear concerns around governance, trust, risk and security.
How can those concerns be addressed? Are there means at this early stage to make generative AI more accurate and secure?
Ramos: One of the things we found when identifying the mature organizations (the roughly 9% of all organizations that deploy AI more widely and keep more use cases in production for longer) was that they had four things in common. One was that they invest in AI trust, risk and security management. More than 70% of that subset considered their investment in AI privacy, security and risk to be impactful for different business outcomes, including regulatory compliance and cost optimization.
As a result, we believe these tools around risk and security management can make AI systems more transparent and predictable, which helps with risk mitigation and improves the system performance that drives those outcomes around [increasing trust].
Beyond that investment in AI trust, risk and security management, what are some other common traits shared by organizations adopting AI more broadly than others?
Ramos: The survey was really about broader AI adoption rather than just generative AI, so what I said about mature organizations applies to AI broadly. We found that what sets mature AI organizations apart is that they focus on four foundational capabilities.
One is that focus on AI trust, risk and security management. The second is a scalable AI operating model, meaning a dedicated, central AI team along with distributed capabilities. The third is AI engineering: a systematic way of building and deploying AI products. The most mature organizations tend to double down on AI engineering activities such as testing, developing and deploying models. The final one is investing in people, through things like generative AI literacy programs and change management.
What will generative AI adoption look like a couple of years from now?
Ramos: We do see adoption continuing. We speak to a lot of vendors, and many of them have generative AI features on their roadmaps. There’s a lot of investment going into the space. But we also think there is a risk.
If you look at our hype cycles, generative AI is at the peak of inflated expectations right now. When that happens, there tends to be a mismatch between where the technology actually is and the expectations around it, and that mismatch can cause disillusionment. In two years, we might see some of that disillusionment, and some organizations might have overextended themselves. That’s why the analysis I mentioned earlier, of use cases where generative AI is not the right fit, can help organizations navigate the hype that’s out there.
As AI adoption is extended to more users within organizations through generative AI capabilities — much as analytics was extended to more users through self-service tools — will AI governance take on greater importance the way data governance did a decade or so ago?
Ramos: AI governance is a big topic already, but it’s becoming more important. One of the questions we asked was what key impacts generative AI has had on the broader implementation of AI in their organizations, and the increased importance of AI governance was one of the top three responses. The second-most popular response was increased AI adoption across the organization, so growing AI adoption and AI governance are very linked.
Generative AI has acted as a catalyst for AI adoption across the organization, increasing the importance of things like AI governance. As AI expands across the organization and more people have access to it, the risk surface grows, and those risks become more visible. We can see this in our day-to-day conversations: we’re inundated with calls about AI governance, which wasn’t the case a couple of years ago.
Editor’s note: This Q&A has been edited for clarity and conciseness.
Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.