Gartner Predicts 40% of Generative AI Solutions Will Be Multimodal By 2027

Analysts Explore the Latest AI Trends at Gartner IT Symposium/Xpo, September 9-11 on the Gold Coast



Forty percent of generative AI (GenAI) solutions will be multimodal (text, image, audio and video) by 2027, up from 1% in 2023, according to Gartner, Inc. This shift from single-modality to multimodal models enhances human-AI interaction and gives GenAI-enabled offerings an opportunity to differentiate.


Speaking at Gartner IT Symposium/Xpo on the Gold Coast, Erick Brethenoux, Distinguished VP Analyst at Gartner, said, “As the GenAI market evolves towards models natively trained on more than one modality, this helps capture relationships between different data streams and has the potential to scale the benefits of GenAI across all data types and applications. It also allows AI to support humans in performing more tasks, regardless of the environment.”


Multimodal GenAI is one of two technologies identified in the 2024 Gartner Hype Cycle for Generative AI whose early adoption has the potential to deliver notable competitive advantage and time-to-market benefits; the other is open-source large language models (LLMs). Both technologies are expected to have a high impact on organizations within the next five years.


Among the GenAI innovations Gartner expects will reach mainstream adoption within 10 years, two technologies have been identified as offering the highest potential: domain-specific GenAI models and autonomous agents (see Figure 1).


Figure 1: Hype Cycle for Generative AI, 2024

Source: Gartner (September 2024)


“Navigating the GenAI ecosystem will continue to be overwhelming for enterprises due to a chaotic and fast-moving ecosystem of technologies and vendors,” said Arun Chandrasekaran, Distinguished VP Analyst at Gartner. “GenAI is in the Trough of Disillusionment with the beginning of industry consolidation. Real benefits will emerge once the hype subsides, with advances in capabilities likely to come at a rapid pace over the next few years.”


Multimodal GenAI


Multimodal GenAI will have a transformational impact on enterprise applications by enabling new features and functionality that would otherwise be unachievable. The impact is not limited to specific industries or use cases, and it can be applied at any touchpoint between AI and humans. Today, many multimodal models are limited to two or three modalities, though this will increase over the next few years.


“In the real world, people encounter and comprehend information through a combination of different modalities such as audio, visual and sensing,” said Brethenoux. “Multimodal GenAI is important because data is typically multimodal. When single modality models are combined or assembled to support multimodal GenAI applications, it often leads to latency and less accurate results, resulting in a lower quality experience.”
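As an illustration only, the snippet below sketches how an application might send mixed text and image input to a single natively multimodal model in one call. It assumes the OpenAI Python client purely as an example interface; the model name, prompt and image URL are placeholders, and any multimodal model with a comparable API could be substituted.

```python
# Minimal sketch: one request combining text and image modalities.
# Assumes the OpenAI Python client; model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # an example natively multimodal model; substitute any equivalent
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the defect visible in this product photo."},
                {"type": "image_url", "image_url": {"url": "https://example.com/part-scan.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

Because a single model handles both modalities, the application avoids stitching together separate vision and language models, which is the latency and accuracy problem Brethenoux describes above.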


Open-Source LLMs


Open-source LLMs are deep-learning foundation models that accelerate the enterprise value of GenAI implementations by democratizing commercial access and allowing developers to optimize models for specific tasks and use cases. They also give enterprises access to developer communities in industry, academia and other research settings that are working toward common goals to improve the models and make them more valuable.


“Open-source LLMs increase innovation potential through customization, better control over privacy and security, model transparency, ability to leverage collaborative development, and potential to reduce vendor lock-in,” said Chandrasekaran. “Ultimately, they offer enterprises smaller models that are easier and less costly to train, and enable business applications and core business processes.”
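As a hedged illustration of the customization and local control described above, the sketch below loads an open-weight model with the Hugging Face transformers library and runs a single prompt. The model ID and prompt are placeholders, and the example assumes a GPU-equipped environment with the accelerate package installed.

```python
# Minimal sketch: running an open-source LLM locally with Hugging Face transformers.
# The model ID is only an example; any permissively licensed checkpoint would do.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-weight model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize our Q3 support tickets in three bullet points:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Running the model inside the enterprise's own infrastructure is what gives the privacy, security and vendor lock-in benefits Chandrasekaran refers to, since no data leaves the organization's environment.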


Domain-Specific GenAI Models


Domain-specific GenAI models are optimized for the needs of specific industries, business functions or tasks. They can improve use-case alignment within the enterprise, while delivering improved accuracy, security and privacy, as well as better contextualized answers. This reduces the need for advanced prompt engineering compared with general-purpose models and can lower hallucination risks through targeted training.


“Domain-specific models can achieve faster time to value, improved performance and enhanced security for AI projects by providing a more advanced starting point for industry-specific tasks,” said Chandrasekaran. “This will encourage broader adoption of GenAI because organizations will be able to apply them to use cases where general-purpose models are not performant enough.”
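One common way to build a domain-specific model is to adapt an open general-purpose model on curated domain data. The sketch below, offered only as an assumption-laden example, uses the Hugging Face peft library to attach LoRA adapters to a base model; the base model, target modules and hyperparameters are illustrative, not a recommended recipe.

```python
# Minimal sketch: adapting a general-purpose open model to a domain with LoRA adapters.
# Model ID, target modules and hyperparameters are placeholders for illustration only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor for the adapters
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trained

# Training on curated domain text (e.g. clinical notes, contracts, claim forms)
# would follow here with a standard transformers Trainer loop.
```

Because only the small adapter weights are trained on domain data, this kind of targeted training is cheaper than pretraining and is one route to the improved accuracy and reduced hallucination risk described above.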


Autonomous Agents


Autonomous agents are combined systems that achieve defined goals without human intervention. They use a variety of AI techniques to identify patterns in their environment, make decisions, invoke a sequence of actions and generate outputs. These agents have the potential to learn from their environment and improve over time, enabling them to handle complex tasks.
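The paragraph above describes a sense-decide-act cycle. The sketch below is a deliberately simplified, self-contained illustration of that loop; the environment, actions and thresholds are entirely hypothetical, and a production agent would typically delegate the decision step to an LLM or planner and invoke real tools.

```python
# Minimal sketch of the sense-decide-act loop described above, with no external services.
# The environment, actions and thresholds are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)

    def observe(self, environment: dict) -> dict:
        # Pick out the signals relevant to the goal from the current environment state.
        return {k: v for k, v in environment.items() if k in ("queue_length", "error_rate")}

    def decide(self, observation: dict) -> str:
        # Choose the next action; a production agent might call an LLM or planner here.
        return "scale_up" if observation.get("queue_length", 0) > 100 else "wait"

    def act(self, action: str, environment: dict) -> dict:
        # Invoke the chosen action and record it, so outcomes can inform later decisions.
        self.memory.append(action)
        if action == "scale_up":
            environment["workers"] = environment.get("workers", 1) + 1
        return environment

env = {"queue_length": 240, "error_rate": 0.02, "workers": 1}
agent = Agent(goal="keep queue_length under 100")
for _ in range(3):  # bounded loop instead of running indefinitely
    env["queue_length"] = max(0, env["queue_length"] - 80 * env["workers"])
    env = agent.act(agent.decide(agent.observe(env)), env)
print(env, agent.memory)
```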


“Autonomous agents represent a significant shift in AI capabilities,” said Brethenoux. “Their independent operation and decision capabilities enable them to improve business operations, enhance customer experiences and enable new products and services. This will likely deliver cost savings, granting a competitive edge. It also poses an organizational workforce shift from delivery to supervision.”
