Finding return on AI investments across industries

The market is now three years into the ChatGPT era, and many experts have begun reaching for terms like "bubble" to explain why generative AI isn't delivering material returns outside of a handful of technology providers.

In September, the NANDA report made a splash as authors and influencers piled onto its finding that 95% of AI pilot projects fail to scale or deliver clear, measurable ROI. McKinsey had earlier published a similar finding, suggesting that agentic AI will be how enterprises unlock significant operational benefits. And at the Wall Street Journal's Technology Council Summit, AI technology leaders advised CIOs to stop worrying about ROI on AI investments, because measuring returns is difficult and any measurements they produced would likely be wrong.

This leaves technology leaders in a precarious position: robust technology stacks already support their business operations, so what are the benefits of introducing new technologies?

For decades, deployment strategies have followed a consistent rhythm: technology operators replace individual components of the stack while avoiding disruption to business-critical workflows. Better or cheaper technology is of no use, for example, if it compromises disaster recovery.

Prices may rise when a new buyer acquires mature middleware, but losing enterprise data because you're halfway through migrating to a new technology costs far more than paying a premium for the stable technology you've run your business on for 20 years.

So how do businesses benefit from investing in the latest technological transformations?

AI principle one: your data is your value

Most writing about AI and data focuses on the engineering challenge of grounding an AI model in business data repositories that reflect past and present business realities.

However, one of the most common enterprise AI use cases begins with querying a model alongside uploaded file attachments. This narrows the model's scope to the contents of those files, which speeds up time to an accurate response and reduces the number of queries needed to reach the best answer.
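A minimal sketch of this pattern, assuming plain text files and a hypothetical build_scoped_prompt helper (the actual file-upload mechanics vary by provider):

```python
from pathlib import Path

def build_scoped_prompt(question: str, attachment_paths: list[str]) -> str:
    """Constrain the model to the attached documents by placing their
    contents directly in the prompt (a simplified stand-in for a
    provider's file-upload feature)."""
    sections = []
    for path in attachment_paths:
        text = Path(path).read_text(encoding="utf-8", errors="ignore")
        sections.append(f"--- Document: {Path(path).name} ---\n{text}")
    return (
        "Answer using ONLY the documents below. "
        "If the answer is not in them, say so.\n\n"
        + "\n\n".join(sections)
        + f"\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    # Hypothetical example: create a small document, then build the prompt
    # that would be sent to whichever model API your stack already uses.
    Path("q3_ops_review.txt").write_text("Q3 shipping delays were attributed to port congestion.")
    print(build_scoped_prompt("What caused the Q3 shipping delays?", ["q3_ops_review.txt"]))
```

The point is not the string formatting but the scoping: the model answers from your documents rather than from the open web, so fewer follow-up queries are needed.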

This tactic relies on feeding your own business data to the AI model, so two factors deserve attention alongside data preparation: first, ensuring appropriate privacy controls in your systems; and second, developing a smart negotiating strategy with model providers, who cannot advance their cutting-edge models without access to proprietary data such as yours.

Recently, Anthropic and OpenAI have struck major deals with enterprise data platforms and data owners, because there is not enough valuable first-party data available publicly online.

Most businesses instinctively protect the confidentiality of their data and design business processes with trade secrets in mind. But from an economic standpoint, especially given how expensive each model API call actually is, trading selective access to your data for services or pricing concessions may be the right strategy. Rather than treating model purchase and implementation as a typical supplier/procurement exercise, look for opportunities for mutual benefit as your supplier's model and your business's adoption of it evolve together.

AI principle two: boring by design

According to The Information, 182 new generative AI models came to market in 2024 alone. When GPT-5 launched in 2025, many models released just 12 to 24 months earlier became unavailable until subscription customers threatened to cancel. Their previously stable AI workflows were built on models that no longer worked. Their technology providers assumed customers would be excited about the latest models and didn't appreciate the value business processes place on stability. Video gamers may happily tweak their builds throughout the life of their gaming rig's components and upgrade the whole system just to play a newly released title.

That behavior does not translate to business operations, however. While individual employees may use the latest models to process documents or draft content, back-office operations cannot absorb a tech-stack change three times a week to keep up with the latest model releases. Back-office work is inherently boring.

The most successful AI deployments have focused on solving problems unique to the business, often running in the background to speed up or enhance routine but required tasks. Freeing legal reviews or expense audits from manual cross-checking of individual reports, while keeping the final decision in human hands, combines the best of both.

The important point is that none of these tasks requires constant upgrades to the latest model. This is also an area where abstracting your business workflows away from direct model APIs can provide long-term stability while still allowing the underlying machinery to be upgraded or updated at the pace of your business.
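One way to picture that abstraction, as a sketch with hypothetical class and model names rather than any particular vendor's SDK:

```python
from abc import ABC, abstractmethod

class TextModel(ABC):
    """The narrow interface business workflows depend on, instead of a
    vendor SDK, so the underlying model can change on your schedule."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class PinnedVendorModel(TextModel):
    """Hypothetical adapter wrapping one provider and one pinned model
    version. Swapping vendors or versions means editing this class,
    not every workflow that calls it."""

    def __init__(self, model_name: str = "vendor-model-2024-06"):
        self.model_name = model_name

    def complete(self, prompt: str) -> str:
        # Placeholder for the real vendor SDK call; callers never see
        # the vendor-specific details.
        return f"[{self.model_name}] response to: {prompt[:40]}..."

def flag_expense_exceptions(model: TextModel, report_text: str) -> str:
    """A back-office workflow that depends only on the abstraction."""
    return model.complete(f"List policy exceptions in this expense report:\n{report_text}")

print(flag_expense_exceptions(PinnedVendorModel(), "Hotel: $840, Meals: $260, Taxi: $95"))
```

When a provider retires a model, only the adapter changes; the workflows built on top keep running unchanged.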

AI principle three: minivan economics

The best way to avoid disruptive economics is to design systems around user needs rather than vendor specifications and benchmarks.

Too many companies still fall into the trap of purchasing new hardware or new classes of cloud service based on the latest vendor-defined benchmarks, rather than starting their AI journey from what their business can consume, and at what pace, with the capabilities they have deployed today.

Ferrari's marketing is effective, and the cars are genuinely great, but they drive at the same speed in a school zone and don't have enough trunk space for groceries. Keeping in mind the cost of every remote server and model a user request touches leads to lean designs, with workflows restructured to minimize spending on third-party services.

Too many companies have discovered that their customer-facing AI workflows add millions of dollars in operating costs, and they ultimately spend more development time and implementation rework just to make those costs predictable. Meanwhile, companies that chose to run their systems at human-readable speeds (under 50 tokens per second) have deployed scalable AI applications with minimal additional overhead.
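To make the "human-readable speed" figure concrete, here is a toy sketch (the pacing function and example text are hypothetical; the real savings come from sizing inference capacity to this pace rather than from client-side delays):

```python
import sys
import time

def stream_at_human_pace(tokens, tokens_per_second: float = 40.0) -> None:
    """Pace output below ~50 tokens per second, roughly the speed a person
    reads, rather than provisioning inference for maximum throughput."""
    delay = 1.0 / tokens_per_second
    for token in tokens:
        sys.stdout.write(token + " ")
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\n")

# Hypothetical example: tokens would normally arrive from a model's streaming API.
stream_at_human_pace("Your refund was approved and should post within 3 to 5 business days.".split())
```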

There is a great deal to unpack in this new automation technology. The best guidance is to start hands-on, keep core technology components decoupled so they don't disrupt stable applications over the long term, and leverage the fact that AI makes your business data valuable to your technology providers.

This content was created by Intel. It was not written by the editors of MIT Technology Review.
