95% of generative AI implementations in enterprise 'have no measurable impact on P&L', says MIT — flawed integration cited as why AI projects underperform
AI is a powerful tool, but only if used correctly. The study shows that AI tools must adapt to an organization's existing processes in order to work effectively.

Many companies are rushing to implement various AI tools into their operations, but most of these pilot programs fail, according to an MIT study. Fortune reported that 95% do not hit their performance targets, not because the AI models failed to work as intended, but because generic AI tools like ChatGPT do not adapt to the workflows already established in the corporate environment.
As per the report, the study's findings purportedly demonstrate that only "about 5% of AI pilot programs achieve rapid revenue acceleration." It says "the vast majority stall," and deliver "little to no measurable impact" on profit and loss. The findings are based on 150 interviews, a survey of 350 employees, and an analysis of 300 public deployments of AI.
The successful 5% differ from the rest because they focus on one thing and do it well. “Some large companies’ pilots and younger startups are really excelling with generative AI,” MIT researcher and lead author Aditya Challapally told the publication. “It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools.”
Another issue many organizations face is setting the wrong priorities for these tools. The research reveals that AI works best in back-office automation, taking over the administrative and repetitive tasks that many corporations tend to outsource. However, over half of the money spent on AI projects reportedly goes to sales and marketing, departments that arguably need a human touch, especially as most buyers are still humans, not machines.
MIT says that two out of three projects that use specialized AI providers succeed, while only a third of in-house AI tools deliver the expected results. Despite this, many organizations in highly regulated fields, like finance and healthcare, prefer building their own AI programs. They likely do this to reduce regulatory risk, which can be especially damaging if an AI leaks private information — something that has happened in the past.
The research project also touched on AI’s effect on the workforce. While there haven’t been widespread layoffs because of AI yet, MIT reports that companies aren’t refilling positions that become vacant when a staff member leaves. This was most prevalent in customer support and administrative roles — entry-level jobs that have usually been outsourced. It could be a harbinger of what several CEOs, including Anthropic’s Dario Amodei and Ford’s Jim Farley, are warning about: that AI might wipe out half of all entry-level white-collar jobs within five years.
Jowi Morales is a tech enthusiast with years of experience working in the industry. He’s been writing with several tech publications since 2021, where he’s been interested in tech hardware and consumer electronics.
-Fran-:
Exec: "Look everyone! This great new invention is called a Hammer. We need to start using it in our processes to increase productivity!"
Employees: "Sir... We bake cakes..."
Regards.
George³:
"AI" didn't exist. They implemented models badly trained on trash data from the internet. That's all, my friends.
Notton:
"Look, if people don't buy into AI, this bubble is going to burst and it's going to crash the market!" -some exec
IDK about other people, but ads selling AI products have taken over from scam ads.
So I'm just going to assume AI is also a scam at worst, Ponzi scheme at best.
Alex/AT:
Let me guess. These "5% of AI pilot programs" are coming from "AI" developers themselves and related hardware/software vendors?
edzieba:
I'm surprised even 5% haven't failed, though I guess that's "haven't failed yet". LLMs have little to no practical utility, and in areas where big MLNNs do have utility (e.g. image processing), they had already been in use for years before the "deep learning" boom rebranded MLNNs to make them seem a shiny new idea.
SomeoneElse23:
I used ChatGPT the other day to create some very useful code.
That said, ChatGPT is as dumb as a rock. The first two iterations wouldn't even compile. The third iteration had routines in it that were never called.
After very explicit direction and tweaking, and a number of further iterations, I did end up with very useful code, faster than I could have written it myself.
I found the process very enlightening. There's nothing intelligent about "AI". It does precisely what you say.
Only with intelligence guiding it was I able to be more productive than I would have been otherwise.
palladin9479:
Alex/AT said: "Let me guess. These '5% of AI pilot programs' are coming from 'AI' developers themselves and related hardware/software vendors?"
"AI" is just super good pattern recognition. It works best in environments where it's given a ton of data, then instructed to find patterns and act on them.
ravewulf:
Even if we assume AI can or will be able to fill entry-level jobs, that's still bad for the industry, as it chokes off the future talent pool for higher-level positions.
fiyz:
-Fran- said: "Exec: 'Look everyone! This great new invention is called a Hammer. We need to start using it in our processes to increase productivity!' Employees: 'Sir... We bake cakes...' Regards."
You sir, lack creative imagination. Do you not bake meat pies? Let me introduce you to what I also call: the meat tenderizer.