Intel CEO says it's "too late" for them to catch up with AI competition — reportedly claims Intel has fallen out of the "top 10 semiconductor companies" as the firm lays off thousands across the world
Dark days ahead, or perhaps already here.

Intel has been in a dire state these past few years, with seemingly nothing going right. Its attempt to modernize x86 with a hybrid big.LITTLE architecture, à la ARM, failed to make a meaningful impact in terms of market share gains, a shortfall only made worse by last-gen's Arrow Lake chips barely registering a response against AMD's lineup. On the GPU front, the Blue Team served an undercooked product far too late; while not entirely hopeless, it was nowhere near enough to challenge the industry's dominant players. All of this compounds into a grim reality, seemingly confirmed by new CEO Lip-Bu Tan in a leaked internal conversation today.
According to OregonTech, it's borderline a fight for survival for the once-great American innovation powerhouse, which no longer even counts itself among the industry's top contenders. Despite Tan's insistence, Intel would still rank fairly well given its extensive legacy. While companies like AMD, Nvidia, Apple, TSMC, and even Samsung might be more successful today, chipmakers like Broadcom, MediaTek, Micron, and SK Hynix are not above the Blue Team in terms of sheer impact. Regardless, talking to employees around the world in a Q&A session, Intel's CEO allegedly shared these bleak words: "Twenty, 30 years ago, we were really the leader. Now I think the world has changed. We are not in the top 10 semiconductor companies."
As evident from the quote, this is a far cry from a few decades ago when Intel essentially held a monopoly over the CPU market, making barely perceptible upgrades each generation in order to sustain its dominance. At one time, Intel was so powerful that it considered acquiring Nvidia for $20 billion. The GPU maker is now worth $4 trillion.
It never saw AMD as an honorable competitor until it was too late, and Ryzen pulled the rug out from under the Blue Team. Now, more people are building AMD systems than ever before. Not only that, but AMD also powers your favorite handhelds, like the Steam Deck and ROG Ally X, alongside the biggest consoles: the Xbox Series X|S and PlayStation 5. AMD works closely with TSMC, another one of Intel's competitors, whereas Intel makes its own chips in-house.
This vertical integration was once a core strength for the firm, but it has become more of a liability these days. Faltering nodes that can't quite match TSMC's have arguably held Intel's processors back from their full potential. In fact, starting in 2023, the company tasked TSMC with manufacturing the GPU tile on its Meteor Lake chips. The partnership deepened with Lunar Lake, whose entire compute tile is essentially made by TSMC, and now, in 2025, roughly 30% of fabrication has been outsourced to TSMC. It's a long-overdue admission of failure that could've been avoided had Intel designed its leading-edge CPUs with external manufacturing in mind from the start; ultimately, its own foundry was the limiting factor.
As such, Intel has been laying off thousands of employees across the world in a bid to cut costs, which have skyrocketed due to heavy R&D spending on future nodes; the company posted a $16 billion loss in Q3 last year. Intel's resurrection has to be a "marathon," said Tan, as he hopes to turn around the company culture and "be humble" in listening to the shifting demands of the industry. Intel wants to be more like AMD and NVIDIA, who are faster, meaner, and more ruthless competitors these days, especially with the advent of AI. Of course, artificial intelligence has been around for a while, but it wasn't until OpenAI's ChatGPT that a second big bang occurred, ushering in a new era of machine learning. It's an era almost entirely powered by Nvidia's data center GPUs, highlighting another sector where Intel failed to capitalize on its position.
"On training, I think it is too late for us," Lip-Bu Tan remarked. Intel instead plans to shift its focus toward edge AI, aiming to bring AI processing directly to devices like PCs rather than relying on cloud-based compute. Tan also highlighted agentic AI—an emerging field where AI systems can act autonomously without constant human input—as a key growth area. He expressed optimism that recent high-level hires could help steer Intel back into relevance in AI, hinting that more talent acquisitions are on the way. “Stay tuned. A few more people are coming on board,” said Tan. At this point, Nvidia is simply too far ahead to catch up to, so it's almost exciting to see Intel change gears and look to close the gap in a different way.
That being said, Intel now lags behind in data center CPUs, too, where AMD's EPYC lineup has overtaken it in the past year, further denting the company's confidence. Additionally, last year, Intel's board forced former CEO Pat Gelsinger out of the company and replaced him with Lip-Bu Tan, who appears to have a distinctly different, more streamlined vision. Instead of pursuing several fronts at once, such as CPU, GPU, and foundry, Tan wants to home in on what the company can do well at any one time.
This development follows long-standing rumors of Intel splitting in two, spinning off its foundry division as an independent subsidiary and turning the main Intel into a fabless chipmaker. Both AMD and Apple, Intel's rivals in the CPU market, operate like this, and Nvidia has always used TSMC or Samsung to build its graphics cards. It would be interesting to see the Blue Team shed some weight and move more freely. However, it's too early to speculate, given that 18A, Intel's proposed savior, is still a year away.

Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he’s not working, you’ll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun.
-
usertests
The article said: Its attempt to modernize x86 with a hybrid big.LITTLE architecture, à la ARM, failed to make a meaningful impact
It had the impact of setting back AVX-512 adoption by 5+ years.
It seems to do well at boosting low-end multi-threading, e.g. 10-14-core chips vs. AMD's 6 cores. The next test for E-cores will be Wildcat Lake, finally bringing hybrid to the Atom lineup with the main benefit being a huge increase in single-threaded performance.
The article said: Lunar Lake chips barely registering a response against AMD’s cache-stacked X3D lineup
Those products don't compete with each other. Lunar Lake does well against AMD's Krackan Point, which I guess is the competitor in configuration and even price.
The article said: Intel instead plans to shift its focus toward edge AI, aiming to bring AI processing directly to devices like PCs rather than relying on cloud-based compute.
By the time anyone cares about an NPU in their PC, it will be in every new AMD chip. Maybe starting with Zen 6.
The article said: This development follow's
-
bit_user
The article said: Its attempt to modernize x86 with a hybrid big.LITTLE architecture, à la ARM, failed to make a meaningful impact
Beyond what @usertests said, I'd characterize the impact of hybrid as yielding mixed results.
Hybrid definitely helped on the multithreading front and created some breathing room for Intel to double down on its P-cores (which are significantly bigger and more complex than AMD's). That emphasis on making the P-cores as strong as possible has also helped their lightly-threaded performance.
On the negative side of the ledger, their hybrid CPUs have been beset with thread scheduling woes that have dimmed the view of gamers towards Intel's E-cores. ThreadDirector was their deus ex machina solution to these problems, but turned out to fall far short of the billing. I think there's no way Intel can truly solve these problems only on the backend. They need to work with both OS and application developers to find better solutions for hybrid thread scheduling.
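For what it's worth, Windows does already expose one small per-thread hint in this direction. Below is a minimal sketch, assuming a Windows 11 era SDK (where SetThreadInformation accepts ThreadPowerThrottling), of an app tagging one of its own threads with the "EcoQoS" hint that nudges the scheduler toward E-cores; the problem described above is that almost no applications bother to call it:

```cpp
#include <windows.h>

// Sketch: tag the calling thread as throttle-friendly ("EcoQoS"). On hybrid
// CPUs, the Windows scheduler then tends to park it on E-cores, keeping
// P-cores free for latency-sensitive work. Requires a Windows 11 era SDK.
bool prefer_efficiency_cores() {
    THREAD_POWER_THROTTLING_STATE state{};
    state.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask   = THREAD_POWER_THROTTLING_EXECUTION_SPEED; // opt in to throttling
    return SetThreadInformation(GetCurrentThread(), ThreadPowerThrottling,
                                &state, sizeof(state)) != FALSE;
}
```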
The article said: Intel wants to be more like AMD and NVIDIA, who are faster, meaner, and more ruthless competitors these days
Intel has always been mean. For instance, in its dirty dealings with OEMs to try and block AMD's access to the markets it dominated. In the modern chip industry, I think only Nvidia and Qualcomm are possibly meaner. -
Fe4rlessCloak
The article's headline: Intel CEO says it's "too late" for them to catch up with AI competition — claims Intel has fallen out of the "top 10 semiconductor companies" as the firm lays off thousands across the world
This entire story was sourced from a 'leaked' memo. Your current headline makes it sound like a matter of fact, which it isn't. Tom's Hardware, like other sites, probably has editors pushing for view-grabbing headlines, but a leak or tip should require more careful wording. It puts you in the same clickbaity category as WCCFtech. Consider alternatives like 'reportedly', 'allegedly', 'purportedly', etcetera.
The article said: Its attempt to modernize x86 with a hybrid big.LITTLE architecture, à la ARM, failed to make a meaningful impact
How so? Skymont has been a key driver behind Lunar Lake's efficiency, where lightweight tasks can be offloaded to the LPE-cores without engaging the P-core ring. The deficits of axing HT from ARL/LNL have been more than compensated for by Skymont. Heck, even AMD is rumored to take the LPE-core approach with Zen 6 for efficiency purposes. The only downside to E-cores I can think of is the lack of AVX-512, which should be returning with Nova Lake?
The article said: Only made worse by last-gen's Lunar Lake chips barely registering a response against AMD’s cache-stacked X3D lineup
Lunar Lake is not "last-gen". And these are entirely different product stacks you're comparing. Lunar Lake is a mobile-only, efficiency-first chip, while the only X3D chip on mobile I'm aware of is Fire Range (Ryzen 9000HX3D); apples and oranges. They're not even comparable: one is designed for gaming laptops and portable workstations, while the other is tailor-made from the ground up for lightweight, low-power devices, as a Windows/Linux alternative to MacBooks.
The article said: The final nail may have come with Intel’s recent loss of contract manufacturing for its upcoming flagship 18A node
Missing keyword: reportedly
The article said: However, it's too early to speculate given that 18A, Intel's proposed savior, is still a year away, so until Nova Lake launches, we'll just be witnesses to a new Titanic.
The first 18A product you'll see is Panther Lake, not Nova Lake; the former is slated for an early 2026 launch, likely at CES, or in about 5-6 months. -
bit_user
Fe4rlessCloak said: The only downside to E-cores I can think of is the lack of AVX-512,
Then you haven't been following the matter in gaming circles. Gamers often either use tools to prevent game threads from being scheduled on E-cores or just disable them in BIOS. That said, there are some games that manage to perform better with E-cores enabled, which almost makes the situation worse, since it means there's no blanket solution that applies to all games.
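For readers unfamiliar with those tools, this is roughly the mechanism they rely on. A sketch, assuming a single processor group (at most 64 logical CPUs) and that Windows reports P-cores with the highest EfficiencyClass on hybrid parts; the helper name pin_to_p_cores is made up for illustration:

```cpp
#include <windows.h>
#include <vector>

// Sketch of what E-core "parking" utilities roughly do: enumerate CPU sets,
// keep only the highest EfficiencyClass (Windows gives P-cores the higher
// value on hybrid parts), and clamp the process to those cores.
// Assumes a single processor group, i.e. at most 64 logical CPUs.
bool pin_to_p_cores() {  // hypothetical helper name, for illustration
    ULONG len = 0;
    GetSystemCpuSetInformation(nullptr, 0, &len, GetCurrentProcess(), 0);
    if (len == 0) return false;
    std::vector<unsigned char> buf(len);
    auto* base = reinterpret_cast<PSYSTEM_CPU_SET_INFORMATION>(buf.data());
    if (!GetSystemCpuSetInformation(base, len, &len, GetCurrentProcess(), 0))
        return false;

    UCHAR top = 0;                          // pass 1: find the P-core class
    for (ULONG off = 0; off < len;) {
        auto* e = reinterpret_cast<PSYSTEM_CPU_SET_INFORMATION>(buf.data() + off);
        if (e->Type == CpuSetInformation && e->CpuSet.EfficiencyClass > top)
            top = e->CpuSet.EfficiencyClass;
        off += e->Size;
    }
    DWORD_PTR mask = 0;                     // pass 2: build the affinity mask
    for (ULONG off = 0; off < len;) {
        auto* e = reinterpret_cast<PSYSTEM_CPU_SET_INFORMATION>(buf.data() + off);
        if (e->Type == CpuSetInformation && e->CpuSet.EfficiencyClass == top)
            mask |= DWORD_PTR(1) << e->CpuSet.LogicalProcessorIndex;
        off += e->Size;
    }
    return mask != 0 && SetProcessAffinityMask(GetCurrentProcess(), mask) != FALSE;
}
```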
VR is another area where E-cores are viewed very negatively. Pretty much any realtime application is susceptible to performance detriments from critical-path threads being scheduled on E-cores.
Intel's latest solution to this mess it created is to make E-cores bigger and more powerful, thereby narrowing the gap between them and P-cores. However, this comes at the expense of the E-cores' traditional strengths. You'd get better performance density and efficiency by keeping the E-cores smaller, lower-clocking, and just adding more of them. -
Fe4rlessCloak
bit_user said: Then you haven't been following the matter in gaming circles. Gamers often either use tools to prevent game threads from being scheduled on E-cores or just disable them in BIOS. That said, there are some games that manage to perform better with E-cores enabled, which almost makes the situation worse, since it means there's no blanket solution that applies to all games.
Most of these can be resolved through proper scheduling, which is easier said than done since we haven't achieved a proper solution yet.
bit_user said: VR is another area where E-cores are viewed very negatively. Pretty much any realtime application is susceptible to performance detriments from critical-path threads being scheduled on E-cores.
Intel's latest solution to this mess it created is to make E-cores bigger and more powerful, thereby narrowing the gap between them and P-cores. However, this comes at the expense of the E-cores' traditional strengths. You'd get better performance density and efficiency by keeping the E-cores smaller, lower-clocking, and just adding more of them.
The Skymont LPE-core cluster on LNL measures 6.89mm^2 (N3B), compared to 5.90mm^2 on MTL (Intel 4). Not really apples to apples, since we're comparing LPE to E cores, but this increase in size does come with a massive bump to the IPC and performance, and that's important since Crestmont LPE-cores on MTL were too slow (and on an older process node) to run background tasks properly. It's a tradeoff, and I think it really shines in products like Lunar Lake, where the LPE-cores are strong enough to handle background applications without engaging the P-core ring bus.
At the same time, I get your point. You're pushing for density (smaller E-cores, and we might get 6-8 E-cores in place of a traditional cluster). The engineers at Intel likely know much more than us, and they must've considered this approach sometime during development. It's essentially a choice between 4 powerful E-cores and 8 less-powerful ones. -
bit_user
Fe4rlessCloak said: Most of these can be resolved through proper scheduling, which is easier said than done since we haven't achieved a proper solution yet.
Agreed, but it cannot be done solely by the CPU and kernel. It needs the involvement of userspace, and this is where I've seen zero movement. Intel continues to steadfastly act as though it believes these problems can be solved entirely on the backend, but they cannot.
Fe4rlessCloak said: The Skymont LPE-core cluster
Why are you focused on LPE? Most Skymont cores aren't LPE-cores, they're regular E-cores. Lunar Lake is a fairly niche product. Arrow Lake is the volume product. I never restricted what I said to just LPE cores, either. I was talking about the inclusion of E-cores in mainstream products. If Intel had limited its use of E-cores to just LPE cores in laptops, they'd be much less controversial.
Fe4rlessCloak said: The engineers at Intel likely know much more than us, and they must've considered this approach sometime during development.
Their track record says otherwise. The only reason we're calling their decisions into question is precisely because they designed products with such significant tradeoffs.
I give them credit for their willingness to take the bold move of going hybrid. However, their execution was clearly not flawless and I think their efforts at damage-control have needlessly undermined their strategy. They should've worked the problem from all angles, but someone at Intel seems to have decided that touching threading APIs was a red line. The longer they refuse to go there, the more their solution will get watered down and the longer we'll go without a proper solution. -
jg.millirem
Fe4rlessCloak said: This entire story was sourced from a 'leaked' memo. Your current headline makes it sound like a matter of fact, which it isn't. Tom's Hardware, like other sites, probably has editors pushing for view-grabbing headlines, but a leak or tip should require more careful wording. It puts you in the same clickbaity category as WCCFtech. Consider alternatives like 'reportedly', 'allegedly', 'purportedly', etcetera.
I see "reportedly" right there in the headline. Regardless, leaked memos in the hands of skilled journalists have become some of the most reliable insights behind corporate and government opaqueness, whether you like what the memos say or not. -
thestryker
bit_user said: Then you haven't been following the matter in gaming circles. Gamers often either use tools to prevent game threads from being scheduled on E-cores or just disable them in BIOS. That said, there are some games that manage to perform better with E-cores enabled, which almost makes the situation worse, since it means there's no blanket solution that applies to all games.
A lot of the people advocating and doing this aren't doing so for real reasons. There were certainly a lot of random issues at ADL launch, but currently not so much. I'm sure there are specific outliers, and anyone affected probably screams about it as is required on the internet though.
This was the last writeup I remember seeing: https://www.techpowerup.com/review/rtx-4090-53-games-core-i9-13900k-e-cores-enabled-vs-disabled/2.html
bit_user said: They need to work with both OS and application developers to find better solutions for hybrid thread scheduling.
With the Intel of 5-6 years ago, this had a chance of happening, but today? No. I do certainly agree that this would be the best way forward. Scheduling isn't some sort of mystery either, as CDPR was able to fix CP2077 (ARL performance was bad) in a patch that came ~1.5mo after ARL launch.
bit_user said: In the modern chip industry, I think only Nvidia and Qualcomm are possibly meaner.
Avago... I mean Broadcom is absolutely king of this hill. :ROFLMAO: -
rluker5
bit_user said: Beyond what @usertests said, I'd characterize the impact of hybrid as yielding mixed results.
Hybrid definitely helped on the multithreading front and created some breathing room for Intel to double down on its P-cores (which are significantly bigger and more complex than AMD's). That emphasis on making the P-cores as strong as possible has also helped their lightly-threaded performance.
Very true. The thermal density of a P-core-only chip would limit multicore and single-core performance compared to what they are doing with ARL:
https://tpucdn.com/review/intel-core-ultra-arrow-lake-preview/images/06_small.jpg
Note how nicely they broke up those P-core heat islands.
bit_user said: On the negative side of the ledger, their hybrid CPUs have been beset with thread scheduling woes that have dimmed the view of gamers towards Intel's E-cores. ThreadDirector was their deus ex machina solution to these problems, but turned out to fall far short of the billing. I think there's no way Intel can truly solve these problems only on the backend. They need to work with both OS and application developers to find better solutions for hybrid thread scheduling.
The E-cores usually, but not always, help gaming performance. HT usually, but not always, hurts gaming performance from what I've seen. Sometimes by a lot. It's a shame there isn't a ton of testing, but here is a big chunk, even if it is a bit old:
View: https://youtu.be/LcQUUmi3rWI?t=684
View: https://youtu.be/I8DJITHWdaA?t=637
Personally, I think taking care of the outliers with something like Intel APO is probably the best solution, but Intel seems to have stopped updating that after they made it available for 13th gen.
bit_user said: Intel has always been mean. For instance, in its dirty dealings with OEMs to try and block AMD's access to the markets it dominated. In the modern chip industry, I think only Nvidia and Qualcomm are possibly meaner.
Was Intel making deals with the OEMs to sell their products during or after AMD was selling products that were reverse-engineered copies of Intel chips made from stolen IP?
And isn't this the worst example ever of a foundry stealing a client's IP? And isn't it used as an example of why you can't trust Intel as a fab, when Intel was in fact the victim?
The fact that Intel allowed AMD to continue to exist after this is an example of how they weren't always mean. -
bit_user
rluker5 said: Very true. The thermal density of a P-core-only chip would limit multicore and single-core performance compared to what they are doing with ARL:
https://tpucdn.com/review/intel-core-ultra-arrow-lake-preview/images/06_small.jpg
Note how nicely they broke up those P-core heat islands.
I know what you mean, but traditionally the E-cores actually have higher thermal density. I haven't run the numbers for Arrow Lake, but I expect it's still true.
It's conceivable that both are actually true, but often in different contexts. In gaming or other realtime tasks, where most of the action is on the P-cores, those would be the hotter cores. However, something like rendering might still heat up the E-cores more, especially now that they have much closer floating-point performance to the P-cores.
rluker5 said: Personally, I think taking care of the outliers with something like Intel APO is probably the best solution, but Intel seems to have stopped updating that after they made it available for 13th gen.
APO is another backwards solution. It's more effective because it has more specific knowledge about the apps, but imagine if the apps could be written in a way that told the scheduler what APO knows about them. Then, any such app could be as fast as the APO version (or faster, for reasons I won't go into), without Intel or any 3rd party having to know anything about it!
That's the power of a better threading API. But, for some reason, they seem to think everyone wants to keep multithreading apps like it's still the 1990s.
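To make the "better threading API" idea concrete, here is a purely hypothetical sketch; ThreadRole and set_thread_role exist in no OS or library, and the stub merely forwards to the one real hint Windows has today (the same power-throttling call sketched earlier), which is exactly the gap being complained about:

```cpp
#include <windows.h>
#include <thread>

// Purely hypothetical API: no OS ships ThreadRole/set_thread_role today.
// Apps would declare what a thread is *for*, and the scheduler would act on
// it, so the per-game knowledge APO collects travels with the app instead.
enum class ThreadRole { CriticalPath, FramePacing, BulkCompute, Background };

void set_thread_role(HANDLE thread, ThreadRole role) {
    // Stub: map the declaration onto the one real hint Windows exposes now.
    // Background -> may be throttled (E-cores); anything else -> opt out.
    THREAD_POWER_THROTTLING_STATE s{};
    s.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    s.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    s.StateMask   = (role == ThreadRole::Background)
                        ? THREAD_POWER_THROTTLING_EXECUTION_SPEED : 0;
    SetThreadInformation(thread, ThreadPowerThrottling, &s, sizeof(s));
}

int main() {
    std::thread telemetry([] { /* background work */ });
    // With MSVC's STL, native_handle() is the underlying Win32 HANDLE.
    set_thread_role(static_cast<HANDLE>(telemetry.native_handle()),
                    ThreadRole::Background);
    telemetry.join();
}
```

Note how coarse even this stub's target is: a real role API could carry deadlines or producer/consumer relationships between threads, which no current hint can express.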