AMD teams up with Cisco, Nokia, and Jio Platforms for Open Telecom AI platform
Promises to improve network security while cutting operational costs.

AMD, Cisco, Nokia, and Jio Platforms unveiled plans to develop the Open Telecom AI Platform at Mobile World Congress 2025. The initiative is designed to enhance telecom platforms with AI-driven automation, security, and efficiency, offering a scalable model for service providers worldwide.
Under the terms of the agreement, AMD will provide high-performance computing solutions, including EPYC CPUs, Instinct GPUs, DPUs, and adaptive computing technologies. Cisco will contribute networking, security, and AI analytics solutions, including Cisco Agile Services Networking, AI Defense, Splunk Analytics, and Data Center Networking. Nokia will bring expertise in wireless and fixed broadband, core networks, IP, and optical transport. Finally, Jio Platforms Limited (JPL) will be the platform's lead organizer and first adopter, providing the initial deployment and a reference model for global telecom operators.
For AMD, getting into a potentially successful telco platform is a big deal. Its arch-rival, Intel, has a major lead with telecom projects, having invested massive amounts of money in 5G and other telecom technologies.
"AMD is proud to collaborate with Jio Platforms Limited, Cisco, and Nokia to power the next generation of AI-driven telecom infrastructure," said Lisa Su, chair and CEO of AMD. "By leveraging our broad portfolio of high-performance CPUs, GPUs, and adaptive computing solutions, service providers will be able to create more secure, efficient, and scalable networks. Together we can bring the transformational benefits of AI to both operators and users and enable innovative services that will shape the future of communications and connectivity."
The platform will function as a multi-layer intelligence system, integrating AI at every level of the telecom infrastructure. It will incorporate various AI approaches, including autonomous AI agents, large and specialized small language models, and traditional machine learning techniques to ensure adaptable and intelligent network management. Open APIs will be a key platform component, enabling seamless integration with existing telecom infrastructure and optimizing network functions for enhanced efficiency.
A primary goal of the project is to improve network security while cutting operational costs. By embedding AI into network management, the system will create self-regulating telecom environments capable of identifying risks, adjusting operations dynamically, and delivering a more secure and reliable service.
"Nokia possesses trusted technology leadership in multiple domains, including RAN, Core, fixed broadband, IP and optical transport. We are delighted to bring this broad expertise to the table in service of today's important announcement," said Pekka Lundmark, President and CEO at Nokia. "The Telecom AI Platform will help Jio to optimize and monetize their network investments through enhanced performance, security, operational efficiency, automation and greatly improved customer experience, all via the immense power of artificial intelligence. I am proud that Nokia is contributing to this work."

Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.
Jame5
I know it's not fair, as it's fundamentally different forms of AI, but I hear "AI telecom solutions" and my first thought is a hallucinated connection between endpoints that shouldn't be talking to each other.
Imagine calling your wife and being connected to a random person overseas, etc. Or trying to reach a server, and the AI telecom platform decides to reroute your connection due to "traffic shaping" and connects you to a completely different server. Both scenarios are wildly unlikely, but they're also the kind of thing that comes to mind when thinking of jamming AI into telecom platforms.
I know specific, purpose-driven AI is much more reliable than general-purpose AI solutions, but it's hard to shake that thought.
bit_user
The article said:
Under the terms of the agreement, AMD will provide high-performance computing solutions, including EPYC CPUs, Instinct GPUs, DPUs, and adaptive computing technologies.
So far, I think ROCm hasn't exactly demonstrated telco-grade reliability.
Given Nvidia's DPU portfolio, it's interesting Cisco didn't prefer to partner with them. Maybe Cisco feels too threatened by their Mellanox business?
AkroZ
bit_user said:
Given Nvidia's DPU portfolio, it's interesting Cisco didn't prefer to partner with them. Maybe Cisco feels too threatened by their Mellanox business?
It seems the choice was made by JPL, and Nvidia doesn't like open standards; they prefer selling a proprietary solution like BlueField, which is a competitor to Cisco.
jp7189
Jame5 said:
I know it's not fair, as it's fundamentally different forms of AI, but I hear AI telecom solutions and my first thought is a hallucinated connection between endpoints that shouldn't be talking to each other.
Imagine calling your wife and being connected to a random person overseas, etc. Or trying to reach a server, and the AI telecom platform decides to reroute your connection due to "traffic shaping" and connects you to a completely different server. Both scenarios wildly unlikely, but also the kind of thing that comes to mind when thinking of jamming AI into telecom platforms.
I know specific, purpose-driven AI is much more reliable than general-purpose AI solutions, but it's hard to shake that thought.
If an AI is trained on what a 'normal' network looks like, it can be very good at recognizing 'not normal' with significant nuance. From that, it can decide to drop packets, update route weights, etc., with more granularity than a hard-coded rule.
AI is also very good at complex error recovery. Take a look at Stable Diffusion: though not the same thing at all, we can draw some parallels. SD takes a starting array of completely random pixels (pure noise) and 'fixes' the noise back into something recognizable. The concept can be applied to error-prone communications like wireless: start with a noisy, almost unrecognizable signal, and 'fix' it back into something useful.
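As a toy illustration of the "learn normal, flag not-normal" idea (purely illustrative, with made-up numbers; real systems use far richer models and features than a single z-score):

```python
# Toy anomaly detector: learn what 'normal' per-flow traffic looks like,
# then flag flows that deviate strongly from it. Numbers are hypothetical.
import statistics

# Hypothetical baseline: packets/sec observed on known-good flows
baseline = [98, 102, 101, 97, 103, 99, 100, 104, 96, 100]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(rate, threshold=3.0):
    """Flag a flow whose rate is more than `threshold` standard
    deviations from the learned mean (a crude z-score test)."""
    return abs(rate - mean) / stdev > threshold

print(is_anomalous(101))  # a flow near the learned baseline
print(is_anomalous(450))  # a flood-like spike, well outside it
```

The nuance jp7189 describes comes from replacing that single statistic with a model over many features (ports, timing, flow graphs), but the shape of the decision is the same: score deviation from learned normal, then act on the score.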
bit_user
jp7189 said:
If an AI is trained on what a 'normal' network looks like, it can be very good at recognizing 'not normal' with significant nuance. From that it can decide to drop packets, update route weights, etc. with more granularity compared to a hard coded rule.
From my non-expert perspective, I think you have good points about anomaly detection (e.g. DDoS attacks) and adaptive routing optimization.
jp7189 said:
AI is also very good at complex error recovery. Take a look at Stable Diffusion: though not the same at all, we can draw some parallels. SD takes a starting array of completely random pixels (pure noise) and 'fixes' the noise back to something recognizable. The concept can be applied to error-prone communications like wireless. Start with a noisy, almost unrecognizable signal, and 'fix' it back to something useful.
I just don't see stream-level error correction happening in the core. It makes much more sense to delegate this to edge devices, which already have TOPS of AI compute, the use of which also doesn't incur a cost to core network operators.
A more chilling use of AI in the network core would be for content filtering & censorship. However, the use of end-to-end encryption should limit the potential for such measures. Perhaps there's enough to be gleaned simply by looking at overall communication patterns in who's talking to whom.
jp7189
bit_user said:
I just don't see stream-level error correction happening in the core. It makes much more sense to delegate this to edge devices, which already have TOPS of AI compute, the use of which also doesn't incur a cost to core network operators.
I mentioned it because Nokia and wireless were mentioned, but you're right that something like that will be at the edge and not the core.
jp7189
bit_user said:
A more chilling use of AI in the network core would be for content filtering & censorship. However, the use of end-to-end encryption should limit the potential for such measures. Perhaps there's enough to be gleaned simply by looking at overall communication patterns in who's talking to whom.
I imagine your comment is focused on government/public infrastructure, but it's fairly common for corporate networks to decrypt packets for inspection, and to block or force a downgrade of encryption that doesn't comply. E.g. Chrome connecting to Google services can be force-downgraded to a standard encryption suite that is decryptable.
bit_user
jp7189 said:
I imagine your comment is focused on government/public infrastructure, but it's fairly common for corporate networks to decrypt packets for inspection, and to block or force a downgrade of encryption that doesn't comply. E.g. Chrome connecting to Google services can be force-downgraded to a standard encryption that is decryptable.
Yeah, I was thinking this sort of initiative involving GPUs and DPUs is aimed at core infrastructure, because I don't expect most corporations to spend that much money on their network infrastructure. Sure, you have firewalls, and some of those will be enforcing content policies, but it makes much more sense for them to put anything more compute-intensive on the actual PCs, which they also control. They already paid for those, and putting it "at the edge" scales better.
jp7189
bit_user said:
Yeah, I was thinking this sort of initiative involving GPUs and DPUs is aimed at core infrastructure, because I don't expect most corporations to spend so much money on their network infrastructure. Sure, you have firewalls and some of those will be enforcing content policies, but it makes much more sense for them to put anything more compute-intensive on the actual PCs, which they also control. They already paid for those, and putting it "at the edge" scales better.
Content filtering doesn't need packet decryption, but IPS and behavior analytics are very rudimentary without it, and malware detection is impossible. Therefore, packet decryption is common at the firewall. It is by far the most compute-intensive operation a firewall does, and it is dominated by Intel and custom ASICs today. I'm not aware of any hardware appliance that uses AMD.