
Telcos explore AI-driven NIaaS revenue opportunities

By: Stephen Douglas

Network Infrastructure as a Service provides AI opportunities for telecom providers to build AI-capable infrastructure leveraging their investments in 5G, fiber, and edge computing, and deliver multitenant, scalable, and secure networking and hosting solutions.

The race to trial, invest in, and deploy AI-powered solutions is on, upending market dynamics across a range of sectors.

Worldwide spending on AI is projected to reach $632 billion by 2028, according to research firm IDC. This represents a CAGR of 29%, driven by the rapid integration of AI and GenAI into a wide range of products. AI hardware, including servers, storage and infrastructure as a service, is poised to be one of the largest categories of spending, second only to software and AI application development and deployment.

While AI innovation can level playing fields, it also has the potential to expand existing competitive gaps, especially between deep-pocketed companies and more resource-constrained organizations like small businesses or government institutions. These players often lack the necessary infrastructure, capital, and expertise to support advanced AI deployments.

This reality represents a near-term opportunity for telecom providers to leverage their investments in 5G, fiber, and edge computing to provide multitenant, scalable, and secure networking and hosting solutions. These Network Infrastructure as a Service (NIaaS) offerings would provide on-demand access to optimized networking capabilities and remove the burdens associated with planning, financing, managing, and building AI-capable infrastructure.

Telco operators are especially well-positioned to capitalize on the unique demands of the public sector and national mission-critical industries, which often impose strict data sovereignty requirements and favor local communication providers.

Spirent has identified several types of services operators can offer with immediate appeal for target audience segments:

  • Scalable, on-demand AI networking and access to xPUs (CPU, GPU, DPU) as a Service for multi-tenant AI, including support for large language models (LLMs) tailored to specific industries and for sovereign and enterprise AI applications.

  • Secure communications, including Zero Trust Network Access (ZTNA), Secure Access Service Edge (SASE), quantum-secure networking, and private edge deployments.

  • Ultra-low-latency network connections required by AI applications and supported by 5G, edge computing, and fiber-based networks.

  • Dedicated wireless and wireline network slices tuned to meet AI inference and AI training performance demands.

Prepping telecom networks to support AI workloads

When modern societies build roadways, they are constructed to exacting standards, usually in anticipation of future traffic trends. AI's novel workloads and application traffic patterns are akin to Formula 1 meets Wacky Races driving over these roadways. Telecom networks were simply not originally configured to optimally handle the associated data-intensive, high-bandwidth, low-latency, lossless, and distributed cloud and edge processing demands.

That’s to say nothing of the service-level agreement (SLA) commitments that must be made in an NIaaS scenario, where operators guarantee infrastructure can support novel AI workloads for multi-tenant customers.

Let's dive deeper into what's required for telecom operators to offer a distributed hosting model where customer- and domain-specific LLMs and their training and retraining activities are hosted in regional data centers, while inferencing is hosted at secure edge sites to provide low latency and localization.

For starters, AI applications will require real-time data processing and inference, with continuous uplink and downlink network access to send raw data for processing and receive inference results from AI models hosted on edge computing nodes. This traffic will require low-latency, high-throughput, guaranteed connectivity across the edge and IP Core, accelerating the adoption of 100G, 400G, and 800G technologies.
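To make that latency budget concrete, here is a minimal back-of-envelope sketch (not from the article; the payload sizes, round-trip times, and inference durations are illustrative assumptions) that adds up uplink transfer, network round trip, model inference, and downlink transfer for an edge-hosted versus a regional-data-center-hosted inference path.

```python
# Back-of-envelope latency budget for edge vs. regional AI inference.
# All numbers are illustrative assumptions, not measured values.

def transfer_ms(payload_bytes: float, link_gbps: float) -> float:
    """Serialization time for a payload over a link, in milliseconds."""
    return payload_bytes * 8 / (link_gbps * 1e9) * 1e3

def inference_path_ms(rtt_ms, uplink_gbps, downlink_gbps,
                      request_bytes, response_bytes, model_ms):
    """Total request latency: uplink + network RTT + inference + downlink."""
    return (transfer_ms(request_bytes, uplink_gbps)
            + rtt_ms
            + model_ms
            + transfer_ms(response_bytes, downlink_gbps))

# Illustrative scenario: a 2 MB sensor frame uplink, a 10 kB result downlink.
REQUEST_BYTES, RESPONSE_BYTES = 2e6, 1e4

edge = inference_path_ms(rtt_ms=5, uplink_gbps=1, downlink_gbps=1,
                         request_bytes=REQUEST_BYTES,
                         response_bytes=RESPONSE_BYTES,
                         model_ms=20)      # model served at a nearby edge site
regional = inference_path_ms(rtt_ms=40, uplink_gbps=1, downlink_gbps=1,
                             request_bytes=REQUEST_BYTES,
                             response_bytes=RESPONSE_BYTES,
                             model_ms=20)  # same model in a regional data center

print(f"Edge inference path:     {edge:6.1f} ms")
print(f"Regional inference path: {regional:6.1f} ms")
```

Even with identical model latency, the extra round trip to a regional site can dominate the budget in this kind of model, which is why inference is pushed to secure edge sites in the hosting approach described above.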


AI training use cases will require the transfer of massive data volumes between regional data centers and edge nodes. This necessitates additional capacity and backhaul network prioritization, in addition to a new interconnect fabric for low latency and lossless east-west GPU traffic.
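For a rough sense of why backhaul capacity and prioritization matter, the sketch below estimates how long a bulk training-data transfer between edge sites and a regional data center would take at 100G, 400G, and 800G. The dataset size and link utilization are hypothetical assumptions, not figures from the article.

```python
# Rough estimate of moving a training corpus between edge nodes and a
# regional data center at different backhaul speeds.
# Dataset size and link utilization are illustrative assumptions.

DATASET_TB = 50      # volume of training/retraining data to move, in terabytes
UTILIZATION = 0.7    # assume the AI transfer is granted 70% of the link

def transfer_hours(dataset_tb: float, link_gbps: int,
                   utilization: float = UTILIZATION) -> float:
    """Hours needed to move dataset_tb terabytes over a link_gbps link."""
    bits = dataset_tb * 1e12 * 8
    usable_bps = link_gbps * 1e9 * utilization
    return bits / usable_bps / 3600

for link_gbps in (100, 400, 800):
    hours = transfer_hours(DATASET_TB, link_gbps)
    print(f"{link_gbps:>3}G backhaul: {hours:4.1f} h to move {DATASET_TB} TB "
          f"at {UTILIZATION:.0%} utilization")
```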

Taking stock of telecom’s NIaaS tech toolbelt

Operators can immediately leverage a range of existing technologies to meet emerging AI-powered market opportunities. Let's examine them at each key network segment:

  • Wireline. Segment Routing Traffic Engineering (SR-TE) and Flexible Ethernet (FlexE) can manage demanding AI traffic. SR-TE enables precise, optimized routing, while FlexE provides flexible and efficient bandwidth management. FlexE can allocate dedicated bandwidth slices to critical AI applications while SR-TE ensures traffic follows specific paths to adhere to SLAs (see the sketch after this list).

  • Wireless. 5G network slicing can isolate and optimize AI application traffic without impacting other services, supporting the latency, bandwidth, security, and device density needed for AI workloads.

  • Data center. High-speed Ethernet with RoCEv2 can support AI training and distributed processing, delivering low latency, lossless transfers, high throughput, scalability, and efficiency.
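As promised above, here is a small conceptual sketch of the wireline techniques: FlexE-style allocation of dedicated bandwidth slices on a port and SR-TE-style selection of a path that meets a latency SLA. The port capacity, slice granularity, application names, and path figures are assumptions for illustration; real FlexE and SR-TE provisioning is done by network controllers and routers, not application code.

```python
# Conceptual model of FlexE-style slice allocation and SR-TE-style path
# selection. Numbers and names are illustrative assumptions.

import math
from dataclasses import dataclass

PORT_CAPACITY_GBPS = 400   # assumed FlexE group capacity
SLICE_GBPS = 5             # assumed slice granularity

@dataclass
class Path:
    name: str
    latency_ms: float
    headroom_gbps: float

def allocate_slices(requests_gbps):
    """Round each app's demand up to whole slices and check port capacity."""
    slices = {app: math.ceil(gbps / SLICE_GBPS) for app, gbps in requests_gbps.items()}
    used_gbps = sum(slices.values()) * SLICE_GBPS
    if used_gbps > PORT_CAPACITY_GBPS:
        raise ValueError(f"{used_gbps} Gbps requested exceeds the {PORT_CAPACITY_GBPS} Gbps port")
    return slices

def pick_path(paths, demand_gbps, sla_latency_ms):
    """Choose the lowest-latency path with enough headroom that meets the SLA."""
    feasible = [p for p in paths
                if p.headroom_gbps >= demand_gbps and p.latency_ms <= sla_latency_ms]
    if not feasible:
        raise ValueError("No path satisfies the bandwidth demand and latency SLA")
    return min(feasible, key=lambda p: p.latency_ms)

slices = allocate_slices({"ai-inference": 120, "ai-training": 200, "best-effort": 40})
print("Dedicated slices per application:", slices)

path = pick_path([Path("metro-ring", 3.0, 150), Path("core-express", 1.5, 130)],
                 demand_gbps=120, sla_latency_ms=2.0)
print(f"AI inference traffic steered onto {path.name} ({path.latency_ms} ms)")
```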

At every stage, application-specific SLAs will specify the latency, bandwidth, security, scalability, and compliance requirements that telcos will need to deliver. Operators are already skilled in this area, but AI use cases will push the boundaries of what has been required previously. NIaaS services must deliver parity with what the largest players are building in-house, ensuring AI applications run smoothly at scale.
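As a simple illustration of what checking an application-specific SLA can look like, the sketch below compares a p99 latency percentile and measured throughput against per-application commitments. The targets and the simulated measurements are hypothetical.

```python
# Toy SLA compliance check: compare measured latency percentiles and
# throughput against per-application targets. Targets and samples are
# hypothetical, for illustration only.

import random

def p99(samples):
    """99th-percentile latency by nearest rank."""
    ordered = sorted(samples)
    return ordered[max(0, int(0.99 * len(ordered)) - 1)]

def check_sla(app, latency_ms, throughput_gbps, max_p99_ms, min_gbps):
    """Return True if measured latency and throughput meet the SLA targets."""
    observed = p99(latency_ms)
    ok = observed <= max_p99_ms and throughput_gbps >= min_gbps
    print(f"{app:>13}: p99 latency {observed:5.1f} ms (target <= {max_p99_ms}), "
          f"throughput {throughput_gbps} Gbps (target >= {min_gbps}) -> "
          f"{'PASS' if ok else 'FAIL'}")
    return ok

# Simulated active-test probes; in practice these would be real measurements.
random.seed(1)
check_sla("ai-inference", [random.gauss(8, 2) for _ in range(1000)],
          throughput_gbps=95, max_p99_ms=15, min_gbps=90)
check_sla("ai-training", [random.gauss(25, 5) for _ in range(1000)],
          throughput_gbps=380, max_p99_ms=50, min_gbps=350)
```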

Monetizing new NIaaS offerings

Supporting connectivity for AI-powered use cases requires reliable ways to test, validate, and proactively manage infrastructure as customers scale to meet new AI demands. In Asia, we're already seeing operators position themselves to meet AI demands, navigate regulatory pressures around data sovereignty, and capitalize on existing infrastructure investments.

Spirent helps uniquely positioned operators differentiate their offerings by efficiently validating and optimizing networks for AI readiness. We support rapid IP Transport network refresh cycle testing for 100G and 400G. Additionally, we ensure secure and effective network infrastructure through rigorous firewall and gateway testing.

Our comprehensive 24/7 monitoring solution covers both wireline and wireless networks, including proactive 5G network slice SLA management. This ensures telcos can deliver consistent performance, low latency, and a seamless customer experience. Our support for advanced capabilities like SR-TE and FlexE addresses the increasing need for guaranteed connectivity, optimized routing, and efficient bandwidth management.

Learn more: Our latest Spirent Impact Report – The Future of High-Speed Ethernet across Data Center, Telecom, and Enterprise Networking – provides further insights into how AI-driven developments are transforming modern networking infrastructure.


Stephen Douglas

Head of Market Strategy

Spirent is a global leader in automated test and assurance for the ICT industry. Stephen heads Spirent's market strategy organization, developing Spirent's strategy and helping to define market positioning, future growth opportunities, and new innovative solutions. Stephen also leads Spirent's strategic initiatives for 5G and future networks and represents Spirent on a number of industry and government advisory boards. With over 25 years' experience in telecommunications, Stephen has been at the cutting edge of next-generation technologies and has worked across the industry with service providers, network equipment manufacturers, and start-ups, helping them drive innovation and transformation.