Network Functions Virtualization (NFV) is all about agility. It’s about moving to a software-centric model using standard off-the-shelf hardware and virtualization technology instead of proprietary hardware to implement infrastructure. There are, of course, challenges along the path to NFV: some are technical in nature; some affect traditional business models and existing organizational structures; and others affect the deployment and rollout of this new cloud-centric approach to delivering services. When it comes to making NFV a reality, which challenges present more of an obstacle?
Also, multimedia applications rely on interrupts from the hardware clock every one to two milliseconds to properly play out and process voice and video. In virtualized environments, these timer requests are queued by the scheduler, and if a few ticks are missed or delayed, application performance can suffer. In addition, media packets can be subject to queuing delays in the scheduler, not only when packets are brought in but also when the application needs the CPU to process them.
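To make the effect concrete, here's a rough, self-contained sketch (not tied to any particular hypervisor) that measures how late periodic wakeups arrive relative to an ideal 1 ms schedule. On an idle host the lateness stays small; on a contended or oversubscribed host, the worst case grows well beyond the tick interval:

```python
import time

def measure_timer_jitter(interval_ms=1.0, ticks=200):
    """Sleep for a fixed interval repeatedly and record how late each
    wakeup is versus the ideal schedule (a rough proxy for scheduler delay)."""
    interval = interval_ms / 1000.0
    lateness = []
    next_deadline = time.monotonic() + interval
    for _ in range(ticks):
        time.sleep(max(0.0, next_deadline - time.monotonic()))
        now = time.monotonic()
        lateness.append(max(0.0, now - next_deadline) * 1000.0)  # ms late
        next_deadline += interval
    return max(lateness), sum(lateness) / len(lateness)

worst, avg = measure_timer_jitter()
print(f"worst wakeup lateness: {worst:.3f} ms, average: {avg:.3f} ms")
```

Running a probe like this inside a guest VM, alongside busy neighbors, is one quick way to gauge whether a virtualization environment can hold a media application's timing budget.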
This scheduling is performed by the hypervisor; however, not all hypervisors are created equal. The scheduler within the hypervisor allocates CPU resources to workloads, and although the algorithms used are designed to be fair, applications requiring deterministic, low-latency processing run the risk of being starved intermittently if they have a noisy neighbor in another virtual machine. Thus, as service providers move data plane-centric applications to the cloud, it's important to make sure the virtualization environment is geared toward low-latency packet processing and doesn't add appreciable delay.
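As a back-of-the-envelope illustration (the numbers are hypothetical, not taken from any specific hypervisor), consider a plain round-robin scheduler: a VM that just yielded the CPU must wait one full rotation of its runnable neighbors before running again.

```python
def worst_case_wait(slice_ms, runnable_neighbors):
    """Toy round-robin model: each runnable VM gets a fixed time slice in
    turn, so a VM that just yielded waits for every neighbor's slice
    before it runs again."""
    wait = 0.0
    for _ in range(runnable_neighbors):
        wait += slice_ms
    return wait

# Hypothetical numbers: a 10 ms slice and 3 busy neighbor VMs means a
# packet can sit for 30 ms before the media VM even gets the CPU --
# far beyond the 1-2 ms budget of real-time voice processing.
print(worst_case_wait(10.0, 3))
```

Real schedulers are far more sophisticated (priorities, dedicated cores, real-time classes), but the toy model shows why fairness alone isn't enough for deterministic workloads.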
Technical challenges are not the only things that can slow down NFV deployment. There are also interoperability issues. Even with emerging standards, there's still a need to ensure interoperability between applications from different vendors. With NFV there's an additional concern: interoperability between a virtualized network function (VNF) and the management and orchestration (MANO) layer, especially if the two come from different vendors. Seamless interoperability is critical, since one of the key aspects of NFV from an operational perspective is achieving a high degree of automation in the lifecycle management of virtualized network functions as well as the programmatic orchestration of end-to-end network services. Fortunately, the ETSI standards body and the OPNFV open source project are driving NFV specifications and reference implementations with the help of service providers and network equipment manufacturers.
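The lifecycle automation in question boils down to a small set of well-defined state transitions that a VNF manager drives. A minimal sketch, with illustrative state and operation names rather than the actual ETSI MANO interface definitions:

```python
import enum

class VnfState(enum.Enum):
    NULL = "null"
    INSTANTIATED = "instantiated"
    TERMINATED = "terminated"

class VnfManager:
    """Toy VNF lifecycle manager: tracks the transitions a MANO layer
    drives (instantiate, scale, terminate). Names are illustrative,
    not any specific vendor's API."""
    _allowed = {
        (VnfState.NULL, "instantiate"): VnfState.INSTANTIATED,
        (VnfState.INSTANTIATED, "scale"): VnfState.INSTANTIATED,
        (VnfState.INSTANTIATED, "terminate"): VnfState.TERMINATED,
    }

    def __init__(self):
        self.state = VnfState.NULL

    def request(self, op):
        nxt = self._allowed.get((self.state, op))
        if nxt is None:
            raise ValueError(f"{op!r} not allowed in state {self.state}")
        self.state = nxt
        return self.state

mgr = VnfManager()
mgr.request("instantiate")
mgr.request("scale")
print(mgr.request("terminate").value)  # terminated
```

The interoperability problem is precisely that a vendor's VNF and another vendor's manager must agree on these operations, their parameters, and their failure semantics, which is why the standards work matters.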
With NFV, business issues are also emerging, including huge challenges from an organizational and skills-gap perspective. Companies like AT&T are investing heavily in Nanodegree programs to bring their entire workforce up to speed on the latest cloud technology. Still, traditional organizational models that have separated different lines of business, as well as the operational responsibility for core and access infrastructure, will be affected as service providers move to an NFV infrastructure (NFVI) hardware and virtualization layer that supports both network functionality and network services.
New business models will also start to emerge because of cloud architectures. Tim Kridel noted in a TM Forum Perspectives article, “If it can be virtualized, then someone else can, and probably should, host it.” The ability to spin up infrastructure very rapidly without the CAPEX barriers to entry opens the door to different types of cloud operators and non-traditional service providers who want to vertically integrate their products or content with the means to connect directly to the consumer.
Also, there will be a move away from proprietary, monolithic hardware platforms hosting only a single application. With NFV, VNFs will be added, expanded, instantiated and then destroyed based on demand from network services, so non-traditional, cloud-based software licensing models that are usage-based or network-wide in scope will become the norm, ideally ones equitable to service providers and vendors alike.
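A usage-based model follows naturally from elastic capacity: the number of licensed instances tracks demand rather than a fixed install base. A toy sketch with hypothetical per-instance capacity figures:

```python
import math

def instances_needed(sessions, capacity_per_vnf, minimum=1):
    """Toy capacity planner: how many VNF instances a given demand level
    implies, given per-instance capacity (hypothetical numbers)."""
    return max(minimum, math.ceil(sessions / capacity_per_vnf))

# As demand ebbs and flows, the orchestrator instantiates and destroys
# instances -- so a usage-based license would meter instance-hours rather
# than charge a one-time perpetual fee per box.
for load in (0, 450, 2400):
    print(load, "sessions ->", instances_needed(load, capacity_per_vnf=500))
```

Under a model like this, a provider pays for five instances during the busy hour and one overnight, instead of licensing peak capacity around the clock.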
What do you think? Will technical or business challenges be the primary boat anchor when it comes to NFV? Follow this link to check out a recent webinar we did here at Dialogic with the folks from Oracle on NFV interoperability and automated lifecycle management between Dialogic’s PowerMedia™ XMS media server and PowerMedia™ Media Resource Broker and Oracle’s Application Orchestrator VNF Manager. In my next blog, I’ll talk further about data plane-centric functionality in the cloud and techniques for automating application lifecycle management, which you can see demonstrated in the archived webinar. Tweet us at @Dialogic with your thoughts.