There’s a big difference between virtualizing real-time multimedia network functions and virtualizing those that perform control plane processing. That’s not to say control plane network functions that handle tasks like mobility management or context creation in LTE are impervious to packet impairments. But network functions that process data plane traffic are extremely sensitive to delay and dropped packets, and they require additional care that service providers should be aware of as they move them to Network Functions Virtualization (NFV) environments.
There’s also a clear difference in performance requirements between web-based transactional applications and real-time multimedia processing. In web surfing, for example, it’s common for users to experience delay when loading web pages or changing pages in an app. If packets are dropped, they can simply be retransmitted with no appreciable notice to the subscriber. In fact, research published by Google a few years back indicated that their users experienced about a 6% loss rate across all HTTP responses. But for real-time conversational multimedia services, delay, dropped packets, and variation in delay (jitter) are all serious problems. In LTE, the end-to-end packet delay budget (PDB) for voice should be less than 200 msec (with a target of about 160 msec), and that can be challenging when you consider that international transport routes alone can add up to 350 msec of round-trip delay.
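To make the jitter figure concrete, here is a minimal sketch of the interarrival jitter estimator defined in RFC 3550 (the RTP specification), which is the standard way voice/video endpoints quantify delay variation. The timestamps below are synthetic values I made up for illustration; in practice they would come from RTP header timestamps and measured packet arrival times.

```python
def interarrival_jitter(send_times, recv_times):
    """Running jitter estimate J = J + (|D| - J)/16, per RFC 3550.

    D is the change in one-way transit time between consecutive
    packets; the 1/16 gain smooths out momentary spikes.
    """
    jitter = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0
        prev_transit = transit
    return jitter

# Hypothetical 20 ms voice packetization with ~50 ms base transit
# plus a few milliseconds of network-induced variation (all seconds).
send = [i * 0.020 for i in range(5)]
recv = [t + 0.050 + v for t, v in zip(send, [0.000, 0.002, 0.001, 0.003, 0.000])]
print(f"jitter estimate: {interarrival_jitter(send, recv) * 1000:.3f} ms")
```

A stream with perfectly constant transit time yields a jitter of zero; every millisecond the virtualization layer adds unevenly shows up directly in this estimate and eats into the 200 msec PDB.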
So as service providers move data plane-centric applications to the cloud, it’s important to make sure the virtualization environment is geared toward low-latency packet processing and doesn’t add appreciable delay to the overall PDB compared to the current architecture of discrete, stand-alone devices.
Virtualized environments, especially public clouds, have not always been the best platforms for real-time multimedia applications. This is because applications no longer run directly on the server; they run on top of a guest operating system in a virtual machine. For the multimedia packets to be processed (mixed, transcoded, converted to text), they have to wait their turn with packets from other virtual machines for access to the physical CPUs. This scheduling function is performed by the hypervisor scheduler; however, not all hypervisors are created equal. The scheduler allocates CPU resources to workloads, and although its algorithms are designed to be fair, applications requiring deterministic, low-latency processing run the risk of being starved intermittently if they have a noisy neighbor in another virtual machine. In addition, multimedia applications rely on hardware clock interrupts every one or two milliseconds to properly play out and process voice and video. In virtualized environments these interrupts get queued by the scheduler, and if a few of these ticks are missed or delayed, application performance suffers.
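The tick-delay effect described above is easy to observe. The sketch below asks the OS for a wakeup every millisecond, the same cadence a media engine uses for playout, and records how late each wakeup actually is. On a quiet bare-metal host the lateness stays small; inside a busy VM, scheduler contention shows up as multi-millisecond spikes. The specific tick values are illustrative, not prescriptive.

```python
import time

def measure_tick_jitter(tick_s=0.001, ticks=200):
    """Sleep in tick_s intervals and record how late each wakeup is.

    On a loaded or virtualized host, the hypervisor/OS scheduler may
    delay wakeups well past the nominal tick, which is exactly what
    disturbs voice and video playout timing.
    """
    lateness = []
    deadline = time.monotonic()
    for _ in range(ticks):
        deadline += tick_s
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
        # How far past the intended deadline we actually woke up.
        lateness.append(max(0.0, time.monotonic() - deadline))
    return lateness

late = measure_tick_jitter()
print(f"worst wakeup lateness: {max(late) * 1000:.3f} ms")
```

Running this inside a guest VM versus on the host makes the scheduler’s contribution to jitter directly visible.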
So when service providers design their Network Functions Virtualization Infrastructure (NFVI), care must be taken to minimize delay introduced by the hypervisor scheduler, both to limit jitter and to ensure proper timing for applications processing real-time multimedia. Control plane-centric network functions that don’t touch data plane traffic don’t have the same sensitivity to delay. But as IMS/VoLTE infrastructure components move to the cloud (for service providers this will usually be a private cloud, though perhaps not always if they use public cloud resources for peak offload), it’s imperative that the virtualized network functions (VNFs) handling media processing, such as the media resource function (MRF), media resource broker (MRB) and application server (AS), are optimized to work over a virtualized infrastructure. There are several approaches that can be leveraged to improve latency, such as:
- Pass-through (direct) access to resources, e.g. Single Root I/O Virtualization (SR-IOV)
- Tuning the virtualization layer for improved scheduling of packets
- Over-provisioning physical CPU capacity beyond what the applications require for their virtual CPU resources
- Allocating 100% of the CPU and memory resources to a single application
- Disabling power management features such as CPU frequency scaling
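One of the approaches above, dedicating CPU resources to a single application, amounts to CPU pinning. As a rough illustration, the Linux-only sketch below pins the current process to a fixed core via `os.sched_setaffinity`. In a real NFV deployment this would be done at the hypervisor layer (e.g., vCPU pinning in the virtualization configuration) rather than inside the VNF; this is just the application-level analogue.

```python
import os

def pin_to_cores(cores):
    """Pin the current process to a fixed set of host cores (Linux only).

    Once pinned, the workload no longer competes with noisy neighbors
    for arbitrary cores, trading flexibility for deterministic latency,
    which is the same trade-off hypervisor-level vCPU pinning makes.
    """
    if not hasattr(os, "sched_setaffinity"):
        return None  # Non-Linux platform: the affinity API is unavailable.
    os.sched_setaffinity(0, set(cores))
    return os.sched_getaffinity(0)

print("affinity now:", pin_to_cores({0}))
```

Note the cost this list hints at: a pinned core is removed from the general scheduling pool whether or not the application is busy, which is precisely why these techniques erode the economics of shared infrastructure.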
Unfortunately, some of these approaches create issues of their own in an NFV environment, such as non-standard deployment of COTS hardware or tweaks to the virtualization layer. And while para-virtualization techniques or SR-IOV can speed up processing, they can reduce the elasticity of the system unless all physical hardware is configured similarly, limiting the flexibility to scale out resources. Over-provisioning, meanwhile, reduces the potential cost benefit by requiring more physical equipment. The important thing to remember is that service providers should examine the VNF software architecture to make sure it is designed to operate efficiently in a virtualized environment, especially when it comes to real-time multimedia processing.
Regardless, many software vendors mistakenly think they can take a monolithic piece of software previously deployed on a proprietary platform, port it to a virtual machine, and call it NFV. Instead, VNF vendors need to follow some important guiding principles that not only ensure low latency but also let VNFs truly take advantage of the flexibility and elasticity that cloud environments enable.
- Software modularity: Modularity is critical to optimize VNF application performance and scalability. This approach enables faster instantiation when additional instances are required and allows operators to better realize the full potential of a virtualized environment.
- Automation: VNF automation, scalability and programmability are not “nice-to-haves” but “must-haves.” Automating the lifecycle management of VNFs is essential to realizing the true benefits of NFV, in terms of both OPEX savings and CAPEX reduction.
- Architectural flexibility: VNFs should be architected to support forward-looking technology advancements at the NFVI virtualization layer, such as containers or unikernels, while remaining backward compatible with virtual machines, making the applications more portable across different virtualization environments.
I covered these points at the recent SDN and OpenFlow World Congress in Dusseldorf in a presentation I gave entitled, “Achieving Real-time Voice and Video Virtualized Network Functionality in NFV.” You can download the entire presentation from SlideShare by clicking here. Take a look and let us know what you think by tweeting us at @Dialogic.