About a month ago Erik Linask, Group Editorial Director at TMC, asked me for my thoughts on network functions virtualization (NFV)—you can read the entire interview here. One question he asked was why NFV adoption seems to be increasing.
My perspective on this is that, over the past few years, there have been more and more software-based network infrastructure products deployed; they are scalable, telco-hardened and cost-effective.
At the same time, cloud-based enterprise solutions such as Salesforce.com have shown powerful OPEX and CAPEX models. Simply put, the time is right for NFV adoption in telecom infrastructure.
Another question I was asked involves the biggest pitfalls and/or challenges companies should be aware of when virtualizing network functions. Dialogic has deep expertise in moving from hardware-based products to software-based products, dating back to before we first heard the term “NFV,” so our team has a lot of experience to offer in this arena.
First of all, the software version won’t work exactly like the hardware version. There are various reasons why—from driver differences to hardware platforms with different amounts of memory. As a result, a minimum supported hardware specification needs to be defined. Secondly, one cannot control the hardware environment the software will be deployed on. Finally, companies should be wary of the term “NFV” itself; just because someone has a piece of telecom software doesn’t mean it is a true virtualized network function.
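One practical way to handle that first pitfall is to have the software check the host against the published minimum specification at startup, rather than failing unpredictably under load later. Here is a minimal sketch in Python; the function name and thresholds are hypothetical, not part of any Dialogic product, and the memory query assumes a Linux host:

```python
import os

def meets_minimum_hardware(min_cpus: int, min_mem_bytes: int) -> bool:
    """Check the host against a minimum hardware profile.

    Hypothetical example: the name and thresholds are illustrative.
    Assumes a Linux host exposing SC_PHYS_PAGES via sysconf.
    """
    cpus = os.cpu_count() or 0
    # Total physical memory = page size * number of physical pages.
    mem_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    return cpus >= min_cpus and mem_bytes >= min_mem_bytes

# Illustrative minimums: 4 cores and 8 GB of RAM.
if meets_minimum_hardware(min_cpus=4, min_mem_bytes=8 * 1024**3):
    print("Host meets the minimum supported profile")
else:
    print("Host is below the minimum supported profile")
```

A check like this turns the unknown deployment environment from a silent risk into an explicit, supportable boundary.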
In the end, over the course of 2015 I’ve seen carriers grow increasingly comfortable deploying software-based infrastructure. At Dialogic, we have seen this firsthand with PowerMedia XMS, our software-based MRF/media server.
NFV, while clouded in marketing hype right now, is definitely not a fad. The move to software-based telecoms infrastructure is on.