Dialogic Blog

Why Automation Is Key to Your OPNFV Deployment (OPNFV Demystified Part 7)

by John Hermanski

Aug 25, 2016 9:26:05 AM

While OPNFV and Openstack provide a convenient virtualized environment for deploying and running network-oriented applications, there is a whole other dimension to what can be done with them. With conventional computing, you need to wheel in another box, install software, and then modify your networked environment to take the new hardware into account. With virtualization, you avoid dealing with hardware each time you need to increase an application’s capacity: you can “spin up” additional virtual machines and then configure the environment accordingly. And, in a well-designed OPNFV environment, all of this can be done automatically.

While OPNFV doesn’t come with a magic “gimme more” button, the components are there to put together such a button yourself. Here’s what’s involved:

  • The Openstack Heat project. This “implements an orchestration engine to launch multiple composite cloud applications based on templates in the form of text files that can be treated like code.” This means that you can define exactly how you want to run your networked application – size of the instance(s) used, which application image(s) you want, networks and ports that need to be created to tie things together, static vs. DHCP IP addresses, and application-specific configuration. Like other Openstack components, you have a choice of invoking Heat templates through the Horizon GUI or through a CLI. While not a true programming language, Heat’s YAML format allows for a fair amount of flexibility, which makes things readable and maintainable.
  • Telemetry. OPNFV comes with a telemetry node based on the Ceilometer project. By default, this node collects Openstack performance data and stores it in MongoDB. It may also serve as a convenient place to store application performance data.
  • Application-specific Key Performance Indicators (KPIs). These are statistics that pertain to the application itself, rather than the platform it runs on. Examples include the number of simultaneous users logged in, the number of licenses in use vs. the number available on the node, or the number of people using some other limited application resource. By monitoring these values, it becomes apparent when additional application resources are needed and new VMs must be started and connected, and, conversely, when resources can be released.
  • Heat Orchestration Templates (HOT). This might be considered the first level of automatic scaling. The infrastructure to direct Openstack on how to add additional VNF components is defined by Heat. HOT has rudimentary abilities that allow some system-oriented performance indicators to be monitored. This would include CPU and disk usage and network performance indicators. Additional VNF components are then started when the need arises, and torn down when they are no longer needed.
  • Full Management and Orchestration (MANO). Not really a part of OPNFV yet, but sure to be in the future. There are a variety of MANO products and projects out there in their formative stages, including the Openstack Tacker project, Open Source MANO/Rift.io and Open Baton.
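To make the HOT bullet above concrete, here is a minimal sketch of an autoscaling Heat template in the style current around 2016. It groups media server VMs, defines a scale-out policy, and wires a Ceilometer CPU alarm to that policy. The image, flavor, and network names are placeholders, not values from any real deployment:

```yaml
heat_template_version: 2015-04-30

resources:
  media_server_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 4
      resource:
        type: OS::Nova::Server
        properties:
          image: media-server-image   # placeholder image name
          flavor: m1.medium           # placeholder flavor
          networks:
            - network: app-net        # placeholder network

  scale_out_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: media_server_group }
      scaling_adjustment: 1           # add one VM per alarm
      cooldown: 60

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80                   # fire above 80% average CPU
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_out_policy, alarm_url] }
```

A matching scale-in policy and low-CPU alarm (omitted here for brevity) would tear instances back down when load subsides.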

While I’ve been trying to avoid saying too much that is specific to our media server VNF, it makes a good example of the things that must be taken into account when scaling applications. Let’s look at video conferencing. A single VM can handle only a finite number of caller sessions, and callers are divided up into specific, discrete conferences. What happens if we are running out of room and need to expand? We can’t just allow more callers into an already jammed conference, or put them in a new conference that they aren’t supposed to be in.

Well, there is another node in our VNF called a Media Resource Broker (MRB). This is where the intelligence resides that keeps track of the multiple media servers and their capabilities, such as the codecs and resolutions available. Knowing what sort of conferencing facilities are available, it can quickly move conferences from an almost full server to one with spare capacity. All of this can happen when a new caller arrives and pushes a media server over the edge.

But, one thing it can’t do is start up additional media servers. It can only deal with existing servers that it already knows about. That’s where OPNFV management and orchestration come into play. When a threshold (as defined by an application KPI) is exceeded, a new media server is started. As part of its startup and configuration, it registers itself with an MRB so that the MRB becomes aware of its additional resources, and can adjust the conferences it manages accordingly.
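The KPI threshold check described above can be sketched in a few lines. This is purely illustrative: the function name and threshold values are assumptions for the sketch, not part of any real MRB or OPNFV API.

```python
# Illustrative KPI-driven scaling decision for a media server pool.
# Thresholds and names are assumptions for this sketch only.

def scale_decision(sessions_in_use, session_capacity,
                   scale_out_pct=0.8, scale_in_pct=0.3):
    """Compare a KPI (conference sessions in use) against capacity.

    Returns "out" (start a new media server), "in" (tear one down),
    or None (steady state).
    """
    utilization = sessions_in_use / float(session_capacity)
    if utilization >= scale_out_pct:
        return "out"
    if utilization <= scale_in_pct:
        return "in"
    return None

# 85 of 100 sessions in use crosses the 80% threshold: time to scale out.
print(scale_decision(85, 100))
```

In a real deployment, a decision like this would be wired to an alarm action or a MANO component; once the new VM boots, it registers itself with the MRB as described above.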

Now, your application may well work differently, and may require different KPIs and scaling schemes. But, the principles will be the same, and it’s likely that some application involvement will be needed.

This concludes my series of OPNFV blogs for now, but more are sure to follow. We might take a deeper look into Heat templates and MANO, and there will certainly be things to say about our proposed OPNFV-based, company-wide QA and test environment that is just getting off the ground. And I’m sure there will be other topics I haven’t even thought of yet.

Thanks for reading!

Check out the other posts in this series.

