Dialogic Blog

Real-World OPNFV Implementation – Networking (OPNFV Demystified Part 6)

by John Hermanski

Aug 12, 2016 1:13:59 PM

So by now, you must be wondering what sort of differences we found in deploying OPNFV in a blade server environment. To be honest, not a lot. Some initial “issues” stemmed from unfamiliarity with blade servers and their storage and networking environments. We first did the OPNFV deployment on individual rack mount servers (RMS) [I should note that OPNFV at that point was the 1.0 Arno release, and it was a little green]. In parallel, we familiarized ourselves with the blade environment in general – setting up, partitioning, and mounting storage arrays, basic blade switch and network setup, and remote system management. For the grand finale, we successfully combined the blades with the more stable OPNFV 2.0 Brahmaputra release.

The most important difference between the two environments was network setup.

Blade Server Networking

Let’s look at the networking available on our HPE BladeSystem, which uses Gen9 blades and 6125XLG blade switches. This is where we see the biggest difference between an RMS and a blade server configuration.

Here’s an overall picture of the deployment:

blade server networking deployment

The Top of Rack (ToR) switch needed for the RMS setup is bypassed. Low-speed (1 Gb/sec) fiber optics link the blade switches directly to our lab distribution switch. This eliminates one hop, but the 10 Gb/sec to 1 Gb/sec bandwidth bottleneck remains. On the bright side, all inter-blade communication – storage, management, private, internal – flows at 10 Gb/sec. This makes a noticeable difference when loading and retrieving images and starting up instances, even on a lightly loaded system.

All blades have two on-board Ethernet interfaces, eno49 and eno50 (the “eno” prefix stands for Ethernet, on-board). It worked best to split the traffic between them, with the first interface used for two things:

  • PXE boots. This requires a flat (non-VLAN) network, but there will seldom be any traffic on it: the network is only used for deploying OpenStack nodes and, after the initial deployment, perhaps only when new compute nodes are added. For this reason, it would be a serious waste of resources to dedicate a full 10 Gb/sec interface to it.
  • Internal OpenStack interfaces, split over four VLANs (see the sketch below).
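To make that split concrete, here is a rough sketch of what the resulting layout looks like on a blade from the Linux side. The VLAN IDs (101-104) are placeholders, and in practice the OPNFV installer creates these sub-interfaces for you; the commands are shown only to illustrate how untagged PXE traffic and tagged OpenStack traffic can share one physical port.

    # Untagged traffic on eno49 itself carries the flat PXE/admin network.
    ip link set eno49 up

    # Tagged 802.1Q sub-interfaces carry the internal OpenStack networks.
    # VLAN IDs 101-104 are placeholders for the management, storage,
    # private, and internal networks; substitute your deployment's IDs.
    for vid in 101 102 103 104; do
        ip link add link eno49 name "eno49.$vid" type vlan id "$vid"
        ip link set "eno49.$vid" up
    done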

To be able to run these different types of networks on the same interface, the interface and its fabric switch must support “Hybrid” network ports. A hybrid port allows a single port to carry multiple VLANs, or a combination of an untagged (flat) network and tagged VLAN networks. On the HP 6125XLG blade switch I used, it was necessary to create the VLANs first, and then assign them, along with the untagged network, to the fabric switch port assigned to each blade in use.
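On the switch side, that works out to something like the configuration sketched below. This is illustrative rather than our exact setup: the interface name and VLAN IDs are placeholders, the comment lines are annotations rather than CLI input, and the port settings would be repeated for each blade-facing port.

    # Create the VLANs first (IDs are placeholders).
    system-view
    vlan 101 to 104

    # Then make the blade-facing port a hybrid port: untagged traffic
    # (the PVID) carries the flat PXE network, while tagged traffic
    # carries the four internal OpenStack VLANs.
    interface Ten-GigabitEthernet 1/0/1
     port link-type hybrid
     port hybrid pvid vlan 1
     port hybrid vlan 1 untagged
     port hybrid vlan 101 to 104 tagged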

The other interface is used as the public network, providing access from other internal corporate networks and from outside VPN connections. Because our product set handles a lot of real-time media streaming, dedicating a fast network to guest VM application traffic makes sense. If more bandwidth is needed in the future, a mezzanine network card could be added to each blade, bringing a third and fourth network into the picture to offload traffic from the single interface. Multiple physical interfaces can also be bonded together to form a single, higher-capacity network. As it now stands, the bottleneck is the 1 Gb/sec connection from the outside world into the blade chassis.
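If we do add that mezzanine card later, the bonding itself is straightforward on the Linux side. The sketch below uses made-up port names (ens1f0 and ens1f1) and assumes the matching blade switch ports are configured as an LACP link aggregation group.

    # Create an 802.3ad (LACP) bond and enslave the two mezzanine ports.
    # ens1f0 and ens1f1 are placeholder interface names.
    ip link add bond0 type bond mode 802.3ad
    ip link set ens1f0 down
    ip link set ens1f1 down
    ip link set ens1f0 master bond0
    ip link set ens1f1 master bond0
    ip link set bond0 up
    ip link set ens1f0 up
    ip link set ens1f1 up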

Deploying On A Mix of Blade and Rack Mount Servers

It is, of course, possible to configure a mixture of blade and RMS systems; in fact, we are working on that approach now. We plan on using the higher-priced, higher-horsepower blades as compute nodes and the RMSs as controller and telemetry nodes. We hope to set up a highly available environment with three nodes of each, which would result in significant cost savings compared with an all-blade deployment. Storage may be more convenient if we use the storage array in our blade center, or we may simply add inexpensive disks to the RMS nodes. Remember, however the RMSs are used, they should be equipped with 10 Gb/sec Ethernet interfaces to keep up with the blades.

To wrap up this blog series, I want to talk about automation in an OPNFV environment. This will include ways to start up instances with little, if any, manual intervention for installation and configuration, an overview of HEAT template use in OPNFV, and NFV scaling, both in general and with our media processing applications.

Check out the other posts in this series.


Topics: NFV/SDN & Cloud, Guides: How-to's, Infographics, and more