Before we can look at the components of OPNFV, let’s start back with a definition from my last blog: “OPNFV is basically an Openstack deployment framework, with emphasis on the networking side.” This means that we have at least 3 pieces here – Openstack itself, a way to deploy it, and some networking “extras.” That might lead you to ask – “Well, why don’t I just use Openstack, if that’s at the core of things?”
You could, but remember my warning about re-inventing the wheel. Wouldn’t it be way better to get everything you need to deploy an Openstack cloud in a single package, with good instructions, and a community willing to answer the inevitable questions? Oh, yes it would…
Let’s look at what we will be working with. For those who are not used to dealing with the innards of a cloud, it will help to think in terms of “layers.” Some of these layers are real, and some are virtual. Some allow you to install, deploy, and administer Openstack, and some do the actual work that everything else is there to support – running your application in a virtualized networking environment.
It’s probably best to start by reviewing the layers, at a high level, in the order in which you will be dealing with them. Here they are:
The Openstack deployment controller or master node - It is from here that all good things will spring. Once installed, the master controls both how your Openstack environment is initially configured and how it is enhanced as time goes on. As more users are added, they will need more image storage and more compute nodes on which to do their work. A deployment controller allows these additions to be made easily.
The Openstack nodes themselves - There are many different types of nodes that can come into play in Openstack, but not all are needed in every deployment. With OPNFV, there is a core set of nodes that are needed, and a few others that may be deployed if desired. Narrowing down the possibilities helps to alleviate the information overload that comes with figuring out how you want to use something new and complex.
The guest virtual machines running in the Openstack environment. These would be the reason for doing all this in the first place – a set of virtual machines that may be assigned to different users, brought up and down at will, and put into a multitude of virtual network configurations.
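That guest lifecycle – brought up and down at will, attached to a chosen virtual network – maps onto a handful of standard OpenStack CLI calls. A minimal sketch, assuming a deployed cloud with admin credentials already sourced; the image, flavor, and network names here are placeholders, not anything from a real deployment:

```shell
# Boot a guest VM on a named virtual network (all names are placeholders)
openstack server create \
  --image cirros-0.4.0 \
  --flavor m1.small \
  --network demo-net \
  demo-vm-1

# See what is running, then tear the guest down at will
openstack server list
openstack server delete demo-vm-1
```

The same three commands, repeated with different `--network` arguments, are how you end up with the "multitude of virtual network configurations" mentioned above.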
Now we can get into specifics. In doing so, I will be telling you what worked for me, and why I made the choices I did. They may not be exactly what you want or need. But, hey, it’s my blog…
FUEL – My Deployment Environment of Choice
There are no fewer than four different installers available with OPNFV. After some experimenting, I chose Fuel. Here are my reasons:
- It has its own semi-automated setup and install program. And, yes, an installer for an installer is not a bad idea.
- Fuel itself uses a well thought-out web GUI to lead you through Openstack configuration.
- The Fuel installer and web GUI both allow you to test/verify your setup choices at many points throughout the entire process. This is essential. A mistyped IP address or other mistake can lead to cascading errors a step or two down the road. And you will have no idea what you did or when you did it - unless you are particularly adept at trawling through dozens of strange log files for oddly named things whose function is a complete mystery.
- Fuel supports virtual LANs (VLANs). An Openstack deployment uses many networks, which may be either physical or virtual. If you have an unlimited budget for network cards for your conventional systems, or fabric switches for your blade center, don’t bother with VLANs. Otherwise, you need to be able to divide a physical network into discrete, virtual networks so they can carry out their Openstack jobs without bumping into one another.
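To make the VLAN idea concrete: carving one physical NIC into discrete tagged networks is something you can do on any plain Linux host. A sketch, assuming root, a NIC named eth0, and VLAN IDs that would have to match your switch configuration – the IDs and network purposes here are illustrative, not Fuel's defaults:

```shell
# Create 802.1Q tagged sub-interfaces for two separate networks on one NIC
sudo ip link add link eth0 name eth0.101 type vlan id 101   # e.g. management traffic
sudo ip link add link eth0 name eth0.102 type vlan id 102   # e.g. storage traffic
sudo ip link set eth0.101 up
sudo ip link set eth0.102 up

# Verify the tag: the output includes 'vlan protocol 802.1Q id 101'
ip -d link show eth0.101
```

Fuel does the equivalent for you when you assign Openstack networks to VLAN IDs in its network settings screen; the point is that each tagged sub-interface behaves like a separate wire.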
I previously mentioned that I have set up and plan to maintain two separate OPNFV deployments – one for experimental and development purposes, and a second for internal QA, testing, and demos. One is a set of rack mount servers, the other an HPE Bladecenter. As these are two unlike environments, I chose to use two separate installations of Fuel to deploy the two separate OPNFVs. But I was able to save the price of a system by doing the two Fuel master nodes as virtual machines. Overall, Fuel does a lot of sitting around. It’s only hard at work when called on to deploy Openstack. So, the two virtual Fuel nodes on a single modest CentOS 7 host had more than enough resources when each needed them.
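For reference, a Fuel master VM like the two described can be created on a KVM host with a single virt-install call. This is a sketch only – the VM name, sizes, ISO path, and bridge name are illustrative, and Fuel's own documentation gives the actual recommended minimums:

```shell
# One Fuel master as a KVM guest on the CentOS 7 host (all values illustrative)
sudo virt-install \
  --name fuel-master-dev \
  --ram 8192 \
  --vcpus 4 \
  --disk size=200 \
  --cdrom /var/lib/libvirt/images/fuel.iso \
  --network bridge=br0 \
  --os-variant centos7.0
```

Running the second master is just a repeat of this with a different `--name` and, if you keep the deployments isolated, a different bridge.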
Openstack Nodes and Roles
Let’s now take a look at the Openstack nodes and roles that are part of the OPNFV deployment. The “nodes” here are the physical machines themselves, while the “roles” are the necessary Openstack services or functions that must be present to get to a working deployment. And to make matters more interesting, there are many possibilities for assigning roles to nodes.
Openstack Roles – there is a fairly long list of roles shown by Fuel, but only some are relevant for OPNFV. The most important of these are:
- Controller – the brains of the Openstack cloud. It provides a user interface (graphical or command line) for the many functions involved in cloud operation. The front ends for all of the other components are also found here. They talk via sockets to their counterparts on other nodes, where the actual work is performed.
- Compute – the brawn of the Openstack cloud. The guest virtual machines are created, run, and destroyed here.
- Networking – your main networking needs in Openstack today are fulfilled by the Neutron project. It provides its own API to tie together network-related functions across other Openstack components. With OPNFV, there are Software Defined Networking (SDN) network plugin options below Neutron that may be selected. For the current release they would be OpenDaylight and ONOS. Operating at the Neutron level, you would, in most cases, be unaware of what happens via the plugin.
- Storage – “Ceph” is an object storage platform. It may span several nodes, and is used for several kinds of storage, such as virtual machine images, files, and block storage volumes. It may replicate its objects to avoid a single point of failure.
- Telemetry – “Ceilometer” uses MongoDB to keep track of cloud usage and performance statistics, and can also be used to store data specific to application virtual machines.
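Once a deployment is up, each of the roles above can be sanity-checked from the controller with standard CLI calls. A sketch, assuming admin credentials are sourced on the controller node – nothing here is Fuel-specific:

```shell
# Compute role: hypervisors and nova services reporting in as 'up'
openstack compute service list

# Networking role: Neutron agents (the SDN plugin sits below these)
openstack network agent list

# Storage role: Ceph cluster health and replication status
sudo ceph -s

# Telemetry role: the meters Ceilometer is currently collecting
ceilometer meter-list
```

A healthy answer from each of these is a quick way to confirm that the role-to-node assignments you made in Fuel actually came up as intended.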
Now, how does OPNFV suggest deploying the system? It’s not always clear – how many boxes do you really need, and how are functions best divided up among them? Often, you will want to know the minimum needed, as few people seem to have closets full of up-to-date servers with good virtualization support. Things that need to be taken into account to arrive at the “right” (for you, anyway) answers include:
- Number of potential guest VMs expected to be in use.
- What are the guest nodes going to be doing? Are they compute, storage, or network intensive? This will influence the choice of CPUs/cores, available memory and disk storage on the compute nodes. Remember - the compute nodes are where you want to spend your money!
- High availability, fault tolerance, and disaster recovery. Is this important? For experimental or development systems, not so much. For production systems, very much so. A minimum of 3 duplicate systems is needed for true HA. But I think we are all at the “get your feet wet” stage here, so this will not be so important. HA configuration is also something I have yet to tackle. It may be sufficient to simply do a nightly backup of development materials – scripts, configuration files, images and snapshots - so they don’t inadvertently disappear.
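The nightly backup just mentioned needs nothing fancier than tar plus cron. A minimal, self-contained sketch – the paths are placeholders, and the sample file exists only so the snippet runs as-is:

```shell
# Stand-in for your real tree of scripts, configs, and snapshots
BACKUP_SRC="$(mktemp -d)"
echo "demo config" > "$BACKUP_SRC/local_settings"   # sample file so the sketch runs

# Archive into a dated tarball, then list it to verify the archive is readable
DEST="/tmp/opnfv-backup-$(date +%F).tar.gz"
tar -czf "$DEST" -C "$BACKUP_SRC" .
tar -tzf "$DEST"

# To run nightly, wrap the above in a script and add a crontab entry such as:
#   0 2 * * * /usr/local/bin/opnfv-backup.sh
```

It is deliberately boring; the value is in running it every night, not in the tooling.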
I realize I’ve given you no hard recommendations here yet. Things that worked well for me will be revealed when I get into more of the details in subsequent blogs. The next blog entry will be relatively short – how to get prepared before actually installing OPNFV. Then we’ll get down to the meat of things.
This post is part of the "OPNFV Demystified" blog series. A new part will be posted each Thursday morning. Check out the other blog posts in this series.
- Intro to Building a Carrier Grade Cloud Operating System Using OPNFV (OPNFV Demystified Part 1)
- The Components of OPNFV (OPNFV Demystified Part 2)
- Getting Started with OPNFV - How Should I Prepare? (OPNFV Demystified Part 3)
- Real-World OPNFV Implementation – The Hardware (OPNFV Demystified Part 4)
- Real-World OPNFV Implementation – Traps and Tips (OPNFV Demystified Part 5)
- Real-World OPNFV Implementation – Networking (OPNFV Demystified Part 6)
- Why Automation Is Key to Your OPNFV Deployment (OPNFV Demystified Part 7)