
OpenStack Networking: Where to Next?

By Netronome | Apr 25, 2016

On the heels of the OpenStack Summit in Austin this week, the OpenStack Foundation released a very enlightening User Survey. I read the survey with great interest, focusing on networking features in OpenStack. The following key metrics (as percentage of respondents) from the survey came to the forefront:

  • Usage: Full operational use, in production – 65%; On-premise, private cloud deployment – 65%; Neutron Networking – 90%
  • Important emerging technologies: SDN/NFV - 52%; Containers - 70%; Bare Metal - 50%
  • Workloads and frameworks: Private and public cloud products and services - 49%; NFV – 29%
  • Preferred hypervisor: KVM – 93%; VMware – 8%
  • Neutron drivers in use: OVS - 80%; ML2 - 31%; Linux Bridge - 21%; Contrail - 14%; SR-IOV – 2%
  • Neutron features actively used/interested in: distributed across many features. Those with 20% or higher: software load balancing, routing, VPN, accelerated vSwitch, QoS, software firewalling

What is striking is the pervasive use of Open vSwitch (OVS) and, among Neutron features, the strong interest in software-based networking on the server, or server-based networking. Emerging technologies such as SDN/NFV and the use of containers in bare metal server environments further justify the need for server-based networking. This trend, starkly visible in the survey report, is naturally in line with how the largest data centers in the world (such as Microsoft Azure) have opted for server-based networking as the most flexible and scalable form of networking.

With increasing bandwidth requirements and the adoption of 10GbE and higher speed technologies, it is well understood today that accomplishing server-based networking tasks using general purpose CPUs is highly inefficient. Microsoft has indicated use of SmartNICs to scale their server-based networking infrastructure more efficiently. SmartNICs are used to offload and accelerate the server-based networking datapath. Ericsson and Netronome have demonstrated up to 6X lower TCO by combining Netronome Agilio hardware-accelerated server-based networking with Ericsson’s OpenStack-based Cloud SDN platform.
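To put that inefficiency in perspective, a rough back-of-the-envelope calculation (illustrative only; the figures below are generic assumptions, not survey data or Netronome measurements) shows how little CPU budget is available per packet at 10GbE line rate.

```python
# Rough, illustrative estimate of the per-packet CPU budget at 10GbE line rate.
# All numbers are generic assumptions, not measurements.

LINK_GBPS = 10        # 10GbE link
CPU_GHZ = 3.0         # assumed clock speed of a general-purpose server core
FRAME_BYTES = 64      # minimum-size Ethernet frame (worst case)
OVERHEAD_BYTES = 20   # preamble (8 bytes) + inter-frame gap (12 bytes) on the wire

bits_per_frame = (FRAME_BYTES + OVERHEAD_BYTES) * 8
packets_per_sec = LINK_GBPS * 1e9 / bits_per_frame     # ~14.88 million packets/s
cycles_per_packet = CPU_GHZ * 1e9 / packets_per_sec    # ~200 cycles per packet

print(f"{packets_per_sec / 1e6:.2f} Mpps -> ~{cycles_per_packet:.0f} cycles per packet per core")
```

A couple of hundred cycles is barely enough for a handful of cache misses, which is why pushing a feature-rich virtual switch datapath through general-purpose cores consumes so many of them.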

To achieve such benefits, enhancements to the OpenStack networking plug-in architecture are needed. Let’s start by exploring what is feasible today.

Current state of accelerated OpenStack networking
At the previous OpenStack Summit in Vancouver, Mirantis presented the state of hardware-accelerated networking. The discussion below is based on excerpts from that comprehensive presentation.

Today, SR-IOV plug-ins used with SR-IOV-capable server NICs are the only feasible form of networking hardware acceleration. When this is implemented in OpenStack Neutron using traditional NICs, bandwidth delivered to VMs improves significantly, latency for VM-to-VM traffic is reduced, and CPU use for networking tasks drops greatly. However, the number of server-based networking features available at such high performance is severely limited. Figures 1a and 1b show the packet datapath from a network port on the server to a VM. With the datapath shown in figure 1a, the high bandwidth and low latency benefits are available only for the very limited set of features shown in table 1, making the solution feasible for a very limited set of applications. When more features are needed, one must fall back to the datapath shown in figure 1b, resulting in poor performance and high CPU usage. Note that SR-IOV adoption in the recent survey is only 2%, and this limited feature set could be a reason.


[Figures 1a and 1b: packet datapath from the server network port to a VM]

[Table 1: feature feasibility with the SR-IOV datapath]
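As a concrete illustration of how the SR-IOV datapath of figure 1a is requested today, the sketch below builds the Neutron port-create payload that asks for a direct (VF passthrough) port. The network ID is a placeholder, and authentication and error handling are omitted.

```python
import json

# Minimal sketch of the Neutron API request body for an SR-IOV port.
# vnic_type "direct" hands a virtual function straight to the VM (figure 1a);
# "normal" would fall back to the software vSwitch path (figure 1b).
# The network ID below is a placeholder.
port_request = {
    "port": {
        "network_id": "REPLACE-WITH-NETWORK-UUID",
        "name": "sriov-port-0",
        "binding:vnic_type": "direct",
    }
}

# In a real deployment this body is POSTed to the Neutron ports endpoint
# (/v2.0/ports) with a valid auth token, and the resulting port ID is
# passed to Nova when booting the VM.
print(json.dumps(port_request, indent=2))
```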

Some of the above challenges can be addressed using DPDK, where the server-based networking datapath (as in OVS or the Contrail vRouter) is maintained in software but moved to user space to reduce the overhead of context switching between user space and kernel space. This approach is shown in figure 2 below. It delivers additional server-based networking features (compared to SR-IOV) and improved performance (compared to a kernel-based networking datapath). This, however, comes at the cost of high latency and high CPU usage, and with a larger number of flows in the networking datapath (as needed in security, load balancing and analytics applications) performance can be severely affected. As such, the DPDK approach has not found widespread adoption either.

[Figure 2: the DPDK user-space datapath, with feature feasibility table]
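For completeness, the DPDK path of figure 2 is typically consumed through ordinary ("normal") Neutron ports, but the guest must be backed by hugepages so that the user-space vSwitch can share memory with it over vhost-user. The sketch below shows the commonly used Nova flavor extra spec; the property name comes from standard OpenStack documentation, and the value is illustrative.

```python
import json

# Sketch of the flavor extra specs typically required for a VM attached to a
# user-space (DPDK) vSwitch via vhost-user: guest RAM must be backed by
# hugepages so the vSwitch can map it. The value here is illustrative.
flavor_extra_specs = {
    "extra_specs": {
        "hw:mem_page_size": "large",   # back guest memory with hugepages
    }
}

# This body is POSTed to Nova's os-extra_specs endpoint for the chosen flavor;
# the Neutron port itself keeps vnic_type "normal", unlike the SR-IOV case.
print(json.dumps(flavor_extra_specs, indent=2))
```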

This is the current state of accelerated OpenStack networking as was aptly summarized by the presentation at the OpenStack Summit in Vancouver. The April 2016 Survey sheds new light on what is needed from OpenStack networking. This leads us to the next question.

Where do we go from here?
At the OpenStack Summit this week in Austin, Netronome will demonstrate how OpenStack networking can be extended to deliver hardware-accelerated server-based networking. Netronome’s Agilio server networking platform will be used with OpenStack, OVS, Contrail vRouter and Linux Conntrack technologies to showcase an extended set of features made possible at high performance (high bandwidth and low latency) and high efficiency (low server CPU usage, freeing cores for more VMs). This is enabled by offloading the OVS, Contrail vRouter or Linux Conntrack datapaths into the Agilio intelligent server adapter (SmartNIC), as shown in figure 3. As table 3 shows, this approach enables the features needed by a broad array of workloads and deployment frameworks at high performance and efficiency, including infrastructure as a service (IaaS), telco cloud SDN and NFV, as well as traditional IT and private/hybrid cloud workloads.

[Figure 3: server with hypervisor, with the networking datapath offloaded to the Agilio SmartNIC]
[Table 3: workloads and deployment frameworks enabled at high performance and efficiency]
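To make the idea of offloading the datapath more concrete, the toy sketch below models the kind of match-action flow cache that OVS, the Contrail vRouter and Conntrack maintain. In the approach described above, this lookup-and-act work moves from host CPU cores onto the SmartNIC; the code is purely conceptual and is not Netronome’s implementation.

```python
# Purely conceptual sketch of a match-action flow cache, the core of the
# server-based networking datapath that a SmartNIC can offload.
# This is NOT Netronome's implementation; it only illustrates the concept.

from typing import NamedTuple

class FlowKey(NamedTuple):
    src_ip: str
    dst_ip: str
    proto: int
    dst_port: int

# Entries installed by the control plane (OVS agent, vRouter agent, Conntrack, ...)
flow_table = {
    FlowKey("10.0.0.5", "10.0.1.7", 6, 443): ("encap_vxlan", {"vni": 5001}),
    FlowKey("10.0.0.5", "10.0.2.9", 6, 22):  ("drop", {}),
}

def datapath_lookup(key: FlowKey):
    """Match a packet's flow key and return the action to apply.

    On a plain server this runs on general-purpose cores; with an offloaded
    datapath the same match-action step runs on the SmartNIC, and only
    misses are punted to the host slow path.
    """
    return flow_table.get(key, ("punt_to_slow_path", {}))

print(datapath_lookup(FlowKey("10.0.0.5", "10.0.1.7", 6, 443)))
print(datapath_lookup(FlowKey("10.0.0.5", "10.0.3.1", 17, 53)))
```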

This model of delivering server-based networking using SmartNIC hardware such as Netronome’s Agilio platform is also ideally suited for future container deployments on bare metal servers. Features such as network virtualization, service chaining, load balancing, security and analytics can be implemented and provisioned from outside the domain of the operating system running on the server. In this case, control plane orchestration using an SDN controller or OpenStack is implemented directly with the SmartNIC. This concept is shown in figure 4 below.

[Figure 4: bare metal server with networking implemented and provisioned directly on the SmartNIC]

For all of this to become a reality, the OpenStack networking plug-in specification will have to be enhanced beyond the SR-IOV-based capabilities that exist today. Netronome is taking a leadership role in the development of these enhancements with industry leaders such as Mirantis, Ericsson, Juniper Networks and others. A draft open specification covering these enhancements is expected in Q3 of 2016 and will be contributed to the OpenStack community for further feedback and eventual acceptance for industry-wide adoption.

Netronome at the 2016 OpenStack Summit in Austin, Texas
Stop by our booth (C5) to check out these important innovations, which not only deliver higher OpenStack networking performance but also improve TCO for a wide range of workloads and deployment frameworks utilizing Linux Conntrack, OVN (Open Virtual Network) and Contrail vRouter. Also, a demonstration at the Ericsson booth (D5) will showcase OpenStack integration with Netronome Agilio hardware-accelerated Open vSwitch utilizing the Ericsson Cloud SDN platform for traditional IT and NFV applications.