
SDN, Docker and the Real Changes Ahead

Jan 29th, 2015 8:34am
Feature image: “Hexagonal carpet padding” by Daniel Oines is licensed under CC BY 2.0.
The New Stack has been running a continuing series on software-defined networking, and today's post is part of it. In part one, we defined SDN and detailed different SDN controllers and frameworks. In part two, we wrote about Trema, a framework for developing OpenFlow controllers in Ruby and C. Part three explored NOX, the original OpenFlow controller. In part four, we wrote about Ryu, an open source SDN controller supported by NTT Labs. For part five we looked at Floodlight, an OpenFlow controller with more than 15,000 downloads. OpenDaylight, an SDN controller with broad industry support, was the subject of part six.

A colleague said to me the other night that everyone wants to move from old-school networking to software-defined networking (SDN). I agree, but what about Docker? It's Docker that points to the real changes ahead for SDN, as density and the need for new ways to think about compute become focal issues. In concert with Docker's impact are the changes happening at the application and platform tiers of the stack. How are these layers affected by SDN, and how in turn do they affect it?

These questions come at a time when an increasing number of services and a growing amount of software run on sophisticated, fast and distributed infrastructure. Apps will increasingly be architected to sync across data centers. These apps will consist of interchangeable services and event-driven capabilities for processing real-time streaming data. Monitoring will be increasingly important for detecting anomalies in containers running on OpenStack, Amazon Web Services or multiple cloud services.

The complexity of these new application architectures will be invisible to most developers. The network will just be expected to work, without the help of a service person. For now, the attention is on Docker as a symbol of the new stack's transformation. But with Docker will come a host of different demands on every part of the technology stack.

For the past several weeks we've run a series on software-defined networking by Sridhar Rao. It chronicles the first generation of SDN tools on the market and how SDN has started to mature. Now the growing interest in containers, Docker and application platforms marks a familiar yet somewhat orthogonal approach to the way we think about networking. This new school of networking technologies uses SDN as a precedent but also takes other factors into consideration. Density and data gravity are among the issues that come into play with the advent of lightweight containers and microservices.

SocketPlane is developing a hybrid networking model that builds on SDN principles and applies them to native Docker environments.

It is developing a programmatic platform that puts DevOps in a networking context. There's a good post on SocketPlane's approach on the au courant technology blog that shows how SocketPlane builds "VXLAN tunnels between hosts to connect Docker containers on the same virtual (logical) network with no remote/external SDN controller needed."

Users will interact with a CLI wrapper for docker that also controls how SocketPlane’s virtual networks are created, deleted and manipulated.

SocketPlane uses HashiCorp's Consul as a lightweight control plane, and connects to the Consul cluster through Open vSwitch for network connectivity. Once a Docker host is added, the agent runs as a Docker instance and connects into the cluster. The container then looks like a VM.
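As a rough sketch of that pattern, and not SocketPlane's actual code, the Python below shows what a host agent might do when a Docker host joins: register itself through the local Consul agent's HTTP API and add a VXLAN port to an Open vSwitch bridge with ovs-vsctl. The bridge name, service name and peer addresses are hypothetical.

```python
# Illustrative sketch only: how a host agent might join a Consul-backed
# control plane and wire the local Open vSwitch bridge into a VXLAN overlay.
# Names (docker0-ovs, socketplane-agent, the 192.0.2.x addresses) are hypothetical.
import json
import subprocess
import urllib.request

CONSUL_URL = "http://127.0.0.1:8500"   # local Consul agent, default HTTP port
BRIDGE = "docker0-ovs"                  # hypothetical OVS bridge for containers
PEER_IP = "192.0.2.10"                  # another Docker host in the cluster


def register_with_consul(host_ip: str) -> None:
    """Register this host as a service in Consul so peers can discover it."""
    payload = json.dumps({
        "Name": "socketplane-agent",    # hypothetical service name
        "Address": host_ip,
        "Port": 4789,                   # standard VXLAN UDP port
    }).encode()
    req = urllib.request.Request(
        f"{CONSUL_URL}/v1/agent/service/register",
        data=payload,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


def add_vxlan_tunnel(peer_ip: str) -> None:
    """Create a VXLAN port on the local OVS bridge pointing at a peer host."""
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", BRIDGE, f"vxlan-{peer_ip}",
         "--", "set", "interface", f"vxlan-{peer_ip}",
         "type=vxlan", f"options:remote_ip={peer_ip}"],
        check=True,
    )


if __name__ == "__main__":
    register_with_consul("192.0.2.20")  # this host's address (hypothetical)
    add_vxlan_tunnel(PEER_IP)
```

Every host running an agent along these lines ends up with tunnels to its peers, which is roughly how an overlay can be stitched together without any remote or external controller.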

With Docker come changes that will define a new generation of networking technologies, ones that leverage containers across multiple machines and hosts. With the industry anticipating at least a two-order-of-magnitude increase in the number of containers, managing the subsequent impact remains ripe ground for innovation.

“You are going to see a new order of magnitude in terms of swarming of compute running for shorter time periods,” said John Willis of SocketPlane.  “Now it is a matter of nanocompute. It could go from 1,000 to one billion instances starting and stopping in a week.”

At DockerCon Europe last December, Willis took me through his view of what's to come in the networking world. Willis reflects a vendor's perspective, and there are lots of other vendor plays, which we will get into in future posts. In particular, Weave is an example of a container-centric networking technology that has gained some notice. CoreOS has developed Flannel, and Docker has its own networking configurations.

Willis calls SDN an overloaded term that has all kinds of interpretations. But why? Networks are hard to manage, not malleable, hard to evolve and difficult to understand. Put those together and it gets very complicated. SDN, on the other hand, is supposed to make the data plane more malleable and move the control plane off the device. But with too much centralization, managing a container-centric ecosystem becomes unrealistic.

The Problem With the Central Brain

The insightful Mark Burgess writes that the problem with a centralized network goes something like this: We need brains but there is only so much one brain can handle. We need lots of brains to work together. It’s about creating a society more than relying on an individual to hold all the knowledge.

For a long time, the centralized model worked. You could build out a network with hardware that controlled it from a central point. SDN was built on this principle: it relied on the concept of a centralized control plane that was separate from the data plane. But what really makes more sense is a decentralized network that borrows the principles of a centralized brain but distributes the knowledge, as we do in modern societies. It is never one mind that controls the way we live and work. It is the collective nature of society, a collage of concepts and ideas that we execute in all kinds of ways that reflect the human soul. One brain can only process so much information; societies have a collective intelligence that comes from thousands of brains interacting with each other.


Burgess, who founded CFEngine and has been working with SocketPlane for a few months, argues that we need to push the load to the edges. God-like, brute-force models may seem efficient, but they are going to be slow and more susceptible to catastrophe:

Societies scale better than brain models, because they can form local cells that interact weakly at the edges, trading or exchanging information. If one connection fails, it does not necessarily become cut off from the rest, and it has sufficient autonomy to reconfigure and adapt. There is no need for there to be a path from every part of the system to a single god-like location, which then has to cope with the load of processing and decision-making by sheer brute force.

Docker fits well into this new hybrid SDN that SocketPlane is developing. Docker could become the symbol of an OpEx nightmare if there were a reliance on this centralized, single-brain approach. For example, say a centralized SDN platform can support 500 hosts with 50 virtual machines (VMs) per host. What do you do when there are 5,000 containers per host? A centralized environment becomes unfeasible. There need to be new ways to manage the changes in network behavior that arise as the number of containers and the volume of data scale.
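Back-of-the-envelope arithmetic with the hypothetical numbers above makes the gap concrete:

```python
# Rough arithmetic for the hypothetical above: endpoints a central
# controller would need to track in each scenario.
hosts = 500
vm_endpoints = hosts * 50            # 50 VMs per host
container_endpoints = hosts * 5_000  # 5,000 containers per host

print(vm_endpoints)                         # 25000
print(container_endpoints)                  # 2500000
print(container_endpoints // vm_endpoints)  # 100x -- two orders of magnitude
```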

Willis pointed to two megatrends:

  • Density: As his co-worker Brent Salisbury wrote last week, the operating system is getting fragmented. With Docker come densities that will change the way applications behave in a network environment. Now that the OS is getting consolidated, and perhaps eventually removed, the subscription ratio will change considerably. There may be 2,000 containers on a physical server in an environment that has hundreds or even thousands of hosts.
  • Data Gravity: A concept developed by Basho CTO Dave McCrory, it means that data should be treated as an object that attracts more objects to it. The more data, the more services and applications it will attract. As more data collects, which is inevitable, compute resources will need to be faster and, in many ways, called to the app. The compute will swarm or be streamed, which we are seeing already with technologies like AWS Lambda.

Adrian Cockcroft touched on this in his talk at DockerCon Europe.

SocketPlane calls its approach a retro-SDN. The control plane has logic that gets populated to the data plane. Flow tables are matched against packets, so a packet coming through gets routed programmatically, allowing for dynamic decision-making. VMware calls this micro-segmentation, Willis said: segments are isolated by creating tunnels. VLANs are an abstraction for doing multi-tenancy on a switch, but the coupling is to the hardware. In the cloud, a user's compute resource does not go to one machine; instead it is distributed across a virtual network that routes it dynamically.

This, in a broad way, is how Willis defines the overlay network: a logical network is created that has no coupling to a physical switch and can be changed on the fly. It is all driven from the control plane down to the data plane. This is the promise of SDN; services can be moved into data plane manipulation. On GitHub, the overlay network in SocketPlane is described as follows:

Overlay networking establishes tunnels between host endpoints, and in our case, those host endpoints are Open vSwitch. The advantage to this scenario is the user doesn’t need to worry about subnets/VLANs or any other layer 2 usage constraints. This is just one way to deploy container networking that we will be presenting. The importance of Open vSwitch is performance and the de facto APIs for advanced networking.
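To make the flow-table idea above a little more concrete, here is a toy Python model of the split Willis describes. It is purely illustrative, not OpenFlow or Open vSwitch code, and the match fields and actions are made up for the example: the control plane installs flow entries, and the data plane matches each packet against them and applies the first matching action.

```python
# Toy model of data-plane flow matching: the control plane programs flow
# entries; the data plane matches each packet against them in priority order.
# Purely conceptual -- field names and actions are invented for illustration.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class FlowEntry:
    priority: int
    match: dict          # e.g. {"dst_ip": "10.0.0.2"}
    action: str          # e.g. "output:vxlan-192.0.2.10" or "drop"


@dataclass
class FlowTable:
    entries: list = field(default_factory=list)

    def add_flow(self, entry: FlowEntry) -> None:
        """Control-plane side: install a flow entry."""
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def lookup(self, packet: dict) -> Optional[str]:
        """Data-plane side: return the action of the first matching entry."""
        for entry in self.entries:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.action
        return None  # table miss


# The "controller" programs two flows; the "switch" then routes by table lookup.
table = FlowTable()
table.add_flow(FlowEntry(100, {"dst_ip": "10.0.0.2"}, "output:vxlan-192.0.2.10"))
table.add_flow(FlowEntry(0, {}, "drop"))  # default: drop unknown traffic

print(table.lookup({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2"}))  # tunnel it
print(table.lookup({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9"}))  # drop
```

Because the logic lives in programmable entries rather than in hardware configuration, the same mechanism can be repopulated on the fly, which is what makes the overlay independent of any physical switch.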

With containers comes an order-of-magnitude jump in scale, Willis said. Thousands of containers may run on a single physical host. This creates a bigger problem: with the advent of microservices, there may be any number of services running in those containers. That means new design patterns, as data gravity, combined with the density factor, creates an entirely different way to think about how compute resources are distributed.

“The norm may be compute living for minutes or maybe seconds.”

There’s no need for the network to be centrally controlled. Just as in society, there really should be no need for one centralized, autocratic power. But it does mean that we have to think about hubs and the intelligence they offer. In different communities, there are people who are looked to for their knowledge. They are not the only source of information, but they are depended on for their expertise and capabilities. We see that all the time in the geek community: there is not one person who controls all the knowledge. Rather, it is the collective depth of the community that makes the difference.

This offers a different way to think about the network itself: not one brain, but rather a network of central data planes that make decisions dynamically.

Perhaps this is a way for us to think differently about the infrastructure itself. Seeing the network as a society, more so than as one monolithic entity, gives us new ways to understand how these systems truly affect the way we live and work.

TNS owner Insight Partners is an investor in: Docker.