SDN, NOS and White-box switches
I am writing this article mainly because of one of my colleagues. Last month I was doing an internal presentation about SDN, hyper-converged systems and the software-defined datacenter, and as part of the complete picture we of course talked about networking and all the new possibilities available today. We had a little discussion about the established hardware vendors, and one of the things my colleague said was that they would never sell such “open” hardware without their OS and let you install whatever NOS (Network Operating System) you would like.
Well, they do. I did some research and was quite surprised how many vendors already have such devices (of course not hundreds of models – it is still, and I think will remain, a niche market).
If we look at the portfolios we will notice one thing – almost all of the available models are quite powerful. A typical configuration is 48x SFP+ and 4x QSFP+ interfaces for leaf switches and 32x 40G QSFP+ for spines. This tells you the main focus of these solutions: they are targeted at really large-scale datacenters.
Some examples from well-known vendors
- HPE – Altoline 5712, Altoline 6712
- Juniper Networks – OCX1100 Open Networking Switch
- Dell – S4810-ON, S6000-ON
- Supermicro – SSW-X3648S
There are also smaller vendors trying to be part of this market
- Edge-Core – AS6712-32X
- Penguin Computing – Arctica 3200XLP
- Quanta – QuantaMesh BMS T5032, T3048
- Agema – AG-7448CU
The last part of the portfolio is GbE switches running a NOS to support your management network (iLO, OOB interfaces, etc.). All of the vendors mentioned above also offer this series of switches (typically 48x 1G-T and 4x SFP+).
Since these switches are so-called white-box switches, there is no NOS installed by default. You have to install whatever NOS you would like to run. Some of the vendors will probably prepare their own NOS in the future, but right now there is nothing special on the market yet (as far as I know).
When you receive such a switch (or emulate it to learn some basics) you will end up with ONIE. ONIE stands for Open Network Install Environment and it is basically a boot loader through which you install the final NOS. You can also use ONIE for recovery tasks like reinstalling a NOS, or for installing a new NOS during your PoC projects.
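As a sketch of what this looks like in practice: from the ONIE console you simply point the installer at your NOS image. The URL below is a placeholder for illustration, not a real image location.

```shell
# At the ONIE: prompt, stop the automatic installer discovery first
onie-discovery-stop

# Install a NOS image over HTTP
# (placeholder URL – substitute your vendor's installer)
onie-nos-install http://192.168.1.10/nos-installer.bin
```

ONIE can also locate the installer automatically via DHCP options, TFTP or local USB media, which is what makes zero-touch provisioning of a whole rack feasible.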
Finally you have to choose a NOS to run on top of the hardware. Right now you can use OpenSwitch, for example, and another good-looking product comes from Cumulus (thanks to Tomas Kubica for mentioning it).
The point is that you probably won’t end up with just hundreds of separate white-box switches each running its own NOS. The key is that you will connect all those switches to your SDN controller or some automation framework, so you can treat your network as a single entity and move from “configuring a new VLAN” to “provisioning a new service”, which is done automatically according to your service templates.
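To illustrate the idea of “provisioning a service” instead of touching switches one by one, here is a purely hypothetical sketch – the controller hostname, API path and payload are made up, and real controllers (OpenDaylight, for instance) expose their own REST models:

```shell
# Hypothetical example: ask an SDN controller's REST API to instantiate a
# service from a template, instead of configuring VLANs switch by switch.
# Host, path and JSON fields are placeholders for illustration only.
curl -X POST http://sdn-controller.example.com:8181/api/services \
     -H "Content-Type: application/json" \
     -d '{"template": "web-tier", "vlan": 120, "switches": ["leaf01", "leaf02"]}'
```

The single API call above would translate into consistent configuration across every affected switch, which is exactly the shift from per-device CLI work to service-level automation.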
White-box switches, SDN, OpenFlow and the like are of course not for everyone. You have to be a reasonably large company to benefit from all the automation (which is not easy to set up). But on the other hand you will gain significant benefits compared to legacy networks.
If you want to play with these technologies I recommend testing them in a virtual environment. First it would be nice to get hands-on with ONIE.
Right now the only way to test it is to compile your own recovery ISO and use it to install ONIE into your VM. You can either follow this how-to or directly download the files I have already compiled.
You will need a build VM for this (you could install the packages onto your workstation instead, but I prefer a separate VM):
```shell
apt-get install build-essential git
git clone https://github.com/opencomputeproject/onie
cd onie/build-config/
make debian-prepare-build-host
make -j4 MACHINE=kvm_x86_64 all recovery-iso
```
Once done, you should have the following files in /onie/build/images/ (click each one to download it)
Installation of the ONIE to VM
I do not know why, but it did not work in my VMware Workstation – even when I tried different IDE/SATA/SCSI disk modes it always got stuck when loading GRUB. I had to use VirtualBox for this VM.
The point is that once you prepare a blank VM and insert the ISO, you also need to add a serial console to the VM. ONIE by default redirects all output from the screen to the serial console. To achieve this, add a serial port to your VM’s virtual hardware.
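If you prefer the command line over the VirtualBox GUI, the serial port can be added with VBoxManage. The VM name and pipe/socket names below are placeholders for your own setup:

```shell
# Map the VM's COM1 (I/O base 0x3F8, IRQ 4) to a named pipe that a
# terminal client such as PuTTY can attach to.
# "ONIE-test" is a placeholder VM name – use your own.
VBoxManage modifyvm "ONIE-test" --uart1 0x3F8 4
VBoxManage modifyvm "ONIE-test" --uartmode1 server \\.\pipe\onie-test   # Windows host

# On a Linux host, use a local socket path instead of a named pipe, e.g.:
# VBoxManage modifyvm "ONIE-test" --uartmode1 server /tmp/onie-test-serial
```

The `server` mode makes VirtualBox create the pipe/socket itself, so it must be set before you power the VM on and connect your terminal client.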
Once the VM is powered on you can use PuTTY (or whatever tool you use for connecting to serial consoles) and point it to the named pipe you created in the previous step.
Finally, you should see the following ONIE installation screens
Now you can install your desired NOS. This is what your switch would look like once you unpack it.
The second option is to download a ready-made virtual appliance with the NOS included. You can have a look at OpenSwitch or the already mentioned Cumulus Linux.
I hope this is useful as an example of how networking within large-scale datacenters and cloud providers can be done in the near future. If you know of another interesting NOS, let us all know in the comments.