I was at Citrix Synergy/Virtualization Congress earlier this week and at the end of the day on Wednesday, Scott Lowe tweeted something interesting when he said:
In my mind, the biggest announcement that no one is talking about is virtual switching for XenServer. #CitrixSynergy
I had missed the announcements since I didn’t get to many of the sessions due to timing, so I sniffed around based on Scott’s hints and looked for some more meat.
I found that Chris Wolf covered the announcement nicely in his blog here, but I wanted a little more detail, especially regarding approach, architecture and implementation.
Imagine my surprise when Alessandro Perilli and I sat down for a quick drink only to be joined by Simon Crosby and Ian Pratt. Sometimes, membership has its privileges 😉
I asked Simon/Ian about the new virtual switch because I was very intrigued, and since I had direct access to the open source, it was good timing.
Now, not to be a spoil-sport, but there are details under FrieNDA that I cannot disclose, so I’ll instead riff off of Chris’ commentary, wherein he outlined the need for more integrated and robust virtual networking capabilities within, or adjunct to, the virtualization platforms:
Cisco had to know that it was only a matter of time before competition for the Nexus 1000V started to emerge, and it appears that a virtual switch that competes with the Nexus 1000V will come right on the heels of the 1000V release. There’s no question that we’ve needed better virtual infrastructure switch management, and an overwhelming number of Burton Group clients are very interested in this technology. Client interest has generally been driven by two factors:
- Fully managed virtual switches would allow the organization’s networking group to regain control of the network infrastructure. Most network administrators have never been thrilled with having server administrators manage virtual switches.
- Managed virtual switches provide more granular insight into virtual network traffic and better integration with the organization’s existing network and security management tools.
I don’t disagree with any of what Chris said, except that I do think that the word ‘compete’ is an interesting turn of phrase.
Just as the Cisco 1000v is a mostly proprietary (implementation of a) solution bound to VMware’s platform, the new Citrix/Xen/KVM virtual networking capabilities — while open sourced and free — are bound to Xen and KVM-based virtualization platforms, so it’s not really “competitive” because it’s not going to run in VMware environments. It is certainly a clear shot across the bow of VMware to address the 1000v, but there’s a tradeoff here as it comes to integration and functionality as well as the approach to what “networking” means in a virtualized construct. More on that in a minute.
I’m going to take Chris’ next chunk out of order so I can describe the features we know about:
I’m expecting Citrix to offer more details of the open source Xen virtual switch in the near future, but in the mean time, here’s what I can tell you:
- The virtual switch will be open source and initially compatible with both Xen- and KVM-based hypervisors
- It will provide centralized network management
- It will support advanced network management features such as Netflow, SPAN, RSPAN, and ERSPAN
- It will initially be available as a plug-in to XenCenter
- It will support security features such as ACLs and 802.1x
This all sounds like good stuff. It brings the capabilities of virtual networking, and how it’s managed, up to “proper” levels. If you’re wondering how this is going to happen, you *cough* might want to take a look at OpenFlow. Being able to enforce policies and do things similar to the 1000v with VMware’s vSphere, DVS and the upcoming VN-Link/VN-tag is the stuff I can’t talk about, even though it’s the most interesting. Suffice it to say there are some very interesting opportunities here that do not require proprietary networking protocols that may or may not require uplifts or upgrades of routers/switches upstream. ’nuff said. 😉
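Since I can’t say more about the mechanics, here’s a rough Python sketch of the general model that OpenFlow implies: a central controller that owns the policy and pushes simple match/action flow entries down to each host’s virtual switch. To be clear, every name in it (Controller, VirtualSwitch, FlowEntry and its fields) is my own hypothetical shorthand for the architecture, not anything from Citrix’s actual switch or the OpenFlow spec.

```python
# Illustrative sketch only -- hypothetical names, not the announced switch's API.
# Models the OpenFlow-style split: a central controller owns policy and pushes
# match/action flow entries down to per-host virtual switches.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass(frozen=True)
class FlowEntry:
    """A simplified match/action rule, roughly in the spirit of an OpenFlow flow entry."""
    src_mac: Optional[str] = None      # None means wildcard
    dst_port: Optional[int] = None     # TCP/UDP destination port, None means wildcard
    vlan: Optional[int] = None
    action: str = "forward"            # e.g. "forward", "drop", "mirror"


class VirtualSwitch:
    """Per-host datapath: applies whatever flow table the controller installs."""
    def __init__(self, name: str):
        self.name = name
        self.flows: List[FlowEntry] = []

    def install(self, flows: List[FlowEntry]) -> None:
        self.flows = list(flows)


class Controller:
    """Central policy point: one place to define ACL-style rules for every host's vswitch."""
    def __init__(self):
        self.switches: Dict[str, VirtualSwitch] = {}
        self.policy: List[FlowEntry] = []

    def register(self, vswitch: VirtualSwitch) -> None:
        self.switches[vswitch.name] = vswitch
        vswitch.install(self.policy)            # a newly added host picks up current policy

    def push_policy(self, flows: List[FlowEntry]) -> None:
        self.policy = list(flows)
        for vswitch in self.switches.values():  # same policy lands on every host
            vswitch.install(self.policy)


if __name__ == "__main__":
    ctl = Controller()
    ctl.register(VirtualSwitch("xenbr0-host1"))
    ctl.register(VirtualSwitch("xenbr0-host2"))
    # Block telnet everywhere, mirror VLAN 10 for inspection, forward the rest.
    ctl.push_policy([
        FlowEntry(dst_port=23, action="drop"),
        FlowEntry(vlan=10, action="mirror"),
        FlowEntry(action="forward"),
    ])
    print(ctl.switches["xenbr0-host2"].flows)
```

The takeaway from the sketch is the split itself: the data path stays on the host, but the policy lives in one place the networking team can actually own, which is exactly the “regain control” point from Chris’ first bullet above.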
Now, the next section is interesting, but in my opinion it’s a bit of a reach in places:
For awhile I’ve held the belief that the traditional network access layer was going to move to the virtual infrastructure. A large number of physical network and security appliance vendors believe that too, and are building or currently offering products that can be deployed directly to the virtual infrastructure. So for Cisco, the Nexus 1000V was important because it a) gave its clients functionality they desperately craved, but also b) protected existing revenue streams associated with network access layer devices. Throw in an open source managed virtual switch, and it could be problematic for Cisco’s continued dominance of the network market. Sure, Cisco’s competitors can’t go at Cisco individually, but by collectively rallying around an open source managed virtual switch, they have a chance. In my opinion, it won’t be long before the Xen virtual switch can be run via software on the hypervisor and will run on firmware on SR-IOV-enabled network interfaces or converged network adapters (CNAs).
…
This is clearly a great move by Citrix. An open source virtual switch will allow a number of hardware OEMs to ship a robust virtual switch on their products, while also giving them the opportunity to add value to both their hardware devices (e.g., network adapters) and software management suites. Furthermore, an open source virtual switch that is shared by a large vendor community will enable organizations to deploy this virtual switch technology while avoiding vendor lock-in.
Firstly, I totally agree that it’s fantastic that this capability is coming to Xen/KVM platforms. It’s a roadmap item that has been missing and was, quite honestly, going to happen one way or another.
You can expect that Microsoft will also need to respond to this at some point to allow for more integrated networking and security capabilities with Hyper-V.
However, let’s compare apples to apples here.
I think it’s interesting that Chris chose to toss in the “vendor lock-in” argument as it pertains to virtual networking and virtualization for the following reasons:
- Most enterprise networking environments (from the routing & switching perspective) are usually provided by a single vendor.
- Most enterprises choose a virtualization platform from a single vendor.
If you take those two things, then for an environment that has VMware and Cisco, that “lock-in” is a deliberate choice, not foisted upon them.
If an enterprise chooses to invest based upon functionality NOT available elsewhere due to a tight partnership between technology companies, it’s sort of goofy to suggest lock-in. We call this adoption of innovation. When you’re a competitor who is threatened because you don’t have the capability, you call it lock-in. ;(
This virtual switch announcement does nothing to address “lock-in” for customers who choose to run VMware with a virtual networking stack other than VMware’s or Cisco’s…see what I mean? It doesn’t matter whether the customer has Juniper switches or not in this case…until you can integrate an open source virtual switch into VMware the same way Cisco did with the 1000v (which is not trivial), we are where we are.
Of course the 1000v was a strategic decision by Cisco to help reclaim the access layer that was disappearing into the virtualized hosts and make Cisco more relevant in a virtualized environment. It sets the stage, as I have mentioned, for the longer term advancements of the entire Nexus and NG datacenter switching/routing products including VN-Link/VN-Tag — with some features being proprietary and requiring Cisco hardware and others not.
I just don’t buy the argument that an open virtual switch “…could be problematic for Cisco’s continued dominance of the network market” when the longtime availability of open source networking products (including routers like Vyatta) hasn’t made much of a dent in the enterprise against Cisco.
Customers want “open enough” and solutions that are proven and time tested. Even the 1000v is brand new. We haven’t even finished letting the paint dry there yet!
Now, I will say that if IBM or HP want to stick their thumb in the pie and extend their networking reach into the host by integrating this new technology with their hardware network choices, it offers a good solution — so long as you don’t mind *cough* “lock-in” from the virtualization platform provider’s perspective (since VMware is clearly excluded — see how this is a silly argument?)
The final point about “security inspection” and comparing the ability to redirect flows at a kernel/network layer to a security VA/VM/appliance is only one small part of what VMware’s VMsafe does:
Citrix needed an answer to the Nexus 1000V and the advanced security inspection offered by VMsafe, and there’s no doubt they are on the right track with this announcement.
Certainly, it’s the first step toward better visibility, and it does not require API modification of the security virtual appliances/machines like VMware’s solution does in its full-blown implementation, but this isn’t full-blown VM introspection, either.
Rather, it’s a way of ensuring a more direct method of gaining better visibility and control over networking in a virtualized environment. Remember that VMsafe also includes the ability to provide interception and inspection of virtualized memory, disk and CPU execution as well as networking. There are, as I have mentioned, Xen community projects to introduce VM introspection, however.
So yes, they’re on the right track indeed, and it will give people pause when evaluating which virtualization and network vendor to invest in, should there be a greenfield opportunity to do so. If we’re dealing with environments that already have Cisco and VMware in place, not so much.
/Hoff