
Archive for the ‘Virtualization’ Category

The Cart Before the Virtual Horse: VMware’s vShield/Zones vs. VMsafe APIs

April 25th, 2009 4 comments

Two years ago VMware announced their intention to develop and release a set of capabilities that would provide a more resilient and secure hypervisor while also extending a set of APIs to a limited number of vetted third-party security ISVs.

These APIs were designed to regain visibility and add capabilities such as virtual introspection across compute, network and storage realms in order to solve some really difficult issues that I’ve spoken about extensively in my Four Horsemen of the Virtualization Security Apocalypse talks.

The reality is that VMsafe required two very important things to happen before it could see the light of day:

  1. A new version of the VMware platform with a substantial overhaul of its virtual networking capabilities, and
  2. New versions of the products of every ISV wishing to take advantage of the APIs

Both of these things take substantial time and engineering effort and create some very difficult integration, testing and product management challenges for both VMware and the security ISVs in the ecosystem.  I’ve lived this life on both sides of the fence and it ain’t pretty, folks.

Here’s the cool thing: although it’s arrived out of order, the integration of technology from the Blue Lane acquisition (with the IPS and patch-proxy functions removed) adds the capability to provide logical zoning and policy/firewall enforcement, and it yields a very interesting side effect.

For all those vendors struggling with having to retool their virtual appliances and write kernel-level drivers for fastpath functionality in order to work with the VMsafe APIs, as well as their own slowpath drivers in the VA, vShield ultimately offers a solution that instead depends upon VMware’s dvFilters to redirect certain protocols to a virtual appliance based upon zones.

I saw a demo of how RSA has taken their DLP solution (from the Tablus acquisition) and, by using vShield/Zones to provide the filtering and agreeing on a communications path between the VMM and the RSA virtual appliance, integrated it without having to re-write their code or develop fastpath drivers!

Now, there’s a trade-off in extensibility: what gets exposed is limited, since VMware effectively controls it in this scenario; you might expect only fixed protocol redirection or some other prescribed limitation.
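To make that trade-off concrete, here’s a minimal conceptual sketch of what zone-based redirection amounts to. This is emphatically not the vShield or dvFilter API; every name, class and policy structure below is hypothetical and exists purely for illustration.

    # Conceptual sketch only -- NOT the real vShield/dvFilter interfaces; all names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Flow:
        src_zone: str
        dst_zone: str
        protocol: str      # e.g. "smtp", "http"
        payload: bytes

    # The platform owner (not the security ISV) decides which inter-zone protocols get redirected.
    REDIRECT_POLICY = {
        ("dmz", "internal"): {"smtp", "http"},
    }

    def route(flow, security_va, fast_path):
        """Hand selected inter-zone protocols to the security virtual appliance;
        everything else stays on the normal forwarding path."""
        redirected = REDIRECT_POLICY.get((flow.src_zone, flow.dst_zone), set())
        if flow.protocol in redirected:
            return security_va.inspect(flow)   # the ISV appliance sees this traffic
        return fast_path.forward(flow)         # no third-party code in the VMM touches this

The point of the sketch is simply that the redirection decision lives with the platform; the ISV’s appliance only ever sees what the policy chooses to hand it.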

Regardless of how this plays out functionally, both ISVs and customers now have an expanded choice when it comes to deciding how they might integrate security controls:

  1. Use the VMsafe APIs and wait for a vendor to re-write, integrate and test their code, gaining the best balance of performance, extensibility and customization of the solution; or
  2. Use vShield/Zones, with shorter development and test cycles and no code modification required.  This offers potentially less optimized performance and less extensibility, but also potentially less attack surface, since APIs are not exposed and there is no third-party code in the VMM.

vShield/Zones will help the security ISVs integrate their solutions more easily and, hopefully, more quickly, and will give customers the CHOICE of trade-offs between security, performance and functionality in terms of security solution integration.  It also means that the number and choice of ISVs in the ecosystem should expand.

Further, it may mean easier integration of security controls in Cloud scenarios as VMware extends vCloud.

I eagerly await more information regarding how vShield and the VMware/RSA proof-of-concept develops.  I hope that the PoC generates interest and accelerates the delivery of security solutions from ISVs who may not have previously been able to participate in the VMsafe API program.

/Hoff

The VM Mobility Myth

April 25th, 2009 11 comments

It finally dawned on me that if I have a few hundred to a thousand people sitting in front of me at one of my presentations, I should take advantage of that collective intelligence to perform a little selfish information gathering.

I’ve held the opinion for quite some time that the rampant squawking and generalizations regarding hyper-mobility, suggesting VM sprawl and uncontrolled instance spawning, are nothing more than FUD given where we are today with the technology and platforms that supposedly enable it.

We constantly hear how organizations big and small are suffering (or soon will) from the evils of virtualization by way of VMs and information turning up everywhere, putting your data and assets at risk. It gets worse with the multi-tenancy issues surrounding moving to “The Cloud,” they say.

So in a couple of my panels at RSA, I asked for some sanity and fact checking.

Informally, 95% of those in attendance at the two RSA panels I engaged run VMware in production. I asked how many of those in the audience, in cases OTHER than failure, take advantage of VM mobility (such as VMotion) or some other technological capability to provide autonomic mobility of VMs in their enterprises.

About 5 people (in crowds of 100+ and 500+ respectively) raised their hands.  Given that I asked this question the second time in front of a huge audience at RSA while sitting next to the CTOs of Citrix and VMware, I’m sure they were pretty surprised by the answer, too.

The reality is that in these environments — even extremely complex and large examples — there simply isn’t that much mobility, and customers are more interested in resilience than in agility in terms of what this feature brings. That’s a really interesting and important point.

The reason for this is pretty simple: the capability to provide integrated networking and virtualization, coupled with governance and autonomics, simply isn’t mature at this point. Most people are simply replicating existing zoned/perimeterized non-virtualized network topologies in their consolidated virtualized environments and waiting for the platforms to catch up. We’re really still seeing the effects of what virtualization is doing to the classical core/distribution/access design methodology, and how shackled much of this mobility remains to critical components like DNS, IP addressing and layer 2 VLANs.  See Greg Ness and Lori MacVittie’s scribblings.
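As a rough illustration of that shackling, here’s a toy pre-flight check of the kind an admission process might perform before moving a live VM. It is not VMotion’s actual admission logic; the function, parameters and example values are all invented for the illustration.

    # Toy illustration only -- not VMotion's real admission logic.
    import ipaddress

    def can_live_migrate(vm_ip, vm_vlan, dst_host_vlans, dst_host_subnet):
        same_l2 = vm_vlan in dst_host_vlans                  # the VLAN must be trunked to the target host
        same_subnet = ipaddress.ip_address(vm_ip) in ipaddress.ip_network(dst_host_subnet)
        # If either check fails, the VM needs a new IP address, which breaks open
        # sessions, firewall rules and DNS records -- which is why, in practice,
        # mobility stays penned inside the same zone/perimeter.
        return same_l2 and same_subnet

    print(can_live_migrate("10.1.2.30", 120, {110, 120}, "10.1.2.0/24"))   # True
    print(can_live_migrate("10.1.2.30", 120, {300}, "10.9.0.0/16"))        # False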

Furthermore, workload distribution is simply impractical for anything other than monolithic stacks because the virtualization platforms, the applications and the networks aren’t yet at a point where, from a policy or intelligence perspective, they can easily and reliably self-orchestrate.

Don’t get me wrong, autonomics and business process/governance feedback loops are most definitely coming — and are absolutely required for Cloud — but they’re not here and not used much today.  This is the hard stuff we’ve skipped over because it’s really freaking hard.  Don’t believe me?  See how long folks like HP have been at their “Adaptive Enterprise” solutions.  That’s why unified fabrics make so much sense; you can get your arms around automating much, much more with a consistent set of enforceable policies and SLAs.

So the next time someone brings up this epidemic of runaway VMs, ask them to kindly provide you with empirical data demonstrating it; just because it *might* happen doesn’t mean it *does* happen.

So many of the purported risks associated with virtualization and Cloud are based on what might happen. There’s a huge difference between possibility and probability. One of them is used for prudent analysis and risk assessment, the other for selling you something. I’ll let you figure out which is which.

The management, visibility and security tools and capabilities are arriving on our doorsteps. When and if this sort of problem actually becomes a problem, it’s quite likely we’ll have a good set of solutions to deal with it.

Until then, challenge these assertions and fears, and ask for proof, not pandering to panic.

OVF: The Root Of All Evil. We Must Exterminate It NOW!

April 17th, 2009 4 comments

Today I was rudely interrupted from my Cyber-dopamine-drip as I hungrily anticipated Oprah’s next tweet such that I might become complete.

My Google Reader flashed its welcome yellow folder highlight, indicating an RSS feed had been tickled.

Little did I know this pollen-tinted shimmer would bring such discord to what was shaping up otherwise to be a perfectly lovely spring day.

It seems the singularity is upon us, as chronicled by Kris Buytaert in his post titled “On the Dangers of OVF.”

It’s not often that I’m awe-struck into silence, but if you read this, I am convinced you will draw your own conclusions:

Usually I`m all in favour of Open Standards that are supported by different parties, and the Open Virtual Machine Format (OVF) pretty much matches these requirements.
The last Virtualbox has support for it, Simon is telling about it being part of the new XenConvert v2 Tech Preview .
However, Reuven wonders why it hasn’t gained widespread adoption yet.

Here’s my take, .. I`m not in favour of a standard as OVF that provides an easy way to transfer packaged virtual machine instance between different platforms.

Why ? Because I don’t think transferring full images of Virtual machines around is a good idea, not on 1 platform, not on different platforms.
And I`m not the only one with that opinion.

A Virtual Machine image is the perfect vehicle for malware in your network … some prepares an image for you , you run it on your network, and you set loose the devil, who knows it does a networkscan in the background and sends the info

OVF is a good breeding area for VM Image Sprawl,the effect you get when the number of images you have grows beyond what you can easily maintain, and this time it can grow beyond the people only using proprietary software , where as Image Sprawl used to be a disease mostly diagnosed within the VMWare usergroups and sysdamins with no clue on large scale deployments OVF

Sure OVF will assist smooth migration between different platforms so vendors want to keep it as far away from their users as possible, but people that already have a platform agnostic deployment framework in place don’t really need to worry about deploying on different platforms.

<Silence punctuated only by the sounds of me choking on my own tongue>

Sigh.  It must be WTF Friday.

/Hoff

Incomplete Thought: Looking At An “Open & Interoperable Cloud” Through Azure-Colored Glasses

March 29th, 2009 4 comments

As with the others in my series of “incomplete thoughts,” this one is focused on an issue that has been banging around in my skull for a few days.  I’m not sure how to properly articulate my thought completely, so I throw this up for consideration, looking for your discussion to clarify my thinking.

You may have heard of the little dust-up involving Microsoft and the folk(s) behind the Open Cloud Manifesto. The drama reminds me of the Dallas storyline where everyone tried to guess who shot J.R., but it’s really not the focus of this post.  I use it here for color.

What is the focus of this post is the notion of “open(ness),” portability and interoperability as they relate to Cloud Computing — or more specifically, how these terms relate to the infrastructure and enabling platforms of Cloud Computing solution providers.

I put “openness” in quotes because definitionally, there are as many representations of this term as there are for “Cloud,” which is a big part of the problem.  Just to be fair, before you start thinking I’m unduly picking on Microsoft, I’m not. I challenged VMware on the same issues.

So here’s my question as it relates to Microsoft’s strategy regarding Azure given an excerpt from Microsoft’s Steven Martin as he described his employer’s stance on Cloud in regard to the Cloudifesto debacle above in his blog post titled “Moving Toward an Open Process On Cloud Computing Interoperability“:

From the moment we kicked off our cloud computing effort, openness and interop stood at the forefront. As those who are using it will tell you, the  Azure Services Platform is an open and flexible platform that is defined by web addressability, SOAP, XML, and REST.  Our vision in taking this approach was to ensure that the programming model was extensible and that the individual services could be used in conjunction with applications and infrastructure that ran on both Microsoft and non-Microsoft stacks. 

What got me going was this ZDNet interview by Mary Jo Foley, in which she interviewed Julius Sinkevicius, Microsoft’s Director of Product Management for Windows Server, loosely comparing Cisco’s Cloud strategy to Microsoft’s and pointing to an apparent lack of interoperability between Microsoft’s own virtualization and Cloud Computing platforms:

MJF: Did Cisco ask Microsoft about licensing Azure? Will Microsoft license all of the components of Azure to any other company?

Sinkevicius: No, Microsoft is not offering Windows Azure for on premise deployment. Windows Azure runs only in Microsoft datacenters. Enterprise customers who wish to deploy a highly scalable and flexible OS in their datacenter should leverage Hyper-V and license Windows Server Datacenter Edition, which has unlimited virtualization rights, and System Center for management.

MJF: What does Microsoft see as the difference between Red Dog (Windows Azure) and the OS stack that Cisco announced?

Sinkevicius: Windows Azure is Microsoft’s runtime designed specifically for the Microsoft datacenter. Windows Azure is designed for new applications and allows ISVs and Enterprises to get geo-scale without geo-cost.  The OS stack that Cisco announced is for customers who wish to deploy on-premise servers, and thus leverages Windows Server Datacenter and System Center.

The source of the on-premise Azure hosting confusion appears to be this: All apps developed for Azure will be able to run on Windows Server, according to the Softies. However — at present — the inverse is not true: Existing Windows Server apps ultimately may be able to run on Azure. For now only some can do so, and only with a fairly substantial amount of tweaking.

Microsoft’s cloud pitch to enterprises who are skittish about putting their data in the Microsoft basket isn’t “We’ll let you host your own data using our cloud platform.” Instead, it’s more like: “You can take some/all of your data out of our datacenters and run it on-premise if/when you want — and you can do the reverse and put some/all of your data in our cloud if you so desire.”

What confuses me is how Azure, as a platform, will be limited to deployment only in Microsoft’s operating environment (i.e., their datacenters) and not available for use outside of that environment, and how that squares with Martin’s statements above regarding interoperability.

Doesn’t the proprietary nature of the Azure runtime platform, “open” or not via API, by definition limit its openness and interoperability? If I can’t take my applications and information and operate them anywhere without major retooling, how does that imply openness, portability and interoperability?

If one cannot do that fully between Windows Server and Azure — both from the same company —  what chance do we have between instances running across different platforms not from Microsoft?

The issue at hand to consider is this:

If you do not have one-to-one parity between the infrastructure that provides your cloud “externally” versus “internally,” (and similarly public versus private clouds) can you truly claim openness, portability and interoperability?

What do you think?

Pimping My Friends: Joshua Corman on Virtualization Security

March 29th, 2009 1 comment
Josh Corman - Virtualization Security Tutorial

Joshua Corman is IBM/ISS’ Principal Security Strategist and a longtime friend.

Josh has a great virtualization security tutorial up at the Internet Evolution “macro-site.”

I like the layout and functionality as well as the content; there is a ton of great information here.

Check it out.

/Hoff

Joanna Rutkowska: Making Invisible Things Visible…

March 25th, 2009 No comments

I’ve had my issues in the past with Joanna Rutkowska, the majority of which have had nothing to do with the technical content of her work, but rather with how it was marketed.  That was then, this is now.

Recently, the Invisible Things Lab team has released some really interesting work on attacking SMM (System Management Mode).  What I’m really happy about is that Joanna and her team are making an effort to communicate the relevance and impact of the team’s research and exploits in ways they weren’t before.  As much as I was critical previously, I must acknowledge and thank her for that, too.

I’m reasonably sure that Joanna couldn’t care less what I think, but the latest work is great; it really does indicate the profoundly shaky foundation upon which we’ve built our infrastructure, and I am thankful for what this body of work points out.

Here’s a copy of Joanna’s latest blog post, titled “The Sky is Falling?”, explaining it:

A few reporters asked me if our recent paper on SMM attacking via CPU cache poisoning means the sky is really falling now?

Interestingly, not many people seem to have noticed that this is the 3rd attack against SMM our team has found in the last 10 months. OMG 😮

But anyway, does the fact we can easily compromise the SMM today, and write SMM-based malware, does that mean the sky is falling for the average computer user?

No! The sky has actually fallen many years ago… Default users with admin privileges, monolithic kernels everywhere, most software unsigned and downloadable over plaintext HTTP — these are the main reasons we cannot trust our systems today. And those pathetic attempts to fix it, e.g. via restricting admin users on Vista, but still requiring full admin rights to install any piece of stupid software. Or selling people illusion of security via A/V programs, that cannot even protect themselves properly…

It’s also funny how so many people focus on solving the security problems by “Security by Correctness” or “Security by Obscurity” approaches — patches, patches, NX and ASLR — all good, but it is not gonna work as an ultimate protection (if it could, it would worked out already).

On the other hand, there are some emerging technologies out there that could allow us to implement effective “Security by Isolation” approach. Such technologies as VT-x/AMD-V, VT-d/IOMMU or Intel TXT and TPM.

So we, at ITL, focus on analyzing those new technologies, even though almost nobody uses them today. Because those technologies could actually make the difference. Unlike A/V programs or Patch Tuesdays, those technologies can change the level of sophistication required for the attacker dramatically.

The attacks we focus on are important for those new technologies — e.g. today Intel TXT is pretty much useless without protection from SMM attacks. And currently there is no such protection, which sucks. SMM rootkits sound sexy, but, frankly, the bad guys are doing just fine using traditional kernel mode malware (due to the fact that A/V is not effective). Of course, SMM rootkits are just yet another annoyance for the traditional A/V programs, which is good, but they might not be the most important consequence of SMM attacks.

So, should the average Joe Dow care about our SMM attacks? Absolutely not!

I really appreciate the way this is being discussed; I think the ITL work is (now) moving the discussion forward by framing the issues instead of merely focusing on sensationalist exploits that whip people into a frenzy and cause them to worry about things they can’t control instead of the things they unfortunately choose not to 😉 

I very much believe that we can and will see advancements with the “security by isolation” approach; a lot of other bad habits and classes of problems can be eliminated (or at least significantly reduced) with the benefit of virtualization technology. 

/Hoff

Azure Users Seeing Red: When Patching the Cloud Causes Cracks

March 19th, 2009 4 comments

No, this isn’t one of those posts suggesting we can’t depend on the Cloud just because of one (OK, many) notable outages lately.  That’s so dystopic.  Besides, everyone else is already doing that.

I mean just because Azure was offline for 22 hours isn’t cause for that much concern, right?  It’s a beta community technology preview, anyway… 😉  Just like Google’s a beta.

What I found interesting, however, was what Microsoft reported as the root cause of the outage:

 

The Windows Azure Malfunction This Weekend

First things first: we’re sorry.  As a result of a malfunction in Windows Azure, many participants in our Community Technology Preview (CTP) experienced degraded service or downtime.  Windows Azure storage was unaffected.

In the rest of this post, I’d like to explain what went wrong, who was affected, and what corrections we’re making.

What Happened?

During a routine operating system upgrade on Friday (March 13th), the deployment service within Windows Azure began to slow down due to networking issues.  This caused a large number of servers to time out and fail.

You catch that bit about “…a routine operating system upgrade?”  Sometimes we call those things “patches.”  Even if this wasn’t a patch, let’s call it one for argument’s sake, okay?

As such, I was reminded of a blog post that I wrote last year titled: “Patching the Cloud” in which I squawked about my concerns regarding patching and change management/roll-back in Cloud services.  It seems apropos:

 

Your application is sitting atop an operating system and underlying infrastructure that is managed by the cloud operator.  This “datacenter OS” may not be virtualized or could actually be sitting atop a hypervisor which is integrated into the operating system (Xen, Hyper-V, KVM) or perhaps reliant upon a third party solution such as VMware.  The notion of cloud implies shared infrastructure and hosting platforms, although it does not imply virtualization.

A patch affecting any one of the infrastructure elements could cause a ripple effect on your hosted applications.  Without understanding the underlying infrastructure dependencies in this model, how does one assess risk and determine what any patch might do up or down the stack?  …

Huh.  Go figure.  
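For what it’s worth, here’s a toy sketch of the dependency question that post was getting at: patch one layer and figure out what above it inherits the risk. The layer names and the dependency graph are entirely made up; the point is only that without visibility into those dependencies, the tenant can’t do this analysis at all.

    # Toy sketch of patch ripple-effect analysis; the layers and dependencies are hypothetical.
    DEPENDS_ON = {
        "customer-app":   ["app-runtime"],
        "app-runtime":    ["guest-os"],
        "guest-os":       ["hypervisor"],
        "hypervisor":     ["host-os"],
        "deployment-svc": ["host-os", "network-fabric"],
    }

    def affected_by(patched_layer):
        """Return every component that directly or transitively depends on the patched layer."""
        hit, changed = set(), True
        while changed:
            changed = False
            for component, deps in DEPENDS_ON.items():
                if component not in hit and (patched_layer in deps or hit & set(deps)):
                    hit.add(component)
                    changed = True
        return hit

    print(affected_by("host-os"))   # everything up the stack, plus the deployment service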

/Hoff

 

Bypassing the Hypervisor For Performance & Network “Simplicity” = Bypassing Security?

March 18th, 2009 4 comments

As part of his coverage of Cisco’s UCS, Alessandro Perilli from virtualization.info highlighted this morning something I’ve spoken about many times since it was a one-slider at VMworld (latest, here), but about which we’ve not had a lot of details: the technology evolution of Cisco’s Nexus 1000v and VN-Link to the “Initiator”:

Chad Sakac, Vice President of VMware Technology Alliance at EMC, adds more details on his personal blog:

…[The Cisco] VN-Link can apply tags to ethernet frames –  and is something Cisco and VMware submitted together to the IEEE to be added to the ethernet standards.

It allows ethernet frames to be tagged with additional information (VN tags) that mean that the need for a vSwitch is eliminated. The vSwitch is required by definition as you have all these virtual adapters with virtual MAC addresses, and they have to leave the vSphere host on one (or at most a much smaller number) of ports/MACs. But, if you could somehow stretch that out to a physical switch, that would mean that the switch now has “awareness” of the VM’s attributes in network land – virtual adapters, ports and MAC addresses. The physical world is adapting to and gaining awareness of the virtual world…

 

Bundle that with Scott Lowe’s interesting technical exploration of some additional elements of UCS as it relates to abstracting — or more specifically completely removing virtual networking from the hypervisor — and things start to get heated.  I’ve spoken about this in my Four Horsemen presentation:

Today, in the VMware space, virtual machines are connected to a vSwitch because connecting them directly to a physical adapter just isn’t practical. Yes, there is VMDirectPath, but for VMDirectPath to really work it needs more robust hardware support. Otherwise, you lose useful features like VMotion. (Refer back to my VMworld 2008 session notes from TA2644.) So, we have to manage physical switches and virtual switches—that’s two layers of management and two layers of switching. Along comes the Cisco Nexus 1000V. The 1000V helps to centralize management but we still have two layers of switching.

That’s where the “Palo” adapter comes in. Using VMDirectPath “Gen 2″ (again, refer to my TA2644 notes) and the various hardware technologies I listed and described above, we now gain the ability to attach VMs directly to the network adapter and eliminate the virtual switching layer entirely. Now we’ve both centralized the management and eliminated an entire layer of switching. And no matter how optimized the code may be, the fact that the hypervisor doesn’t have to handle packets means it has more cycles to do other things. In other words, there’s less hypervisor overhead. I think we can all agree that’s a good thing

 

So here’s what I am curious about: if we’re clawing back networking from the hosts and putting it back into the network, regardless of flow/VM affinity, AND we’re bypassing the VMM (where the dvfilters/fastpath drivers for VMsafe live), do we just lose all the introspection capabilities and the benefits of VMsafe that we’ve been waiting for?  Does this basically leave us with having to shunt all traffic back out to the physical switches (and thus physical appliances) in order to secure it?  Note that this doesn’t necessarily impact the other components of VMsafe (memory, CPU, disk, etc.), but the network portion, it would seem, is obviated.
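Here’s a deliberately over-simplified way to frame that question in code. None of these names correspond to real VMsafe, dvfilter or UCS interfaces; it just models who gets to see a frame under each of the two attachment paths.

    # Over-simplified model of the two attachment paths; all names are hypothetical.
    def deliver(frame, path, vmm_filters, physical_appliances):
        """Show which inspection points a frame passes through under each attachment model."""
        if path == "vswitch":
            # Traffic traverses the hypervisor's virtual switching layer,
            # so introspection hooks living in the VMM get to see it.
            for inspect in vmm_filters:
                frame = inspect(frame)
            return frame
        if path == "direct-attach":
            # The VM is bound straight to the physical adapter; the VMM never
            # touches the frame, so inspection has to happen out in the
            # physical network (switches and appliances).
            for inspect in physical_appliances:
                frame = inspect(frame)
            return frame
        raise ValueError(path)

    seen = []
    vmm_ips  = lambda fr: (seen.append("VMM introspection"), fr)[1]
    phys_ips = lambda fr: (seen.append("physical-network inspection"), fr)[1]

    deliver(b"frame", "vswitch", [vmm_ips], [phys_ips])
    deliver(b"frame", "direct-attach", [vmm_ips], [phys_ips])
    print(seen)   # ['VMM introspection', 'physical-network inspection']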

Are we trading off security once again for performance and “efficiency?”  How much hypervisor overhead (as Scott alluded to) are we really talking about here for network I/O?

Anyone got any answers?  Is there a simple answer to this, or if I use this option do I just give up what I’ve been waiting two years for in VMsafe/vNetworking?

/Hoff


The Frogs Who Desired a King: A Virtualization & Cloud Computing Fable [Slides]

March 17th, 2009 9 comments

I’m loath to upload this presentation because really the slides accompany me (not the other way around) and there’s a ton of really important subtext and dialog that goes along with them, but I’m getting hammered with requests to release the deck, so here it is.

I will be giving this presentation at various venues over the next few months as well as the second in the series titled “Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure.”  

At any rate, it’s another rather colorful presentation. It’s in PDF format and is roughly 12MB.

Click here to download it.

Enjoy

/Hoff

The UFC and UCS: Cisco Is Brock Lesnar

March 17th, 2009 7 comments

Lesnar vs. Mir

My favorite sport is mixed martial arts (MMA).

MMA is a combination of various arts and features athletes who come from a variety of backgrounds and combine many disciplines that they bring to the ring.

You’ve got wrestlers, boxers, kickboxers, muay thai practitioners, jiu jitsu artists, judoka, grapplers, freestyle fighters and even the odd karate kid.

Mixed martial artists are often better versed in one style/discipline than another given their strengths and background, but as the sport has evolved, not being well-rounded means you run the risk of being overwhelmed when paired against an opponent who can knock you out, take you down, ground-and-pound you, submit you or wrestle/grind you into oblivion.

The UFC (Ultimate Fighting Championship) is an organization which has driven the popularity and mainstream adoption of MMA as a recognizable and sanctioned sport and has given rise to some of the most notable MMA match-ups in recent history.

One of those match-ups included the introduction of Brock Lesnar — an extremely popular “professional” wrestler — who made the transition to MMA.  It should be noted that Brock Lesnar is an aberration of nature.  He is an absolute monster: 6’3″ and 276 pounds.  He is literally a wall of muscle, a veritable 800-pound gorilla.

In his first match, he was paired up against a veteran in MMA and former heavyweight champion, Frank Mir, who is an amazing grappler known for vicious submissions.  In fact, he submitted Lesnar with a nasty kneebar as Lesnar’s ground game had not yet evolved.  This is simply part of the process.  Lesnar’s second fight was against another veteran, Heath Herring, who he manhandled to victory.  Following the Herring fight, Lesnar went on to fight one of the legends of the sport and reigning heavyweight champion, Randy Couture.  

Lesnar’s skills had obviously progressed; he looked great against Couture and ultimately won by TKO.

So what the hell does the UFC have to do with the Unified Computing System (UCS)?

Cisco UCS Components

 

Cisco is to UCS as Lesnar is to the UFC.

Everyone wrote Lesnar off after he entered the MMA world and especially after the first stumble against an industry veteran.

Imagine the surprise when his mass, athleticism, strength, intelligence and tenacity, combined with a well-crafted strategy, paid off: as his skills progressed, he became an incredible force to be reckoned with in the MMA world.  Oh, did I mention that he’s now the World Heavyweight Champion?

Cisco comes to the (datacenter) cage much as Lesnar did: an 800-pound gorilla incredibly well-versed in one set of disciplines, looking to expand into others and become just as versatile and skilled in a remarkably short period of time.  Cisco comes to win, not just to compete. Yes, Lesnar stumbled in his first outing.  Now he’s the World Heavyweight Champion.  Cisco will have its hiccups, too.

The first elements of UCS have emerged.  The solution suite, with the help of partners, will refine the strategy and broaden the offerings into a much more well-rounded approach.  Some of Cisco’s competitors, bristling at Cisco’s UCS vision/strategy, are quick to criticize and reduce UCS to simply an ill-executed move “…entering the server market.”

I’ve stated my opinions on this short-sighted perspective:

Yes, yes. We’ve talked about this before here. Cisco is introducing a blade chassis that includes compute capabilities (hereafter referred to as a ‘blade server’).  It also includes networking, storage and virtualization all wrapped up in a tidy bundle.

So while that looks like a blade server (quack!) and walks like a blade server (quack! quack!), that doesn’t mean it’s going to be positioned, talked about or sold like a blade server (quack! quack! quack!)

What’s my point?  What Cisco is building is just another building block of virtualized INFRASTRUCTURE. Necessary infrastructure to ensure control and relevance as their customers’ networks morph.

My point is that what Cisco is building is the natural by-product of converged technologies with an approach that deserves attention.  It *is* unified computing.  It’s a solution that includes integrated capabilities that otherwise customers would be responsible for piecing together themselves…and that’s one of the biggest problems we have with disruptive innovation today: integration.

 

The knee-jerk dismissals witnessed since yesterday from the competition downplaying the impact of UCS are very similar to how many people reacted to Lesnar: they suggested he was one-dimensional, with no core competencies beyond wrestling, discounting his ability to rapidly improve and overwhelm the competition.

Everyone seems to be focused on the 5100 — the “blade server” — and not the solution suite of which it is a single piece; a piece of a very innovative ecosystem, some Cisco, some not.  Don’t get lost in the “but it’s just a blade server and HP/IBM/Dell can do that” diatribe.  It’s the bigger picture that counts.

The 5100 is simply that — one very important piece of the evolving palette of tools which offer the promise of an integrated solution to a radically complex set of problems.

Is it complete?  Is it perfect?  Do we have all the details? Can they pull it off themselves?  The answer right now is a simple “No.”  But it doesn’t have to be.  It never has.

There’s a lot of work to do, but, much like an MMA training camp, that’s why you bring in the best partners to train with and improve, and ultimately you get to the next level.

All I know is that I’d hate to be in the Octagon with Cisco, just as I’d hate to be in there with Lesnar.

/Hoff