Archive for the ‘Virtualization’ Category

On the Overcast Podcast with Geva Perry and James Urquhart

March 13th, 2009 No comments

Geva and James were kind (foolish?) enough to invite me onto their Overcast podcast today:

In this podcast we talk to Christopher Hoff, renowned information security expert, and especially security in the context of virtualization and cloud computing. Chris is the author of the Rational Survivability blog, and can be followed as @Beaker on Twitter.
Show Notes:

    • Chris talks about some of the myths and misconceptions about security in the cloud. He addresses the claim that Cloud Providers Are Better At Securing Your Data Than You Are and the benefits and shortcomings of security in the cloud.
    • We talk about Chris's Taxonomy of Cloud Computing (excuse me, model of cloud computing)
    • Chris goes through some specific challenges and solutions for PCI-compliance in the cloud
    • Chris examines some of the security issues associated with multi-tenant architecture and virtualization
Check it out here.

/Hoff 

Oh Noes: We Can’t Monitor/Protect Against Intra-VM Traffic!

March 10th, 2009 4 comments

I got a press release in my inbox this morning that made me cringe.  It came from a vendor who produces a "purpose-built virtual firewall."

The press release details a customer case study that I found typical of how security solutions are being marketed in the virtualization space today. These offerings are really more about visibility than pure "security," and they prey mostly on poor planning and on fundamental issues that stem from treating "security" like a band-aid instead of an element of enterprise architecture.


When the realities of virtualization and their security implications get crossed with promises to solve problems using products that may or may not deserve investment (especially after an honest assessment of risk in today's trying economic climate), it makes me cranky.


I'm just tiring of the mixing of metaphors in the marketing of these "solutions."


I was specifically annoyed by a couple of statements in the press release and since I haven't had my coffee, I thought I'd point out a few to further underscore what I present in my Four Horsemen presentation regarding where we are in the solution continuum today.


To wit:

[Customer] has selected the [Vendor's] virtual firewall to secure its virtual environment and mitigate an attack before it could hit their network. 

Given the fact that to get to a VM you generally have to (1) utilize the physical network and (2) transit the vSwitch in the VMM, the reality is that an attack has already "hit their network" long before it gets to the VM or the virtual firewall, at least given today's available offerings.  There is no magic security fairy dust that will mitigate an attack presciently.
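To make the point concrete, here is a minimal Python sketch of the path described above. The hop names are generic placeholders I've invented, not tied to any particular hypervisor or product:

```python
# Illustrative model of the hops an external attack transits to reach a VM.
# Hop names are generic placeholders, not tied to any particular hypervisor.
PATH_TO_VM = [
    "physical network (switches/routers)",
    "host physical NIC",
    "vSwitch in the VMM",
    "virtual firewall",
    "target VM",
]

def hops_before(target, path=PATH_TO_VM):
    """Everything the attack has already traversed before reaching `target`."""
    return path[:path.index(target)]

# By the time traffic reaches the virtual firewall, it has already
# "hit the network" at least twice.
```

The point of the sketch: by construction, the virtual firewall is the second-to-last hop, so claiming it mitigates an attack "before it could hit the network" inverts the actual order of traversal.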


If you put VM's into production that are already infected, you have other problems to solve…

“After moving our production applications to a virtualized environment we realized that we lacked security; I had no visibility into what was going on between VMs and a virtual attack could take down our network,” said [Customer]. “We sought the same level of security for our virtual environment that we had with our physical network.”

This indicates a lack of proper risk management and planning on the part of the [Customer].  Further, it underscores an example I use in the Four Horsemen: in a multi-server physical deployment, which tools did the [Customer] use to monitor/protect in-line traffic between those physical machines?

The [Customer] must have done this since the press release suggests they demand the "…same level of security for [their] virtual environment that [they] had with [their] physical network."


Did the [Customer] have each physical server on its own VLAN/subnet, isolated with firewalls?  Did they SPAN every single port to an IDS/IPS?  If not, what's the difference here?  The Hypervisor?  What protection mechanisms has the fancy virtual firewall put in place to protect it?  None.

[Customer] was increasingly concerned about the risks of virtual networks, which range from security policy violations such as mixing trusted and un-trusted systems to malware exploits that can propagate undetected within a virtual network. 

Based upon the second paragraph above where the [Customer] admitted they put their virtualized environment into production without visibility or security, they clearly weren't that concerned with the risks.

A large amount of data center network traffic was moving between VMs and [Customer] had no visibility or control over the communication on the virtual network.

So if there were no security or visibility tools in place, how was it determined that traffic was moving between VM's?


Does this mean that all the customer's VMs were in a single VLAN and not segmented? If instead vSwitch port groups and VLANs were configured around VM criticality, role or function, then they certainly had some insight into what was moving between VMs and the "data center," right?


I must be confused. 

[Customer's] traditional network security tools could not monitor, analyze or troubleshoot inter-VM traffic because communications between VMs on the same physical host never touch the traditional network

Assuming that the VMs weren't in a single VLAN/portgroup on a single vSwitch and instead were segmented via VLANs/subnets, then the only way to get traffic from VLAN/IP Subnet A to VLAN/IP Subnet B (and thus from VM A to VM B in these VLANs) is through a layer 3 routing process, which generally means traffic exits the virtual network and hits the physical network…where said "traditional security tools" could see it.


Of course, this doesn't help intra-VM traffic on the same portgroup/VLAN/vSwitch, but that's not what they pointed to above. And assuming they don't inspect machine-to-machine traffic on the same VLAN in their physical network, again I ask: what's the difference?
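The visibility argument above can be sketched as a tiny classifier. This is a hypothetical model with invented field names, assuming a conventional vSwitch/VLAN setup:

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    vswitch: str   # which vSwitch/host the VM's vNIC attaches to
    vlan: int      # VLAN/portgroup membership

def traffic_visibility(a: VM, b: VM) -> str:
    """Where can traffic between two VMs be observed?"""
    if a.vswitch == b.vswitch and a.vlan == b.vlan:
        # Same vSwitch and VLAN/portgroup: frames never leave the host,
        # so traditional physical-network tools can't see them.
        return "intra-vSwitch: invisible to physical tools"
    # Different VLANs/subnets: a layer 3 routing hop is required, which
    # generally pushes the traffic onto the physical network.
    return "via L3 routing: visible to physical tools"
```

Only the first branch represents the genuinely new blind spot; the second is exactly the traffic the "traditional" tools already see.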

VMs were able to communicate with each other without observation or policy-based inspection and filtering, which left them highly vulnerable to malicious exploits. Additionally, worms and viruses could further spread among physical hosts via unintentional VMotion of an infected VM.

Back to my point above about how the [Customer] monitored traffic between physical hosts…if you don't do it in physical environments, why the fret in the virtual?

Oh and "unintentional VMotion!?" ZOMG!  For a VM to be "infected," excluding direct physical access, wouldn't the threat vector be the network in the first place?  

The [Vendor] virtual firewall was specifically created to mitigate the risks of virtual networks, while maintaining the ROI of virtualization.

What "risks of virtual networks" does this product mitigate in the absence of vulnerability or clearly defined threats that aren't faced in the physical realm?  Let me tell you.  It goes back to the very valid claim that you get better visibility, thanks to integration with the virtualization platform's configuration managers, which call attention to CHANGE when it occurs.

This is the real value of products like this from [Vendor]. In the long run, the big boys who make mature firewalls and IPS products will get to harness APIs like VMsafe, which, combined with the compartmentalization and segmentation capabilities of vShield Zones, leaves a very short runway for products like this.

I'm not suggesting that products like this from [Vendor] don't offer value and solve an immediate pain point. I'd even consider deploying them to solve very specific problems I might have, but then again, I know what problem I'd be trying to solve. ROI?  Oy.

However, unlike the picture painted of the [Customer] above, I plan a little better and understand the impact virtualization has on my security posture and how that factors into my assessment and management of risk BEFORE I put it into production.  You should too.


</rant>


{Ed: I use 'intra-' instead of 'inter-' to reflect the "internal" passing of traffic between VMs using the vSwitch. Should traffic exit the vSwitch/host and hit the network as part of interchange between two VMs, I'd count this as 'inter-' VM traffic.}


Sun vs. Cisco? I’m Getting My Popcorn…

March 9th, 2009 6 comments

Scott Lowe wrote an interesting blog post today wondering if Sun is preparing to take on Cisco in the virtualization space, referencing the development of virtualized networking functionality featuring the novel combination of commodity hardware and open source software to unseat the Jolly Green Giant:

A while back in Virtualization Short Take #25 I briefly mentioned Sun’s Crossbow network virtualization software, which brings new possibilities to the Solaris networking world. Not being a Solaris expert, it was hard for me at the time to really understand why Solaris fans were so excited about it; since then, though, I’ve come to understand that Crossbow brings to Solaris the same kind of full-blown virtual network interfaces and such that I use daily with VMware ESX. Now I’m beginning to understand why people are so thrilled!

In any case, an astute reader picked up on my mention of Crossbow and pointed me to this article by Jonathan Schwartz of Sun, and in particular this phrase:

You’re going to see an accelerating series of announcements over the coming year, from amplifying our open source storage offerings, to building out an equivalent portfolio of products in the networking space…

That seemingly innocuous mention was then coupled with this blog post and the result was this question: is Sun preparing to take on Cisco? Is Sun getting ready to try to use commodity hardware and open source software to penetrate the networking market in the same way that they are using commodity hardware and open source software to try to further penetrate the storage market with their open storage products (in particular, the 7000 series)?

It’s an interesting thought, to say the least. Going up against Cisco is a bold move, though, and I question Sun’s staying power in that sort of battle. Of course, with Cisco potentially distracted by the swirling rumors regarding the networking giant’s entry into the server market, now may be the best time to make this move.

It's really the last paragraph that is of interest to me, specifically the final sentence about distraction.  I think the "rumors" have pretty much been substantiated by the mainstream press, so let's assume "California" is going to happen.

Let's make a couple of things really, really clear:
  1. I don't know how anyone can think that Cisco is "distracted" by bringing to market the logical extension of virtualized infrastructure — the compute function — as anything other than a shrewd business decision to offer a complete end-to-end solution to customers.  I talked about it here in a blog post titled "Cisco Is NOT Getting Into the Server Business…" This is an Enterprise Architecture play, pure and simple.
  2. Honestly, if we're discussing commoditization, a server is a server is a server, whether it's in a blade form factor or not. The availability of OEM/ODM components (raw or otherwise) means Cisco doesn't have to start from scratch.  Oh yes, I know HP spent a bazillion dollars on C-Class fan engineering and IBM's BCHT is teh awesome and…
  3. The whole game is Unified Computing; bringing together enterprise class compute, network and storage as a solution with integrated virtualization, management and intelligence; you take the biggest pain point out of the equation — integration — and you drive down cost while increasing utility, agility and efficiency.
  4. If you look at what "California" is slated to deliver it's hard to see how Sun would compete: a blade-based chassis with integrated Nexus converged networking/storage, integrated virtualization from VMware (with Nexus/VN-Link,) and management from BMC.  You know, Enterprise stuff, not an integration hodgepodge.

So, I ask, does this look like a distraction to you? 

I'm not knocking Sun (or Scott to be clear,) but if I were they, I'd be much more worried about HP or IBM or even Microsoft and Redhat.

I'm grabbing my popcorn, but this battle might be over before the kernels (ha!) start popping.

/Hoff

If Virtualization is a Religion, Does That Make Cloud a Cult?

March 9th, 2009 No comments

I had just finished reading Virtual Gipsy's post titled "VMware as religion" when my RSS reader featured a referential post from VM/ETC's Rich titled "vTheology: the study of virtualization as religion."

While I appreciated the humor surrounding the topic, I try never to mix friends, politics, and religion,* so I'll not wade into the deep end on this one except to suggest what my title asks:

If virtualization is a religion, does that make cloud a cult?

If so, to whom do I send my tithes?  Who is the Cardinal of the Cloud?  The Pope of PaaS?  The Shaman of Service?

/Hoff

*…and truth be told, I'm not feeling particularly witty this morning.

I’m Sorry, But Did Someone Redefine “Open” and “Interoperable” and Not Tell Me?

February 26th, 2009 3 comments

I've got a problem with the escalation of VMware's marketing abuse of the terms "open," "interoperable," and "standards."  I'm a fan of VMware, but this is getting silly.


When a vendor like VMware crafts an architecture, creates a technology platform, defines an API, and gets providers to offer it as a service, all with the full knowledge that it REQUIRES their platform to really function, then calling it "open" and "interoperable" because an API exists is intellectually dishonest. Calling that a "standard" to imply it is available regardless of platform is about as transparent as saran wrap.


We are talking about philosophically and diametrically-opposed strategies between virtualization platform players here, not minor deltas along the bumpy roadmap highway.  What's at stake is fundamentally the success or failure of these companies.  Trying to convince the world that VMware, Microsoft, Citrix, etc. are going to huddle for a group hug is, well, insulting.

This recent article in the Register espousing VMware's strategy really highlighted some of these issues as it progressed. Here's the first bit which I agree with:

There is, they fervently say, no other enterprise server and data centre virtualisation play in town. Businesses wanting to virtualise their servers inside a virtualising data centre infrastructure have to dance according to VMware's tune. Microsoft's Hyper-V music isn't ready, they say, and open source virtualisation is lagging and doesn't have enterprise credibility.

Short of the hyperbole, I'd agree with most of that.  We can easily start a religious debate here, but let's not for now.  It gets smelly where the article starts talking about vCloud, which, given VMware's protectionist stance based on safe-harbor tactics, amounts to nothing more (still) than a vision.  None of the providers will talk about it because they are under NDA.  We don't really know what vCloud means yet:

Singing the vcloud API standard song is very astute. It reassures all people already on board and climbing on board the VMware bandwagon that VMware is open and not looking to lock them in. Even if Microsoft doesn't join in this standardisation effort with a whole heart, it doesn't matter so long as VMware gets enough critical mass.

How do you describe having to use VMware's platform and API as VMware "…not looking to lock them in?" Of course they are!  

To fully leverage the power of the InterCloud in this model, it really amounts to either an ALL VMware solution or settling for basic connectors for coarse-grained networked capability.

Unless you have feature-parity or true standardization at the hypervisor and management layers, it's really about interconnectivity not interoperability.  Let's be honest about this.

By having external cloud suppliers and internal cloud users believe that cloud federation through VMware's vCloud infrastructure is realistic then the two types of cloud user will bolster and reassure each other. They want it to happen and, if it does, then Hyper-V is locked out unless it plays by the VMware-driven and VMware partner-supported cloud standardisation rules, in which case Microsoft's cloud customers are open to competitive attack. It's unlikely to happen.

"Federation" in this context really only applies to lessening/evaporating the difference between public and private clouds, not clouds running on different platforms.  That's, um, "lock-in."


Standards are great, especially when they're yours. Now we're starting to play games.  VMware should basically just kick their competitors in the nuts and say this to us all:

"If you standardize on VMware, you get to leverage the knowledge, skills, and investment you've already made — regardless of whether you're talking public vs. private.  We will make our platforms, API's and capabilities as available as possible.  If the other vendors want to play, great.  If not, your choice as a customer will determine if that was a good decision for them or not."

Instead of dancing around trying to muscle Microsoft into playing nice (which they won't) or insulting our intelligence by handwaving that you're really interested in free love versus world domination, why don't you just call a spade a virtualized spade?

And by the way, if it weren't for Microsoft, we wouldn't have this virtualization landscape to begin with…not because of the technology contributions to virtualization, but rather because the inefficiencies of single app/OS/hardware affinity using Microsoft OS's DROVE the entire virtualization market in the first place!

Microsoft is no joke.  They will maneuver to outpace VMware. Hyper-V and Azure will be a significant threat to VMware in the long term, and this old Microsoft joke will come back to haunt VMware's abuse of the words above:

Q: How many Microsoft engineers does it take to change a lightbulb?  
A: None, they just declare darkness a standard.

Is it getting dimmer in here?


/Hoff

Internal v. External/Private v. Public/On-Premise v. Off-Premise: It's all Cloud But How You Get There Is Important.

February 24th, 2009 No comments

I've written about the really confusing notional definitions that seem to be hung up on where the computing actually happens when you say "Cloud:" in your datacenter or someone else's.  It's frustrating to see how people mush together "public, private, internal, external, on-premise, off-premise" to all mean the same thing.

They don't, or at least they shouldn't, at least not within the true context of Cloud Computing.

In the long run, despite all the attempts to clarify what we mean by defining "Cloud Computing" more specifically as it relates to compute location, we're going to continue to call it "Cloud."  It's a sad admission I'm trying to come to grips with.  So I'll jump on this bandwagon and take another approach.

Cloud Computing will simply become ubiquitous in its many forms and we are all going to end up with a hybrid model of Cloud adoption — a veritable mash-up of Cloud services spanning the entire gamut of offerings.  We already do today.

Here are a few non-exhaustive examples of what a reasonably-sized enterprise can expect from the move to a hybrid Cloud environment:
  1. If you're using one or more SaaS vendors who own the entire stack, you'll be using their publicly-exposed Cloud offerings.  They manage the whole kit-and-kaboodle, information and all. 
  2. SaaS and PaaS vendors will provide ways of integrating their offerings (some do today) with your "private" enterprise data stores and directory services for better integration and business intelligence.
  3. We'll see the simple evolution of hosting/colocation providers add dynamic scalability and utility billing and really push the Cloud mantra.  
  4. IaaS vendors will provide (ala GoGrid) ways of consolidating and reducing infrastructure footprints in your enterprise datacenters by way of securely interconnecting your private enterprise infrastructure with managed infrastructure in their datacenters. This model simply calls for the offloading of the heavy tin. Management options abound: you manage it, they manage it, you both do…
  5. Other IaaS players will continue to offer a compelling suite of soup-to-nuts services (ala Amazon) that, depending upon your needs and requirements, means you have very little (or no) infrastructure to speak of.  You may or may not be constrained by what you can or need to do as you trade off flexibility for conformity here.
  6. Virtualization platform providers will no longer make a distinction in terms of roadmap and product positioning between internal/external or public/private. What is enterprise virtualization today simply becomes "Cloud."  The same services, split along virtualization platform party lines, will become available regardless of location. 
  7. This means that vendors who today offer proprietary images and infrastructure will start to drive or be driven to integrate more open standards across their offerings in order to allow for portability, interoperability and inter-Cloud scalability…and to make sure you remain a customer.
  8. Even though the Cloud is supposed to abstract infrastructure from your concern as a customer, brand-associated moving parts will count; customers will look for pure-play vetted integration between the big players (networking, virtualization, storage) in order to fluidly move information and applications into and out of Cloud offerings seamlessly.
  9. The notion of storage is going to be turned on its head; the commodity of bit buckets isn't what storage means in the Cloud.  All the chewy goodness will start to bubble to the surface as value-adds come to light: DeDup, backup, metadata, search, convergence with networking, security…
  10. More client side computing will move to the cloud (remember, it doesn't matter whether it's internal or external) with thin client connectivity, while powerful smaller-footprint mobile platforms (smartphones/netbooks) with native virtualization layers will also accelerate in uptake.

Ultimately, what powers your Cloud providers WILL matter.  The platforms companies adopt internally for virtualization, networking, application delivery, security and storage as they consolidate and then automate will weigh heavily when they evaluate what powers their Cloud providers' infrastructure.

If a customer can take all the technology expertise and the organizational and operational practices they have honed while virtualizing their internal infrastructure (virtualization platform, compute, storage, networking, security) and seamlessly apply them as the next step in their move to the Cloud(s), it's a win.

The two biggest elements of a successful cloud: integration and management. Just like always.

I can't wait.

/Hoff

*Yes, we're concerned that if "stuff" is outside of our direct control, we'll not be able to "secure" it, but that isn't exactly a new concept, nor is it specific to Cloud — it's just the latest horse we're beating because we haven't made much headway in securing the things that matter most in the ways most effective for doing so.

Virtualization & Security: Disruptive Technologies – A Four Part Video Miniseries…

February 24th, 2009 No comments
About nine months ago, Dino Dai Zovi, Rich Mogull and I sat down for about an hour as Dennis Fisher from TechTarget interviewed us in a panel style regarding the topic of virtualization and security.  It has just been released now.

Considering it was almost a lifetime ago in Internet time, almost all of the content is still fresh and the prognostication is pretty well dead on.

Enjoy:

Part 1: The Greatest Threats to Virtualized Environments

Part 2: The Security Benefits of Virtualization

Part 3: The Organizational Challenges of Virtualization

Part 4: Virtualization and Security Vendors

/Hoff

P.S. The camera adds like 40 pounds, really 😉

Incomplete Thought: Separating Virtualization From Cloud?

February 18th, 2009 18 comments

I was referenced in a recent CSO article titled "Four Questions On Google App Security." I wasn't interviewed for the story directly; Bill Brenner simply referenced our prior interviews and my skepticism about virtualization security and Cloud security as a discussion point.

Google's response was interesting and a little tricky given how they immediately set about driving a wedge between virtualization and Cloud.  I think I understand why, but if the article featured someone like Amazon, I'm not convinced it would go the same way…

As I understand it, Google doesn't really leverage much virtualization (in the classical compute/hypervisor sense) for its "cloud" offerings as compared to Amazon. That may be due in large part to differences in model and classification: Amazon AWS is an IaaS play while GoogleApps is a SaaS offering.

You can see why I made the abstraction layer in the cloud taxonomy/ontology model "optional."

This post dovetails nicely with Lori MacVittie's article today titled "Dynamic Infrastructure: The Cloud Within the Cloud" wherein she highlights how the obfuscation of infrastructure isn't always a good thing. Given my role, what's in that cloudy bubble *does* matter.

So here's my incomplete thought — a question, really:

How many of you assume that virtualization is an integral part of cloud computing? From your perspective do you assume one includes the other?  Should you care?

Yes, it's intentionally vague.  Have at it.

/Hoff

Old MacDonald Had a (Virtual Server) Farm, I/O, I/O, Oh!

February 13th, 2009 4 comments

It's all about the I/O and your ability to shuffle packets…or see them in the first place…

In reading Neil MacDonald's first post under the Gartner-branded blog, titled "Virtualization Security Is Transformational — If the legacy Security Vendors Would Stop Fighting It," I find myself nodding in violent agreement whilst also shaking my head in bewilderment.  Perhaps I missed the point, but I'm really confused.

Neil sets the stage by suggesting that "established" security vendors who offer solutions for non-virtualized environments simply "…don't get it" when it comes to realizing the shortcomings of their existing solutions in virtualized contexts and that they are "fighting" the encroachment of virtualization on their appliance sales:

Many are clinging to business models based on their overpriced hardware-based solutions and not offering virtualized versions of their solutions. They are afraid of the inevitable disruption (and potential cannibalization) that virtualization will create. However, you and I have real virtualization security needs today and smaller innovative startups have rushed in to fill the gap. And, yes, there are pricing discontinuities. A firewall appliance that costs $25,000 in a physical form can cost $2500 or less in a virtual form from startups like Altor Networks or Reflex Systems.

I'm very interested in which "established" vendors are supposedly clinging to their overpriced hardware-based solutions and avoiding virtualization besides niche players in niche markets that are hardware-bound.  

As far as I can tell, the top five security vendors by revenue (those that sell hardware, not just software) are all actively supporting these environments, within the limitations of today's virtualization platforms, and are very much investing in the development of new solutions built for the unique requirements of virtual environments.


Neil is really comparing apples to muffler brackets.  He points out in his blog that physical appliances can offer multi-gigabit performance whereas software-based VA's cannot, and yet we're surprised that pricing differentials in orders of magnitude exist?  You get what you pay for.


As I pointed out in my Four Horsemen presentation (and is alluded to in the remainder of Neil's post below) EVERY SINGLE VENDOR is currently hamstrung by the same level of integration and architectural limitations involved with the current state of virtual appliance performance in the security space, including those he mentions such as Altor and Reflex.  They are all in a holding pattern.  I've written about that numerous times.

In fact, as I mentioned in my post titled "Visualization Through Virtualization", the majority of these new-fangled, virtualization-specific "security" tools are actually (now) more focused on visibility, management and change monitoring/control than they are pure network-level security, because they cannot compete from a performance and scalability perspective with hardware-based solutions.

Here's where I do agree with Neil, based upon what I mention above: 

Feature-wise, the security protection services delivered are similar. But, there is a key difference — throughput. What the legacy security vendors forget is that there is still a role for dedicated hardware. There is no way you are going to get full multi-gigabit line speed deep-packet inspection and protocol decode for intrusion prevention from a virtual appliance. A next-generation data center will need both physical and virtualized security controls — ideally, from a vendor that can provide both. I’ll argue that the move to virtualize security controls will grow the overall use of security controls. 

So this actually explains the disparity in both approach and pricing that he alluded to above.  How does this represent vendors "fighting" virtualization?  I see it as hanging on for as long as possible to preserve and milk their investment in the physical appliances Neil says we'll still need while they perform the R&D on their virtualized versions.  They can't deploy the new solutions until the platform to support them exists!

The move to virtualize security controls reduces barriers to adoption. Rather than sprinkle a few physical appliances here and there based on network topology, we can now place controls when and where they are needed, including physical appliances as appropriate. In fact, the legacy vendors have a distinct advantage over virtualization security startups since you prefer a security solution that spans both your physical and virtual environments with consistent management.

Exactly.  So again, how is this "fighting" virtualization?  


Here's where we ignore reality again:

Over the past six months, I’ve seen signs of life from the legacy physical security vendors. However, some of the legacy physical security vendors have simply taken the code from their physical appliance and moved it into a virtual machine. This is like wrapping a green-screen terminal application with a web front end — it looks better, but the guts haven’t changed. In a data center where workloads move dynamically between physical servers and between data centers, it makes no sense to link security policy to static attributes such as TCP/IP addresses, MAC addresses or servers. 

First of all, what we're really talking about in the enterprise space is VMware, since given its market dominance, this is where the sweet spot is for security vendors.  This will change over time, but for now, it's VMware.


That being the case, the moment VMsafe was announced/hinted at two years ago, 20+ security vendors — big and small — have been diligently working within the constructs of what is made available from VMware to re-engineer their products to take advantage of the API's that will be coming in VMware's upcoming release.  This is no small feat.  Distributed virtual switching and the two-tier driver architecture with DVfilters means re-engineering your products and approach.
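To illustrate the kind of two-tier split mentioned above, here is a hypothetical Python sketch of a fast-path/slow-path packet filter. It does not use the real VMsafe/DVfilter API; every name and policy here is invented for illustration:

```python
# Hypothetical sketch of a two-tier fast-path/slow-path packet filter,
# loosely inspired by the DVfilter-style model described in the text.
# None of these names correspond to any real VMsafe API.

BLOCKED_PORTS = {23, 135, 445}  # example policy: block telnet, RPC, SMB

def fast_path(packet: dict) -> str:
    """Cheap per-packet decision made in the data path."""
    if packet.get("dst_port") in BLOCKED_PORTS:
        return "drop"
    if packet.get("flags") == "unusual":
        return "punt"  # defer to the slow path for deep inspection
    return "allow"

def slow_path(packet: dict) -> str:
    """Expensive deep-packet inspection, run only on punted packets."""
    payload = packet.get("payload", b"")
    return "drop" if b"exploit" in payload else "allow"

def filter_packet(packet: dict) -> str:
    verdict = fast_path(packet)
    return slow_path(packet) if verdict == "punt" else verdict
```

The design point is the split itself: the fast path must stay cheap enough to sit in the data path, which is exactly why retrofitting an existing appliance's code into this model means re-engineering, not just repackaging.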

Until VMware's next platform is released, every security vendor — big or small — is hamstrung by having to do exactly what Neil says: creating a software instantiation of their hardware products, which is integration-limited for the reasons I've already stated.  What should vendors do?  Firesale their inventories and wait it out?  

I ask again: how is this "fighting" virtualization?

The reason there hasn't been a lot of movement is because the entire industry is in a holding pattern. Pretending otherwise is absolutely ridiculous.  The obvious exception is Cisco which has invested in and developed substantial solutions such as the Nexus 1000v and VN-Link (which is again awaiting the availability of VMware's next release.)

Security policy in a virtualized environment must be tied to logical identities – like identities of VM workloads, identities of application flows and identities of users. When VMs move, policies need to move. This requires more than a mere port of an existing solution, it requires a new mindset.

Yep.  And most of them are adapting their products as best they can.  Many companies will follow the natural path of consolidation and wait to buy a startup in this space and integrate it…much like VMware did with BlueLane, for example.  Others will look to underlying enablers such as Cisco's VN-Link/Nexus 1000v and chose to integrate at the virtual networking layer there and/or in coordination with VMsafe.
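To make the distinction concrete, here's a minimal sketch (purely hypothetical — not a real VMsafe, Nexus 1000v, or vendor API) contrasting a firewall rule keyed to static network attributes with one keyed to a workload's logical identity. When a VM is vMotioned and re-addressed, the static rule is orphaned while the identity-based rule follows the workload:

```python
# Hypothetical illustration: policy bound to static attributes vs. logical identity.
# All names (VMWorkload, "erp-db-tier", etc.) are invented for this example.
from dataclasses import dataclass

@dataclass
class VMWorkload:
    identity: str   # stable logical identity of the workload
    ip: str         # static attribute: changes on re-deployment
    mac: str        # static attribute: changes with the vNIC
    host: str       # changes on every vMotion

# Rule keyed to IP/MAC: breaks as soon as the VM moves or is re-addressed
static_policy = {("10.0.1.5", "00:50:56:aa:bb:cc"): "allow-sql-from-app-tier"}

# Rule keyed to logical identity: follows the workload wherever it runs
identity_policy = {"erp-db-tier": "allow-sql-from-app-tier"}

def lookup_static(vm: VMWorkload):
    return static_policy.get((vm.ip, vm.mac))

def lookup_identity(vm: VMWorkload):
    return identity_policy.get(vm.identity)

vm = VMWorkload("erp-db-tier", "10.0.1.5", "00:50:56:aa:bb:cc", "esx-01")
assert lookup_static(vm) == "allow-sql-from-app-tier"
assert lookup_identity(vm) == "allow-sql-from-app-tier"

# Simulate a live migration plus re-addressing
vm.ip, vm.mac, vm.host = "10.0.2.9", "00:50:56:dd:ee:ff", "esx-02"
assert lookup_static(vm) is None                          # static rule orphaned
assert lookup_identity(vm) == "allow-sql-from-app-tier"   # identity rule still holds
```

This is exactly the "new mindset" Neil describes: the policy anchor has to move from the network plumbing to the workload itself.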

The legacy vendors need to wake up. If they don’t offer robust virtualization security capabilities (and, yes, potentially cannibalize the sales of some of their hardware), another vendor will. With virtualization projects on the top of the list of IT initiatives for 2009, we can’t continue to limp along without protection. It’s time to vote with our wallets and make support of virtual environments a mandatory part of our security product evaluation and selection.

Absolutely!  And every vendor — big and small — that I've spoken to is absolutely keen on this concept and is actively engaged in developing solutions for these environments with these unique requirements in mind. Keep in mind that VMsafe is about more than just network visibility via the VMM; it also includes disk, memory and CPU.  Most network-based appliances have never had this sort of access before (since they are NETWORK appliances), so OF COURSE products will have to be re-tooled.


Overall, I'm very confused by Neil's post as it seems quite contradictory and at odds with what I've personally been briefed on by vendors in the space and overlooks the huge left turns being made by vendors over the last 18 months who have been patiently waiting for VMsafe and other introspection capabilities of the underlying platforms.

I think the windshield needs cleaning on the combine harvester…

/Hoff

Categories: Virtualization Tags:

PCI Security Standards Council to Form Virtualization SIG…

January 24th, 2009 1 comment

I'm happy to say that there appears to be some good news on the PCI DSS front with the promise of a SIG being formed this year for virtualization.  This is a good thing. 

You'll remember my calls for better guidance for both virtualization and ultimately cloud computing from the council given the proliferation of these technologies and the impact they will have on both security and compliance.

In that light, news comes from Troy Leach, technical director of the PCI Security Standards Council via a kind note to me from Michael Hoesing:

A PCI SSC Special Interest Group (SIG) for virtualization is most likely coming this year but we don't have any firm dates or objectives as of yet.  We will be soliciting feedback from our Participating Organizations which is comprised of more than 500 companies (which include Vmware, Microsoft, Dell, etc) as well as industry subject matter experts such as the 1,800+ security assessors that currently perform assessments as either a Qualified Security Assessor or Approved Scanning Vendor (ASV).

The PCI SSC Participating Organization program allows industry stakeholders an opportunity to provide feedback on all standards and supporting procedures.  Information to join as a Participating Organization can be found here on our website.

This is a good first step.  If you've got input, make sure to contribute!

/Hoff

Categories: Compliance, PCI, Virtualization, VMware Tags: