Archive for the ‘Virtualization’ Category

Clarification from Catbird’s CTO on HypervisorShield…

February 19th, 2008

Last week I posted about a press release announcing a new product from Catbird called HypervisorShield.

I was having difficulty understanding some of the points raised in the press release/product brief, so I reached out to Michael Berman, Catbird’s CTO (also a blogger), for a little clarification.

Michael was kind enough to respond to the points in my blog posting.

Rather than repost the entire blog entry, I have paraphrased the points Michael responded to and left his comments intact.  I think some of them invite further clarification and I’ll be following up with a Take5 interview shortly.  Some of the answers just beg for a little more digging…

Just to ground us all, here’s the skinny on HypervisorShield:

Catbird, provider of the only comprehensive security solution for virtual and physical networks, and developer of the V-Agent™ virtual appliance, today announced the launch of HypervisorShield™, the industry’s first dedicated comprehensive security solution specifically designed to guard against unauthorized hypervisor network access and attack.

Here are my points and Michael’s responses:

  1. Hoff: The press release speaks to HypervisorShield’s ability to protect both the hypervisor and the "hypervisor management network," which I assume actually refers to the virtual interface of management functions like VMware’s service console? Are we talking about protecting the service console or the network functions provided by the vKernel?

    Berman: We’ve built a monitor function that uses VMware APIs to watch for changes to/management of the virtual machines. We also have signature templates and customizable policies for network connections to the service console and the host.
     

  2. Hoff: The press release makes it sound like protecting the hypervisor is accomplished via an IPS function that isolates VM’s from one another like Reflex and Blue Lane?

    Berman: With all due respect to our colleagues in this space, intrusion detection and protection is one element.  Catbird combines several technologies to extend separation of duties, dual control and strict change control to the virtual infrastructure. Deploying a signature for VMSA-2008-0001 is nice, but detection or prevention of a rogue virtual center administrator from pulling off a Societe Generale hack is priceless.
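Berman’s "dual control" point can be made concrete with a toy sketch (my own illustration, not Catbird’s implementation): a change to the virtual infrastructure is rejected unless someone other than the requester has approved it, which is exactly the control a rogue virtual center administrator would trip over.

```python
# Illustrative dual-control gate for virtual-infrastructure changes.
# This is a hypothetical sketch, NOT Catbird's code: a change is only
# allowed when its approver is a different person than its requester.

def change_allowed(change):
    """Return True only if the change has an approver distinct from
    its requester (dual control / separation of duties)."""
    requester = change.get("requester")
    approver = change.get("approver")
    return approver is not None and approver != requester

# A rogue admin approving their own change is rejected:
rogue = {"action": "clone_vm", "requester": "admin1", "approver": "admin1"}
legit = {"action": "clone_vm", "requester": "admin1", "approver": "admin2"}
```

The point of the sketch is only that the check is procedural, not signature-based: no IPS rule is involved, just an invariant over who requested and who approved.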
     

  3. Hoff: What exactly does Catbird do (in partnering with IPS companies like SourceFire) that folks like Reflex and Blue Lane don’t already do?

    Berman: Rather than talk about the differences, let’s talk about the most important similarity.  I think I speak for all of us when I say that it’s like we are in a time warp to 1996 and I am explaining why you need a firewall for your DMZ.  Customers have little appreciation for the magnitude of the threats facing their virtual infrastructure.  Once we get past that, then we can talk about why Catbird is the best. (hint: we’re smarter, faster and stronger)
     

  4. Hoff: How do you monitor the Hypervisor?

    Berman: We deploy a virtual machine that hooks into the vSwitch environment and that also monitors the ESX hypervisor via the VI API.
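As a rough, self-contained sketch of the monitoring idea (the `diff_inventory` helper and the data shapes are my invention; a real monitor would pull its inventory via VMware’s VI API rather than from literals), change detection reduces to diffing periodic snapshots of the VM inventory:

```python
# Hypothetical sketch of VM change detection: take periodic snapshots
# of the inventory (which a real monitor would fetch via the VMware
# VI API) and diff them to flag additions, removals, and edits.

def diff_inventory(old, new):
    """Compare two {vm_name: config} snapshots and report changes."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    modified = sorted(vm for vm in set(old) & set(new) if old[vm] != new[vm])
    return {"added": added, "removed": removed, "modified": modified}

before = {"web01": {"nics": 1}, "db01": {"nics": 2}}
after_ = {"web01": {"nics": 2}, "app01": {"nics": 1}}
changes = diff_inventory(before, after_)
```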
     

  5. Hoff: You say in the press release that "hypervisor exploits have grown 35% in the last several years."  Which hypervisor exploits, exactly? You mean exploits against the big, fat, Linux-based service console from VMware? That’s not the hypervisor!

    Berman: I believe that the real threat to the virtual infrastructure comes from the collapse of separation of duties and the breakdown in implicit and explicit security controls within the virtual data center.  That being said, the hypervisor management application is probably the most significant area of the attack surface. If I can own the management GUI I own the hypervisor. If I can pull a stack smash against the ESX web server I own the hypervisor. If some poor shlemozzle configures Samba and NFS for the storage network then they become part of the attack surface too. You can blame us for some hyperbole, but the stat came from the CVE database. Gartner/451/Edison report that virtual infrastructure (VI) is less secure than physical and we have private data that shows people are deploying VI with no network security at all – this is just wrong.  I also think that writing about, or writing off the only risk as being some sort of red pill/blue pill hack is also wrong.

Thanks again to Michael for responding.  Look for a follow-on Take5 shortly to dig a little deeper.

/Hoff

Categories: Virtualization, VMware Tags:

Virtualization Hits the Mainstream…

February 13th, 2008

[Dilbert comic]

Sad, but true…

Categories: Virtualization Tags:

Catbird Says It Has a Better Virtualization Security Mousetrap – “Dedicated Hypervisor Security Solution”

February 13th, 2008
I spent quite a bit of time in the Catbird booth at VMworld, initially lured by their rather daring advertising campaign of "running naked."  I came away intrigued by the Security SaaS-like business model provided by their V-Agent offering and saw that as the primary differentiator.

I was particularly interested today when I read the latest press release from Catbird, which suggests that their new "HypervisorShield" is specifically designed to secure the hypervisor from network access and attack:


Catbird, provider of the only comprehensive security solution for virtual and physical networks, and developer of the V-Agent virtual appliance, today announced the launch of HypervisorShield, the industry’s first dedicated comprehensive security solution specifically designed to guard against unauthorized hypervisor network access and attack.

The paragraph above seems to be talking about protecting the "hypervisor" itself from network-borne compromise which is very interesting to me for reasons that should be obvious at this point. 

However, the following paragraph seems to refer to the "hypervisor management network," which I assume actually refers to the virtual interface of management functions like VMware’s service console? Are we talking about protecting the service console or the network functions provided by the vKernel?

HypervisorShield, the latest service in Catbird’s V-Security product, extends best practice security protection to virtualization’s critical hypervisor layer, thwarting both inadvertent management error and malicious threats. Delivering continuous, automated 24×7 monitoring focused on the precise vulnerabilities, known attack signatures and guest machine access of the hypervisor management network, HypervisorShield is the only service to proactively secure this essential component of a virtualization deployment.

Here’s where it gets a little more confusing because the wording seems again to suggest they are protecting the hypervisor itself — or do they mean the virtual switch as a component of the Hypervisor?:

HypervisorShield is the first virtualized security technology which can monitor and control access to the hypervisor network, detect malicious network activity directed at the hypervisor from virtual machines and validate that the hypervisor network is configured according to best practices and site security policy.

…sounds like an IPS function that isolates VM’s from one another like Reflex and Blue Lane? 

OK, but here’s where it gets really interesting.  Catbird is suggesting that they are able to "…see inside the hypervisor" which implies they have hooks and exposure to elements within the hypervisor itself versus the vSwitch plumbing that everyone has access to.

Via the groundbreaking Catbird V-Agent virtual appliance, protection is delivered within the virtual network itself. By contrast, traditional security solutions retrofitted for virtual deployments cannot see inside the hypervisor. Monitoring from the inside yields significantly more effective coverage and eliminates the need to reroute traffic onto the physical network for validation. As an example of the benefits of running right on the virtual subnet, HypervisorShield’s exclusive network access control (NAC) will instantly quarantine unauthorized devices on the management network.

They do talk about NAC from the VM perspective, which is something I’ve been advocating.

From Catbird’s website we see some more detail regarding HypervisorShield which again introduces an interesting assertion:

How do you monitor the Hypervisor?

Securing a virtual host does not only involve applying the same security controls to virtual networks as were applied to their physical counterparts. Virtualization introduces a new layer of abstraction entirely—the Hypervisor. Hypervisor exploits have grown 35% in the last several years, with more surely on their way. Catbird’s patent-pending HypervisorShield protects and defends this essential component of a virtual deployment.

Really?  Hypervisor exploits have grown 35% in the last several years?  Which hypervisor exploits, exactly?  You mean exploits against the big, fat, Linux-based service console from VMware?  That’s not the hypervisor!

I’m trying to give Catbird the benefit of the doubt here, but this is confusing as heck as to what exactly Catbird does (in partnering with companies like SourceFire) that folks like Reflex and Blue Lane don’t already do.

If anyone, especially Catbird, has some clarification for me, I’d be mighty appreciative.

/Hoff


Categories: Virtualization Tags:

Off The Cuff Review: Nemertes Research’s “Virtualization Risk Analysis”

February 12th, 2008

I just finished reading a research paper from Andreas Antonopoulos of Nemertes titled "A risk analysis of large-scaled and dynamic virtual server environments."  You can find the piece here:

Executive Summary

As virtualization has gained acceptance in corporate data centers, security has gone from afterthought to serious concern. Much of the focus has been on the technologies of virtualization rather than the operational, organizational and economic context. This comprehensive risk analysis examines the areas of risk in deployments of virtualized infrastructures and provides recommendations.

I was interested by two things immediately:

  1. While I completely agree that, in regards to virtualization and security, the focus has been about the "…technologies of virtualization rather than the operational, organizational and economic context," I’m not convinced there is an overwhelming consensus that "…security has gone from afterthought to serious concern," mostly because we’re just now getting to see "large-scaled and dynamic virtual server environments."  It’s still painted on, not baked in.  At least that’s how people react at my talks.
     
  2. Virtualization is about so much more than just servers. To truly paint a picture of analyzing risk within "large-scaled and dynamic virtual server environments," one has to recognize that much of the complexity and many of the issues associated specifically with security stem from the operational and organizational elements of virtualizing storage, networking, applications, policies and data, as well as from the wholesale shift in operationalizing security and who owns it within these environments.

I’ve excerpted the most relevant element of the issue Nemertes wanted to discuss:

With all the hype surrounding server virtualization come the inevitable security concerns: are virtual servers less secure? Are we introducing higher risk into the data center? For server virtualization to deliver benefits we have to examine the security risks. As with any new technology there is much uncertainty mixed in with promise. Part of the uncertainty arises because most companies do not have a good understanding of the real risks surrounding virtualization.

I’m easily confused…

While I feel the paper does a good job of describing the various stages of deployment and many of the "concerns" associated with server virtualization within these contexts, I’m left unsatisfied that I’m any more prepared to assess and manage risk regarding server virtualization.  I’m concerned that the term "risk" is being spread about rather liberally because there is the presence of a bit of math.

The formulaic "Virtualization Risk Assessment" section is suggested to establish a quantitative basis for computing "relative risk" in the assessment summary.  However, since the variables introduced in the formulae are subjective and specific per asset, it’s odd that the summary table is then presented generically, as if to describe all assets:

Scenario                                | Vulnerability | Impact | Probability of Attack | Overall Risk
Single virtual server (hypervisor risk) | Low           | High   | Low                   | Low/Medium
Basic services virtualized              | Low           | High   | Medium                | Medium
Production applications virtualized     | Medium        | High   | High                  | Medium/High
Complete virtualization                 | High          | High   | High                  | High
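For illustration only, here is one way to reproduce the table’s bands numerically; the `SCALE` mapping and the thresholds are my own construction, not Nemertes’ actual formulae, which the paper keeps subjective and per-asset:

```python
# Toy rendering of the summary table's logic (my own construction,
# not Nemertes' formulae): map the ordinal ratings to numbers,
# multiply them, and band the product back into the table's labels.

SCALE = {"Low": 1, "Medium": 2, "High": 3}

def overall_risk(vulnerability, impact, probability):
    """Band a vulnerability/impact/probability triple into a risk label."""
    score = SCALE[vulnerability] * SCALE[impact] * SCALE[probability]
    if score <= 3:
        return "Low/Medium"
    if score <= 9:
        return "Medium"
    if score <= 18:
        return "Medium/High"
    return "High"

# e.g. the "Production applications virtualized" row:
row = overall_risk("Medium", "High", "High")
```

That such an arbitrary mapping reproduces the table is exactly the concern: the numbers dress up what is still an ordinal judgment call.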

I’m trying to follow this and then get smacked about by this statement, which explains why people just continue to meander along applying the same security strategies toward virtualized servers as they do in conventional environments:

This conclusion might appear to be pessimistic at first glance. However, note that we are comparing various stages of deployment of virtual servers. A large deployment of physical servers will suffer from many of the same challenges that the “Complete Virtualization” environment suffers from.

Furthermore, it’s unclear to me how to factor in compensating controls into this rendering given what follows:

What is new here is that there are fewer solutions for providing virtual security than there are for providing physical security with firewalls and intrusion prevention appliances in the network. On the other hand, the cost of implementing virtualized security can be significantly lower than the cost of dedicated hardware appliances, just like the cost of managing a virtual server is lower than a physical server.

The security solutions available today are limited by how much integration exists with the virtualization platforms.  We’ve yet to see the VMM’s/Hypervisors opened up to allow true low-level integration and topology-sensitive security interaction with flow classification, provisioning, and disposition.

Almost all supposed "virtualization-ready" security solutions today are nothing more than virtual appliance versions of existing solutions or simply the same host-based solutions which run in the VM and manage not to cock it up.  Folding your management piece into something like VMware’s VirtualCenter doesn’t count.

In general, I simply disagree that the costs of implementing virtualized security (today) can be significantly lower than the cost of dedicated hardware appliances — not if you’re expecting the same levels of security you get in the conventional, non-virtualized world.

The reasons (as I give in my VirtSec presentations):  Loss of visibility, constraint of the virtual networking configurations, coverage, load on the hosts, licensing.  All really important.

Cutting to the Chase

I’m left waiting for the punchline, much like I was with Burton’s "Immutable Laws of Virtualization," and I think the reason why is that despite these formulae, the somewhat shallow definition of risk seems to still come down to nothing more than reasonably-informed speculation or subjective perception:

So, in the above risk analysis, one must also consider that the benefits in virtualization far outweigh the risks.

The question is not so much whether companies should proceed with virtualization – the market is already answering that resoundingly in the affirmative. The question is how to do that while minimizing the risk inherent in such a strategy.

These few sentences above seem to almost obviate the need for risk analysis at all and suggest that for most, security is still an afterthought.  High risk or not, the show must go on?

So given the fact that virtualization is happening at breakneck pace, we have few good security solutions available, we speak of risk "relatively," and that operationally the entire role and duty of "security" within virtualized environments is now shifting, how do we end up with this next statement?

In the long run, virtualized security solutions will not only help mitigate the risk of broadly deployed infrastructure virtualization, but will also provide new and innovative approaches to information security that is in itself virtual. The dynamic, flexible and portable nature of virtual servers is already leading to a new generation of dynamic, flexible and portable security solutions.

I like the awareness Andreas tries to bring in this paper, but I fear that I am not left with any new information or tools for assessing risk (let alone quantifying it) in a virtual environment. 

So what do I do?!  I still have no answer to the main points of this paper: "With all the hype surrounding server virtualization come the inevitable security concerns: are virtual servers less secure? Are we introducing higher risk into the data center?"

Well?  Are they?  Am I?

/Hoff

The Best Defense is Often, Well, The Best Defense…

February 6th, 2008

As it goes in football, so it goes in life…

I delivered the closing presentation of the InfoWorld Executive Virtualization Forum in San Francisco on Monday.  The title of my presentation, which I will upload soon, was "Addressing Security Concerns in Virtual Environments."

The conference was a good mix of panels and presentations giving some excellent perspective to senior-level managers and executives on virtualization and its impact.

The night before was obviously the Super Bowl and InfoWorld hosted a get-together complete with beer, snacks and a big screen for us to watch the Big Game.  Most of the InfoWorld staff are out of the MA area, so except for a few Giants fans, it was a room packed with Pats fanatics. 

Ultimately, sad, depressed, and shocked Pats fanatics…

So the next day after having to listen to the fantastic keynote from David Reilly, Head of Technology Infrastructure Services, Credit Suisse — an Irishman who grew up in England and now lives in New York — bleat on about "his beloved Giants," I thought it only appropriate that I take one last stab at regaining my pride.

So, when it was my turn to speak, I slipped a borrowed Randy Moss jersey over my silk shirt and took the stage to stares of bewilderment and confusion.

I explained my costume and expressed my disappointment with the team’s performance in one fell swoop:

You may be wondering why I’m up here presenting in my beloved Patriots uniform.  Well, this *is* a security presentation, so I thought I could give you no more spectacular illustration of what happens when you fail to execute on a defensive strategy than this (pointing to the jersey).

Further, I find it completely amusing and apropos to be standing here in a virtualization conference talking about security *last* in the order of things because that’s exactly the problem I want to talk about…

The crowd seemed to enjoy those couple of opening shots and the rest went quite well — I try to make stabs at involving the audience.  I always gauge the success of a show by how many people come up and talk to me at the podium and afterwards.  By all accounts, it rocked since I spent the next 45 minutes talking to the 30+ folks that engaged me between the podium and the beer stand.

Adrian Lane was kind enough to blog about my performance here…

I very much enjoyed the conversation that ensued with some really interesting people.

Looking forward to the next one in NY in the November timeframe.

Hope to see you there.

/Hoff

Process Control Systems (SCADA and the like) & Virtualization

January 31st, 2008

Just when you get out, they pull you back in…

Jason Holcomb over at the Digital Bond blog posted something that attracted my attention.  He popped up an innocuous entry titled "Virtualization in the SCADA World: Part 1" which intrigued me for reasons that should be obvious to anyone who reads my steaming pile of blogginess with any sort of regularity.

It would be easy to knee-jerk and simply roll my eyes, suggesting that adding virtualization to the "security by obscurity" approach we’ve seen argued recently is just inviting disaster (literally), but I’m trying to be rational about this.  I want to understand the other camp’s position and learn from it, hopefully contributing to breaking down a wall or two…

Jason sets it up:

A few years back, the traditional IT world was debating the merits of virtualization. There were concerns about performance, security, vendor support, and a host of other issues. Fast-forward to today, however, and you’ll find virtual machines in use in nearly every data center.

I think it’s fair to say that while most folks would be hard pressed to dispute the merits of virtualization, the concerns regarding "…performance, security, vendor support, and a host of other issues" are hardly resolved.  In fact, they are escalating.

<snip>

So what are the implications of this in the SCADA world? I think it’s just a matter of time before we see more widespread acceptance of VMware and other virtualization platforms in production control systems. The benefit here may be less about cost savings, though, and more about increased functionality. The ability to snapshot and clone machines for backup and testing, for example, is very attractive.

I think the paragraph above is extremely telling because it’s really focused on debating the value proposition, which is really a foregone conclusion for all the reasons Jason mentions.   The real meat will hopefully come in the follow-ons:

We’re going to examine this subject over a series of blog posts. Hopefully we’ll cover all the major topics – security, reliability, performance, serial communication issues, vendor support, and adoption rate, to name a few.

I look forward to your comments and opinions.

In my first comment to Jason’s posting, I alluded to a whole host of virtualization-related issues which are grounded in practice and not hype, and asked, since SCADA security is billed as being SO much different than "IT security," what this intersection will bring and how one might assess risk (and against what).

Further, given various C&A standards, I’m interested in how one might approach (depending upon industry) holding these systems up to a C&A process once virtualization is added to the mix. 

It will be an interesting discussion, methinks.

/Hoff

 
Categories: Virtualization Tags:

I/O Virtualization: The Battle for the Datacenter OS and What This Means to Security

January 28th, 2008

One of the very profound impacts virtualization will have on security is the resultant collateral damage caused by what I call the "battle for the datacenter OS" framed by vendors who would ordinarily not be thought of as "OS vendors." 

I call the main players in this space the "Three Kings:" Cisco, VMware and EMC.
Microsoft is in there also, but that’s a topic for another post as I bifurcate operating system vendors in the classical sense from datacenter infrastructure platforms.  Google deserves a nod, too.

The "Datacenter OS" I am speaking of is the abstracted amalgam of virtualization and converged networking/storage that delivers the connected and pooled resource equivalent of the utility power grid.   Nick Carr reflects in his book "The Big Switch":

“A hundred years ago, companies stopped producing their own power with steam engines and generators and plugged into the newly built electric grid.”

The "datacenter" and its underlying "operating system," in whatever abstracted form they will manifest themselves, will become this service layer delivery "grid" to which all things will connect; services will be merely combinations of resources and capabilities which are provisioned dynamically.

We see this starting to take form with the innovation driven by virtualization, the driving forces of convergence, the re-emergence of grid computing, the architectures afforded by mash-ups and the movements and investments of the Three Kings in all of these areas.

It’s pretty clear that these three vendors are actively responsible for shaping the future of computing as we know it.  However, it’s not at all clear to me how much of the strategic overlap between them is accidental or planned, but they’re all approaching the definition of how our virtualized computing experience will unfold in very similar ways, albeit from slightly different perspectives.

One of the really interesting examples of this is how virtualization and convergence are colliding to produce the new model of the datacenter which blurs the lines between computing, networking and storage.

Specifically, the industry — as driven by customers — is trending toward the following:

  • Upgrading from servers to blades
  • Moving from hosts and switches to clusters and fabrics
  • Evolving from hardware/software affinity to grid/utility computing
  • Transitioning from infrastructure to service layers in “the cloud”

The topic of this post is really about the second bullet, moving from the notion of the classical hosts/servers plugging into separate network and storage switches to instead clusters of resources connecting to fabric(s) such that what we end up with are pools of resources to be provisioned, allocated and dispatched where, when and how needed. 

This is where I/O virtualization enters the picture.  I/O virtualization at the macro level of the datacenter describes the technology which enables the transition from the discrete and directly-connected model of storage versus networking to a converged virtualized model wherein network and storage resources are aggregated into a single connection to the "fabric."

Instead of having separate Ethernet, fiber channel and Infiniband connections, you’d have a single pipe connected to a "virtual connectivity switch" that provides on-demand, dynamic and virtualized allocation of resources to anything connected to the fabric.  The notion of physical affinity from the server/host’s perspective goes away.
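A toy model of that idea (entirely illustrative; `VirtualFabric` is a made-up class, not any vendor’s API): virtual NICs and HBAs are carved out of one shared physical link, with the fabric tracking remaining capacity and refusing to oversubscribe it.

```python
# Toy model of the "single pipe to the fabric" idea: a virtual
# connectivity switch hands out virtual NICs and HBAs from one shared
# physical link. Hypothetical illustration, not a real product's API.

class VirtualFabric:
    def __init__(self, link_gbps):
        self.free_gbps = link_gbps   # remaining capacity on the one pipe
        self.devices = []            # (kind, gbps) allocations

    def allocate(self, kind, gbps):
        """Carve a virtual device (e.g. 'vNIC', 'vHBA') out of the link."""
        if gbps > self.free_gbps:
            raise ValueError("fabric link oversubscribed")
        self.free_gbps -= gbps
        self.devices.append((kind, gbps))
        return kind, gbps

fabric = VirtualFabric(link_gbps=10)
fabric.allocate("vNIC", 4)   # Ethernet traffic
fabric.allocate("vHBA", 4)   # storage traffic
```

The physical-affinity point falls out of the model: the server sees only the virtual devices, never the underlying link they share.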

Andy Dornan from Information Week just did a nice write-up titled "Cisco pitches virtual switches for next-gen data centers."

It’s obviously focused on Cisco’s Nexus 7000 Series switch, but also gives some coverage of Brocade’s DCX Backbone, Xsigo’s Director and 3Leaf’s v-8000 products.

Check out what Andy had to say about Cisco’s strategy:

Cisco’s vision is one in which big companies off-load an increasing number of server tasks to network switches, with servers ultimately becoming little more than virtual machines inside a switch.*

The Nexus doesn’t deliver that, but it makes a start, aiming to virtualize the network interface cards, host bus adapters, and cables that connect servers to networks and remote storage. At present, those require dedicated local area networks and storage area networks, with each using a separate network interface card and host bus adapter for every virtual server. The Nexus aims to consolidate them all into one (or two, for redundancy), with virtual servers connecting through virtual NICs.

This stuff isn’t vaporware anymore.  These products are real…from numerous entities.  These companies — and especially Cisco — are on a mission to re-write the datacenter blueprint and security along with it.  VMware’s leading the virtualization charge and Cisco’s investing for the long run.  When you look at their investment in VMware, the I/O virtualization play and what they’re doing with vFrame, it’s impressive — and scary at the same time.

Them’s a lot of eggs in one basket, and it’s perfectly clear that there is a huge sucking sound coming from the traditional security realm as we look out over the horizon.  How do you apply a static security sensibility grounded in the approaches of 20 years ago to an amorphous, fluid, distributed and entirely dynamic pooled set of resources and information?

Cisco has thrown their hat in the ring to address the convergence of role-based admission and access control with the announcement of TrustSec, which will be available in the Nexus as it is in the higher-end Catalyst switches.  Other vendors such as HP, Extreme and now Juniper, as well as up-starts like Nevis and ConSentry, have their own perspectives.  What each of these infrastructure networking vendors has in store for how their solutions will play in the world of virtualized and distributed computing has yet to unfold.

How might this emerging phase of technology, architecture, provisioning, management, deployment and virtualization of resources impact security especially since we’ve barely even started to embrace the impact server virtualization has?  One word:

Completely.

More on this topic shortly…

/Hoff

*Update: A colleague of mine from Unisys, Michael Salsburg, prompted me via discussion to clarify a point.  I think that  for at least the short term, the "server tasks" that will be offloaded to I/O virtualization solutions such as Cisco’s will be fairly narrow in scope and logically defined.  However, given that NX-OS is Linux based, one might expect to see a Hypervisor-like capability within the switch itself, enabling VM’s and applications to be run directly within it.

Certainly we can expect an intermediary technology derivation, which would include Cisco developing their own virtual switch that complements/replaces the vSwitch present in the VMM today; at this point, given the heft/performance of the Nexus, one could potentially see it existing "outside" the vHost and, using a high-speed 10Gb/s connection, redirecting all virtual network functions to the external switch…

Categories: Cisco, Virtualization, VMware Tags:

Client Virtualization and NAC: The Fratto Strikes Back…

January 20th, 2008

Attention NAC vendors who continue to barrage me via email/blog postings claiming I don’t understand NAC: you’re missing the point of this post, which basically confirms my point; you’re not paying attention and are being myopic.

I included NAC with IPS in (the original) post here to illustrate two things:

(1) Current NAC solutions aren’t particularly relevant when you have centralized and virtualized client infrastructure and

(2) If you understand the issues with offline VM’s in the server world and what it means to compliance and admission control on spin-up or when VMotioned, you could add a lot of value by adapting your products (if you’re software based) to do offline VM conformance/remediation and help prevent VM sprawl and inadvertent non-compliant VM spin-up…
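Point (2) can be sketched as a simple admission gate (the names are hypothetical; `CURRENT_BASELINE` stands in for whatever compliance baseline a real product would track): an offline VM spinning up is checked against the baseline and routed to remediation if it has gone stale while parked.

```python
# Hypothetical admission gate for offline-VM conformance: before a
# dormant VM image is allowed to spin up, compare its recorded patch
# level against the current baseline and route stale images to
# remediation instead of onto the network.

CURRENT_BASELINE = 42  # stand-in for a monotonically increasing patch revision

def admit(vm):
    """Return 'run' for compliant images, 'remediate' for stale ones."""
    return "run" if vm["patch_level"] >= CURRENT_BASELINE else "remediate"

fresh = {"name": "web01", "patch_level": 42}
stale = {"name": "old-template", "patch_level": 17}  # parked for months
```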

But you go ahead and continue with your strategy…you’re doing swell so far convincing the market of your relevance.

Now back to our regular programming…

— ORIGINAL POST —


I sense a disturbance in the force…

Mike Fratto’s blog over at the NWC NAC Immersion Center doesn’t provide a method of commenting, so I thought I’d respond to his post here regarding my latest rant on how virtualization will ultimately and profoundly impact the IPS and NAC appliance markets titled "How the hypervisor is death by a thousand cuts to the network IPS/NAC appliance vendors."

I think Mike took a bit of a left turn when analyzing my comments because he missed my point.  Assuming I’m wrong, I’ll respond the best I can.

A couple of things really stood out in Mike’s comments and I’m going to address them in reverse order.  I think most of Mike’s comments strike an odd chord to me because my post was about what is going to happen to the IPS/NAC markets given virtualization’s impact and not necessarily what these products look like today.

Even though the focus of my post was not client virtualization, let’s take this one first:

Maybe I am missing something, but client virtualization just doesn’t seem to be in the cards today. Even if I am wrong, and I very well could be, I don’t think mixing client VM’s with server VM in the same hypervisor would be a good idea if for no other reason than the fact that a client VM could take down the hypervisor or suck up resources.

I don’t say this to be disrespectful, but it doesn’t appear that Mike understands how virtualization technology works.  I can’t understand what he means when he speaks of "…mixing client VM’s with server VM in the same hypervisor."  VM’s sit atop the hypervisor, not *in* it.  Perhaps he’s suggesting that, despite isolation being the entire operating premise of virtualization, it’s a bad idea to have a virtualized client instance colocated on the same physical host as a VM running a server instance?  Why?

Further, beyond theoretical hand wringing, I’d very much like to see a demo today of how a "…client VM could take down the hypervisor."

I won’t argue that client virtualization is not yet as popular as server virtualization, but according to folks like Gartner, it’s on the uptake, especially when it comes to endpoint management and the consumerization of IT.  With entire product lines from folks like Citrix (Desktop Server, Presentation Server, XenDesktop) and VMware (VDI) devoted to it, dismissing client virtualization is a hard pill to swallow.

This is exactly the topic of my post here (Thin Clients: Does this laptop make my assets look fat?), underscored with a quick example by Thomas Bittman from Gartner:

Virtualization on the PC has even more potential than server virtualization to improve the management of IT infrastructure, according to Mr Bittman.

“Virtualization on the client is perhaps two years behind, but it is going to be much bigger. On the PC, it is about isolation and creating a managed environment that the user can’t touch. This will help change the paradigm of desktop computer management in organizations. It will make the trend towards employee-owned notebooks more manageable, flexible and secure.”

Today, I totally get that NAC is about edge deployment (access layer): keeping the inadvertent client polluter from bringing something nasty onto the network, making sure endpoints are compliant with network policy, and in some cases controlling access to network resources:

NAC is, by definition, targeting hosts at the edge. The idea is to control access of untrusted or untrustworthy hosts to the network based on some number of conditions like authentication, host configuration, software, patch level, activity, etc. NAC is client facing regardless of whether you’re controlling access at the client edge or the data center edge.

I understand that today’s version of NAC isn’t about servers, but the distinction between clients and servers blurs heavily due to virtualization, and NAC — much like IPS — is going to have to change to address this.  In fact, some might argue it already has.  Further, some of the functionality being discussed when using the TPM is very much NAC-like.  Remember, given the dynamic nature of VMs (and technology like VMotion) the reality is that a VM could turn up anywhere on a network.  In fact, I can run (I do today, actually) a Windows "server" in a VM on my laptop:

You could deploy NAC to control access by servers to the network, but I don’t think that is a particularly useful or effective strategy, mainly because I would hope that your servers are better maintained and better managed than desktops. Certainly, you aren’t going to have arbitrary users accessing the server desktop and installing software, launching applications, etc. The main threat to servers is if they come under the control of an attacker, so you really need to make sure your apps and app servers are hardened.

Within a virtualized environment (client and server) you won’t need a bunch of physical appliances or "NAC switches," as this functionality will be provided by a virtual appliance within a host or as a function of the trusted security subsystem embedded within the virtualization provider’s platform.

I think it’s a natural by-product of the productization of what we see as NAC platforms today, anyhow.  Most of the NAC solutions today used to be IPS products yesterday.  That’s why I grouped them together in this example.

This next paragraph almost makes my point entirely:

Client virtualization is better served with products like Citrix MetaFrame or Microsoft’s Terminal Services where the desktop configuration is dictated and controlled by IT and thus doesn’t suffer from the same problems that physical desktops do. Namely, in a centrally managed remote client situation, the administrator can more easily and effectively control the actions of a user and their interactions on the remote desktop. Drivers that are being pushed by NAC vendors and analysts, as well as responses to our own reader polls, relating the host condition like patch level, running applications, configuration, etc are more easily managed and should lead to a more controlled environment.

Exactly!  Despite perhaps his choice of products, if the client environment is centralized and virtualized, why would I need NAC (as it exists today) in this environment!?  I wouldn’t.  That was the point of the post!

Perhaps I did a crappy job of explaining my point, or maybe if I hadn’t included NAC alongside IPS, Mike wouldn’t have made that left turn, but I maintain that IPS and NAC both face major changes in having to deal with the impact virtualization will bring.

/Hoff

 

UPDATED: How the Hypervisor is Death By a Thousand Cuts to the Network IPS/NAC Appliance Vendors

January 18th, 2008 4 comments

Attention NAC vendors who continue to barrage me via email/blog postings claiming I don’t understand NAC:  You’re missing the point of this post which basically confirms my point; you’re not paying attention and are being myopic.

I included NAC with IPS in this post to illustrate two things:

(1) Current NAC solutions aren’t particularly relevant when you have centralized and virtualized client infrastructure and

(2) If you understand the issues with offline VM’s in the server world and what it means to compliance and admission control on spin-up or when VMotioned, you could add a lot of value by adapting your products (if you’re software based) to do offline VM conformance/remediation and help prevent VM sprawl and inadvertent non-compliant VM spin-up…

But you go ahead and continue with your strategy…you’re doing swell so far convincing the market of your relevance.

Now back to our regular programming…

— ORIGINAL POST —


From the "Out Of the Loop" Department…

Virtualization is causing IPS and NAC appliance vendors some real pain in the strategic planning department.  I’ve spoken to several product managers of IPS and NAC companies that are having to make some really tough bets regarding just what to do about the impact virtualization is having on their business.

They hem and haw initially about how it’s not really an issue, but two beers later, we’re speaking the same language…

Trying to align architecture, technology and roadmaps to the emerging tidal wave of consolidation that virtualization brings can be really hard.  It’s hard to differentiate where the host starts and the network ends…

In reality, firewall vendors are in exactly the same spot.  Check out this Network World article titled "Options seen lacking in firewall virtual server protection."  In today’s world, it’s almost impossible to distinguish a "firewall" from an "IPS" from a "NAC" device from a new-fangled "highly adaptive access control" solution (thanks, Vernier Autonomic Networks…)

It’s especially hard for vendors whose IPS/NAC software is tied to specialty hardware, unless of course all you care about is enforcing at the "edge" — wherever that is, and that’s the point.  The demarcation of those security domain diameters has now shrunk.  Significantly, and not just for servers, either.  With the resurgence of thin clients and new VDI initiatives, where exactly is the client/server boundary?

Prior to virtualization, network-based IPS/NAC vendors would pick arterial network junctions and either use a tap/SPAN port in an out-of-band deployment or slap a box inline between the "trusted" and "untrusted" sides of the links and that was that.  You’d be able to protect assets based on port, VLAN or IP address.

You obviously only see what traverses those pipes.  If you look at the problem I described here back in August of last year, where much of the communication takes place as intra-VM sessions on the same physical host that never actually touch the externally "physical" network, you’ve lost precious visibility for detection let alone prevention.
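To make that visibility gap concrete, here’s a toy sketch (the host names, VM names and flows are entirely made up for illustration) showing that a tap or inline appliance on the physical wire only ever sees inter-host traffic; anything between two VMs on the same host stays inside the vSwitch:

```python
# Toy model: a network tap only sees traffic that crosses the physical wire.
# Flows between VMs on the same physical host stay inside the vSwitch and
# never reach an external IPS/NAC appliance.

# Hypothetical VM-to-host placement (illustrative only)
vm_host = {
    "web01": "esx-a",
    "app01": "esx-a",   # same host as web01
    "db01":  "esx-b",
}

flows = [
    ("web01", "app01"),  # intra-host: never touches the physical network
    ("app01", "db01"),   # inter-host: crosses the wire
    ("web01", "db01"),   # inter-host: crosses the wire
]

def visible_to_physical_tap(src, dst):
    """A flow hits the physical network only if src and dst live on different hosts."""
    return vm_host[src] != vm_host[dst]

for src, dst in flows:
    status = "visible" if visible_to_physical_tap(src, dst) else "INVISIBLE (intra-vSwitch)"
    print(f"{src} -> {dst}: {status}")
```

The more workloads consolidate onto fewer hosts, the larger the share of flows that fall into the invisible bucket.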

I think by now everyone recognizes how server virtualization impacts network and security architecture and basically provides four methods (potentially in combination) today for deploying security solutions:

  1. Keep all your host based protections intact and continue to circle the wagons around the now virtualized endpoint by installing software in the actual VMs
  2. When available, take a security solution provider’s virtual appliance version of their product (if they have one), install it on a host as a VM, and configure the virtual networking within the vSwitch to provide the appropriate connectivity.
  3. Continue to deploy physical appliances between the hosts and the network
  4. Utilize a combination of host-based software and physical IPS/NAC hardware to provide off-load "switched" or "cut-through" policy enforcement between the two.

Each of these options has its pros and cons for both the vendor and the customer; trade-offs in manageability, cost, performance, coverage, scalability and resilience can be ugly.  Those that have both endpoint and network-based solutions are in a far more flexible place than those that do not.

Many vendors who have only physical appliance offerings are basically stuck adding 10Gb/s Ethernet connections to their boxes as they wait impatiently for options 5, 6 and 7 so they can "plug back in":

5.  Virtualization vendors will natively embed more security functionality within the hypervisor and continue integrating with trusted platform models

6.  Virtualization vendors will allow third parties to substitute their own vSwitches as a function of the hypervisor

7.  Virtualization vendors will allow security vendors to utilize a "plug-in" methodology and interact directly with the VMM via API

These options would allow both endpoint software installed in the virtual machines as well as external devices to interact directly with the hypervisor with full purview of inter and intra-VM flows and not merely exist as a "bolted-on" function that lacks visibility and best-of-breed functionality.
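To illustrate what that "plug-in" model (option 7) might look like, here’s a deliberately simplified sketch; none of these class or method names correspond to any real VMware or Xen API — they’re assumptions for the sake of the example:

```python
# Hypothetical sketch of option 7: the VMM exposes a registration API and
# dispatches every inter- and intra-VM packet event to security plug-ins.

class ToyVMM:
    def __init__(self):
        self.inspectors = []

    def register_inspector(self, callback):
        """A security vendor's plug-in hooks the VMM's packet path."""
        self.inspectors.append(callback)

    def deliver(self, src_vm, dst_vm, payload):
        # Every flow, even VM-to-VM on the same host, passes the plug-ins first.
        for inspect in self.inspectors:
            if inspect(src_vm, dst_vm, payload) == "DROP":
                return False  # blocked before it ever reaches the vSwitch
        return True  # forwarded

def toy_ips(src_vm, dst_vm, payload):
    """A trivial signature check standing in for a real IPS engine."""
    return "DROP" if b"exploit" in payload else "PASS"

vmm = ToyVMM()
vmm.register_inspector(toy_ips)
print(vmm.deliver("web01", "db01", b"SELECT 1"))        # forwarded
print(vmm.deliver("web01", "db01", b"exploit-attempt")) # blocked
```

The point of the model is placement, not the inspection logic: because the hook sits in the VMM rather than on the wire, intra-host flows are no longer invisible.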

While we’re on the topic of adding 10Gb/s connectivity, it’s important to note that having 10Gb/s appliances isn’t always about how many Gb/s of IPS traffic you can handle; it’s also about consolidating what would otherwise be potentially dozens of trunked LACP 1Gb/s Ethernet and FC connections pouring out of each host, managing not only the aggregate bandwidth but also the issues driven by a segmented network.

So to get the coverage across a segmented network today, vendors are shipping their appliances with tons of ports — not necessarily because they want to replace access switches, but rather to enable coverage and penetration.

On the other hand, most of the pure-play software vendors today who say they are "virtualization enabled" really mean that their product installs as a virtual appliance on a VM on a host.  The exposure these solutions have to traffic is entirely dependent upon how the vSwitches are configured.

…and it’s going to get even more hairy as the battle for the architecture of the DatacenterOS also rages.  The uptake of 10Gb/s Ethernet is also contributing to the mix as we see customers:

  • Upgrading from servers to blades
  • Moving from hosts and switches to clusters and fabrics
  • Evolving from hardware/software affinity to grid/utility computing
  • Transitioning from infrastructure to service layers in “the cloud”

Have you asked your IPS and NAC vendors who are hardware-bound how they plan to deal with this tsunami on their roadmaps within the next 12 months?  If not, grab a lifejacket.

/Hoff

UPDATE:  It appears nobody uses trackbacks anymore, so I’m resorting to activity logs, Google alerts and stubbornness to tell when someone’s referencing my posts.  Here are some interesting references to this post:

…also, this is right on the money:

I think I’ll respond to them on my blog with a comment on theirs pointing back over…

On Patch Tuesdays for Virtualization Platforms…

January 14th, 2008 2 comments

In December I made note of an interesting post on the virtualization.info blog titled "Patch Tuesday for VMware."  This issue popped up today in conversation with a customer and I thought to bubble it back up for discussion.

The post focused on some work done by Ronald Oglesby and Dan Pianfetti from GlassHouse Technologies regarding the volume, frequency and distribution of patches across VMware’s ESX platform.

When you combine Ronald and Dan’s data with Kris Lamb’s from ISS that I wrote about a few months ago, it’s quite interesting.

The assertion that Ronald/Dan are making in their post is that platforms like VMware’s ESX have to date required just as much care and feeding from a patching/vulnerability management perspective as a common operating system such as Windows Server:

So why make this chart and look at the time between patches? Let’s take a hypothetical server built on July 2nd of 2007, 5 months ago almost exactly. Since being built on that day and put into production, that server would have been put into maintenance mode and patched/updated eight times. That’s right, eight (8) times in 5 months. How did this happen? Let’s look at the following timeline:


Maybe it’s time to slow down and look at this as a QA issue? Maybe it’s time to stop thinking about these platforms as rock solid, few-moving-parts systems? Maybe it’s better for us not to draw attention to it, and instead let it play out and let the markets decide whether all this patching is a good thing or not. Obviously patching is a necessary evil, and maybe because we are so used to it in the Windows world, we have ignored this so far. But a patch every 18.75 days for our "hypothetical" server is a bit much, don’t you think?
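Their arithmetic checks out. Using dates that approximate the hypothetical July 2nd server above (the exact "as of" date is my assumption; GlassHouse didn’t publish one):

```python
# Sanity-checking the quoted cadence: eight maintenance windows in roughly
# five months works out to one patch every 18.75 days.
from datetime import date

build_date  = date(2007, 7, 2)
as_of       = date(2007, 11, 29)  # "5 months ago almost exactly" (approximate)
patch_count = 8

elapsed_days = (as_of - build_date).days
print(elapsed_days)                # 150
print(elapsed_days / patch_count)  # 18.75
```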

I think this may come as a shock to some who have long held the belief that bare-metal, Type 1 virtualization platforms require little or no patching and that because of this, the "security" and availability of virtualized hosts was greater than that of their non-virtualized counterparts.

The reality of the situation, with the effort and potential downtime involved (despite tools that help), has led to unexpected service level deviance, hidden costs and latent insecurity in deployed virtualized environments.  I think Ronald/Dan said it best:

If a client is buying into the idea of server virtualization as a piece of infrastructure (like a SAN or a switch) only to see the types of patching we see in Windows, they are going to get smacked in the face with the reality that these are SERVERS. The reality that the vendors are sticking so much into the OS that patches are going to happen just as often as with Windows Servers? Or, if the client believes the stability/rock solidness and skips a majority of general patches, they wind up with goofy time issues or other problems with iSCSI, until they catch up.

As a counterpoint to this argument I had hoped to convince Kris Lamb to extend his patch analysis of VMware’s releases and see if he could tell how many patched vulnerabilities existed in the service console (the big ol’ fat Linux OS globbed onto the side of ESX) versus the actual VMM implementation itself.  For some reason, he’s busy with his day job. 😉 This is really an important data point.  I guess I’ll have to do that myself ;(
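If someone does get to it, the analysis itself is simple: bucket each advisory by which component it patches. A sketch with invented records (a real run would parse VMware’s actual advisory feed, which this does not):

```python
# Sketch of the analysis proposed above: given a list of patch advisories,
# bucket them by affected component.  The records here are invented for
# illustration; none correspond to real VMware advisories.

advisories = [
    {"id": "EX-001", "component": "service console"},  # hypothetical
    {"id": "EX-002", "component": "service console"},
    {"id": "EX-003", "component": "vmm"},
    {"id": "EX-004", "component": "service console"},
]

counts = {}
for adv in advisories:
    counts[adv["component"]] = counts.get(adv["component"], 0) + 1

print(counts)
```

If the service-console bucket dominates, as I suspect it would, that says more about the fat Linux OS bolted onto ESX than about the VMM itself.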

The reason why this is important is exactly the reason that you’re seeing VMware and other industry virtualization players moving to embedded hypervisors: skinnying down the VMMs to yield less code, less attack surface and hopefully fewer vulnerabilities.  So, to be fair, the evolution of the virtualization platforms is really on par with what one ought to expect from a technology that’s still fairly nascent.

In fact, that’s exactly the response from Nand Mulchandani, VMware’s Sr. Director of Security Product Management & Marketing, to Ronald/Dan’s post:

As the article points out, "patching is a necessary evil" – and the existence of ESX patches should not come as a shock to anyone. So let’s talk about the sinister plan behind the increase in ESX patches. Fortunately, the answer is in the article itself. Our patches contain a lot of different things, from hardware compatibility updates, feature enhancements, security fixes, etc.

We also want customers to view ESX as an appliance – or more accurately, as a product that has appliance-like characteristics.

Speaking of appliances, another thing to consider is that we are now offering ESX in a number of different form-factors, including the brand new ESX Server 3i. 3i will have significantly different patch characteristics – it does not have a Console OS and has a different patching mechanism than ESX that will be very attractive to customers.

I see this as a reasonable and rational response to the issue, but it does point out that whether you use VMware or any other vendor’s virtualization platform, you should make sure to recognize that patching and vulnerability management of the underlying virtualization platforms is another — if not really critical — issue that will require operational attention and potential cost allocation.

/Hoff

P.S. Mike D. does a great job of stacking up other vendors in this vein, such as Microsoft, Virtual Iron, and SWSoft.