Archive

Archive for the ‘Virtualization’ Category

Network Intelligence is an Oxymoron & The Myth of Security Packet Cracking

May 21st, 2007 No comments

[Live from Interop’s Data Center Summit]

Jon Oltsik crafted an interesting post today regarding the bifurcation of opinion on where the “intelligence” ought to sit in a networked world: baked into the routers and switches or overlaid using general-purpose compute engines that ride Moore’s curve.

I think that I’ve made it pretty clear where I stand.   I submit that you should keep the network dumb, fast, reliable and resilient and add intelligence (such as security) via flexible and extensible service layers that scale both in terms of speed but also choice.

You should get to define what best of breed means to you and add or remove services at the speed of your business, not at the speed of an ASIC spin or of a technology acquisition that is in line with neither the pace and evolution of classes of threats and vulnerabilities nor the speed of an agile business.

The focal point of his post, however, was to suggest that the real issue is the fact that all of this intelligence requires exposure to the data streams which means that each component that comprises it needs to crack the packet before processing.   Jon suggests that you ought to crack the packet once and then do interesting things to the flows.  He calls this COPM (crack once, process many) and suggests that it yields efficiencies — of what, he did not say, but I will assume he means latency and efficacy.

So, here’s my contentious point that I explain below:

Cracking the packet really doesn’t contribute much to the overall latency equation anymore thanks to high-speed hardware, but the processing sure as heck does!  So whether you crack once or many times doesn’t really matter; what you do with the packet does.

Now, on to the explanation…

I think that it’s fair to say that many of the underlying mechanics of security are commoditizing, so things like anti-virus, IDS, firewalling, etc. can be done without a lot of specialization – leveraging prior art is quick and easy, and thus companies can broaden their product portfolios by just adding a feature to an existing product.

Companies can do this because of the agility that software provides, not hardware.  Hardware can give you economies of scale as it relates to overall speed (for certain things) but generally not flexibility.

However, software has its own Moore’s curve of sorts, and I maintain that unfortunately its lifecycle, much like what we’re hearing @ Interop regarding CPU’s, does actually have a shelf life and a point of diminishing returns for reasons that you’re probably not thinking about…more on this from Interop later.

Jon describes the stew of security componentry and what he expects to see @ Interop this week:

I expect network intelligence to be the dominant theme at this week’s Interop show in Las Vegas. It may be subtle but its definitely there. Security companies will talk about cracking packets to identify threats, encrypt bits, or block data leakage. The WAN optimization crowd will discuss manipulating protocols and caching files, Application layer guys crow about XML parsing, XSLT transformation, and business logic. It’s all about stuffing networking gear with fat microprocessors to perform one task or another.

That’s a lot of stuff tied to a lot of competing religious beliefs about how to do it all as Jon rightly demonstrates and ultimately highlights a nasty issue:

The problem now is that we are cracking packets all over the place. You can’t send an e-mail, IM, or ping a router without some type of intelligent manipulation along the way.

<nod>  Whether it’s in the network, bolted on via an appliance or done on the hosts, this is and will always be true.  Here’s the really interesting next step:

I predict that the next bit wave in this evolution will be known as COPM for "Crack once, process many." In this model, IP packets are stopped and inspected and then all kinds of security, acceleration, and application logic actions occur. Seems like a more efficient model to me.

To do this basically means that this sort of solution requires proxy (transparent or terminating) functionality.  Now, the challenge is that whilst “cracking the packets” is relatively easy and cheap even at 10G line rates thanks to hardware, the processing is really, really hard to do well across the spectrum of processing requirements if you care about things such as quality, efficacy, and latency, and it is “expensive” in all of those categories.
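
To put a toy number on that asymmetry, here’s a quick Python doodle of my own (the packet layout, signatures and workload are all invented for illustration, not anyone’s product): “cracking” is a handful of fixed-offset field extractions, while even a modest signature scan has to touch every byte of payload, for every pattern.

```python
# Toy sketch: parse ("crack") a packet once, then "process" it.
# Everything here (layout, signatures, sizes) is invented for illustration.
import re
import struct
import time

# Fake frame: 14-byte Ethernet header + 20-byte IPv4 header + HTTP-ish payload.
ip_header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 1500, 0, 0, 64, 6, 0,
                        b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02")
packet = bytes(14) + ip_header + b"GET /index.html HTTP/1.1\r\n" + b"A" * 1400

def crack(pkt):
    # "Cracking": constant-time pulls from fixed offsets.
    ver_ihl, tos, total_len = struct.unpack("!BBH", pkt[14:18])
    proto = pkt[23]                        # IP protocol byte (offset 9 in the IP header)
    return {"proto": proto, "len": total_len, "payload": pkt[34:]}

# "Processing": scan the whole payload against every signature.
signatures = [re.compile(p) for p in (rb"\.\./\.\.", rb"cmd\.exe", rb"%00", rb"A{1000}")]

def inspect(flow):
    return [s.pattern for s in signatures if s.search(flow["payload"])]

t0 = time.perf_counter()
flow = crack(packet)
t1 = time.perf_counter()
hits = inspect(flow)
t2 = time.perf_counter()
print(f"crack: {(t1 - t0) * 1e6:.1f} us, process: {(t2 - t1) * 1e6:.1f} us, hits: {hits}")
```

Multiply the processing column by the number of services and the number of flows at 10G line rates and you see where the latency actually lives.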

The intelligence of deciding what to process and how once you’ve cracked the packets is critical. 

This is where embedding this stuff into the network is a lousy idea. 

How can a single vendor possibly provide anything more than “good enough” security in a platform never designed to solve this sort of problem whilst simultaneously trying to balance delivery and security at line rate? 

This will require a paradigm shift for the networking folks: either start from scratch and integrate high-speed networking with general-purpose compute blades, re-purpose a chassis (like, say, a Cat65K) and stuff it with nothing but security cards grafted onto the switches, or stack appliances (big or small, single form factor or in blades) and graft them onto the switches once again.  And by the way, simply adding networking cards to a blade server isn’t an effective solution, either.  "Regular" applications (and especially SOA/Web 2.0 apps) aren’t particularly topology sensitive.  Security "applications," on the other hand, are wholly dependent upon and integrated with the topologies into which they are plumbed.

It’s the hamster wheel of pain.

Or, you can get one of these, which offers all the competency, agility, performance, resilience and availability of a specialized networking component combined with an open, agile and flexible operating and virtualized compute architecture that scales with parity based on Intel chipsets and Moore’s law.

What this gives you is an ecosystem of loosely-coupled BoB security services that can be intelligently combined in any order once the packet is cracked, and ruthlessly manipulated as it passes through them, governed by policy – ultimately making decisions on how and what to do to a packet/flow based upon content in context.
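
If you want to picture what that service layer looks like in code, here’s a minimal sketch (the service names, policy table and flow format are all mine, purely illustrative): the flow is cracked once upstream, and policy decides which services touch it, and in what order, based on what the flow is rather than where the boxes happen to be cabled.

```python
# Minimal sketch of a policy-governed "crack once, process many" service chain.
# Services, policy and flow format are invented for illustration.
def firewall(flow):  return flow if flow["dport"] in (25, 80, 443) else None
def ips(flow):       return None if b"cmd.exe" in flow["payload"] else flow
def dlp(flow):       return None if b"CONFIDENTIAL" in flow["payload"] else flow
def av(flow):        return flow  # stub: pretend we scanned for malware

SERVICES = {"fw": firewall, "ips": ips, "dlp": dlp, "av": av}

# Policy: which chain a flow traverses, and in what order, is a policy
# decision made on content in context, not a consequence of cabling.
POLICY = {
    80: ["fw", "ips", "dlp"],   # web
    25: ["fw", "av", "dlp"],    # mail
}

def process(flow):
    # The flow was already cracked once upstream; each service just works
    # on the parsed representation and may pass, rewrite, or drop it.
    for name in POLICY.get(flow["dport"], ["fw"]):
        flow = SERVICES[name](flow)
        if flow is None:
            return f"dropped by {name}"
    return "forwarded"

print(process({"dport": 80, "payload": b"GET / HTTP/1.1"}))        # forwarded
print(process({"dport": 25, "payload": b"CONFIDENTIAL roadmap"}))  # dropped by dlp
```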

The consolidation of best of breed security functionality delivered in a converged architecture yields efficiencies that are spread across the domains of scale, performance, availability and security, but also across the traditional economic scopes of CapEx and OpEx.

Cracking packets, bah!  That’s so last Tuesday.

/Hoff

The Operational Impact of Virtualizing Security…

May 6th, 2007 No comments

A benefit of a show such as Infosec UK is that one is given the opportunity to organize customer meetings and very unique roundtables because everyone clusters around the show.

Last year we organized a really interesting roundtable discussion with 13 of the UK’s most compelling members of the financial services and telco/service provider industries.  This year we did another similar event with equal representation from industry.

The agenda of these meetings revolves around a central topic; the group members first introduce themselves and then add color and experiential commentary regarding the issue at hand.  The interesting thing is that by the time the "introductions" are complete, we’ve all engaged in fantastic discussion, with most people sharing key experiential data and debate that stretches the time allotment of the event.

This year’s topic was "The Operational Impact of Virtualizing Security."  It was a fascinating topic for me since I was quite interested in seeing how virtualized security was taking hold in these organizations and how operationalizing security was impacted by virtualizing it.

Virtualization (in the classic data center consolidation via virtual machine implementations) is ultimately fueled by two things: reclaiming and reducing the spend of (a) time and (b) money.  In the large enterprise it’s about fewer boxes and services on demand to serve the business.  With Telcos/Mobile Operators/Service Providers it’s about increasing the average revenue per subscriber/customer and leveraging common infrastructure to deliver security as a service (for fun and profit.)

The single largest differentiator between the two (or so) markets really boils down to scale: how many things are you trying to protect, and at what cost?  Novel idea, eh?

It was evident that those considering virtualizing their security were motivated primarily by the same criteria, but in many cases politics, religion, regulatory requirements, imprecise use cases, bad (or non-existent) metrics, not aligning security to the business goals, fear and also some very real concerns from the security or network "purists" dramatically impacted people’s opinions regarding whether or not to virtualize their security architecture.

In most cases, it became evident that the most critical issues related to separation of duties, single points of failure, transparency (or lack thereof), fault-isolation domains, silos of administration, and the fact that many of the largest networks on the planet are largely still "flat," which makes virtualization hard.  There were some hefty visualization and management concerns, but almost none of the issues were really technical.

I related a story wherein I had to spend an hour on the phone trying to convince some senior security folks at a very large company that VLANs, while they could be misconfigured and misused like any other technology, were not inherently evil.  Imagine the fun involved when I recounted the virtualization of transport, policy and security applications across a cluster of load-balanced application processing modules in a completely virtualized overlaid security services layer!

So, what the discussion boiled down to was that the operational impact of virtualizing security is compelling on many fronts, especially when discussing the economics of time and money.  When it came to downsides, most sang the same old song: in the Fortune 2000, where budgets are certainly larger than anywhere else, it’s still "easier" to just deploy single-function boxes because one doesn’t need to think, organize differently, re-architect or buck the status quo.

It takes more than a simple firewall refresh to start thinking differently about how, why and where we deploy security.   Sometimes one has to think outside the box, and other times it just takes redefining what the box looks like in the first place.

/Hoff


NWC’s Wittmann: Security in Virtualized Environments Overstated: Just Do It!

April 30th, 2007 2 comments

In the April 2007 edition of Network Computing magazine, Art Wittmann talks about server virtualization, its impact on data center consolidation and the overall drivers and benefits virtualization offers.

What’s really interesting is that while he rambles on about the benefits of power, cooling and compute cycle-reclamation, he completely befuddled me with the following statement in which he suggests that:

    "While the security threat inherent in virtualization is
     real, it’s also overstated."

I’ll get to the meaty bits in a minute as to why I think this is an asinine comment, but first a little more background on the article.

In addition to illustrating everything wrong with the way in which IT has traditionally implemented security — bolting it on after the fact rather than baking it in — the article shows the recklessness with which the adoption of technology is cavalierly evangelized without an appropriate level of security and without an overall understanding of the risk such a move creates.

Wittmann manages to do this with an attitude that seeks to suggest that the speed-bump security folks and evil vendors (or in his words: nattering nabobs of negativity) are just intent on making a mountain out of a molehill.

It seems that NWC approaches the evaluation of technology and products in terms of five areas: performance, manageability, scalability, reliability and security.  He lists how virtualization has proven itself in the first four categories, but oddly sums up the fifth category (security) by ranting not about the security things that should or have been done, but rather about how it’s all overblown and a conspiracy by security folks to sell more kit and peddle more FUD:

"That leaves security as the final question.  You can bet that everyone who can make a dime on questioning the security of virtualization will be doing so; the drumbeat has started and is increasing in volume. 

…I think it’s funny that he’s intimating that we’re making this stuff up.  Perhaps he’s only read the theoretical security issues and not the practical.  While things like Blue Pill are sexy and certainly add sizzle to an argument, there are some nasty security issues that are unique to the virtualized world.  The drumbeat is increasing because these threats and vulnerabilities are real and so is the risk that companies that "just do it" are going to discover.

But while the security threat is real — and you should be concerned about it — it’s also overstated.  If you can eliminate 10 or 20 servers running outdated versions of NT in favor of a single consolidated pair of servers, the task of securing the environment should be simpler or at least no more complex.  If you’re considering a server consolidation project, do it.  Be mindful of security, but don’t be dissuaded by the nattering nabobs of negativity."

As far as I am concerned, this is irresponsible and reckless journalism and displays an ignorance of the impact that technology can have when implemented without appropriate security baked in. 

Look, if we don’t have security that works in non-virtualized environments, replicating the same mistakes in a virtualized world isn’t just as bad, it’s horrific.   While it should be simpler or at least no more complex, the reality is that it is not.  The risk model changes.  Threat vectors multiply.  New vulnerabilities surface.  Controls multiply.  Operational risk increases.

We end up right back where we started; with a mess that the lure of cost and time savings causes us to rush into without doing security right from the start.

Don’t just do it.  Understand the impact that a lack of technology, controls, process, and policies will have on your business before you’re held accountable for doing what Wittmann suggests you do today with reckless abandon.  Your auditors certainly will.

/Hoff

More On the Risks of Virtualization

April 4th, 2007 3 comments

I’ve been doing a bit of writing and speaking on panels recently on the topic of virtualization and the impact that it has across the entire spectrum of risk; I think it’s fairly clear to most that virtualization impacts all aspects of the computing landscape, from the client to the data center and ultimately how securing virtualization by virtualizing security is important.

Gartner just released an interesting article that says "Organizations That Rush to Adopt Virtualization Can Weaken Security."   Despite the sensationalism that some people react to in the title, I think that the security issues they bring up are quite valid. 

I’m glad to see that this study almost directly reflects the talking points that we’ve been puttering on about without any glaring omissions as it validates the problem space; it doesn’t take a rocket scientist to state the obvious, but I hope we get solutions to these problems quickly. 

Granted these are fairly well-known issues but most folks have not looked deeply into how this affects their overall risk models:

Organizations must consider these security issues in virtualized environments:

  • Virtualization software, such as hypervisors, represents a new layer of privileged software that will be attacked and must be protected.
  • The loss of separation of duties for administrative tasks, which can lead to a breakdown of defense in-depth.
  • Patching, signature updates, and protection from tampering for offline VM and VM "appliance" images.
  • Patching and secure configuration management of VM appliances where the underlying OS and configuration are not accessible.
  • Limited visibility into the host OS and virtual network to find vulnerabilities and assess correct configuration.
  • Restricted view into inter-VM traffic for inspection by intrusion prevention systems (IPSs).
  • Mobile VMs will require security policy and settings to migrate with them (see the sketch after this list).
  • Immature and incomplete security and management tools.
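
Since the migrating-policy item is the one that bites the network folks first, here’s a small sketch of what "policy follows the VM" means in practice (the event model, host names and policy fields are hypothetical, invented for illustration): security settings are keyed to the VM’s identity, not to the host or switch port it happens to occupy, so a migration re-applies them at the destination.

```python
# Toy sketch: policy keyed to VM identity, re-applied on migration.
# All names and fields are hypothetical.
POLICIES = {
    "vm-web-01": {"zone": "dmz",   "allow_ports": [80, 443], "ips_profile": "web"},
    "vm-db-01":  {"zone": "trust", "allow_ports": [3306],    "ips_profile": "db"},
}

class Host:
    def __init__(self, name):
        self.name = name
        self.enforced = {}   # vm_id -> policy currently programmed on this host

    def apply_policy(self, vm_id):
        self.enforced[vm_id] = POLICIES[vm_id]
        print(f"{self.name}: enforcing {POLICIES[vm_id]} for {vm_id}")

    def remove_policy(self, vm_id):
        self.enforced.pop(vm_id, None)

def migrate(vm_id, src, dst):
    # Policy moves with the VM: program the destination before cutover,
    # then clean up the source so stale rules don't linger.
    dst.apply_policy(vm_id)
    src.remove_policy(vm_id)

esx1, esx2 = Host("esx1"), Host("esx2")
esx1.apply_policy("vm-web-01")
migrate("vm-web-01", esx1, esx2)
```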

I’m going to be presenting something very similar at the ISSA Metro event in Charlotte on April 10th.  I’ll upload my presentation ahead of time for anyone who might find it useful or interesting.

/Hoff

If it walks like a duck, and quacks like a duck, it must be…?

April 2nd, 2007 5 comments

Seriously, this really wasn’t a thread about NAC.  It’s a great soundbite to get people chatting (arguing) but there’s a bit more to it than that.  I didn’t really mean to offend those NAC-Addicts out there.

My last post was an exploration of security functions and their status (or even migration/transformation) as either a market or a feature included in a larger set of features.  Alan Shimel responded to my comments, specifically regarding my opinion that NAC is now rapidly becoming a feature and won’t be a competitive market for much longer.

Always the quick wit, Alan suggested that UTM was a "technology" that is going to become a feature much like my description of NAC’s fate.  Besides the fact that UTM isn’t a technology but rather a consolidation of lots of other technologies that won’t stand alone, a completely orthogonal statement that Alan made caused my head to spin as a security practitioner.

My reaction stems from the repeated belief that there should be separation of delivery between the network plumbing, the security service layers and ultimately the application(s) that run across them.  Note well that I’m not suggesting that common instrumentation, telemetry and disposition shouldn’t be collaboratively shared, but their delivery and execution ought to be discrete.  Best tool for the job.

Of course, this very contention is the source of much of the disagreement between me and many others who believe that security will just become absorbed into the "network."  It seems now that Alan is suggesting that the model of combining all three is going to be something in high demand (at least in the SME/SMB) — much in the same way Cisco does:

The day is rapidly coming when people will ask why would they buy a box that all it does is a bunch of security stuff.  If it is going to live on the network, why would the network stuff not be on there too or the security stuff on the network box.

Firstly, multi-function devices that blend security and other features on the "network" aren’t exactly new.

That’s what the Cisco ISR platform is becoming now, what with the whole Branch Office battle raging; back in ’99 (the first thing that pops into my mind) a bunch of my customers bought and deployed WhistleJet multi-function servers which had DHCP, a print server, an email server, a web server, a file server, and security functions such as a firewall/NAT baked in.

But that’s neither here nor there, because the thing I’m really, really interested in is Alan’s decidedly non-security-focused approach to prioritizing utility over security, given that he works for a security company, that is.

I’m all for bang for the buck, but I’m really surprised that he would make a statement like this within the context of a security discussion.

That is what Mitchell has been talking about in terms of what we are doing and we are going to go public Monday.  Check back then to see the first small step in the leap of UTM’s becoming a feature of Unified Network Platforms.

Virtualization is a wonderful thing.  It’s also got some major shortcomings.  Just because you *can* run everything under the sun on a platform doesn’t always mean that you *should*, and often it means you very much get what you pay for.  This is what I meant when I quoted Lee Iacocca: "People want economy and they will pay any price to get it."

How many times have you tried to consolidate all those multi-function devices (PDA, phone, portable media player, camera, etc.) down into one device?  Never works out, does it?  Ultimately you get fed up with inconsistent quality levels: you buy the next megapixel camera that comes out with image stabilization, then you get the new video iPod, then…

Alan’s basically agreed with me on my original point discussing features vs. markets, and the UTM vs. UNP thing is merely a hand-waving marketing exercise.  Move on folks, nothing to see here.

’nuff said.

/Hoff

(Written sitting in front of my TV watching Bill Maher drinking a Latte)

Another Virtualized Solution for VM Security…

March 19th, 2007 10 comments

I got an email reminder from my buddy Grant Bourzikas today pointing me to another virtualized security solution for servers from Reflex Security called Reflex VSA.  VSA stands for Virtual Security Appliance and the premise appears to be that you deploy this software within each guest VM and it provides what looks a lot like host-based intrusion prevention functionality per VM.

The functionality is defined thusly:

Reflex VSA solves the problem that traditional network security such as IPS and firewall appliances currently can not solve: detecting and preventing attacks within a virtual server.  Because Reflex VSA runs as virtualized application inside the virtualized environment, it can detect and mitigate threats between virtual hosts and networks.

Reflex VSA Features:
  • Access firewall for permission enforcement for intra-host and external network communication
  • Intrusion Prevention with inline blocking and filtering for virtualized networks
  • Anomaly, signature, and rate-based threat detection capability
  • Network Discovery to discover and map all virtual machines and applications
  • Reflex Command Center, providing a centralized configuration and management console, comprehensive reporting tools, and real-time event aggregation and correlation

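For a feel of what the detection bullets amount to, here’s a toy Python sketch of signature and rate-based checks (the thresholds, patterns and event format are invented; this is emphatically not Reflex’s implementation):

```python
# Toy sketch of signature and rate-based detection; everything here
# (signatures, thresholds, event format) is invented for illustration.
import time
from collections import defaultdict, deque

SIGNATURES = [b"cmd.exe", b"/etc/passwd", b"\x90" * 32]   # toy byte patterns
RATE_LIMIT = 100       # packets per source...
WINDOW = 1.0           # ...per one-second sliding window

history = defaultdict(deque)   # src -> timestamps of recent packets

def check_packet(src, payload):
    alerts = []
    # Signature detection: flag any payload containing a known-bad pattern.
    for sig in SIGNATURES:
        if sig in payload:
            alerts.append(("signature", src, sig))
    # Rate-based detection: flag sources exceeding RATE_LIMIT within WINDOW.
    now = time.monotonic()
    q = history[src]
    q.append(now)
    while q and q[0] < now - WINDOW:
        q.popleft()
    if len(q) > RATE_LIMIT:
        alerts.append(("rate", src, len(q)))
    return alerts

print(check_packet("10.0.0.5", b"GET /cgi-bin/..%2f..%2fcmd.exe HTTP/1.0"))
```
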
It does not appear to wrap around or plug into the HyperVisor natively, so I’m a little confused as to the difference between deploying VSA and whatever HIPS/NIPS agent a customer might already have deployed on "physical" server instantiations.

Blue Lane’s product addresses this at the HyperVisor layer and it would be interesting to me to have the pundits/experts argue the pros/cons of each approach. {Ed. This is incorrect.  Blue Lane’s product runs as a VM/virtual appliance also.  With the hypervisor/virtual switches exposed via API, products like Blue Lane and Reflex could take advantage of them to be more flexible, effective and higher performing.}

I’m surprised most of the other "security configuration management" folks haven’t already re-branded their agents as being "Virtualization Compliant" to attack this nascent marketspace. < :rolleyes here: >

It’s good to see that folks are at least owning up to the fact that intra-VM communications via virtual switches are going to drive a spin on risk models, detection and mitigation tools and techniques.  This is what I was getting at in this blog entry here.

I would enjoy speaking to someone from Reflex to understand their positioning and differentiation better, but isn’t this just HIPS per VM?  How’s that different than firewall, AV, etc. per VM?

/Hoff

John Thompson’s (Symantec) Ironic warning of “Conflict of Interest”

March 19th, 2007 3 comments

Infoworld ran an interesting article on John Thompson’s recent CeBIT keynote in which he took a shot at Microsoft by suggesting that there is an inherently "…huge conflict of interest for one company to provide both an operating platform and a security platform."

I suppose that opinion depends upon whether or not said company suggests that their security controls are all that are needed to secure said operating system or that defense in depth is not needed.

Here’s why I find this statement interesting and I am going to twist it by agreeing with the statement within the context of the same argument pertaining to Cisco as an extension to the many, many articles I have already written on this topic.

Given just the last rash of vulnerabilities in Cisco’s routing, switching and security products a few weeks ago, I believe it’s also a mistake (you can read "conflict of interest" if you desire) for Cisco (le fox) to protect the network (le chicken.)  That’s the same argument of the "operating system" and the "security platform."

I think it’s simply not relevant or appropriate to shrug off issues like this just because of Cisco’s size and the apparent manifest destiny associated with security "going into the switch."  Just because it does, and more than likely will, does not mean it should, and it does not mean that people will settle for "good enough" security when the network consistently fails to self-defend.

I don’t disagree that more and more security *will* make its way into the network switches, much like I don’t disagree that the sun will rise in the east and set in the west, but much in the same way that folks don’t just give up and go to sleep once the sun goes down, the lightbulb that goes on in my head suggests there is a better way.

/Hoff

Blue Lane VirtualShield for VMWare – Here we go…

March 19th, 2007 1 comment

Greg Ness from Blue Lane and I have known each other for a while now, and ever since I purchased Blue Lane’s first release of products a few years ago (when I was on the "other" side as a *gasp* customer) I have admired their technology and have taken some blog-derived punishment for my position on it.

I have zero interest in Blue Lane other than the fact that I dig their technology and products and think it solves some serious business problems elegantly and efficiently with a security efficacy that is worth its weight in gold.

Vulnerability shielding (or patch emulation…) is a provocative subject, and I’ve gone ’round and ’round with many a fine folk online wherein the debate normally dissolves into the intricacies of IPS vs. vulnerability shielding rather than the fact that these solutions solve a business problem in a unique way that works and is cost effective.

That’s what a security product SHOULD do.  Yet I digress.

So, back to Greg @ Blue Lane…he let me know a few weeks ago about Blue Lane’s VirtualShield offering for VMWare environments.  VirtualShield is the first commercial product that I know of that specifically tackles problems that everyone knows exist in VM environments but that, until now, everyone has sat around twirling thumbs about.

In fact, I alluded to some of these issues in this blog entry regarding the perceived "dangers" of virtualization a few weeks ago.

In short, VirtualShield is designed to protect guest VM’s running under a VMWare ESX environment in the following manner (and I quote):

  • Protects virtualized servers regardless of physical location or patch-level;
  • Provides up-to-date protection with no configuration changes and no agent installation on each virtual machine;
  • Eliminates remote threats without blocking legitimate application requests or requiring server reboots; and
  • Delivers appropriate protection for specific applications without requiring any manual tuning.

VS basically sits on top of the HyperVisor and performs a set of functionality similar to what the PatchPoint solution does for non-VM systems.

Specifically, VirtualShield discovers the virtual servers running on a server and profiles the VM’s, the application(s), ports and protocols utilized to build and provision the specific OS and application protections (vulnerability shielding) required to protect the VM.

I think the next section is really the key element of VirtualShield:

As traffic flows through VirtualShield inside the hypervisor, individual sessions are decoded and monitored for vulnerable conditions.  When necessary, VirtualShield can replicate the function of a software security patch by applying a corrective action directly within the network stream, protecting the downstream virtual server.

As new security patches are released by software application vendors, VirtualShield automatically downloads the appropriate inline patches from Blue Lane.  Updates may be applied dynamically without requiring any reboots or reconfigurations of the virtual servers, the hypervisor, or VirtualShield.

While one might suggest that vulnerability shielding is not new and in some cases certain functionality can be parlayed by firewalls, IPS, AV, etc., I maintain that the manner and model in which Blue Lane elegantly executes this compensating control is unique and effective.
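
To make "applying a corrective action directly within the network stream" concrete, here’s a much-simplified sketch of the idea (the "vulnerability", an over-long HTTP header, and the "patch", clamping it, are invented for illustration; Blue Lane’s actual mechanics are surely more sophisticated):

```python
# Toy sketch of inline patch emulation: correct a vulnerable condition in the
# stream itself rather than on the server. The vulnerability and the fix are
# invented for illustration.
MAX_HEADER = 4096   # assumed safe limit that a real patch would enforce

def shield(stream: bytes) -> bytes:
    head, sep, body = stream.partition(b"\r\n\r\n")
    fixed = []
    for line in head.split(b"\r\n"):
        if len(line) > MAX_HEADER:
            # Apply the "inline patch": clamp the header to a safe length,
            # mirroring what the vendor's fix would do inside the server.
            line = line[:MAX_HEADER]
        fixed.append(line)
    return b"\r\n".join(fixed) + sep + body

exploit = b"GET / HTTP/1.1\r\nHost: " + b"A" * 10000 + b"\r\n\r\n"
print(len(shield(exploit)))   # the request reaches the VM clamped to a safe size
```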

If you’re running a virtualized server environment under VMWare’s ESX architecture, check out VirtualShield…right after you listen to the virtualization podcast with yours truly from RSA.

/Hoff

RSA Conference Virtualization Panel – Audio Session Available

March 15th, 2007 No comments

According to the folks at RSA, people really wanted the audio recording of the DEPL-107 "Virtualization and Security" panel session I was on @ this year’s RSA show.

The room was filled to the brim and I think ultimately it’s worth the listen.  Good balance of top-down and bottom-up taxonomy of the challenges virtualization brings to the security world.

The kind folks @ RSA decided that rather than charge for it, they would release it for free:

"Demand for these six sessions was so high at RSAR Conference 2007 that we’re providing the audio recordings for all to enjoy for free. Please download the session audio files below, and enjoy!"

If you think I write a lot, I talk a hell of a lot more!  Yikes.

Here is the link to the .mp3 of the DEPL-107 Session.

Enjoy.  /Hoff

Virtualization is Risky Business?

February 28th, 2007 6 comments

Over the last couple of months, the topic of virtualization and security (or lack thereof) continues to surface as one of the more intriguing topics of relevance in both the enterprise and service provider environments and those who cover them.  From bloggers to analysts to vendors, virtualization is a greenfield for security opportunity and a minefield for the risk models used to describe it.

There are many excellent arguments being discussed which highlight, in an ad hoc manner, the most serious risks posed by virtualization, and I find many of them accurate, compelling, frightening and relevant.  However, I find that overall, the risk model(s) used to gauge in relative terms the impact that these new combinations of attack surfaces, vectors and actors pose are immature and incomplete.

Most of the arguments are currently based on hyperbole and anecdotal references to attacks that could happen.  It reminds me much of the ballyhooed security risks currently held up for scrutiny for mobile handsets.  We know bad things could happen, but for the most part, we’re not being proactive about solving some of the issues before they see the light of day.

The panel I was on at the RSA show highlighted this very problem.  We had folks from VMWare and RedHat in the audience who assured us that we were just being Chicken Littles and that the risk is both quantifiable and manageable today.  We also had other indications that customers felt that while the benefits of virtualization from a cost perspective were huge, the perceived downside from the unknown risks (mostly theoretical) was making them very uncomfortable.

Out of the 150+ folks in the room, approximately 20 had virtualized systems in production roles.  About 25% of them had collapsed multiple tiers of an n-tier application stack (including SOA environments) onto a single host VM.  NONE of them had yet had these systems audited by any third party or regulatory agency.

Rot Roh.

The interesting thing to me was the dichotomy regarding the top-down versus bottom-up approach to describing the problem.  There was lots of discussion regarding hypervisor (in)security and privilege escalation and the like, but I thought it interesting that most people were not thinking about the impact on the network and how security would have to change to accommodate it from a bottoms-up (infrastructure and architecture) approach.

The notions of guest VM hopping and malware detection in hypervisors/VM’s are reasonably well discussed (yet not resolved), so I thought I would approach it from the perspective of what role, if any, the traditional network infrastructure plays in this.

Thomas Ptacek was right when he said "…I also think modern enterprises are so far from having reasonable access control between the VLANs they already use without virtualization that it’s not a “next 18 month” priority to install them." And I agree with him there.  So, I posit that if one accepts this as true then what to do about the following:

If we now see the consolidation of multiple OSes and applications on a single VM host, in which the bulk of traffic and data interchange is between the VM’s themselves, utilizes the virtual switching fabrics in the VM host, and never hits the actual physical network infrastructure, where, exactly, does this leave the self-defending "network" without VM-level security functionality at the "micro perimeters" of the VM’s?
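
Here’s a toy model of exactly that blind spot (all names hypothetical): two guests exchange traffic across an in-host virtual switch, nothing ever touches a physical NIC, and the only place inspection can live is a hook inside the vswitch itself.

```python
# Toy model of intra-host VM-to-VM traffic across a virtual switch.
# All names are hypothetical; this is an illustration, not any vendor's API.
class VirtualSwitch:
    def __init__(self):
        self.ports = {}       # vm_name -> receive callback
        self.taps = []        # inspection hooks living inside the host

    def plug(self, vm_name, rx_callback):
        self.ports[vm_name] = rx_callback

    def add_tap(self, hook):
        self.taps.append(hook)   # the only vantage point that sees these frames

    def send(self, src, dst, frame):
        for tap in self.taps:
            tap(src, dst, frame)
        self.ports[dst](frame)   # delivered VM-to-VM; never leaves the box

vswitch = VirtualSwitch()
vswitch.plug("web-vm", lambda f: None)
vswitch.plug("db-vm", lambda f: None)
vswitch.add_tap(lambda s, d, f: print(f"tap: {s} -> {d}: {f[:20]!r}"))
vswitch.send("web-vm", "db-vm", b"SELECT * FROM accounts;")
```

A wire-side IDS sitting on the physical switch sees precisely none of that exchange, which is the point.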

I recall a question I asked of Jayshree Ullal from Cisco, who was presenting Cisco’s strategy regarding virtualized security at a recent Goldman Sachs security conference: how is their approach to securing the network impacted by virtualization in the situation I describe above?

You could hear crickets chirp in the answer.

Talk amongst yourselves….

P.S. More excellent discussions from Matasano (Ptacek) here and Rothman’s bloggy.  I also recommend Greg Ness’ commentary on virtualization and security @ the HyperVisor here.