
Archive for the ‘Virtualization’ Category

Performance Implications Of Security Functions In Virtualized Environments

March 28th, 2008 4 comments

In my VirtSec presentations, I lead my audience through the evolution of virtualized security models, describing what configuration and architecture options we have for implementing existing and emerging security solutions, both now and projected out to about 3 years from now.

I’ll be posting that shortly.

Three of the interesting things that I highlight that result in having light bulbs go off in the audience are when I discuss:

  1. The compute (CPU) and I/O overhead added by security software running either in the VMs on top of the guest OSes, in security virtual appliances in the host, or in a combination of both.
  2. The performance limitations of the current implementations of virtual networking and packet-handling routines due to virtualization architectures and constrained access to hardware.
  3. The complexity imposed when having to manage/map a number of physical to virtual NICs and configure the vSwitch and virtual appliances appropriately to manipulate traffic flows (at L2 and up) through multiple security solutions, whether from an intra-host perspective, integrated with external security solutions, or both. 

I’m going to tackle each of these issues in separate posts, but I’d be interested in speaking with anyone willing to compare testing results. 

Needless to say, I’ve done some basic mock-ups and performance testing with some open source and commercial security products in virtualized configurations under load.  Much of the capacity I may have gained by consolidating low-utilization physical hosts onto a single virtualized host is eroded by the processing the virtual appliance(s) need to keep up with the load under stress without dropping packets or introducing large amounts of latency.

Beware of what this might mean in your production environments.  Ever see a CPU pegged due to a runaway process?  Imagine what happens when every packet between virtual interfaces gets crammed through a virtual appliance in the same host first in order to "secure" it.
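To make the capacity-erosion point concrete, here is a back-of-the-envelope model (my own sketch; the function name and every number in it are hypothetical, not measured results):

```python
# Toy capacity model: consolidate low-utilization hosts onto one box, then
# force all inter-VM traffic through a security virtual appliance on the
# same host. All figures are hypothetical.

def remaining_headroom(host_cores, vm_count, util_per_vm, cores_per_kpps, inter_vm_kpps):
    """Fraction of host CPU left after guest load and appliance packet processing."""
    guest_load = vm_count * util_per_vm              # cores consumed by the VMs themselves
    appliance_load = cores_per_kpps * inter_vm_kpps  # cores burned inspecting traffic
    return (host_cores - guest_load - appliance_load) / host_cores

# 8 cores, 12 VMs at 0.25 core each, no inline inspection:
print(remaining_headroom(8, 12, 0.25, 0.0, 0))      # 0.625
# Same host, with an appliance costing 0.02 core per kpps at 100 kpps of inter-VM traffic:
print(remaining_headroom(8, 12, 0.25, 0.02, 100))   # 0.375
```

Swap in per-packet costs from your own testing; the point is simply that the appliance’s cycles come out of the same budget the consolidation was meant to free up.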

I made mention of this in my last post:

The reality is that for reasons I’ve spoken of many times, our favorite ISV’s have been a little handicapped by what the virtualization platforms offer up in terms of proper integration against which we can gain purchase from a security perspective.  They have to sell what they’ve got while trying to remain relevant all the while watching the ground drop out beneath them.

These vendors have a choice: employ some fancy marketing messaging to make it appear as though the same products you run on a $50,000+ dedicated security appliance will actually perform just as well in a virtual form.

Further, tell you that you’ll enjoy just as much visibility without disclosing limitations when interfaced to a virtual switch that makes it next to impossible to replicate most complex non-virtualized topologies.

Or, just wait it out and see what happens hoping to sell more appliances in the meantime.

Some employ all three strategies (with a fourth being a little bit of hope.)

This may differ based upon virtualization platforms and virtualization-aware chipsets, but capacity planning when adding security functions is going to be critical in production environments for anyone going down this path. 

/Hoff


It’s Virtualization March Madness! Up First, Montego Networks

March 27th, 2008 No comments

If you want to read about Montego Networks right off the bat, you can skip the Hoff-Tax and scroll down to the horizontal rule and start reading.  Though I’ll be horribly offended, I’ll understand…

I like being contradictory, even when it appears that I’m contradicting myself.  I like to think of it as giving a balanced perspective on my schizophrenic self…

You will likely recall that my latest post suggested that the real challenge for virtualization at this stage in the game is organizational and operational and not technical. 

Well, within the context of this post, that’s obviously half right, but it’s an incredibly overlooked fact that is causing distress in most organizations, and it’s something that technology — as a symptom of the human condition — cannot remedy.

But back to the Tech.

The reality is that for reasons I’ve spoken of many times, our favorite ISV’s have been a little handicapped by what the virtualization platforms offer up in terms of proper integration against which we can gain purchase from a security perspective.  They have to sell what they’ve got while trying to remain relevant all the while watching the ground drop out beneath them.

These vendors have a choice: employ some fancy marketing messaging to make it appear as though the same products you run on a $50,000+ dedicated security appliance will actually perform just as well in a virtual form.

Further, tell you that you’ll enjoy just as much visibility without disclosing limitations when interfaced to a virtual switch that makes it next to impossible to replicate most complex non-virtualized topologies. 

Or, just wait it out and see what happens hoping to sell more appliances in the meantime.

Some employ all three strategies (with a fourth being a little bit of hope.)

Some of that hoping is over and is on its way to being remedied with enablers like VMware’s VMsafe initiative.  It’s a shame that we’ll probably end up with a battle of APIs, with ISVs having to choose which virtualization platform provider’s API to support rather than a standard across multiple platforms.

Simon Crosby from Xen/Citrix made a similar comment in this article:

While I totally agree with his sentiment, I’m not sure Simon would be as vocal or egalitarian had Citrix been first out of the gate with their own VMsafe equivalent.  It’s always sad when one must plead for standardization when you’re not in control of the standards…and by the way, Simon, nobody held a gun to the heads of the 20 companies that rushed for the opportunity to be the first out of the gate with VMsafe as it’s made available.

While that band marches on, some additional measure of aid may come from innovative youngbloods looking to build and sell you the next better mousetrap.


As such, in advance of the RSA Conference in a couple of weeks, the security world’s all aflutter with the sounds of start-ups being born out of stealth as well as new-fangled innovation clawing its way out of up-starts seeking to establish a beachhead in the attack on your budget.

With the normal blitzkrieg of press releases that will undoubtedly make their way to your doorstep, I thought I’d comment on a couple of these companies in advance of the noise.

A lot of what I want to say is sadly under embargo, but I’ll get further in-depth later when I’m told I can take the wraps off.  You should know that almost all of these emerging solutions, as with the one below, operate as virtual appliances inside your hosts and require close and careful configuration of the virtual networking elements therein.

If you go back to the meat of the organizational/operational issue I describe above, who do you think has access and control over the virtual switch configurations?  The network team?  The security team?  How about the virtual server admin team… are you concerned yet?

Here’s my first Virtualized March Madness (VMM, get it!) ISV:

  • Montego Networks – John Peterson used to be the CTO at Reflex, so he knows a thing or two about switching, virtualization and security.  I very much like Montego’s approach to solving some of the networking issues associated with vSwitch integration and better yet, they’ve created a very interesting business model that is actually something like VMsafe in reverse. 

    Essentially Montego’s HyperSwitch works in conjunction with the integrated vSwitch in the VMM and uses some reasonably elegant networking functionality to classify traffic and either enforce dispositions natively using their own "firewall" technologies (L2-L4) or — and this is the best part — redirect traffic to other named security software partners to effect disposition. 

    If you look on Montego’s website, you’ll see that they show StillSecure and BlueLane as candidates for what they call HyperVSecurity partners.  They also do some really cool stuff with NetFlow.

    Neat model.  When VMsafe is available, Montego should then allow these other third party ISV’s to take advantage of VMsafe (by virtue of the HyperSwitch) without the ISV’s having to actually modify their code to do so – Montego will build that to suit.  There’s a bunch of other stuff that I will write about once the embargo is lifted.

    I’m not sure how much runway and strategic differentiation Montego will have from a purely technical perspective as VMsafe ought to level the playing field for some of the networking functionality with competitors, but the policy partnering is a cool idea. 

    We’ll have to see what the performance implications are given the virtual appliance model Montego (and everyone else) has employed.  There’s lots of software in them thar hills doing the flow/packet processing and enacting dispositions…and remember, that’s all virtualized too.

    In the long term, I expect we’ll see some of this functionality appear natively in other virtualization platforms.

    We’ll see how well that prediction works out over time as well as keep an eye out for that Cisco virtual switch we’ve all been waiting for…*
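The “classify natively, redirect the rest” flow described above can be sketched roughly like this (my illustration only; the rules and partner names are invented, and this is not Montego’s actual code):

```python
# Toy L3/L4 classifier in the spirit of "enforce simple dispositions natively,
# hand interesting flows to a named partner engine." Rules and partner names
# are invented for illustration.

POLICY = [
    # (dst_port, proto, disposition)
    (22,  "tcp", "drop"),          # block inter-zone SSH with the native "firewall"
    (80,  "tcp", "partner:ips"),   # redirect web traffic to a partner IPS for inspection
    (443, "tcp", "allow"),         # pass TLS through natively
]

def classify(dst_port, proto):
    """Return the disposition for a flow, defaulting to permit like a plain vSwitch."""
    for port, p, disposition in POLICY:
        if port == dst_port and p == proto:
            return disposition
    return "allow"

print(classify(80, "tcp"))    # partner:ips
print(classify(22, "tcp"))    # drop
print(classify(8080, "tcp"))  # allow (no matching rule)
```

The interesting design choice is the `partner:` disposition: the switch layer stays thin and the heavy inspection is someone else’s problem, which is exactly the business model described above.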

I’ll be shortly talking about Altor Networks and Blue Lane’s latest goodies.

If you’ve got a mousetrap you’d like to see in lights here, feel free to ping me, tell me why I should care, and we’ll explore your offering.  I guarantee that if it passes the sniff test here it will likely mean someone else will want a whiff.

/Hoff

* Update: Alan over at the Virtual Data Center Blog did a nice write-up on his impressions and asks why this functionality isn’t in the vSwitch natively.  I’d pile onto that query, too.  Also, I sort of burned myself by speaking to Montego because the details of how they do what they do are under embargo for a little while longer based on my conversation, so I can’t respond to Alan…

The Challenge of Virtualization Security: Organizational and Operational, NOT Technical

March 25th, 2008 7 comments

Taking the bull by the horns…

I’ve spoken many times over the last year on the impact virtualization brings to the security posture of organizations.  While there are certainly technology issues that we must overcome, we don’t have solutions today that can effectively deliver us from evil. 

Anyone looking for the silver bullet is encouraged to instead invest in silver buckshot.  No shocker there.

There are certainly technology and solution providers looking to help solve these problems, but honestly, they are constrained by the availability and visibility to the VMM/Hypervisors of the virtualization platforms themselves. 

Obviously announcements like VMware’s VMsafe will help turn that corner, but VMsafe requires re-tooling of ISV software and new versions of the virtualization platforms.  It’s a year+ away and only addresses concerns for a single virtualization platform provider (VMware) and not others.

The real problem of security in a virtualized world is not technical, it is organizational and operational.

With the consolidation of applications, operating systems, storage, information, security and networking — all virtualized into a single platform rather than being discretely owned, managed and supported by (reasonably) operationally-mature teams — the biggest threat we face in virtualization is that we have lost not only visibility, but also the clearly-defined lines of demarcation garnered from the separation of duties we had in the non-virtualized world.

Many companies have segmented off splinter cells of "virtualization admins" from the server teams, and they are often solely responsible for the virtualization platforms, which includes the care, feeding, diapering and powdering of not only the operating systems and virtualization platforms, but the networking and security functionality as well.

No offense to my brethren in the trenches, but this is simply a case of experience and expertise.  Server admins are not experts in network or security architectures and operations, just as the latter cannot hope to be experts in the former’s domain.

We’re in an arms race now where virtualization brings brilliant flexibility, agility and cost savings to the enterprise, but ultimately further fractures the tenuous relationships between the server, network and security teams.

Now that the first-pass consolidation pilots of virtualizing non-critical infrastructure assets have been held up as beacons of ROI in our datacenters, security and networking teams are exercising their veto powers as virtualization efforts creep toward critical production applications, databases and transactional systems.

Quite simply, expressing risk, security posture, compliance, troubleshooting and measuring SLAs and dependencies within the construct of a virtualized world is much more difficult than in the discretely segregated physical world, and when taken to the mat on these issues, the virtual server admins simply cannot address them competently in the language of the security and risk teams.

This is going to make for some unneeded friction in what was supposed to be a frictionless effort.  If you thought the security teams were thought of as speed bumps before, you’re not going to like what happens soon when they try to delay/halt a business-driven effort to reduce costs, speed time-to-market, increase availability and enable agility.

I’ll summarize my prior recommendations as to how to approach this conundrum in a follow-on post, but the time is now to get these teams together and craft the end-play strategies and desired end-states for enterprise architecture in a virtualized world before we end up right back where we started 15+ years ago…on the hamster wheel of pain!

/Hoff

Risky Business — The Next Audit Cycle: Bellwether Test for Critical Production Virtualized Infrastructure

March 23rd, 2008 3 comments

I believe it’s fair to suggest that thus far, the adoption of virtualized infrastructure has been driven largely by consolidation and cost reduction.

In most cases the initial targets for consolidation through virtualization have focused on development environments, internally-facing infrastructure and non-critical application stacks and services.

Up until six months ago, my research indicated that most larger companies were not yet at the point where either critical applications/databases or those that were externally-facing were candidates for virtualization. 

As the virtualization platforms mature, as management and mobility functionality provide leveraged improvements over physical, non-virtualized counterparts, and as the capabilities to provide resilient services emerge, there is mounting pressure to expand virtualization efforts to include these remaining services/functions. 

With cost-reduction and availability improvements becoming more visible, companies are starting to tip-toe down the path of evaluating virtualizing everything else including these critical application stacks, databases and externally-facing clusters that have long depended on physical infrastructure enhancements to ensure availability and resiliency.

In these "legacy" environments, the HA capabilities are often provided by software-based clustering capabilities in the operating systems, applications or via the network thanks to load balancers and the like.  Each of these solutions sets are managed by different teams.  There’s a lot of complexity in making it all appear simple, secure and available.

This raises some very interesting questions that focus on assessing risk in these environments, in which duties and responsibilities are largely segmented and well-defined, versus their prospective virtualized counterparts, where the opposite is true.

If companies begin to virtualize and consolidate the applications, storage, servers, networking, security and high-availability capabilities into the virtualization platforms, where does the buck stop in terms of troubleshooting or assurance?  How does one assess risk?  How do we demonstrate compliance and security when "all the eggs are in one basket?"

I don’t think it’s accurate to suggest that the lack of mature security solutions has stalled the adoption of virtualization across the board, but I do think that as companies evaluate virtualization candidacy, security has been a difficult-to-quantify speed bump that has been danced around. 

We’ve basically been playing a waiting game.  The debate over virtualization and the inability to reach consensus on the increase/decrease in risk posture has left us at the point where we have taken the low-hanging fruit that is either non-critical or has resiliency built in, and simply consolidated it.  But now we’re at a crossroads as virtualization phase 2 has begun.

It’s time to put up or shut down…

Over the last year since my panel on virtualization security at RSA, I’ve been asking the same question in customer engagements and briefings:

How many of you have been audited by either internal or external governance organizations against critical virtualized infrastructure that are in production roles and/or externally facing? 

A year ago, nobody raised their hands.  I wonder what it will look like this year?

If IT and Security professionals can’t agree on the relative "security" or risk increase/decrease that virtualization brings, what position do you think that leaves the auditors in?  They are basically going to measure relative compliance to guidelines prescribed by governance and regulatory requirements.  Taken quite literally, many production environments featuring virtualized production components would not pass an audit.  PCI/DSS comes to mind.

In virtualized environments we’ve lost visibility, we’ve lost separation of duties, we’ve lost the inherent simplicity that functions spread over physical entities provide.  Existing controls and processes get us only so far, and the technology crutches we used to be able to depend on are buckling when we add the V-word to the mix.

We’ve seen technology initiatives such as VMware’s VMsafe that are still 9-12 months out that will help gain back some purchase in some of these areas, but how does one address these issues with auditors today?

I’m looking forward to the answer to this question at RSA this year to evaluate how companies are dealing with GRC (governance, risk and compliance) audits in complex critical production environments.

/Hoff

The Unbearable Lightness of Being…Virtualized

March 10th, 2008 1 comment

My apologies to Pete Lindstrom for not responding sooner to his comment regarding my "virtualization & defibrillation" post, and a hat-tip to Rothman for the reminder.

Pete was commenting on a snippet from my larger post dealing with the following assertion:

The Hoff-man pontificates yet again:

Firewalls, IDS/IPSs, UTM, NAC, DLP — all of them have limited visibility in this rapidly "re-perimeterized" universe in which our technology operates, and in most cases we’re busy looking at uninteresting and practically non-actionable things anyway.  As one of my favorite mentors used to say, "we’re data rich, but information poor."

In a general sense, I agree with this statement, but in a specific sense, it isn’t clear to me just how significant this need is. After all, we are talking today about 15-20 VMs per physical host max and I defy anyone to suggest that we have these security solutions for every 15-20 physical nodes on our network today – far from it.

Let’s shed a little light on this with some context regarding *where* in the network we are virtualizing as I don’t think I was quite clear enough.

Most companies have begun to virtualize their servers driven by the economic benefits of consolidation and have done so in the internal zones of the networks.  Most of these networks are still mostly flat from a layer 3 perspective, with VLANs providing isolation for mostly performance/QoS reasons.

In discussions with many companies ranging from the SME to the Fortune 10, only a minority are virtualizing hosts and systems placed within traditional DMZ’s facing external networks.

From that perspective, I agree with Pete.  Most companies are still grappling with segmenting their internal networks based on asset criticality, role, function or for performance/security reasons, so it stands to reason that internally we don’t, in general, see "…security solutions for every 15-20 physical nodes on our network today," but we certainly do in externally-facing DMZ’s.

However, that’s changing — and will continue to — as we see more and more basic security functionality extend into switching fabrics and companies begin to virtualize security policies internally.  The next generation of NAC/NAP is good example.  Internal segmentation will drive the need for logically applying combinations of security functions virtualized across the entire network space.

Furthermore, there are companies who are pushing well past the 15-20 VM’s per host, and that measure is really not that important.  What is important is the number of VM’s on hosts per cluster.  More on that later.

That said, it isn’t necessarily a great argument for me simply to suggest that since we don’t do it today in the physical world that we don’t have to do it in the virtual world. The real question in my mind is whether folks are crossing traditional network zones on the same physical box. That is, do trust levels differ among the VMs on the host? If they don’t, then not having these solutions on the boxes is not that big a deal – certainly useful for folks who are putting highly-sensitive resources in a virtual infrastructure, but not a lights-out problem.

The reality is that as virtualization platforms become more ubiquitous, internal segmentation and more strict definition of host grouping in both virtualized and non-virtualized server clusters will become an absolute requirement because it’s the only way security policies will be able to be applied given today’s technology. 

This will ultimately then drive to the externally-facing DMZ’s over time.  It can’t not.  However, today, the imprecise measurement of quantifiable risk (or lack thereof) combined with exceptionally large investment in "perimeter" security technologies, keeps most folks at an arm’s length from virtualizing their DMZ’s from a server perspective.  Lest we forget our auditor friends and things like PCI/DSS which complicate the issue.

Others might suggest that with appropriately architected multi-tier environments combined with load balanced/clustered server pools and virtualized security contexts, the only difference between what they have today and what we’re talking about here is simply the hypervisor.  With blade servers starting to proliferate in the DMZ, I’d agree.

Until we have security policies that are, in essence, dynamically instantiated as they are described by the virtual machines and the zones into which they are plumbed, many feel they are still constrained by trying to apply static ACL-like policies to incredibly dynamic compute models.

So we’re left with constructing little micro-perimeter DMZ’s throughout the network. 
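The contrast between static ACLs and policies that follow the workload can be sketched in a few lines (a hypothetical model of mine, not any shipping product): bind the rule to a zone tag on the VM rather than to an address, so the policy travels with the machine.

```python
# Policy keyed on zone tags instead of IPs/ports. When a VM migrates or is
# re-zoned, no ACL rewrite is needed; the lookup follows the tag.
# VM names and zones are invented for illustration.

vm_zone = {"web01": "dmz", "db01": "internal"}

zone_policy = {
    ("dmz", "internal"): "inspect",  # DMZ-to-internal traffic gets inspected
    ("internal", "dmz"): "drop",     # internal hosts may not initiate toward the DMZ
}

def disposition(src_vm, dst_vm):
    """Look up the disposition for traffic between two VMs by their zone tags."""
    return zone_policy.get((vm_zone[src_vm], vm_zone[dst_vm]), "allow")

print(disposition("web01", "db01"))  # inspect
vm_zone["web01"] = "internal"        # the VM is re-zoned (or migrated)
print(disposition("web01", "db01"))  # allow; the policy adapted with no ACL rewrite
```

A static, address-based ACL would have silently broken at the re-zoning step; that gap is exactly why the micro-perimeter workaround persists.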

If the VMs do cross zones, then it is much more important to have virtual security solutions. In fact, we don’t recommend using virtual security appliances as zone separators simply because the hypervisor’s additional attack surface introduces unknown levels of risk in an environment. (Similarly, in the physical world we don’t recommend switch-based security solutions as zone separators either).

What my research is finding in discussion with some very aggressive companies who are embracing virtualization is that starting internally — and for some of the reasons, like performance, which Pete mentions — companies are beginning to set up clusters of VM’s that do indeed "cross the streams." 😉

I am told by our virtualization technical expert that there may be performance benefits to commingling resources in this way, so at some point it will be great to have these security solutions available. I suppose we should keep in mind that any existing network security solution that isn’t using proprietary OS and/or hardware can migrate fairly easily into a virtual security appliance.

There are most certainly performance "benefits" — I call them band-aids — to co-mingling resources in a single virtual host.  Given the limitations of today’s virtual networking and the ever-familiar I/O bottlenecks we experience, with higher-than-acceptable latency in mainstream Ethernet-based connectivity, sometimes this is the only answer.  This is pushing folks to consider I/O virtualization technologies and connectivity other than Ethernet, such as InfiniBand.

The real issue I have with the old "just run your tired old security solutions as a virtual appliance within a virtual host" is the exact reason I alluded to in the quote Pete singled out.  We lose visibility, coherence and context using this model.  This is the exact reason for something like VMsafe, but it will come with a trade-off I’ve already highlighted.

The thinner the hypervisor, the more internals will need to be exposed via API to external functions.  While the VMM gets more compact, the attack surface expands via the API.

Keep in mind that we have essentially ignored the whole de-perimeterization, network without borders, Jericho Forum predisposition to minimize these functions anyway. That is, we can configure the functional VMs themselves with better security capabilities as well.

Yes, we have ignored it…and to our peril.  Virtualization is here. It will soon be in your DMZ if it isn’t already.  If you’re not seriously reconsidering the implications that virtualization (of servers, storage, networking, security, applications, data, etc.) has on your datacenter — both in your externally-facing segments and your internal networks —  *now* you are going to be in a world of hurt within 18 months.

/Hoff


I Love the Smell of Big Iron In the Morning…

March 9th, 2008 1 comment

Does Not Compute…

I admit that I’m often fascinated by the development of big iron and I also see how to some this seems at odds with my position that technology isn’t the answer to the "security" problem.  Then again, it really depends on what "question" is being asked, what "problem" we’re trying to solve and when we expect to solve them.

It’s pretty clear that we’re still quite some time off from having secure code, solid protocols, brokered authentication and encryption and information-centric based controls that provide the assurance dictated by the policies described by the information itself. 

In between now and then, we see the evolution of some very interesting "solutions" from those focused on the network and host perspectives.  It’s within this bubble that things usually get heated between those proponents who argue that innovation in networking and security is constrained to software versus those who maintain that the way to higher performance, efficacy and coverage can only be achieved with horsepower found in hardware.

I always find it interesting that the networking front prompts argument in this vein, but nobody seems to blink when we see continued development in mainframes — even in this age of Web3.0, etc.  Take IBM’s Z10, for example.  What’s funny is that a good amount of the world still ticks due to big iron in the compute arena despite the advances of distributed systems, SOA, etc., so why do we get so wrapped up when it comes to big iron in networking or security?

I dare you to say "value." 😉

I’ve had this argument on many fronts with numerous people and realized that in most cases what we were missing was context.  There is really no argument to "win" here, but rather a need for examination of what most certainly is a manifest destiny of our own design and the "natural" phenomena associated with punctuated equilibrium.

An Example: Cisco’s New Hardware…and Software to Boot [it.]

Both camps in the above debate would do well to consider the amount of time and money a bellwether in this space — Cisco —  is investing in a balanced portfolio of both hardware and software. 

If we start to see how the pieces are being placed on Cisco’s chess board, it makes for some really interesting scenarios:

Many will look at these developments and simply dismiss them as platforms that will only solve the very most demanding of high-end customers and that COTS hardware trumps the price/performance index when compared with specialty high-performance iron such as this. 

This is a rather short-sighted perspective and one that cyclically has proven inaccurate.   

The notion of hardware versus software superiority is a short term argument which requires context.  It’s simply silly to argue one over the other in generalities.  If you’d like to see what I mean, I refer you once again to Bob Warfield’s "Multi-Core Crisis" meme.  Once we hit cycle limits on processors we always find that memory, bus and other I/O contention issues arise.  It ebbs and flows based upon semiconductor fabrication breakthroughs and the evolution and ability of software and operating systems to take advantage of them.

Toss a couple of other disruptive and innovative technologies into the mix and the landscape looks a little more interesting. 

It’s All About the Best Proprietary Open Platforms…

I don’t think anyone — including me at this point — will argue that a good amount of "security" will just become a checkbox in (and I’ll use *gasp* Stiennon’s language) the "fabric."  There will always be point solutions to new problems that will get bolted on, but most of the security solutions out there today are becoming features before they mature to markets due to this behavior.

What’s interesting to me is where the "fabric" is and in what form it will take. 

If we look downrange and remember that Cisco has openly discussed its strategy of de-coupling its operating systems from hardware in order to provide for a more modular and adaptable platform strategy, all this investment in hardware may indeed seem to support this supposition.

If we also understand Cisco’s investment in virtualization (a-la VMware and IOS-XE) as well as how top-side investment trickles down over time, one could easily see how circling the wagons around both hardware for high-end core/service provider platforms [today] and virtualized operating systems for mid-range solutions will ultimately yield greater penetration and coverage across markets.

We’re experiencing a phase shift in the periodic oscillation associated with where in the stack networking vendors see an opportunity to push their agenda, and if you look at where virtualization and re-perimeterization are pushing us, the "network is the computer" axiom is beginning to take shape again. 

I find the battle for the datacenter OS between the software-based virtualization players and the hardware-based networking and security giants absolutely delicious, especially when you consider that the biggest of the latter (Cisco) is investing in the biggest of the former (VMware.)

They’re both right.  In the long term, we’re all going to end up with 4-5 hypervisors in our environments supporting multiple modular, virtualized and distributed "fabrics."  I’m not sure that any of that is going to get us close to solving the real problems, but if you’re in the business of selling tin or the wrappers that go on it, you can smile…

Imagine a blade server from your favorite vendor with embedded virtualization capabilities coupled with dedicated network processing hardware supporting your favorite routing/switching vendor’s networking code and running any set of applications you like — security or otherwise — with completely virtualized I/O functions forming a grid/utility compute model.*

Equal parts hardware, software, and innovation.  Cool, huh?

Now, about that Information-Centricity Problem…

*The reality is that this is what attracted me to Crossbeam: custom-built high-speed networking hardware, generic compute stacks based on Intel-reference designs, both coupled with a Linux-based operating system that supports security applications from multiple sources as an on-demand scalable security services layer virtualized across the network.

Trouble is, others have caught on now…

VMWare’s VMSafe: Security Industry Defibrillator….Making Dying Muscle Twitch Again.

March 2nd, 2008 6 comments

Nurse, 10 cc’s of Adrenalin, stat!

As I mentioned in a prior posting, VMware’s VMsafe has the potential to inject life back into the atrophied and withering heart muscle of the security industry and raise the prognosis from DOA to the potential for a vital economic revenue stream once more.

How?  Well, the answer to this question really comes down to whether you believe that keeping a body on assisted life support means that the patient is living or simply alive, and the same perspective goes for the security industry.

With the inevitable consolidation of solutions and offerings in the security industry over the last few years, we have seen the commoditization of many markets as well as the natural emergence of others in response to the ebb and flow of economic, technological, cultural and political forces.

One of the most impactful, disruptive and innovative forces that is causing arrhythmia in the pulse of both consumers and providers and driving the emergence of new market opportunities is virtualization. 

For the last two years, I’ve been waving my hands about the fact that virtualization changes everything across the information lifecycle.  From cradle to grave, the evolution of virtualization will profoundly change what, where, why and how we do what we do.

I’m not claiming that I’m the only one, but it was sure lonely from a general security practitioner’s perspective up until about six months ago.  In the last four months, I’ve given two keynotes and three decently visible talks on VirtSec, and I have 3-4 more tee’d up over the next 3 months, so somebody’s interested…better late than never, I suppose.

How’s the patient?

For the purpose of this post, I’m going to focus on the security implications of virtualization and simply summarize by suggesting that virtualization up until now has quietly marked a tipping point where we see the disruption stretch security architectures and technologies to their breaking point and in many cases make much of our invested security portfolio redundant and irrelevant.

I’ve discussed why and how this is the case in numerous posts and presentations, but it’s clear (now) to most that the security industry has been out of phase with what has plainly been a well-signaled (r)evolution in computing.

Is anyone really surprised that we are caught flat-footed again?  Sorry to rant, but…

This is such a sorry indicator of why things are so terribly broken with "IT/Information Security" as it stands today; we continue to try and solve short term problems with even shorter term "solutions" that do nothing more than perpetuate the problem — and we do so in such a horrific display of myopic dissonance that it’s a wonder we function at all.  Actually, it’s a perfectly wonderful explanation as to why criminals are always 5 steps ahead — they plan strategically while acting tactically against their objectives and aren’t afraid to respond to their "customers" proactively.

So, we’ve got this fantastic technological, economic, and cultural transformation occurring over the last FIVE YEARS (at least,) and the best we’ve seen as a response from most traditional security vendors is that they have thinly marketed their solutions as "virtualization ready" or "virtualization aware" when in fact, these are simply hollow words for how to make their existing "square" products fit into the "round" holes of a problem space that virtualization exposes and creates.

Firewalls, IDS/IPSs, UTM, NAC, DLP — all of them have limited visibility in this rapidly "re-perimeterized" universe in which our technology operates, and in most cases we’re busy looking at uninteresting and practically non-actionable things anyway.  As one of my favorite mentors used to say, "we’re data rich, but information poor."

The vendors in these example markets — with or without admission — are all really worried about what virtualization will do to their already shrinking relevance.  So we wait.

Doctor, it hurts when I do this…

VMSafe represents a huge opportunity for these vendors to claw their way back to life, making their solutions relevant once more, and perhaps even more so.

Most of the companies who have so far signed on to VMsafe will, as I mentioned previously, need to align roadmaps and release new or modified versions of their product lines to work with the new API’s and management planes. 

This is obviously a big deal, but one that is unavoidable for these companies — most of which are clumsy and generally not agile or responsive to third parties.  However, you don’t get 20 of some of the biggest "monoliths" of the security world scrambling to sign up for a program like VMsafe just for giggles — and the reality is that the platform version of VMware’s virtualization products that will support this technology aren’t even available yet.

I am willing to wager that you will, in extremely short time given VMware’s willingness to sign on new partners, see many more vendors flock to the program.  I further maintain that despite their vehement denial, NAC vendors (with pressure already from the oncoming tidal wave of Microsoft’s NAP) will also adapt their wares to take advantage of this technology for reasons I’ve outlined here.

They literally cannot afford not to.

I am extremely interested in what other virtualization vendors’ responses will be — especially Citrix.  It’s pretty clear what Microsoft has in mind.  It’s going to further open up opportunities for networking vendors such as Cisco, F5, etc., and we’re going to see the operational, technical, administrative, "security" and governance lines blur even further.

Welcome back from the dead, security vendors, you’ve got a second chance in life.  I’m not sure it’s warranted, but it’s "natural" even though we’re going to end up with a very interesting Frankenstein of a "solution" over the long term.

The Doctor prescribes an active lifestyle, healthy marketing calisthenics, a diet with plenty of roughage, and jumping back on the hamster wheel of pain for exercise.

/Hoff

VMware’s VMsafe: The Good, the Bad, the Bubbly…

February 28th, 2008 4 comments

Back in August before VMworld 2007, I wrote about the notion that given Cisco’s investment in VMware, we’d soon see the ability for third parties to substitute their own virtual switches for VMware’s. 

Further, I discussed the news that VMware began to circulate regarding their release of an API originally called Vsafe that promised to allow third party security and networking applications to interact with functions exposed by the Hypervisor. 

Both of these points really put some interesting wrinkles — both positive and potentially negative — in how virtualization (and not just VMware’s) will continue to evolve as the security and networking functions evolve along with it.

What a difference a consonant makes…

Several months later, they’ve added the consonant ‘m’ to the branding and VMsafe is born.  Sort of.  However, it’s become more than ‘just’ an API.  Let’s take a look at what I mean.

What exactly is VMsafe? Well, it’s a marketing and partnership platform, a business development lever, an architecture, a technology that provides an API and state of mind:

VMsafe is a new program that leverages the properties of VMware Infrastructure to protect machines in ways previously not possible with physical machines. VMsafe provides a new security architecture for virtualized environments and an application program interface (API)-sharing program to enable partners to develop security products for virtualized environments.

What it also provides is a way to make many existing older and consolidating security technologies relevant in the virtualized world given that today their value is suspect in the long term without it:

The VMsafe Security Architecture provides an open approach that gives security vendors the ability to leverage the inherent properties of virtualization in their security offerings.

Customers running their businesses on VMware Infrastructure will be assured that they are getting the best protection available – even better than what they have on physical infrastructure.

VMsafe adds a new layer of defense that complements existing physical security solutions such as network and host protection, shielding virtual machines from a variety of network, host and applications threats. This additional layer of protection can help enterprise organizations to dramatically increase the security posture of their IT environments.

There is then hope that a good deal of the visibility we lost in the limited exposure existing security solutions have across virtualized environments, we may get back.

Of course, this good news will be limited to those running VMware’s solutions and re-tooled versions of your favorite security vendor’s software.  What that means for other virtualization platforms is a little more suspect.  Since it requires the third party software to be re-written/adapted to use the VMsafe API, I can’t see many jumping on the wagon to support 3-4 VMM platforms.

Ack!  This is why we need a virtualization standard like OVF and a cross-platform signaling, telemetry and control plane definition!

So how does VMsafe work?

VMsafe enables third-party security products to gain the same visibility as the hypervisor into the operation of a virtual machine to identify and eliminate malware, such as viruses, trojans and key-loggers. For instance, security vendors can leverage VMsafe to detect and eliminate malware that is undetectable on physical machines. This advanced protection is achieved through fine-grained visibility into the virtual hardware resources of memory, CPU, disk and I/O systems of the virtual machine that can be used to monitor every aspect of the execution of the system and stop malware before it can execute on a machine to steal data.

VSAFE enables partners to build a virtualization-aware security solution in the form of a security virtual machine that can access, correlate and modify information based on the following virtual hardware:

  • Memory and CPU: VMsafe provides introspection of guest VM memory pages and cpu states.
  • Networking: Network packet-filtering for both in-hypervisor and within a Security VM.
  • Process execution (guest handling): in-guest, in-process APIs that enable complete monitoring and control of process execution.
  • Storage: Virtual machine disk files (VMDK) can be mounted, manipulated and modified as they persist on storage devices.
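Since the VMsafe API itself wasn’t public at the time of writing, here’s a purely hypothetical Python sketch of the kind of thing the "Memory and CPU" introspection bullet implies: a security VM scanning another guest’s memory pages for a known malware signature.  Every name here — and the signature itself — is made up for illustration; none of this reflects the real API.

```python
# Hypothetical sketch only: VMsafe's actual API was not public at the time
# of writing.  This illustrates the *concept* of guest memory introspection:
# a security VM inspecting another guest's memory pages for known signatures.

KNOWN_SIGNATURES = {
    b"\xde\xad\xbe\xef": "example-keylogger",  # made-up signature, illustration only
}

def scan_guest_pages(pages):
    """Scan an iterable of (page_number, bytes) tuples for known signatures.

    In a real introspection API the pages would come from the hypervisor's
    view of guest memory; here they are plain byte strings.
    """
    hits = []
    for page_no, data in pages:
        for sig, name in KNOWN_SIGNATURES.items():
            if sig in data:
                hits.append((page_no, name))
    return hits

# Simulated guest memory: page 2 contains the "malicious" byte pattern.
guest_pages = [
    (0, b"\x00" * 4096),
    (1, b"GET /index.html HTTP/1.1" + b"\x00" * 4072),
    (2, b"\x90" * 100 + b"\xde\xad\xbe\xef" + b"\x90" * 100),
]

print(scan_guest_pages(guest_pages))  # -> [(2, 'example-keylogger')]
```

The point of the sketch is the vantage point, not the string matching: the scanner runs outside the guest being inspected, which is exactly the visibility that host-resident malware can’t easily subvert.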

So where do we go from here?

With nothing more than a press release and some flashy demos, it’s a little early to opine on the extensibility of VMsafe, but I am encouraged by the fact that we will have some more tools in the arsenal, even if they are, in essence, re-branded versions of many that we already have.

However, engineering better isolation combined with brokered visibility and specific authorized/controlled access to the VMM is a worthy endeavor that yields all sorts of opportunities, but given my original ramblings, it also makes me a bit nervous.

Alessandro from the virtualization.info blog gave us a description of the demo given at VMworld Europe that illustrates some of this new isolation and anti-malware capability:

A Windows XP virtual machine gets attacked with a malicious code that copies away corporate documents but another virtual machine with security engine is able to transparently recognize (a virtual memory scan through VMsafe APIs access) the threat and stop it before it compromises the guest OS.

Steps in the right direction, for sure.  Since details regarding the full extent of anti-malware capabilities are still forthcoming, I will reserve judgment until I get to speak with Nand @ VMware to fully understand the scope of the capabilities.

I am sure we will see more claims surface soon suggesting that technology such as this will produce virtualized environments that are "more secure" than their non-virtualized counterparts.  The proof is in the pudding, as they say.  At this point, what we have is a very tantalizing recipe.

I’d suggest we’ll also see a fresh batch of rootkit discussions popping up soon…I’d really like to see the documentation surrounding the API.  I wonder how much it costs to sign up and be authorized to participate with VMsafe?

Some of the Determina functionality is for sure making its way into VMsafe and it will be really interesting to see who will be first out of the gate to commercially offer solutions that will utilize VMsafe once it’s available — and it’s a little unclear when that will be.

Given who demoed at VMworld Europe, I’d say it’s safe to bet that McAfee will be one of the first along with some of the more agile startups that might find it easier to include in their products.  The initial lineup of vendor support makes up some of the who’s-who in the security space — all except for, um, Cisco.  Also, where the heck is SourceFire?

When do the NAC and DLP vendors start knocking?

More detailed analysis soon.

/Hoff

Categories: Virtualization, VMware Tags:

News Flash: If You Don’t Follow Suggested Security Hardening Guidelines, Bad Things Can Happen…

February 26th, 2008 10 comments

The virtualization security (VirtSec) FUD meter is in overdrive this week…

Part I:
     So, I was at a conference a couple of weeks ago.  I sat in on a lot of talks.
     Some of them had demos.  What amazed me about these demos is that in
     many cases, in order for the attacks to work, it was disclosed that the
     attack target was configured by a monkey with all defaults enabled and no
     security controls in place.  "…of course, if you checked this one box, the
     exploit doesn’t work…" *gulp*

Part II:
     We’ve observed a lot of interesting PoC attack demonstrations such as those at shows being picked
     up by the press and covered in blogs and such.  Many of these stories simply ham it up for the
     sensational title.  Some of the artistic license and inaccuracies are just plain recockulous. 
     That’s right.  There’s ridiculous, then there’s recockulous. 

Example:
     Here’s a by-line from an article which details the PoC attack/code that Jon Oberheide used to show
     how, if you don’t follow VMware’s (and the CIS benchmark) recommendations for securing your
     VMotion network, you might be susceptible to interception of traffic and bad things since — as
     VMware clearly states — VMotion traffic (and machine state) is sent in the clear.

     This was demonstrated at BlackHat DC and here’s how the article portrayed it:

Jon Oberheide, a researcher and PhD candidate at the University of Michigan, is releasing a proof-of-concept tool called Xensploit that lets an attacker take over the VM’s hypervisor and applications, and grab sensitive data from the live VMs.

     Really?  Take over the hypervisor, eh?  Hmmmm.  That sounds super-serious!  Oh, the humanity!

However, here’s how the VMTN blog rationally describes the situation in a measured response that does it better than I could:

Recently a researcher published a proof-of-concept called Xensploit which allows an attacker to view or manipulate a VM undergoing live migration (i.e. VMware’s VMotion) from one server to another. This was shown to work with both VMware’s and Xen’s version of live migration. Although impressive, this work by no means represents any new security risk in the datacenter. It should be emphasized this proof-of-concept does NOT “take over the hypervisor” nor present unencrypted traffic as a vulnerability needing patching, as some news reports incorrectly assert. Rather, it is a reminder of how an already-compromised network, if left unchecked, could be used to stage additional severe attacks in any environment, virtual or physical. …

Encryption of all data-in-transit is certainly one well-understood mitigation for man-in-the-middle attacks.  But the fact that plenty of data flows unencrypted within the enterprise – indeed perhaps the majority of data – suggests that there are other adequate mitigations. Unencrypted VMotion traffic is not a flaw, but allowing VMotion to occur on a compromised network can be. So this is a good time to re-emphasize hardening best practices for VMware Infrastructure and what benefit they serve in this scenario.
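To make the "sent in the clear" point concrete, here’s a small illustrative Python sketch — emphatically not Xensploit, and with a fabricated "capture" rather than real traffic: if migration traffic carrying guest memory crosses a compromised segment unencrypted, finding sensitive data in it is no harder than a substring search over the captured bytes.

```python
# Illustrative sketch, not Xensploit: if live-migration traffic carrying guest
# memory is unencrypted, an attacker on that segment can search the captured
# stream for sensitive data.  The "capture" here is just a byte string.

def find_secrets(captured_stream, markers):
    """Return (offset, marker) pairs for each marker found in the capture."""
    found = []
    for marker in markers:
        offset = captured_stream.find(marker)
        if offset != -1:
            found.append((offset, marker.decode()))
    return found

# Simulated cleartext migration traffic containing an in-memory credential.
capture = b"\x00" * 512 + b"password=hunter2" + b"\x00" * 512

print(find_secrets(capture, [b"password="]))  # -> [(512, 'password=')]
```

Which is exactly why the mitigation is network hygiene rather than a patch: put VMotion on an isolated, trusted segment so there is no adversary sitting on the wire in the first place.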

I’m going to give you one guess as to why this traffic is unencrypted…see if you can guess right in the comments.

Now, I will concede that this sort of thing represents a new risk in the datacenter if you happen to not pay attention to what you’re doing, but I think Jon’s PoC is a great example of substantiating why you should follow both common sense, security hardening recommendations and NOT BELIEVE EVERYTHING YOU READ.

If you don’t stand for something, you’ll fall for anything.

/Hoff

VMWare Hosted Virtualization Platform Vulnerability = Guest System Break-Out via Shared Folders…

February 25th, 2008 4 comments


There’s a little bit of serendipity floating about today and timing is everything.

Ed Skoudis (IntelGuardians) and I were chatting last week at ShmooCon regarding his previous research on VM guest escapes in hosted platforms and I raised a concern regarding my use of Parallels shared folders between my hosted XP installation and the underlying OSX host operating system.

I reckoned that this would be a very interesting vector for potential exploitation as it provides a direct pipeline to the underlying host OS and filesystem. 

While this bit of news isn’t about Parallels, it is about VMware’s comparable products (Workstation, ACE, Player, etc.) and it exploits the same vector.  From Computerworld:


February 24, 2008 (Computerworld)  A critical vulnerability in VMware Inc.’s virtualization software for Windows lets attackers escape the "guest" operating system and modify or add files to the underlying "host" OS, the company has acknowledged.

As of Sunday, there was no patch available for the flaw, which affects VMware’s Windows client virtualization programs, including Workstation, Player and ACE. The company’s virtual machine software for Windows servers, and for Mac- and Linux-based hosts, are not at risk.

The bug was reported by Core Security Technologies, makers of the penetration testing framework CORE IMPACT, said VMware in a security alert issued last Friday. "Exploitation of this vulnerability allows attackers to break out of an isolated Guest system to compromise the underlying Host system that controls it," claimed Core Security.

According to VMware, the bug is in the shared folder feature of its Windows client-based virtualization software. Shared folders lets users access certain files — typically documents and other application-generated files — from the host OS and any virtual machine on that physical system.

"On Windows hosts, if you have configured a VMware host-to-guest shared folder, it is possible for a program running in the guest to gain access to the host’s complete file system and create or modify executable files in sensitive locations," confirmed VMware.

There is currently no patch available.  The mitigation strategy is to disable shared folders.
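In practice the mitigation amounts to switching shared folders off per VM.  As a sketch only — I haven’t verified these exact parameter names against VMware’s advisory, so treat them as assumptions and confirm against the hardening guidance — the relevant .vmx settings look something like this:

```
# Hypothetical .vmx fragment -- verify parameter names against VMware's
# current hardening guidance before relying on them.

# Mark any configured shared folder as not present:
sharedFolder0.present = "FALSE"

# Or disallow shared folders for this VM entirely:
sharedFolder.maxNum = "0"
```

The same effect can be had through the product UI (disabling shared folders in the VM’s settings); editing the .vmx just makes the change auditable and scriptable across a fleet of client VMs.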

It’s important to reiterate that this vulnerability does not affect VMware’s Type 1 (bare metal) virtualization platforms such as ESX.  However, on Friday, VMware released fixes for 5 vulnerabilities in ESX, some of which could be exploited to bypass security controls, gain access to data or result in denial of service.

/Hoff

{image from Anthony Martin Escapes}

UPDATE: Coverage of this is being hammed up quite a bit in the press to sound like it’s going to shake the very foundations of virtualization…not so much.  It’s an issue that is reasonably easy to address and represents what can be generally referred to as a relatively small attack surface.  Yes, it reinforces the need to think about VirtSec in the Type 2 (hosted) virtualization world, but as I said in the comments, it really depends upon how and why you’ve deployed client-side virtualization.

Categories: Virtualization, VMware Tags: