Archive for the ‘Virtualization’ Category

Can We End the “Virtualization Means You’re Less/More Secure” Intimation

September 22nd, 2007 1 comment

I’d like to frame this little ditty with a quote that Marcus Ranum gave in a face-off between him and Bruce Schneier in this month’s Information Security Magazine, wherein he states:

"Will the future be more secure? It’ll be just as insecure as it
possibly can, while still continuing to function. Just like it is today."

Keep that in mind as you read this post on virtualization security, won’t you?

Over the last few months we’ve had a serious malfunction in the supply chain management of Common Sense.  It’s simply missing from the manifest in many cases.

Such is the case wherein numerous claims of undelivered security common sense are being filed: instead of shipping clue in boxes filled with virtualization goodness, all we get are those styrofoam marketing peanuts suggesting that we’re either "more" or "less" secure.  More or less compared to what, exactly?

It’s unfortunate that it’s still not clear enough at this point that we’re at a crossroads with virtualization.

I believe it’s fair to suggest that the majority of us know the technology represents fantastic opportunities, but vendors and customers alike continue to cover their ears, eyes and mouths and ignore certain realities inherent in the adoption of any new application or technology when it comes to assessing the risk associated with deploying it.

Further, generalizations regarding virtualization as being "more" or "less" secure than non-virtualized platforms represent an exercise in tail-chasing; more and more, these seem to be specious claims delivered in many cases without substantiated backing…

Here’s a perfect example of this sort of thing from a CMP ChannelWeb story titled "Plotting Security Strategy in a Virtual World":

"In many ways, securing virtual servers is little different from securing physical servers, said Patrick Lin, senior director of product management at VMware."

We’ve talked about this before.  This is true, except (unfortunately) for the fact that we’ve lost a tremendous amount of visibility from the network security practitioner’s perspective: now the "computer is the network," and many of the toolsets and technologies are not adapted to accommodate a virtualized instantiation of controls and detection mechanisms such as firewalls, IDS/IPS and other typical gateway security functions.
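
To see why that loss of visibility matters, here’s a contrived little model of the problem (hostnames invented, no VMware APIs involved): a network IDS sitting on the physical wire only ever sees frames that leave the box, so VM-to-VM traffic switched entirely inside the vSwitch never crosses its field of view.

    # Toy model: which flows does a physical, on-the-wire IDS actually see?
    vms_on_host = {"web01", "app01", "db01"}        # guests sharing one vSwitch
    flows = [
        ("web01", "app01"),      # VM-to-VM, switched inside the vSwitch
        ("app01", "db01"),       # VM-to-VM, switched inside the vSwitch
        ("web01", "10.1.1.50"),  # leaves the host via the physical uplink
    ]

    def visible_to_physical_ids(src, dst):
        """A wire-side IDS only sees traffic that traverses the physical NIC."""
        return not (src in vms_on_host and dst in vms_on_host)

    for src, dst in flows:
        print(src, "->", dst, "seen by IDS:", visible_to_physical_ids(src, dst))
    # Only the third flow is visible; the first two never leave the box.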

"At the end of the day, they are just Windows machines," Lin said. "When you turn a physical server into a virtual server, it’s no more vulnerable that it was before. There are not new avenues of attack all of a sudden."

That statement is false, misleading and ironic given the four vulnerabilities we just saw reported over the last three days, vulnerabilities introduced onto a virtual host thanks to the very VMware software that enables the virtualization capabilities.

If you aren’t running VMware’s software, then you’re not vulnerable to exploit from these vulnerabilities.  This is an avenue of attack.  This represents a serious vulnerability.  This is a new threat/attack surface "all of a sudden."

[Ed: I simply had to add this excerpt from Kris Lamb’s fantastic blog post (IBM/ISS X-Force) that summarized empirically the distribution of VMware-specific vulnerabilities from 1999-2007.]

We pulled all known vulnerabilities across all of VMware’s products since 1999. I then focused on categorizing by year, by severity, by impact, by vector and by whether the vulnerability was in VMware’s proprietary first-party components or in third-party components that they use in their products.

Once I pulled all the data, sorted and structured it in various ways, it summarized like this:

VMware Vulns by Year   Total Vulns   High Risk Vulns   Remote Vulns   First-Party Code   3rd-Party Code
Vulns in 1999                    1              1              0                  1                0
Vulns in 2000                    1              1              0                  1                0
Vulns in 2001                    2              0              0                  2                0
Vulns in 2002                    1              1              1                  1                0
Vulns in 2003                    9              5              5                  5                4
Vulns in 2004                    4              2              0                  2                2
Vulns in 2005                   10              5              5                  4                6
Vulns in 2006                   38             13             27                 10               28
Vulns in 2007                   34             18             19                 22               12
TOTALS                         100             46             57                 48               52


So what are some of the interesting trends?

  • There have been 100 vulnerabilities disclosed across all of VMware’s virtualization products since 1999.
  • 57% of the vulnerabilities discovered in VMware products are remotely accessible, while 46% are high risk vulnerabilities.
  • 72% of all the vulnerabilities in VMware virtualization products have been discovered since 2006.
  • 48% of the vulnerabilities in VMware virtualization products are in first-party code and 52% are in third-party code that their non-hosted Linux-based products use.
  • Starting in 2007, the number of vulnerabilities discovered in first-party VMware components almost doubled that of vulnerabilities discovered in third-party VMware components. 2007 is the first time where first-party VMware vulnerabilities were greater than third-party VMware vulnerabilities.

How do I interpret these trends?

  • It is clear that with the increase in popularity, relevance and deployment of virtualization starting in 2006, vulnerability discovery energies have increasingly focused on finding ways to exploit virtualization technologies.
  • Combine the vulnerabilities in virtualization software, vulnerabilities in operating systems and applications that still exist independent of the virtualization software, the new impact of virtual rootkits and break-out attacks with the fact that in a virtual environment all your exploitation risks are now consolidated into one physical target where exploiting one system could potentially allow access and control of multiple systems on that server (or the server itself). In total, this adds up to a more complex and risky security environment.
  • Virtualization does not equal security!

I’ve already blog-leeched enough of Kris’ post, so please read his blog entry to see the remainder of his findings, but I think this does a really good job of putting to rest some of the FUD associated with this point.
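
If you’d like to sanity-check that arithmetic yourself, here’s a quick throwaway sketch (Python, with the counts transcribed from Kris’ table above) that recomputes the percentages he cites:

    # Counts transcribed from Kris Lamb's table above:
    # year: (total, high_risk, remote, first_party, third_party)
    vulns_by_year = {
        1999: (1, 1, 0, 1, 0),
        2000: (1, 1, 0, 1, 0),
        2001: (2, 0, 0, 2, 0),
        2002: (1, 1, 1, 1, 0),
        2003: (9, 5, 5, 5, 4),
        2004: (4, 2, 0, 2, 2),
        2005: (10, 5, 5, 4, 6),
        2006: (38, 13, 27, 10, 28),
        2007: (34, 18, 19, 22, 12),
    }

    total  = sum(v[0] for v in vulns_by_year.values())
    high   = sum(v[1] for v in vulns_by_year.values())
    remote = sum(v[2] for v in vulns_by_year.values())
    first  = sum(v[3] for v in vulns_by_year.values())
    third  = sum(v[4] for v in vulns_by_year.values())
    since_2006 = sum(v[0] for year, v in vulns_by_year.items() if year >= 2006)

    print("total:", total)                              # 100
    print("remote:", remote / total)                    # 0.57
    print("high risk:", high / total)                   # 0.46
    print("first-party:", first / total)                # 0.48
    print("third-party:", third / total)                # 0.52
    print("disclosed since 2006:", since_2006 / total)  # 0.72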

Let’s continue to deconstruct Mr. Lin’s commentary:

Even so, server virtualization vendors are taking steps to ensure that their technology is itself up-to-date in terms of security.

OK, I’ll buy that based upon my first hand experience thus far.  However, here’s where the example takes a bit of a turn as it seeks to prove that a certification somehow suggests that a virtualization platform is more secure:

Lin said VMware Server ESX is currently certified at Common Criteria Level 2 (CCL2), a security standard, and is in the process of applying for CCL4 for its Virtual Infrastructure 3 (VI3) product suite.

Just to be clear, Common Criteria Evaluation Assurance Levels don’t guarantee the security of a product; they demonstrate that, under controlled evaluation, a product configured in a specific way meets certain assurance requirements, providing a higher level of confidence that the product’s security functions will be performed correctly and effectively.

CC certification does not mean that a system isn’t vulnerable to, or is resilient against, many prevalent classes of attack.  Also, in many ways, CC provides the opposite of being "up-to-date" from a security perspective, because once a system has been certified, modifying it substantially with security code additions or even patches renders the certification invalid and requires re-test.

If you’re interested, here’s a story describing some interesting aspects of CC certification and what it may mean for a product’s security and resilience titled "Common Criteria is Bad for You."

I’m working very hard to pull together a document which outlines exposure and risk associated with deploying virtualization with as much contextual and temporal relevance as I can muster.  Numerous other people are, also.  This way we can quantify the issues at hand rather than listening to marketing squawk boxes yelp from their VoxHoles about how secure/insecure virtualization platforms are without examples or solutions…

In the meantime, as a friendly piece of advice, might I suggest that virtualization vendors such as VMware take a bit more care with how security information regarding virtualization is communicated?

Statements like those above, made by product managers who are not security domain experts, only serve to erode the trust and momentum you’re trying to gain.

There are certainly areas in which virtualization provides interesting, useful, unique and (in some cases) enhanced security compared to non-virtualized environments.  The converse also holds true.

Let’s work on communicating these differences in specifics instead of generalities.

/Hoff

Categories: Virtualization, VMware Tags:

Virtualization Threat Surface Expands: We Weren’t Kidding…

September 21st, 2007 No comments

First the Virtualization Security Public Service Announcement:

By now you’ve no doubt heard that Ryan Smith and Neel Mehta from IBM/ISS X-Force have discovered vulnerabilities in VMware’s DHCP implementation that could allow for "…specially crafted packets to gain system-level privileges" and allow an attacker to execute arbitrary code on the system with elevated privileges thereby gaining control of the system.   

Further, Dark Reading details that Rafal Wojtczuk (whose last name’s spelling is a vulnerability in and of itself!) from McAfee discovered the following vulnerability:

A vulnerability that could allow a guest operating system user with administrative privileges to cause memory corruption in a host process, and potentially execute arbitrary code on the host. Another fix addresses a denial-of-service vulnerability that could allow a guest operating system to cause a host process to become unresponsive or crash.

…and yet another from the Goodfellas Security Research Team:

An additional update, according to the advisory, addresses a security vulnerability that could allow a remote hacker to exploit the library file IntraProcessLogging.dll to overwrite files in a system. It also fixes a similar bug in the library file vielib.dll.

It is important to note that these vulnerabilities had already been mitigated by VMware at the time of this announcement.  Further information regarding mitigation of all of these vulnerabilities can be found here.


You can find details regarding these vulnerabilities via the National Vulnerability Database here:

CVE-2007-0061 – The DHCP server in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows remote attackers to execute arbitrary code via a malformed packet that triggers "corrupt stack memory."

CVE-2007-0062 – Integer overflow in the DHCP server in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows remote attackers to execute arbitrary code via a malformed DHCP packet that triggers a stack-based buffer overflow.

CVE-2007-0063 – Integer underflow in the DHCP server in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows remote attackers to execute arbitrary code via a malformed DHCP packet that triggers a stack-based buffer overflow.

CVE-2007-4496 – Unspecified vulnerability in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows authenticated users with administrative privileges on a guest operating system to corrupt memory and possibly execute arbitrary code on the host operating system via unspecified vectors.

CVE-2007-4155 – Absolute path traversal vulnerability in a certain ActiveX control in vielib.dll in EMC VMware 6.0.0 allows remote attackers to execute arbitrary local programs via a full pathname in the first two arguments to the (1) CreateProcess or (2) CreateProcessEx method.
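
If you want a quick way to eyeball whether a given install predates the fixes, the advisories above boil down to build-number thresholds.  Here’s a rough sketch of that check (product names and fixed build numbers taken from the CVE text above; the example version/build at the bottom is made up, and you should always defer to VMware’s advisory rather than this toy):

    # Minimum fixed builds, per the CVE entries above (illustrative only).
    FIXED_BUILDS = {
        ("Workstation", "5.x"): 56455,  # fixed in 5.5.5 Build 56455
        ("Workstation", "6.x"): 55017,  # fixed in 6.0.1 Build 55017
        ("Player", "1.x"): 56455,       # fixed in 1.0.5 Build 56455
        ("Player", "2.x"): 55017,       # fixed in 2.0.1 Build 55017
        ("ACE", "1.x"): 54075,          # fixed in 1.0.3 Build 54075
        ("ACE", "2.x"): 55017,          # fixed in 2.0.1 Build 55017
        ("Server", "1.x"): 56528,       # fixed in 1.0.4 Build 56528
    }

    def appears_vulnerable(product, major_line, build):
        """Return True if the installed build predates the fixed build."""
        fixed = FIXED_BUILDS.get((product, major_line))
        if fixed is None:
            return False  # unknown product/line; go read the advisory
        return build < fixed

    # Hypothetical example: a Server 1.x install at build 50000 predates 56528.
    print(appears_vulnerable("Server", "1.x", 50000))  # True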


I am happy to see that VMware moved on these vulnerabilities (I do not have the timeframe of this disclosure and mitigation available.)  I am convinced that their security team and product managers truly take this sort of thing seriously.

However, this just goes to show you that as virtualization platforms move further into mainstream adoption, exploitable vulnerabilities will continue to follow as those who follow the money begin to pick up the scent.

This is another one of those statements that’s going to make me a victim of my own Captain Obvious Award, but it seems like we’ve been fighting this premise for too long now.  I recognize that this is not the first set of security vulnerabilities we’ve seen from VMware, but I’m going to highlight them for a reason.

It seems that, due to a lack of well-articulated vulnerabilities extending beyond theoretical assertions or PoCs, the sensationalism of research such as Blue Pill has desensitized folks to the emerging realities of virtualization platform attack surfaces.

I’ve blogged about this over the last year and a half, with the latest found here and an interview here.  It’s really just an awareness campaign.  One I’m more than willing to wage given the stakes.  If that makes me the noisy canary in the coal mine, so be it.

These very real examples are why I feel it’s ludicrous to take seriously any comments that suggest by generalization that virtualized environments are "more secure" by design; it’s software, just like anything else, and it’s going to be vulnerable.

I’m not trying to signal that the sky is falling, just the opposite.  I do, however, want to make sure we bring these issues to your attention.

Happy Patching!

/Hoff

An Excellent Risk-Focused Virtualization Security Assessment & Hardening Document

September 17th, 2007 1 comment

Reader Colin was kind enough to forward me a link to a great security and hardening document which begins to address many of the elements I posted in my recent "Epiphany…" blog entry regarding virtualization and hardening documentation.

This document was produced by the folks at XtraVirt, who describe themselves as "…a company of innovative experts dedicated to VMware virtualisation, storage, operating systems and deployment methods."  These guys maintain an impressive cache of tools, whitepapers and commercial products focused on virtualization, many of which are available for download.

I’m rather annoyed and embarrassed that it took me this long to discover this site and its resources!

As a wonderful study in serendipity, I’ve recently signed up to contribute to the follow-on to the CIS Virtualization Benchmark that specifically addresses VMware’s ESX environment.   This draft is under construction now, and it represents a good first pass, but continues to need (IMHO) some additional focus on the network/vSwitch elements.

I respectfully suggest that many of the contents of the XtraVirt document need to make their way into the CIS draft.

One of the other really interesting approaches this document takes is to classify each of the potential hardening elements by risk, expressed as a function of threat, likelihood, potential impact and countermeasure, measured against the impact to C, I and A.

Secondly, there is a much-needed section on the VirtualSwitch and network constituent functions.
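
To make the first of those approaches (the risk classification) concrete, here’s a purely hypothetical sketch of how such a risk-scored hardening item might be represented; the field names, 1-to-5 scales and scoring formula below are mine for illustration, not XtraVirt’s actual schema:

    from dataclasses import dataclass

    @dataclass
    class HardeningItem:
        """One hardening recommendation, scored by threat, likelihood and
        impact to Confidentiality/Integrity/Availability (invented scales)."""
        component: str        # e.g. "ESX Server - Virtual Networking Layer"
        threat: str
        likelihood: int       # 1 (rare) .. 5 (almost certain)
        impact_c: int         # impact to Confidentiality, 1..5
        impact_i: int         # impact to Integrity, 1..5
        impact_a: int         # impact to Availability, 1..5
        countermeasure: str

        def risk_score(self) -> int:
            # Simple likelihood x worst-case-impact product, for ranking only.
            return self.likelihood * max(self.impact_c, self.impact_i, self.impact_a)

    item = HardeningItem(
        component="ESX Server - Virtual Networking Layer",
        threat="Guest NIC in promiscuous mode sniffing traffic on a shared vSwitch",
        likelihood=2, impact_c=5, impact_i=2, impact_a=1,
        countermeasure="Reject promiscuous mode in the vSwitch security policy",
    )
    print(item.risk_score())  # 10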

Here’s the snapshot of the XtraVirt effort:

One of the more difficult challenges found when introducing virtualisation technologies to a new environment, whether it be your own or as a consultant to a client, can be gaining the understanding and support of the IT Security team, especially so if they haven’t been exposed to virtualisation technologies in the past.

As a Solutions Architect having faced this situation on several occasions and being tied up in weeks of claim and counter-claim about how secure VMware VI3 was, I tried several approaches; one was to simply email the published VMware security documents to them, and two was to sit down and explain why and how VI3 was inherently secure.

Both of these approaches could take weeks, and at times frustration on both sides could lead to unnecessary discussions.  Although the VMware documents are excellent and pitched at the right level, I found that security team engagement could be limited and it wasn’t always enough to simply provide these on their own as the basis for a solution.

So the idea was sown to create the ‘VMware® VI3 Security Risk Assessment Template’ that could be repeatedly used as the basis for any VI3 design submission. There’s nothing particularly clever about it, the information is already out there, I just felt it needed to be presented in a customised way for IT Security review and approval.

This MS Word document template is designed to:

· Provide detail around security measures designed into each major component of VI3

· Provide a ‘best practice’ security framework for VI3 designs that can be repeated again and again

· Detail real world scenarios that IT Security personnel can relate to their environment, including built-in countermeasures and additional configuration options.

· Significantly reduce the time and stress involved with gaining design approvals.

The idea is to take your own VI3 design and apply it to each of the major VI3 components in this template:

· ESX Server – Service Console

· ESX Server – Kernel

· ESX Server – Virtual Networking Layer

· Virtual Machines

· Virtual Storage

· VirtualCenter

This means that in most cases it’s just a case of filling in the gaps, and putting a stake in the ground as to which additional configuration options you wish to implement.  In all cases you end up with a document that should relate to your design, and the IT Security teams have a specific proposal which details all the things they want to see and understand.

The first time I used it (on a particularly tough Security Advisor who had never seen VMware products, I might add) I had nothing but great feedback, which allowed my low level design to proceed with confidence and saved weeks of explanation and negotiation.

I’ve reached out to the guys at XtraVirt to both thank them and to gain some additional insight into their work.

I think this is a great effort.

Oh, did you want the link? 😉

/Hoff


Categories: Virtualization, VMware Tags:

Epiphany: For Network/InfoSec Folks, the Virtualization Security Awareness Problem All Starts With the vSwitch…

September 13th, 2007 9 comments

It’s all about awareness, and as you’ll come to read, there’s just not enough of it when we talk about the security implications of virtualizing our infrastructure.  We can fix this, however, without resorting to FUD and without vendors knee-jerking us into spin battles. 

I’m attending VMworld 2007 in San Francisco, looking specifically for security-focused information in the general sessions and hands-on labs.  I attended the following sessions and a hands-on lab yesterday:

  • Security Hardening and Monitoring of VMware Infrastructure 3 (Hands-on Lab)
  • Security Architecture Design and Hardening VMware Infrastructure 3 (General Session)
  • Using the Secure Technical Implementation Guide (STIG) with VMware Infrastructure 3 (General Session)

I had high hopes that the content and focus of these sessions would live up to the hype surrounding the statements made by VMware reps at the show.  As Dennis Fisher from SearchSecurity wrote, there are some powerful statements coming from the VMware camp on the security of virtualized environments and how they are safer than non-virtualized deployments.  These are bold and, in my opinion, dangerously generalized statements to make when backed up with examples which beg for context:

To help assuage customers’ fears, VMWare executives and security engineers are going on the offensive and touting the company’s ESX Server as a more secure alternative to traditional computing setups.

Despite the complexity of virtualized environments, they are inherently more secure than normal one-to-one hardware and operating system environments because of the hypervisor’s ability to enforce isolation among the virtual machines, Mukundi Gunti, a security engineer at VMWare said in a session on security and virtualization Tuesday.

"It’s a much more complex architecture with a lot of moving parts. There are a lot of misconceptions about security and virtualization," said Jim Weingarten, senior technical alliances manager at VMWare, who presented with Gunti. "Virtual machines are safer."

So I undertook my mission of better understanding how these statements could be substantiated empirically and attended the sessions/labs with as much of an open mind as I could given the fact that I’m a crusty old security dweeb.

Security Hardening and Monitoring of VMware Infrastructure 3 (Hands-on Lab)

The security hardening and monitoring hands-on lab was very basic and focused on general Unix hardening of the underlying RH OS as well as SWATCH log monitoring.  The labs represented the absolute minimum that one would expect to see performed when placing any Unix/Linux-based host into a network environment.  Very little was specific to virtualization.  This session presented absolutely nothing new or informative.

Security Architecture Design and Hardening VMware Infrastructure 3 (General Session)
The security architecture design and hardening session was at the very end of the day and was packed with attendees.  Unfortunately, it was very much a recap of the hands-on lab, but it did touch on a couple of network-centric design elements (isolating the VMotion/VIC management ports onto separate NICs/VLANs, etc.) as well as some very interesting information regarding the security of the virtual switch (vSwitch).  More on this below.

The Epiphany
This is when it occurred to me that, given the general roles and responsibilities of the attendees, most of whom are not dedicated network or security folks, the disconnect between the "Leviathan Force" (the network and network security admins) and the "SysAdmins" (the server/VMM administrators) was little more than a classic turf battle mashed up with differing security mindsets, combined with a lack of network- and information-security-focused educational campaigning on the part of VMware.

After the talk, I got to spend about 30 minutes chatting with VMware’s Kirk Larsen (Engineering Product Security Officer) and Banjot Chanana (Product Manager for Platform Security) about the lack of network-centric security information in the presentations and how we could fix that.

What I suggested was that since we now see the collapse and convergence of the network and compute stacks into the virtualization platforms, the operational impact of what that means to the SysAdmins and Network/Information Security folks is huge.

The former now own the keys to the castle whilst the latter now "enjoy" the loss of visibility and operational control.  Because the network and InfoSec folks aren’t competently trained in the operation of the VM platforms, and the SysAdmins aren’t competently trained in securing them holistically (from the host through to the network), there’s a natural tendency for conflict.

So here’s what VMware needs to do immediately:

  1. Add a series of whitepapers and sessions targeted at the network and InfoSec security staffers that speak directly to their fears of the virtualization unknowns.
  2. Provide more detail and solicit feedback relating to the technical roadmaps that will get the network and InfoSec staffers’ visibility and control back by including them in the process, not isolating them from it.
  3. Assign a VMware community ombudsman to provide outreach and make it his/her responsibility to make folks in our industry aware — and not by soundbites that sponsor contention — that there are really some excellent security enhancements that virtualization (and specifically VMware) brings to the table.
  4. Make more security acquisitions and form more partnerships.  Determina was good, but as much as we need "prevention" we need "detection" — we’ve lost visibility, so don’t ignore the basics.
  5. Stop fighting "FUD" with "IAFNAB" (It’s a feature, not a bug) responses
  6. Give the network and InfoSec folks solid information and guidance against which we can build budgets to secure the virtualization infrastructure before it’s deployed, not scrap for it after it’s already in production and hackbait.
  7. Address the possibility of virtualization security horizon events like Blue Pill and Skoudis’ VM Jail escapes head-on and work with us to mitigate them.
  8. Don’t make the mistakes Cisco does and just show pretty security architecture and strategy slides featuring roadmaps that are 80% vapor and call it a success.
  9. Leverage the talent pool in the security/network space to help build great and secure products; don’t think that the only folks you have to target are the cost-conscious CIO and the SysAdmins.
  10. Rebuild and earn our trust that the virtualization gravy train isn’t an end run around the last 10 years we’ve spent trying to secure our data and assets.  Get us involved.

I will tell you that both Kirk and Banjot recognize the need for these activities and are very passionate about solving them.  I look forward to their efforts. 

In the meantime, a lot of gaps can be closed in a very short period by disseminating some basic network and security-focused information.  For example, from the security architecture session, did you know that the vSwitch (fabric) is positioned as being more secure than a standard L2/L3 switch because it is not susceptible to the following attacks:

  • Double Encapsulation
  • Spanning Tree Floods
  • Random Frames
  • MAC Address Flooding
  • 802.1q and ISL Tagging
  • Multicast Brute Forcing

Basic stuff, but very interesting.  Did you know that the vSwitch doesn’t rely on ARP caches or CAM tables for switching/forwarding decisions?  Did you know that ESX features a built-in firewall (that stomps on iptables)?  Did you know that the vSwitch has features built in that limit MAC address flux and provide capabilities such as traffic shaping and promiscuous mode for sniffing?

Many in the network and InfoSec career paths do not, and they should.
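
If you want to go poke at those per-vSwitch security-policy settings yourself, here’s a rough sketch using VMware’s pyVmomi Python SDK (a tool that arrived well after this post; the hostname and credentials below are placeholders, the property names come from the vSphere API, and the traversal is an approximation rather than production code):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to VirtualCenter or an ESX host (placeholder host/credentials).
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="readonly",
                      pwd="changeme", sslContext=ctx)
    content = si.RetrieveContent()

    # Walk every host and dump each vSwitch's security policy knobs.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vswitch in host.config.network.vswitch:
            sec = vswitch.spec.policy.security
            print(host.name, vswitch.name,
                  "promiscuous:", sec.allowPromiscuous,
                  "mac changes:", sec.macChanges,
                  "forged transmits:", sec.forgedTransmits)

    Disconnect(si)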

I’m going to keep in touch with Kirk and Banjot and help to ensure that this outreach is given the attention it deserves.  It will be good for VMware and very good for our community.  Virtualization is here to stay and we can’t afford to maintain this stare-down much longer.

I’m off to the show this morning to investigate some of the vendors, like Catbird and Reflex, and see what they are doing with their efforts.

/Hoff

 

CIS Releases Virtual Machine Security Guidelines

September 5th, 2007 1 comment

The Center for Internet Security has released their v1.0 guidelines for generic virtual machine security.  I will say that this is a basic, concise and generally helpful overview of practical things one might consider when deploying, configuring and beginning to secure a virtual machine.

It also does a good job of describing general threat classes and mitigation considerations.

CIS’ summary and representation of this document, its scope and audience are accurately represented by this paragraph from the text:

Recommendations contained in the Products ("Recommendations") result from a consensus-building process that involves many security experts and are generally generic in nature. The Recommendations are intended to provide helpful information to organizations attempting to evaluate or improve the security of their networks, systems, and devices. Proper use of the Recommendations requires careful analysis and adaptation to specific user requirements. The Recommendations are not in any way intended to be a "quick fix" for anyone’s information security needs.

This first effort is focused on generic, vendor-neutral virtualization guidance, and CIS is planning on releasing a similar set of documents that speak specifically to securing VMware’s ESX virtualization platform.  They suggest they will also consider other virtualization platforms such as XenSource.

You can read more on the background of this work on the Computerworld Blog.

/Hoff

Categories: Virtualization Tags:

Oh, Wait…Now We Should Take Virtualization Security Seriously, Mr. Wittmann?

September 4th, 2007 7 comments

Back in April, when apparently virtualization and securing the mechanics thereof appeared not to be that interesting, Art Wittmann wrote a piece in Network Computing titled "Strategy Session: Server Consolidation: Just Do It."

You may remember that I responded rather vehemently to this article because of a quote that unreasonably marginalized the security impact that virtualization and consolidation have in the data center, and because it suggested that the security "hype" surrounding virtualization was due to "nattering nabobs of negativity" (that would be you and me) just being our old obstructionist security selves.  Art said:

"While the security threat inherent in virtualization is real, it’s also overstated."

Overstated? Here are a couple of other choice quotes from his article:

"That
leaves security as the final question.  You can bet that everyone who
can make a dime on questioning the security of virtualization will be
doing so; the drumbeat has started and is increasing in volume.

…which apparently meant that Art was dancing to a different beat, and…

"If you can eliminate 10 or 20 servers running outdated versions of NT in favor of a single consolidated pair of servers, the task of securing the environment should be simpler or at least no more complex.  If you’re considering a server consolidation project, do it.  Be mindful of security, but don’t be dissuaded by the nattering nabobs of negativity."

I’m not sure Art ever deployed an ESX cluster with virtualized storage and networking, because if he had, I don’t think he would suggest that it’s "…simpler or at least no more complex."

Furthermore, in terms of security issues of late, I guess that besides the BluePill debacle, evading VM Jails and API exploitation just aren’t serious enough glimpses of what is coming down the pike to warrant concern?

Why am I dragging this back up to the surface?  Because I am one of those "nattering nabobs" who has spent the last year plus drawing attention to the very issues Art previously suggested were overstated, and because that topic now flies proudly as a badge of honor on the NWC Virtualization Immersion Center Blog with this posting titled (strangely enough) "Taking Virtualization Security Seriously":

Virtualization security has been on the minds of a lot of IT folks lately. There’s no doubt that virtualization changes the security game – and because it involves new software – the potential for new exploits exists.

While I’m happy to see that Art has softened his tune and admitted that virtualization security is important and not "overstated," I find it ironic that he himself is now dancing to the same drumbeat to which all of those money-hungry vendor scum and nattering nabobs were shuffling along when we were just hyping this all up…

Now that I’ve gotten rid of that bitter little pill, I will say that Joe Hernick’s article titled "Virtualization Security Heats Up" (he seems to write for InformationWeek as well) did a good job of summarizing what I’ve been writing about for the last year regarding virtualization security, and you should read it…but be warned, you might come away feeling a little less secure.

If you want to replay the most recent articles I wrote regarding virtualization and security, you can check out the listing here. I’m glad that Art and Crew are drawing attention to virtualization and the security ramifications thereof.  That’s a good thing.

/Hoff

Categories: Virtualization Tags:

How the DOD/Intel Communities Can Help Save Virtualization from the Security Trash Heap…

September 3rd, 2007 5 comments

If you’ve been paying attention closely over the last year or so, you will have noticed louder-than-normal sucking sounds coming from the virtualization sausage machine as it grinds the various ingredients driving virtualization’s re-emergence and popularity together to form the ideal tube of tasty technology bologna. 

{I rather liked that double entendre, but if you find it too corrosive, feel free to substitute your own favorite banger marque in its stead. 😉 }

Virtualization is a hot topic; from clients to servers, applications to datastores, and networking to storage, virtualization is coming back full-circle from its MULTICS and LPAR roots and promises to change everything.  Again.

Unfortunately, one of the things virtualization isn’t changing quickly enough (for my liking) and in enough visible ways is the industry’s approach to engineering security into the virtualization product lifecycle early enough to allow us to deploy a more secure product out of the box.   

Sadly, most of the commercial virtualization offerings as well as the open source platforms have lacked much in the way of guidance as to how to secure VMs beyond the common-sense approach of securing non-virtualized instances, and the security industry has been slow to produce more than a few innovative solutions to the problems virtualization introduces or in some way intensifies.

You can imagine then the position that leaves customers.

I’m from the Government and I’m here to help…

However, here’s where some innovation from what some might consider an unlikely source may save this go-round from another security wreck stacked in the IT boneyard: the DoD and Intelligence communities and a high-profile partnering strategy for virtualized security.

Both the DoD and Intel agencies are driven, just like the private sector, to improve efficiency, cut costs, consolidate operationally and still maintain an ever vigilant high level of security.

An example of this dictate is the Global Information Grid (GIG).  The GIG represents:

"…a net-centric system
operating in a global context to provide processing, storage,
management, and transport of information to support all Department of
Defense (DoD), national security, and related Intelligence Community
missions and functions-strategic, operational, tactical, and
business-in war, in crisis, and in peace.

GIG capabilities
will be available from all operating locations: bases, posts, camps,
stations, facilities, mobile platforms, and deployed sites. The GIG
will interface with allied, coalition, and non-GIG systems.

One of the core components of the GIG is building the capability and capacity to securely collapse and consolidate what are today physically separate computing enclaves (computers, networks and data), segregated according to the classification and sensitivity of the information and the clearances of the personnel who try to access it.

Multi-Level Security Marketing…

This represents the notion of multilevel security or MLS.  I am going to borrow liberally from this site authored by Dr. Rick Smith to provide a quick overview, as the concepts and challenges of MLS are really critical to fully appreciate what I’m about to describe.  Oddly enough, the concept and work is also 30+ years old and you’d recognize the constructs as being those you’ll find in your CISSP test materials…You remember the Bell-LaPadula model, don’t you?

The MLS Problem

We use the term multilevel because the defense community has classified both people and information into different levels of trust and sensitivity. These levels represent the well-known security classifications: Confidential, Secret, and Top Secret. Before people are allowed to look at classified information, they must be granted individual clearances that are based on individual investigations to establish their trustworthiness. People who have earned a Confidential clearance are authorized to see Confidential documents, but they are not trusted to look at Secret or Top Secret information any more than any member of the general public. These levels form the simple hierarchy shown in Figure 1. The dashed arrows in the figure illustrate the direction in which the rules allow data to flow: from "lower" levels to "higher" levels, and not vice versa.
Figure 1: The hierarchical security levels
 

When speaking about these levels, we use three different terms:

  • Clearance level indicates the level of trust given to a person with a security clearance, or a computer that processes classified information, or an area that has been physically secured for storing classified information. The level indicates the highest level of classified information to be stored or handled by the person, device, or location.
  • Classification level indicates the level of sensitivity associated with some information, like that in a document or a computer file. The level is supposed to indicate the degree of damage the country could suffer if the information is disclosed to an enemy.
  • Security level is a generic term for either a clearance level or a classification level.

The defense community was the first and biggest customer for computing technology, and computers were still very expensive when they became routine fixtures in defense organizations. However, few organizations could afford separate computers to handle information at every different level: they had to develop procedures to share the computer without leaking classified information to uncleared (or insufficiently cleared) users. This was not as easy as it might sound. Even when people "took turns" running the computer at different security levels (a technique called periods processing), security officers had to worry about whether Top Secret information may have been left behind in memory or on the operating system’s hard drive. Some sites purchased computers to dedicate exclusively to highly classified work, despite the cost, simply because they did not want to take the risk of leaking information.

Multiuser systems, like the early timesharing systems, made such sharing particularly challenging. Ideally, people with Secret clearances should be able to work at the same time others were working on Top Secret data, and everyone should be able to share common programs and unclassified files. While typical operating system mechanisms could usually protect different user programs from one another, they could not prevent a Confidential or Secret user from tricking a Top Secret user into releasing Top Secret information via a Trojan horse.

When a user runs the word processing program, the program inherits that user’s access permissions to the user’s own files. Thus the Trojan horse circumvents the access permissions by performing its hidden function when the unsuspecting user runs it. This is true whether the function is implemented in a macro or embedded in the word processor itself. Viruses and network worms are Trojan horses in the sense that their replication logic is run under the context of the infected user. Occasionally, worms and viruses may include an additional Trojan horse mechanism that collects secret files from their victims. If the victim of a Trojan horse is someone with access to Top Secret information on a system with lesser-cleared users, then there’s nothing on a conventional system to prevent leakage of the Top Secret information. Multiuser systems clearly need a special mechanism to protect multilevel data from leakage.
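
To translate Dr. Smith’s description into something you can poke at, here’s a toy sketch of the Bell-LaPadula "no read up, no write down" rules the passage describes (grossly simplified; real MLS policies also deal with compartments, trusted subjects and much more):

    # Hierarchical security levels, lowest to highest.
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def can_read(subject_clearance, object_classification):
        """Simple security property: no read up."""
        return LEVELS[subject_clearance] >= LEVELS[object_classification]

    def can_write(subject_clearance, object_classification):
        """*-property: no write down. This is what stops a Trojan horse running
        with a Top Secret user's permissions from leaking data into a file that
        Confidential or Secret users can read."""
        return LEVELS[subject_clearance] <= LEVELS[object_classification]

    print(can_read("SECRET", "TOP SECRET"))   # False: reading up is denied
    print(can_write("TOP SECRET", "SECRET"))  # False: writing down is denied
    print(can_read("TOP SECRET", "SECRET"))   # True: data may flow upward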

Think about the challenges of supporting modern-day multiuser Windows operating systems (virtualized or not) together on a single compute platform while also consolidating multiple networks of various classifications (including the Internet) onto a single network transport, all while providing ZERO tolerance for breach.

What’s also different here from the compartmentalization requirements of "basic" virtualization is that the segmentation and isolation are critically driven by the classification and sensitivity of the data itself and the clearance of those trying to access it.

To wit:

VMware and General Dynamics are partnering to provide the NSA with the next evolution of their High Assurance Platform (HAP) to solve the following problem:

… users with multiple security clearances, such as members of the U.S. Armed Forces and Homeland Security personnel, must use separate physical workstations. The result is a so-called "air gap" between systems to access information in each security clearance level in order to uphold the government’s security standards.

VMware said it will provide an extra layer of security in its virtualization software, which lets these users run the equivalent of physically isolated machines with separate levels of security clearance on the same workstation.

HAP builds on the current solution based on VMware, called NetTop, which allows simultaneous access to classified information on the same platform in what the agency refers to as low-risk environments.

For HAP, VMware has added a thin API of fewer than 5,000 lines of code to its virtualization software that can evolve over time. NetTop is more static and has to go through a lengthy re-approval process as changes are made. "This code can evolve over time as needs change and the accreditation process is much quicker than just addressing what’s new."

HAP encompasses standard Intel-based commercial hardware that could range from notebooks and desktops to traditional workstations. Government agencies will see a minimum 60 percent reduction in their hardware footprints and greatly reduced energy requirements.

HAP will allow for one system to maintain up to six simultaneous virtual machines. In addition to Windows and Linux, support for Sun’s Solaris operating system is planned."

This could yield some readily apparent opportunities for improving the security of virtualized environments in many sensitive applications.  There are also other products on the market that offer this sort of functionality, such as Googun’s Coccoon and Raytheon’s Guard offerings, but they are complex and costly and geared for non-commercial spaces.  Also, with VMware’s market-force dominance and near ubiquity, this capability has the real potential of bleeding over into the commercial space.

Today we see MLS systems featured in low risk environments, but it’s still not uncommon to see an operator tasked with using 3-4 different computers which are sometimes located in physically isolated facilities.

While this offers a level of security that has physical air gaps to help protect against unauthorized access, it is costly, complex, inefficient and does not provide for the real-time access needed to support the complex mission of today’s intelligence operatives, coalition forces or battlefield warfighters.

It may sound like a simple and mundane problem to solve, but in today’s distributed and collaborative Web 2.0 world (which the DoD/Intel crowd is beginning to utilize), we find it more and more difficult to achieve.  Couple the information compartmentalization issue with the recent virtualization security grumblings: breaking out of VM Jails, hypervisor rootkits and exploiting VM APIs for fun and profit…

This functionality has many opportunities to provide for more secure virtualization deployments that will utilize MLS-capable OS’s in conjunction with strong authentication, encryption, memory firewalling and process isolation.  We’ve seen the first steps toward that already.

I look forward to what this may bring to the commercial space and the development of more secure virtualization platforms in general.  It’s building on decades of work in the information assurance space, but it’s getting closer to being cost-effective and reasonable enough for deployment.

/Hoff

Shrdlu’s Model Of Virtual(ization) Insanity…

September 1st, 2007 8 comments

With apologies to Jamiroquai, I answer the following query from Shrdlu (Layer 8 Blog), in which I was challenged regarding virtualization:

[Image: L8virtualization]


So what’s the big deal?  Am I missing something, Herr Hoff?

To which I respond:

[Image: Layer8virtualization_2]

…nothing except: better colors, proper use of capitalization and  the all-important "Layer 8"

’nuff said 😉

/Hoff

 

Categories: Virtualization Tags:

Those of You Wanting the .PPT/.KEY version of the Virtualization Deck…

August 30th, 2007 3 comments

Here you go.

As I explained, the export from Keynote to PowerPoint renders some of the font formatting and shadow effects rather poorly.  You can fix this by:

     1. Using a Mac, and
     2. Using Keynote 😉

You will need to clean this up should you hope it matches the .PDF you first saw.

Just in case you’re interested in the Keynote version, here is the link to it, also.  I compressed this one.

I apologize for the file sizes; I didn’t spend much time optimizing anything for these.  I hope they help.  I will, at some point, probably revise them to include some timely information.

Enjoy

/Hoff


{Link was broken for the Keynote file.  Fixed as of 2:31am EST.  😉 }

Categories: Virtualization Tags:

A Play on Negroponte’s OLPC. I present “OHPC” – One Honeypot per Computer…

August 29th, 2007 1 comment

I was catching up with an old friend the other day, and in chatting with Lance Spitzner, we got to talking about virtualization and Honeypots.  Lance, as you no doubt already know, is one of the ringleaders of the Honeynet Project whose charter is the following:

The Honeynet Project is a non-profit (501c3) volunteer, research organization dedicated to improving the security of the Internet at no cost to the public.  All of our work is released as OpenSource, and we are firmly committed to the ideals of OpenSource.  Our goal, simply put, is to make a difference.  We accomplish this goal in the following three ways.

Awareness
We raise awareness of the threats and vulnerabilities that exist in the Internet today.  Many individuals and organizations do not realize they are a target, nor understand who is attacking them, how, or why.  We provide this information so people can better understand they are a target, and understand the basic measures they can take to mitigate these threats.  This information is provided through our Know Your Enemy series of papers.

Information
For those who are already aware and concerned, we provide details to better secure and defend your resources.  Historically, information about attackers has been limited to the tools they use.  We provide critical additional information, such as their motives in attacking, how they communicate, when they attack systems and their actions after compromising a system.  We provide this service through our Know Your Enemy whitepapers and our Scan of the Month challenges.

Tools
For organizations interested in continuing their own research about cyber threats, we provide the tools and techniques we have developed.  We provide these through our Tools Site.

Look for an upcoming Take5 Interview with Lance shortly.

We were chatting about the application of Honeypots within a virtualized environment and how, for detection purposes, one might integrate them into virtual environments.  Lance brought up the point that the Honeynet Project already talks about the deployment of virtualized Honeypots and the excellent new book by Provos and Holz titled "Virtual Honeypots: From Botnet Tracking to Intrusion Detection" talks about utilizing virtualization and HN’s.

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to effectively stretch a single HoneyPot across the entire collection of VMs on a single physical host.

He seemed intrigued by this slightly different perspective.
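
To be concrete about what I’m picturing, here’s a back-of-the-napkin sketch of the deployment model: one honeypot VM stamped onto each production host, attached to a promiscuous port group on that host’s vSwitch so it can observe inter-VM traffic and report to a central collector.  Everything here (names, fields, the port-group convention) is invented for illustration; it’s a thought experiment, not a blueprint:

    from dataclasses import dataclass, field

    @dataclass
    class HoneypotSpec:
        """Illustrative 'One Honeypot per Computer' deployment descriptor."""
        host: str                        # the production virtualization host
        portgroup: str                   # promiscuous port group on that host's vSwitch
        template: str = "honeypot-gold"  # hardened, instrumented honeypot VM template
        collector: str = "honeynet-collector.example.com"
        alerts: list = field(default_factory=lambda: ["inbound_connect", "outbound_connect"])

    def plan_ohpc(production_hosts):
        """One honeypot VM per production virtualization host."""
        return [HoneypotSpec(host=h, portgroup=h + "-honeypot-pg")
                for h in production_hosts]

    for spec in plan_ohpc(["esx01", "esx02"]):
        print(spec)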

We’ve seen some pretty interesting discussions both pro and con for production Honeypots in the last couple of weeks.  First there was this excellent write up by InfoWorld’s Roger Grimes which prompted an "operational yeah, but…" from LonerVamp’s blog.

So, with the hopes that this will actually turn into a discussion, Lance said he was going to bring this up internally within the HN Project forums, but I wanted to raise it here.

I’d be very interested in discussing how folks perceive the notion of OHPC and whether you’d consider deploying one as a VM on each virtualized host machine you put into production.  If so, why?  If not, why not?

/Hoff

Categories: Intrusion Detection, Virtualization Tags: