Archive for the ‘VMware’ Category

Epiphany: For Network/InfoSec Folks, the Virtualization Security Awareness Problem All Starts With the vSwitch…

September 13th, 2007

It’s all about awareness, and as you’ll come to read, there’s just not enough of it when we talk about the security implications of virtualizing our infrastructure.  We can fix this, however, without resorting to FUD and without vendors knee-jerking us into spin battles. 

I’m attending VMworld 2007 in San Francisco, looking specifically for security-focused information in the general sessions and hands-on labs.  I attended the following sessions and a hands-on lab yesterday:

  • Security Hardening and Monitoring of VMware Infrastructure 3 (Hands-on Lab)
  • Security Architecture Design and Hardening VMware Infrastructure 3 (General Session)
  • Using the Secure Technical Implementation Guide (STIG) with VMware Infrastructure 3 (General Session)

I had high hopes that the content and focus of these sessions would live up to the hype surrounding the statements by VMware reps at the show.  As Dennis Fisher from SearchSecurity wrote, there are some powerful statements coming from the VMware camp on the security of virtualized environments and how they are safer than non-virtualized deployments.  These are bold and, in my opinion, dangerously generalized statements to make when backed up with examples that beg for context:

To help assuage customers’ fears, VMWare executives and security engineers are going on the offensive and touting the company’s ESX Server as a more secure alternative to traditional computing setups.

Despite the complexity of virtualized environments, they are inherently more secure than normal one-to-one hardware and operating system environments because of the hypervisor’s ability to enforce isolation among the virtual machines, Mukundi Gunti, a security engineer at VMWare, said in a session on security and virtualization Tuesday.

"It’s a much more complex architecture with a lot of moving parts. There are a lot of misconceptions about security and virtualization," said Jim Weingarten, senior technical alliances manager at VMWare, who presented with Gunti. "Virtual machines are safer."

So I undertook my mission of better understanding how these statements could be substantiated empirically and attended the sessions/labs with as much of an open mind as I could given the fact that I’m a crusty old security dweeb.

Security Hardening and Monitoring of VMware Infrastructure 3 (Hands-on Lab)

The security hardening and monitoring hands-on lab was very basic, focusing on general Unix hardening of the underlying Red Hat OS as well as SWATCH log monitoring.  The labs represented the absolute minimum that one would expect to see performed when placing any Unix/Linux-based host into a network environment.  Very little was specific to virtualization.  This session presented absolutely nothing new or informative.

Security Architecture Design and Hardening VMware Infrastructure 3 (General Session)
The security architecture design and hardening session was at the very end of the day and was packed with attendees.  Unfortunately, it was very much a recap of the hands-on lab, but it did touch on a couple of network-centric design elements (isolating the VMotion/VIC management ports onto separate NICs/VLANs, etc.) as well as some very interesting information regarding the security of the virtual switch (vSwitch).  More on this below, because it’s very interesting.

The Epiphany
This is when it occurred to me that, given the general roles and responsibilities of the attendees, most of whom are not dedicated network or security folks, the disconnect between the "Leviathan Force" (the network and network security admins) and the "Sysadmins" (the server/VMM administrators) was little more than a classic turf battle mashed up with differing security mindsets, compounded by a lack of network- and information-security-focused educational campaigning on the part of VMware.

After the talk, I got to spend about 30 minutes chatting with VMware’s Kirk Larsen (Engineering Product Security Officer) and Banjot Chanana (Product Manager for Platform Security) about the lack of network-centric security information in the presentations and how we could fix that.

What I suggested was that since we now see the collapse and convergence of the network and compute stacks into the virtualization platforms, the operational impact on the SysAdmins and Network/Information Security folks is huge.

The former now own the keys to the castle whilst the latter now "enjoy" the loss of visibility and operational control.  Because the network and InfoSec folks aren’t competently trained in operating the VM platforms and the SysAdmins aren’t competently trained in securing them (holistically, from the host through to the network), there’s a natural tendency for conflict.

So here’s what VMware needs to do immediately:

  1. Add a series of whitepapers and sessions that speak directly to the network and InfoSec staffers and assuage their fears of virtualization’s unknowns. 
  2. Provide more detail and solicit feedback relating to the technical roadmaps that will get the network and InfoSec staffers’ visibility and control back by including them in the process, not isolating them from it.
  3. Assign a VMware community ombudsman to provide outreach and make it his/her responsibility to make folks in our industry aware — and not by sound bites that sponsor contention — that there are really some excellent security enhancements that virtualization (and specifically VMware) brings to the table.
  4. Make more security acquisitions and form more partnerships.  Determina was good, but as much as we need "prevention" we need "detection" — we’ve lost visibility, so don’t ignore the basics.
  5. Stop fighting "FUD" with "IAFNAB" ("It’s a feature, not a bug") responses.
  6. Give the network and InfoSec folks solid information and guidance against which we can build budgets to secure the virtualization infrastructure before it’s deployed, not scrap for it after it’s already in production and hackbait.
  7. Address the possibility of virtualization security horizon events like Blue Pill and Skoudis’ VM Jail escapes head-on and work with us to mitigate them.
  8. Don’t make the mistakes Cisco makes by just showing pretty security architecture and strategy slides featuring roadmaps that are 80% vapor and calling it a success.
  9. Leverage the talent pool in the security/network space to help build great and secure products; don’t think that the only folks you have to target are the cost-conscious CIO and the SysAdmins.
  10. Rebuild and earn our trust that the virtualization gravy train isn’t an end run around the last 10 years we’ve spent trying to secure our data and assets.  Get us involved.

I will tell you that both Kirk and Banjot recognize the need for these activities and are very passionate about solving them.  I look forward to their efforts. 

In the meantime, a lot of gaps can be closed in a very short period by disseminating some basic network and security-focused information.  For example, from the security architecture session, did you know that the vSwitch (fabric) is positioned as being more secure than a standard L2/L3 switch because it is not susceptible to the following attacks:

  • Double Encapsulation
  • Spanning Tree Floods
  • Random Frames
  • MAC Address Flooding
  • 802.1q and ISL Tagging
  • Multicast Brute Forcing

Basic stuff, but very interesting.  Did you know that the vSwitch doesn’t rely on ARP caches or CAM tables for switching/forwarding decisions?  Did you know that ESX features a built-in firewall (that stomps on iptables)?  Did you know that the vSwitch has built-in features that limit MAC address changes and provides capabilities such as traffic shaping and promiscuous mode for sniffing?
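That CAM table point is worth a sketch.  A conventional L2 switch learns source MACs into a bounded CAM table and floods frames for unknown destinations, which is exactly what MAC flooding abuses; the vSwitch already knows each VM’s MAC from its configuration and has nothing to learn (or poison).  A toy illustration of the difference — my own Python sketch, not VMware code:

```python
# Illustrative sketch: why MAC flooding degrades a learning switch
# but not a vSwitch-style authoritative forwarding table.

class LearningSwitch:
    """Classic L2 switch: learns source MACs into a bounded CAM table."""
    def __init__(self, cam_size):
        self.cam = {}              # mac -> port
        self.cam_size = cam_size

    def receive(self, src_mac, port):
        if src_mac not in self.cam and len(self.cam) < self.cam_size:
            self.cam[src_mac] = port   # learn the source address

    def forward(self, dst_mac):
        # Unknown destination -> flood out all ports (attacker sees traffic)
        return self.cam.get(dst_mac, "FLOOD")

class VSwitchStyle:
    """vSwitch-style: MACs are assigned by the platform, never learned."""
    def __init__(self, assignments):
        self.table = dict(assignments)  # authoritative mac -> VM port

    def receive(self, src_mac, port):
        pass  # nothing to learn; bogus source MACs change nothing

    def forward(self, dst_mac):
        return self.table.get(dst_mac, "DROP")

# An attacker blasts 10,000 random source MACs at both switches:
lsw = LearningSwitch(cam_size=1024)
vsw = VSwitchStyle({"00:50:56:aa:bb:01": "vm1", "00:50:56:aa:bb:02": "vm2"})
for i in range(10_000):
    fake = f"de:ad:{i >> 8 & 0xff:02x}:{i & 0xff:02x}:00:01"
    lsw.receive(fake, "attacker-port")
    vsw.receive(fake, "attacker-port")

# The CAM table is now stuffed with junk, so traffic to the (unlearned)
# victim MAC gets flooded where the attacker can sniff it...
print(lsw.forward("00:50:56:aa:bb:02"))  # FLOOD
# ...while the vSwitch still forwards only to the configured VM port.
print(vsw.forward("00:50:56:aa:bb:02"))  # vm2
```

The MAC addresses, table size and class names above are all made up for the illustration; the point is simply that a forwarding table populated by configuration rather than by observed traffic leaves nothing for a flooding attack to exhaust or poison.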

Many in the network and InfoSec career paths do not, and they should.

I’m going to keep in touch with Kirk and Banjot and help to ensure that this outreach is given the attention it deserves.  It will be good for VMware and very good for our community.  Virtualization is here to stay and we can’t afford to maintain this stare-down much longer.

I’m off to the show this morning to investigate some of the vendors, like Catbird and Reflex, and see what they are doing with their efforts.

/Hoff

 

How the DOD/Intel Communities Can Help Save Virtualization from the Security Trash Heap…

September 3rd, 2007

If you’ve been paying attention closely over the last year or so, you will have noticed louder-than-normal sucking sounds coming from the virtualization sausage machine as it grinds the various ingredients driving virtualization’s re-emergence and popularity together to form the ideal tube of tasty technology bologna. 

{I rather liked that double entendre, but if you find it too corrosive, feel free to substitute your own favorite banger marque in its stead. 😉 }

Virtualization is a hot topic; from clients to servers, applications to datastores, and networking to storage, virtualization is coming back full-circle from its MULTICS and LPAR roots and promises to change everything.  Again.

Unfortunately, one of the things virtualization isn’t changing quickly enough (for my liking) and in enough visible ways is the industry’s approach to engineering security into the virtualization product lifecycle early enough to allow us to deploy a more secure product out of the box.   

Sadly, most of the commercial virtualization offerings as well as the open source platforms have lacked much in the way of guidance as to how to secure VMs beyond the common-sense approach of securing non-virtualized instances, and the security industry has been slow to deliver more than a few innovative solutions to the problems virtualization introduces or in some way intensifies.

You can imagine then the position that leaves customers.

I’m from the Government and I’m here to help…

However, here’s where some innovation from what some might consider an unlikely source may save this go-round from another security wreck stacked in the IT boneyard: the DoD and Intelligence communities and a high-profile partnering strategy for virtualized security.

Both the DoD and Intel agencies are driven, just like the private sector, to improve efficiency, cut costs, consolidate operationally and still maintain an ever vigilant high level of security.

An example of this dictate is the Global Information Grid (GIG).  The GIG represents:

"…a net-centric system operating in a global context to provide processing, storage, management, and transport of information to support all Department of Defense (DoD), national security, and related Intelligence Community missions and functions-strategic, operational, tactical, and business-in war, in crisis, and in peace.

GIG capabilities will be available from all operating locations: bases, posts, camps, stations, facilities, mobile platforms, and deployed sites. The GIG will interface with allied, coalition, and non-GIG systems."

One of the core components of the GIG is building the capability and capacity to securely collapse and consolidate what are today physically separate computing enclaves (computers, networks and data), where access is governed by the classification and sensitivity of the information and the clearances of the personnel who access it.

Multi-Level Security Marketing…

This represents the notion of multilevel security or MLS.  I am going to borrow liberally from this site authored by Dr. Rick Smith to provide a quick overview, as the concepts and challenges of MLS are really critical to fully appreciate what I’m about to describe.  Oddly enough, the concept and work is also 30+ years old and you’d recognize the constructs as being those you’ll find in your CISSP test materials…You remember the Bell-LaPadula model, don’t you?

The MLS Problem

We use the term multilevel because the defense community has classified both people and information into different levels of trust and sensitivity. These levels represent the well-known security classifications: Confidential, Secret, and Top Secret. Before people are allowed to look at classified information, they must be granted individual clearances that are based on individual investigations to establish their trustworthiness. People who have earned a Confidential clearance are authorized to see Confidential documents, but they are not trusted to look at Secret or Top Secret information any more than any member of the general public. These levels form the simple hierarchy shown in Figure 1. The dashed arrows in the figure illustrate the direction in which the rules allow data to flow: from "lower" levels to "higher" levels, and not vice versa.

Figure 1: The hierarchical security levels

When speaking about these levels, we use three different terms:

  • Clearance level indicates the level of trust given to a person with a security clearance, or a computer that processes classified information, or an area that has been physically secured for storing classified information. The level indicates the highest level of classified information to be stored or handled by the person, device, or location.
  • Classification level indicates the level of sensitivity associated with some information, like that in a document or a computer file. The level is supposed to indicate the degree of damage the country could suffer if the information is disclosed to an enemy.
  • Security level is a generic term for either a clearance level or a classification level.

The defense community was the first and biggest customer for computing technology, and computers were still very expensive when they became routine fixtures in defense organizations. However, few organizations could afford separate computers to handle information at every different level: they had to develop procedures to share the computer without leaking classified information to uncleared (or insufficiently cleared) users. This was not as easy as it might sound. Even when people "took turns" running the computer at different security levels (a technique called periods processing), security officers had to worry about whether Top Secret information may have been left behind in memory or on the operating system’s hard drive. Some sites purchased computers to dedicate exclusively to highly classified work, despite the cost, simply because they did not want to take the risk of leaking information.

Multiuser systems, like the early timesharing systems, made such sharing particularly challenging. Ideally, people with Secret clearances should be able to work at the same time others were working on Top Secret data, and everyone should be able to share common programs and unclassified files. While typical operating system mechanisms could usually protect different user programs from one another, they could not prevent a Confidential or Secret user from tricking a Top Secret user into releasing Top Secret information via a Trojan horse.

When a user runs the word processing program, the program inherits that user’s access permissions to the user’s own files. Thus the Trojan horse circumvents the access permissions by performing its hidden function when the unsuspecting user runs it. This is true whether the function is implemented in a macro or embedded in the word processor itself. Viruses and network worms are Trojan horses in the sense that their replication logic is run under the context of the infected user. Occasionally, worms and viruses may include an additional Trojan horse mechanism that collects secret files from their victims. If the victim of a Trojan horse is someone with access to Top Secret information on a system with lesser-cleared users, then there’s nothing on a conventional system to prevent leakage of the Top Secret information. Multiuser systems clearly need a special mechanism to protect multilevel data from leakage.
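The flow rules Smith describes (data may flow up the hierarchy, never down) are the Bell-LaPadula properties you’d recognize from those CISSP materials: "no read up" and "no write down."  A toy sketch of the two rules — mine, purely illustrative, with only the level names taken from the excerpt above:

```python
# Toy Bell-LaPadula check: "no read up, no write down".
# The level names come from the MLS discussion; everything else is
# an illustration, not any particular product's access control code.

LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(clearance, classification):
    # Simple security property: a subject may read information only at
    # or below its own clearance level (no read up).
    return LEVELS[clearance] >= LEVELS[classification]

def can_write(clearance, classification):
    # *-property: a subject may write only at or above its own level,
    # so a Trojan horse running as a Top Secret user cannot leak data
    # downward into a file that Confidential users can read.
    return LEVELS[clearance] <= LEVELS[classification]

assert can_read("Top Secret", "Secret")         # read down: allowed
assert not can_read("Confidential", "Secret")   # read up: denied
assert not can_write("Top Secret", "Secret")    # write down: denied
assert can_write("Confidential", "Secret")      # write up: allowed
```

Note how the *-property is exactly the mechanism the Trojan horse paragraph calls for: even a program inheriting a Top Secret user’s permissions is barred from writing into lower-classified containers.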

Think about the challenges of supporting modern-day multiuser Windows operating systems (virtualized or not) together on a single compute platform, while also consolidating multiple networks of various classifications (including the Internet) onto a single network transport, while providing ZERO tolerance for breach.

What’s also different here from the compartmentalization requirements of "basic" virtualization is that the segmentation and isolation is critically driven by the classification and sensitivity of the data itself and the clearance of those trying to access it.

To wit:

VMware and General Dynamics are partnering to provide the NSA with the next evolution of their High Assurance Platform (HAP) to solve the following problem:

… users with multiple security clearances, such as members of the U.S. Armed Forces and Homeland Security personnel, must use separate physical workstations. The result is a so-called "air gap" between systems to access information in each security clearance level in order to uphold the government’s security standards.

VMware said it will provide an extra layer of security in its virtualization software, which lets these users run the equivalent of physically isolated machines with separate levels of security clearance on the same workstation.

HAP builds on the current solution based on VMware, called NetTop, which allows simultaneous access to classified information on the same platform in what the agency refers to as low-risk environments.

For HAP, VMware has added a thin API of fewer than 5,000 lines of code to its virtualization software that can evolve over time. NetTop is more static and has to go through a lengthy re-approval process as changes are made. "This code can evolve over time as needs change and the accreditation process is much quicker than just addressing what’s new."

HAP encompasses standard Intel-based commercial hardware that could range from notebooks and desktops to traditional workstations. Government agencies will see a minimum 60 percent reduction in their hardware footprints and greatly reduced energy requirements.

HAP will allow for one system to maintain up to six simultaneous virtual machines. In addition to Windows and Linux, support for Sun’s Solaris operating system is planned.

This could yield some readily apparent opportunities for improving the security of virtualized environments in many sensitive applications.  There are also other products on the market that offer this sort of functionality, such as Googun’s Coccoon and Raytheon’s Guard offerings, but they are complex and costly and geared for non-commercial spaces.  Also, with VMware’s market-force dominance and near ubiquity, this capability has the real potential of bleeding over into the commercial space.

Today we see MLS systems featured in low risk environments, but it’s still not uncommon to see an operator tasked with using 3-4 different computers which are sometimes located in physically isolated facilities.

While this offers a level of security that has physical air gaps to help protect against unauthorized access, it is costly, complex, inefficient and does not provide for the real-time access needed to support the complex mission of today’s intelligence operatives, coalition forces or battlefield warfighters.

It may sound like a simple and mundane problem to solve, but in today’s distributed and collaborative Web 2.0 world (which the DoD/Intel crowd are beginning to utilize) we find it more and more difficult to achieve.  Couple the information compartmentalization issue with the recent virtualization security grumblings: breaking out of VM jails, hypervisor rootkits and exploiting VM APIs for fun and profit…

This functionality has many opportunities to provide for more secure virtualization deployments that will utilize MLS-capable OS’s in conjunction with strong authentication, encryption, memory firewalling and process isolation.  We’ve seen the first steps toward that already.

I look forward to what this may bring to the commercial space and the development of more secure virtualization platforms in general.  It’s building on decades of work in the information assurance space, but it’s getting closer to being cost-effective and reasonable enough for deployment.

/Hoff

Take5 (Episode #5) – Five Questions for Allwyn Sequeira, SVP of Product Operations, Blue Lane

August 21st, 2007

This fifth episode of Take5 interviews Allwyn Sequeira, SVP of Product Operations for Blue Lane.  

First a little background on the victim:

Allwyn Sequeira is Senior Vice President of Product Operations at Blue Lane Technologies, responsible for managing the overall product life cycle, from concept through research, development and test, to delivery and support. He was previously the Senior Vice President of Technology and Operations at netVmg, an intelligent route control company acquired by InterNap in 2003, where he was responsible for the architecture, development and deployment of the industry-leading flow control platform. Prior to netVmg, he was founder, Chief Technology Officer and Executive Vice President of Products and Operations at First Virtual Corporation (FVC), a multi-service networking company that had a successful IPO in 1998. Prior to FVC, he was Director of the Network Management Business Unit at Ungermann-Bass, the first independent local area network company. Mr. Sequeira has previously served as a Director on the boards of FVC and netVmg.

Mr. Sequeira started his career as a software developer at HP in the Information Networks Division, working on the development of TCP/IP protocols. During the early 1980’s, he worked on the CSNET project, an early realization of the Internet concept. Mr. Sequeira is a recognized expert in data networking, with twenty-five years of experience in the industry, and has been a featured speaker at industry-leading forums like Networld+Interop, Next Generation Networks, ISP Con and RSA Conference.

Mr. Sequeira holds a Bachelor of Technology degree in Computer Science from the Indian Institute of Technology, Bombay, and a Master of Science in Computer Science from the University of Wisconsin, Madison.

Allwyn, despite all this good schoolin’ forgot to send me a picture, so he gets what he deserves 😉
(Ed: Yes, those of you quick enough were smart enough to detect that the previous picture was of Brad Pitt and not Allwyn.  I apologize for the unnecessary froth-factor.)

 Questions:

1) Blue Lane has two distinct product lines, VirtualShield and PatchPoint.  The former is a software-based solution which provides protection for VMware Infrastructure 3 virtual servers as an ESX VM plug-in, whilst the latter offers a network appliance-based solution for physical servers.  How are these products different from virtual switch IPSs like Virtual Iron’s, or from in-line network-based IPSs?

IPS technologies have been charged with the incredible mission of trying to protect everything from anything.  Overall they’ve done well, considering how much the perimeter of the network has changed and how sophisticated hackers have become. Much of their core technology, however, was relevant and useful when hackers could be easily identified by their signatures. As many have proclaimed, those days are coming to an end.

A defense department official recently quipped, "If you offer the same protection for your toothbrushes and your diamonds you are bound to lose fewer toothbrushes and more diamonds."  We think that data center security similarly demands specialized solutions.  The concept of an enterprise network has become so ambiguous when it comes to endpoints, devices, supply chain partners, etc. that we think it’s time to think more realistically in terms of trusted, yet highly available, zones within the data center.

It seems clear at this point that different parts of the network need very different security capabilities.  Servers, for example, need highly accurate solutions that do not block or impede good traffic and can correct bad traffic, especially when it comes to closing network-facing vulnerability windows.  They need to maintain availability with minimal latency for starters, and that has been a sort of Achilles heel for signature-based approaches.  Of course, signatures also bring considerable management burdens over and beyond their security capabilities.

No one is advocating turning off the IPS, but rather approaching servers with more specialized capabilities.  We started focusing on servers years ago and established very sophisticated application and protocol intelligence, which has allowed us to correct traffic inline without the noise, suspense and delay that general purpose network security appliance users have come to expect.

IPS solutions depend on deep packet inspection typically at the perimeter based on regexp pattern matching for exploits.  Emerging challenges with this approach have made alert and block modes absolutely necessary as most IPS solutions aren’t accurate enough to be trusted in full library block. 

Blue Lane uses a vastly different approach.  We call it deep flow inspection/correction for known server vulnerabilities, based on stateful decoding up to layer 7.  We can alert, block and correct, but most of our deployments are in correct mode, with our full capabilities enabled.  From an operational standpoint we have substantially different impacts.

A typical IPS may have 10K signatures while experts recommend turning on just a few hundred.  That kind of marketing shell game (find out what really works) means that there will be plenty of false alarms, false positives and negatives and plenty of tuning.  With polymorphic attacks signature libraries can increase exponentially while not delivering meaningful improvements in protection. 

Blue Lane supports about 1000 inline security patches across dozens of very specific server vulnerabilities, applications and operating systems.  We generate very few false alarms and minimal latency.  We don’t require ANY tuning.  Our customers run our solution in automated, correct mode.

The traditional static signature IPS category has evolved into an ASIC war between some very capable players for the reasons we just discussed.  Exploding variations of exploits and vectors means that exploit-centric approaches will require ever more processing power.

Virtualization is pulling the data center into an entirely different direction, driven by commodity processors.  So of course our VirtualShield solution was a much cleaner setup with a hypervisor; we can plug into the hypervisor layer and run on top of existing hardware, again with minimal latency and footprint.

You don’t have to be a Metasploit genius to evade IPS signatures.  Our higher layer 7 stateful decoding is much more resilient. 
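(Ed: to make Allwyn’s evasion point concrete, here’s a toy contrast of my own — not Blue Lane’s code, and the protocol and overflow are entirely hypothetical — between an exploit-centric regex signature and a vulnerability-centric check that decodes the protocol field and tests the condition that actually triggers the flaw:)

```python
import re

# An exploit signature matches one known payload; a vulnerability-centric
# check decodes the (hypothetical) "USER <name>\r\n" command and tests
# the overflow condition itself: a name longer than 64 bytes.

SIGNATURE = re.compile(rb"USER AAAA+")   # matches the original exploit bytes

def ips_signature_match(packet):
    return SIGNATURE.search(packet) is not None

def vulnerability_check(packet):
    m = re.match(rb"USER (.*)\r\n", packet)
    return m is not None and len(m.group(1)) > 64

original = b"USER " + b"A" * 200 + b"\r\n"
mutated  = b"USER " + b"B" * 200 + b"\r\n"   # same overflow, new bytes

assert ips_signature_match(original)      # signature catches the original...
assert not ips_signature_match(mutated)   # ...but a trivial mutation evades it
assert vulnerability_check(original)      # the decoded check catches both
assert vulnerability_check(mutated)
assert not vulnerability_check(b"USER alice\r\n")  # legit traffic passes
```

One regex against one payload is of course a caricature of a real IPS, but it shows why matching exploit bytes scales with the number of exploit variants while matching the vulnerability condition does not.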

2) With zero-days on the rise, pay-for-play vulnerability research and now Zero-Bay (WabiSabiLabi) vulnerability auctions and the like, do you see an uptake in customer demand for vulnerability shielding solutions?

Exploit-signature technologies are meaningless in the face of evanescent, polymorphic threats, resulting in 0-day exploits. Slight modifications to signatures can bypass IPSes, even against known vulnerabilities.  Blue Lane technology provides 0-day protection for any variant of an exploit against known vulnerabilities.  No technology can provide ultimate protection against 0-day exploits based on 0-day vulnerabilities. However, this requires a different class of hacker.

3) As large companies start to put their virtualization strategies in play, how do you see customers addressing securing their virtualized infrastructure?  Do they try to adapt existing layered security methodologies and where do these fall down in a virtualized world?

I’ve explored this topic in depth at the Next Generation Data Center conference last week. Also, your readers might be interested in listening to a recent podcast: The Myths and Realities of Virtualization Security: An Interview. 

To summarize, there are a few things that change with virtualization, that folks need to be aware of.  It represents a new architecture.  The hypervisor layer represents the un-tethering and clustering of VMs, and centralized control.  It introduces a new virtual network layer.  There are entirely new states of servers, not anticipated by traditional static security approaches (like instant create, destroy, clone, suspend, snapshot and revert to snapshot). 

Then you’ll see unprecedented levels of mobility and new virtual appliances and black boxing of complex stacks including embedded databases.  Organizations will have to work out who is responsible for securing this very fluid environment.  We’ll also see unprecedented scalability with Infiniband cores attaching LAN/SAN out to 100’s of ESX hypervisors and thousands of VMs.

Organizations will need the capability to shield these complex, fluid environments, because trying to keep track of individual VMs, states, patch levels and locations will make tuning an IPS for polymorphic attacks look like child’s play in comparison.  Effective solutions will need to be highly accurate, low-latency solutions deployed in correct mode.  Gone will be the days of man-to-man blocking and tuning.  Here to stay are the days of zone defense.

4) VMware just purchased Determina and intends to integrate their memory firewall IPS product as an ESX VM plug-in.  Given your early partnership with VMware, are you surprised by this move?  Doesn’t this directly compete with the VirtualShield offering?

I wouldn’t read too much into this. Determina hit the wall on sales, primarily because its original memory wall technology was too intrusive and fell short of handling new vulnerabilities/exploits.

This necessitated the LiveShield product, which required ongoing updates, destroying the value proposition of not having to touch servers, once installed. So, this is a technology/people acquisition, not a product line/customer-base acquisition.

VMware was smart to get a very bright set of folks, with deep memory/paging/OS, and a core technology that would do well to be integrated into the hypervisor for the purpose of hypervisor hardening, and interVM isolation. I don’t see VMware entering the security content business soon (A/V, vulnerabilities, etc.). I see Blue Lane’s VirtualShield technology integrated into the virtual networking layer (vSwitch), as a perfect complement to anything that will come out of the Determina acquisition.

5) Citrix just acquired XenSource.  Do you have plans to offer VirtualShield for Xen? 

A smart move on Citrix’s part to get back into the game. Temporary market caps don’t matter. Virtualization matters. If Citrix can make this a two or three horse race, it will keep the VMware, Citrix, Microsoft triumvirate on their toes, delivering better products, and net good for the customer.

Regarding Blue Lane and Citrix/XenSource, we will continue to pay attention to what customers are buying as they virtualize their data centers. For now, this is a one-horse show 🙂

Oh SNAP! VMware acquires Determina! Native Security Integration with the Hypervisor?

August 19th, 2007

Hot on the trails of becoming gigagillionaires, the folks at VMware make my day with this.  Congrats to the folks @ Determina.

Methinks that for the virtualization world, it’s a very, very good thing.  A step in the right direction.

I’m going to prognosticate that this means that Citrix will buy Blue Lane or Virtual Iron next (see bottom of the post) since their acquisition of XenSource leaves them with the exact same problem that this acquisition for VMware tries to solve:

VMware Inc., the market leader in virtualization software, has acquired Determina Inc., a Silicon Valley maker of host intrusion prevention products.

…the security of virtualized environments has been something of an unknown quantity due to the complexity of the technology and the ways in which hypervisors interact with the host OS. Determina’s technology is designed specifically to protect the OS from malicious code, regardless of the origin of the attack, so it would seem to be a sensible fit for VMware, analysts say.

In his analysis of the deal, Gartner’s MacDonald sounded many of the same notes. "By potentially integrating Memory Firewall into the ESX hypervisor, the hypervisor itself can provide an additional level of protection against intrusions. We also believe the memory protection will be extended to guest OSs as well: VMware’s extensive use of binary emulation for virtualization puts the ESX hypervisor in an advantageous position to exploit this style of protection," he wrote.

I’ve spoken a lot recently about how much I’ve been dreading the notion that security is doomed to repeat itself with the accelerated take-off of server virtualization, since we haven’t solved many of the most basic classes of security problems. Malicious code is getting more targeted and more intelligent, and when you combine an emerging market and hot technology without an appropriate level of security…

Basically, my concerns have stemmed from the observation that if we can’t do a decent job protecting physically separate yet interconnected network elements with all the security fu we have, what’s going to happen when the "…network is the computer" (or vice versa)?  Just search for "virtualization" via the Lijit widget above for more posts on this…

Some options for securing virtualized guest OSes are pretty straightforward:

  1. Continue to deploy layered virtualized security services across VLAN segments of which each VM is a member (via IPS’s, routers, switches, UTM devices…)
  2. Deploy software like Virtual Iron’s, which looks like a third-party vSwitch IPS on each VM
  3. Integrate something like Blue Lane’s ESX plug-in, which interacts with, and at, the VMM level
  4. As chipset level security improves, enable it
  5. Deploy HIPS as part of every guest OS.

Each of these approaches has its own set of pros and cons, and quite honestly, we’ll probably see people doing all five at the same time…layered defense-in-depth.  Ugh.
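
To make "all five at the same time" concrete, here’s a minimal, purely illustrative sketch of what defense-in-depth means to a single inter-VM packet: each layer is an independent check, and the packet is delivered only if every layer passes it. The layer names and rules below are invented for illustration; they don’t model any actual product’s behavior.

```python
# Hypothetical defense-in-depth pipeline: each function stands in for one
# of the five options above (VLAN-segment IPS, vSwitch IPS, VMM-level
# shim, chipset-level control, guest HIPS). The rules are toy examples.

def vlan_ips(pkt):      return pkt["vlan"] in {"trusted", "dmz"}    # option 1
def vswitch_ips(pkt):   return pkt["dst_port"] != 445               # option 2
def vmm_shield(pkt):    return not pkt.get("known_exploit", False)  # option 3
def chipset_check(pkt): return pkt.get("dma_safe", True)            # option 4
def guest_hips(pkt):    return pkt.get("payload_ok", True)          # option 5

LAYERS = [vlan_ips, vswitch_ips, vmm_shield, chipset_check, guest_hips]

def deliver(pkt):
    """A packet reaches the guest only if every layer passes it."""
    return all(layer(pkt) for layer in LAYERS)

clean = {"vlan": "dmz", "dst_port": 80}
print(deliver(clean))                        # True
print(deliver({**clean, "dst_port": 445}))   # False: dropped at the vSwitch layer
```

The "Ugh" is visible in the code: five separate policy vocabularies, five places for a rule to be wrong, and a drop at any one of them to troubleshoot.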

What was really annoying to me, however, is that in many cases the VM solution providers again seemed to expect that we’d just be forced to bolt security ONTO our VM environments instead of BAKING IT IN.  This was looking like a sad reality.

I’ll get into the details of Determina’s solution in another post, but I am encouraged by VMware’s acquisition of a security company whose technology will be integrated into their underlying solution set.  I don’t think it’s a panacea, but quite honestly, the roadmap for solving these sorts of problems was blowing in the wind for VMware up until this point.

"Further, by using the LiveShield capabilities, the ESX hypervisor could be used ‘introspectively’ to shield the hypervisor and guest OSs from attacks on known vulnerabilities in situations where these have not yet been patched. Both Determina technologies are fairly OS- and application-neutral, providing VMware with an easy way to protect ESX as well as Linux- and Windows-based guest OSs."

Quite honestly, I had hoped they would buy Blue Lane, since the ESX hypervisor is now going to be a crowded space for them…

We’ll see how well this gets integrated, but I smiled when I read this.

Oh, and before anyone gets excited, I’m sure it’s going to be 100% undetectable! 😉

/Hoff

VMware to Open Development of ESX Virtual Switches to Third Parties…Any Guess Who’s First?

August 6th, 2007 3 comments

On the tail of my posts from a week or so ago regarding Cisco’s Data Center 3.0 announcement, Mr. Chambers’ keynote at VMworld and the follow-on $150 million investment in VMware, here’s something that really gets my goose honking, because the force is strong with this one…

Virtualization.info broke the news last week that VMware will "…allow 3rd party vendors to develop their virtual switches for ESX Server virtual network, and Cisco is expected to be the first company announcing such product (Virtual Catalyst?)"

This may sound like a no-brain yawner, but it’s quite profound…not just for Cisco, but for any of the switch vendors who want in on the lucrative virtualization market.

For a quick refresher, let’s review the concept of virtual switches (vSwitches).  From VMware’s definition:

A virtual switch, vSwitch, works much like a physical Ethernet switch. It detects which virtual machines are logically connected to each of its virtual ports and uses that information to forward traffic to the correct virtual machines. A vSwitch can be connected to physical switches using physical Ethernet adapters, also referred to as uplink adapters, to join virtual networks with physical networks. This type of connection is similar to connecting physical switches together to create a larger network. Even though a vSwitch works much like a physical switch, it does not have some of the advanced functionality of a physical switch. For more information on vSwitches, see Virtual Switches.
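
The forwarding behavior that definition describes is classic MAC learning, and a toy sketch makes it plain (port names and MAC addresses below are made up for illustration; this is not VMware’s implementation):

```python
# Minimal learning-switch sketch: note which port each source MAC arrives
# on, then forward frames only to the learned port for the destination,
# flooding when the destination is still unknown.

class VSwitch:
    def __init__(self, ports):
        self.ports = set(ports)   # virtual ports plus uplink(s)
        self.mac_table = {}       # learned MAC -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Process a frame arriving on in_port; return the egress port(s)."""
        self.mac_table[src_mac] = in_port          # learn the source binding
        out = self.mac_table.get(dst_mac)
        if out is None or out == in_port:
            # Unknown destination: flood to every other port,
            # exactly as a physical learning switch would.
            return self.ports - {in_port}
        return {out}

sw = VSwitch(["vm1-port", "vm2-port", "uplink0"])
sw.receive("vm1-port", "00:50:56:aa:01:01", "00:50:56:aa:02:02")  # floods
sw.receive("vm2-port", "00:50:56:aa:02:02", "00:50:56:aa:01:01")  # learns vm2
print(sw.receive("vm1-port", "00:50:56:aa:01:01", "00:50:56:aa:02:02"))
```

Everything a real switch bolts on beyond this loop (VLAN pruning, port security, SPAN, ACLs) is precisely the "advanced functionality" the definition says the vSwitch lacks — which is the opening a third-party vSwitch vendor would fill.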

Given my previous posts on the matter, this offers two interesting and profound perspectives on the virtualization front:

  1. If you recall, I blogged back in February about my participation in a Goldman Sachs Security conference where Jayshree Ullal presented Cisco’s vision of virtualized security.  During the Q&A period after her presentation, I asked her a somewhat loaded question that went something like this:

    If now we see the consolidation of multiple OSes and applications on a single VM host, in which the bulk of traffic and data interchange is between the VMs themselves, utilizing the virtual switching fabrics (ed: software) in the VM host and never hitting the actual physical network infrastructure, where, exactly, does this leave the self-defending "network" without VM-level security functionality at the "micro-perimeters" of the VMs?

    I think that this announcement pretty much answers this question.  Cisco will take the concept that I blogged about previously, wherein they will abstract the software from the hardware and provide a virtualized version of a Catalyst as the ESX vSwitch.  I wager we will see natively in the vSwitch a subset of the security functionality one might expect in the "physical" Catalyst hardware products, as many of the capabilities still hinge on new components such as the ACE.

    Now, if the virtual switch is Cisco’s, you can expect a bevy of interaction between the "virtual" switch(es) and the physical ones that the VM hosts connect to.  This would provide interfaces between all manner of network controls and monitoring capabilities such as firewalls, IDS, IPS and SEIM, and solve the issue above by merely "offloading" this functionality via APIs to the physical boxes plumbed into the network.

    Combine that with NAC agents on the hosts and…whether or not it actually works is neither here nor there.  They told the story and here it is.  It’s good to be king.

  2. This brings us to point numero dos…and it’s a doozy.  If you think that the current crop of L2/L3 switching and routing infrastructure is fragile enough, just imagine how much fun it’s going to be trying to detect and defend against infrastructure attacks on virtual switches that open up the guts of the VM hosts and hypervisors to third parties.

    We won’t need a Blue Pill; I’ll take a cyanide capsule, instead.
    Ettercap and arp-twiddling, anyone?  If you don’t have the capability to virtualize the functional equivalent of IDS taps and/or utilize "IPS" plugins to the hypervisors, compromising a single guest OS on a VM could spell disaster that goes undetected.  We already have issues protecting physically isolated critical infrastructure; can you imagine how much fun this is going to be?

    I’m not talking about application layer attacks here, I’m talking layer 2/layer 3.  The vicious circle begins anew.  You’ll be worrying about XSS and AJAX attacks on your virtualized web servers whilst the same attacks from 10 years ago will give your shiny new virtual infrastructure a wedgie.

    And since it’s likely we’ll see a repeat of architectural car crashes as we have in the past, most of the inter-VM traffic won’t be mutually authenticated or encrypted, either.  So you’ve got that going for you…
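
For flavor, here’s a toy sketch of the kind of check a virtualized IDS tap would need to run to catch the ARP games above: record IP-to-MAC bindings from observed ARP replies and alert when a binding changes. The traffic and addresses below are fabricated, and in the scenario the text describes there is nowhere inside the vSwitch to run this logic — which is the whole point.

```python
# Detect the classic ettercap-style poisoning tell: an IP address that
# suddenly answers ARP with a different MAC than previously observed.

def detect_arp_spoofing(arp_replies):
    """arp_replies: iterable of (ip, mac) pairs seen on the segment.
    Returns an alert string for every IP whose MAC binding changes."""
    bindings, alerts = {}, []
    for ip, mac in arp_replies:
        known = bindings.get(ip)
        if known is not None and known != mac:
            alerts.append(f"ARP binding change for {ip}: {known} -> {mac}")
        bindings[ip] = mac
    return alerts

replies = [
    ("10.0.0.1", "00:50:56:aa:00:01"),   # gateway announces itself
    ("10.0.0.7", "00:50:56:aa:00:07"),
    ("10.0.0.1", "de:ad:be:ef:00:66"),   # a guest now claims the gateway IP
]
for alert in detect_arp_spoofing(replies):
    print(alert)
```
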

So, I think that this model is what Reflex was aiming for with their vIPS (from Virtual Iron) software for the virtual switch, which I blogged about here, but Cisco’s going to one-up them because of their investment in VMware, their switching acumen, and the unfair advantage of owning the virtual/logical switching/routing plane as well as the physical one.

Good times are comin’, for sure.  I’m trying not to be cynical.  I think it’s fairly obvious as to what ought to be done to secure this mess before it becomes one, but I’m not sure we’re going to be able to step out in front of this train and stop it before it reaches the station.

/Hoff

Categories: Cisco, Virtualization, VMware Tags:

Follow-Up to My Cisco/VMWare Commentary

July 28th, 2007 No comments

 

Thanks very much to whoever at Cisco linked to my previous post(s) on Cisco/VMware and Data Center 3.0 on the Cisco Networkers website!  I can tell it was a person because they misnamed my blog as "Regional Security" instead of Rational Security… you can find it under the Blogs section here. 😉

The virtualization.info site had an interesting follow-up to the VMware/Cisco posts I blogged about previously.

DataCenter 3.0 is Actually Old?

Firstly, in a post titled "Cisco announces (old) datacenter automation solution," they discuss the legacy of the VFrame product and suggest that VFrame is actually a re-branded and updated version of software from Cisco’s acquisition of TopSpin back in 2005:

Cisco is well resoluted to make the most out of virtualization hype: it first declares Datacenter 3.0 initiative (more ambitiously than IDC, which claimed Virtualization 2.0), then it re-launches a technology obtained by TopSpin acquisition in April 2005 and offered since September 2005 under new brand: VFrame.

Obviously the press release doesn’t even mention that VFrame just moved from 3.0 (which exist since May 2004, when TopSpin was developing it) to 3.1 in more than three years.

In the same posting, the ties between Cisco and VMWare are further highlighted:

A further confirmation is given by fact that VMware is involved in VFrame development program since May 2004, as reported in a Cisco confidential presentation of 2005 (page 35).

Cisco old presentation also adds a detail about what probably will be announced at VMworld, and an interesting claim:

…VFrame can provision ESX Servers over SAN.

…VMWare needs Cisco for scaling on blades…

This helps us understand even further why Mr. Chambers will be keynoting at VMworld ’07.

Meanwhile, Cisco Puts its Money where its Virtual Mouth Is

Secondly, VMware announced today that Cisco will invest $150 Million in VMware:

Cisco will purchase $150 million of VMware Class A common shares currently held by EMC Corporation, VMware’s parent company, subject to customary regulatory and other closing conditions including Hart-Scott-Rodino (HSR) review. Upon closing of the investment, Cisco will own approximately 1.6 percent of VMware’s total outstanding common stock (less than one percent of the combined voting power of VMware’s outstanding common stock).  VMware has agreed to consider the appointment of a Cisco executive to VMware’s board of directors at a future date.

Cisco’s purchase is intended to strengthen inter-company collaboration towards accelerating customer adoption of VMware virtualization products with Cisco networking infrastructure and the development of customer solutions that address the intersection of virtualization and networking technologies.

In addition, VMware and Cisco have entered into a routine and customary collaboration agreement that expresses their intent to expand cooperative efforts around joint development, marketing, customer and industry initiatives.  Through improved coordination and integration of networking and virtualized infrastructure, the companies intend to foster solutions for enhanced datacenter optimization and extend the benefits of virtualization beyond the datacenter to remote offices and end-user desktops.

It should be crystal clear that Cisco and EMC are on a tear with regard to virtualization, and that to Cisco, "bits is bits": virtualizing those bits across the application stack, network, security and storage departments, coupled with a virtualized service management layer, is integral to their datacenter strategy.

It’s also no mystery as to why Mr. Chambers is keynoting @ VMWorld now, either.

/Hoff

Categories: Cisco, Virtualization, VMware Tags:

Cisco & VMWare – The Revolution will be…Virtualized?

July 24th, 2007 No comments

During my tour of duty at Crossbeam, I’ve closely tracked the convergence of the virtualization strategies of companies such as VMWare with Cisco’s published long term product direction. 

One of the selfish reasons for doing so is that from a product-perspective, Crossbeam’s platform provides a competitively open, virtualized routing and switching platform combined with a blade-based processing compute stack powered by a hardened, Linux based operating system that allows customers to run the security applications of their choice. 

This provides an on-demand security architecture allowing customers to simply add a blade in order to add an application service component when needed.

Basically, this allows one to virtualize networking/transport, application/security contexts and security policies across any area of the network into which this service layer is plumbed, and to control the flows so as to manipulate, in serial or in parallel, the path traffic takes through these various security software components.
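
Purely as a sketch of that flow-steering idea (the application names and policies below are invented for illustration, not Crossbeam’s actual implementation): a policy maps a traffic class to an ordered chain of security applications, and each flow is pushed through its chain in series.

```python
# Each function stands in for a security application running on a blade;
# a flow records which applications inspected it as it traverses its chain.

def firewall(flow):  flow["inspected_by"].append("firewall");  return flow
def ips(flow):       flow["inspected_by"].append("ips");       return flow
def web_proxy(flow): flow["inspected_by"].append("web_proxy"); return flow

# Virtualized policy: which chain a flow traverses depends on its class.
CHAINS = {
    "web":      [firewall, web_proxy, ips],
    "internal": [firewall],
}

def process(flow):
    for app in CHAINS[flow["class"]]:   # serial path through the services
        flow = app(flow)
    return flow

f = process({"class": "web", "inspected_by": []})
print(f["inspected_by"])   # ['firewall', 'web_proxy', 'ips']
```

Adding a service component is then a policy change (append to the chain) rather than a re-cabling exercise, which is the "add a blade" value proposition described above.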

So that’s the setup.  Yes, it’s intertwined with a bit of a commercial, but hey…perhaps liberty and beer are your idea of "free," but my blogoliciousness ain’t.  What’s really interesting is some of the deeper background on the collision of traditional networking with server virtualization technology.

While it wasn’t the first time we’d heard it (and it won’t be the last), back in December 2006 Phil Hochmuth from Network World wrote a front-page article titled "Cisco’s IOS set for radical pricing, feature changes."  The article quoted Cliff Metzler, senior vice president of the company’s Network Management Technology Group, as saying these very important words:

Cisco’s intention is to decouple IOS software from the hardware it sells, which could let users add enhancements such as security or VoIP more quickly, without having to reinstall IOS images on routers and switches. The vendor also plans to virtualize many of its network services and applications, which currently are tied to hardware-specific modules or appliances.

This shift would make network gear operate more like a virtualized server, running multiple operating systems and applications on top of a VMware-like layer, as opposed to a router with a closed operating system, in which applications are run on hardware-based blades and modules. Ultimately, these changes will make it less expensive to deploy and manage services that run on top of IP networks, such as security, VoIP and management features, Cisco says.

“The way we’ve sold software in the past is we’ve bolted it onto a piece of hardware, and we shipped [customers] the hardware,” Metzler said. “We need more flexibility to allow customers to purchase software and to deploy it according to their terms.”

IOS upgrades require a reinstall of the new software image on the router or switch — which causes downtime — or, “we say, not a problem, UPS will arrive soon, here’s another blade” to run your new service or application, Metzler said. “This adds months to the deployment cycle, which is not good for customers or Cisco’s business.”

The article above fundamentally demonstrates the identical functional software-based architecture that Crossbeam offers, for exactly the right reasons: make security simpler, less expensive, easier to manage, and more flexible to deploy on hardware that scales performance-wise.

Now couple this with the announcement that John Chambers will be delivering a keynote at VMWorld and things get even more interesting in a hurry.  Alessandro Perilli over at the Virtualization.info blog shares his perspective on why this is important and what it might mean:

Chambers presence possibly means announcement of a major partnership between VMware and Cisco, which may be related to network equipment virtualization or endpoint security support.

Many customers in these years prayed to have capability to use virtual machines as routers inside VMware virtual networks. So far this has been impossible: despite Cisco proprietary IOS relies on standard x86 hardware, it still requires a dedicated EEPROM to work, which VMware doesn’t include in its virtual hardware set. Maybe Cisco is now ready to virtualize its hardware equipment.

On the other side VMware may have a deal in place with Cisco about its Assured Computing Environment (ACE) product: Cisco endpoint security solution called Network Admission Control (NAC) may work with VMware ACE as an endpoint security agent, eliminating any need to install more software inside host or guest operating systems.

In any case a partnership between VMware and Cisco may greatly enhance virtual infrastructures capabilities.

This is interesting for sure, and if you look at the demand for software flexibility combined with generally available COTS compute stacks and specific network processing where required, the notion that Cisco might partner with VMware or a similar vendor such as SWsoft looks compelling.  Of course, with functionality like KVM in the Linux kernel, there’s no reason they have to buy or ally…

Certainly there are already elements of virtualization within Cisco’s routing, switching and security infrastructure, but many might argue that it requires a refresh in order to meet the requirements of their customers.  It seems that their CEO does.

I think that this type of architecture looks promising.  Of course, you could have purchased it 6 years ago — as you can today — by talking to these folks. But I’m biased. 😉

/Hoff

Categories: Cisco, Virtualization, VMware Tags:

Heisenbugs: The Case of the Visibly Invisible Rogue Virtual Machine

May 28th, 2007 No comments

A Heisenbug is defined by frustrated programmers trying to mitigate a defect as:

     A bug that disappears or alters its behavior when one attempts to probe or isolate it.

In the case of a hardened rogue virtual machine (VM) sitting somewhere on your network, trying to probe or isolate it yields a frustration index for the IDS analyst similar to that of the pissed-off code jockey unable to locate a bug in a trace.

In many cases, simply nuking it off the network is not good enough.  You want to know where it is (logically and physically,) how it got there, and whose it is.

Here’s the scenario I was reminded of last week when discussing a nifty experience I had in this regard.  It’s dumbed down and wouldn’t pass a

Here’s what transpired for about an hour or so one Monday morning:

1) The IDP console starts barfing about unallocated RFC address space being used by a host on an internal network segment.  The traffic isn’t malicious, but the host seems to be talking to the Internet, with some DNS on the (actual name) "attacker.com" domain…

2) We see the same address popping up on the external firewall rulesets in the drop rule.

3) We start to work backwards from the sensor on the beaconing segment as well as the perimeter firewall.

4) Ping it.  No response.

5) Traceroute it.  Stops at the local default with nowhere to go since the address space is not routed.

6) Look in the switches’ CAM tables for interface usage.  The MAC is coming through the trunk uplink ports.

7) Trace it to a switch.  Isolate the MAC address, and isolate based on something unique?  Ummm…

8) On a segment with a collection of 75+ hosts with workgroup hubs…can’t find the damned thing.  IDP console still barfing.

9) One of the security team comes back from lunch.  Sits down near us and logs in.  Reboots a PC.

10) IDP alerts go dead.  All eyes on Cubicle #3.

…turns out that he had been working @ home the previous night on his laptop, upon which he uses VMware (on his home LAN) for security research, testing how our production systems will react under attack.  He was using the NAT function and was spoofing the MAC as part of one of his tests.  The machine was talking to Windows Update and his local DNS hosts on the (now) imaginary network.

He bought lunch for the rest of us that day and was reminded that while he was authorized to use such tools based upon policy and job function, he shouldn’t use them on machines plugged into the internal network…or should at least turn VMware off ;(
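
The CAM-table hunt in steps 6 through 8 can be sketched as a simple search: the rogue MAC shows up on trunk uplinks all the way up the tree, but only a non-trunk edge port actually "owns" it (and the workgroup hubs in step 8 are exactly what break this in practice — many hosts behind one edge port). The switch names, ports and MACs below are fabricated for illustration.

```python
# Locate the edge port holding a target MAC, skipping trunk/uplink ports
# (a MAC learned on a trunk only tells you which switch to check next).

def locate_mac(cam_tables, trunk_ports, target_mac):
    """cam_tables: {switch: {mac: port}}; trunk_ports: {switch: set(ports)}.
    Returns (switch, port) of the edge port holding target_mac, or None."""
    for switch, table in cam_tables.items():
        port = table.get(target_mac)
        if port is not None and port not in trunk_ports.get(switch, set()):
            return switch, port
    return None

cam = {
    "core1":   {"00:0c:29:13:37:01": "Gi1/1"},    # seen via trunk only
    "access3": {"00:0c:29:13:37:01": "Fa0/12"},   # the actual edge port
}
trunks = {"core1": {"Gi1/1"}, "access3": {"Fa0/24"}}
print(locate_mac(cam, trunks, "00:0c:29:13:37:01"))   # ('access3', 'Fa0/12')
```
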

/Hoff

Categories: Virtualization, VMware Tags: