Archive

Archive for the ‘Virtualization’ Category

Thin Clients: Does This Laptop Make My Ass(ets) Look Fat?

January 10th, 2008 11 comments

Juicy Fat Assets, Ripe For the Picking…

So here’s an interesting spin on de/re-perimeterization…if people think we cannot achieve (and cannot afford to wait for) secure operating systems, secure protocols and self-defending information-centric environments, but need to "secure" their environments today, I have a simple question supported by a simple equation for illustration (and a toy numeric sketch after the list):

For the majority of mobile and internal users in a typical corporation who use the basic set of applications:

  1. Assume a company that:
    …fits within the 90% of those who still have data centers, isn’t completely outsourced/off-shored for IT, supports a remote workforce that uses a Microsoft OS and the usual suspect applications, and doesn’t plan on utilizing distributed grid computing and widespread third-party SaaS
  2. Take the following:
    Data Breaches.  Lost Laptops.  Non-sanitized corporate hard drives on eBay.  Malware.  Non-compliant asset configurations.  Patching woes.  Hardware failures.  Device Failure.  Remote Backup issues.  Endpoint Security Software Sprawl.  Skyrocketing security/compliance costs.  Lost Customer Confidence.  Fines.  Lost Revenue.  Reduced budget.
  3. Combine With:
    Cheap Bandwidth.  Lots of types of bandwidth/access modalities.  Centralized Applications and Data. Any Web-enabled Computing Platform.  SSL VPN.  Virtualization.  Centralized Encryption at Rest.  IAM.  DLP/CMP.  Lots of choices to provide thin-client/streaming desktop capability.  Offline-capable Web Apps.
  4. Shake Well, Re-allocate Funding, Streamline Operations and "Security"…
  5. You Get:
    Less Risk.  Less Cost.  Better Control Over Data.  More "Secure" Operations.  Better Resilience.  Assurance of Information.  Simplified Operations.  Easier Backup.  One Version of the Truth (data).
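
For the literal-minded, here’s that "simple equation" as an actual back-of-the-napkin model. Every number in it is invented purely for illustration; substitute your own actuals before quoting me:

```python
# Toy model: annualized exposure of fat endpoints vs. centralized thin clients.
# All figures are made-up placeholders -- substitute your own.

def annual_exposure(seats, breach_prob, breach_cost, per_seat_security_cost):
    """Expected annual loss plus what we spend per seat trying to prevent it."""
    expected_loss = seats * breach_prob * breach_cost
    security_spend = seats * per_seat_security_cost
    return expected_loss + security_spend

seats = 5000

# Fat laptops: data lives on 5,000 disks we don't control.
fat = annual_exposure(seats, breach_prob=0.02, breach_cost=50_000,
                      per_seat_security_cost=400)

# Thin clients: data stays home; residual risk and per-seat spend shrink,
# traded against some added central infrastructure cost.
thin = annual_exposure(seats, breach_prob=0.002, breach_cost=50_000,
                       per_seat_security_cost=150) + 750_000

print(f"Fat endpoints: ${fat:,.0f}/yr")
print(f"Thin clients:  ${thin:,.0f}/yr")
```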

I really just don’t get why we continue to deploy and support remote platforms we can’t protect, allow our data to inhabit islands we can’t control, and admit the inevitability of disaster while continuing to spend our money on solutions that can’t possibly solve the problems.

If we’re going to be information centric, we should take the first rational and reasonable steps toward doing so. Until the operating systems are more secure and the data can self-describe and cause the compute and network stacks to "self-defend," why do we continue to waste our time focusing on the endpoint?

If we can isolate and reduce the number of avenues of access to data and leverage dumb presentation platforms to do it, why aren’t we?

…I mean besides the fact that an entire industry has been leeching off this mess for decades…


I’ll Gladly Pay You Tuesday For A Secure Solution Today…

The technology exists TODAY to centralize the bulk of our most important assets and allow our workforce to accomplish their goals and the business to function just as well (perhaps better) without the need for data to actually "leave" the data centers in whose security we have already invested so much money.

Many people are already doing that with their servers through the adoption of virtualization.  Now they need to do the same with their clients.

The only reason we’re now going absolutely stupid and spending money on securing endpoints in their current state is because we’re CAUSING (not just allowing) data to leave our enclaves.  In fact, with all this blabla2.0 hype, we’ve convinced ourselves we must.

Hogwash.  I’ve posted on the consumerization of IT where companies are allowing their employees to use their own compute platforms.  How do you think many of them do this?

Relax, Dude…Keep Your Firewalls…

In the case of centralized computing and streamed desktops to dumb/thin clients, the "perimeter" still includes our data centers and security castles/moats, but it also encapsulates a streamed, virtualized, encrypted, and authenticated thin-client session bubble.  Instead of being something to worry about, the endpoint is nothing more than a flickering display with a keyboard/mouse.

Let your kid use Limewire.  Let Uncle Bob surf pr0n.  Let wifey download spyware.  If my data and applications don’t live on the machine and all the clicks/mouseys are just screen updates, what do I care?
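
To make the "flickering display" point concrete, here’s one hedged illustration: a locked-down .rdp connection file that keeps drives, clipboard and printers on the server side of the bubble. The hostname is made up, these are the standard RDP file settings as I understand them, and your particular broker/SSL VPN stack will have its own equivalents:

```
full address:s:desktops.example.internal
screen mode id:i:2
redirectclipboard:i:0
redirectdrives:i:0
redirectprinters:i:0
redirectsmartcards:i:0
drivestoredirect:s:
```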

Yup, you can still use a screen scraper or a camera phone to use data inappropriately, but this is where balancing risk comes into play.  Let’s keep the discussion within the 80% of reasonable factored arguments.  We’ll never eliminate 100% and we don’t have to in order to be successful.

Sure, there are exceptions and corner cases where data *does* need to leave our embrace, but we can eliminate an entire class of problem if we take advantage of what we have today and stop this endpoint madness.

This goes for internal corporate users who are chained to their desks and not just mobile users.

What’s preventing you from doing this today?

/Hoff

Are Virtualization Laws That Are Immutable, Disputable?

January 8th, 2008 3 comments

A few months ago, Pete Lindstrom shot me over the draft of a Burton paper on virtualization security.  We sputtered back and forth at one another, I called him names, and then we had beer later.

The title of the paper was the "Five Immutable Laws of Virtualization Security."

I must admit, I reacted to what he sent me in a combinational fit of puzzlement and apathy.   I really couldn’t put my finger on why.  Was it the "not invented here syndrome?"  I didn’t think so.  So what was it that made me react the way I did?

I think that over time I’ve come to the conclusion that to me, these aren’t so much "immutable laws" but more so derivative abstractions of common sense that left me wondering what all the fuss was about.

Pete posted the five laws on his blog today.  A more detailed set of explanations can be found on the Burton blog here.

I dare you to read through these without having to re-read each of them multiple times and then re-read them in cascading sequence since (hint) they are recursive:

Law 1: Attacks against the OS and applications of a physical system have the exact same damage potential against a duplicate virtual system.

Law 2: A VM has higher risk than its counterpart physical system that is running the exact same OS and applications and is configured identically.

Law 3: VMs can be more secure than related physical systems providing the same functional service to an organization when they separate functionality and content that are combined on a physical system.

Law 4: A set of VMs aggregated on the same physical system can only be made more secure than its physical, separate counterparts by modifying the configurations of the VMs to offset the increased risk introduced by the hypervisor.

Law 5: A system containing a “trusted” VM on an “untrusted” host has a higher risk level than a system containing a “trusted” host with an “untrusted” VM.
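
For my own sanity I ended up scribbling them down as inequalities. This shorthand is mine, not Pete’s or Burton’s; read R(x) as "risk of x" and D(x) as "damage potential of x":

```latex
% My shorthand, not Pete's: R(x) = risk of x, D(x) = damage potential of x
\begin{align*}
\text{Law 1:}\quad & D(\mathrm{VM_{duplicate}}) = D(\mathrm{Physical}) \\
\text{Law 2:}\quad & R(\mathrm{VM}) > R(\mathrm{Physical}) \quad \text{(identically configured)} \\
\text{Law 3:}\quad & R(\mathrm{VMs_{separated\ roles}}) < R(\mathrm{Physical_{combined\ roles}}) \\
\text{Law 4:}\quad & R(\mathrm{VMs\ on\ one\ host}) < R(\mathrm{separate\ physicals})
  \iff \text{configs offset } \Delta R_{\mathrm{hypervisor}} \\
\text{Law 5:}\quad & R(\text{trusted VM on untrusted host}) > R(\text{untrusted VM on trusted host})
\end{align*}
```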

Ultimately, I’d suggest that for the most part, these "observations" are correct, if not oversimplified in a couple of spots.  But again, I’m left with the overall reaction of "so what?" 

Pete even mentions the various reactions he’s been getting:

I have been getting interesting reactions to these. Some say they are wrong. Some say they are common sense. Some just don’t like the word "immutable." I think they serve to clarify some of the confusion that comes up when discussing virtualization by applying fairly straightforward risk management principles.

I want to believe that somehow these "laws" will enable some sort of actionable epiphany that will magically allow me to make my virtualized systems more secure, but I’m left scratching my head wondering who the audience for this was.

I don’t think it clarifies any "confusion" regarding risk and virtualization and I’m puzzled that Burton suggests that these "laws" will enlighten anyone and dispel any confusion relating to whether or not deploying virtualization is more or less risky than not deploying virtualization:

In reality, we can apply traditional security practices to virtualization to determine whether risk increases or decreases with new virtualization architectures. It shouldn’t be surprising that the increase or decrease in risk is predicated on the current architecture. Here are five laws to live by when evaluating your virtualization architectures.

When combining the standard risk principles with an understanding of the use cases of virtualization, a set of immutable laws can be derived to assist in securing virtual environments.

So, I’m with the "common sense" crowd, since most of these "laws" — along with some practical advice to go with them — have been discussed for quite some time before the "Burton Tablets" came down from the mountain.

So I don’t disagree, but I’m reminded of a couple of good lines from a bad movie wherein the nasty knight says to the good knight "you’ve been weighed, measured and found wanting…"

So, there we are.  My $0.02.  I think I’ll add a slide or two about this at the virtualization forum next month…

/Hoff

Categories: Virtualization Tags:

Complimentary Admission to the InfoWorld Virtualization Executive Forum, February 4th, San Francisco…

January 8th, 2008 3 comments


If any of my readers are interested in attending the InfoWorld Virtualization Executive Forum in San Francisco on February 4th, 2008, I have complimentary passes available (a $795 value).

You might remember that when InfoWorld first ran this forum in New York, I was grumpy because there were no topics/speakers focused on security.  I contacted them and expressed my concern.  Alan Shimel doubted anything would come of my kvetching, but lo and behold, the organizers invited me to come and present at this show in February.

So, if you believe in something strongly enough, good things *do* happen 😉

I am speaking (the last speaker of the day — all that stands between the audience and beer…is me!) so come on down and heckle me:


  Addressing Security Concerns in Virtual Environments

Easy to create and easy to move, introducing a new software layer between hardware and operating system, and operating over virtual networks as well as physical ones, virtual servers present a new order of security risks and challenges. This session will explore the impact of virtualization on network and host security, how security solutions providers are beginning to address them, and the best practices that are emerging for securing virtual environments.

If you’re interested in attending free of charge, ping me via email and I’ll give you the details in order to register.

Hope to see you there!

/Hoff

{Edited for proper Yiddish…Thanks, Alan!}

Categories: Virtualization Tags:

2008 Security Predictions — They’re Like Elbows…

December 3rd, 2007 6 comments

Yup.  Security predictions are like elbows.  Most everyone’s got at least two, they’re usually ignored unless rubbed the wrong way but when used appropriately, can be devastating in a cage match…

So, in the spirit of, well, keeping up with the Joneses, I happily present you with Hoff’s 2008 Information (in)Security Predictions.  Most of them feature attacks/attack vectors.  A couple are ooh-aah trends.  Most of them are sadly predictable.  I’ve tried to be more specific than "cybercrime will increase."

I’m really loath to do these, but being a futurist, the only comfort I can take is that nobody can tell me that I’m wrong today 😉

…and in the words of Carnac the Magnificent, "May the winds of the Sahara blow a desert scorpion up your turban…"

  1. Nasty Virtualization Hypervisor Compromise
    As the Hypervisor gets thinner, more of the guts will need to be exposed via API or shed to management and functionality-extending toolsets, expanding the attack surface with new vulnerabilities.  To wit, a Hypervisor-compromising malware will make its first in-the-wild appearance and not only produce an exploit, but obfuscate itself thanks to the magic of virtualization in the underlying chipsets.  Hang on to yer britches, the security vendor product marketing SpecOps Generals are going to scramble the fighters with a shock and awe campaign of epic "I told you so" & "AV isn’t dead, it’s just virtualized" proportions…Security "strategery" at its finest.

  2. Major Privacy Breach of a Social Networking Site
    With the broadening reach of application extensibility and Web2.0 functionality, we’ll see a major privacy breach via social network sites such as MySpace, LinkedIn or Facebook via the usual suspects (CSRF, XSS, etc.) and via host-based Malware that 0wns unsuspecting Millennials and utilizes the interconnectivity offered to turn these services into a "social botnet" platform with a wrath the likes of which only the ungodly lovechild of Storm, Melissa, and Slammer could bring…

  3. Integrity Hack of a Major SaaS Vendor
    Expect a serious bit of sliminess with real financial impact to occur from a SaaS vendor’s offering.  With professional cybercrime on the rise, the criminals will go not only where the money is, but also after the data that describes where that money is.  Since much of the security of the SaaS model counts on the integrity and not just the availability of the hosted service, a targeted attack which holds hostage the (non-portable) data and threatens its integrity could have devastating effects on the companies who rely on it.  SalesForce, anyone?
     
  4. Targeted eBanking Compromise with substantial financial losses
    Get ready for a nasty eBanking focused compromise that starts to unravel the consumer confidence in this convenient utility; not directly because of identity abuse (note I didn’t say identity theft) but because of the business model impact it will bring to the banks.   These types of direct attacks (beyond phishing) will start to push the limits of acceptable loss for the financial institutions and their insurers and will start to move the accountability/responsibility more heavily down to the eBanker.  A tiered service level will surface with greater functionality/higher transaction limits being offered with a trade-off of higher security/less convenience.  Same goes for credit/debit cards…priceless!
  5. A Major state-sponsored espionage and cyberAttack w/disruption of U.S. government function
    We saw some of the noisier examples of low-level crack attacks via our Chinese friends recently, but given the proliferation of botnets and the inexcusably poor levels of security in government systems and networks, we’ll see a targeted attack against something significant.  It’ll be big.  It’ll be public.  It’ll bring new legislation…Isn’t there some little election happening soon?  This brings us to…
  6. Be-Afraid-A of a SCADA compromise…the lunatics are running the asylum!
    Remember that leaked DHS "turn your generator into a roman candle" video that circulated a couple of months ago?  Get ready to see the real thing on prime time news at 11.  We’ve got decades of legacy controls just waiting for the wrong guy to flip the right switch.  We just saw an "insider" of a major water utility do naughty things, imagine if someone really motivated popped some goofy pills and started playing Tetris with the power grid…imagine what all those little SCADA doodads are hooked to…
     
  7. A Major Global Service/Logistics/Transportation/Shipping/Supply-Chain Company will be compromised via targeted attack
    A service we take for granted like UPS, FedEx, or DHL will have their core supply chain/logistics systems interrupted causing the fragile underbelly of our global economic interconnectedness to show itself, warts and all.  Prepare for huge chargebacks on next day delivery when all those mofo’s don’t get their self-propelled, remote-controlled flying UFO’s delivered from Amazon.com.

  8. Mobile network attacks targeting mobile broadband
    So, you don’t use WiFi because it’s insecure, eh?  Instead, you fire up that Verizon EVDO card plugged into your laptop or tether to your mobile phone instead because it’s "secure."  Well, that’s going to be a problem next year.  Expect to see compromise of the RF you hold so dear as we all scramble to find that next piece of spectrum that has yet to be 0wn3d…Google’s 700MHz spectrum, you say? Oh, wait…WiMax will save us all…
     
  9. My .txt file just 0wn3d me!  Is nothing sacred!?  Common file formats and protocols to cause continued unnatural acts
    PDF’s, Quicktime, .PPT, .DOC, .XLS.  If you can’t trust the sanctity of the file formats and protocols from Adobe, Apple and Microsoft, who can you trust!?  Expect to see more and more abuse of generic underlying software plumbing providing the conduit for exploit.  Vulnerabilities that aren’t fixed properly, combined with a dependence on OS security functionality that’s only half baked, are going to mean that the "Burros Gone Wild" video you’re watching on YouTube is going to make you itchy in more ways than one…

  10. Converged SensorNets
    In places like the UK, we’ve seen the massive deployment of CCTV monitoring of the populace.  In places like South Central L.A., we have ballistic geo-location and detection systems to track gunshots.  We’ve got GPS in phones.  In airports we have sniffers, RFID passport processing, biometrics and "Total Recall" nudie scanners.  The PoPo have license plate recognition.  Vegas has facial recognition systems.  Our borders have motion, heat and remote sensing pods.  Start knitting this all together and you have massive SensorNets — all networked — and able to track you with military precision.  Pair that with GoogleMaps/Streets and I’ll be able to tell what color underwear you had on at the checkout counter of your local Qwik-E-Mart when you bought that mocha slurpaccino last Tuesday…please don’t ask me how I know.

  11. Information Centric Security Phase One
    It should come as no surprise that focusing our efforts on the host and the network has led to the spectacular septic tank of security we have today.  We need to focus on content in context and set policies across platform and transport to dictate who, how, when, where, and why the creation, modification, consumption and destruction of data should occur (see the toy policy sketch after this list).  In this first generation of DLP/CMF solutions (which are being integrated into the larger base of "Information"-centric "assurance" solutions), we’ve taken the first step along this journey.  What we’ll begin to see in 2008 is the information equivalent of the Mission Impossible self-destructing recording…only with a little more intelligence and less smoke.  Here come the DRM haters…
     
  12. The Attempted Coup to Return to Centralized Computing with the Paradox of Distributed Data
    Despite the fact that data is being distributed to the far reaches of the Universe, the wonders of economics combined with the utility of some well-timed technology is seeing IT & Security (encouraged by the bean counters) attempting to stuff the genie back in the bottle and re-centralize the computing (desktop, server, application and storage) experience into big boxes tucked safely away in some data center somewhere.  Funny thing is, with utility/grid computing and SaaS, the data center is but an abstraction, too.  Virtualization companies will become our dark overlords as they will control the very fabric of our digital lives…2008 is when we’ll really start to use the web as the platform for the delivery of all applications, served through streamed desktops on thinner and thinner clients.
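
Since "content in context" can sound like hand-waving, here’s a toy sketch of the idea: a policy keyed on who/what/where/how that decides what may happen to a piece of classified data. It’s purely illustrative, not any vendor’s DLP/CMF engine, and every name in it is made up:

```python
# Toy sketch of "content in context": a default-deny policy over
# who/action/where/transport, deciding what may happen to classified data.
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    who: str        # authenticated identity
    action: str     # create | modify | consume | destroy
    where: str      # managed-endpoint | thin-client | unknown
    transport: str  # encrypted | cleartext

POLICY = [
    # (classification, predicate, verdict) -- first match wins
    ("secret", lambda c: c.where == "thin-client" and c.transport == "encrypted", "allow"),
    ("secret", lambda c: True, "deny"),
    ("public", lambda c: True, "allow"),
]

def decide(classification: str, ctx: Context) -> str:
    for cls, predicate, verdict in POLICY:
        if cls == classification and predicate(ctx):
            return verdict
    return "deny"  # default-deny for anything unclassified

print(decide("secret", Context("hoff", "consume", "thin-client", "encrypted")))  # allow
print(decide("secret", Context("bob", "consume", "unknown", "cleartext")))       # deny
```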

So, that’s all I could come up with.  I don’t really have a formulaic empirical model like Stiennon; I just have a Guinness and start complaining.

In more ways than one, I hope I’m terribly wrong on most of these.

/Hoff

[Edit: Please see my follow-on post titled "And Now Some Useful 2008 Information Survivability Predictions," which speaks to some of the less gloomy things I predict will happen in 2008.]

Hypervisors Are Becoming a Commodity…Virtualization Is a Feature?

November 14th, 2007 No comments

A couple of weeks ago I penned a blog entry titled "The Battle for the HyperVisor Heats Up" in which I highlighted an announcement from Phoenix Technologies detailing their entry into the virtualization space with their BIOS-enabled VMM/Hypervisor offering called HyperCore.

This drew immediate parallels (no pun intended) to VMware and Xen’s plans to embed virtualization capabilities into hardware.

The marketing continues this week with interesting announcements from Microsoft, Oracle and VMware:

  1. VMware offers VMware Server 2 as a free virtualization product to do battle against…
  2. Oracle offers "Oracle VM" for free (with paid support if you like), which claims to be three times as efficient as VMware and is based on Xen.
  3. Microsoft officially re-badged its server virtualization technology as Hyper-V (née Viridian), detailing both a stand-alone Hyper-V Server as well as technology integrated into W2K8 Server.

It seems that everyone and their mother is introducing a virtualization platform and the underpinning of commonality between basic functionality demonstrates how the underlying virtualization enabler — the VMM/Hypervisor — is becoming a commodity.

We are sure to see fatter, thinner, faster, "more secure" or more open Hypervisors, but this will be an area with less and less differentiation.  Table stakes.  Everything’s becoming virtualized, so a VMM/Hypervisor will be the underlying "OS" enabling that transformation.

To illustrate the commoditization trend as well as a rather fractured landscape of strategies, one need only look at the diversity in existing and emerging VMM/Hypervisor solutions.   Virtualization strategies are beginning to revolve around a set of distinct approaches where virtualization is:

  1. Provided for and/or enhanced in hardware (Intel, AMD, Phoenix)
  2. A function of the operating system (Linux, Unix, Microsoft)
  3. Delivered by means of an enabling software layer (née platform) that is deployed across your entire infrastructure (VMware, Oracle)
  4. Integrated into the larger Data Center "Fabric" or Data Center OS (Cisco)
  5. Transformed into a Grid/Utility Computing model for service delivery

The challenge for a customer is deciding in whom to invest now.  Given the fact that there is not a widely-adopted common format for VM standardization, the choice today of a virtualization vendor (or vendors) could profoundly affect one’s business in the future, since we’re talking about a fundamental shift in how your "centers of data" manifest.

What is so very interesting is that if we accept virtualization as a feature, defined as an abstracted platform isolating software from hardware, then the next major shift is the extensibility, manageability and flexibility of the solution offering, as well as how partnerships shake out between the "platform" providers and the purveyors of toolsets.

It’s clear that VMware’s lead in the virtualization market is right in line with how I described the need for differentiation and extensibility, both internally and via partnerships.

VMotion is a classic example; it’s clearly an internally-generated killer app that the other players do not currently have, and it really speaks to being able to integrate virtualization as a "feature" into the combined fabric of the data center.  Binding networking, storage and computing together is critical.  VMware has a slew of partnerships (and potential acquisitions) that enable even greater utility from their products.

Cisco has already invested in VMware and a recent demo I got of Cisco’s VFrame solution shows they are serious about being able to design, provision, deploy, secure and manage virtualized infrastructure up and down the stack, including servers, networking, storage, business process and logic.

In the next 12 months or so, you’ll be able to buy a Dell or HP server using Intel or AMD virtualization-enabled chipsets pre-loaded with multiple VMM/Hypervisors in either flash or BIOS.  How you manage, integrate and secure it with the rest of your infrastructure — well, that’s the fun part, isn’t it?

I’ll bet we’ll see more and more "free" commoditized virtualization platforms with the wallet ding coming from the support and licenses to enable third party feature integration and toolsets.

/Hoff

The Battle For the HyperVisor Heats Up…

October 27th, 2007 2 comments

The battle for Hypervisor supremacy is beginning to heat up.  While Geordi LaForge certainly had a primo "hypervisor" unrivaled by shade designers across the galaxy, let’s focus on the virtualization persuasion.  I promise, it’s equally exciting…

Besides the more well-known players in the virtualization space such as VMware, Citrix (née XenSource), Sun and Microsoft working diligently to differentiate and provide lighter-weight and more secure hypervisors, I found the news regarding Phoenix Technologies’ entry into the VMM wars quite interesting on several fronts.  Here’s what they have to say:

  1. Phoenix will provide a very thin footprint hypervisor called HyperCore that sits on top of their SecureCore BIOS which will be loaded before any other OS.
  2. HyperCore would naturally support the installation/execution of non-virtualized OS’s such as Vista, Linux, etc. running on top of it.
  3. HyperCore provides a capability called HyperSpace, which offers self-contained embedded operating systems/virtual machines, allowing users to dynamically switch between them and utilize them as isolated virtual appliances delivered in firmware.  The prospective HyperSpace applications include web browsing, messaging, management and encrypted data stores.
  4. Phoenix has engaged with Joanna Rutkowska and her Invisible Things Lab to leverage her Blue Pill research, which she is (now) describing as a very lightweight hypervisor (rather than a hypervisor rootkit), in order to deliver a more secure VMM.

I have yet to see the technical details regarding how VMware’s 3i product loads out of flash, but Phoenix is certainly on a collision course with VMware in this space, given their BIOS implementation.

This is very interesting because Phoenix has great BIOS market penetration (about 50% of the overall market and 60% of the mobile x86 market) and is in a good position to offer desktop machines with a compelling option for secure application environments.  I mention desktops and not servers because the folks I’ve spoken to expect to utilize the VMM provided by their main virtualization infrastructure vendor for support, performance and reliability reasons.

I wonder how many more boutique VMM’s we will see popping up soon and driving innovation? 

Hypervisors today really represent an evolution in operating systems. In many respects, the virtualization capabilities they bring to the table really make Scott McNealy’s mantra of "the Network is the Computer" seem just that much more profound.  Everything’s virtualizing; applications, network, storage, policy, data, operating systems…

In fact, when I was at VMworld, I spent a considerable amount of time in the VMware and Cisco booths speaking with product managers and engineers about service composition, provisioning and deployment in the virtualized infrastructure and it became clear that the game was going to change in wholesale fashion very soon.

Chambers referenced the Datacenter OS in his keynote and described that we’ll shortly have a "fabric" which interconnects compute stacks, storage and networking which is managed by tools that allow for realtime dynamic infrastructure, including virtual machines.

You can bet that we’ll see build/buy/ally decisions being executed here soon enough as the virtualization players jockey for position within this ultra-hot segment.

I plan to follow up with another post on the topic of the Datacenter OS shortly, but it will be interesting to see where new players factor into the mix and how sustainable they are under the weight of companies like Cisco, VMware and Microsoft.

/Hoff

Categories: Virtualization Tags:

Version 1.0 of the CIS Benchmark for VMware ESX Server Available

October 19th, 2007 No comments

Version 1.0 of the Center for Internet Security (CIS) benchmark for securing VMware ESX server is available.  This is specific to version 3.x of ESX and covers the basic best practices of preparing an ESX server for deployment.

We’ve still got a ton of stuff that didn’t make the deadline cut-off for the first version of the document and will appear in follow-on iterations, but it’s a good start.  Please sign up if you can contribute to making this document even better.

You can find it here.
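
If you’re curious what a benchmark item boils down to in practice, here’s one trivial illustration in the spirit of the document: checking that the ESX 3.x service console denies direct root logins over SSH. The script is my own sketch, not code from the benchmark itself, so treat it accordingly:

```python
# One illustrative check in the spirit of the CIS ESX benchmark: confirm
# the service console's sshd denies direct root logins. Run on the ESX 3.x
# service console; my sketch only -- the benchmark prescribes far more.

def root_ssh_disabled(path="/etc/ssh/sshd_config"):
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # drop comments
            if line.lower().startswith("permitrootlogin"):
                return line.split()[-1].lower() == "no"
    return False  # OpenSSH of that era defaulted to 'yes' when unset

if __name__ == "__main__":
    print("PermitRootLogin no:", "PASS" if root_ssh_disabled() else "FAIL")
```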

/Hoff

Categories: Virtualization, VMware Tags:

Virtualization Security Training?

October 1st, 2007 10 comments

I just read an interesting article written by Patrick Thibodeau from Computerworld which described the difficulties IT managers are having finding staffers with virtualization experience and expertise:

As more organizations adopt server virtualization software, they’re also looking to hire people who have worked with the technology in live applications.

But such workers can be hard to find, as Joel Sweatte, IT manager at East Carolina University’s College of Technology and Computer Science, recently discovered when he placed a help-wanted ad for an IT systems engineer with virtualization skills.

Sweatte received about 40 applications for the job at the Greenville, N.C.-based university, but few of the applicants had any virtualization experience, and he ended up hiring someone who had none. “I’m fishing in an empty ocean,” Sweatte said.

To give his new hire a crash course in virtualization, Sweatte brought him to market leader VMware Inc.’s annual user conference in San Francisco last month. “That’s a major expenditure for a university,” Sweatte said of the conference and travel costs. “[But] I wanted him to take a drink from the fire hose.”

If the industry is having trouble finding IT generalists with training in virtualization, I can only imagine the dearth of qualified virtualization security experts in the hopper.  I wonder when the first SANS course in virtualization security will surface?

I’m interested in understanding how folks are approaching security training for their server ops, audit, compliance and security teams.  If you wouldn’t mind, please participate in the poll below.  This is the first time I’ve used Visu Polls, and you’ll need to enable scripting/Flash to make this work:

Categories: Virtualization Tags:

Opening VMM/HyperVisors to Third Parties via API’s – Goodness or the Apocalypse?

September 27th, 2007 2 comments

This is truly one of those times that we’re just going to have to hold our breath and wait and see…

Prior to VMworld, I blogged about the expected announcement by Cisco and VMware that the latter would be opening the HyperVisor to third party vendors to develop their virtual switches for ESX.

This is extremely important in the long term, because security vendors today who claim to have security solutions for virtualized environments are basically doing nothing more than making virtual appliance versions of their software that run as yet another VM on a host alongside critical applications.

These virtual appliances/applications are the same ones you might find running on their stand-alone physical appliance counterparts, and they have no access to the HyperVisor (or the vSwitch) natively.  Most of them therefore rely upon enabling promiscuous mode on vSwitches to gain visibility into inter-VM traffic, which uncorks a nasty security genie of its own.  Furthermore, they impose load and latencies on the VM’s as they compete for resources with the very assets they seek to protect.
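
To make the promiscuous-mode point concrete, here’s a minimal sketch of what one of these appliances is effectively doing to "see" inter-VM traffic: a raw socket on a guest interface whose port group allows promiscuous mode. It assumes a Linux guest with root and Python 3.8+, and note that any *other* VM on that port group could run the very same thing; that’s the genie.

```python
# Minimal sketch of how a "virtual appliance" IDS sees inter-VM traffic:
# a raw socket on a guest NIC whose vSwitch port group permits promiscuous
# mode, with the NIC flipped promiscuous first (e.g. 'ip link set eth0
# promisc on'). Linux guest, root, Python 3.8+.
import socket
import struct

ETH_P_ALL = 0x0003  # every ethertype, not just IP

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
s.bind(("eth0", 0))  # adjust to your guest's interface name

while True:
    frame, _ = s.recvfrom(65535)
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    print(f"{src.hex(':')} -> {dst.hex(':')}  type=0x{ethertype:04x}  {len(frame)} bytes")
```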

The only exception to this that I know of currently is Blue Lane, who actually implement their VirtualShield product as a HyperVisor plug-in, which gives them tremendous advantages over products like Reflex and Catbird (I will compare all of these in a follow-on post).  Ed: I have been advised that this statement needs revision based upon recent developments — I will, as I mention, profile a comparison of Blue Lane, Catbird and Reflex in a follow-on post.  Sorry for the confusion.

At any rate, the specific vSwitch announcement described above was not forthcoming at the show, but a more important rumbling became obvious on the show floor after speaking with several vendors such as Cisco, Blue Lane, Catbird and Reflex; VMware was quietly beginning to provide third parties access to the HyperVisor by exposing API’s per this ZDNet article titled "VMware shares secrets in security drive":

Virtualization vendor VMware has quietly begun sharing some of its software secrets with the IT security industry under an unannounced plan to create better ways of securing virtual machines.

VMware has traditionally restricted access to its hypervisor code and, while the vendor has made no official announcement about the API sharing program tentatively called "Vsafe," VMware founder and chief scientist Mendel Rosenblum said that the company has started sharing some APIs (application program interfaces) with security vendors.

I know I should be happy about this, and I am, but now that we’re getting closer to the potential for better VM security, the implementation deserves some scrutiny.  We don’t have that yet because most of the vSafe detail is still hush-hush.

This is a double-edged sword.  While it represents a fantastic opportunity to expose functionality and provide visibility into the very guts of the VMM, allowing third party software to interact with and control the HyperVisor and dependent VM/GuestOS’s, opening the kimono also presents a huge new attack surface for malicious use.

"We would like at a high level for (VMware’s platform) to be a better
place to run," he said. "To try and realize that vision, we have been
partnering with experts in security, like the McAfees and Symantecs,
and asking them about the security issues in a virtual world."

I’m not quite sure I follow that logic.  McAfee and Symantec are just as clueless as the bulk of the security world when it comes to security issues in a virtual world.  Their answer is usually "do what you normally do and please make sure to buy a license for our software on each VM!" 

The long-term play for McAfee and Symantec can’t be to continue to deploy bloatware on every VM.  Users won’t put up with the performance hit or the hit in their wallet.  They will have to re-architect to take advantage of the VMM API’s just like everyone else, but they have a more desperate timeframe:

Mukil Kesavan, a VMware intern studying at the University of Rochester, demonstrated his research into the creation of a host-based antivirus scanning solution for virtualized servers at the conference. Such a solution would enable people to pay for a single antivirus solution across a box running multiple virtual servers, rather than having to buy an antivirus solution for each virtual machine.

Licensing is going to be very critical to companies like these two very shortly as it’s a "virtual certainty" that the cost savings enjoyed by consolidating physical servers will place pressure on reducing the software licensing that goes along with it — and that includes security.


Rosenblum says that some of the traditional tools used to protect a hardware server work just as well in a virtualized environment, while others "break altogether."


"We’re trying to fix the things that break, to bring ourselves up to
the level of security where physical machines are," he said. "But we
are also looking to create new types of protection."

Rosenblum said the APIs released as part of the initiative
offer security vendors a way to check the memory of a processor, "so
they can look for viruses or signatures or other bad things."

Others allow a security vendor to check the calls an
application within a virtual machine is making, or at the packets the
machine is sending and receiving, he said.

I think Rosenblum’s statement is interesting in a couple of ways:

  1. His goal, as quoted, is to fix the things that virtualization breaks and bring security up to the level of physical servers.  Unlike every other statement from VMware spokesholes, this statement therefore suggests that virtualized environments are less secure than physical ones.  Huh.
  2. I think this area of focus — when combined with the evolution of the Determina acquisition — will yield excellent security gains.  Extending the monitoring and visibility into the isolated memory spaces of the virtual processors in a VM means that we may be able to counter attacks without having to depend solely on the virtual switches; it gives you application-level visibility without the need for another agent (see the sketch after this list).
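
Since the vSafe details are still hush-hush, here’s a purely hypothetical sketch of what "check the memory of a processor" might look like from a vendor’s seat: sweeping a guest memory image for known-bad byte signatures with no in-guest agent. The read_guest_memory callback and the signatures are inventions of mine; the real API almost certainly looks nothing like this.

```python
# Purely hypothetical sketch of hypervisor-assisted memory scanning.
# read_guest_memory() stands in for whatever the real (unpublished)
# vSafe API exposes -- an assumption on my part, not VMware's interface.

KNOWN_BAD = {
    b"EVILROOTKIT": "demo-rootkit-marker",    # made-up signature
    b"\x90\x90\x90\x90\xcc": "toy-nop-sled",  # made-up signature
}

def scan_guest(read_guest_memory, mem_size, chunk=4096, overlap=16):
    """Sweep guest memory for signatures, overlapping chunk boundaries
    so patterns that straddle them aren't missed."""
    hits = []
    prev_tail = b""
    for offset in range(0, mem_size, chunk):
        data = prev_tail + read_guest_memory(offset, chunk)
        for sig, name in KNOWN_BAD.items():
            if sig in data:
                hits.append((offset, name))
        prev_tail = data[-overlap:]
    return hits

# Stand-in "guest": a plain bytearray with a planted marker.
fake_guest = bytearray(64 * 1024)
fake_guest[12345:12356] = b"EVILROOTKIT"
read = lambda off, n: bytes(fake_guest[off:off + n])

for offset, name in scan_guest(read, len(fake_guest)):
    print(f"hit near offset {offset:#x}: {name}")
```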

The Determina acquisition is really a red herring for VMware.  Determina’s "memory firewall" seeks to protect a system "…from buffer overflow attacks, while still allowing the system to run at high speeds. It also developed ‘hot-patching’ technology, which allows servers to be patched on the fly, while they are still running."  I’ve said before that this acquisition was an excellent move.  Let’s hope the integration goes well.

If you imagine this chunk built into the VMM, then the combination of exposed VMM API’s and a lightweight VMM running in hardware (flash), embedded natively into a server minus the bloated service console, really starts to head down an interesting path.  This is what ESX Server 3i is designed to provide:

ESX Server 3i has considerable advantages over its predecessors from a security standpoint. In this latest release, which will be available in November, VMware has decoupled the hypervisor from the service console it once shipped with. This console was based on a version of the Red Hat Linux operating system.

As such, ESX 3i is a mere 32MB in size, rather than 2GB.

Some 50 percent of the vulnerabilities that VMware was patching in prior versions of its software were attributable to the Red Hat piece, not the hypervisor.

"Our hope is that those vulnerabilities will all be gone in 3i," Rosenblum said.

Given Kris Lamb’s vulnerability distribution data from last week, I can imagine that everyone hopes that these vulnerabilities will all be gone, too.  I wonder if Kris can go back and isolate how many of the vulns listed as "First Party" were attributable to the service console (the underlying RH Linux OS) accompanying the HyperVisor.  This would be good to know.  Kris? 😉
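
For the record, the slice I’m asking Kris for is trivial arithmetic once each vuln is tagged by component. A sketch with placeholder records (emphatically not Kris’s actual data):

```python
# Sketch of the breakdown I'm asking for: of the "first party" ESX vulns,
# how many live in the Red Hat-based service console vs. the hypervisor
# itself? These records are placeholders, not Kris's actual data.
from collections import Counter

vulns = [
    {"id": "placeholder-1", "component": "service console"},
    {"id": "placeholder-2", "component": "hypervisor"},
    {"id": "placeholder-3", "component": "service console"},
    {"id": "placeholder-4", "component": "management tools"},
]

tally = Counter(v["component"] for v in vulns)
total = sum(tally.values())
for component, count in tally.most_common():
    print(f"{component}: {count} ({count / total:.0%})")
```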

At any rate, life’s about trade-offs and security’s no different.  I think that as we see the API’s open up, so will more activity designed to start tearing at the fleshy underbelly of the VMM’s.  I wonder if we’ll see attacks specific to flash hardware when 3i comes out?

/Hoff

(P.S. Not to leave XenSource or Viridian out of the mix…I’m sure that their parent companies (Citrix & Microsoft), who have quite a few combined security M&A transactions behind them, are not dragging their feet on security portfolio integration, either.)

 

Categories: Virtualization, VMware Tags:

What Do the Wicked Witch of the East and a Stranded House Ditched on the Freeway Have to Do with Rogue Virtualization Deployments?

September 26th, 2007 1 comment

OK, hang on tight.  This one’s full of non-sequiturs, free-form associations, housemoving debacles, and several "Wizard of Oz" references…

First comes the setup, care of BoingBoing, wherein a man who has permission to move his house down one specific freeway route takes another route instead without telling anyone:

Apparently some guy ditched his house on the Hollywood Freeway, and it’s been there since Saturday.

Patrick Richardson’s now immobile home was being moved Saturday from Santa Monica to Santa Clarita when several mishaps, including a roof-shredding blow while attempting to pass beneath an overpass, slowed its progress and it fell off its trailer.

Richardson, 45, got an oversized load permit from the California Department of Transportation. But instead of following the authorized Santa Monica-San Diego-Golden State freeways route, authorities said, he headed through downtown Los Angeles and then onto the Hollywood Freeway.

In the downtown area, the wheels started falling off, California Highway Patrol Officer Jason McCutcheon said.

Now the punchline, courtesy of ComputerWorld, wherein IT managers describe taking their own interesting routes unannounced whilst adopting virtualization.

A couple of these choice snippets seem to indicate that many corporate IT managers are ignoring posted routes, choosing different off-ramps, and often experience the virtual equivalent of losing their roofs, feeling the wheels come off and leaving their infrastructure stuck on the information superhighway:

IT managers at some companies can feel forced to hide plans from end users and vendors in order to overcome potential objections to virtualization, said IT professionals and analysts attending Computerworld’s Infrastructure Management World (IMW) conference, held earlier this month in Scottsdale, Ariz.

In some cases, end users object to virtualization because they’re concerned that virtual machines lack the security and performance of dedicated servers.

Companies are taking a variety of measures to overcome such obstacles, including adopting “don’t ask, don’t tell” policies in order to get virtual applications running without notifying users and vendors.

Some IT professionals at the conference defended decisions to keep users out of the loop, while others said such dishonest dealings could prove tricky.

“It’s not like we’re hiding anything,” said Wendy Saadi, a virtualization project manager for the city government of Mesa, Ariz. “My users don’t care what servers we run their applications on, for the most part, as long as it all works.”

However, Saadi noted that an initial effort by a small Mesa IT team to implement virtualization without notifying users — or the rest of the IT organization — did force a change in direction.

“When we first started, [the small team] watched training videos about how to virtualize everything without asking anyone first,” Saadi said. “So they did that, and we were getting a reputation [among users and other Mesa IT managers] as ‘that’ server group. We put the brakes on everything.”


Software vendors are also erecting barriers to efforts to set up virtual computing systems, according to IMW attendees.

Some vendors won’t support their software at all if it’s run on virtual machines, they said. Those that do support virtualized deployments have widely varied pricing schemes.

David Hodge, manager of computer systems at Systech Inc., a Woodridge, Ill.-based vendor of billing and dispatch software for concrete mixers, is one IT staffer who doesn’t tell his vendors and end users about virtualization projects right away. His own employer, notably, is a software vendor that prohibits its customers from virtualizing its software.

“We’re one of those vendors that doesn’t allow our customers to do virtualization, but I’m off in my corner doing it,” he acknowledged. “It makes my job easier to just put it out there and then tell [users] later. I eventually do tell them, but just not during the initial period.”

Herb…cleanup, aisle seven!

Wow.  This is why trying to fix social problems with technology will never work.  The last time we tried to mix magic and housemoving, a house ended up dropped on a witch.

Sure, it all ended well, but the Scarecrow (InfoSec), the Lion (compliance/audit) and the Tin Man (IT) went through hell to get there…I guess there’s no place like /var/home

Clicking our heels ain’t gonna make stuff like this better anytime soon.  We need to get our arms around the policies regarding virtualization deployments *before* they start happening, or else you can expect to be pulling folks out from under the collapsed weight of their datacenters.

…if I only had a brain…you got all the references, right?  I knew that you would!

/Hoff

Categories: Virtualization Tags: