Archive

Archive for the ‘Virtualization’ Category

Follow-Up to My Cisco/VMWare Commentary

July 28th, 2007 No comments

 

Thanks very much to whomsoever at Cisco linked to my previous post(s) on Cisco/VMware and the Data Center 3.0 on the Cisco Networkers website! I can tell it was a person because they misnamed my blog as "Regional Security" instead of Rational Security… you can find it under the Blogs section here. 😉

The virtualization.info site had an interesting follow-up to the VMware/Cisco posts I blogged about previously.

DataCenter 3.0 is Actually Old?

Firstly, in a post titled "Cisco announces (old) datacenter automation solution," they discuss the lineage of the VFrame product and suggest that VFrame is actually a re-branded and updated version of software that came from Cisco’s acquisition of TopSpin back in 2005:

Cisco is well resoluted to make the most out of virtualization hype: it first declares Datacenter 3.0 initiative (more ambitiously than IDC, which claimed Virtualization 2.0), then it re-launches a technology obtained by TopSpin acquisition in April 2005 and offered since September 2005 under new brand: VFrame.

Obviously the press release doesn’t even mention that VFrame just moved from 3.0 (which exist since May 2004, when TopSpin was developing it) to 3.1 in more than three years.

In the same posting, the ties between Cisco and VMWare are further highlighted:

A further confirmation is given by fact that VMware is involved in VFrame development program since May 2004, as reported in a Cisco confidential presentation of 2005 (page 35).

Cisco old presentation also adds a detail about what probably will be announced at VMworld, and an interesting claim:

…VFrame can provision ESX Servers over SAN.

…VMWare needs Cisco for scaling on blades…

This helps us understand even further why Mr. Chambers will be keynoting at VMWorld ’07.

Meanwhile, Cisco Puts its Money where its Virtual Mouth Is

Secondly, VMware announced today that Cisco will invest $150 Million in VMware:

Cisco will purchase $150 million of VMware Class A common shares currently held by EMC Corporation, VMware’s parent company, subject to customary regulatory and other closing conditions including Hart-Scott-Rodino (HSR) review. Upon closing of the investment, Cisco will own approximately 1.6 percent of VMware’s total outstanding common stock (less than one percent of the combined voting power of VMware’s outstanding common stock). VMware has agreed to consider the appointment of a Cisco executive to VMware’s board of directors at a future date.

Cisco’s purchase is intended to strengthen inter-company collaboration towards accelerating customer adoption of VMware virtualization products with Cisco networking infrastructure and the development of customer solutions that address the intersection of virtualization and networking technologies.

In addition, VMware and Cisco have entered into a routine and customary collaboration agreement that expresses their intent to expand cooperative efforts around joint development, marketing, customer and industry initiatives. Through improved coordination and integration of networking and virtualized infrastructure, the companies intend to foster solutions for enhanced datacenter optimization and extend the benefits of virtualization beyond the datacenter to remote offices and end-user desktops.

It should be crystal clear that Cisco and EMC are on a tear with regard to virtualization and that, to Cisco, "bits is bits"; virtualizing those bits across the app stack, network, security and storage departments, coupled with a virtualized service management layer, is integral to their datacenter strategy.

It’s also no mystery as to why Mr. Chambers is keynoting @ VMWorld now, either.

/Hoff

Categories: Cisco, Virtualization, VMware Tags:

Cisco Responds to My Data Center Virtualization Post…

July 24th, 2007 2 comments

"…I will squash him like a liiiiittle bug, that Hoff!"

OK, well they weren’t responding directly to my post from last night, but as they say in the big show, "timing is everything."

My last blog entry detailed some navel gazing regarding some interesting long-term strategic moves by Cisco to further embrace the virtualized data center and the impact this would have on current and future product roadmaps.  I found it very telling that Chambers will be keynoting at this year’s VMWorld, and I speculated about what this means for the future.

Not 8 hours after my posting (completely coincidental, I’m sure 😉) the PR machine spit out the following set of announcements from Networkers Cisco Live titled "Cisco Unveils Plans to Transform the Data Center."  You can find more detailed information on Cisco’s website here.

This announcement focuses on outlining some of the near-term (2 year) proof points and touts the introduction of "…New Data Center Products, Services and Programs to Support a Holistic View of the Data Center."

There’s an enormous amount of data to digest in this announcement, but the interesting bits for me to focus on are the two elements pertaining to security virtualization as well as service composition, provisioning and intelligent virtualized service delivery.   This sort of language is near and dear to my heart.

I’m only highlighting a small subsection of the release as there is a ton of storage, data mobility, multiservice fabric and WAAS stuff in there too.  This is all very important stuff, but I wanted to pay attention to the VFrame Data Center orchestration platform and the ACE XML security gateway functions since they pertain to what I have been writing about recently:

If you can choke back the bile from the  "Data Center v3.0" moniker:

…Cisco announced at a press conference today its vision for next-generation data centers, called Data Center 3.0. The Cisco vision for Data Center 3.0 entails the real-time, dynamic orchestration of infrastructure services from shared pools of virtualized server, storage and network resources, while optimizing application performance, service levels, efficiency and collaboration.

Over the next 24 months, Cisco will deliver innovative new products, programs, and capabilities to help customers realize the Cisco Data Center 3.0 vision. New products and programs announced today support that vision, representing the first steps in helping customers to create next-generation data centers.

Cisco VFrame Data Center

VFrame Data Center (VFrame DC) is an orchestration platform that leverages network intelligence to provision resources together as virtualized services. This industry-first approach greatly reduces application deployment times, improves overall resource utilization, and offers greater business agility. Further, VFrame DC includes an open API, and easily integrates with third party management applications, as well as best-of-breed server and storage virtualization offerings.

With VFrame DC, customers can now link their compute, networking and storage infrastructures together as a set of virtualized services. This services approach provides a simple yet powerful way to quickly view all the services configured at the application level to improve troubleshooting and change management. VFrame DC offers a policy engine for automating resource changes in response to infrastructure outages and performance changes. Additionally, these changes can be controlled by external monitoring systems via integration with the VFrame DC web services application programming interface (API).

From my view of the world, these two elements represent a step in the right direction for Cisco.  Gasp!  Yes, I said it.  While Chambers prides himself on hyping Cisco’s sensitivity to "market transitions," it’s clear that Cisco gets that virtualization across the network, host and storage is actually a real market.  They’re still working on the security piece; however they, like Microsoft, mean business when they enter a space, and there’s no doubt they’re swinging for the fences with VFrame.

I think the VFrame API is critical; how robust it is will determine VFrame’s success.  It’s interesting that VFrame is productized as an appliance, but I think I get what Chambers is going to be talking about at VMWorld — how VFrame will interoperate/interact with VMWare provisioning and management toolsets.

Interestingly, the UI and template functionality looks a hell of a lot like some others I’ve blogged about and is meant to provide an umbrella management "layer" that allows for discovery, design, provisioning, deployment and automation of services and virtualized components across resource pools of servers, network components, security and storage:

Cisco VFrame Data Center components include:

  • Cisco VFrame Data Center Appliance: Central controller that connects to Ethernet and Fibre Channel networks
  • Cisco VFrame Data Center GUI: Java-based client that accesses application running on VFrame Data Center Appliance
  • Cisco VFrame Web Services Interface and Software Development Kit:
    Programmable interface that allows scripting of actions for Cisco
    VFrame Data Center
  • Cisco VFrame Host Agent: Host agent that provides server heartbeat,
    capacity utilization metrics, shutdown, and other capabilities
  • Cisco VFrame Data Center Macros: Open interface that allows administrators to create custom provisioning actions

That’s ambitious to say the least.
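
Since the release leans so hard on that web services API, it’s worth picturing what "external monitoring systems controlling resource changes" actually looks like in practice. Cisco hasn’t published the VFrame DC interface here, so the sketch below is purely illustrative of the pattern — the endpoint, payload and parameters are all invented, not VFrame’s real API:

    # Hypothetical sketch ONLY: the endpoint, payload and parameters below are
    # invented to illustrate the "external monitor drives re-provisioning via a
    # web-services API" pattern; they are not VFrame DC's actual interface.
    import json
    import urllib.request

    ORCHESTRATOR_URL = "https://orchestrator.example.com/api/services"  # invented

    def replace_failed_node(service_id: str, failed_host: str) -> None:
        payload = json.dumps({
            "action": "reprovision",
            "service": service_id,
            "remove": failed_host,
            "policy": "same-resource-pool",   # pull a spare from the same pool
        }).encode()
        request = urllib.request.Request(
            ORCHESTRATOR_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            print(response.status, response.read().decode())

    # e.g. called from a monitoring system's alert handler:
    # replace_failed_node("web-tier-prod", "blade-07")

The plumbing isn’t the point; the point is that the robustness of exactly this kind of call — and what happens when it fails halfway through a re-provisioning run — is what will make or break VFrame.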

I’m still having a raucous debate with myself regarding where a lot of this stuff belongs (in the network or as a service layer), and I maintain the latter.  Innovation driven by companies such as 3Tera demonstrates that the best ideas are always copied by the 800-pound gorillas once they become mainstream.

Enhanced Cisco ACE XML Gateway Software

The new Cisco Application Control Engine (ACE) Extensible Markup Language (XML) Gateway software delivers enhanced capabilities for enabling secure Web services, providing customers with better management, visibility, and performance of their XML applications and Web 2.0 services. The new software includes a wide variety of new capabilities and features plus enhanced performance monitoring and reporting, providing improved operations and capacity planning for Web services secured by the Cisco ACE XML Gateway.

I’d say this is a long-overdue component for Cisco; since Chambers has been doing nothing but squawking about Web 2.0, collaboration, etc., integrating XML security into the security portfolio is a must, especially as we see XML become the Internet-based messaging bus for just about everything these days.

All in all, I’d say Cisco is doing a good job of continuing to push the message along, and while one shouldn’t take this faint praise as me softening my stance on Cisco’s execution potential, it remains to be seen whether trying to be everything to everyone will deliver levels of service commensurate with what customers need.

Only time will tell.

/Hoff

 

Categories: Cisco, Networking, Virtualization Tags:

Cisco & VMWare – The Revolution will be…Virtualized?

July 24th, 2007 No comments

During my tour of duty at Crossbeam, I’ve closely tracked the convergence of the virtualization strategies of companies such as VMWare with Cisco’s published long term product direction. 

One of the selfish reasons for doing so is that from a product-perspective, Crossbeam’s platform provides a competitively open, virtualized routing and switching platform combined with a blade-based processing compute stack powered by a hardened, Linux based operating system that allows customers to run the security applications of their choice. 

This provides an on-demand security architecture allowing customers to simply add a blade in order to add an application service component when needed.

Basically, this allows one to virtualize networking/transport, application/security contexts and security policies across any area of the network into which this service layer is plumbed, and to control the flows so as to manipulate — in serial or in parallel — the path traffic takes through these various security software components.
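
If that paragraph reads like a mouthful, the underlying idea is just policy-driven service chaining. Here’s a deliberately toy sketch of the concept (nothing below is Crossbeam-specific or pulled from any product; the traffic classes and function names are made up):

    # Illustrative-only sketch of policy-driven service chaining; nothing here
    # is product-specific, and the traffic classes/functions are invented.
    from typing import Callable

    SecurityFn = Callable[[bytes], bytes]   # each virtualized security app inspects/returns a flow

    def firewall(flow: bytes) -> bytes:     # stand-ins for real security applications
        return flow

    def ips(flow: bytes) -> bytes:
        return flow

    def antivirus(flow: bytes) -> bytes:
        return flow

    # Policy maps a traffic class to the ordered chain of services it must transit
    # (serial), while different classes can be handled by different chains at once (parallel).
    CHAINS: dict[str, list[SecurityFn]] = {
        "dmz_to_internal": [firewall, ips, antivirus],
        "internal_to_dmz": [firewall],
    }

    def apply_chain(traffic_class: str, flow: bytes) -> bytes:
        for service in CHAINS.get(traffic_class, []):
            flow = service(flow)
        return flow

    print(apply_chain("dmz_to_internal", b"some flow payload"))

The interesting engineering, of course, is doing that selection and steering at layer 2 and at wire speed rather than in a Python loop.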

So that’s the setup.  Yes, it’s intertwined with a bit of a commercial, but hey…perhaps liberty and beer are your idea of "free," but my blogoliciousness ain’t.  What’s really interesting is some of the deeper background on the collision of traditional networking with server virtualization technology.

While it wasn’t the first time we’ve heard it (and it won’t be the last), back in December 2006 Phil Hochmuth of Network World wrote a front-page article titled "Cisco’s IOS set for radical pricing, feature changes."  The article quoted Cliff Metzler, senior vice president of the company’s Network Management Technology Group, as saying these very important words:

Cisco’s intention is to decouple IOS software from the hardware it sells, which could let users add enhancements such as security or VoIP more quickly, without having to reinstall IOS images on routers and switches. The vendor also plans to virtualize many of its network services and applications, which currently are tied to hardware-specific modules or appliances.

This shift would make network gear operate more like a virtualized server, running multiple operating systems and applications on top of a VMware-like layer, as opposed to a router with a closed operating system, in which applications are run on hardware-based blades and modules. Ultimately, these changes will make it less expensive to deploy and manage services that run on top of IP networks, such as security, VoIP and management features, Cisco says.

“The way we’ve sold software in the past is we’ve bolted it onto a piece of hardware, and we shipped [customers] the hardware,” Metzler said. “We need more flexibility to allow customers to purchase software and to deploy it according to their terms.”

IOS upgrades require a reinstall of the new software image on the router or switch — which causes downtime — or, “we say, not a problem, UPS will arrive soon, here’s another blade” to run your new service or application, Metzler said. “This adds months to the deployment cycle, which is not good for customers or Cisco’s business.”

The article above fundamentally describes the identical functional software-based architecture that Crossbeam offers, for exactly the right reasons: make security simpler, less expensive, easier to manage and more flexible to deploy on hardware that scales performance-wise.

Now couple this with the announcement that John Chambers will be delivering a keynote at VMWorld and things get even more interesting in a hurry.  Alessandro Perilli over at the Virtualization.info blog shares his perspective on why this is important and what it might mean:

Chambers presence possibly means announcement of a major partnership between VMware and Cisco, which may be related to network equipment virtualization or endpoint security support.

Many customers in these years prayed to have capability to use virtual machines as routers inside VMware virtual networks. So far this has been impossible: despite Cisco proprietary IOS relies on standard x86 hardware, it still requires a dedicated EEPROM to work, which VMware doesn’t include in its virtual hardware set. Maybe Cisco is now ready to virtualize its hardware equipment.

On the other side VMware may have a deal in place with Cisco about its Assured Computing Environment (ACE) product: Cisco endpoint security solution called Network Admission Control (NAC) may work with VMware ACE as an endpoint security agent, eliminating any need to install more software inside host or guest operating systems.

In any case a partnership between VMware and Cisco may greatly enhance virtual infrastructures capabilities.

This is interesting for sure.  If you look at the demand for software flexibility, combined with generally-available COTS compute stacks and specific network processing where required, the notion that Cisco might partner with VMWare or a similar vendor such as SWSoft looks compelling.  Of course, with functionality like KVM in the Linux kernel, there’s no reason they have to buy or ally…

Certainly there are already elements of virtualization within Cisco’s routing, switching and security infrastructure, but many might argue that it requires a refresh in order to meet the requirements of their customers.  It seems that their CEO does.

I think that this type of architecture looks promising.  Of course, you could have purchased it 6 years ago — as you can today — by talking to these folks. But I’m biased. 😉

/Hoff

Categories: Cisco, Virtualization, VMware Tags:

The Evolution of Bipedal Locomotion & the Energetic Economics of Virtualization

July 17th, 2007 5 comments

By my own admission, this is a stream-of-consciousness, wacky, caffeine-inspired rant that came about while I was listening to a conference call.   It’s my ode to paleoanthropology and how we, the knuckledraggers of IT/Security, evolve.

My apologies to anyone who actually knows anything about or makes an honest living from science; I’ve quite possibly offended all of you with this post…

I came across this interesting article posted today on the ScienceDaily website which discusses a hypothesis by a University of Arizona professor, David Raichlen, who suggests that bipedalism, or walking on two legs, evolved simply because it used less energy than quadrupedal knuckle-walking.  The energy expended while knuckle-walking on all fours is roughly 4 times that of walking bipedally!  That’s huge.

I’m always looking for good tangential analogs for points I want to reinforce within the context of my line of work, and I found this fantastic fodder for such an exercise.

So without a lot of work on my part, I’m going to post some salient points from the article and leave it up to you to determine how, if at all, the "energetic" evolution of virtualization draws interesting parallels to this very interesting hypothesis: that the heretofore-theorized complexity associated with this crucial element of human evolution was, in fact, simply a matter of energy efficiency which ultimately led to sustainable survivability, and not necessarily the result of ecological, behavioral or purely anatomical pressures:

From Here:

The origin of bipedalism, a defining feature of hominids, has been attributed to several competing hypotheses. The postural feeding hypothesis (Hunt 1996) is an ecological model. The behavioral model (Lovejoy 1981) attributes bipedality to the social, sexual and reproductive conduct of early hominids. The thermoregulatory model (Wheeler 1991) views the increased heat loss, increased cooling, reduced heat gain and reduced water requirements conferred by a bipedal stance in a hot, tropical climate as the selective pressure leading to bipedalism.

At its core, server virtualization might be described as a manifestation of how we rationalize and deal with the sliding-window impacts of time and the operational costs associated with keeping pace with the transformation and adaptation of technology in compressed footprints.  One might describe this as the "energy" (figuratively and literally) that it takes to operate our IT infrastructure.

It’s about doing more with less and being more efficient such that the "energy" used to produce and deliver services is small in comparison to the output mechanics of what is consumed.  One could suggest that once the efficiency gains (or savings?) are realized, the energy can be allocated to other, more enabling abilities.  Using the ape-to-human bipedalism analog, one could suggest that bipedalism led to bigger brains, better hunting/gathering skills, fashioning tools, etc.  Basically, the initial step of efficiency gains leads to exponential capabilities over the long term.

So that’s my Captain Obvious declaration relating bipedalism with virtualization.  Ta Da!

From the article as sliced & diced by the fine folks at ScienceDaily:

Raichlen and his colleagues will publish the article, "Chimpanzee locomotor energetics and the origin of human bipedalism" in the online early edition of the Proceedings of the National Academy of Sciences (PNAS) during the week of July 16. The print issue will be published on July 24.

Bipedalism marks a critical divergence between humans and other apes and is considered a defining characteristic of human ancestors. It has been hypothesized that the reduced energy cost of walking upright would have provided evolutionary advantages by decreasing the cost of foraging.

"For decades now researchers have debated the role of energetics and the evolution of bipedalism," said Raichlen. "The big problem in the study of bipedalism was that there was little data out there."

The researchers collected metabolic, kinematic and kinetic data from five chimpanzees and four adult humans walking on a treadmill. The chimpanzees were trained to walk quadrupedally and bipedally on the treadmill.

Humans walking on two legs only used one-quarter of the energy that chimpanzees who knuckle-walked on four legs did. On average, the chimpanzees used the same amount of energy using two legs as they did when they used four legs. However, there was variability among chimpanzees in how much energy they used, and this difference corresponded to their different gaits and anatomy.

"We were able to tie the energetic cost in chimps to their anatomy," said Raichlen. "We were able to show exactly why certain individuals were able to walk bipedally more cheaply than others, and we did that with biomechanical modeling."

The biomechanical modeling revealed that more energy is used with shorter steps or more active muscle mass. Indeed, the chimpanzee with the longest stride was the most efficient walking upright.

"What those results allowed us to do was to look at the fossil record and see whether fossil hominins show adaptations that would have reduced bipedal energy expenditures," said Raichlen. "We and many others have found these adaptations [such as slight increases in hindlimb extension or length] in early hominins, which tells us that energetics played a pretty large role in the evolution of bipedalism."

The point here is not that I’m trying to be especially witty, but rather to illustrate that when we cut through the FUD and marketing surrounding server virtualization and focus on evolution versus revolution, some very interesting discussion points emerge regarding why folks choose to virtualize their server infrastructure.

After I attended the InterOp Data Center Summit, I walked away with a very different view of the benefits and costs of virtualization than I had before.  I think that as folks approach this topic, the realities of how the game changes once we start "walking upright" will provide a profound impact to how we view infrastructure and what the next step might bring.

Server virtualization at its most basic is about economic efficiency (read: energy == power + cooling…) plain and simple.  However, if we look beyond this as the first "step," we’ll see grid and utility computing paired with Web2.0/SaaS take us to a whole different level.  It’s going to push security to its absolute breaking point.

I liked the framing of the problem set with the bipedal analog.  I can’t wait until we come full circle, grow wings and start using mainframes again 😉

Did that make any bloody sense at all?

/Hoff

P.S. I liked Jeremiah’s evolution picture, too:

Evolution2

 

 

Categories: Virtualization Tags:

Secure Services in the Cloud (SSaaS/Web2.0) – InternetOS Service Layers

July 13th, 2007 2 comments

The last few days of activity involving Google and Microsoft have really catalyzed some thinking and demonstrated some very intriguing indicators as to how the delivery of applications and services is dramatically evolving. 

I don’t mean the warm and fuzzy marketing fluff.  I mean some real anchor technology investments by the big boys putting their respective stakes in the ground as they invest hugely in redefining their business models to set up for the future.

Enterprises large and small are really starting to pay attention to the difference between infrastructure and architecture and this has a dramatic effect on the service providers and supply chain who interact with them.

It’s become quite obvious that there is huge business value associated with divorcing the need for "IT" to focus on physically instantiating and locating "applications" on "boxes" and instead  delivering "services" with the Internet/network as the virtualized delivery mechanism.

Google v. Microsoft – Let’s Get Ready to Rumble!

My last few posts on Google’s move to securely deliver a variety of applications and services represent the uplift of the "traditional" perspective of backoffice SaaS offerings such as Salesforce.com, but they also highlight the migration of desktop applications and utility services to the "cloud."

This is really executing on the thin-client, Internet-centric vision from back in the day o’ the bubble, when we saw a ton of Internet-borne services such as storage, backup, etc. using the "InternetOS" as the canvas for service.

So we’ve talked about Google.  I maintain that their strategy is to ultimately take on Microsoft — including backoffice, utility and desktop applications.  So let’s look @ what the kids from Redmond are up to.

What Microsoft is developing towards with their vision of CloudOS was just recently expounded upon by one Mr. Ballmer.

Not wanting to lose mindshare or share of wallet, Microsoft is maneuvering to give the customer control over how they want to use applications and more importantly how they might be delivered.  Microsoft Live bridges the gap between the traditional desktop and puts that capability into the "cloud."

Let’s explore that a little:

In addition to making available its existing services, such as mail and instant messaging, Microsoft also will create core infrastructure services, such as storage and alerts, that developers can build on top of. It’s a set of capabilities that have been referred to as a "Cloud OS," though it’s not a term Microsoft likes to use publicly.

Late last month, Microsoft introduced two new Windows Live Services, one for sharing photos and the other for all types of files. While those services are being offered directly by Microsoft today, they represent the kinds of things that Microsoft is now promising will be also made available to developers.

Among the other application and infrastructure components Microsoft plans to open are its systems for alerts, contact management, communications (mail and messenger) and authentication.

As it works to build out the underlying core services, Microsoft is also offering up applications to partners, such as Windows Live Hotmail, Windows Live Messenger and the Spaces blogging tool.

Combine the advent of "thinner" endpoints (read: mobility products) with high-speed, lower-latency connectivity and we can see why this model is attractive and viable.  I think this battle is heating up and the consumer will benefit.

A Practical Example of SaaS/InternetOS Today?

So if we take a step back from Google and Microsoft for a minute, let’s take a snapshot of how one might compose, provision, and deploy applications and data as a service using a similar model over the Internet with tools other than Live or Google Gears.

Let me give you a real-world example — deliverable today — of this capability with a functional articulation of this strategy: on-demand services and applications provided via virtualized datacenter delivery architectures using the Internet as the transport.  I’m going to use a mashup of two technologies: Yahoo Pipes and 3Tera’s AppLogic.

Yahoo Pipes is  "…an interactive data aggregator and manipulator that lets you mashup your favorite online data sources."  Assuming you have data from various sources you want to present, an environment such as Pipes will allow you to dynamically access, transform and present this information any way you see fit.

This means that you can create what amounts to applications and services on demand.

Let’s agree, however, that while you have the data integration/presentation layer, in many cases you would traditionally require a complex collection of infrastructure in which this source data is housed, accessed, maintained and secured.

However, rather than worry about where and how the infrastructure is physically located, let’s use the notion of utility/grid computing to make available dynamically an on-demand architecture that is modular, reusable and flexible to make my service delivery a reality — using the Internet as a transport.

Enter 3Tera’s AppLogic:

3Tera’s AppLogic is used by hosting providers to offer true utility computing. You get all the control of having your own virtual datacenter, but without the need to operate a single server.

  • Deploy and operate applications in your own virtual private datacenter
  • Set up infrastructure, deploy apps and manage operations with just a browser
  • Scale from a fraction of a server to hundreds of servers in days
  • Deploy and run any Linux software without modifications
  • Get your life back: no more late night rushes to replace failed equipment

In fact, BT is using them as part of the 21CN project which I’ve written about many times before.

So check out this vision, assuming the InternetOS as a transport.  It’s the drag-and-drop, point-and-click Metaverse of virtualized application and data combined with on-demand infrastructure.

You first define the logical service composition and provisioning through 3Tera with a visual drag-drop canvas, defining firewalls, load-balancers, switches, web servers, app. servers, databases, etc.  Then you click the "Go" button.  AppLogic provisions the entire thing for you without you even necessarily knowing where these assets are.

Then, use something like Pipes to articulate how data sources can be accessed, consumed and transformed to deliver the requisite results.  All over the Internet, transparent to you securely.

Very cool stuff.
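
If you’ve never played with Pipes, the aggregate/filter/republish plumbing it gives you is conceptually close to the little stdlib-only Python sketch below (the feed URLs are placeholders, and a real Pipes badge does this visually, with no code at all):

    # Stdlib-only sketch of the aggregate/filter/republish pattern; the feed
    # URLs below are placeholders, not real sources.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEEDS = [
        "https://example.com/security/rss.xml",
        "https://example.org/virtualization/rss.xml",
    ]

    def fetch_items(url):
        """Pull the <item> entries out of a plain RSS 2.0 feed."""
        with urllib.request.urlopen(url) as response:
            root = ET.fromstring(response.read())
        for item in root.iter("item"):
            yield {
                "title": item.findtext("title", default=""),
                "link": item.findtext("link", default=""),
            }

    def mashup(keyword):
        """Aggregate every feed, filter on a keyword and 'republish' as one list."""
        return [entry for url in FEEDS for entry in fetch_items(url)
                if keyword.lower() in entry["title"].lower()]

    for entry in mashup("virtualization"):
        print(entry["title"], "->", entry["link"])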

Here are some screen-caps of Pipes and 3Tera.

Yahoopipes

3tera

 

 

 

For Sale / Special Price: One (Un)detectable Hyperjacking PillWare: $416,000. Call Now While Supplies Last!

June 29th, 2007 No comments

Joanna Rutkowska of "Invisible Things" Blue Pill Hypervisor rootkit fame has a problem.  It’s about 6 foot+ something, dresses in all black and knows how to throw down both in prose and in practice.

Joanna and crew maintain that they have the roughed-out prototype that supports their assertion that their HyperJacking malware is undetectable.  Ptacek and his merry band of Exploit-illuminati find this a hard pill to swallow and reckon they have a detector that can detect the "undetectable."

They intend to prove it.  This is awesome!  It’s like the Jackson/Liddell UFC fight.  You don’t really know who to "root" for, you just want to be witness to the ensuing carnage!

We’ve got a stare down.  Ptacek and crew have issued a challenge that they expect — with or without Joanna’s participation — to demonstrate successfully at BlackHat Vegas:

Joanna, we respectfully request terms under which you’d agree to an “undetectable rootkit detection challenge”. We’ll concede almost anything reasonable; we want the same access to the (possibly-)infected machine than any antivirus software would get.

The backstory:

  • Dino Dai Zovi, under Matasano colors, presented a hypervisor rootkit (“Vitriol”) for Intel’s VT-X extensions at Black Hat last year, at the same time as Joanna presented BluePill for AMD’s SVM.

  • We concede: Joanna’s rootkit is cooler than ours. I particularly liked using the debug registers to grab network traffic out of the drivers. We stopped weaponizing Vitriol.

  • Peter Ferrie, the Symantec branch of our Black Hat team, releases a kick-ass paper on hypervisor detection. Peter’s focus is on fingerprinting software hypervisors (like VMWare), but he also comes up with a clever way to detect hardware virtualization.

  • Nate Lawson, Dino, and I are, simultaneously, working on hardware rootkit detection techniques.

  • Nate, Peter, Dino, and I join up to defend our thesis at Black Hat: if you surreptitiously “hyperjack” an OS, enabling hardware virtualization (or replacing or infecting an existing hypervisor), you introduce so many subtle changes in system behavior — timing and otherwise — that you’re bound to be detectable.

…and Joanna respondeth, signaling her "readiness" and conditions for the acceptance of said challenge:

Thomas Ptacek and company just came up with this funny challenge to test our Blue Pill rootkit. And, needless to say, the Invisible Things Lab team is ready to take their challenge, however with some additional requirements, that would assure the fairness of the contest.

First, we believe that 2 machines are definitely not enough, because the chance of correct guess, using a completely random (read: unreliable) detection method is 50%. Thus we think that the reasonable number is 5 machines. Each of them could be in a state 0 or 1 (i.e. infected or not). On each of these machines we install two files: bluepill.exe and bluepill.sys.

The .sys file is digitally signed, so it loads without any problem (we could use one of our methods for loading unsigned code on vista that we’re planning to demonstrate at BH, but this is not part of the challenge, so we will use the official way).

The bluepill.exe takes one argument which is 0 or 1. If it’s 1 it loads the driver and infects the machines. If it’s 0 it also loads the driver, but the driver does not infect the machine.

So, on each of the 5 machines we run bluepill.exe with randomly chosen argument, being 0 or 1. We make sure that at least one machine is not infected and that at least one machine is infected.

After that the detection team runs their detector.exe executable on each machine. This program can not take any arguments and must return only one value: 0 or 1. It must act autonomously — no human assistance when interpreting the results.

The goal of the detection team is to correctly mark each machine as either being infected (1) or not (0). The chance of a blind guess is: 1/(2^5-2) = 3%

The detector can not cause system crash or halt the machine — if it does they lose. The detector can not consume significant amount of CPU time (say > 90%) for more than, say, 1 sec. If it does, then it’s considered disturbing for the user and thus unpractical.

The source code of our rootkit as well as the detector should be provided to the judges at the beginning of the contests. The judges will compile the rootkit and the detector and will copy the resulting binaries to all test machines.

After the completion of the contest, regardless of who wins, the sources for both the rootkit and the detector will be published in the Internet — for educational purpose to allow others to research this subject.

Our current Blue Pill has been in the development for only about 2 months (please note that we do not have rights to use the previous version developed for COSEINC) and it is more of a prototype, with primary use for our training in Vegas, rather then a "commercial grade rootkit". Obviously we will be discussing all the limitations of this prototype during our training. We believe that we would need about 6 months full-time work by 2 people to turn it into such a commercial grade creature that would win the contest described above. We’re ready to do this, but we expect that somebody compensate us for the time spent on this work. We would expect an industry standard fee for this work, which we estimate to be $200 USD per hour per person.

If Thomas Ptacek and his colleagues are so certain that they found a panacea for virtualization based malware, then I’m sure that they will be able to find sponsors willing to financially support this challenge.

As a side note, the description for our new talk for Black Hat Vegas has just been published yesterday.

So, if you get past the polynomial math, the boolean logic expressions, and the fact that she considers this challenge "funny," reading between the HyperLines, you’ll extract the following:

  1. The Invisible Things team has asserted for some time that their rootkit is 100% undetectable
  2. They’ve worked for quite some time on their prototype; however, it’s not "commercial grade"
  3. In order to ensure success in winning the competition and thus proving the assertion, they need to invest time in polishing the rootkit
  4. They need 5 laptops to statistically smooth the curve
  5. The Detector can’t impact performance of the test subjects
  6. All works will be Open Sourced at the conclusion of the challenge
    (Perhaps Alan Shimel can help here! 😉 ) and, oh, yeah…
  7. They have no problem doing this, but someone needs to come up with $416,000 to subsidize the effort to prove what has already been promoted as fact

That last requirement is, um, unique.
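
As an aside, the blind-guess math in Joanna’s terms is easy to sanity-check — a couple of lines, assuming her setup of 5 machines with at least one infected and at least one clean:

    # Sanity-check the blind-guess odds in Joanna's setup: 5 machines, each
    # infected (1) or clean (0), with at least one machine in each state.
    n_machines = 5
    valid_assignments = 2 ** n_machines - 2      # exclude all-clean and all-infected
    print(valid_assignments, f"{1 / valid_assignments:.1%}")   # 30 assignments, ~3.3%

Which is the "3%" she rounds to in the challenge terms.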

Nate Lawson, one of the challengers, is less than impressed with this codicil and respectfully summarizes:

The final requirement is not surprising. She claims she has put four person-months work into the current Blue Pill and it would require twelve more person-months for her to be confident she could win the challenge. Additionally, she has all the experience of developing Blue Pill for the entire previous year.

We’ve put about one person-month into our detector software and have not been paid a cent to work on it. However, we’re confident even this minimal detector can succeed, hence the challenge. Our Blackhat talk will describe the fundamental principles that give the detector the advantage.

If Joanna’s time estimate is correct, it’s about 16 times harder to build a hypervisor rootkit than to detect it. I’d say that supports our findings.

I’m not really too clear on Nate’s last sentence as I didn’t major in logic in high school, but to be fair, this doesn’t actually discredit Joanna’s assertion; she didn’t say it wasn’t difficult to detect HV rootkits, she said it was impossible. Effort and possibility are orthogonal.

This is going to be fun.  Can’t wait to see it @ BlackHat.

See you there!

/Hoff


Categories: Virtualization, VM HyperJacking Tags:

The 4th Generation of Security Devices = UTM + Routing & Switching or New Labels = Perfuming a Pig?

June 22nd, 2007 5 comments

That’s it.  I’ve had it.  Again.  There’s no way I’d ever make it as a Marketeer.  <sigh>

I almost wasn’t going to write anything about this particular topic because my response can (and probably should) easily be perceived as and retorted against as a pissy little marketing match between competitors.  Chu don’t like it, Chu don’t gotta read it, capice?

Sue me for telling the truth. {strike that, as someone probably will}

However, this sort of blatant exhalation of so-called revolutionary security product and architectural advances disguised as prophecy is just so, well, recockulous, that I can’t stand it.

I found it funny that the Anti-Hoff (Stiennon) managed to slip another patented advertising-editorial Captain Obvious press piece into SC Magazine regarding what can only be described as the natural evolution of network security products that plug into — but are not natively part of — routing or switching architectures.

I don’t really mind that, but to suggest that somehow this is an original concept is just disingenuous.

Besides trying to wean Fortinet away from the classification as UTM devices (which Richard clearly hates to be associated with) by suggesting that UTM should be renamed as "Flexible Security Platform," he does a fine job of asserting that a "geologic shift" (I can only assume he means tectonic) is coming soon in the so-called fourth generation of security products.

Of course, he’s completely ignoring the fact that the solution he describes is and has already been deployed for years…but since tectonic shifts usually take millions of years to culminate in something noticeably remarkable, I can understand his confusion.

As you’ll see below, calling these products "Flexible Security Platforms" or "Unified Network Platforms" is merely an arbitrary and ill-conceived hand-waving exercise in an attempt to differentiate in a crowded market.  Open source or COTS, ASIC/FPGA or multi-core Intel…that’s just the packaging and delivery mechanism.  You can tart it up all you want with fancy marketing…

It’s not new, it’s not revolutionary (because it’s already been done) and it sure as hell ain’t the second coming.  I’ll say it again, it’s been here for years.  I personally bought it and deployed it as a customer almost 4 years ago…if you haven’t figured out what I’m talking about yet, read on.

Here’s how C.O. describes what the company I work for has been doing for 6 years — and what he intimates Fortinet will provide that nobody else can:

We are rapidly approaching the advent of the fourth generation security platform. This is a device that can do all of the security functions that are lumped in to UTM but are also excellent network devices at layers two and three. They act as a switch and a router. They supplant traditional network devices while providing security at all levels. Their inherent architectural flexibility makes them easy to fit into existing environments and even make some things possible that were never possible before. For instance a large enterprise with several business units could deploy these advanced networking/security devices at the core and assign virtual security domains to each business unit while performing content filtering and firewalling between each virtual domain, thus segmenting the business units and maximizing the investment in core security devices.

One geologic shift that will occur thanks to the advent of these fourth generation security platforms is that networking vendors will be playing catch up, trying to patch more and more security functions into their under-powered devices or complicating their go to market message with a plethora of boxes while the security platform vendors will quickly and easily add networking functionality to their devices.

Fourth generation network security platforms will evolve beyond stand alone security appliances to encompass routing and switching as well. This new generation of devices will impact the networking industry as it scrambles to acquire the expertise in security and shift their business model from commodity switching and routing to value add networking and protection capabilities.

Let’s see…combine high-speed network processing whose routing/switching architecture was designed by the same engineers that designed Bay/Welfleet’s core routers, add in a multi-core Intel processing/compute layer which utilizes virtualized, load-balanced security applications as a  service layer that can be overlaid across a fast, reliable, resilient and highly-available network transport and what do you get?

This:

Up to 32 GigE or 64 10/100 switching ports and 40 Intel cores in a single chassis today…and in Q3’07 you’ll also have the combination of our NextGen network processors which will provide up to 8x10GigE and 40xGigE with 64 MIPS Network Security cores combined with the same 40 Intel cores in the same chassis.

By the way, I consider that routing and switching are just table stakes, not market differentiators; in products like the one to the left, this is just basic expected functionality.

Furthermore, in this so-called next generation of "security switches," the customer should be able to run both open source as well as best-in-breed COTS security applications on the platform and not constrain the user to a single vendor’s version of the truth running proprietary software.

—–

But wait, it only gets better…what I found equally hysterical is the notion that Captain Obvious now has a sidekick!  It seems Alan Shimel has signed on as Richard’s Boy Wonder.  Alan’s suggesting that, again, the magic bullet is Cobia and that because he can run a routing daemon and his appliance has more than a couple of ports, it’s a router and a switch as well as a multi-function UTM/UNP swiss army knife of security & networking goodness — and he was the first to do it!  Holy marketing-schizzle, Batman!

I don’t need to re-hash this.  I blogged about it here before.

You can dress Newt Gingrich up as a chick but it doesn’t mean I want to make out with him…

This is cheap, cheap, cheap marketing on both your parts and don’t believe for a minute that customers don’t see right through it; perfuming pigs is not revolutionary, it’s called product marketing.

/Hoff

McAfee’s Bolin suggests Virtual “Sentinel” Security Model for Uncompromised Security

June 15th, 2007 2 comments

Christopher Bolin, McAfee’s EVP & CTO, blogged an interesting perspective on utilizing virtualization technology to instantiate security functionality in an accompanying VM on a host in order to protect one or more VMs on the same host.  This approach differs from the standard practice of placing host-based security controls on each VM, and from the intra-VS IPS models from companies such as Reflex (I blogged about that here.)

I want to flesh out some of these concepts with a little more meat attached.

He defines the concept of running security software alongside the operating system it is protecting as "the sentinel":

In this approach, the security software itself resides in its own virtual machine outside and parallel to the system it is meant to protect, which could be another virtual machine running an operating system such as Windows. This enables the security technology to look omnisciently into the subject OS and its operation and take appropriate action when malware or anomalous behavior is detected.

Understood so far…with some caveats, below.

The security software would run in an uncompromised environment monitoring in real-time, and could avoid being disabled, detected or deceived (or make the bad guys work a lot harder.)

While this supposedly uncompromised/uncompromisable OS could exist, how are you going to ensure that the underlying "routing" traffic flow control actually forces the traffic through the Sentinel VM in the first place? If the house of cards rests on this design element, we’ll be waiting a while…and adding latency.  See below.

This kind of security is not necessarily a one-to-one relationship between sentinel and OSs. One physical machine can run several virtual machines, so one virtual sentinel could watch and service many virtual machines.

I think this is a potentially valid and interesting alternative to deploying more and more host-based security products (which seems odd coming from McAfee) or additional physical appliances, but there are a couple of issues with this premise, some of which Bolin points out, others I’ll focus on here:

  1. Unlike other applications which run in a VM and just require a TCP/IP stack, security applications are extremely topology sensitive.  The ability to integrate sentinels in a VM environment with other applications/VMs at layer 2 is extremely difficult, especially if these security applications are to act "in-line."

    Virtualizing transport while maintaining topology context is difficult, and when you need to then virtualize the policies based upon this topology, it gets worse.  Blade servers have this problem; they have integrated basic switch/load balancing modules, but implementing policy-driven "serialization" and "parallelization" (which is what we call it @ Crossbeam) is very, very hard.

  2. The notion that the sentinel can "…look omnisciently into the subject OS and its operation and take appropriate action when malware or anomalous behavior is detected" from a network perspective is confusing.  If you’re outside the VM/hypervisor, I don’t understand the feasibility of this approach.  This is where Blue Lane’s VirtualShield ESX plug-in kicks ass — it plugs into the hypervisor and protects not only traffic directed to the VM but also intra-VM traffic, with behavioral detection, not just signatures.

  3. Resource allocation of the sentinel security control as a VM poses a threat vector inasmuch as one could overwhelm/DoS the Sentinel VM, and the security/availability of the entire system could be compromised; the controls protecting the VMs are competing for the very same virtualized resources that the VMs are after.

  4. As Bolin rightfully suggests, a vulnerability in the VM/VMM/chipsets could introduce a serious set of modeling problems.

I maintain that securing virtualization by virtualizing security is nascent at best, but as Bolin rightfully demonstrates, there are many innovative approaches being discussed to address these new technologies.

/Hoff

For Data to Survive, It Must ADAPT…

June 1st, 2007 2 comments


Now that I’ve annoyed you by suggesting that network security will over time become irrelevant given lost visibility due to advances in OS protocol transport and operation, allow me to give you another nudge towards the edge and further reinforce my theories with some additionally practical data-centric security perspectives.

If any form of network-centric security solution is to succeed in adding value over time, the decisions that apply policy and effect disposition on flows as they traverse the network must be made on content in context.  That means we must get to a point where we can make “security” decisions based upon information and its “value” and classification as it moves about.

It’s not good enough to only make decisions on how flows/data should be characterized and acted on with the criteria being focused on the 5-tuple (header), signature-driven profiling or even behavioral analysis that doesn’t characterize the content in context of where it’s coming from, where it’s going and who (machine, virtual machine and “user”) or what (application, service) intends to access and consume it.

In the best of worlds, we like to be able to classify data before it makes its way though the IP stack and enters the network and use this metadata as an attached descriptor of the ‘type’ of content that this data represents.  We could do this as the data is created by applications (thick or thin, rich or basic) either using the application itself or by using an agent (client-side) that profiles the data prior to storage or transmission.

Since I’m on my Jericho Forum kick lately, here’s how they describe how data ought to be controlled:

Access to data should be controlled by security attributes of the data itself.

  • Attributes can be held within the data (DRM/Metadata) or could be a separate system.
  • Access / security could be implemented by encryption.
  • Some data may have “public, non-confidential” attributes.
  • Access and access rights have a temporal component.

You would probably need client-side software to provide this functionality.  As an example, we do this today with email compliance solutions that have primitive versions of this sort of capability, forcing users to declare the classification of an email before they can hit the send button, or with the document info that can be created when one authors a Word document.

There are a bunch of ERM/DRM solutions in play today that are bandied about and sold as “compliance” solutions, but their value goes much deeper than that.  IP leakage/extrusion prevention systems (with or without client-side tie-ins) try to do similar things also.

Ideally, this metadata would be used as a fixed descriptor of the content that permanently attaches itself and follows that content around so it can be used to decide what content should be “routed” based upon policy.
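
As a trivial illustration of the "fixed descriptor that follows the content around" idea, on a Linux filesystem you could hang the classification right off the file itself as an extended attribute — a sketch only, with a made-up attribute name and taxonomy, and with the obvious caveat that xattrs don’t survive every copy, mail gateway or filesystem the way real embedded metadata or DRM would:

    # Sketch only: attach/read a classification label as a Linux extended attribute.
    # The attribute name and taxonomy are made up, and xattrs are a weak stand-in
    # for metadata that "permanently follows the content around".
    import os

    ATTR = "user.adapt.classification"   # hypothetical attribute name

    def classify_file(path: str, labels: list[str]) -> None:
        os.setxattr(path, ATTR, ",".join(labels).encode())

    def read_classification(path: str) -> list[str]:
        try:
            return os.getxattr(path, ATTR).decode().split(",")
        except OSError:
            return []   # no descriptor attached

    classify_file("/srv/hr/benefits.xls", ["HIPAA", "Confidential"])
    print(read_classification("/srv/hr/benefits.xls"))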

If we’re not able to use this file-oriented static metadata, we’d like then for the “network” (or something in/on it) to be able to dynamically profile content at wirespeed and characterize the data as it moves around the network from origin to destination in the same way.

So, this is where Applied Data & Application Policy Tagging (ADAPT) comes in.  ADAPT is an approach that can make use of existing and new technology to profile and characterize content (by using content matching, signatures, regular expressions and behavioral analysis in hardware or software) and then apply policy-driven information “routing” functionality as flows traverse the network, by using 802.1q q-in-q VLAN tags (the open approach) or by applying a proprietary ADAPT tag-header as a descriptor to each flow as it moves around the network.

Think of it like a VLAN tag that describes the data within the packet/flow, defined however you see fit:

The ADAPT tag/VLAN is user defined and can use any taxonomy that best suits the types of content that are interesting; one might use asset classifications such as “confidential” or taxonomies such as “HIPAA” or “PCI” to describe what is contained in the flows.  One could combine and/or stack the tags, too.  The tag maps to one of these arbitrary categories, which could be fed by interpreting metadata attached to the data itself (if in file form) or dynamically by on-the-fly profiling at the network level.

As data moves across the network and across what we call boundaries (zones) of trust, the policy tags are parsed and disposition effected based upon the language governing the rules.  If you use the “open” version using the q-in-q VLANs, you have something on the order of 4096 VLAN IDs to choose from…more than enough to accommodate most asset classifications and still leave room for VLAN usage.  Enforcing the ACLs can be done by pretty much ANY modern switch that supports q-in-q, very quickly.

Just like an ACL for IP addresses or VLAN policies, ADAPT does the same thing for content routing, but using VLAN ID’s (or the proprietary ADAPT header) to enforce it.
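
For the "open" q-in-q variant, the stacked tag is literally just a second 802.1Q header on the frame.  Here’s a minimal Scapy sketch of what such a frame would look like — the VLAN numbers, addresses and payload are arbitrary stand-ins, not part of any real ADAPT spec:

    # Minimal Scapy sketch of a q-in-q (stacked 802.1Q) frame; the VLAN numbers,
    # addresses and payload are arbitrary stand-ins, not part of any ADAPT spec.
    from scapy.all import Ether, Dot1Q, IP, TCP, Raw

    ADAPT_VLAN = 100    # outer tag: maps to an arbitrary "HIPAA/Confidential" class
    ACCESS_VLAN = 30    # inner tag: the ordinary access VLAN the flow already uses

    frame = (Ether(src="00:0c:29:11:22:33", dst="00:1b:0d:44:55:66")
             / Dot1Q(vlan=ADAPT_VLAN)       # outer (service/ADAPT) tag
             / Dot1Q(vlan=ACCESS_VLAN)      # inner (customer/access) tag
             / IP(src="10.1.1.10", dst="10.3.1.20")
             / TCP(dport=80)
             / Raw(b"payload the content engine has already classified"))

    frame.show()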

To enable this sort of functionality, every switch/router in the network would need to be either q-in-q capable (which most switches are these days) or ADAPT enabled (which would be difficult, since you’d need every network vendor to support the protocol).  Alternatively, you could use an overlay UTM security services switch sitting on top of the network plumbing, through which all traffic moving from one zone to another would be subject to the ADAPT policy since each flow has to go through said device.

Since the only device that needs to be ADAPT aware is this UTM security service switch (see the example below,) you can let the network do what it does best and utilize this solution to enforce the policy for you across these boundary transitions.  Said UTM security service switch needs to have an extremely high-speed content security engine that is able to characterize the data at wirespeed and add a tag to the frame as it moves through the switching fabric and processed prior to popping out onto the network.

Clearly this switch would have to have coverage across every network segment.  It wouldn’t work well in virtualized server environments or any topology where zoned traffic is not subject to transit through the UTM switch.

I’m going to be self-serving here and demonstrate this “theoretical” solution using a Crossbeam X80 UTM security services switch plumbed into a very fast, reliable, and resilient L2/L3 Cisco infrastructure.  It just so happens to have a wire-speed content security engine installed in it.  The reason the X-Series can do this is because once the flow enters its switching fabric, I own the ultimate packet/frame/cell format and can prepend any header functionality I like onto the structure to determine how it gets “routed.”

Take the example below where the X80 is connected to the layer-3 switches using 802.1q VLAN trunked interfaces.  I’ve made this an intentionally simple network using VLANs and L3 routing; you could envision a much more complex segmentation and routing environment, obviously.

This network is chopped up into 4 VLAN segments:

  1. General Clients (VLAN A)
  2. Finance & Accounting Clients (VLAN B)
  3. Financial Servers (VLAN C)
  4. HR Servers (VLAN D)

Each of the clients/servers in the respective VLANs default routes out to an IP address belonging to the firewall cluster, which is proffered by the firewall application modules providing service in the X80.

Thus, to get from one VLAN to another VLAN, one must pass through the X80 and be profiled by this content security engine and whatever additional UTM services are installed in the chassis (such as firewall, IDP, AV, etc.)

Let’s say then that a user in VLAN A (General Clients) attempts to access one or more resources in the VLAN D (HR Servers.)

Using solely IP addresses and/or L2 VLANs, let’s say the firewall and IPS policies allow this behavior as the clients in that VLAN have a legitimate need to access the HR Intranet server.  However, let’s say that this user tries to access data that exists on the HR Intranet server but contains personally identifiable information that falls under the governance/compliance mandates of HIPAA.

Let us further suggest that the ADAPT policy states the following:

Rule   Source             Destination        ADAPT Descriptor       Action
===========================================================================
1      VLAN A (IP.1.1)    VLAN D (IP.3.1)    HIPAA, Confidential    Deny
2      VLAN B (IP.2.1)    VLAN C (IP.4.1)    PCI                    Allow

Using rule 1 above, as the client makes the request, he transits from VLAN A to VLAN D.  The reply containing the requested information is profiled by the content security engine, which is able to characterize the data as containing information that matches our definition of “HIPAA” or “Confidential” (purely arbitrary for the sake of this example.)

This could be done by reading the metadata if it exists as an attachment to the content’s file structure, in cooperation with an extrusion prevention application running in the chassis, or in the case of ad-hoc web-based applications/services, done dynamically.

According to the ADAPT policy above, this data would then be either silently dropped, depending upon what “deny” means, or perhaps the user would be redirected to a webpage that informs them of a policy violation.

Rule 2 above would allow authorized IP’s in VLANs to access PCI-classified data.
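
To make the mechanics concrete, here’s a toy sketch of how an ADAPT-style disposition check might be expressed — purely illustrative, with an invented tag taxonomy, invented zone names and a stand-in classifier where the real design calls for a wire-speed content engine:

    # Toy sketch of an ADAPT-style disposition check. The tag taxonomy, zones
    # and classifier below are invented for the example; a real implementation
    # would be a wire-speed content engine, not a Python loop.
    ADAPT_TAGS = {"HIPAA": 101, "Confidential": 102, "PCI": 103}   # tag -> outer VLAN ID

    # (source zone, destination zone, tags that trigger the rule, action)
    POLICY = [
        ("VLAN_A", "VLAN_D", {"HIPAA", "Confidential"}, "deny"),
        ("VLAN_B", "VLAN_C", {"PCI"}, "allow"),
    ]

    def classify(payload: bytes) -> set:
        """Stand-in for the content engine: return the ADAPT tags for a flow."""
        tags = set()
        if b"SSN" in payload or b"patient" in payload:
            tags |= {"HIPAA", "Confidential"}
        if b"PAN" in payload:
            tags.add("PCI")
        return tags

    def disposition(src_zone: str, dst_zone: str, payload: bytes) -> str:
        tags = classify(payload)
        for rule_src, rule_dst, rule_tags, action in POLICY:
            if src_zone == rule_src and dst_zone == rule_dst and tags & rule_tags:
                return action
        return "allow"   # default permit, mirroring the example network above

    print(disposition("VLAN_A", "VLAN_D", b"patient: Jane Doe SSN 000-00-0000"))  # deny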

You can imagine how one could integrate IAM and extend the policies to include pseudonymity/identity as a function of access, also.  Or, one could profile the requesting application (browser, for example) to define whether or not this is an authorized application.  You could extend the actions to lots of stuff, too.

In fact, I alluded to it in the first paragraph, but if we back up a step and look at where consolidation of functions/services are being driven with virtualization, one could also use the principles of ADAPT to extend the ACL functionality that exists in switching environments to control/segment/zone access to/from virtual machines (VMs) of different asset/data/classification/security zones.

What this translates to is a workflow/policy instantiation that would use the same logic to prevent VM1 from communicating with VM2 if there was a “zone” mis-match; as we add data classification in context, you could have various levels of granularity that defines access based not only on VM but VM and data trafficked by them.

Furthermore, assuming this service was deployed internally and you could establish a trusted CA with certs that would support transparent MITM SSL decrypts, you could do this (with appropriate scale) with encrypted traffic also.

This is data-centric security that uses the network when needed, the host when it can and the notion of both static and dynamic network-borne data classification to enforce policy in real-time.

/Hoff

[Comments/Blogs on this entry you might be interested in but have no trackbacks set:

MCWResearch Blog

Rob Newby’s Blog

Alex Hutton’s Blog

Security Retentive Blog

Heisenbugs: The Case of the Visibly Invisible Rogue Virtual Machine

May 28th, 2007 No comments

A Heisenbug is defined by frustrated programmers trying to mitigate a defect as:

    A bug that disappears or alters its behavior when one attempts to probe or isolate it.

In the case of a hardened rogue virtual machine (VM) sitting somewhere on your network, trying to probe or isolate it yields a frustration index for the IDS analyst similar to that of the pissed-off code jockey unable to locate a bug in a trace.

In many cases, simply nuking it off the network is not good enough.  You want to know where it is (logically and physically,) how it got there, and whose it is.

Here’s the scenario I was reminded of last week when discussing a nifty experience I had in this regard.  It’s dumbed down and wouldn’t pass a

Here’s what transpired for about an hour or so one Monday morning:

1) The IDP console starts barfing about an unallocated RFC address space being used by a host on an internal network segment.  Traffic not malicious, but it seems to be talking to the Internet, doing some DNS against the (actual name) "attacker.com" domain…

2) We see the same address popping up on the external firewall rulesets in the drop rule.

3) We start to work backwards from the sensor on the beaconing segment as well as the perimeter firewall.

4) Ping it.  No response.

5) Traceroute it.  Stops at the local default with nowhere to go since the address space is not routed.

6) Look in the CAM tables for interface usage on the switch(es).  It’s coming through the trunk uplink ports.

7) Traced it to a switch.  Isolate the MAC address and isolate based on something unique?  Ummm…

8) On a segment with a collection of 75+ hosts with workgroup hubs…can’t find the damned thing.  IDP console still barfing.

9) One of the security team comes back from lunch.  Sits down near us and logs in.  Reboots a PC.

10) IDP alerts go dead.  All eyes on Cubicle #3.

…turns out that he was working @ home the previous night on his laptop, upon which he uses (on his home LAN) VMWare for security research to test how our production systems will react under attack.  He was using the NAT function and was spoofing the MAC as part of one of his tests.  The machine was talking to Windows Update and his local DNS hosts on the (now) imaginary network.

He bought lunch for the rest of us that day and was reminded that while he was authorized to use such tools based upon policy and job function, he shouldn’t use them on machines plugged into the internal network…or at least turn VMWare off ;(
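
For the curious, the "trace the MAC through the CAM tables" legwork in steps 6 and 7 is eminently scriptable.  Here’s a rough sketch assuming SNMP read access to the switches and the net-snmp command-line tools — the switch names, community string and sample MAC are placeholders, and on some platforms (Cisco, notably) you’d need per-VLAN community indexing (e.g. "public@10") to see the right forwarding table:

    # Rough sketch: ask each switch which bridge port a MAC was learned on, via
    # the standard BRIDGE-MIB. Assumes the net-snmp CLI tools are installed and
    # SNMPv2c read access; switch names, community and the MAC are placeholders.
    import subprocess

    SWITCHES = ["switch1.example.com", "switch2.example.com"]   # placeholders
    COMMUNITY = "public"                                        # placeholder
    DOT1D_TP_FDB_PORT = "1.3.6.1.2.1.17.4.3.1.2"   # BRIDGE-MIB::dot1dTpFdbPort

    def mac_to_oid_suffix(mac: str) -> str:
        """dot1dTpFdbPort is indexed by the MAC expressed as six decimal octets."""
        return ".".join(str(int(octet, 16)) for octet in mac.split(":"))

    def find_mac(mac: str) -> None:
        suffix = mac_to_oid_suffix(mac)
        for switch in SWITCHES:
            cmd = ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv",
                   switch, f"{DOT1D_TP_FDB_PORT}.{suffix}"]
            result = subprocess.run(cmd, capture_output=True, text=True)
            value = result.stdout.strip()
            if result.returncode == 0 and value and "No Such" not in value:
                print(f"{mac} learned on {switch}, bridge port {value}")
            else:
                print(f"{mac} not in {switch}'s forwarding table")

    find_mac("00:0c:29:ab:cd:ef")   # 00:0c:29 is a VMware OUI; the rest is made up

It wouldn’t have found the spoofed-MAC VM any faster that morning, but it beats eyeballing 75+ hosts hanging off workgroup hubs.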

/Hoff

Categories: Virtualization, VMware Tags: