Archive

Author Archive

Introducing the Next Generation of Cloud Computing…

January 11th, 2009 13 comments

It is my pleasure to introduce the fruits of the labor of ~~months~~ minutes of diligent research and engineering prowess — my magnum opus — the next generation of Cloud Computing. Standards-body approval is pending and expected shortly:

[Slides: the "Commode Computing" deck, slides 1-7]

I'm looking for extensive peer review prior to standards-body submission. Open source is also being considered. Please comment below in order to ensure transparency. There are no ivory towers here, so flame away (although you might want to open the window first).

/Hoff

The Quandary Of the Cloud: Centralized Compute But Distributed Data

January 7th, 2009 3 comments

Here's a theme I've been banging around for quite some time as it relates to virtualization, cloud computing and security.  I've never really sat down and written about it, however.

As we trend towards consolidating and (re)centralizing our computing platforms — both endpoints and servers — using virtualization and cloud computing as enablers to do so, we're simultaneously dealing with the decentralization and distribution of data sets that come with technologies such as Web 2.0, mobility and the exposure of APIs from cloud platforms.*

So here we are, all frothed up, as virtualization and cloud computing have, in a sense, led us back to the resource-based consolidation of the mainframe model with all its centralized splendor, while client virtualization, thin clients and compartmentalized remote access are doing the same thing for endpoints.

But the interesting thing is that, thanks to Moore's Law, the endpoints are also getting more and more powerful even as we dumb them down and try to limit their exposure, despite the fact that they can still efficiently process and store data locally.

These models, one could argue, are diametrically opposed when describing how to secure the platforms versus the information that resides on or is utilized by them.  As the cyclic waffling between centralized and distributed continues, the timing of how and where we adapt to securing them always lags behind.  Which do we focus on securing, and where?  The host, the centralized server, or the network?

The unfortunate answer is always "yes."

Remember this (simplified) model of how/where we secure things?
[Image: the "You are here" diagram of how/where we secure things]

If you mentally juxtapose the image above with how I represent the centralized <–> distributed trends in IT below, it's no wonder we're always behind the curve.  The computing model changes much more quickly than the security technology and processes do; thus the disconnect:

[Image: centralized <–> distributed trends across compute, data and access]
I need to update the diagram above to split out the "computing" layer into client and server, as well as extend the data layer to reference storage modalities also, but it gets the job done.

At any rate, it's probably obvious and common sense, but when explaining to people why I spend my time pointing out gaps with security in virtualization and cloud models, I've found this useful.

/Hoff

* It's important to note that while I refer to/group cloud computing models as centralized, I understand they have a distributed element to them, also.  I would ask you to think about the multiple cloud overlays as centralized resources, regardless of how intrinsically "distributed" in processing/load balancing they may be.

P.S. I just saw a great post titled "The Rise of the Stupid Endpoint" on the vinternals blog that makes many of the same points, although much more eloquently.  Check it out here.  Awesome!

Virtualization: An Excuse for Shitty Operating System Software Support

January 6th, 2009 6 comments

In honor of my friend @quine on Twitter who today complained thusly:
[Image: @quine's tweet]

In case you're reading this with Lynx (you web pimp, you!), Zach was lamenting the fact that vendors who don't support a customer's operating system of choice are simply sloughing off development effort and support by suggesting that customers run the product as a VM instead.

Ah, it used to be called "software," but now it's a "virtual appliance!"  Silly rabbit, tricks are for kids.

One might suggest this is a perfectly reasonable use of virtualization technology — nay, one of the very purposes behind its genesis.  I'd agree, to a point.  However, I've noticed an alarming uptick recently in product managers who are simply short-cutting roadmap/development paths by taking the "lazy" way out.

Hey, it cuts down on support, testing, regression and troubleshooting…for the vendor.  But, as my favorite commentary goes, it's simply a "squeezing the balloon" problem, because it surfaces a whole host of other issues such as performance, scale and, in some cases, support for the various virtualization platforms.

What say you?  Do you see this happening more in your enterprise?  Do you care?  Is it a good thing?

/Hoff

Categories: Virtualization Tags:

Cloud (in)Security: A Matter of (t)Rust

January 6th, 2009 3 comments

Alan from the VirtualDC blog wrote a great post today titled "Cloud Security: A New Level of Trust" summarizing some of his thoughts regarding Cloud (in)security.

It's a little depressing, because that "new level" of trust he's referring to isn't heightened; it's significantly reduced.  I'll hack his longer post a bit to extract two interesting and relevant nuggets that focus on the changing nature of trust:

  1. Security has different meanings and requirements depending on the context of how a particular service is accessed or invoked.
  2. So moving forward, as the security people tear apart the (in)security of cloud computing, the rest of the world will just need to take that leap of trust. A lowering of our standards for what we can control in the cloud’s outsourced data model.

In simply closing our eyes, holding our breath and accepting all of this in the name of utility, agility, flexibility and economy, we're ignoring many of the lessons we've learned over the years; we are repeating the same mistakes and magically expecting they will yield a different outcome.

I'll refer back to one of my favorite axioms:
[Image: the security vs. convenience curve]

We're willing to give up an awful lot for the sake of convenience, don't you think?  Look, I accept the innovation and ultimate goodness that will come out of this new world order, really I do.  Heck, I use many of these services myself.

I also see how this new suite of adapted services is beginning to break down in the face of new threats, use cases and risk models brought on by a cross-pollinated generation of anonymized users who simply do not care about things like privacy or security — until it affects them personally.  Then they're outraged.  Then, the next day, they're back to posting about how drunk they were at the orgy they attended the night before (but they use SSL, so it's cool…)

So for me, security and the cloud are really a matter of RUST, not trust: the corrosion of expectations, requirements and controls, and the relaxation of common sense and diligence for the sake of "progress."

Same as it ever was, same as it ever was…

/Hoff

Categories: Cloud Computing, Cloud Security Tags:

Jaquith: Data-Centric Security Requires Devolution, Not a Revolution

January 6th, 2009 1 comment

If I may be so bold as to call Andy Jaquith a friend, I'll do so as I welcome both his first research report and his blog as an analyst for Forrester.

Andy's first topic — Data-Centric Security Requires Devolution, Not a Revolution — is a doozy, and an important one given the recent re-focus on information protection.  The notion of data-centric security has caused quite the stir over the last year with the maturation, consolidation and (some might say) commoditization of certain marketspaces (DLP) into larger mainstream security product suites.

I will admit that I did not spend the $350 to read Andy's research.  As much as I like to support the ever-turning wheels of the analyst sausage machine, I'm going to upgrade to Apple's newly-announced iLife/iWork '09 bundle instead.  Sorry, Andy.  I'll buy you that beer to make up for it.

However, Andy wrote a great blog entry summarizing the research here:

All of the enterprise's data must be secured… that is obvious. Enterprises have been trying to do this for years with e-mail filtering, hard disk encryption, data leak prevention (DLP) and other technologies. Every few years, another hot technology emerges. But what's less obvious is that the accepted way of tackling the problem — making IT Security the primary responsible party — isn't necessarily the most effective way to do it.

In the report, I take the position that devolution of responsibilities from IT Security to business units is the most important success factor. I'd urge you to read the report for yourself. But in short: as long as data security is just "an IT thing," it's virtually certain that the most accountable parties (BUs) will be able to wash their hands of any responsibility. Depending on the organization, the centralized approach tends to lead to two scenarios:

(1) IT throws up its hands, saying "it's too hard!" — guaranteeing that data security problems breed like rabbits
(2) IT dials up the data controls so tight that end-users and business units rebel against or subvert the controls — leading to even worse problems


What's worse? No controls, or too many? The truth lies somewhere in between, and results vary widely depending on who's accountable: the boss you already know and have a relationship with, or an amorphous cost center whose workers don't know what you do all day. Your boss knows what work products are appropriate to protect, and what aren't. IT Security's role should be to supply the tools to enforce the businesses' wishes, not to operate them themselves.

Want to secure enterprise data? Stop trying so hard, and devolve!

My only comments are that, much like the X-Files, the truth is "out there."  It is most certainly somewhere in between, as users and the business will always take the convenient path of least resistance while security imposes the iron fist.

Securing information must be a cooperative effort that involves the broader adoption of pervasive discovery and classification capabilities across the entire information lifecycle.  The technology has to become as transparent as possible so that workflow isn't interrupted.  That's no easy task.

Rich Mogull and I have been writing and presenting about this for quite some time, and we're making evolutionary progress, but not revolutionary progress.

To that point, I might have chosen a different byline.  Instead of "devolution, not a revolution," I would suggest that perhaps "governed delegation, not regulation" might be appropriate, too.

Can't wait for that iLife/iWork bundle!

/Hoff

SPOILER: I know what Sotirov and Appelbaum’s 25C3 Preso Is…

December 29th, 2008 4 comments

UPDATE: HA! So I was *so* close to the real thing!  Turns out that instead of 240 Nintendo DS Lites, they used 200 clustered Sony PlayStation 3s! I actually guessed that in an email to Sotirov, too!  I can't believe you people doubted me!

I initially thought they used the go-kart crashes in Super Mario Brothers to emulate MD5 "collisions."

Check out Ryan Naraine's write-up here.

So Alexander Sotirov and Jacob Appelbaum are giving a presentation tomorrow at 25C3 titled "Making the Theoretical Possible."

There's a summary of their presentation abstract posted via the link above, but the juicy parts are redacted, hiding the true nature of the crippling effects of the 'sploit about to be released upon the world:

[Image: the 25C3 presentation abstract with the juicy parts redacted]

I have a Beowulf cluster of 240 Nintendo DS Lites running in my basement and, harnessing the capabilities thereof, I was able to apply my custom-written graphical binary isomorphic differ algorithm, using neural-network-based self-organizing maps and reverse steganography, to deduce the obscured content.

I don't wish to be held liable for releasing this content prior to their presentation, nor do I wish to be pursued for any fair-use violations, so I'm hosting the results offshore.

Please click here for the non-redacted image hosted via a mirror site that reveals the content of the abstract.

/Hoff

Categories: Jackassery Tags:

Virtualization? So last Tuesday.

December 27th, 2008 3 comments

This post contains nothing particularly insightful other than a pronounced giant sucking sound that's left a vacuum in terms of forward motion regarding security and virtualization.

Why?

Three things:
  1. There's an awful lot of focus moving from the (cough) mature space of server virtualization to the myriad options and solutions in client virtualization, as the pendulum of where we focus our efforts swings yet again.

    We're in the throes of yet another "great awakening," in which some of us realize that (gasp!) it's the information we ought to secure, and that the platforms themselves are insecure and should be treated as such.  However, we've got so much security invested in the network and servers that we play ping-pong between securing them, bypassing the crown jewels.

    Virtualization has just reinforced that behavior, and as we take stock of where we are in (not) securing these vectors, looking for the next silver bullet, we knee-jerk back to the conduit through which the user interacts with our precious data: the client.

    The client, it seems, is the focus yet again, driven mostly by economics.  It's interesting to note that even though the theme of RSA this last go-round was "Information Centricity," someone didn't get the memo.

    Check out this graphic from my post a ways back titled "Security Will Not End Up In the Network…" for why this behavior is not only normal but will unfortunately lead us to always focus on the grass which turns out not to be greener on the other side.  I suppose I should really break out the "host" into server and client, accordingly:

    [Image: the "You are here" diagram, revisited]

    Further, and rightfully so, the accelerated convergence of storage and networking thanks to virtualization is causing heads to a-splode in ways that cause security to be nothing more than a shrug and a prayer.  What it means to "secure the cloud" is akin to pissing in the wind at the moment.  Hey, if you've got to go, you've got to go…

  2. ISVs are in what amounts to a holding pattern, waiting for VDCOS, VI4, vSphere with vNetworking and the VMsafe APIs to be released so they can unleash their next round of security software appliances to tackle the problems highlighted in my Four Horsemen of the Virtualization Security Apocalypse series.  For platforms other than VMware, we've seen bupkis as it relates to innovation in VirtSec.
  3. The "Cloud" has assimilated us all and, combined with the stalling function above, has left us waffling in ambivalence.  The industry is so caught up in the momentum of this new promised revenue land that the blinding opportunity, combined with a lack of standards and a slew of new business and technology models, means that innovation is being driven primarily by startups while existing brands jockey to retool.

It's messy.  It's going to get messier, but the good news is that it's a really exciting time.  We're going to see old friends like IAM, IDP, VPNs, and good old fashioned routing and switching tart themselves up, hike up the hemlines and start trolling for dates again as virtualization 2.x, VirtSec and Cloud/Cloud Security make all the problems we haven't solved (but know we need to) relevant and pressing once again.

All those SysAdmin and NetAdmin skills you started with before you became a "security professional" will really help in sorting through all this mud.

There exist lots of opportunities to make both evolutionary and revolutionary advancements in solving many of the problems we've been suffering from for decades.  Let's work to press forward and not lose sight of where we're going and, more importantly, from whence we've come.

/Hoff
  

Servers and Switches and VMM’s, Oh My! Cisco’s California “Server Switch”

December 21st, 2008 4 comments

From the desk of Cisco's "Virtualization Wow!" Campaign: When is a switch with a server not a virtualization platform?  When it's a server with a switch as a virtualization platform, of course! 😉

I can't help but smile at the announcement that Cisco is bringing to market a blade-based chassis which bundles together Intel's Nehalem-based server processors, the Nexus 5000 switch, and VMware's virtualization and management platform.  From InformationWeek:

Cisco's system, code-named California, likely will be introduced in the spring, according to the sources. It will meld Cisco's Nexus 5000 switch that converges storage and data network traffic, blade servers that employ Intel Nehalem processors, and virtualization management with help from VMware.

This news was actually broken back at the beginning of December by virtualization.info, but I shamefully missed it.  It looks like a bunch of others did, too.

This totally makes sense, as virtualization has driven convergence across the compute, network and storage realms and has highlighted the fact that provisioning, automation and governance — up and down the stack — demand a unified approach to management and support.

For me, this is the natural come-about of what I wrote about in July of 2007 in a blog post titled "Cisco & VMware – The Revolution Will Be…Virtualized?":

This [convergence of network, compute and virtualization, Ed.] is interesting for sure and if you look at the way in which the demand for flexibility of software combined with generally-available COTS compute stacks and specific network processing where required, the notion that Cisco might partner with VMWare or a similar vendor such as SWSoft looks compelling.  Of course with functionality like KVM in the Linux kernel, there's no reason they have to buy or ally…

Certainly there are already elements of virtualization within Cisco's routing, switching and security infrastructure, but many might argue that it requires a refresh in order to meet the requirements of their customers.  It seems that their CEO does.

When I last blogged about Cisco's partnership with VMware and (what is now called) the Nexus 1000v/VN-Link, I made reference to the fact that I foresaw the extraction of the VMs from the servers and suggested that we would see VMs running in the Nexus switches themselves.  Cisco representatives ultimately put a stake in the sand in the comments of that blog post and said this would never happen.

Now we know what they meant and it makes even more sense.

So the bundling of the Nexus 5000* (with the initiator), the upcoming protocol for VM-flow affinity tagging, the integrated/converged compute and storage capabilities, and Intel's SR-IOV/MR-IOV/IOMMU technologies in the Nehalem, all supported by the advances in vNetworking/VN-Link, makes this solution a force to be reckoned with.

Other vendors, especially those rooted in servers and networking such as HP and IBM, are likely to introduce their own solutions, but given the tight coupling of the partnership, investment and technology development between Intel, VMware and Cisco, this combo will be hard to beat. 

Folks will likely suggest that Cisco has no core competency in building, selling or supporting "servers," but given the channel and partnership with VMware — with virtualization abstracting that hardware-centric view in the first place — I'm not sure this really matters.  We'll have to see how accurate I am on this call.

Regardless of the semantic differences of where the CPU execution lives (based on my initial prediction), all the things I've been talking about that seemed tangentially related but destined to come together seem to have done so.  Further, here we see the resurgence (or at least the redefinition) of Big Iron, all over again…

Remember the Four Horsemen slides and the blog post (the network is the computer, is the network, is the…) where I dared you to figure out where "the network" wasn't in the stack?  This is an even more relevant question today. 

It's going to be a very interesting operational and organizational challenge from a security perspective when your server, storage, networking and virtualization platform all come from a single source.

California Dreamin'…

/Hoff

* Not that the 1000v ain't cool, but that little slide that appeared only once at VMworld about the 5000v and the initiator was just too subtly delicious not to be the real juice in the squeeze. The 1000v obviously has its place and will be a fantastic solution, but for folks who are looking for a one-stop shop for their datacenter blueprint designs heavily leveraging virtualization, this makes nothing but sense.

Categories: Cisco, Virtualization, VMware Tags:

Rogue VM Sprawl? Really?

December 19th, 2008 6 comments

I keep hearing about the impending doom of (specifically) rogue VM sprawl — our infrastructure overrun with the unchecked proliferation of virtual machines running amok across our enterprises.  Oh the horror!

Most of them use the consolidation of server VMs onto virtualized hosts as their example.

I have to ask you though, given what it takes to spin up a VM on a platform such as VMware, how can you have a "rogue" VM sprawling its way across your enterprise!? 

Someone — an authorized administrator — had to have loaded it into inventory, configured its placement on a virtual switch, and spun it up via VirtualCenter or some facsimile thereof depending upon platform.

That's the definition of a rogue?  I can see where this may be a definitional issue, but the marketeers are getting all frothed up over it, whispering in your ear constantly about the impending demise of your infrastructure…and undetectable hypervisor rootkits, too.  🙂

It may be that the ease with which a VM *can* be spun up legitimately leads to the overly-exuberant deployment of VMs without an understanding of the impact this might have on the infrastructure, but can we please stop grouping stupidity and poor capacity planning/impact analysis with rogueness?  They're two different things.

If administrators are firing off VMs that are unauthorized, unhardened and unaccounted for, you have bigger problems than virtualization, and you ought to consider firing them instead.

The inventory of active VMs is a reasonably easy thing to keep track of; if it's running, I can see it.  I know "where" it is and I can turn it off.  To me, the bigger problem is represented by the offline VMs, which can live outside that inventory window, just waiting to be reactivated from their hypervisorial hibernation.
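
Incidentally, that inventory is easy to script, too.  Here's a minimal sketch using VMware's pyVmomi Python SDK (the vCenter hostname and credentials are placeholders, obviously) that walks the inventory and prints every registered VM with its power state, so at least the hibernating ones show up on a report:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def report_vm_inventory(host: str, user: str, password: str) -> None:
    """Print every *registered* VM and its power state, running or not."""
    ctx = ssl._create_unverified_context()  # lab shortcut; verify certs for real
    si = SmartConnect(host=host, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            # The poweredOff/suspended entries are exactly the VMs living
            # outside the "if it's running, I can see it" window.
            print(f"{vm.name:<40} {vm.runtime.powerState}")
        view.Destroy()
    finally:
        Disconnect(si)


if __name__ == "__main__":
    report_vm_inventory("vcenter.example.com", "administrator", "changeme")
```

Of course, an image parked on a file share and never registered won't show up here at all, which is rather the point.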

But does that represent "rogue?"

You want an example of a threat which represents truly rogue VM "sprawl" that people ought to be afraid of?  OK, here's one, and it happened to me.  I talk about it all the time and people usually say "Oh, man, I never thought of that…"  usually because we're focused on server virtualization and not the client side.

The Setup: It's 9:10 a.m., about four or five years ago.  I'm settling in to read email after getting to work when the klaxon alarms start blaring.  The IDS/IPS consoles go mad. Users can't access network resources.  DHCP addresses are being handed out from somewhere internal on the network, from pools allocated to Korean address space.

We take distributed sniffer traces, track back through the firewall, IDS and IPS logs, and isolate the MAC address in the CAM tables of the 96-port switch into which the offending DHCP server appears to be plugged, although we can't ping it.

My analyst is now on a mission to unplug the port, so he undocks his laptop and the alarms fall silent.

I look over at him.  He has a strange look on his face.  He docks his laptop again.  Seconds later the alarms go off again.

The Culprit: It turns out said analyst was doing research at home on our W2K AD/DHCP server-hardening scripts.  He took our standard W2K server image, loaded it as a VM in VMware Workstation and used it at home to validate functionality.

The image he used had AD/DHCP services enabled.

When he was done at home the night before, he minimized VMware and closed his laptop.

When he came in to work the next morning, he simply docked and went about reading email, forgetting that the VMware instance was still running.  Doing what it does, it started responding to DHCP requests on the network.

Because he was using shared IP addresses for his VM and was "behind" the personal firewall on his machine, which prohibits ICMP requests based on policy (but obviously not bootp/DHCP), we couldn't ping it or profile the workstation…

Now, that's a rogue VM.  An accidental rogue VM.  Imagine if it were an attacker.  Perhaps he/she was a legitimate user, but disgruntled.  Perhaps he/she decided to use wireless instead of wired.  How much fun would that be?
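
For what it's worth, hunting this particular flavor of rogue is scriptable.  Below is a rough sketch using the scapy packet library: broadcast a DHCP DISCOVER and print everything that answers.  Any responder that isn't your legitimate DHCP server earns somebody a visit to their desk.  (Interface selection and timeouts are simplified for illustration, and you'll need root to send raw frames.)

```python
from scapy.all import BOOTP, DHCP, IP, UDP, Ether, conf, get_if_hwaddr, srp


def find_dhcp_servers(timeout: int = 5) -> None:
    """Broadcast a DHCP DISCOVER and list every server that offers a lease."""
    conf.checkIPaddr = False  # offers come back from the server's IP, not the broadcast address
    mac = get_if_hwaddr(conf.iface)
    discover = (
        Ether(src=mac, dst="ff:ff:ff:ff:ff:ff")
        / IP(src="0.0.0.0", dst="255.255.255.255")
        / UDP(sport=68, dport=67)
        / BOOTP(chaddr=bytes.fromhex(mac.replace(":", "")), xid=0x1337)
        / DHCP(options=[("message-type", "discover"), "end"])
    )
    answered, _ = srp(discover, multi=True, timeout=timeout, verbose=False)
    for _sent, offer in answered:
        print(f"DHCP server at {offer[IP].src} (MAC {offer[Ether].src})")


if __name__ == "__main__":
    find_dhcp_servers()
```

One answer from your legitimate server is good; two answers means somebody's laptop is moonlighting.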

Stop with the "rogue (server) VM" FUD, wouldya?

Kthxbye.

/Hoff

Categories: Virtualization Tags:

Using Twitter (Via the Cloud) As a Human-Powered, Second Stage SIEM & IPS

December 18th, 2008 2 comments

Here's the premise that will change the face of network security, compliance, SIEM and IDP forever:

Twitter as a human-powered SIEM and IPS for correlation

This started as a joke I made on Twitter a few weeks ago, but given the current astounding popularity of Cloud-based zaniness, I'm going open source with my idea and monetizing it in the form of a new startup called CloudCorrelator™.

Here's how it works:

  1. You configure all your network devices and your management consoles (aggregated or not) to point to a virtual machine that you install somewhere in your infrastructure.  It's OVF compliant, so it will work with pretty much any platform.
  2. This VM accepts syslog, SNMP, raw log formats and/or XML; it will take your streamed message-bus inputs, package them up, encrypt them into something we call the SlipStream™, and forward them off to…
  3. …the fantastic cloud-based service called CloudCorrelator™ (running on the ever-popular AWS platform), which normalizes the alerts and correlates them as any SIEM platform does, providing all the normal features you'd expect, but in the cloud, where storage, availability, security and infinite expandability are guaranteed!  The CloudCorrelator™ is open source, of course.

    This is where it gets fun…

  4. Based upon your policies, the CloudCorrelator™ sanitizes your SlipStream™ feed and, using the Twitter API, allows Twitter followers to cross-correlate seemingly random events globally, using actual human eyeballs to provide the heuristics and fuzzy-logic analysis across domains.
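
If you want to play along at home, here's a minimal sketch of the on-premises collector from steps 1 and 2: a toy Python listener that accepts syslog datagrams, batches them up, and ships them off as a SlipStream™.  The endpoint URL, port and JSON payload are all inventions for illustration; CloudCorrelator™ remains, mercifully, imaginary.

```python
import json
import socket
import time
import urllib.request

# Hypothetical ingestion endpoint -- CloudCorrelator(tm) does not actually exist.
COLLECTOR_URL = "https://ingest.cloudcorrelator.example/slipstream"
BATCH_SIZE = 100      # flush after this many events...
FLUSH_SECONDS = 10.0  # ...or after this much time, whichever comes first


def forward(batch):
    """POST a batch of collected events to the (imaginary) cloud service."""
    body = json.dumps({"events": batch}).encode("utf-8")
    req = urllib.request.Request(
        COLLECTOR_URL, data=body, headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)


def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5514))  # syslog is normally 514/udp, which needs root
    sock.settimeout(1.0)
    batch, last_flush = [], time.time()
    while True:
        try:
            data, (src, _port) = sock.recvfrom(8192)
            batch.append({
                "received": time.time(),
                "source": src,
                "raw": data.decode("utf-8", errors="replace"),
            })
        except socket.timeout:
            pass
        if batch and (len(batch) >= BATCH_SIZE
                      or time.time() - last_flush >= FLUSH_SECONDS):
            forward(batch)
            batch, last_flush = [], time.time()


if __name__ == "__main__":
    main()
```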

Why bother sending your SlipStream™ to Twitter?  Well, firstly, you can use existing search tools to determine if anyone else is seeing similar traffic patterns across diverse networks.  Take TwitterSearch, for example.   Better yet, use the TweetStat Cloud to map relevant cross-pollination of events.
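
The Twitter leg (step 4) is just as easy to mock up.  Here's a sketch assuming the third-party tweepy library and a set of valid API credentials; the scrubbing patterns and the #SlipStream hashtag are my own inventions, not anything the Twitter API defines:

```python
import re

import tweepy  # third-party Twitter client: pip install tweepy

# Scrub anything that would leak internal details before it goes public.
RFC1918 = re.compile(
    r"\b(?:10\.\d{1,3}\.\d{1,3}\.\d{1,3}"
    r"|192\.168\.\d{1,3}\.\d{1,3}"
    r"|172\.(?:1[6-9]|2\d|3[01])\.\d{1,3}\.\d{1,3})\b")
INTERNAL_HOST = re.compile(r"\b[\w-]+\.internal\.example\.com\b")


def sanitize(event: str) -> str:
    event = RFC1918.sub("[redacted-ip]", event)
    event = INTERNAL_HOST.sub("[redacted-host]", event)
    return event[:240]  # leave room for the hashtag


def tweet_event(api: tweepy.API, event: str) -> None:
    api.update_status(sanitize(event) + " #SlipStream")


if __name__ == "__main__":
    auth = tweepy.OAuthHandler("consumer-key", "consumer-secret")
    auth.set_access_token("access-token", "access-secret")
    tweet_event(tweepy.API(auth),
                "IDS alert: suspicious DHCP OFFER from 192.168.1.66")
```

Whether your followers constitute a low-latency correlation engine is left as an exercise for the reader.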

That zero day just became a non-event.

I am accepting VC, press and alpha-customer inquiries immediately.  The @VirtualSIEM Twitter feed should start showing SlipStream™ parses out of CloudCorrelator™ shortly.

/Hoff

Categories: Jackassery Tags: