Google Gaffe – The Cloud Needs a Snuggie…Or a Wedgie

May 19th, 2009 No comments

By now you’ve undoubtedly heard that Google had a little operational hiccup.  I particularly enjoyed Craig Labovitz’s (Arbor) account of “The Great GoogleLapse.”

When a suite of services that account for a projected 5% of the entire Intertube’s traffic shits the bed, people pay attention.

Sometimes for the wrong reasons.

Conspiracy theories, rumors of the end of days and chants of “don’t trust the Cloud!” start to fly when operational issues such as the routing boo-boo that hit Google turn up.

The reality is that in the grand scheme of things, we should take the three salient points from this experience and move on:

  1. Cloud services — even those with the scale, maturity and operational track-record of Google — still depend on fundamentally weak, insecure and unstable infrastructure that is easy to screw up.
    This is the premise for my upcoming Black Hat talk titled “Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure.”
  2. You ought to have a Plan B. That may be difficult as it relates to Cloud-based SaaS application offerings and services which, by definition, tend to tie you to the platform/provider offering them.
  3. This isn’t going to stop anyone from moving to the Cloud.  It may give people pause and they may spend a few more cycles evaluating what Plan B might mean, but it also pushes the agendas of hybrid architectures like Google’s NaCl and client-side hypervisors for “off-line” Cloud goodness.  All in all, it’s a nice reminder, but Cloud goes on.

The economic lubricant provided by the Astro Glide that is Cloud is just too compelling. If someone hasn’t factored in potential widespread outages from single-sourced providers, shame on them; that’s poor risk assessment.

Yes, we’ve got lots of attendant issues to solve when it comes to Cloud.  Many of them, I have so soapboxed, are the same ones we’ve had for a long while.  To those of us who recognize the Internet Cloud for what it is, Google’s outage was simply an opportunity to order another Hoffachino.

What doesn’t kill us makes us…just as insecure and potentially unavailable due to some monkey pushing the wrong button as we’ve always been.

Besides, now we know that outsourcing your traffic to China is the sux0r.

So chill.  Learn from this.  Use it to form rational arguments about how to deal with this sort of thing when it does happen — because it’s going to again, just like it always has.  Remember?

Worst comes to worst, may I suggest one of these — it is the cure for all your woes anyway, right?

/Hoff

Incomplete Thought: Storage In the Cloud: Winds From the ATMOS(fear)

May 18th, 2009 1 comment

I never metadata I didn’t like…

I first heard about EMC’s ATMOS Cloud-optimized storage “product” months ago:

EMC Atmos is a multi-petabyte offering for information storage and distribution. If you are looking to build cloud storage, Atmos is the ideal offering, combining massive scalability with automated data placement to help you efficiently deliver content and information services anywhere in the world.

I had lunch with Dave Graham (@davegraham) from EMC a ways back and while he was tight-lipped, we discussed ATMOS in lofty, architectural terms.  I came away from our discussion with the notion that ATMOS was more of a platform and less of a product, with a focus on managing not only stores of data, but also the context, metadata and policies surrounding it.  ATMOS tasted like a service provider play with a nod to very large enterprises who were looking to seriously tread down the path of consolidated and intelligent storage services.

I was really intrigued with the concept of ATMOS, especially when I learned that at least one of the people who works on the team developing it also contributed to the UC Berkeley project called OceanStore from 2005:

OceanStore is a global persistent data store designed to scale to billions of users. It provides a consistent, highly-available, and durable storage utility atop an infrastructure comprised of untrusted servers.

Any computer can join the infrastructure, contributing storage or providing local user access in exchange for economic compensation. Users need only subscribe to a single OceanStore service provider, although they may consume storage and bandwidth from many different providers. The providers automatically buy and sell capacity and coverage among themselves, transparently to the users. The utility model thus combines the resources from federated systems to provide a quality of service higher than that achievable by any single company.

OceanStore caches data promiscuously; any server may create a local replica of any data object. These local replicas provide faster access and robustness to network partitions. They also reduce network congestion by localizing access traffic.

Pretty cool stuff, right?  This just goes to show that plenty of smart people have been working on “Cloud Computing” for quite some time.

Ah, the ‘Storage Cloud.’

Now, while we’ve heard of and seen storage-as-a-service in many forms, including the Cloud, today I saw a really interesting article titled “EMC, AT&T open up Atmos-based cloud storage service:”

EMC Corp.’s Atmos object-based storage system is the basis for two cloud computing services launched today at EMC World 2009 — EMC Atmos onLine and AT&T’s Synaptic Storage as a Service.
EMC’s service coincides with a new feature within the Atmos Web services API that lets organizations with Atmos systems already on-premise “federate” data – move it across data storage clouds. In this case, they’ll be able to move data from their on-premise Atmos to an external Atmos computing cloud.

Boston’s Beth Israel Deaconess Medical Center is evaluating Atmos for its next-generation storage infrastructure, and storage architect Michael Passe said he plans to test the new federation capability.

Organizations without an internal Atmos system can also send data to Atmos onLine by writing applications to its APIs. This is different than commercial graphical user interface services such as EMC’s Mozy cloud computing backup service. “There is an API requirement, but we’re already seeing people doing integration” of new Web offerings for end users such as cloud computing backup and iSCSI connectivity, according to Mike Feinberg, senior vice president of the EMC Cloud Infrastructure Group. Data-loss prevention products from RSA, the security division of EMC, can also be used with Atmos to proactively identify confidential data such as social security numbers and keep them from being sent outside the user’s firewall.

AT&T is adding Synaptic Storage as a Service to its hosted networking and security offerings, claiming to overcome the data security worries many conservative storage customers have about storing data at a third-party data center.

The federation of data across storage clouds using APIs? Information cross-pollination and collaboration? Heavy, man.

Take plays like Cisco’s UCS with VMware’s virtualization and stir in VN-Tag with DLP/ERM solutions and sit it on top of ATMOS…from an architecture perspective, you’ve got an amazing platform for service delivery that allows for some slick application of policy that is information centric.  Sure, getting this all to stick will take time, but these are issues we’re grappling with in our discussions related to portability of applications and information.
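
To make the “writing applications to its APIs” angle from the article a bit more concrete, here is a rough sketch of what pushing an object plus policy-relevant metadata into a Cloud storage service over REST might look like. Everything below (the endpoint, headers and auth scheme) is invented for illustration; it is not the actual Atmos API:

```python
# Hypothetical sketch only: the endpoint, headers and auth scheme are invented
# for illustration and are NOT the actual Atmos (or any vendor's) API.
import requests

OBJECT_STORE = "https://storage.example.com/rest/objects"  # made-up endpoint

def put_object(path, data, metadata, token):
    """Store an object along with user metadata that a policy engine could act on."""
    headers = {
        "Authorization": "Bearer %s" % token,              # invented auth scheme
        "Content-Type": "application/octet-stream",
        # Metadata travels with the object; placement, replication or retention
        # policy could key off these tags rather than off where the bits happen to live.
        "x-meta-tags": ",".join("%s=%s" % (k, v) for k, v in metadata.items()),
    }
    resp = requests.put("%s/%s" % (OBJECT_STORE, path), data=data,
                        headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.headers.get("Location", path)

# e.g. tag an object so a (hypothetical) policy can say "PII stays in the EU, keep 3 replicas"
put_object("claims/2009/0042.pdf", b"...", {"classification": "pii", "geo": "eu"}, "not-a-real-token")
```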

Settling Back Down to Earth

This brings up a really important set of discussions that I keep harping on as the cold winds of reality start to blow.

From a security perspective, storage is the moose on the table that nobody talks about.  In virtualized environments we’re interconnecting all our hosts to islands of centralized SANs and NAS.  We’re converging our data and storage networks via CNAs and unified fabrics.

In multi-tenant Cloud environments all our data ends up being stored similarly with the trust that segregation and security are appropriately applied.  Ever wonder how storage architectures never designed to do these sorts of things at scale can actually do so securely? Whose responsibility is it to manage the security of these critical centerpieces of our evolving “centers of data”?

So besides my advice that security folks need to run out and get their CCIE certs, perhaps you ought to sign up for a storage security class, too.  You can also start by reading this excellent book by Himanshu Dwivedi titled “Securing Storage.”

What are YOU doing about securing storage in your enterprise or Cloud engagements?  If your answer is LUN masking, here are four Excedrin; call me after the breach.
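
To be clear about why that answer falls short: LUN masking is essentially just an allow-list keyed on the initiator, as the toy sketch below illustrates, and it says nothing about tenant segregation, encryption or who controls the mapping once everything shares the same spindles and fabric. Illustration only, not any vendor’s implementation:

```python
# Toy illustration of LUN masking: nothing more than an allow-list mapping an
# initiator (host WWN) to the LUNs it is permitted to see. Not any array
# vendor's implementation; the point is how little the control actually asserts.
MASKING_TABLE = {
    "10:00:00:00:c9:aa:bb:01": {0, 1, 2},   # ESX cluster A sees LUNs 0-2
    "10:00:00:00:c9:aa:bb:02": {3, 4},      # ESX cluster B sees LUNs 3-4
}

def lun_visible(initiator_wwn, lun_id):
    """The only question LUN masking answers: can this initiator see this LUN?"""
    return lun_id in MASKING_TABLE.get(initiator_wwn, set())

print(lun_visible("10:00:00:00:c9:aa:bb:01", 2))   # True
print(lun_visible("10:00:00:00:c9:aa:bb:02", 2))   # False
# It says nothing about which tenants share the spindles behind LUN 2, whether
# the data is encrypted at rest, or who can reconfigure the masking itself.
```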

/Hoff

Security and the Cloud – What Does That Even Mean?

May 18th, 2009 1 comment

I was chatting with Pete Lindstrom this morning about how difficult it is to frame meaningful discussion around what security and Cloud Computing means.

In my Four Horsemen presentation I reflected on the same difficulty as it relates to security and virtualization.  I arrived at separating the discussion into three parts:

Securing virtualization refers to what we need to do in order to ensure the security of the underlying virtualization platform itself.

Virtualizing security refers to how we operationalize and virtualize security capabilities — those we already have and new, evolving solutions — in order to secure our virtualized resources.

Security via virtualization refers to the security benefits we gain through the deployment of virtualization, above and beyond what we might expect from non-virtualized environments.

In reality, we need to break down the notion of security and Cloud computing into similar chunks.  The reason for this is that, much like in the virtualization realm, we’re struggling less with security technology solutions (as there really are few) and more with the operational, organizational and compliance issues that come with this new uncharted (or poorly charted) territory.

Further, it’s important that we separate offering security services from the Cloud as a platform from how we secure the Cloud as a platform itself…I’ve chatted about that previously.

Thus we need to understand what it means to secure — or have a provider secure — the underlying Cloud platform, how we can then apply solutions from a collective catalog of compensating controls to apply security to our Cloud resources and ultimately how we can achieve parity or even better security through Cloud Computing.

I find it disturbing that folks often form the opinion that I am anti-Cloud. That’s something I must obviously work on, but suffice it to say that I am incredibly passionate about Cloud Computing and ensuring that we achieve an appropriate balance of security and survivability with its myriad of opportunities.

To illustrate this, I offer the talking slide from my Frogs presentation of security benefits that Cloud presents to an organization as a forcing function as they think about embracing Cloud Computing.  I present this slide before the security issues slide.  Why?  Because I think Cloud can be harnessed as a catalyst for moving things forward in the security realm and used as a lever to get things done:

Looking at the list of benefits, they actually highlight what I think are the top three concerns organizations have with Cloud computing.  I believe they revolve around understanding how Cloud services provide for the following:

  • Preserving confidentiality, integrity and availability
  • Maintaining appropriate levels of identity and access control
  • Ensuring appropriate audit and compliance capability

These aren’t exactly new problems.  They are difficult problems, especially when combined with new business models and technology, but ones we need to solve.  Cloud can help.

So, what does “securing the Cloud” mean and how do we approach discussing it?

I think the most rational approach is the one the Cloud Security Alliance is taking by framing the issues around the things that matter most, pointing out how these issues with which we are familiar are both similar and different when talking about Cloud Computing.  While others still argue with defining the Cloud, we’re busy trying to get in front of the issues we know we already have.

If you haven’t had a chance to take a look at the guidance, please do!  You can discuss it here on our Google Group.

In the meantime, ponder this: Valeo utilizing Google Apps across its 30,000 users. Funny, I remember talking about CapGemini and Google doing this very thing back in 2007: Google Makes Its Move To The Corporate Enterprise Desktop – Can It Do It Securely?

Check out some of the comments in that post. Crow, anyone?

/Hoff

Incomplete Thought: The Crushing Costs of Complying With Cloud Customer “Right To Audit” Clauses

May 14th, 2009 13 comments

As Cloud Computing continues to capture the hearts, minds and other assorted organs of business folk everywhere, the economics of outsourcing services to the Cloud come more and more into focus.  Here’s one element that I don’t think is being paid much attention, however*:

While most of the cost/benefit analysis is being discussed as it relates to the “consumer” side of Cloud, the providers themselves have an equally burgeoning issue surfacing as it relates to cost: satisfying right-to-audit clauses.

Almost all of the Cloud providers I have spoken to are being absolutely hammered by customers acting on their “right to audit” clauses in contracts. This is a change in behavior.  Most customers have traditionally not acted on these clauses as they used them more as contingency/insurance options.  With the uncertainty relating to confidentiality, integrity and availability of Cloud services, this is no longer the case.  Cloud providers continue to lament that they really, really want a standardized way of responding to these requests.**

These providers — IaaS, PaaS and especially SaaS — are having to staff up and spend considerable amounts of time, money and resources on satisfying these requests from customers.

When I negotiated contracts for outsourced services, I always required an RTA clause.  It was non-negotiable.  I also acted on them several times in response to an issue or request from an auditor/regulator.

If you aren’t writing these clauses into your contracts, you should be.  For those of you who have done so, good on you for being diligent.  To those providers who are eating it with the load this renders, I feel your pain but I fear it will only get worse.

/Hoff

* This WordPress theme makes indented captions look like quotes. This is a highlighted section written by me and is not a quote from someone else.  Sorry for any confusion.
** This is where/why Cloud providers should get involved with the Cloud Security Alliance — we can, as a community, facilitate both expectations and deliverables from both the consumer and provider perspective…

On the Draft NIST Working Definition Of Cloud Computing…

May 8th, 2009 6 comments

How many of you have seen the Draft NIST Working Definition Of Cloud Computing?  It appears to have been presented to government CIOs at the recent Federal CIO Cloud Computing Summit in Washington DC last week.

I saw the draft NIST Working Definition of Cloud Computing shown below (copied from Reuven Cohen’s blog) about a month and a half ago, but until now I had not seen it presented in its entirety outside of the copy I was sent, and I didn’t know how/when it would be made “public,” so I didn’t blog directly about its content.

The reason I was happy to see it when I did was that I had just finished writing the draft of the Cloud Security Alliance Security Guidance for Critical Areas of Focus In Cloud Computing — specifically the section on Cloud architecture — and found that there was very good alignment between our two independent works (much like with the Jericho Cloud Cube model).

In fact, you’ll see that I liked the definitions for the SPI model components so much, I used them and directly credited  Peter Mell from NIST, one of the authors of the work.

I sent a very early draft of my work along with some feedback to Peter on some of the definitions, specifically since I noted some things I did not fully agree with in the deployment models sections. The “community” clouds seem to me to be an abstraction or application of private clouds. I have a “managed cloud” instead.  Ah, more fuel for good discussion.

I hoped we could have discussed them prior to publishing either of the documents, but we passed in the ether as it seems.

At any rate, here’s the draft from our wily Canadian friend:

4-24-09

Peter Mell and Tim Grance – National Institute of Standards and Technology, Information Technology Laboratory

Note 1: Cloud computing is still an evolving paradigm. Its definitions, use cases, underlying technologies, issues, risks, and benefits will be refined in a spirited debate by the public and private sectors. These definitions, attributes, and characteristics will evolve and change over time.

Note 2: The cloud computing industry represents a large ecosystem of many models, vendors, and market niches. This definition attempts to encompass all of the various cloud approaches.

Definition of Cloud Computing:

Cloud computing is a pay-per-use model for enabling available, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is comprised of five key characteristics, three delivery models, and four deployment models.

Key Characteristics:

  • On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed without requiring human interaction with each service’s provider.
  • Ubiquitous network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
  • Location independent resource pooling. The provider’s computing resources are pooled to serve all consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. The customer generally has no control or knowledge over the exact location of the provided resources. Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
  • Rapid elasticity. Capabilities can be rapidly and elastically provisioned to quickly scale up and rapidly released to quickly scale down. To the consumer, the capabilities available for rent often appear to be infinite and can be purchased in any quantity at any time.
  • Pay per use. Capabilities are charged using a metered, fee-for-service, or advertising based billing model to promote optimization of resource use. Examples are measuring the storage, bandwidth, and computing resources consumed and charging for the number of active user accounts per month. Clouds within an organization accrue cost between business units and may or may not use actual currency.

Note: Cloud software takes full advantage of the cloud paradigm by being service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability.

Delivery Models:

  • Cloud Software as a Service (SaaS). The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure and accessible from various client devices through a thin client interface such as a Web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
  • Cloud Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., java, python, .Net). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage, but the consumer has control over the deployed applications and possibly application hosting environment configurations.
  • Cloud Infrastructure as a Service (IaaS). The capability provided to the consumer is to rent processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).

Deployment Models:

  • Private cloud. The cloud infrastructure is owned or leased by a single organization and is operated solely for that organization.
  • Community cloud. The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations).
  • Public cloud. The cloud infrastructure is owned by an organization selling cloud services to the general public or to a large industry group.
  • Hybrid cloud. The cloud infrastructure is a composition of two or more clouds (internal, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting).

Each deployment model instance has one of two types: internal or external. Internal clouds reside within an organizations network security perimeter and external clouds reside outside the same perimeter.

Now, Reuven Cohen mentioned on his blog:

In creating this definition, NIST consulted extensively with the private sector including a wide range of vendors, consultants and industry pundants (sic!) including your (sic!) truly. Below is the draft NIST working definition of Cloud Computing. I should note, this definition is a work in progress and therefore is open to public ratification & comment. The initial feedback was very positive from the federal CIO’s who were presented it yesterday in DC. Baring any last minute lobbying I doubt we’ll see many more major revisions.

…which is interesting, because for being “…open to public ratification & comment,” I can’t seem to find it anywhere except for references to its creation as a deliverable in FY09 in a presentation from December, 2008.  I searched NIST’s site, but perhaps I’m just having a bad search day.

Clearly at least I have a couple of comments.  I could send them to Peter directly, but I’d rather discuss them openly if that’s appropriate and there is a forum to do so.  At this rate, it looks as though it may be too late, however.

/Hoff

The Forthcoming Citrix/Xen/KVM Virtual Networking Stack…What Does This Mean to VMware/Cisco 1000v?

May 8th, 2009 8 comments

I was at Citrix Synergy/Virtualization Congress earlier this week and at the end of the day on Wednesday, Scott Lowe tweeted something interesting when he said:

In my mind, the biggest announcement that no one is talking about is virtual switching for XenServer. #CitrixSynergy

I had missed the announcements since I didn’t get to many of the sessions due to timing, so I sniffed around based on Scott’s hints and looked for some more meat.

I found that Chris Wolf covered the announcement nicely in his blog here but I wanted a little more detail, especially regarding approach, architecture and implementation.

Imagine my surprise when Alessandro Perilli and I sat down for a quick drink only to be joined by Simon Crosby and Ian Pratt.  Sometimes, membership has its privileges 😉

I asked Simon/Ian about the new virtual switch because I was very intrigued, and since I had direct access to the open source, it was good timing.

Now, not to be a spoil-sport, but there are details under FreiNDA that I cannot disclose, so I’ll instead riff off of Chris’ commentary wherein he outlined the need for more integrated and robust virtual networking capabilities within or adjunct to the virtualization platforms:

Cisco had to know that it was only a matter of time before competition for the Nexus 1000V started to emerge, and it appears that a virtual switch that competes with the Nexus 1000V will come right on the heels of the 1000V release. There’s no question that we’ve needed better virtual infrastructure switch management, and an overwhelming number of Burton Group clients are very interested in this technology. Client interest has generally been driven by two factors:

  • Fully managed virtual switches would allow the organization’s networking group to regain control of the network infrastructure. Most network administrators have never been thrilled with having server administrators manage virtual switches.
  • Managed virtual switches provide more granular insight into virtual network traffic and better integration with the organization’s existing network and security management tools

I don’t disagree with any of what Chris said, except that I do think that the word ‘compete’ is an interesting turn of phrase.

Just as the Cisco 1000v is a mostly proprietary (implementation of a) solution bound to VMware’s platform, the new Citrix/Xen/KVM virtual networking capabilities — while open sourced and free — are bound to Xen and KVM-based virtualization platforms, so it’s not really “competitive” because it’s not going to run in VMware environments. It is certainly a clear shot across the bow of VMware to address the 1000v, but there’s a tradeoff here as it comes to integration and functionality as well as the approach to what “networking” means in a virtualized construct.  More on that in a minute.

I’m going to take Chris’ next chunk out of order in order to describe the features we know about:

I’m expecting Citrix to offer more details of the open source Xen virtual switch in the near future, but in the mean time, here’s what I can tell you:

  • The virtual switch will be open source and initially compatible with both Xen- and KVM-based hypervisors
  • It will provide centralized network management
  • It will support advanced network management features such as Netflow, SPAN, RSPAN, and ERSPAN
  • It will initially be available as a plug-in to XenCenter
  • It will support security features such as ACLs and 802.1x

This all sounds like good stuff.  It brings the capabilities of virtual networking and how it’s managed to “proper” levels.  If you’re wondering how this is going to happen, you *cough* might want to take a look at OpenFlow…being able to enforce policies and do things similar to the 1000v with VMware’s vSphere, DVS and the upcoming VN-Link/VN-Tag is the stuff I can’t talk about — even though it’s the most interesting.  Suffice it to say there are some very interesting opportunities here that do not require proprietary networking protocols that may or may not require uplifts or upgrades of routers/switches upstream.  ’nuff said. 😉
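
To give a flavor of what “centralized network management” for hypervisor vSwitches implies in practice (one policy definition pushed uniformly to every host rather than per-host vSwitch tweaking), here is a deliberately toy sketch in Python. None of the names or interfaces below are real; the actual mechanism, OpenFlow-based or otherwise, is the part I can’t talk about:

```python
# Conceptual sketch of centralized vSwitch policy: define flow/ACL rules once,
# push them to every hypervisor's virtual switch. All types and method names are
# invented; this is not the XenServer vSwitch, Open vSwitch or OpenFlow API.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRule:
    match: tuple      # e.g. (("vlan", 200), ("dst_port", 22))
    action: str       # "allow", "drop", "mirror:span0"
    priority: int = 100

class VSwitchController:
    """Toy central controller holding policy for a fleet of hypervisor vSwitches."""
    def __init__(self):
        self.switches = {}   # hostname -> ordered list of applied rules

    def register(self, host):
        self.switches[host] = []

    def push_policy(self, rules):
        # One policy definition, enforced identically on every host: the
        # "network team regains control" argument in a nutshell.
        ordered = sorted(rules, key=lambda r: -r.priority)
        for host in self.switches:
            self.switches[host] = ordered

ctrl = VSwitchController()
for h in ("xenhost-01", "xenhost-02"):
    ctrl.register(h)

ctrl.push_policy([
    FlowRule((("vlan", 200), ("dst_port", 22)), "drop", 500),   # block SSH on the tenant VLAN
    FlowRule((("vlan", 200),), "mirror:span0", 200),            # SPAN-style mirror toward an IDS
    FlowRule((), "allow", 1),                                   # default permit
])
print(ctrl.switches["xenhost-02"][0].action)   # "drop" everywhere, not per-host snowflake configs
```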

Now the next section is interesting, but in my opinion is a bit of reach in certain sections:

For awhile I’ve held the belief that the traditional network access layer was going to move to the virtual infrastructure. A large number of physical network and security appliance vendors believe that too, and are building or currently offering products that can be deployed directly to the virtual infrastructure. So for Cisco, the Nexus 1000V was important because it a) gave its clients functionality they desperately craved, but also b) protected existing revenue streams associated with network access layer devices. Throw in an open source managed virtual switch, and it could be problematic for Cisco’s continued dominance of the network market. Sure, Cisco’s competitors can’t go at Cisco individually, but by collectively rallying around an open source managed virtual switch, they have a chance. In my opinion, it won’t be long before the Xen virtual switch can be run via software on the hypervisor and will run on firmware on SR-IOV-enabled network interfaces or converged network adapters (CNAs).


This is clearly a great move by Citrix. An open source virtual switch will allow a number of hardware OEMs to ship a robust virtual switch on their products, while also giving them the opportunity to add value to both their hardware devices (e.g., network adapters) and software management suites. Furthermore, an open source virtual switch that is shared by a large vendor community will enable organizations to deploy this virtual switch technology while avoiding vendor lock-in.

Firstly, I totally agree that it’s fantastic that this capability is coming to Xen/KVM platforms.  It’s a roadmap item that has been missing and was, quite honestly, going to happen one way or another.

You can expect that Microsoft will also need to respond to this at some point to allow for more integrated networking and security capabilities with Hyper-V.

However, let’s compare apples to apples here.

I think it’s interesting that Chris chose to toss in the “vendor lock-in” argument as it pertains to virtual networking and virtualization for the following reasons:

  • Most enterprise networking environments (from the routing & switching perspective) are usually provided by a single vendor.
  • Most enterprises choose a virtualization platform from a single vendor

If you take those two things, then for an environment that has VMware and Cisco, that “lock-in” is a deliberate choice, not foisted upon them.

If an enterprise chooses to invest based upon functionality NOT available elsewhere due to a tight partnership between technology companies, it’s sort of goofy to suggest lock-in.  We call this adoption of innovation.  When you’re a competitor who is threatened because you don’t have the capability, you call it lock-in. ;(

This virtual switch announcement does nothing to address “lock-in” for customers who choose to run VMware with a virtual networking stack other than VMware’s or Cisco’s…see what I mean?  It doesn’t matter if the customer has Juniper switches or not in this case…until you can integrate an open source virtual switch into VMware the same way Cisco did with the 1000v (which is not trivial), we are where we are.

Of course the 1000v was a strategic decision by Cisco to help reclaim the access layer that was disappearing into the virtualized hosts and make Cisco more relevant in a virtualized environment.  It sets the stage, as I have mentioned, for the longer term advancements of the entire Nexus and NG datacenter switching/routing products including VN-Link/VN-Tag — with some features being proprietary and requiring Cisco hardware and others not.

I just don’t buy the argument that an open virtual switch “…could be problematic for Cisco’s continued dominance of the network market” when the longtime availability of open source networking products (including routers like Vyatta) hasn’t made much of a dent in the enterprise against Cisco.

Customers want “open enough” and solutions that are proven and time tested.  Even the 1000v is brand new.  We haven’t even finished letting the paint dry there yet!

Now, I will say that if IBM or HP want to stick their thumb in the pie and extend their networking reach into the host by integrating this new technology with their hardware network choices, it offers a good solution — so long as you don’t mind *cough* “lock-in” from the virtualization platform provider’s perspective (since VMware is clearly excluded — see how this is a silly argument?)

The final point, about “security inspection” and comparing the ability to redirect flows at a kernel/network layer to a security VA/VM/appliance, covers only one small part of what VMware’s VMsafe does:

Citrix needed an answer to the Nexus 1000V and the advanced security inspection offered by VMsafe, and there’s no doubt they are on the right track with this announcement.

Certainly, it’s the first step toward better visibility and does not require API modification of the security virtual appliances/machines like VMware’s solution in its full-blown implementation does, but this isn’t full-blown VM introspection, either.

More to the point, it’s a way of ensuring a more direct method of gaining better visibility and control over networking in a virtualized environment.  Remember that VMsafe also includes the ability to provide interception and inspection of virtualized memory, disk and CPU execution as well as networking.  There are, as I have mentioned, Xen community projects to introduce VM introspection, however.

So yes, they’re on the right track indeed, and this will give people pause when evaluating which virtualization and network vendor to invest in, should there be a greenfield opportunity to do so.  If we’re dealing with environments that already have Cisco and VMware in place, not so much.

/Hoff

Cloud Security Will NOT Supplant Patching…Qualys Has Its Head Up Its SaaS

May 4th, 2009 4 comments

“Cloud Security Will  Supplant Patching…”

What a sexy-sounding claim in this Network World piece which is titled with the opposite suggestion from the title of my blog post.  We will still need patching.  I agree, however, that how it’s delivered needs to change.

Before we get to the issues I have, I do want to point out that the article — despite its title — is focused on the newest release of Qualys’ Laws of Vulnerability 2.0 report (PDF), which is the latest version of the Half Lives of Vulnerability study that my friend Gerhardt Eschelbeck started some years ago.
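
If you are not familiar with the “half-life” framing, it treats patch adoption like radioactive decay: every half-life period, the population of systems still exposed to a given vulnerability is cut roughly in half. Here is a quick sketch of the idea, my own toy illustration rather than Qualys’ methodology or data:

```python
# Toy model of the vulnerability "half-life" idea: the fraction of systems still
# unpatched halves once per half-life period. My illustration, not Qualys' math or data.

def unpatched_fraction(days_elapsed, half_life_days):
    """Fraction of exposed systems still unpatched after days_elapsed days."""
    return 0.5 ** (days_elapsed / half_life_days)

if __name__ == "__main__":
    half_life = 30.0  # hypothetical 30-day half-life for a given critical vulnerability
    for day in (0, 30, 60, 90, 120):
        pct = unpatched_fraction(day, half_life) * 100
        print("Day %3d: ~%5.1f%% of systems still unpatched" % (day, pct))
```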

In the report, the new author, Qualys’ current CTO Wolfgang Kandek, delivers a really disappointing statistic:

In five years, the average time taken by companies to patch vulnerabilities had decreased by only one day, from 60 days to 59 days, at a time when the number of flaws and the speed at which they are being exploited has accelerated from weeks to, in some cases, days. During the same period, the number of IP scanned on an anonymous basis by the company from its customer base had increased from 3 million to a statistically significant 80 million, with the number of vulnerabilities uncovered rocketing from 3 million to 680 million. Of the latter, 72 million were rated by Qualys as being of ‘critical’ severity.

That lack of progress is sobering, right? So far I’m intrigued, but then that article goes off the reservation by quoting Wolfgang as saying:

Taken together, the statistics suggested that a new solution would be needed in order to make further improvement with the only likely candidate on the horizon being cloud computing. “We believe that cloud security providers can be held to a higher standard in terms of security,” said Kandek. “Cloud vendors can come in and do a much better job.”  Unlike corporate admins for whom patching was a sometimes complex burden, in a cloud environment, patching applications would be more technically predictable – the small risk of ‘breaking’ an application after patching it would be nearly removed, he said.

Qualys has its head up its SaaS.  I mean that in the most polite of ways… 😉

Let me make a couple of important observations on the heels of those I’ve already made and an excellent one Lori MacVittie made today in her post titled “The Real Meaning Of Cloud Security Revealed”:

  1. I’d like a better definition of the context of “patching applications.”  I don’t know whether Kandek means applications in an enterprise, those hosted by a Cloud provider, or both.
  2. There’s a difference between providing security services via the Cloud versus securing Cloud and its application/data.  The quotes above mix the issues.  A “Cloud Security” provider like Qualys can absolutely provide excellent solutions to many of the problems we have today associated with point product deployments of security functions across the enterprise. Anti-spam and vulnerability management are excellent examples.  What that does not mean is that the applications that run in an enterprise can be delivered and deployed more “securely” thanks to the efforts of the same providers.
  3. To that point, the Cloud is not all SaaS-based.  Not every application is going to be or can be moved to a SaaS.  Patching legacy applications (or hosting them for that matter) can be extremely difficult.  Virtualization certainly comes into play here, but by definition, that’s an IaaS/PaaS opportunity, not a SaaS one.
  4. While SaaS providers who do “own the entire stack” are in a better position through consolidated multi-tenancy to take on the responsibility of patching “their” infrastructure and application(s) on your behalf, it doesn’t really mean they do it any better on an application-by-application basis.  If a SaaS provider only has 1-2 apps to manage (with lots of customers) versus an enterprise with hundreds (and lots of customers), the “quality” measurements as they relate to defect management (from any perspective) would likely look better were you the competent SaaS vendor mentioned in this article.  You can see my point here.
  5. If you add in PaaS and IaaS as opposed to simply SaaS (as managed by a third party), then the statement that “…patching applications would be more technically predictable – the small risk of ‘breaking’ an application after patching it would be nearly removed” is false.

It’s really, really important to compare apples to apples here. Qualys is a fantastic company with a visionary leader in Philippe Courtot.  I was an early adopter of his SaaS service.  I was on his Customer Advisory Board.  However, as I pointed out to him at the Jericho event where I was a panelist, delivering a security function via the Cloud is not the same thing as securing it, and SaaS is merely one piece of the puzzle.

I wrote a couple of other blogs about this topic:

/Hoff

Just What the Hell Is a Hoffacc[h]ino, Anyway?

May 4th, 2009 5 comments

You may have heard of it.

It’s quite possibly the fundamental underpinning of the entire security industry; a veritable life-source for over-worked security folk.  It’s apparently critical to the success of Cloud, as you can see from the picture to the right.

What is this mystical thing?  The Hoffacchino. Or, Hoffaccino, if you prefer.

You may hear it muttered and wonder “Just what the hell is a Hoffac[h]ino, anyway?”

Go to your local Starbucks and order the following:

The Hoffacc[h]ino

Venti Starbucks Doubleshot on ice. 6 shots, 3 Splenda (can sub sugar), no classic (syrup), breve (that’s 1/2 and 1/2 for those of you who don’t speak Starbucktalian).

I cannot take responsibility for substitutions, because the recipe above took dozens of iterations to perfect the balance of acidity, sweetness, caffeine, creaminess and mouthfeel.

It’s like an americano over ice (without water to dilute) with Splenda and 1/2 and 1/2, and it’s shaken, which makes a big difference for some reason.

Now you know.

Everyone groans when they hear it.  Then they try it.  Then they’re hooked.

Sorry.

/Hoff


VMware’s Licensing – A “Slap In The Face For Cisco?” Hey Moe!

May 4th, 2009 2 comments

I was just reading a post by Alessandro at virtualization.info in which he was discussing the availability of trial versions of Cisco’s Nexus 1000v virtual switch solution for VMware environments:

Starting May 21, we’ll see if the customers will really consider the Cisco virtual switch a must-have and will gladly pay the premium price to replace the basic VMware virtual switch they used for so many years now.  As usual in virtualization, it really depends on who’s your interlocutor inside the corporate. The guys at the security department may have a slightly different opinion on this product than the virtualization guys.

Clearly the Nexus 1000v is just the first in a series of technology and architectural elements that Cisco is introducing to integrate more tightly into virtualized and Cloud environments.  The realities of adoption of the 1000v come down to who is making the purchasing decisions, how virtualization is being addressed as an enterprise architecture issue,  how the organization is structured and what pain points might be felt from the current limitations associated with VMware’s vSwitch from both a technological and operational perspective.

Oh, it also depends on price, too 😉

Alessandro also alludes to some complaints about pricing strategy regarding how the underlying requirement for the 1000v, the vNetwork Distributed Switch, is also a for-pay item.  Without the vNDS, the 1000v no workee:

Some VMware customers are arguing that the current packaging and price may negatively impact the sales of Nexus 1000V, which becomes now much less attractive.

I don’t pretend to understand all the vagaries of the SKU and cost structures of VMware’s new vSphere, but I was intrigued by the following post from the vinternals blog titled “VMware slaps enterprise and Cisco in face, opens door for competitors”:

And finally, vNetwork Distributed Switch. This is where the slap in the face for Cisco is, because the word on the street is that no one even cares about this feature. It is merely seen as an enabler for the Cisco Nexus 1000V. But now, I have to not only pay $600 per socket for the distributed switch, but also pay Cisco for the 1000V!?!?! A large slice of Cisco’s potential market just evaporated. Enterprises have already jumped through the necessary security, audit and operational hoops to allow vSwitches and port groups to be used as standard in the production environment. Putting Cisco into the virtual networking stack is nowhere near a necessity. I wonder what Cisco are going to do now, start rubbishing VMware’s native vSwitches? That will go down well. Oh and yeh, looks like you pretty much have only 1 licensing option for Cisco’s Unified Computing System now. Guess that “20% reduction in capital expense” just flew out the window.

Boy, what a downer! Nobody cares about vNDS?  It’s “…merely seen as an enabler for the Cisco Nexus 1000V?” Evaporation of market? I think those statements are a tad melodramatic, short-sighted and miss the point.

The “necessary security, audit and operational hoops to allow vSwitches and port groups to be used as standard in the production environment” may have been jumped through, but they represent some serious issues at scale and I maintain that these hoops barely satisfy these requirements based on what’s available, not what is needed, especially in the long term.  The issues surrounding compliance, separation of duties, change control/management as well as consistent and stateful policy enforcement are huge problems that are being tolerated today, not solved.

The reality is that vNDS and the 1000v represent serious operational, organizational and technical shifts in the virtualization environment. These are foundational building blocks of a converged datacenter, not point-product cash cows being built to make a quick buck.   The adoption and integration are going to take time, as will vSphere upgrades in general.  Will people pay for them?  If they need more scalable, agile, and secure environments, they will.  Remember the Four Horsemen? vSphere and vNetworking go a long way toward giving enterprises more choice in solving these problems and vNDS/1000v are certainly pieces of this puzzle. The network simply must become more virtualization (and application and information-) aware in order to remain relevant.

However, I don’t disagree in general that “…putting Cisco into the virtual networking stack is nowhere near a necessity” for most enterprises, especially if they have very simple requirements for scale, mobility and security.  In environments that are designing their next evolution of datacenter architecture, the integration between Cisco, VMware and EMC is critical. Virtualization context, security and policy enforcement are pretty important things.  vNetworking/VNDS/1000v/VN-Link are all enablers.

Lastly, there is also no need for Cisco to “…start rubbishing VMware’s native vSwitches” as the differences are pretty clear.  If customers see value in the solution, they will pay for it. I don’t disagree that the “premium” needs to be assessed and the market will dictate what that will be, but this doom and gloom is premature.

Time will tell if these bets pay off.  I am putting money on the fact that they will.

Don’t think that Cisco and VMware aren’t aware of how critical one is to the other; there’s no face slapping going on.

/Hoff

See You At Virtualization Congress ’09 / Citrix Synergy In Vegas…

May 3rd, 2009 No comments

I’ll be at the Virtualization Congress ’09 / Citrix Synergy at the MGM Grand in Las Vegas for a couple of days this week.

I am presenting on Cloud Computing Security on May 6th at 11:30am-12:20pm – Mozart’s The Marriage of Figaro: The Complexity and Insecurity of the Cloud – VC105

This ought to be a funny presentation for about the first 5 minutes…you’ll see why 😉

I’m also on a panel with Dave Shackleford (Configuresoft) & Michael Berman (Catbird) moderated by the mastermind of all things virtualization, Alessandro Perilli, on May 6th at 5: Securing the Virtual Data Center (on Earth and on Clouds) – VC302

If you’re around, ping me via DM on Twitter (@beaker) or hit me up via email [choff @ packetfilter.com]

Of course, it’s entirely likely you’ll find Crosby and me chatting it up somewhere 😉

See you there!

/Hoff