Archive

Posts Tagged ‘Cloud Security’

Incomplete Thought: Cloud Computing & Innovation – Government IT’s Version of Ethanol?

May 31st, 2009 4 comments

Time to stick my neck outside my shell again…

I was reading MIT’s Technology Review (registration required) and came across an interesting article titled “Can Technology Save the Economy?”

This was a very thought-provoking read, as it highlighted how the tens of billions of dollars allocated to energy and information technology in the U.S. stimulus bill have left many economists and innovation experts extremely skeptical:

The concern over the stimulus bill’s technology spending is not just that it offends conventional macroeconomic theory about the best way to boost the economy; it’s that it might harm the very technologies it means to support.  Because the bill was written quickly and shaped by political expediency, economists and experts on innovation policy are leery of many of its funding choices.

Could extending billions of dollars’ worth of fiber-optic lines to rural communities, for example, become a boondoggle?  Or what if utilities run high-power transmission lines to remote solar or wind farms only to find that the electricity they produce is too expensive to compete with other sources?

As a historical analogy, experts point to corn-derived ethanol.  Once the darling of alternative-energy advocates, the heavily subsidized biofuel is now routinely condemned by both environmentalists and economists.  Yet because ethanol’s use in gasoline is now mandated by federal law, and a large industry is now invested in its production, its production is likely to continue even though it offers few environmental benefits over gasoline.

This example shows how far we’ve gone down the path of “corn power” despite its failure to deliver on its promises.  We can’t escape the gravity of our investment, driven by a fervor surrounding adoption that, in many cases, was based upon untested theories and unsubstantiated practice.

Ethanol was designed to reduce our dependency on straight fossil fuels.  It was supposed to cost less and deliver better performance at lower emissions.  It hasn’t quite worked out that way.  In reality, ethanol has produced many profound unanticipated impacts: financial, environmental, economic, political and social.  Has its production and governmentally-forced adoption kept better solutions from being properly investigated?

Despite my unbridled enthusiasm for Cloud Computing, I am conflicted as I examine it using a similar context to the ethanol example above.  I fully admit that I’m stretching the analogy here and mixing metaphors, but the article got me thinking and some of that is playing out here.  It *is* an “incomplete thought,” after all.

While Cloud adoption in certain scenarios may certainly offer tremendous agility and in some cases outright cost savings, one must ask if the cult of personality surrounding Cloud, especially in the public sector, is not unduly influenced by pressing macroeconomic conditions, confused applications of ROI across the dozens of use cases enveloped by a single term, and the same sort of political expediency described above.

For all of its many benefits, Cloud presents just as many (if not more) challenges, most of them stemming from problems we’ve failed to solve for decades.  Cloud is a convenient reason to leap forward whilst refusing to look from whence we are jumping; we’re not necessarily solving problems, we’re “transforming” them.

As with much disruptive innovation, the timing and intersection of technology, religion, culture, economics and politics can mean the difference between boom and bust.

In the case of Cloud, I’d suggest that the collision space provides the proverbial perfect storm; the hyping of Cloud Computing is largely premature, a convenient scape-horse to which we are hitching our cost-laden IT wagons.  The momentous interest surrounding Cloud in the Public Sector sounds eerily similar to the ethanol scenario above.  Are we so wrapped around the axle with Cloud Computing that we’re blinding ourselves to fundamentally better ways of solving the problems we have?

The danger, of course, is that while the federal dollars could help renewable-energy companies survive the recession, they could also prop up existing technologies that would not be competitive in an open market.  Not only could the federal spending support uneconomical energy sources (as has been the case with ethanol), but the resulting backlash could discourage policy makers, investors, and the public from embracing newer, more efficient technologies.  

Putting on my devil’s advocate hat, I have to ask: “Are we ignoring potentially better solutions to our problems?”  Certainly we’re seeing a definite spike in the punctuated equilibrium of IT’s evolution, but at this point one might argue it’s supply-driven demand.  Is Cloud Computing really the answer to our problems, or a fantastic and convenient way of treating the symptoms?

You can’t swing a dead cat in Washington without somebody talking about moving something to the Cloud.   On the one hand, it’s fantastic to see Government think outside the box, but what happens when the box collapses?

The movie will play out and we’ll have to wait and see whether the horsepower delivered by Cloud is more analogous to ethanol — a reformulated version of gasoline that doesn’t really deliver on its promises, but that we’re stuck with — or whether we’ll see the equivalent of the IT hydrogen fuel cell instead.

Time will tell.  What do you think? 

/Hoff

JERICHO FORUM AND CLOUD SECURITY ALLIANCE JOIN FORCES TO ADDRESS CLOUD COMPUTING SECURITY

May 27th, 2009 6 comments

At the RSA conference I left the Cloud Security Alliance launch early in order to attend the Jericho Forum’s session on Cloud Computing.  It seems we haven’t solved the teleportation issue yet.  Maybe in the next draft…

We had a great session at the Jericho event, with Rich Mogull, Gunnar Peterson and me discussing Jericho’s COA and Cloud Cube work.  The conclusion of the discussion was ultimately that Jericho and the CSA should join forces.

Voila:

JERICHO FORUM AND CLOUD SECURITY ALLIANCE JOIN FORCES TO ADDRESS CLOUD COMPUTING SECURITY

London and San Francisco, 21 May 2009 – Jericho Forum, the high level independent security expert group, and the Cloud Security Alliance, a not-for-profit group of information security and cloud computing security leaders, announced today that they are working together to promote best practices for secure collaboration in the cloud.  Both groups have a single goal: to help business understand the opportunity posed by cloud computing and encourage common and secure cloud practices.     Within the framework of the new partnership, both groups will continue to provide practical guidance on how to operate securely in the cloud while actively aiming to align the outcomes of their work.  

“This is good news for the industry” said Adrian Seccombe, CISO and Senior Enterprise Information Architect at Eli Lilly and Jericho Forum board member.  “The Cloud represents a compelling opportunity to achieve more with less but at the same time presents considerable security challenges.  For business to get the most out of it, this new development must be addressed responsibly and with eyes fully open.  Working together we believe that the Cloud Security Alliance and Jericho Forum can bring clear leadership in this important area and dispel some of the hype and confusion stirred up in the cloud.”

"The Cloud represents a fundamental shift in computing with limitless potential.  Solving the new set of risk issues it introduces is a shared responsibility of cloud provider and customer alike," said Jim Reavis, Co-founder of the Cloud Security Alliance (CSA).  "The Jericho Forum has shown early leadership in articulating and addressing the de-perimeterisation concept.  We are proud to join forces with them to provide pragmatic guidance for safely leveraging the cloud today as well as a clear vision for a future of pervasive and secure cloud computing."

Jericho Forum has led the way for the last five years in how de-perimeterisation is tackled and, more recently, in developing secure collaborative architectures. Last year the group published a Collaboration Oriented Architectures framework presenting a set of design principles allowing businesses to protect themselves against the security challenges posed by increased collaboration and to realise the business potential offered by Web 2.0.  The Cloud Security Alliance has engaged noted and well-recognised experts within crucial areas such as governance, law, network security, audit, application security, storage, cryptography, virtualization and risk management to provide authoritative guidance on how to adopt cloud computing solutions securely.

Both groups recently published initial guidelines for cloud computing.   The Jericho Forum published a Cloud Cube Model designed to be an essential first tool to help business evaluate the risk and opportunity associated with moving into the cloud.  A video presentation of this is available on YouTube (see http://www.youtube.com/jerichoforum) and an accompanying Cloud Cube Model positioning paper is downloadable from the Jericho Forum Web site (http://www.opengroup.org/jericho/cloud_cube_model_v1.0.pdf).   At RSA in San Francisco, the Cloud Security Alliance announced its formation and published an inaugural whitepaper, “Guidance for Critical Areas of Focus in Cloud Computing”, downloadable from http://www.cloudsecurityalliance.org/guidance/.

About Jericho Forum

Jericho Forum is an international IT security thought-leadership group dedicated to defining ways to deliver effective IT security solutions that will match the increasing business demands for secure IT operations in our open, Internet-driven, globally networked world.  Members include many leading organisations from both the user and vendor community including IBM, Symantec, Boeing, AstraZeneca, Qualys, BP, Eli Lilly, KLM, Cap Gemini, Motorola and Hewlett Packard.  

Together their aim is to:

  • Drive and influence development of new architectures, inter-workable technology solutions, and implementation approaches for securing our de-perimeterizing world

  • Support development of open standards that will underpin these technology solutions.

A full list of member organisations can be seen at http://www.opengroup.org/jericho/memberCompany.htm.

About Cloud Security Alliance

The Cloud Security Alliance is a not-for-profit organization with a mission to promote the use of best practices for providing security assurance within Cloud Computing, and to provide education on the uses of Cloud Computing to help secure all other forms of computing. The Cloud Security Alliance is led by industry practitioners and supported by founding charter companies PGP Corporation, Qualys, Inc. and Zscaler, Inc. For further information, the Cloud Security Alliance website is www.cloudsecurityalliance.org.

It’s great to see things moving along.  Previously we also announced that the CSA and ISACA have joined forces to promote security best practices in Cloud Computing.

In case you’ve not seen it, we’re looking for volunteers to work on specific areas of the v2.0 guidance targeted for October, 2009.  You can also contribute your thoughts on the existing guidance via our CSA Google Group.

Video Interview – Hoff & Crosby: Who Should Secure Virtual Environments?

May 26th, 2009 No comments

Simon Crosby and I were interviewed by Mike Mimoso of SearchSecurity.com at the RSA conference.  This was after a panel at the America’s Growth Capital conference and prior to our debate which included Steve Herrod of VMware.

It’s a two-part video that got a bit munged when the cameraman let the tape run out about 1/2 way through 😉

Part 1 can be found here.

Part 2 can be found here.

Quick Bit: Virtual & Cloud Networking – Where It ISN’T Going…

May 26th, 2009 No comments

In my Four Horsemen presentation, I made reference to one of the challenges with how the networking function is being integrated into virtual environments.  I’ve gone on to highlight how this is exacerbated in Cloud networking, also.

Specifically, when it comes to understanding how the network plays in virtual and Cloud architectures, it’s not where the network *is* in the increasingly complex virtualized, converged and unified computing architectures, it’s where networking *isn’t.*

What do I mean by this?  Here’s a graphical representation that I built about a year ago.  It’s well out-of-date and overly-simplified, but you get the picture:

There’s networking at almost every substrate level — in the physical and virtual construct.  In our never-ending quest to balance performance, agility, resiliency and security, we’re ending up with a trade-off I call simplexity: the most complex simplicity in networking we’ve ever seen.   I wrote about this in a blog post last year titled “The Network Is the Computer…(Is the Network, Is the Computer…)”

If you take a look at some of the more recent blips to appear on the virtual and Cloud networking  radar, you’ll see examples such as:

This list is far from inclusive.  Yes, I know I’ve left off blade server manufacturers and other players like HP (ProCurve) and Juniper, etc., as well as ADC vendors like f5.  It’s not that I don’t appreciate their solutions; it’s just that I only have a couple of free cycles to write this, and the list above sits at the top of my stack.

I plan on writing in more detail about the impact some of these technologies are having on next generation datacenters and Cloud deployments, as it’s a really interesting subject for me coming from my background at Crossbeam.

The most startling differences are in the approach of either putting the networking (and all its attendant capabilities) back in the hands of the network folks or allowing the server/virtual server admins to continue to leverage their foothold in the space and manage the network as a component of the converged and virtualized solution as a whole.

My friend @aneel (Twitter) summed it up really well this morning when comparing the Blade Network Technologies VMready offering and the Cisco Nexus 1000v:

huh.. where cisco uses nx1kv to put net control more in hands of net ppl, bnt uses vmready to put it further in server/virt admin hands

Looking at just the small sampling of solutions above, we see the diversity in integrated networking, external fabrics, converged fabrics (including storage) and add-on network processors.

It’s going to be a wild ride kids.  Buckle up.

/Hoff

Google Gaffe – The Cloud Needs a Snuggie…Or a Wedgie

May 19th, 2009 No comments

By now you’ve undoubtedly heard that Google had a little operational hiccup.  I particularly enjoyed Craig Labovitz’s (Arbor) account of “The Great GoogleLapse.”

When a suite of services that account for a projected 5% of the entire Intertube’s traffic shits the bed, people pay attention.

Sometimes for the wrong reasons.

Conspiracy theories, rumors of the end of days and chants of “don’t trust the Cloud!” start to fly when operational issues such as the routing boo-boo that hit Google turn up.

The reality is that in the grand scheme of things, we should take the three salient points from this experience and move on:

  1. Cloud services — even those with the scale, maturity and operational track-record of Google — still depend on fundamentally weak, insecure and unstable infrastructure that is easy to screw up.
    This is the premise for my upcoming Black Hat talk titled “Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure.”
  2. You ought to have a Plan B. That may be difficult as it relates to Cloud-based SaaS application offerings and services which, by definition, tend to tie you to the platform/provider offering them.
  3. This isn’t going to stop anyone from moving to the Cloud.  It may give people pause and they may spend a few more cycles evaluating what Plan B might mean, but it also pushes the agendas of hybrid architectures like Google’s NaCl and client-side hypervisors for “off-line” Cloud goodness.  All in all, it’s a nice reminder, but Cloud goes on.

The economic lubricant provided by the Astro Glide that is Cloud is just too compelling. If someone hasn’t factored in potential widespread outages from single-sourced providers, shame on them; that’s poor risk assessment.

Yes, we’ve got lots of attendant issues to solve when it comes to Cloud.  Many of them, I have so soapboxed, are the same ones we’ve had for a long while.  To those of us who recognize the Internet Cloud for what it is, Google’s outage was simply an opportunity to order another Hoffachino.

What doesn’t kill us makes us…just as insecure and potentially unavailable due to some monkey pushing the wrong button as we’ve always been.

Besides, now we know that outsourcing your traffic to China is the sux0r.

So chill.  Learn from this.  Use it to form rational arguments about how to deal with this sort of thing when it does happen — because it’s going to again, just like it always has.  Remember?

Worst comes to worst, may I suggest one of these — it is the cure for all your woes anyway, right?

/Hoff

Incomplete Thought: Storage In the Cloud: Winds From the ATMOS(fear)

May 18th, 2009 1 comment

I never metadata I didn’t like…

I first heard about EMC’s ATMOS Cloud-optimized storage “product” months ago:

EMC Atmos is a multi-petabyte offering for information storage and distribution. If you are looking to build cloud storage, Atmos is the ideal offering, combining massive scalability with automated data placement to help you efficiently deliver content and information services anywhere in the world.

I had lunch with Dave Graham (@davegraham) from EMC a ways back and while he was tight-lipped, we discussed ATMOS in lofty, architectural terms.  I came away from our discussion with the notion that ATMOS was more of a platform and less of a product with a focus on managing not only stores of data, but also the context, metadata and policies surrounding it.  ATMOS tasted like a service provider play with a nod to very large enterprises who were looking to seriously trod down the path of consolidated and intelligent storage services.

I was really intrigued with the concept of ATMOS, especially when I learned that at least one of the people who works on the team developing it also contributed to the UC Berkeley project called OceanStore from 2005:

OceanStore is a global persistent data store designed to scale to billions of users. It provides a consistent, highly-available, and durable storage utility atop an infrastructure comprised of untrusted servers.

Any computer can join the infrastructure, contributing storage or providing local user access in exchange for economic compensation. Users need only subscribe to a single OceanStore service provider, although they may consume storage and bandwidth from many different providers. The providers automatically buy and sell capacity and coverage among themselves, transparently to the users. The utility model thus combines the resources from federated systems to provide a quality of service higher than that achievable by any single company.

OceanStore caches data promiscuously; any server may create a local replica of any data object. These local replicas provide faster access and robustness to network partitions. They also reduce network congestion by localizing access traffic.

Pretty cool stuff, right?  This just goes to show that plenty of smart people have been working on “Cloud Computing” for quite some time.

Ah, the ‘Storage Cloud.’

Now, while we’ve heard of and seen storage-as-a-service in many forms, including the Cloud, today I saw a really interesting article titled “EMC, AT&T open up Atmos-based cloud storage service:”

EMC Corp.’s Atmos object-based storage system is the basis for two cloud computing services launched today at EMC World 2009 — EMC Atmos onLine and AT&T’s Synaptic Storage as a Service.
EMC’s service coincides with a new feature within the Atmos Web services API that lets organizations with Atmos systems already on-premise “federate” data – move it across data storage clouds. In this case, they’ll be able to move data from their on-premise Atmos to an external Atmos computing cloud.

Boston’s Beth Israel Deaconess Medical Center is evaluating Atmos for its next-generation storage infrastructure, and storage architect Michael Passe said he plans to test the new federation capability.

Organizations without an internal Atmos system can also send data to Atmos onLine by writing applications to its APIs. This is different than commercial graphical user interface services such as EMC’s Mozy cloud computing backup service. “There is an API requirement, but we’re already seeing people doing integration” of new Web offerings for end users such as cloud computing backup and iSCSI connectivity, according to Mike Feinberg, senior vice president of the EMC Cloud Infrastructure Group. Data-loss prevention products from RSA, the security division of EMC, can also be used with Atmos to proactively identify confidential data such as social security numbers and keep them from being sent outside the user’s firewall.

AT&T is adding Synaptic Storage as a Service to its hosted networking and security offerings, claiming to overcome the data security worries many conservative storage customers have about storing data at a third-party data center.

The federation of data across storage clouds using APIs? Information cross-pollination and collaboration? Heavy, man.
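
As an aside, here’s roughly what “writing applications to its APIs” looks like in practice.  Caveat: this is a sketch of my own, and the endpoint, auth scheme and header names below are hypothetical stand-ins rather than ATMOS’s actual REST dialect; the point is simply that the object, its metadata and its policy hints travel together:

    # Illustrative sketch only -- hypothetical endpoint and headers, not EMC's API.
    import json
    import urllib.request

    STORAGE_ENDPOINT = "https://storage.example.com/rest/objects"  # hypothetical

    def put_object(payload, metadata, token):
        """Store an object with user metadata attached; return its new location/ID."""
        req = urllib.request.Request(STORAGE_ENDPOINT, data=payload, method="POST")
        req.add_header("Authorization", "Bearer " + token)  # hypothetical auth scheme
        req.add_header("Content-Type", "application/octet-stream")
        # Metadata rides along with the object so the platform can index it and
        # apply placement/retention/geo policy against it -- the "context" piece.
        req.add_header("x-object-meta", json.dumps(metadata))
        with urllib.request.urlopen(req) as resp:
            return resp.headers["Location"]

    object_id = put_object(
        b"...some blob of unstructured data...",
        {"department": "radiology", "retention": "7y", "geo": "US-only"},
        token="<credential issued by the storage cloud>",
    )

Federation, in this model, simply means the same object (and its attendant metadata and policy) remains addressable whether it lives on your internal deployment or on the provider’s external cloud.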

Take plays like Cisco’s UCS with VMware’s virtualization and stir in VN-Tag with DLP/ERM solutions and sit it on top of ATMOS…from an architecture perspective, you’ve got an amazing platform for service delivery that allows for some slick application of policy that is information centric.  Sure, getting this all to stick will take time, but these are issues we’re grappling with in our discussions related to portability of applications and information.

Settling Back Down to Earth

This brings up a really important set of discussions that I keep harping on as the cold winds of reality start to blow.

From a security perspective, storage is the moose on the table that nobody talks about.  In virtualized environments we’re interconnecting all our hosts to islands of centralized SANs and NAS.  We’re converging our data and storage networks via CNAs and unified fabrics.

In multi-tenant Cloud environments all our data ends up being stored similarly, with the trust that segregation and security are appropriately applied.  Ever wonder how storage architectures never designed to do these sorts of things at scale can actually do so securely? Whose responsibility is it to manage the security of these critical centerpieces of our evolving “centers of data”?
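
To make that segregation worry concrete, here’s a toy sketch (mine, not any vendor’s actual implementation) of the authorization check a multi-tenant store has to get right on every single data-path request:

    # Toy illustration, not any vendor's code: tenant segregation in a shared store
    # reduces to an authorization check that must hold on *every* request, at scale.
    from dataclasses import dataclass

    @dataclass
    class StoredObject:
        tenant_id: str
        data: bytes

    STORE = {}  # object_id -> StoredObject; one flat namespace shared by all tenants

    def read_object_naive(object_id):
        # DANGER: no tenant check. Anyone who can guess or enumerate an ID
        # reads another tenant's data.
        return STORE[object_id].data

    def read_object(object_id, caller_tenant):
        obj = STORE[object_id]
        if obj.tenant_id != caller_tenant:
            # Fail closed: segregation is only as strong as this check, enforced
            # consistently under load, failure and upgrades.
            raise PermissionError("object belongs to another tenant")
        return obj.data

The point isn’t the ten lines of code; it’s that storage architectures designed for single-owner SANs never had to make that check at all, let alone at Cloud scale.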

So besides my advice that security folks need to run out and get their CCIE certs, perhaps you ought to sign up for a storage security class, too.  You can also start by reading this excellent book by Himanshu Dwivedi titled “Securing Storage.”

What are YOU doing about securing storage in your enterprise or Cloud engagements?  If your answer is LUN masking, here’s four Excedrin; call me after the breach.

/Hoff

Security and the Cloud – What Does That Even Mean?

May 18th, 2009 1 comment

I was chatting with Pete Lindstrom this morning about how difficult it is to frame meaningful discussion around what security and Cloud Computing means.

In my Four Horsemen presentation I reflected on the same difficulty as it relates to security and virtualization.  I arrived at separating the discussion into three parts:

Securing virtualization refers to what we need to do in order to ensure the security of the underlying virtualization platform itself.

Virtualizing security refers to how we operationalize and virtualize security capabilities — those we already have and new, evolving solutions — in order to secure our virtualized resources.

Security via virtualization refers to the security benefits, above and beyond what we might expect from non-virtualized environments, that we gain through the deployment of virtualization.

In reality, we need to break down the notion of security and Cloud Computing into similar chunks.  The reason for this is that, much like in the virtualization realm, we’re struggling less with security technology solutions (as there really are few) and more with the operational, organizational and compliance issues that come with this new uncharted (or poorly charted) territory.

Further, it’s important that we separate the notion of offering security services from the Cloud as a platform from that of securing the Cloud as a platform…I’ve chatted about that previously.

Thus we need to understand what it means to secure — or have a provider secure — the underlying Cloud platform, how we can then draw from a collective catalog of compensating controls to apply security to our Cloud resources, and ultimately how we can achieve parity or even better security through Cloud Computing.

I find it disturbing that folks often have the opinion of me that I am anti-Cloud. That’s something I must obviously work on, but suffice it to say that I am incredibly passionate about Cloud Computing and about ensuring that we achieve an appropriate balance of security and survivability amidst its myriad opportunities.

To illustrate this, I offer the talking slide from my Frogs presentation of security benefits that Cloud presents to an organization as a forcing function as it thinks about embracing Cloud Computing.  I present this slide before the security issues slide.  Why?  Because I think Cloud can be harnessed as a catalyst for moving things forward in the security realm and used as a lever to get things done:

Looking at the list of benefits, they actually highlight what I think are the top three concerns organizations have with Cloud Computing.  I believe they revolve around understanding how Cloud services provide for the following:

  • Preserving confidentiality, integrity and availability
  • Maintaining appropriate levels of identity and access control
  • Ensuring appropriate audit and compliance capability

These aren’t exactly new problems.  They are difficult problems, especially when combined with new business models and technology, but ones we need to solve.  Cloud can help.

So, what does “securing the Cloud” mean and how do we approach discussing it?

I think the most rational approach is the one the Cloud Security Alliance is taking by framing the issues around the things that matter most, pointing out how these issues with which we are familiar are both similar and different when talking about Cloud Computing.  While others still argue with defining the Cloud, we’re busy trying to get in front of the issues we know we already have.

If you haven’t had a chance to take a look at the guidance, please do!  You can discuss it here on our Google Group.

In the meantime, ponder this: Valeo utilizing Google Apps across its 30,000 users. Funny, I remember talking about CapGemini and Google doing this very thing back in 2007: Google Makes Its Move To The Corporate Enterprise Desktop – Can It Do It Securely?

Check out some of the comments in that post. Crow, anyone?

/Hoff

Incomplete Thought: The Crushing Costs of Complying With Cloud Customer “Right To Audit” Clauses

May 14th, 2009 13 comments

As Cloud Computing continues to capture the hearts, minds and other assorted organs of business folk everywhere, the economics of outsourcing services to the Cloud come more and more into focus.  Here’s one element that I don’t think is being paid much attention, however*:

While most of the cost/benefit analysis is being discussed as it relates to the “consumer” side of Cloud, the providers themselves have an equally burgeoning issue surfacing as it relates to cost: satisfying right to audit clauses.

Almost all of the Cloud providers I have spoken to are being absolutely hammered by customers acting on the “right to audit” clauses in their contracts. This is a change in behavior.  Most customers have traditionally not acted on these clauses, treating them instead as contingency/insurance options.  With the uncertainty relating to confidentiality, integrity and availability of Cloud services, that is no longer the case.  Cloud providers continue to lament that they really, really want a standardized way of responding to these requests.**

These providers — IaaS, PaaS and especially SaaS — are having to staff up and spend considerable amounts of time, money and resources on satisfying these requests from customers.

When I negotiated contracts for outsourced services, I always required an RTA clause.  It was non-negotiable.  I also acted on them several times in response to an issue or request from an auditor/regulator.

If you aren’t writing these clauses into your contracts, you should be.  For those of you who have done so, good on you for being diligent.  To those providers who are eating it with the load this renders, I feel your pain but I fear it will only get worse.

/Hoff

* This WordPress theme makes indented captions look like quotes. This is a highlighted section written by me and is not a quote from someone else.  Sorry for any confusion.
** This is where/why Cloud providers should get involved with the Cloud Security Alliance — we can, as a community, facilitate both expectations and deliverables from both the consumer and provider perspective…

On the Draft NIST Working Definition Of Cloud Computing…

May 8th, 2009 6 comments

How many of you have seen the Draft NIST Working Definition of Cloud Computing?  It appears to have been presented to government CIOs at the recent Federal CIO Cloud Computing Summit in Washington DC last week.

I saw the draft NIST Working Definition of Cloud Computing shown below (copied from Reuven Cohen’s blog) about a month and a half ago, but until now I had not seen it presented in its entirety outside of the copy I was sent, and I didn’t know how/when it would be made “public,” so I didn’t blog directly about its content.

The reason I was happy to see it when I did was that I had just finished writing the draft of the Cloud Security Alliance Security Guidance for Critical Areas of Focus In Cloud Computing — specifically the section on Cloud architecture — and found that there was a very good alignment between our two independent works (much like with the Jericho Cloud Cube model).

In fact, you’ll see that I liked the definitions for the SPI model components so much, I used them and directly credited  Peter Mell from NIST, one of the authors of the work.

I sent a very early draft of my work along with some feedback to Peter on some of the definitions, specifically since I noted some things I did not fully agree with in the deployment models section. The “community” cloud seems to me to be an abstraction or application of private clouds; I have a “managed cloud” instead.  Ah, more fuel for good discussion.

I had hoped we could discuss them prior to publishing either of the documents, but it seems we passed in the ether.

At any rate, here’s the draft from our wily Canadian friend:

4-24-09

Peter Mell and Tim Grance – National Institute of Standards and Technology, Information Technology Laboratory

Note 1: Cloud computing is still an evolving paradigm. Its definitions, use cases, underlying technologies, issues, risks, and benefits will be refined in a spirited debate by the public and private sectors. These definitions, attributes, and characteristics will evolve and change over time.

Note 2: The cloud computing industry represents a large ecosystem of many models, vendors, and market niches. This definition attempts to encompass all of the various cloud approaches.

Definition of Cloud Computing:

Cloud computing is a pay-per-use model for enabling available, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is comprised of five key characteristics, three delivery models, and four deployment models.

Key Characteristics:

  • On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed without requiring human interaction with each service’s provider.
  • Ubiquitous network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
  • Location independent resource pooling. The provider’s computing resources are pooled to serve all consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. The customer generally has no control or knowledge over the exact location of the provided resources. Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
  • Rapid elasticity. Capabilities can be rapidly and elastically provisioned to quickly scale up and rapidly released to quickly scale down. To the consumer, the capabilities available for rent often appear to be infinite and can be purchased in any quantity at any time.
  • Pay per use. Capabilities are charged using a metered, fee-for-service, or advertising based billing model to promote optimization of resource use. Examples are measuring the storage, bandwidth, and computing resources consumed and charging for the number of active user accounts per month. Clouds within an organization accrue cost between business units and may or may not use actual currency.

Note: Cloud software takes full advantage of the cloud paradigm by being service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability.

Delivery Models:

  • Cloud Software as a Service (SaaS). The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure and accessible from various client devices through a thin client interface such as a Web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
  • Cloud Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., java, python, .Net). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage, but the consumer has control over the deployed applications and possibly application hosting environment configurations.
  • Cloud Infrastructure as a Service (IaaS). The capability provided to the consumer is to rent processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).

Deployment Models:

  • Private cloud. The cloud infrastructure is owned or leased by a single organization and is operated solely for that organization.
  • Community cloud. The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations).
  • Public cloud. The cloud infrastructure is owned by an organization selling cloud services to the general public or to a large industry group.
  • Hybrid cloud. The cloud infrastructure is a composition of two or more clouds (internal, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting).

Each deployment model instance has one of two types: internal or external. Internal clouds reside within an organization’s network security perimeter and external clouds reside outside the same perimeter.

Now, Reuven Cohen mentioned on his blog:

In creating this definition, NIST consulted extensively with the private sector including a wide range of vendors, consultants and industry pundants (sic!) including your (sic!) truly. Below is the draft NIST working definition of Cloud Computing. I should note, this definition is a work in progress and therefore is open to public ratification & comment. The initial feedback was very positive from the federal CIO’s who were presented it yesterday in DC. Baring any last minute lobbying I doubt we’ll see many more major revisions.

…which is interesting, because for being “…open to public ratification & comment,” I can’t seem to find it anywhere except for references to its creation as a deliverable in FY09 in a presentation from December, 2008.  I searched NIST’s site, but perhaps I’m just having a bad search day.

Clearly at least I have a couple of comments.  I could send them to Peter directly, but I’d rather discuss them openly if that’s appropriate and there is a forum to do so.  At this rate, it looks as though it may be too late, however.

/Hoff

The UFC and UCS: Cisco Is Brock Lesnar

March 17th, 2009 7 comments

My favorite sport is mixed martial arts (MMA).

MMA is a combination of various arts and features athletes who come from a variety of backgrounds and combine the many disciplines they bring to the ring.

You’ve got wrestlers, boxers, kickboxers, muay thai practitioners, jiu jitsu artists, judoka, grapplers, freestyle fighters and even the odd karate kid.

Mixed martial artists are often better versed in one style/discipline than another given their strengths and background but as the sport has evolved, not being well-rounded means you run the risk of being overwhelmed when paired against an opponent who can knock you out, take you down, ground and pound you, submit you or wrestle/grind you into oblivion.  

The UFC (Ultimate Fighting Championship) is an organization which has driven the popularity and mainstream adoption of MMA as a recognizable and sanctioned sport and has given rise to some of the most notable MMA match-ups in recent history.

One of those match-ups included the introduction of Brock Lesnar — an extremely popular “professional” wrestler — who has made the  transition to MMA.  It should be noted that Brock Lesnar is an aberration of nature.  He is an absolute monster:  6’3″ and 276 pounds.  He is literally a wall of muscle, a veritable 800 pound gorilla.

In his first match, he was paired up against a veteran in MMA and former heavyweight champion, Frank Mir, who is an amazing grappler known for vicious submissions.  In fact, Mir submitted Lesnar with a nasty kneebar, as Lesnar’s ground game had not yet evolved.  This is simply part of the process.  Lesnar’s second fight was against another veteran, Heath Herring, whom he manhandled to victory.  Following the Herring fight, Lesnar went on to fight one of the legends of the sport and reigning heavyweight champion, Randy Couture.

Lesnar’s skills had obviously progressed and he looked great against Couture and ultimately won by a TKO.

So what the hell does the UFC have to do with the Unified Computing System (UCS)?

Cisco UCS Components

Cisco is to UCS as Lesnar is to the UFC.

Everyone wrote Lesnar off after he entered the MMA world and especially after the first stumble against an industry veteran.

Imagine the surprise when his mass, athleticism, strength, intelligence and tenacity, combined with a well-versed strategy, paid off: as his skills progressed, he became an incredible force to be reckoned with in the MMA world.  Oh, did I mention that he’s the World Heavyweight Champion now?

Cisco comes to the (datacenter) cage much as Lesnar did: an 800-pound gorilla incredibly well-versed in one set of disciplines, looking to expand into others and become just as versatile and skilled in a remarkably short period of time.  Cisco comes to win, not compete. Yes, Lesnar stumbled in his first outing.  Now he’s the World Heavyweight Champion.  Cisco will have their hiccups, too.

The first elements of UCS have emerged.  The solution suite, with the help of partners, will refine the strategy and broaden the offerings into a much more well-rounded approach.  Some of Cisco’s competitors who are bristling at Cisco’s UCS vision/strategy are quick to criticize them and reduce UCS to simply an ill-executed move “…entering the server market.”

I’ve stated my opinions on this short-sighted perspective:

Yes, yes. We’ve talked about this before here. Cisco is introducing a blade chassis that includes compute capabilities (heretofore referred to as a ‘blade server.’)  It also includes networking, storage and virtualization all wrapped up in a tidy bundle.

So while that looks like a blade server (quack!), walks like a blade server (quack! quack!), that doesn’t mean it’s going to be positioned, talked about or sold like a blade server (quack! quack! quack!)
What’s my point?  What Cisco is building is just another building block of virtualized INFRASTRUCTURE. Necessary infrastructure to ensure control and relevance as their customers’ networks morph.

My point is that what Cisco is building is the natural by-product of converged technologies with an approach that deserves attention.  It *is* unified computing.  It’s a solution that includes integrated capabilities that customers would otherwise be responsible for piecing together themselves…and that’s one of the biggest problems we have with disruptive innovation today: integration.

The knee-jerk dismissals witnessed since yesterday from competitors downplaying the impact of UCS are very similar to how many people reacted to Lesnar: they suggested he was one-dimensional and had no core competencies beyond wrestling, discounting his ability to rapidly improve and overwhelm the competition.

Everyone seems to be focused on the 5100 — the “blade server” — and not the solution suite of which it is a single piece; a piece of a very innovative ecosystem, some Cisco, some not.  Don’t get lost in the “but it’s just a blade server and HP/IBM/Dell can do that” diatribe.  It’s the bigger picture that counts.

The 5100 is simply that — one very important piece of the evolving palette of tools which offer the promise of an integrated solution to a radically complex set of problems.

Is it complete?  Is it perfect?  Do we have all the details? Can they pull it off themselves?  The answer right now is a simple “No.”  But it doesn’t have to be.  It never has.

There’s a lot of work to do, but much like a training camp for MMA, that’s why you bring in the best partners with which to train and improve and ultimately you get to the next level.

All I know is that I’d hate to be in the Octagon with Cisco just like I would with Lesnar.

/Hoff