Archive for the ‘Cloud Computing’ Category

Private Clouds: Even A Blind Squirrel Finds A Nut Once In A While

April 12th, 2009 6 comments

Over the last month it’s been gratifying to watch the “mainstream” IT press provide more substantive coverage of the emergence and acceptance of Private Clouds, after their relatively dismissive stance previously.

I think this has a lot to do with the stabilization of definitions and applications of Cloud Computing and its service variants, as well as the realities of Cloud adoption in large enterprises and the timing it involves.

To me, Private Clouds represent the natural progression toward wider-scale Cloud adoption for larger enterprises with sunk costs and investments in existing infrastructure, and they have always meant more than simply “Amazon-izing your Intranet.”  Private Clouds offer larger enterprises a logical, sustainable and intelligent path forward from the virtualization and automation initiatives they already have in play.

I think my definition a few months ago was still a little rough, but it gets the noodle churning:

Private clouds are about extending the enterprise to leverage infrastructure that makes use of cloud computing capabilities and is not (only) about internally locating the resources used to provide service.  It’s also not an all-or-nothing proposition.

It occurs to me that private clouds make a ton of sense as an enabler for enterprises who want to take advantage of cloud computing for any of the oft-cited reasons, but are loath to (or unable to) surrender their infrastructure and applications without sufficient control.  Private clouds mean that an enterprise can decide how, and how much, of the infrastructure can/should be maintained as a non-cloud operational concern versus how much can benefit from the cloud.

Private clouds make a ton of sense; they provide the economic benefits of outsourced, scalable infrastructure that does not require capital outlay, the needed control over that infrastructure combined with the ability to replicate existing topologies and platforms, and ultimately the portability of applications and workflow.  These capabilities may eliminate the re-write and/or re-engineering of applications that is often required when moving to a typical IaaS (Infrastructure as a Service) player such as Amazon.

From a security perspective — which is very much my focus — private clouds provide me with a way of articulating and expressing the value of cloud computing while still enabling me to manage risk to an acceptable level as chartered by my mandate.

Here are some of the blog entries I’ve written on Private Clouds. I go into reasonable detail in my “Frogs Who Desired a King” Cloud Security presentation.  James Urquhart’s got some doozies, too.  Here’s a great one.  Chuck Hollis has been pretty vocal on the subject.

My Google Reader has no less than 10 articles on Private Clouds in the last day or so including an interesting one featuring GE’s initiative over the next three years.

I hope the dialog continues and we can continue to make headway in arriving at a common language and set of use cases, but as I discovered a couple of weeks ago in my post titled “The Vagaries Of Cloudcabulary: Why Public, Private, Internal & External Definitions Don’t Work…”, the definition of Private Cloud is the most variable of all and promotes the most contentious of debates:

The HPPIE Model

Private Clouds seem to validate the promise of what the real-time infrastructure/adaptive enterprise visions painted many years ago, with the potential for even more scale and control.  The intersection of virtualization, automation, Cloud, and converged and unified computing is making sure of that.

/Hoff


Does Cloud Infrastructure Matter? You Bet Your Ass(ets) It Does!

April 8th, 2009 5 comments

James Urquhart wrote a great blog today titled “The new cloud infrastructure: do you care?” in which he says:

…if you are a consumer of cloud-based resources, the mantra has long been that you can simply deploy or consume your applications/services without any regard to the infrastructure on which they are being hosted. A very cool concept for an application developer, to be sure, but I think it’s a mistake to ignore what lies under the hood.

At the very least, the future of hardware ought to touch the inner geek in all of us.

What is happening in data center infrastructure is a complete rethinking of the architectures utilized to deliver online services, from the overall data center architectures all the way down to the very components that serve the “big four” elements of the data center: facilities, servers, storage and networking.

Amen!

While James’ post focused mostly on how the underlying compute platforms are changing such as his illustration with Cisco’s UCS, Rackable’s C2 and Google’s custom machines, this trend will expand up and down the infrastructure stack.

From a technologist or architect’s perspective, what powers the underlying Cloud infrastructure is really important. As James alludes to, issues of interoperability can and will be impacted by the underlying platforms upon which the abstracted application resources sit.  This may sound contentious from the PaaS and SaaS perspective, but not so from that of IaaS; after all, the “I” in IaaS stands for infrastructure.

I made this point recently from a security perspective in my blog post titled “The Cloud Is a Fickle Mistress: DDoS&M…”  wherein I said:

We’re told we shouldn’t have to worry about the underlying infrastructure with Cloud, that it’s abstracted and someone else’s problem to manage…until it’s not.

…or here in Cloud Catastrophes (Cloudtastophes?) Caused by Clueless Caretakers?:

The abstraction of infrastructure and democratization of applications and data that Cloud Computing services can bring does not mean that all services are created equal.  It does not make our services or information more secure (or less for that matter.)  Just because a vendor brands themselves as a “Cloud” provider does not mean that “their” infrastructure is any more implicitly reliable, stable or resilient than traditional infrastructure or that proper enterprise architecture as it relates to people, process and technology is in place.  How the infrastructure is built and maintained is just as important as ever.

What we’ll also see is that even though we’re not supposed to care what our Cloud providers’ infrastructure is powered by and how, we absolutely will in the long term and the vendors know it.   This is where people start to freak about how standards and consolidation will kill innovation in the space but it’s also where the realities of running a business come crashing down on early adopters. Large enterprises will move to providers who can demonstrate that their services are solid by way of co-branding with the reputation of the providers of infrastructure coupled with the compliance to “standards.”

Remember the “Cisco Powered Network” program?  How about a “Cisco Powered Cloud”?  See how GoGrid advertises that its load balancers are F5?

In the long term, much as the Capital One credit card commercials challenge you to ask the company providing your credit card “What’s in your wallet?”, you can expect to start asking the same question about your Cloud providers’ offerings.

So, depending on what you do and what you need, your choice of provider — and what sits under their hood — may matter a ton.

/Hoff


Google’s Updated App Engine – “Secure” Data Connector: Your Firewall Means Nothing (Again)

April 8th, 2009 3 comments

This will be a quickie.  

This is such a juicy topic and really merits a ton more than just a mention, but unfortunately, I’m out of time.

Google’s latest updates to the Google App Engine Platform have all sorts of interesting functionality:

  • Access to firewalled data: grant policy-controlled access to your data behind the firewall.
  • Cron support: schedule tasks like report generation or DB clean-up at an interval of your choosing.
  • Database import: move GBs of data easily into your App Engine app. Matching export capabilities are coming soon, hopefully within a month.

To me, the most interesting is the first item above: Google Apps’ access to information behind corporate firewalls.*

From a Cloud interoperability and integration perspective, this is fantastic.  From a security perspective, I am as intrigued and concerned as I am about anytime I hear “access internal data from an external service.”

The capability to gain access to internal data is provided by the Secure Data Connector.  You can find reasonably detailed information about it here.

Here’s how it works:

SDC forms an encrypted connection between your data and Google Apps. SDC lets you control who in your domain can access which resources using Google Apps.

SDC works with Google Apps to provide data connectivity and enable IT administrators to control the data and services that are accessible in Google Apps. With SDC, you can build private gadgets, spreadsheets, and applications that interact with your existing corporate systems.

The following illustration shows SDC connection components.

Secure Data Connector Components

The steps are:

  1. Google Apps forwards authorized data requests from users who are within the Google Apps domain to the Google tunnel protocol servers.
  2. The tunnel servers validate that a user is authorized to make the request to the specified resource. Google tunnel servers are connected by an encrypted tunnel to SDC, which runs within a company’s internal network.
  3. The tunnel protocol allows SDC to connect to a Google tunnel server, authenticate, and encrypt the data that flows across the Internet.
  4. SDC uses resource rules to validate if a user is authorized to make a request to a specified resource.
  5. An optional intranet firewall can be used to provide extra network security.
  6. SDC performs a network request to the specified resource or services.
  7. The service verifies the signed requests and if the user is authorized, returns the data.

From a security perspective, access control and confidentiality are provided by filters, resource rules, and SSL/TLS encrypted tunnels.  We’ll take this apart in detail (as time permits) later.
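
To make that access-control step easier to picture, here’s a minimal sketch in Python of the kind of default-deny resource-rule check described in step 4. This is entirely my own illustration, not Google’s SDC code or rule syntax; the field names, users and URLs are hypothetical.

# Hypothetical sketch of an SDC-style resource-rule check (illustrative only;
# not Google's actual Secure Data Connector code or rule format).
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class ResourceRule:
    allowed_users: set   # users in the Google Apps domain permitted by this rule
    url_pattern: str     # internal resource the rule governs, e.g. "http://intranet/reports/*"

def is_request_authorized(user: str, requested_url: str, rules: list) -> bool:
    """Default-deny: a request is allowed only if some rule explicitly
    permits this user to reach this URL."""
    for rule in rules:
        if user in rule.allowed_users and fnmatch(requested_url, rule.url_pattern):
            return True
    return False

# Example: only two named users may reach the internal reporting application.
rules = [ResourceRule(allowed_users={"alice@example.com", "bob@example.com"},
                      url_pattern="http://intranet.example.com/reports/*")]

print(is_request_authorized("alice@example.com",
                            "http://intranet.example.com/reports/q1", rules))   # True
print(is_request_authorized("mallory@example.com",
                            "http://intranet.example.com/reports/q1", rules))   # False

The point of the sketch is simply that the filtering happens on your side of the tunnel; whether the rules are written tightly enough is still your problem.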

In the meantime, here’s a link to the SDC Security guide for developers.

…and no, your firewall likely won’t help save you (again).

At least I won’t be bored now.

/Hoff

* The database import/export is profound also. Craig Balding followed up with his OAuth-focused commentary here.


The Vagaries Of Cloudcabulary: Why Public, Private, Internal & External Definitions Don’t Work…

April 5th, 2009 19 comments

Updated again at 13:43 EST – Please see bottom of post

Hybrid, Public, Private, Internal and External.

The HPPIE model; you’ve heard these terms used to describe and define the various types of Cloud.

What’s always disturbed me about using these terms singularly is that, separately, they actually address scenarios that are orthogonal, and yet they are often used to compare and contrast one service/offering to another.

The short story: Hybrid, Public, and Private denote ownership and governance whilst Internal and External denote location.

The longer story: Hybrid, Public, Private, Internal and External seek to summarily describe no less than five different issues and categorize a cloud service/offering into one dumbed-down term for convenience.  In terms of a Cloud service/offering, using one of the HPPIE labels actually attempts to address in one word:

  1. Who manages it
  2. Who owns it
  3. Where it’s located
  4. Who has access to it
  5. How it’s accessed

That’s a pretty tall order. I know we’re aiming for simplicity in description by using a label analogous to LAN, WAN, Intranet or Internet, but unfortunately what we’re often describing here is evolving to be much more complex.

Don’t get me wrong, I’m not aiming for precision but rather accuracy.  I don’t find that these labels do a good enough job when used by themselves.

Further, you’ll find most people using the service deployment models (Hybrid, Public, Private) in the absence of the service delivery models (SPI – SaaS/PaaS/IaaS) while at the same time intertwining the location of the asset (internal, external), usually relative to a perimeter firewall (more on this in another post.)

This really lends itself to confusion.

I’m not looking to rename the HPPIE terms.  I am looking to use them more accurately.

Here’s a contentious example.  I maintain you can have an IaaS service that is Public and Internal.  WHAT!?  HOW!?

Let’s take a look at a summary table I built to think through use cases by looking at the three service deployment models (Hybrid, Public and Private):

The HPPIE Table

THIS TABLE IS DEPRECATED – PLEASE SEE UPDATE BELOW!

The blue separators in the table designate derivative service offerings and not just a simple and/or; they represent an actual branching of the offering.

Back to my contentious example wherein I maintain you can have an IaaS offering which is Public and yet also Internal. Again, how?

Remember how I said “Hybrid, Public, and Private denote ownership and governance whilst Internal and External denote location?” That location refers to both the physical location of the asset as well as the logical location relative to an organization’s management umbrella which includes operations, security, compliance, etc.

Thus, if you look at a managed infrastructure service (name one) that utilizes Cloud Computing principles, there’s no reason a third-party MSP could not deploy said service internally, on customer-premises equipment that the third party owns, operates and manages on behalf of an organization, with the scale and pay-by-use model of Cloud, including access from trusted OR untrusted sources, is there?

Some might call it a perversion of the term “Public.” I highlight it to illustrate that “Public” is a crappy word for the example, because just as it’s valid in this example, it’s equally valid to suggest that Amazon’s EC2 can also share the “Public” moniker, despite being External.

In the same light, one can easily derive examples of SaaS:Private:Internal offerings…You see my problem with these terms?
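
To make the orthogonality concrete, here’s a small illustrative sketch (my own, in Python, with made-up offering names) that treats the delivery model (SPI), the deployment model (ownership/governance) and the location as independent attributes. Both the IaaS:Public:Internal and SaaS:Private:Internal cases fall out naturally once the attributes are decoupled:

# Illustrative sketch: the HPPIE labels collapse several orthogonal attributes.
# Treating them as independent fields makes the "odd" combinations expressible.
from dataclasses import dataclass
from enum import Enum

class DeliveryModel(Enum):      # the SPI axis
    SAAS = "SaaS"
    PAAS = "PaaS"
    IAAS = "IaaS"

class DeploymentModel(Enum):    # ownership/governance axis
    PUBLIC = "Public"
    PRIVATE = "Private"
    HYBRID = "Hybrid"

class Location(Enum):           # physical/logical location axis
    INTERNAL = "Internal"
    EXTERNAL = "External"

@dataclass
class CloudOffering:
    name: str
    delivery: DeliveryModel
    deployment: DeploymentModel
    location: Location
    managed_by: str             # the operator: the attribute a single label tends to hide

# The "contentious" example: a third-party-managed IaaS on customer premises.
managed_iaas = CloudOffering("MSP-run utility infrastructure", DeliveryModel.IAAS,
                             DeploymentModel.PUBLIC, Location.INTERNAL,
                             managed_by="third-party MSP")

# Amazon EC2 shares the "Public" label despite sitting at a different location.
ec2 = CloudOffering("Amazon EC2", DeliveryModel.IAAS, DeploymentModel.PUBLIC,
                    Location.EXTERNAL, managed_by="Amazon")

for o in (managed_iaas, ec2):
    print(f"{o.name}: {o.delivery.value}:{o.deployment.value}:{o.location.value}, "
          f"managed by {o.managed_by}")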

Moreover, the “consumer” focus of the traditional HPPIE models means that using broad terms like these generally implies that people are describing access to a service/offering by a human operating a web browser, and does not take into account access to services/offerings via things like APIs or programmatic interfaces.

This is a little goofy, too. I don’t generally use a web browser (directly) to access Amazon’s S3 Storage-as-a-Service offering, just like I don’t use a web browser to make API calls in Google Gears.  Other non-interactive elements of the AppStack do that.

I don’t expect people to stop using these dumbed down definitions, but this is why it makes me nuts when people compare “Private” Cloud offerings with “Internal” ones. It’s like comparing apples and buffalo.

What I want is for people to at least not include Internal and External as Cloud models, but rather to use them as parameters, as I have in the table above.

Does this make any sense to you?


Update: In a great set of discussions regarding this on Twitter with @jamesurquhart from Cisco and @zhenjl from VMware, @zhenjl came up with a really poignant solution to the issues surrounding the redefinition of Public Cloud and its ability to be deployed “internally.”  His idea, which highlights the “third-party managed” example I gave, is to add a new category/class called “Managed,” which is essentially the example I highlighted in boldface above:

Managed Clouds

This means that we would modify the table above to look more like this (updated again based on feedback on Twitter & comments) — ultimately revised as part of the work I did for the Cloud Security Alliance in alignment with the NIST model, abandoning the ‘Managed’ section:

Revised Model

This preserves the notion of how people generally define “Public” clouds but also makes a critical distinction between what amounts to managed Cloud services which are provided by third parties using infrastructure/services located on-premise. It also still allows for the notion of Private Clouds which are distinct.

Thoughts?


The Cloud Is a Fickle Mistress: DDoS&M…

April 2nd, 2009 6 comments

It’s interesting to see how people react when they are reminded that the “Cloud” still depends upon much of the same infrastructure and underlying protocols that we have been using for years.

BGP, DNS, VPNs, routers, switches, firewalls…

While it’s fun to talk about new attack vectors and sexy exploits, it’s the oldies and goodies that will come back to haunt us:

Simplexity

Building more and more of our businesses’ ability to remain a going concern on infrastructure that was never designed to support it is a scary proposition.  We’re certainly being afforded more opportunity to fix some of these problems as the technology improves, but it’s a patching solution to an endemic problem, I’m afraid.  We’ve got two ways to look at Cloud:

  • Skipping over the problems we have and “fixing” crappy infrastructure and applications by simply adding mobility and orchestration to move around an issue, or
  • Actually starting to use Cloud as a forcing function to fundamentally change the way we think about, architect, deploy and manage our computing capabilities in a more resilient, reliable and secure fashion

If I were a betting man…

Remember that just because it’s in the “Cloud” doesn’t mean someone’s sprinkled magic invincibility dust on your AppStack…

That web service still has IP addresses and open sockets. It still gets transported over MANY levels of shared infrastructure, from the telcos to the DNS infrastructure… you’re always at someone else’s mercy.

Dan Kaminsky has done a fabulous job reminding us of that.

A more poignant reminder of our dependency on the Same Old Stuff™ is the recent DDoS attack against Cloud provider GoGrid:

ONGOING DDoS ATTACK

Our network is currently the target of a large, distributed DDoS attack that began on Monday afternoon.   We took action all day yesterday to mitigate the impact of the attack, and its targets, so that we could restore service to GoGrid customers.  Things were stabilized by 4 PM PDT and most customer servers were back online, although some of you continued to experience intermittent loss in network connectivity.

This is an unfortunate thing.  It’s also a good illustration of the sorts of things you ought to ask your Cloud service providers about.  With whom do they peer? What is their bandwidth? How many datacenters do they have and where? What DoS/DDoS countermeasures do they have in place? Have they actually dealt with this before? Do they drill disaster scenarios like this?

We’re told we shouldn’t have to worry about the underlying infrastructure with Cloud, that it’s abstracted and someone else’s problem to manage…until it’s not.

This is where engineering, architecture and security meet the road.  Your provider’s ability to sustain an attack like this is critical.  Further, how you’ve designed your BCP/DR contingency plans is pretty important, too.  Until we get true portability/interoperability between Cloud providers, it’s still up to you to figure out how to make this all work.  Remember that when you’re assuming those TCO calculations accurately reflect reality.

Big providers like eBay, Amazon, and Microsoft invest huge sums of money and manpower to ensure they are as survivable as they can be during attacks like this.  Do you?  Does your Cloud Provider? How many do you have?

Again, even Amazon goes down.  At this point, it’s largely been operational issues on their end and not the result of a massive attack. Imagine, however, if someday it is.  What would that mean to you?

As more and more of our applications and information are moved from inside our networks to beyond the firewalls and exposed to a larger audience (or even co-mingled with others’ data) the need for innovation and advancement in security is only going to skyrocket to start to deal with many of these problems.

/Hoff


Meditating On the Manifesto: It’s Good To Be King…

March 29th, 2009 6 comments

By now you’ve heard of ManifestoGate, no?  If not, click on that link and read all about it as James Urquhart does a nice job summarizing it all.

In the face of all of this controversy, tonight Reuven Cohen twittered that the opencloudmanifesto.org website was live.

So I moseyed over to take a look at the promised list of supporters of said manifesto, since I’ve been waiting for a definition of the “we” who developed/support it.

It’s a very interesting list.

There are lots of players. Some of them are just starting to bring their Cloud visions forward.

But clearly there are some noticeable absences, namely Google, Microsoft, Salesforce and Amazon — the four largest established Cloud players in the Cloudusphere.

I think it’s been said in so many words before, but let me make it perfectly clear why, despite the rhetoric both acute and fluffy from both sides, these four Cloud giants aren’t listed as supporters.

Here are the listed principles of the Open Cloud from the manifesto itself:

Of course, many clouds will continue to be different in a number of important ways, providing unique value for organizations. It is not our intention to define standards for every capability in the cloud and create a single homogeneous cloud environment.

Rather, as cloud computing matures, there are several key principles that must be followed to ensure the cloud is open and delivers the choice, flexibility and agility organizations demand:

1. Cloud providers must work together to ensure that the challenges to cloud adoption (security, integration, portability, interoperability, governance/management, metering/monitoring) are addressed through open collaboration and the appropriate use of standards.

2. Cloud providers must not use their market position to lock customers into their particular platforms and limit their choice of providers.

3. Cloud providers must use and adopt existing standards wherever appropriate. The IT industry has invested heavily in existing standards and standards organizations; there is no need to duplicate or reinvent them.

4. When new standards (or adjustments to existing standards) are needed, we must be judicious and pragmatic to avoid creating too many standards. We must ensure that standards promote innovation and do not inhibit it.

5. Any community effort around the open cloud should be driven by customer needs, not merely the technical needs of cloud providers, and should be tested or verified against real customer requirements.

6. Cloud computing standards organizations, advocacy groups, and communities should work together and stay coordinated, making sure that efforts do not conflict or overlap.

Fact is, from a customer’s point of view, I find all of these principles agreeable, and despite its being called a manifesto, I could see using it as a nice set of discussion points with which to chat about my needs from the Cloud.  It’s interesting to note that, given the audience as stated in the manifesto, the only supporters listed are vendors and not “customers.”

I think the more discussion we have on the matter, the better.  Personally, I grok and support the principles herein.  I’m sure this point will be missed as I play devil’s advocate, but so be it.  

However, from the “nice theory, wrong universe” vendor’s point-of-view, why/how could I sign it?

See #2 above?  It relates to exactly the point made by James when he said “Those who have publicly stated that they won’t sign have the most to lose.”

Yes they do.  And the last time I looked, all four of them have notions of what the Cloud ought to be, and how and to what degree it ought to interoperate, and with whom.

I certainly expect they will leverage every ounce of “lock-in” (er, “enhanced customer experience”) through a tightly-coupled relationship they can muster, and capitalize on the de facto versus de jure “standardization” that naturally occurs in a free market when you’re in the top 4.  Someone telling me I ought to sign a document to the contrary would likely not get offered a free coffee at the company cafe.

Trying to socialize (in every meaning of the word) goodness works wonders if you’re a kibbutz.  With billions up for grabs in a technology land-grab, not so much.

This is where the ever-hopeful consumer, the idealist integrator, and the vendor-realist personalities in me begin to battle.

Oh, you should hear the voices in my head…

/Hoff


Incomplete Thought: Looking At An “Open & Interoperable Cloud” Through Azure-Colored Glasses

March 29th, 2009 4 comments

As with the others in my series of “incomplete thoughts,” this one is focused on an issue that has been banging around in my skull for a few days.  I’m not sure how to properly articulate my thought completely, so I throw this up for consideration, looking for your discussion to clarify my thinking.

You may have heard of the little dust-up involving Microsoft and the folk(s) behind the Open Cloud Manifesto. The drama here reminds me of the Dallas episode where everyone tried to guess who shot J.R., and it’s really not the focus of this post.  I use it here for color.

What is the focus of this post is the notion of “open(ness),” portability and interoperability as it relates to Cloud Computing — or more specifically how these terms relate to the infrastructure and enabling platforms of Cloud Computing solution providers.

I put “openness” in quotes because definitionally, there are as many representations of this term as there are for “Cloud,” which is a big part of the problem.  Just to be fair, before you start thinking I’m unduly picking on Microsoft, I’m not. I challenged VMware on the same issues.

So here’s my question as it relates to Microsoft’s strategy regarding Azure, given an excerpt from Microsoft’s Steven Martin as he described his employer’s stance on Cloud in regard to the Cloudifesto debacle above in his blog post titled “Moving Toward an Open Process On Cloud Computing Interoperability”:

From the moment we kicked off our cloud computing effort, openness and interop stood at the forefront. As those who are using it will tell you, the  Azure Services Platform is an open and flexible platform that is defined by web addressability, SOAP, XML, and REST.  Our vision in taking this approach was to ensure that the programming model was extensible and that the individual services could be used in conjunction with applications and infrastructure that ran on both Microsoft and non-Microsoft stacks. 

What got me going was this ZDNet interview by Mary Jo Foley, wherein she interviews Julius Sinkevicius, Microsoft’s Director of Product Management for Windows Server, loosely compares Cisco’s Cloud strategy to Microsoft’s, and points to an apparent lack of interoperability between Microsoft’s own virtualization and Cloud Computing platforms:

MJF: Did Cisco ask Microsoft about licensing Azure? Will Microsoft license all of the components of Azure to any other company?

Sinkevicius: No, Microsoft is not offering Windows Azure for on premise deployment. Windows Azure runs only in Microsoft datacenters. Enterprise customers who wish to deploy a highly scalable and flexible OS in their datacenter should leverage Hyper-V and license Windows Server Datacenter Edition, which has unlimited virtualization rights, and System Center for management.

MJF: What does Microsoft see as the difference between Red Dog (Windows Azure) and the OS stack that Cisco announced?

Sinkevicius: Windows Azure is Microsoft’s runtime designed specifically for the Microsoft datacenter. Windows Azure is designed for new applications and allows ISVs and Enterprises to get geo-scale without geo-cost.  The OS stack that Cisco announced is for customers who wish to deploy on-premise servers, and thus leverages Windows Server Datacenter and System Center.

The source of the on-premise Azure hosting confusion appears to be this: All apps developed for Azure will be able to run on Windows Server, according to the Softies. However — at present — the inverse is not true: Existing Windows Server apps ultimately may be able to run on Azure. For now only some can do so, and only with a fairly substantial amount of tweaking.

Microsoft’s cloud pitch to enterprises who are skittish about putting their data in the Microsoft basket isn’t “We’ll let you host your own data using our cloud platform.” Instead, it’s more like: “You can take some/all of your data out of our datacenters and run it on-premise if/when you want — and you can do the reverse and put some/all of your data in our cloud if you so desire.”

What confuses me is how Azure, as a platform, will be limited to deployment only in Microsoft’s operating environment (i.e. their datacenters) and not for use outside of that environment and how that compares to the statements above regarding the interoperability described by Martin.

Doesn’t the proprietary nature of the Azure runtime platform, “open” or not via API, by definition limit its openness and interoperability? If I can’t take my applications and information and operate them anywhere without major retooling, how does that imply openness, portability and interoperability?

If one cannot do that fully between Windows Server and Azure — both from the same company —  what chance do we have between instances running across different platforms not from Microsoft?

The issue at hand to consider is this:

If you do not have one-to-one parity between the infrastructure that provides your cloud “externally” versus “internally,” (and similarly public versus private clouds) can you truly claim openness, portability and interoperability?

What do you think?

Update on the Cloud (Ontology/Taxonomy) Model…

March 28th, 2009 3 comments

A couple of months ago I kicked off a community-enabled project to build an infrastructure-centric ontology/taxonomy model of Cloud Computing.

You can see the original work with all the comments here.  Despite the distracting haggling over the use of the words “ontology and taxonomy,”  the model (as I now call it) has been well received by those for whom it was created.

Specifically, my goal was to be able to help a network or security professional do these things:

  1. Settle on a relevant and contextual technology-focused definition of Cloud Computing and its various foundational elements beyond the existing academic & 30,000 foot-view models
  2. Understand how this definition maps to the classical SPI (SaaS, PaaS, IaaS) models with which many people are aware
  3. Deconstruct the SPI model and present it in a layered format similar to the OSI model showing how each of the SPI components interact with and build atop one another
  4. Provide a further relevant mapping of infrastructure, applications, etc. at each model so as to relate well-understood solutions experiences to each
  5. Map a set of generally-available solutions from a catalog of compensating controls (from the security perspective) to each layer
  6. Ultimately map the SPI layers to the compensating controls and, in turn, to a set of governance and regulatory requirements (SOX, PCI, HIPAA, etc.)

This is very much, and unapologetically so, a controls-based model.  I assume that there exists no utopian state of proper architectural design, secure development lifecycle, etc. Just like the real world.  So rather than assume that we’re going to have universal goodness thanks to proper architecture, design and execution, I thought it more reasonable to think about plugging the holes (sadly) and going from there.

At the end of the day, I wanted an IT/Security professional to use the model like an “Annie Oakley Secret Decoder Ring” in order to help rapidly assess offerings, map them to the specific model layers, understand what controls they or a vendor needs to have in place by mapping that, in turn, to compliance requirements.  This would allow for a quick and accurate manner by which to perform a gap analysis which in turn can be used to feed into a risk assessment/analysis.
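
As a rough sketch of that “decoder ring” usage, the toy Python below maps placeholder layers to compensating controls and controls to compliance requirements, then diffs a provider’s stated controls against what its layers demand. The layer, control and requirement names are invented for illustration and are not the model’s actual catalog:

# Toy gap analysis against a controls-based cloud model (all names are illustrative).
LAYER_CONTROLS = {
    "IaaS": {"network segmentation", "hypervisor hardening", "log management"},
    "PaaS": {"secure SDLC", "identity federation", "log management"},
    "SaaS": {"data encryption at rest", "identity federation", "audit reporting"},
}

CONTROL_REQUIREMENTS = {
    "log management": {"PCI DSS 10", "SOX"},
    "data encryption at rest": {"PCI DSS 3", "HIPAA"},
    "identity federation": {"SOX"},
}

def gap_analysis(layers_used, controls_in_place):
    """Return (missing controls, compliance requirements put at risk by the gaps)."""
    required = set().union(*(LAYER_CONTROLS[layer] for layer in layers_used))
    missing = required - set(controls_in_place)
    at_risk = set().union(*(CONTROL_REQUIREMENTS.get(c, set()) for c in missing))
    return missing, at_risk

# Example: an offering spanning IaaS and SaaS whose vendor documents only two controls.
missing, at_risk = gap_analysis(
    layers_used=["IaaS", "SaaS"],
    controls_in_place={"network segmentation", "identity federation"},
)
print("Missing controls:", sorted(missing))
print("Requirements at risk:", sorted(at_risk))

The output of a diff like this is exactly the input a risk assessment needs; the real work is in getting the layer-to-control and control-to-requirement mappings right.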

We went through 5 versions in a relatively short period of time and arrived at a solid fundamental model based upon feedback from the target audience:

Cloud Taxonomy/Ontology Model (v1.5)

The model is CLEARLY not complete.  The next steps for improving it are:

  1. Reference other solution taxonomies to complete the rest of the examples and expand upon the various layers with key elements and offerings from vendors/solutions providers.  See here.
  2. Polish up the catalog of compensating controls
  3. Start mapping to various regulatory/compliance requirements
  4. Find a better way of interactively presenting this whole mess.

For my Frogs presentation, I presented the first stab at the example controls mapping and it seemed to make sense given the uptake/interest in the model. Here’s an example:

Frogs: Cloud Model Aligned to Security Controls Model

This still has a ways to go, but I’ve been able to present this to C-levels, bankers, technologists and lay people (with explanation) and it’s gone over well.

I look forward to making more progress on this shortly and would welcome the help, commentary, critique and collaboration.

I’ll start adding more definition to each of the layers so people can feedback appropriately.

Thanks,

/Hoff

P.S. A couple of days ago I discovered that Kevin Jackson had published an extrapolation of the UCSB/IBM version titled "A Tactical Cloud Computing Ontology."

Kevin’s “ontology” is at the 20,000 foot view compared to the original 30,000 foot UCSB/IBM model but is worth looking at.


The Most Comprehensive Review Of the Open Cloud Computing Manifesto Debacle, Ever…

March 28th, 2009 3 comments

[This Page Intentionally Left Blank]

That is all.

/Hoff


Cloud Catastrophes (Cloudtastrophes?) Caused by Clueless Caretakers?

March 22nd, 2009 4 comments
You'll ask "How?" Everytime... 

 

 

 

You'll ask "How?" Everytime...

Enter the dawn of the Cloudtastrophe…

I read a story today penned by Maureen O’Gara titled “Carbonite Loses Cloud-Based Data, Sues Storage Vendor.”

I thought this was going to be another story regarding a data breach (loss) of customer data by a Cloud Computing service vendor.

What I found, however, was another hyperbolic illustration of how the messaging of the Cloud by vendors has set expectations for service and reliability that are out of alignment with reality when you take a lack of sound enterprise architecture, proper contingency planning, solid engineering and common sense, and add the economic lubricant of the Cloud.

Stir in a little journalistic sensationalism, and you’ve got CloudWow!

Carbonite, the online backup vendor, says it lost data belonging to over 7,500 customers in a number of separate incidents in a suit filed in Massachusetts charging Promise Technology Inc with supplying it with $3 million worth of defective storage, according to a story in Saturday’s Boston Globe.

The catastrophe is the latest in a series of cloud failures.

The widgetry was supposed to detect disk failures and transfer the data to a working drive. It allegedly didn’t.

The story says Promise couldn’t fix the errors and “Carbonite’s senior engineers, senior management and senior operations personnel…spent enormous amounts of time dealing with the problems.”

Carbonite claims the data losses caused “serious damage” to its business and reputation for reliability. It’s demanding unspecified damages. Promise told the Globe there was “no merit to the allegations.”

Carbonite, which sells largely to consumers and small businesses and competes with EMC’s Mozy, tells its customers: “never worry about losing your files again.”

The abstraction of infrastructure and democratization of applications and data that Cloud Computing services can bring does not mean that all services are created equal.  It does not make our services or information more secure (or less for that matter.)  Just because a vendor brands themselves as a “Cloud” provider does not mean that “their” infrastructure is any more implicitly reliable, stable or resilient than traditional infrastructure or that proper enterprise architecture as it relates to people, process and technology is in place.  How the infrastructure is built and maintained is just as important as ever.

If you take away the notion of Carbonite being a “Cloud” vendor, would this story read any differently?

We’ve seen a few failures recently of Cloud-based services, most of them sensationally lumped into the Cloud pile: Google, Microsoft, and even Amazon; most of the stories about them relate the impending doom of the Cloud…

Want another example of how Cloud branding, the Web2.0 experience and blind faith makes for another FUDtastic “catastrophe in the cloud?”  How about the well-known service Ma.gnolia?

There was a meltdown at bookmark sharing website Ma.gnolia Friday morning. The service lost both its primary store of user data, as well as its backup. The site has been taken offline while the team tries to reconstruct its databases, though some users may never see their stored bookmarks again.

The failure appears to be catastrophic. The company can’t say to what extent it will be able to restore any of its users’ data. It also says the data failure was so extensive, repairing the loss will take “days, not hours.”

So we find that a one-man shop was offering a service that people liked, and it died a horrible death.  Because it was branded as a Cloud offering, it “seemed” bigger than it was.  This is where perception definitely was not reality, and now we’re left with a fluffy bad taste in our mouths.

Again, what this illustrates is that just because a service is “Cloud-based” does not mean it is any more reliable or resilient than one that is not. It’s just as important that, as enterprises look to move to the Cloud, they perform as much due diligence on their providers as makes sense. We’ll see a weeding out of the ankle-biters in Cloud Computing.

Nobody ever gets fired for buying IBM…

What we’ll also see is that even though we’re not supposed to care what our Cloud providers’ infrastructure is powered by and how, we absolutely will in the long term and the vendors know it.

This is where people start to freak about how standards and consolidation will kill innovation in the space but it’s also where the realities of running a business come crashing down on early adopters.

Large enterprises will move to providers who can demonstrate that their services are solid by way of co-branding with the reputation of the providers of infrastructure coupled with the compliance to “standards.”

The big players like IBM see this as an opportunity and as early as last year introduced a Cloud certification program:

IBM to Validate Resiliency of Cloud Computing Infrastructures

Will Consult With Businesses of All Sizes to Ensure Resiliency, Availability, Security; Drive Adoption of New Technology

ARMONK, NY – 24 Nov 2008: In a move that could spur the rise of the nascent computing model known as “cloud,” IBM (NYSE: IBM) today said it would introduce a program to validate the resiliency of any company delivering applications or services to clients in the cloud environment. As a result, customers can quickly and easily identify trustworthy providers that have passed a rigorous evaluation, enabling them to more quickly and confidently reap the business benefits of cloud services.

Cloud computing is a model for network-delivered services, in which the user sees only the service and does not view the implementation or infrastructure required for its delivery. The success to date of cloud services like storage, data protection and enterprise applications, has created a large influx of new providers. However, unpredictable performance and some high-profile downtime and recovery events with newer cloud services have created a challenge for customers evaluating the move to cloud.

IBM’s new “Resilient Cloud Validation” program will allow businesses who collaborate with IBM on a rigorous, consistent and proven program of benchmarking and design validation to use the IBM logo: “Resilient Cloud” when marketing their services.

Remember the “Cisco Powered Network” program?  How about a “Cisco Powered Cloud”?  See how GoGrid advertises that its load balancers are F5?

In the long term, much as the Capital One credit card commercials challenge you to ask the company providing your credit card “What’s in your wallet?”, you can expect to start asking the same question about your Cloud providers’ offerings.

/Hoff