Archive

Archive for the ‘Cloud Security’ Category

The Vagaries Of Cloudcabulary: Why Public, Private, Internal & External Definitions Don’t Work…

April 5th, 2009 19 comments

Updated again at 13:43 EST – Please see bottom of post

Hybrid, Public, Private, Internal and External.

The HPPIE model; you’ve heard these terms used to describe and define the various types of Cloud.

What’s always disturbed me about using these terms singularly is that they separately address orthogonal scenarios, and yet they are often used to compare and contrast one service/offering with another.

The short story: Hybrid, Public, and Private denote ownership and governance whilst Internal and External denote location.

The longer story: Hybrid, Public, Private, Internal and External seek to summarily describe no less than five different issues and categorize a cloud service/offering into one dumbed-down term for convenience.  In terms of a Cloud service/offering, using one of the HPPIE labels actually attempts to address in one word:

  1. Who manages it
  2. Who owns it
  3. Where it’s located
  4. Who has access to it
  5. How it’s accessed

That’s a pretty tall order.  I know we’re aiming for simplicity in description by using a label analogous to LAN, WAN, Intranet or Internet, but unfortunately what we’re often describing here is evolving to be much more complex.

Don’t get me wrong, I’m not aiming for precision but rather accuracy.  I don’t find that these labels do a good enough job when used by themselves.

Further, you’ll find most people using the service deployment models (Hybrid, Public, Private) in absence of the service delivery models (SPI – SaaS/PaaS/IaaS) while at the same time intertwining the location of the asset (internal, external), usually relative to a perimeter firewall (more on this in another post.)

This really lends itself to confusion.

I’m not looking to rename the HPPIE terms.  I am looking to use them more accurately.

Here’s a contentious example.  I maintain you can have an IaaS service that is Public and Internal.  WHAT!?  HOW!?

Let’s take a look at a summary table I built to think through use cases by looking at the three service deployment models (Hybrid, Public and Private):

The HPPIE Table

THIS TABLE IS DEPRECATED – PLEASE SEE UPDATE BELOW!

The blue separators in the table designate derivative service offerings and not just a simple and/or; they represent an actual branching of the offering.

Back to my contentious example wherein I maintain you can have an IaaS offering which is Public and yet also Internal.  Again, how?

Remember how I said “Hybrid, Public, and Private denote ownership and governance whilst Internal and External denote location?” That location refers to both the physical location of the asset as well as the logical location relative to an organization’s management umbrella which includes operations, security, compliance, etc.

Thus, if you look at a managed infrastructure service (name one) that utilizes Cloud Computing principles, there’s no reason a third-party MSP could not deploy said service internally, on customer-premises equipment which the third party owns but operates and manages on behalf of an organization, delivering the scale and pay-by-use model of Cloud internally, with access from trusted OR untrusted sources. Is there?

Some might call it a perversion of the term “Public.”  I highlight it to illustrate that “Public” is a crappy word for the example, because just as it’s valid here, it’s equally valid to suggest that Amazon’s EC2 can also share the “Public” moniker, despite being External.

In the same light, one can easily derive examples of SaaS:Private:Internal offerings…You see my problem with these terms?

Moreover, the “consumer” focus of the traditional HPPIE models means that using broad terms like these generally implies access to a service/offering by a human operating a web browser, and does not take into account access via things like APIs or other programmatic interfaces.

This is a little goofy, too. I don’t generally use a web browser (directly) to access Amazon’s S3 Storage-as-a-Service offering, just like I don’t use a web browser to make API calls in Google Gears.  Other non-interactive elements of the AppStack do that.

I don’t expect people to stop using these dumbed-down definitions, but this is why it makes me nuts when people compare “Private” Cloud offerings with “Internal” ones. It’s like comparing apples and buffalo.

What I want is for people to at least not include Internal and External as Cloud models, but rather use them as parameters as I have in the table above.
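For the code-minded, here’s a minimal Python sketch of that idea (the class, enum, and field names are mine, purely for illustration, not any standard): deployment model, location, and management are independent attributes of an offering rather than competing labels, which is exactly how an IaaS offering can legitimately be Public and Internal at the same time.

    from dataclasses import dataclass
    from enum import Enum

    class ServiceModel(Enum):          # SPI delivery models
        SAAS = "SaaS"
        PAAS = "PaaS"
        IAAS = "IaaS"

    class DeploymentModel(Enum):       # ownership/governance
        PUBLIC = "Public"
        PRIVATE = "Private"
        HYBRID = "Hybrid"

    class Location(Enum):              # physical/logical location relative to your perimeter
        INTERNAL = "Internal"
        EXTERNAL = "External"

    @dataclass
    class CloudOffering:
        name: str
        service_model: ServiceModel
        deployment_model: DeploymentModel    # who owns/governs it
        location: Location                   # where it sits
        managed_by_third_party: bool         # who operates it day to day

    # The contentious example: an MSP-owned, pay-by-use IaaS stack deployed on
    # customer premises -- Public *and* Internal.
    msp_onprem = CloudOffering(
        name="Hypothetical MSP-managed on-premise IaaS",
        service_model=ServiceModel.IAAS,
        deployment_model=DeploymentModel.PUBLIC,
        location=Location.INTERNAL,
        managed_by_third_party=True,
    )

    # Amazon EC2 shares the same deployment model but a different location.
    ec2 = CloudOffering(
        name="Amazon EC2",
        service_model=ServiceModel.IAAS,
        deployment_model=DeploymentModel.PUBLIC,
        location=Location.EXTERNAL,
        managed_by_third_party=True,
    )

Comparing the deployment_model of one offering against the location of another is precisely the apples-and-buffalo comparison the one-word labels invite.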

Does this make any sense to you?


Update: In a great set of discussions regarding this on Twitter with @jamesurquhart from Cisco and @zhenjl from VMware, @zhenjl came up with a really poignant solution to the issues surrounding the redefinition of Public Clouds and their ability to be deployed “internally.”  His idea, which highlights the “third party managed” example I gave, is to add a new category/class called “Managed,” which is essentially the example I highlighted in boldface above:

Managed Clouds

This means that we would modify the table above to look more like this (updated again based on feedback on Twitter & comments) — ultimately revised as part of the work I did for the Cloud Security Alliance in alignment with the NIST model, abandoning the ‘Managed’ section:

Revised Model

This preserves the notion of how people generally define “Public” clouds but also makes a critical distinction between what amounts to managed Cloud services which are provided by third parties using infrastructure/services located on-premise. It also still allows for the notion of Private Clouds which are distinct.

Thoughts?

Categories: Cloud Computing, Cloud Security Tags:

The Cloud Is a Fickle Mistress: DDoS&M…

April 2nd, 2009 6 comments

It’s interesting to see how people react when they are reminded that the “Cloud” still depends upon much of the same infrastructure and underlying protocols that we have been using for years.

BGP, DNS, VPNs, routers, switches, firewalls…

While it’s fun to talk about new attack vectors and sexy exploits, it’s the oldies and goodies that will come back to haunt us:

Simplexity

Building more and more of our business’s ability to remain a going concern on infrastructure that was never designed to support it is a scary proposition.  We’re certainly being afforded more opportunity to fix some of these problems as the technology improves, but it’s a patching solution to an endemic problem, I’m afraid.  We’ve got two ways to look at Cloud:

  • Skipping over the problems we have and “fixing” crappy infrastructure and applications by simply adding mobility and orchestration to move around an issue, or
  • Actually starting to use Cloud as a forcing function to fundamentally change the way we think about, architect, deploy and manage our computing capabilities in a more resilient, reliable and secure fashion

If I were a betting man…

Remember that just because it’s in the “Cloud” doesn’t mean someone’s sprinkled magic invincibility dust on your AppStack…

That web service still has IP addresses, open sockets. It still gets transported over MANY levels of shared infrastructure, from the telcos to the DNS infrastructure…you’re always at someone else’s mercy.

Dan Kaminsky has done a fabulous job reminding us of that.

A more poignant reminder of our dependency on the Same Old Stuff™ is the recent DDoS attack against Cloud provider GoGrid:

ONGOING DDoS ATTACK

Our network is currently the target of a large, distributed DDoS attack that began on Monday afternoon.   We took action all day yesterday to mitigate the impact of the attack, and its targets, so that we could restore service to GoGrid customers.  Things were stabilized by 4 PM PDT and most customer servers were back online, although some of you continued to experience intermittent loss in network connectivity.

This is an unfortunate thing.  It’s also a good illustration of the sorts of things you ought to ask your Cloud service providers about.  With whom do they peer?  What is their bandwidth?  How many datacenters do they have, and where?  What DoS/DDoS countermeasures do they have in place?  Have they actually dealt with this before?  Do they drill disaster scenarios like this?
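If it helps, those questions translate directly into a reusable due-diligence checklist. Here’s a minimal sketch in Python (the class and field names are my own shorthand, not any formal assessment framework):

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ProviderDDoSChecklist:
        """Due-diligence answers gathered from a Cloud provider about DoS/DDoS resilience."""
        provider: str
        peering_partners: List[str] = field(default_factory=list)    # with whom do they peer?
        aggregate_bandwidth_gbps: Optional[float] = None              # what is their bandwidth?
        datacenters: List[str] = field(default_factory=list)          # how many, and where?
        ddos_countermeasures: List[str] = field(default_factory=list)
        handled_prior_attacks: Optional[bool] = None
        drills_disaster_scenarios: Optional[bool] = None

        def open_questions(self) -> List[str]:
            """Return the questions still unanswered after talking to the provider."""
            gaps = []
            if not self.peering_partners:
                gaps.append("With whom do they peer?")
            if self.aggregate_bandwidth_gbps is None:
                gaps.append("What is their bandwidth?")
            if not self.datacenters:
                gaps.append("How many datacenters do they have, and where?")
            if not self.ddos_countermeasures:
                gaps.append("What DoS/DDoS countermeasures do they have in place?")
            if self.handled_prior_attacks is None:
                gaps.append("Have they actually dealt with this before?")
            if self.drills_disaster_scenarios is None:
                gaps.append("Do they drill disaster scenarios like this?")
            return gaps

    # A partially answered evaluation still tells you what to chase down next.
    candidate = ProviderDDoSChecklist(provider="ExampleProvider", datacenters=["US-West"])
    print(candidate.open_questions())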

We’re told we shouldn’t have to worry about the underlying infrastructure with Cloud, that it’s abstracted and someone else’s problem to manage…until it’s not.

This is where engineering, architecture and security meet the road.  Your provider’s ability to sustain an attack like this is critical.  Further, how you’ve designed your BCP/DR contingency plans is pretty important, too.  Until we get true portability/interoperability between Cloud providers, it’s still up to you to figure out how to make this all work.  Remember that when you’re assuming those TCO calculations accurately reflect reality.

Big providers like eBay, Amazon, and Microsoft invest huge sums of money and manpower to ensure they are as survivable as they can be during attacks like this.  Do you?  Does your Cloud Provider?  How many do you have?

Again, even Amazon goes down.  At this point, it’s largely been operational issues on their end and not the result of a massive attack. Imagine, however, if someday it is.  What would that mean to you?

As more and more of our applications and information are moved from inside our networks to beyond the firewalls and exposed to a larger audience (or even co-mingled with others’ data), the need for innovation and advancement in security is only going to skyrocket as we start to deal with many of these problems.

/Hoff

Categories: Cloud Computing, Cloud Security Tags:

Incomplete Thought: Looking At An “Open & Interoperable Cloud” Through Azure-Colored Glasses

March 29th, 2009 4 comments

As with the others in my series of “incomplete thoughts,” this one is focused on an issue that has been banging around in my skull for a few days.  I’m not sure how to properly articulate my thought completely, so I throw this up for consideration, looking for your discussion to clarify my thinking.

You may have heard of the little dust-up involving Microsoft and the folk(s) behind the Open Cloud Manifesto. The drama here reminds me of the Dallas episode where everyone tried to guess who shot J.R., and it’s really not the focus of this post.  I use it here for color.

What is the focus of this post is the notion of “open(ness),” portability and interoperability as they relate to Cloud Computing — or more specifically how these terms relate to the infrastructure and enabling platforms of Cloud Computing solution providers.

I put “openness” in quotes because definitionally, there are as many representations of this term as there are for “Cloud,” which is a big part of the problem.  Just to be fair, before you start thinking I’m unduly picking on Microsoft, I’m not. I challenged VMware on the same issues.

So here’s my question as it relates to Microsoft’s strategy regarding Azure given an excerpt from Microsoft’s Steven Martin as he described his employer’s stance on Cloud in regard to the Cloudifesto debacle above in his blog post titled “Moving Toward an Open Process On Cloud Computing Interoperability”:

From the moment we kicked off our cloud computing effort, openness and interop stood at the forefront. As those who are using it will tell you, the  Azure Services Platform is an open and flexible platform that is defined by web addressability, SOAP, XML, and REST.  Our vision in taking this approach was to ensure that the programming model was extensible and that the individual services could be used in conjunction with applications and infrastructure that ran on both Microsoft and non-Microsoft stacks. 

What got me going was this ZDNet interview by Mary Jo Foley, wherein she interviewed Julius Sinkevicius, Microsoft’s Director of Product Management for Windows Server, loosely compared Cisco’s Cloud strategy to Microsoft’s, and pointed to an apparent lack of interoperability between Microsoft’s own virtualization and Cloud Computing platforms:

MJF: Did Cisco ask Microsoft about licensing Azure? Will Microsoft license all of the components of Azure to any other company?

Sinkevicius: No, Microsoft is not offering Windows Azure for on premise deployment. Windows Azure runs only in Microsoft datacenters. Enterprise customers who wish to deploy a highly scalable and flexible OS in their datacenter should leverage Hyper-V and license Windows Server Datacenter Edition, which has unlimited virtualization rights, and System Center for management.

MJF: What does Microsoft see as the difference between Red Dog (Windows Azure) and the OS stack that Cisco announced?

Sinkevicius: Windows Azure is Microsoft’s runtime designed specifically for the Microsoft datacenter. Windows Azure is designed for new applications and allows ISVs and Enterprises to get geo-scale without geo-cost.  The OS stack that Cisco announced is for customers who wish to deploy on-premise servers, and thus leverages Windows Server Datacenter and System Center.

The source of the on-premise Azure hosting confusion appears to be this: All apps developed for Azure will be able to run on Windows Server, according to the Softies. However — at present — the inverse is not true: Existing Windows Server apps ultimately may be able to run on Azure. For now only some can do so, and only with a fairly substantial amount of tweaking.

Microsoft’s cloud pitch to enterprises who are skittish about putting their data in the Microsoft basket isn’t “We’ll let you host your own data using our cloud platform.” Instead, it’s more like: “You can take some/all of your data out of our datacenters and run it on-premise if/when you want — and you can do the reverse and put some/all of your data in our cloud if you so desire.”

What confuses me is how Azure, as a platform, will be limited to deployment only in Microsoft’s operating environment (i.e., their datacenters) and not available for use outside of that environment, and how that squares with Martin’s statements above regarding interoperability.

Doesn’t the proprietary nature of the Azure runtime platform, “open” or not via API, by definition limit its openness and interoperability? If I can’t take my applications and information and operate them anywhere without major retooling, how does that imply openness, portability and interoperability?

If one cannot do that fully between Windows Server and Azure — both from the same company —  what chance do we have between instances running across different platforms not from Microsoft?
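To make the retooling point concrete, here’s a hedged, purely illustrative Python sketch (the class and method names are hypothetical, not Azure’s or anyone else’s actual SDK): an application written against its own thin storage abstraction needs only a new adapter to move between a provider-hosted service and an on-premise deployment, whereas one written directly against a provider-specific runtime has to be reworked.

    import os
    from abc import ABC, abstractmethod

    class BlobStore(ABC):
        """The application's own storage abstraction -- the only interface app code sees."""
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class HostedBlobStore(BlobStore):
        """Adapter for a hypothetical provider-hosted blob service (fictional API)."""
        def __init__(self, endpoint: str, account: str, secret: str):
            self.endpoint, self.account, self.secret = endpoint, account, secret
            self._remote = {}  # stand-in for authenticated REST calls to the provider

        def put(self, key: str, data: bytes) -> None:
            self._remote[key] = data      # would be an HTTP PUT in reality

        def get(self, key: str) -> bytes:
            return self._remote[key]      # would be an HTTP GET in reality

    class LocalDiskBlobStore(BlobStore):
        """Adapter for running the same application on-premise against local storage."""
        def __init__(self, root: str):
            self.root = root
            os.makedirs(root, exist_ok=True)

        def put(self, key: str, data: bytes) -> None:
            with open(os.path.join(self.root, key), "wb") as f:
                f.write(data)

        def get(self, key: str) -> bytes:
            with open(os.path.join(self.root, key), "rb") as f:
                return f.read()

    def application(store: BlobStore) -> None:
        """Application logic depends only on the abstraction, so it is portable across adapters."""
        store.put("report.txt", b"quarterly numbers")
        assert store.get("report.txt") == b"quarterly numbers"

    application(HostedBlobStore("https://blobs.example.net", "acct", "s3cr3t"))
    application(LocalDiskBlobStore("/tmp/blobs"))

The catch, of course, is that when the runtime exists only in one provider’s datacenter, that abstraction layer is yours to build and maintain, which is precisely the parity problem.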

The issue at hand to consider is this:

If you do not have one-to-one parity between the infrastructure that provides your cloud “externally” versus “internally,” (and similarly public versus private clouds) can you truly claim openness, portability and interoperability?

What do you think?

Update on the Cloud (Ontology/Taxonomy) Model…

March 28th, 2009 3 comments

A couple of months ago I kicked off a community-enabled project to build an infrastructure-centric ontology/taxonomy model of Cloud Computing.

You can see the original work with all the comments here.  Despite the distracting haggling over the use of the words “ontology and taxonomy,”  the model (as I now call it) has been well received by those for whom it was created.

Specifically, my goal was to be able to help a network or security professional do these things:

  1. Settle on a relevant and contextual technology-focused definition of Cloud Computing and its various foundational elements beyond the existing academic & 30,000-foot-view models
  2. Understand how this definition maps to the classical SPI (SaaS, PaaS, IaaS) models with which many people are familiar
  3. Deconstruct the SPI model and present it in a layered format similar to the OSI model showing how each of the SPI components interact with and build atop one another
  4. Provide a further relevant mapping of infrastructure, applications, etc. at each layer so as to relate well-understood solution experiences to each
  5. Map a set of generally-available solutions from a catalog of compensating controls (from the security perspective) to each layer
  6. Ultimately map the SPI layers to the compensating controls and in turn to a set of governance and regulatory requirements (SOX, PCI, HIPAA, etc.)

This is very much, and unapologetically so, a controls-based model.  I assume that there exists no utopic state of proper architectural design, secure development lifecycle, etc. Just like the real world.  So rather than assume that we’re going to have universal goodness thanks to proper architecture, design and execution, I thought it more reasonable to think about plugging the holes (sadly) and going from there.

At the end of the day, I wanted an IT/Security professional to use the model like an “Annie Oakley Secret Decoder Ring” in order to rapidly assess offerings, map them to the specific model layers, understand what controls they or a vendor need to have in place, and map those, in turn, to compliance requirements.  This allows for a quick and accurate gap analysis, which in turn can feed a risk assessment/analysis.
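As a thumbnail of how that decoder-ring exercise could work mechanically, here’s a minimal sketch (the layer, control, and regulation mappings below are illustrative placeholders, not the finished catalog): map an offering to its model layers, look up the compensating controls those layers call for, and diff against what the provider actually supplies to produce the gap list that feeds the risk assessment.

    from typing import Dict, Set

    # Illustrative placeholders only -- not the finished catalog from the model.
    CONTROLS_BY_LAYER: Dict[str, Set[str]] = {
        "IaaS": {"network segmentation", "hypervisor hardening", "log collection"},
        "PaaS": {"secure SDLC", "API authentication", "log collection"},
        "SaaS": {"data encryption at rest", "access management", "log collection"},
    }

    REGS_BY_CONTROL: Dict[str, Set[str]] = {
        "network segmentation": {"PCI"},
        "hypervisor hardening": set(),
        "secure SDLC": {"PCI"},
        "API authentication": {"PCI"},
        "data encryption at rest": {"PCI", "HIPAA"},
        "access management": {"SOX", "PCI", "HIPAA"},
        "log collection": {"SOX", "PCI"},
    }

    def gap_analysis(layers: Set[str], provided_controls: Set[str]) -> Dict[str, Set[str]]:
        """Return required vs. missing controls for the mapped layers, and the regs the gaps touch."""
        required = set().union(*(CONTROLS_BY_LAYER[layer] for layer in layers))
        missing = required - provided_controls
        impacted = set().union(*(REGS_BY_CONTROL[c] for c in missing)) if missing else set()
        return {"required": required, "missing": missing, "impacted_regulations": impacted}

    # Example: an offering mapped to the SaaS layer whose provider documents only encryption.
    print(gap_analysis({"SaaS"}, {"data encryption at rest"}))
    # -> missing access management and log collection, impacting SOX, PCI and HIPAA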

We went through 5 versions in a relatively short period of time and arrived at a solid fundamental model based upon feedback from the target audience:

Cloud Taxonomy/Ontology Model v1.5

The model is CLEARLY not complete.  The next steps for improving it are:

  1. Reference other solution taxonomies to complete the rest of the examples and expand upon the various layers with key elements and offerings from vendors/solutions providers.  See here.
  2. Polish up the catalog of compensating controls
  3. Start mapping to various regulatory/compliance requirements
  4. Find a better way of interactively presenting this whole mess.

For my Frogs presentation, I presented the first stab at the example controls mapping and it seemed to make sense given the uptake/interest in the model. Here’s an example:

Frogs: Cloud Model Aligned to Security Controls Model

This still has a ways to go, but I’ve been able to present this to C-levels, bankers, technologists and lay people (with explanation) and it’s gone over well.

I look forward to making more progress on this shortly and would welcome the help, commentary, critique and collaboration.

I’ll start adding more definition to each of the layers so people can feedback appropriately.

Thanks,

/Hoff

P.S. A couple of days ago I discovered that Kevin Jackson had published an extrapolation of the UCSB/IBM version titled “A Tactical Cloud Computing Ontology.”

Kevin’s “ontology” is at the 20,000 foot view compared to the original 30,000 foot UCSB/IBM model but is worth looking at.

Categories: Cloud Computing, Cloud Security Tags:

The Most Comprehensive Review Of the Open Cloud Computing Manifesto Debacle, Ever…

March 28th, 2009 3 comments

[This Page Intentionally Left Blank]

That is all.

/Hoff

Categories: Cloud Computing, Cloud Security Tags:

Cloud Catastrophes (Cloudtastrophes?) Caused by Clueless Caretakers?

March 22nd, 2009 4 comments
You'll ask "How?" Every time...

Enter the dawn of the Cloudtastrophe…

I read a story today penned by Maureen O’Gara titled “Carbonite Loses Cloud-Based Data, Sues Storage Vendor.”

I thought this was going to be another story regarding a data breach (loss) of customer data by a Cloud Computing service vendor.

What I found, however, was another hyperbolic illustration of how vendors’ messaging of the Cloud has set expectations for service and reliability that are out of alignment with reality when you take a lack of sound enterprise architecture, proper contingency planning, solid engineering and common sense, and add the economic lubricant of the Cloud.

Stir in a little journalistic sensationalism, and you’ve got CloudWow!

Carbonite, the online backup vendor, says it lost data belonging to over 7,500 customers in a number of separate incidents in a suit filed in Massachusetts charging Promise Technology Inc with supplying it with $3 million worth of defective storage, according to a story in Saturday’s Boston Globe.

The catastrophe is the latest in a series of cloud failures.

The widgetry was supposed to detect disk failures and transfer the data to a working drive. It allegedly didn’t.

The story says Promise couldn’t fix the errors and “Carbonite’s senior engineers, senior management and senior operations personnel…spent enormous amounts of time dealing with the problems.”

Carbonite claims the data losses caused “serious damage” to its business and reputation for reliability. It’s demanding unspecified damages. Promise told the Globe there was “no merit to the allegations.”

Carbonite, which sells largely to consumers and small businesses and competes with EMC’s Mozy, tells its customers: “never worry about losing your files again.”

The abstraction of infrastructure and democratization of applications and data that Cloud Computing services can bring does not mean that all services are created equal.  It does not make our services or information more secure (or less for that matter.)  Just because a vendor brands themselves as a “Cloud” provider does not mean that “their” infrastructure is any more implicitly reliable, stable or resilient than traditional infrastructure or that proper enterprise architecture as it relates to people, process and technology is in place.  How the infrastructure is built and maintained is just as important as ever.

If you take away the notion of Carbonite being a “Cloud” vendor, would this story read any differently?

We’ve seen a few failures recently of Cloud-based services, most of them sensationally lumped into the Cloud pile: Google, Microsoft, and even Amazon; most of the stories about them relate the impending doom of the Cloud…

Want another example of how Cloud branding, the Web2.0 experience and blind faith make for another FUDtastic “catastrophe in the cloud?”  How about the well-known service Ma.gnolia?

There was a meltdown at bookmark sharing website Ma.gnolia Friday morning. The service lost both its primary store of user data, as well as its backup. The site has been taken offline while the team tries to reconstruct its databases, though some users may never see their stored bookmarks again.

The failure appears to be catastrophic. The company can’t say to what extent it will be able to restore any of its users’ data. It also says the data failure was so extensive, repairing the loss will take “days, not hours.”

So we find that a one-man shop was offering a service that people liked, and it died a horrible death.  Because it was branded as a Cloud offering, it “seemed” bigger than it was.  This is where perception definitely was not reality, and now we’re left with a fluffy bad taste in our mouths.

Again, what this illustrates is that just because a service is “Cloud-based” does not imply it’s any more reliable or resilient than one that is not. It’s just as important that, as enterprises look to move to the Cloud, they perform as much due diligence on their providers as makes sense. We’ll see a weeding out of the ankle-biters in Cloud Computing.

Nobody ever gets fired for buying IBM…

What we’ll also see is that even though we’re not supposed to care what our Cloud providers’ infrastructure is powered by and how, we absolutely will in the long term and the vendors know it.

This is where people start to freak about how standards and consolidation will kill innovation in the space but it’s also where the realities of running a business come crashing down on early adopters.

Large enterprises will move to providers who can demonstrate that their services are solid by way of co-branding with the reputation of their infrastructure providers, coupled with compliance with “standards.”

The big players like IBM see this as an opportunity and as early as last year introduced a Cloud certification program:

IBM to Validate Resiliency of Cloud Computing Infrastructures

Will Consult With Businesses of All Sizes to Ensure Resiliency, Availability, Security; Drive Adoption of New Technology

ARMONK, NY – 24 Nov 2008: In a move that could spur the rise of the nascent computing model known as “cloud,” IBM (NYSE: IBM) today said it would introduce a program to validate the resiliency of any company delivering applications or services to clients in the cloud environment. As a result, customers can quickly and easily identify trustworthy providers that have passed a rigorous evaluation, enabling them to more quickly and confidently reap the business benefits of cloud services.

Cloud computing is a model for network-delivered services, in which the user sees only the service and does not view the implementation or infrastructure required for its delivery. The success to date of cloud services like storage, data protection and enterprise applications, has created a large influx of new providers. However, unpredictable performance and some high-profile downtime and recovery events with newer cloud services have created a challenge for customers evaluating the move to cloud.

IBM’s new “Resilient Cloud Validation” program will allow businesses who collaborate with IBM on a rigorous, consistent and proven program of benchmarking and design validation to use the IBM logo: “Resilient Cloud” when marketing their services.

Remember the “Cisco Powered Network” program?  How about a “Cisco Powered Cloud?”  See how GoGrid advertises that their load balancers are F5?

In the long term, much like the Capital One credit card commercials that challenge the company providing your credit card services by asking “What’s in your wallet?”, you can expect to start asking the same thing about your Cloud providers’ offerings.

/Hoff

 

Azure Users Seeing Red: When Patching the Cloud Causes Cracks

March 19th, 2009 4 comments

No, this isn’t one of those posts that suggests we can’t depend on the Cloud just because of one (ok, many) outages of note lately.  That’s so dystopic.  Besides, everyone else is already doing that.

I mean just because Azure was offline for 22 hours isn’t cause for that much concern, right?  It’s a beta community technology preview, anyway… 😉  Just like Google’s a beta.

What I found interesting was what Microsoft reported as the root cause for the outage, however:

 

The Windows Azure Malfunction This Weekend

First things first: we’re sorry.  As a result of a malfunction in Windows Azure, many participants in our Community Technology Preview (CTP) experienced degraded service or downtime.  Windows Azure storage was unaffected.

In the rest of this post, I’d like to explain what went wrong, who was affected, and what corrections we’re making.

What Happened?

During a routine operating system upgrade on Friday (March 13th), the deployment service within Windows Azure began to slow down due to networking issues.  This caused a large number of servers to time out and fail.

You catch that bit about “…a routine operating system upgrade?”  Sometimes we call those things “patches.”  Even if this wasn’t a patch, let’s call it one for argument’s sake, okay?

As such, I was reminded of a blog post that I wrote last year titled: “Patching the Cloud” in which I squawked about my concerns regarding patching and change management/roll-back in Cloud services.  It seems apropos:

 

Your application is sitting atop an operating system and underlying infrastructure that is managed by the cloud operator.  This “datacenter OS” may not be virtualized or could actually be sitting atop a hypervisor which is integrated into the operating system (Xen, Hyper-V, KVM) or perhaps reliant upon a third party solution such as VMware.  The notion of cloud implies shared infrastructure and hosting platforms, although it does not imply virtualization.

A patch affecting any one of the infrastructure elements could cause a ripple effect on your hosted applications.  Without understanding the underlying infrastructure dependencies in this model, how does one assess risk and determine what any patch might do up or down the stack?  …

Huh.  Go figure.  
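The question in that excerpt, namely how you assess what a patch might do up or down the stack, is at minimum a dependency-graph exercise. Here’s a minimal sketch, with made-up component names standing in for a real provider’s stack:

    # Hypothetical layered dependencies: each component lists what it sits on top of.
    DEPENDS_ON = {
        "customer-app-A": ["deployment-service", "guest-os"],
        "customer-app-B": ["deployment-service", "blob-storage"],
        "deployment-service": ["host-os", "network-fabric"],
        "blob-storage": ["host-os"],
        "guest-os": ["hypervisor"],
        "hypervisor": ["host-os"],
        "host-os": [],
        "network-fabric": [],
    }

    def affected_by(patched: str) -> set:
        """Everything that transitively depends on the patched component."""
        impacted = set()
        frontier = [patched]
        while frontier:
            current = frontier.pop()
            for component, deps in DEPENDS_ON.items():
                if current in deps and component not in impacted:
                    impacted.add(component)
                    frontier.append(component)
        return impacted

    # A "routine operating system upgrade" on the host OS touches nearly everything above it.
    print(affected_by("host-os"))
    # -> deployment-service, blob-storage, hypervisor, guest-os, and both customer apps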

/Hoff

 

Google and Privacy: an EPIC Fail…

March 18th, 2009 2 comments

“I do not think this means what you think it means…”

This isn’t a post about Google’s struggles with privacy specifically, but rather about the Electronic Privacy Information Center’s (EPIC) tactics in a complaint/petition filed with the FTC, in which EPIC claims that the privacy and security safeguards for Google’s “Cloud Computing Services” are inadequate, that the services are injurious to consumers, and that Google has engaged in “unfair and/or deceptive trade practices.”

EPIC is petitioning the FTC to “…enjoin Google from offering such services until safeguards are verifiably established” as well as to compel them to “…contribute $5,000,000 to a public fund that will help support research concerning privacy enhancing technologies.”

In reading the petition, which you can find here, you will notice that parallels are drawn and overtly called out that liken Google’s recent issues to those of TJX and ChoicePoint.  The report is a rambling mess of hyperbolic references and footnotes which appears meant to froth the FTC into action, especially by suggesting the overt comparison to the breaches of confidential information from the likes of the aforementioned companies.

EPIC suggests that Google’s inadequate security is both an unfair business practice and a deceptive trade practice, and while these two claims make up the meat of the complaint, they represent the smallest amount of text in the report with the most emotive melodrama: “…consumer’s justified privacy expectations were dashed…” “…the Google Docs Data Breach exposed consumers’ personal information…”  I can haz evidence of these claims, please?

While I’m not happy with some of Google’s practices as they relate to privacy, nor pleased with the hiccups they’ve had with services like Gmail and the most recent “privacy pollution” issue surrounding Google Docs, here’s an interesting factoid that EPIC seems to have missed:

Google Apps like those mentioned are FREE. We consumers are not engaging in “Trade” when we don’t pay for said services. Further, we as consumers must accept the risk associated with said offerings when we agree to the terms of service. Right, wrong, or indifferent, you get what you pay for and should expect NO privacy despite Google’s best efforts to provide it (or not.)

I could tolerate this pandering to the FTC if it were not for what amounts to jumping the shark on the part of EPIC by plastering Cloud Computing as the root of all evil (with Google as the ringmaster) and the blatant publicity stunt and fundraising attempt of demanding that the FTC “compel” Google to bleed out $5,000,000 to a fund that would likely feed more of this sort of drivel.

If we want privacy advancements with Google or any Cloud Computing service provider, this isn’t the way to do it.

As my good friend David Mortman said, “EPIC apparently thinks it’s all about publicity. They are turning into the PETA of privacy.”

I agree. What’s next?  Will we rename personally identifiable information to “information kittens?”

/Hoff

P.S. Again, I am not trying to downplay any concerns with privacy in Cloud Computing because EPIC’s report does do a reasonable job of highlighting issues.  My friend Zach Lanier (@quine) did a great job summarizing his reaction to the post here:

It’s almost as though EPIC need to remind everyone that they still exist and haven’t become entirely decrepit and overshadowed by the EFF. The document is well assembled, citing examples that most users *don’t* consider when using Google services (or just about any *aaS, for that matter). Incidentally, the complaint references a recently published report from the World Privacy Forum on privacy risks in Cloud Computing[1]. Both documents raise a few similar points.

For example, how many of us actually read, end-to-end, the TOS and privacy policy of the Provider? How many of us validate claims like “your data are safe from unauthorized access when you store it on our Cumulonimbus Mega Awesome Cloud Storage Platform”?

I, for one, laud EPIC’s past efforts and the heart whence this complaint emerges. However, like a few others, the request for enjoinment basically negated my support for the complaint in its entirety.

[1] http://www.worldprivacyforum.org/pdf/WPF_Cloud_Privacy_Report.pdf

— Zach Lanier | http://n0where.org/ | (617) 606-3451 FP: 7CC5 5DEE E46F 5F41 9913 1577 E320 1D64 A200 AB49

The Frogs Who Desired a King: A Virtualization & Cloud Computing Fable [Slides]

March 17th, 2009 9 comments

I’m loath to upload this presentation because really the slides accompany me (not the other way around) and there’s a ton of really important subtext and dialog that goes along with them, but I’m getting hammered with requests to release the deck, so here it is.

I will be giving this presentation at various venues over the next few months as well as the second in the series titled “Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure.”  

At any rate, it’s another rather colorful presentation. It’s in PDF format and is roughly 12MB.

Click here to download it.

Enjoy

/Hoff

The UFC and UCS: Cisco Is Brock Lesnar

March 17th, 2009 7 comments

Lesnar vs. Mir...

My favorite sport is mixed martial arts (MMA.)

MMA is a combination of various arts and features athletes who come from a variety of backgrounds and combine the many disciplines that they bring to the ring.

You’ve got wrestlers, boxers, kickboxers, muay thai practitioners, jiu jitsu artists, judoka, grapplers, freestyle fighters and even the odd karate kid.

Mixed martial artists are often better versed in one style/discipline than another given their strengths and background but as the sport has evolved, not being well-rounded means you run the risk of being overwhelmed when paired against an opponent who can knock you out, take you down, ground and pound you, submit you or wrestle/grind you into oblivion.  

The UFC (Ultimate Fighting Championship) is an organization which has driven the popularity and mainstream adoption of MMA as a recognizable and sanctioned sport and has given rise to some of the most notable MMA match-ups in recent history.

One of those match-ups included the introduction of Brock Lesnar — an extremely popular “professional” wrestler — who has made the transition to MMA.  It should be noted that Brock Lesnar is an aberration of nature.  He is an absolute monster: 6’3″ and 276 pounds.  He is literally a wall of muscle, a veritable 800 pound gorilla.

In his first match, he was paired up against an MMA veteran and former heavyweight champion, Frank Mir, who is an amazing grappler known for vicious submissions.  In fact, Mir submitted Lesnar with a nasty kneebar, as Lesnar’s ground game had not yet evolved.  This is simply part of the process.  Lesnar’s second fight was against another veteran, Heath Herring, whom he manhandled to victory.  Following the Herring fight, Lesnar went on to fight one of the legends of the sport and reigning heavyweight champion, Randy Couture.

Lesnar’s skills had obviously progressed and he looked great against Couture and ultimately won by a TKO.

So what the hell does the UFC have to do with the Unified Computing System (UCS?)

Cisco UCS Components

 

Cisco is to UCS as Lesnar is to the UFC.

Everyone wrote Lesnar off after he entered the MMA world and especially after the first stumble against an industry veteran.

Imagine the surprise when his mass, athleticism, strength, intelligence and tenacity combined with a well-versed strategy paid off as he’s become an incredible force to be reckoned with in the MMA world as his skills progressed.  Oh, did I mention that he’s the World Heavyweight Champion now?

Cisco comes to the (datacenter) cage much as Lesnar did: an 800 pound gorilla incredibly well-versed in one set of disciplines, looking to expand into others and become just as versatile and skilled in a remarkably short period of time.  Cisco comes to win, not compete.  Yes, Lesnar stumbled in his first outing.  Now he’s the World Heavyweight Champion.  Cisco will have their hiccups, too.

The first elements of UCS have emerged.  The solution suite with the help of partners will refine the strategy and broaden the offerings into a much more well-rounded approach.  Some of Cisco’s competitors who are bristling at Cisco’s UCS vision/strategy are quick to criticize them and reduce UCS to simply an ill-executed move “…entering the server market.”  

I’ve stated my opinions on this short-sighted perspective:

Yes, yes. We’ve talked about this before here. Cisco is introducing a blade chassis that includes compute capabilities (heretofore referred to as a ‘blade server.’)  It also includes networking, storage and virtualization all wrapped up in a tidy bundle.

So while that looks like a blade server (quack!,) walks like a blade server (quack! quack!) that doesn’t mean it’s going to be positioned, talked about or sold like a blade server (quack! quack! quack!)

What’s my point?  What Cisco is building is just another building block of virtualized INFRASTRUCTURE. Necessary infrastructure to ensure control and relevance as their customers’ networks morph.

My point is that what Cisco is building is the natural by-product of converged technologies with an approach that deserves attention.  It *is* unified computing.  It’s a solution that includes integrated capabilities that otherwise customers would be responsible for piecing together themselves…and that’s one of the biggest problems we have with disruptive innovation today: integration.

 

The knee-jerk dismissals witnessed since yesterday, with the competition downplaying the impact of UCS, are very similar to how many people reacted to Lesnar, suggesting he was one-dimensional and had no core competencies beyond wrestling and discounting his ability to rapidly improve and overwhelm the competition.

Everyone seems to be focused on the 5100 — the “blade server” — and not the solution suite of which it is a single piece; a piece of a very innovative ecosystem, some Cisco, some not.  Don’t get lost in the “but it’s just a blade server and HP/IBM/Dell can do that” diatribe.  It’s the bigger picture that counts.

The 5100 is simply that — one very important piece of the evolving palette of tools which offer the promise of an integrated solution to a radically complex set of problems.

Is it complete?  Is it perfect?  Do we have all the details? Can they pull it off themselves?  The answer right now is a simple “No.”  But it doesn’t have to be.  It never has.

There’s a lot of work to do, but much like a training camp for MMA, that’s why you bring in the best partners with which to train and improve and ultimately you get to the next level.

All I know is that I’d hate to be in the Octagon with Cisco just like I would with Lesnar.

/Hoff