Archive for the ‘Cloud Computing’ Category

Apparently In The Government You Can Have Your Cloud & Eat It, Too…

June 9th, 2009

I’m sure more details will emerge, but as written in Information Week, this story is just bizarre:

Less than a month and a half after coming out as federal cloud CTO, Patrick Stingley has returned to his role as CTO of the Bureau of Land Management*, with the General Services Administration saying the creation of the new role came too early. “It just wasn’t the right time to have any formalized roles and responsibilities because this is still kind of in the analysis stage,” GSA CIO Casey Coleman said in an interview today. “Once it becomes an ongoing initiative, it might be a suitable time to look at roles such as a federal cloud CTO, but it’s just a little premature.”

Cloud computing is a major initiative of federal CIO Vivek Kundra, and its importance was even outlined in an addendum to the president’s 2010 budget last month. Kundra introduced Google Apps to city employees in his former role as CTO of Washington, D.C., and has said that he believes cloud computing could be one way to cut the federal IT budget.

So Cloud Computing, despite all we hear about the Government’s demand for such services as a critical national initiative, is “…still kind of in the analysis stage,” and this isn’t the “…right time to have any formalized roles and responsibilities”?

Wow.

While Stingley is no longer the formal federal cloud CTO, he has by no means turned his attention away from cloud computing. As of last Thursday, he was still scheduled to give a presentation titled “Development Of A Federal Cloud Computing Infrastructure” at the Geospatial Service-Oriented Architecture Best Practices Workshop on Tuesday morning, though as CTO of the BLM, not as a representative of the GSA.

The GSA isn’t by any means taking its foot off the accelerator with cloud computing. However, Coleman wants to make sure it’s done in the right way. “As we formalize the cloud computing initiative, we will have a program office, we will have a governance model,” she said.

Despite the elimination — for now — of the federal cloud CTO role, Coleman said that it’s “fair to say” that the GSA will be taking a central role in pushing the Obama administration’s cloud computing initiative, noting that the GSA should be a “center of gravity” for federal government IT.

Perhaps this was simply lost in GovSpeak translation, but something does not compute here.  I’m very much for correcting missteps early, but what an absolutely confusing message to send: Cloud is uber-important, we’re moving full-steam ahead, but nobody — or at least not the GSA — is steering the ship?

What could go wrong?

Even if it was decided that the GSA was not the appropriate office to lead/govern the Federal Cloud efforts, they are amongst the most innovative:

The GSA is experimenting with cloud computing for its own internal use. For example, federal information Web site USA.gov is hosted via Terremark’s Enterprise Cloud infrastructure as a service product, which charges by capacity used. When it was time for renegotiation of its old hosting contract, the GSA opened the contract to bidders and ended up saving between 80% and 90% with Terremark on a multiyear contract worth up to $135 million.

It’s also possible that under the Obama administration, the GSA might begin playing more of a shared-services role in IT, as it does in building management. However, Coleman is coy about whether that’s likely to happen, saying only that it would depend on the goals of the administration and the incoming GSA administrator. Stingley is reported to have been thinking about how the GSA might build out a federal cloud that agencies could easily tap into.

There’s a back-story here…

Hoff

* Aha!  I figured it out. See the problem is that you can’t appoint the CTO from something called the Bureau of LAND Management and expect them to be able to manage CLOUDS!  Silly me!

The Nines Have It…

June 8th, 2009

There are numerous clichés and buzzwords we hear daily that creep into our lexicon without warrant of origin or meaning.

One of them that you’re undoubtedly used to hearing relates to the measurement of availability expressed as a percentage: the dreaded “nines.”

I read a story this morning on the launch of the “Stratus Trusted Cloud” that promises the following:

Since it is built on the industry’s most robust, scalable, fully redundant architecture, Stratus delivers unmatched performance, availability and security with 99.99% SLAs.

It’s interesting to note what 99.99% availability means within the context of an SLA — “four nines” means you have the equivalent of 52.6 minutes of resource unavailability per year.  That may sound perfectly wonderful and may even lead some to consider that this exceeds what many enterprises can deliver today (I’m interested in the veracity of these claims.)  However, I would ask you to consider this point:

I don’t have access to the contract/SLA to know whether this metric refers to total availability (including both planned and unplanned downtime) or whether it counts unplanned downtime only.

This is pretty important, especially in light of what we’ve seen from other large and well-established Cloud service providers who offer similar or better SLAs (with or without real fiscal repercussions) and have experienced unplanned outages for hours on end.

Is four nines good enough for your most critical applications?  Do you measure this today?  Does it even matter?

/Hoff


Here’s a handy availability reference table from Wikipedia that you can print out:

Availability %            Downtime per year   Downtime per month*   Downtime per week
90%                       36.5 days           72 hours              16.8 hours
95%                       18.25 days          36 hours              8.4 hours
98%                       7.30 days           14.4 hours            3.36 hours
99%                       3.65 days           7.20 hours            1.68 hours
99.5%                     1.83 days           3.60 hours            50.4 minutes
99.8%                     17.52 hours         86.23 minutes         20.16 minutes
99.9% (“three nines”)     8.76 hours          43.2 minutes          10.1 minutes
99.95%                    4.38 hours          21.56 minutes         5.04 minutes
99.99% (“four nines”)     52.6 minutes        4.32 minutes          1.01 minutes
99.999% (“five nines”)    5.26 minutes        25.9 seconds          6.05 seconds
99.9999% (“six nines”)    31.5 seconds        2.59 seconds          0.605 seconds

* For monthly calculations, a 30-day month is used.
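If you’d rather compute these than print them, here’s a quick back-of-the-envelope sketch (Python; my own illustration, using the same 365-day year and 30-day month assumptions as the table) that reproduces the figures:

```python
# Allowed downtime for a given availability percentage.
# Assumptions match the table: 365-day year, 30-day month, 7-day week.

MINUTES = {"year": 365 * 24 * 60, "month": 30 * 24 * 60, "week": 7 * 24 * 60}

def downtime_minutes(availability_pct):
    """Minutes of allowed unavailability per period at a given SLA %."""
    unavailable = 1.0 - availability_pct / 100.0
    return {period: mins * unavailable for period, mins in MINUTES.items()}

for pct in (99.0, 99.9, 99.99, 99.999):
    d = downtime_minutes(pct)
    print(f"{pct}%: {d['year']:.1f} min/yr, "
          f"{d['month']:.2f} min/mo, {d['week']:.2f} min/wk")

# 99.99%: 52.6 min/yr, 4.32 min/mo, 1.01 min/wk -- the "four nines" row above.
```

Of course, the arithmetic tells you nothing about whether the SLA counts planned maintenance windows, which is precisely the point above.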

Most CIOs Not Sold On Cloud? Good, They Shouldn’t Be…

June 7th, 2009

I find it amusing that there is so much drama surrounding the notion of Cloud adoption.

There are those who paint Cloud as the savior of today’s IT great unwashed and others who claim it’s simply hype and not ready for prime time.

They’re both right and Cloud adoption is exactly where it should be today.

Here’s a great illustration: “Cloud or Fog? Two-Thirds of UK CIOs and CFOs Not Yet Sold on Cloud”:

Sixty-seven per cent of Chief Information Officers and Chief Financial Officers in UK enterprises say they are either not planning to adopt cloud computing (35 per cent) or are unsure (32 per cent) of whether their company will adopt cloud computing during the next two years, according to a major new report from managed hosting (http://www.ntteuropeonline.com/) specialists NTT Europe Online.

Whose perspective you share comes down to well-established market dynamics relating to technology adoption and should not come as a surprise to anyone.

One of the best-known examples of this can be visualized by a graphical representation of what Geoffrey Moore described in his book “Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers”:

[Image: the technology adoption curve]

Because I’m lazy, I’ll just refer you to the Wikipedia entry which describes “the Chasm” and the technology adoption lifecycle:

In Crossing the Chasm, Moore begins with the diffusion of innovations theory from Everett Rogers, and argues there is a chasm between the early adopters of the product (the technology enthusiasts and visionaries) and the early majority (the pragmatists). Moore believes visionaries and pragmatists have very different expectations, and he attempts to explore those differences and suggest techniques to successfully cross the “chasm,” including choosing a target market, understanding the whole product concept, positioning the product, building a marketing strategy, choosing the most appropriate distribution channel and pricing.

Crossing the Chasm is closely related to the Technology adoption lifecycle where five main segments are recognized; innovators, early adopters, early majority, late majority and laggards. According to Moore, the marketer should focus on one group of customers at a time, using each group as a base for marketing to the next group. The most difficult step is making the transition between visionaries (early adopters) and pragmatists (early majority). This is the chasm that he refers to. If successful, a firm can create a bandwagon effect in which the momentum builds and the product becomes a de facto standard. However, Moore’s theories are only applicable for disruptive or discontinuous innovations. Adoption of continuous innovations (that do not force a significant change of behavior by the customer) is still best described by the original Technology adoption lifecycle. Confusion between continuous and discontinuous innovation is a leading cause of failure for high tech products.

Cloud is firmly entrenched in the Chasm, clawing its way out as the market matures*.

It will, over the next 18-24 months by my estimates, arrive at the early majority phase.
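As an aside, the segment sizes in that curve aren’t arbitrary: in Rogers’ diffusion model the boundaries are simply standard deviations around the mean time of adoption. A minimal sketch (Python) shows the familiar split falling out of a normal distribution:

```python
# Rogers' adoption segments as slices of a standard normal distribution
# over "time of adoption"; boundaries sit at -2, -1, 0 and +1 sigma.
from statistics import NormalDist

n = NormalDist()  # standard normal
segments = [
    ("innovators",     float("-inf"), -2.0),
    ("early adopters", -2.0, -1.0),
    ("early majority", -1.0,  0.0),
    ("late majority",   0.0,  1.0),
    ("laggards",        1.0,  float("inf")),
]

for name, lo, hi in segments:
    share = n.cdf(hi) - n.cdf(lo)
    print(f"{name:>15}: {share:6.1%}")

# innovators ~2.3%, early adopters ~13.6%, early/late majority ~34.1% each,
# laggards ~15.9% -- the canonical 2.5/13.5/34/34/16 split, give or take rounding.
```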

Those who are today evangelizing Cloud Computing are the “technology enthusiasts” and “visionaries” in the “innovator” and “early adopter” phases respectively.  If you look at the article I quoted at the top of the blog, CIOs are generally NOT innovators or early adopters, so…

So don’t be put off or overly excited when you see hyperbolic references to Cloud adoption because depending upon who you are and who you’re talking about, you’ll likely always get a different perspective for completely natural reasons.

/Hoff

* To be clear, I wholeheartedly agree with James Urquhart that “Cloud” is not a technology, it’s an operational model. So as not to confuse people, within the context of the “technology adoption curve” above you can likewise see how “model” or “paradigm” works, also.  It doesn’t really have to be limited to a pure technology.

The Six Worst Cloud Security Mistakes? I Can Do You One Better…

June 6th, 2009

I recently read a story from Kelly Jackson Higgins of Dark Reading outlining what are described as the “Six Worst Cloud Security Mistakes”:

  1. Assuming the cloud is less secure than your data center.
  2. Not verifying, testing, or auditing the security of your cloud-based service provider.
  3. Failing to vet your cloud provider’s viability as a business.
  4. Assuming you’re no longer responsible for securing data once it’s in the cloud.
  5. Putting insecure apps in the cloud and expecting that to make them more secure.
  6. Having no clue that your business units are already using some cloud-based services.

A very interesting list, for sure, and a reasonable set of potential “mistakes” to ponder, but I’m really having trouble with one in particular.

The one that’s getting my goose honking is #1: Assuming the cloud is less secure than your data center.

Really? I maintain that this generalization about the Cloud being more or less secure (relative to one’s own capabilities) is a silly thing to argue; let’s see why.

We start off with what I think is a strange bit of contradiction:

It’s only natural for security pros to be control freaks. Being charged with securing a company’s data and intellectual property requires a healthy dose of paranoia and protectionism. But sometimes that leads to false impressions about cloud security. “One common mistake is that as soon as you talk about the cloud, [organizations] assume it’s less secure than their own IT security operation,” says Chenxi Wang, principal analyst at Forrester Research. “More control does not necessarily lead to more security.”

Assuming that one of the reasons a company might consider outsourcing its IT security operations to a third-party [Cloud] provider IS the fact that the provider offers more control than, or at least control equal to, what the company can deliver itself, it occurs to me this sort of statement can be interpreted many ways.  Here’s one, for example.

I find myself confused by the highlighted sentence regarding control and security within the context of what is written.  In fact, if you read the next paragraph, it seems to imply that because a Cloud provider has more control, they can offer better security:

In fact, with services such as Google’s SaaS, data loss is less likely because the information is accessible from anywhere and anytime without saving it to an easily lost or stolen USB stick or CD, according to Eran Feigenbaum, director of security for Google Apps. And Google’s security-patching process is more streamlined than a typical enterprise because its server architecture is homogeneous, he says. “Many attacks [come from a] lack of patch management and server misconfiguration…For Google, when the time comes to patch, we can do so across the entire platform in a uniform fashion,” he said.

I’ll say it again: SaaS is a convenient way of dumbing down “Cloud Computing” to a singular instance/application/service, but it completely ignores Platform and Infrastructure as a Service offerings, which are wildly different animals, especially from a security perspective.  Please see my latest commentary about this in my response to Bruce Schneier’s equation of SaaS with Cloud Computing to the exclusion of PaaS/IaaS.

I’ve made the point before that comparing managing/patching a single application and its supporting infrastructure in a SaaS offering to an enterprise that would otherwise have to support not only that service but potentially hundreds more is a completely unfair comparison.  If you want to compare apples to apples, I’d maintain that any organization with a mature security program whose only charter was to support (securely) a single application could do it just as well as a SaaS provider, all other things being equal.

The differences here come down to scale and multi-tenancy in the case of the Cloud provider; I think these issues actually make a Cloud environment more difficult to secure.

Also, the suggestion in the Google example that “data loss is less likely” because data is “accessible from anywhere” and doesn’t involve “…lost or stolen USB stick(s) or CD(s)” seems awfully arbitrary given that one of the most interesting data loss/leakage incidents in recent Cloud history came from Google’s Docs offering due to an operator (Google) system misconfiguration.  USB sticks and CDs are also a very narrow definition of data loss/leakage.

Then there’s the more global view SaaS and other cloud providers have, Feigenbaum says. “As an enterprise, you only see a small slice of what’s affecting you [threat-wise],” Feigenbaum said during a panel on cloud security at the RSA Conference in April. “A cloud provider can have the economy of scale for a holistic vision…the cloud shifts security and also makes it better,” he said.

I don’t have anything to argue about here; a wider perspective and better visibility is a good thing.  Again, however, this depends upon the type of service, what is being monitored and protected, on behalf of whom and from whom.

But that doesn’t mean you should blindly trust your cloud provider, though the larger ones do tend to have a better handle on threats due to their size, Forrester’s Wang says. “These people deal with security issues at more complex levels than your own IT team sees on a daily basis,” Wang says. “It’s a misconception to say cloud security is definitely less capable or more problematic.”

No, you shouldn’t blindly trust your providers, but that last statement suggests we should similarly trust that providers do a better job and deal with security issues at more complex levels?  What does that even mean? Please do NOT tell me that a SAS 70 Type II is your answer.  Just as “It’s a misconception to say cloud security is definitely less capable or more problematic,” I can just as easily suggest the converse is true without evidence.

I would like to see the empirical data that backs that set of statements up and the common metrics I can use to measure across providers and enterprises alike.  Thought so.

Thus far, security has been one of the main hurdles to adoption of cloud-based services, says Michelle Dennedy, chief governance officer for cloud computing at Sun Microsystems. “Trust in the cloud, more than technical abilities, has been hindering adoption,” Dennedy says. “But the cloud can be more secure than a private environment in many cases.”

Michelle is definitely correct; trust represents a fundamental issue with Cloud adoption, and it rolls both ways.  Asking us to “trust but verify” when what we’re being asked to verify can’t easily be trusted poses a very difficult scenario indeed.

By the way, I think the worst Cloud Security mistake is not knowing what Cloud Security even means.

/Hoff

Dear Mr. Schneier, If Cloud Is Nothing New, Why Are You Talking So Much About It?

June 3rd, 2009


Update: Please see this post if you’re wondering why I edited this piece.

I read a recent story in the Guardian from Bruce Schneier titled “Be Careful When You Come To Put Your Trust In the Clouds” in which he suggests that Cloud Computing is “…nothing new.”

Fundamentally it’s hard to argue with that title as clearly we’ve got issues with security and trust models as it relates to Cloud Computing, but the byline seems to be at odds with Schneier’s ever-grumpy dismissal of Cloud Computing in the first place.  We need transparency and trust: got it.

Many of the things Schneier says make perfect sense, whilst others just make me scratch my head.  Let’s look at a couple of them:

This year’s overhyped IT concept is cloud computing. Also called software as a service (Saas), cloud computing is when you run software over the internet and access it via a browser. The salesforce.com customer management software is an example of this. So is Google Docs. If you believe the hype, cloud computing is the future.

Clearly there is a lot of hype around Cloud Computing, but I believe it’s important — especially as someone who spends a lot of time educating and evangelizing — that people like myself and Schneier effectively separate the hype from the hope and try and paint a clearer picture of things.

To that point, Schneier does his audience a disservice by dumbing down Cloud Computing to nothing more than outsourcing via SaaS.  Throwing the baby out with the rainwater seems a little odd to me and while it’s important to relate to one’s audience, I keep sensing a strange cognitive dissonance whilst reading Schneier’s opining on Cloud.

Firstly, and as I’ve said many times, Cloud Computing is more than just Software as a Service (SaaS).  SaaS is clearly the more mature and visible set of offerings in the evolving Cloud Computing taxonomy today, but one could argue that players like Amazon with their Infrastructure as a Service (IaaS) offering, or even the aforementioned Google and Salesforce.com with their Platform as a Service (PaaS) offerings, might take umbrage at Schneier’s suggestion that Cloud is simply some “…software over the internet” accessed “…via a browser.”

Overlooking IaaS and PaaS is clearly a huge miss here and it calls into question the point Schneier makes when he says:

But, hype aside, cloud computing is nothing new . It’s the modern version of the timesharing model from the 1960s, which was eventually killed by the rise of the personal computer. It’s what Hotmail and Gmail have been doing all these years, and it’s social networking sites, remote backup companies, and remote email filtering companies such as MessageLabs. Any IT outsourcing – network infrastructure, security monitoring, remote hosting – is a form of cloud computing.

The old timesharing model arose because computers were expensive and hard to maintain. Modern computers and networks are drastically cheaper, but they’re still hard to maintain. As networks have become faster, it is again easier to have someone else do the hard work. Computing has become more of a utility; users are more concerned with results than technical details, so the tech fades into the background.

<sigh> Welcome to the evolution of technology and disruptive innovation.  What’s the point?

Fundamentally, as we look beyond speeds and feeds, Cloud Computing — at all layers and offering types — is driving huge headway and innovation in the evolution of automation, autonomics and the applied theories of dealing with massive scale in compute, network and storage realms.  Sure, the underlying problems — and even some of the approaches — aren’t new in theory, but they are in practice.  The end result may very well be that a consumer of service may not see elements that are new technologically as they are abstracted, but the economic, cultural, business and operational differences are startling.

If we look at what makes up Cloud Computing, the five elements I always point to are:

[Image: the five key ingredients of Cloud Computing]

Certainly the first three are present today — and have been for some while — in many different offerings.  However, combining the last two, on-demand/self-service scale and dynamism with new economic models of consumption and allocation, is quite different, especially when done at extreme levels of scale with multi-tenancy.
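To illustrate why that economic piece is different, here’s a deliberately toy break-even sketch. The rates are hypothetical placeholders I made up, not any provider’s actual pricing; the shape of the result is what matters:

```python
# Toy comparison: owning always-on capacity vs. renting it on demand.
# Both hourly rates below are hypothetical, chosen only to show break-even.

OWNED_PER_HOUR = 0.10   # amortized hardware + power + admin (hypothetical)
CLOUD_PER_HOUR = 0.34   # pay-per-use instance rate (hypothetical)
HOURS_PER_MONTH = 730

def monthly_costs(utilization):
    """Owned capacity bills 24x7; on-demand bills only for the hours used."""
    owned = OWNED_PER_HOUR * HOURS_PER_MONTH
    cloud = CLOUD_PER_HOUR * HOURS_PER_MONTH * utilization
    return owned, cloud

for u in (0.05, 0.25, 0.50, 1.00):
    owned, cloud = monthly_costs(u)
    winner = "cloud" if cloud < owned else "owned"
    print(f"utilization {u:4.0%}: owned ${owned:6.2f} vs cloud ${cloud:6.2f} -> {winner}")

# With these made-up rates, pay-per-use wins below ~29% utilization; bursty,
# spiky demand is exactly where the new consumption model changes the economics.
```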

So let’s get to the meat of the matter: security and trust.

But what about security? Isn’t it more dangerous to have your email on Hotmail’s servers, your spreadsheets on Google’s, your personal conversations on Facebook’s, and your company’s sales prospects on salesforce.com’s? Well, yes and no.

IT security is about trust. You have to trust your CPU manufacturer, your hardware, operating system and software vendors – and your ISP. Any one of these can undermine your security: crash your systems, corrupt data, allow an attacker to get access to systems. We’ve spent decades dealing with worms and rootkits that target software vulnerabilities. We’ve worried about infected chips. But in the end, we have no choice but to blindly trust the security of the IT providers we use.

Saas moves the trust boundary out one step further – you now have to also trust your software service vendors – but it doesn’t fundamentally change anything. It’s just another vendor we need to trust.

Fair enough.  So let’s chalk one up here to “Cloud is nothing new — we still have to put our faith and trust in someone else.”  Got it.  However, by again excluding the notion of PaaS and IaaS, Bruce fails to recognize the differences in both responsibility and accountability that these differing models bring; limiting Cloud to SaaS, while convenient for a cute argument, does not a complete case make:

[Image: how responsibility shifts across the SaaS/PaaS/IaaS stack]

To what level you are required to and/or feel comfortable transferring responsibility depends upon the provider and the deployment model; the risks associated with an IaaS-based service can be radically different from those of one from a SaaS vendor. With SaaS, security can be thought of from a monolithic perspective — that of the provider; they are responsible for it.  In the case of PaaS and IaaS, these trade-offs become more apparent, and you’ll find that this “outsourcing” of responsibility is diminished whilst the mantle of accountability is not.  This is pretty important if you want to be generic in your definition of “Cloud.”
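To make that last point concrete, here’s a simplified sketch (my own generalization, not anything from Schneier’s piece) of who operates each layer of the stack under each model. Note what never moves:

```python
# Who operates each layer of the stack? "P" = provider, "C" = customer.
# A deliberately simplified sketch; real contracts draw finer lines.

LAYERS = ["facility", "network", "hardware", "virtualization",
          "OS", "middleware", "application", "data governance"]

OPERATES = {
    "IaaS": ["P", "P", "P", "P", "C", "C", "C", "C"],
    "PaaS": ["P", "P", "P", "P", "P", "P", "C", "C"],
    "SaaS": ["P", "P", "P", "P", "P", "P", "P", "C"],
}

print(f"{'layer':<16}" + "".join(f"{m:>6}" for m in OPERATES))
for i, layer in enumerate(LAYERS):
    print(f"{layer:<16}" + "".join(f"{OPERATES[m][i]:>6}" for m in OPERATES))

# However much operation you transfer up the stack, accountability for
# the data (the bottom row) stays with the customer in every column.
```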

Here’s where I see Bruce going off the rails from his “Cloud is nothing new” rant, much in the same way I’d expect he would suggest that virtualization is nothing new, either:

There is one critical difference. When a computer is within your network, you can protect it with other security systems such as firewalls and IDSs. You can build a resilient system that works even if those vendors you have to trust may not be as trustworthy as you like. With any outsourcing model, whether it be cloud computing or something else, you can’t. You have to trust your outsourcer completely. You not only have to trust the outsourcer’s security, but its reliability, its availability, and its business continuity.

You don’t want your critical data to be on some cloud computer that abruptly disappears because its owner goes bankrupt. You don’t want the company you’re using to be sold to your direct competitor. You don’t want the company to cut corners, without warning, because times are tight. Or raise its prices and then refuse to let you have your data back. These things can happen with software vendors, but the results aren’t as drastic.


Trust is a concept as old as humanity, and the solutions are the same as they have always been. Be careful who you trust, be careful what you trust them with, and be careful how much you trust them. Outsourcing is the future of computing. Eventually we’ll get this right, but you don’t want to be a casualty along the way.

So therefore I see a huge contradiction.  How we secure — or allow others to secure — our data is very different in the Cloud; it *is* something new in its practical application.   There are profound operational, business and technical (let alone regulatory, legal, governance, etc.) differences that do pose new challenges. Yes, we should take the best practices related to “outsourcing” that we’ve built over time and apply them to the Cloud.  However, the collision course of virtualization, converged fabrics and Cloud Computing is pushing the boundaries of all we know.

Per the examples above, our challenges are significant.  The tech industry thrives on the ebb and flow of evolutionary punctuated equilibrium; what’s old is always new again, so it’s important to remember a couple of things:

  1. Harking back (a whopping 60 years) to the “dawn of time” in the IT/Computing industry to make the case that things “aren’t new” is sort of silly and simply proves you’re the tallest and loudest guy in a room full of midgets.  Here’s your sign.
  2. I don’t see any suggestions for how to make this better in all these rants about mainframes, only FUD.
  3. If “outsourcing is the future of computing” and we are to see both evolutionary and revolutionary disruptive innovation, shouldn’t we do more than simply hope that “…eventually we’ll get this right?”

The past certainly repeats itself, which explains why every 20 years bell-bottoms come back in style…but ignoring the differences in application, however incremental, is a bad idea.  In many regards we have not learned from our mistakes, or we fail to recognize patterns, but you can’t drive forward by only looking in the rear-view mirror, either.

Regards,

/Hoff

Observations on “Securing Microsoft’s Cloud Infrastructure”

June 1st, 2009

I was reading a blog post from Charlie McNerney, Microsoft’s GM, Business & Risk Management, Global Foundation Services, on “Securing Microsoft’s Cloud Infrastructure.”

Intrigued, I read the white paper to first get a better understanding of the context for his blog post and to also grok what he meant by “Microsoft’s Cloud Infrastructure.”  Was he referring to Azure?

The answer, per the whitepaper, is that Microsoft — along with everyone else in the industry — now classifies all of its online Internet-based services as “Cloud”:

Since the launch of MSN® in 1994, Microsoft has been building and running online services. The GFS division manages the cloud infrastructure and platform for Microsoft online services, including ensuring availability for hundreds of millions of customers around the world 24 hours a day, every day. More than 200 of the company’s online services and Web portals are hosted on this cloud infrastructure, including such familiar consumer-oriented services as Windows Live™ Hotmail® and Live Search, and business-oriented services such as Microsoft Dynamics® CRM Online and Microsoft Business Productivity Online Standard Suite from Microsoft Online Services. 

Before I get to the part I found interesting, I think the whitepaper (below) does a good job of providing a 30,000-foot view of how Microsoft applies the lessons learned over its operational history, along with the SDL, to its “Cloud” offerings.  It’s something designed to market the fact that Microsoft wants us to know they take security seriously.  Okay.

Here’s what I found interesting in Charlie’s blog post; it appears in the last two sentences (boldfaced):

The white paper we’re releasing today describes how our coordinated and strategic application of people, processes, technologies, and experience with consumer and enterprise security has resulted in continuous improvements to the security practices and policies of the Microsoft cloud infrastructure.  The Online Services Security and Compliance (OSSC) team within the Global Foundation Services division that supports Microsoft’s infrastructure for online services builds on the same security principles and processes the company has developed through years of experience managing security risks in traditional software development and operating environments. Independent, third-party validation of OSSC’s approach includes Microsoft’s cloud infrastructure achieving both SAS 70 Type I and Type II attestations and ISO/IEC 27001:2005 certification. We are proud to be one of the first major online service providers to achieve ISO 27001 certification for our infrastructure. We have also gone beyond the ISO standard, which includes some 150 security controls. We have developed 291 security controls to date to account for the unique challenges of the cloud infrastructure and what it takes to mitigate some of the risks involved.

I think it’s admirable that Microsoft is sharing its methodologies and ISMS objectives and it’s a good thing that they have adopted ISO standards and secured SAS70 as a baseline.  

However, I would be interested in understanding what 291 security controls mean to a security posture versus, say, 178.  It sounds a little like Twitter follower counts.

I can’t really explain why those last two sentences stuck in my craw, but they did.

I’d love to know more about what Microsoft considers those “unique challenges of the cloud infrastructure” as well as the risk assessment framework(s) used to manage/mitigate them — I’m assuming they’ve made great STRIDEs in doing so. 😉

/Hoff

Incomplete Thought: Cloud Computing & Innovation – Government IT’s Version of Ethanol?

May 31st, 2009

Time to stick my neck outside my shell again…

I was reading MIT’s Technology Review (registration required) and came across an interesting article titled “Can Technology Save the Economy?”

This was a very thought-provoking read, as it highlighted how the tens of billions of dollars allocated to energy and information technology in the U.S. stimulus bill leave many economists and innovation experts extremely skeptical:

The concern over the stimulus bill’s technology spending is not just that it offends conventional macroeconomic theory about the best way to boost the economy; it’s that it might harm the very technologies it means to support.  Because the bill was written quickly and shaped by political expediency, economists and experts on innovation policy are leery of many of its funding choices.

Could extending billions of dollars’ worth of fiber-optic lines to rural communities, for example, become a boondoggle?  Or what if utilities run high-power transmission lines to remote solar or wind farms only to find that the electricity they produce is too expensive to compete with other sources?

As a historical analogy, experts point to corn-derived ethanol.  Once the darling of alternative-energy advocates, the heavily subsidized biofuel is now routinely condemned by both environmentalists and economists.  Yet because ethanol’s use in gasoline is now mandated by federal law, and a large industry is now invested in its production, its production is likely to continue even though it offers few environmental benefits over gasoline.

This example shows how far we’ve gone down the path of “corn power” despite its failure to deliver on its promises.  We can’t escape the gravity of our investment, driven by a fervor surrounding its adoption that in many cases was based upon untested theories and unsubstantiated practice.

Ethanol was designed to resolve dependencies on straight fossil fuels.  It was supposed to cost less and deliver better performance at lower emissions.  It hasn’t quite worked out that way.  In reality, ethanol has produced many profound unanticipated impacts: financial, environmental, economic, political and social.  Has its production and governmentally-forced adoption kept better solutions from being properly investigated?

Despite my unbridled enthusiasm for Cloud Computing, I am conflicted as I examine it using a similar context to the ethanol example above.  I fully admit that I’m stretching the analogy here and mixing metaphors, but the article got me thinking and some of that is playing out here.  It *is* an “incomplete thought,” after all.

While Cloud adoption in certain scenarios may certainly offer tremendous agility and, in some cases, outright cost savings, one must ask whether the cult of personality surrounding Cloud, especially in the public sector, is unduly influenced by the pressing macroeconomic conditions, by confusing applications of ROI across the dozens of use cases enveloped by a single term, and by the same sort of political expediency described above.

With all of its many benefits, Cloud presents as many (if not more) challenges, stemming from problems we’ve had, and not solved, for decades.  Cloud is a convenient reason to leap forward whilst refusing to look from whence we are jumping; we’re not necessarily solving problems, we’re “transforming” them.

As with much disruptive innovation, the timing and intersection of technology, religion, culture, economics and politics can mean the difference between bust or boom.  

In the case of Cloud, I’d suggest that the collision space provides the proverbial perfect storm; the hyping of Cloud Computing is largely premature, and it has become a convenient scape-horse to which we are hitching our cost-laden IT wagons.  The momentous interest surrounding Cloud in the Public Sector sounds eerily similar to the ethanol scenario above.  Are we so wrapped around the axle with Cloud Computing that we’re actually blinding ourselves to fundamentally better ways of solving the problems we have?

The danger, of course, is that while the federal dollars could help renewable-energy companies survive the recession, they could also prop up existing technologies that would not be competitive in an open market.  Not only could the federal spending support uneconomical energy sources (as has been the case with ethanol), but the resulting backlash could discourage policy makers, investors, and the public from embracing newer, more efficient technologies.  

Putting on my devil’s advocate hat, I have to ask: “Are we ignoring potentially better solutions to our problems?”  Certainly we’re seeing a definite spike in the punctuated equilibrium of IT’s evolution, but at this point one might argue it’s supply-driven demand.  Is Cloud Computing really the answer to our problems, or a fantastic and convenient way of treating the symptoms?

You can’t swing a dead cat in Washington without somebody talking about moving something to the Cloud.  On the one hand, it’s fantastic to see Government think outside the box, but what happens when the box collapses?

The movie will play out and we’ll have to wait and see whether the horsepower delivered by Cloud is more analogous to ethanol — a reformulated version of gasoline that doesn’t really deliver on its promises, but that we’re stuck with — or whether we’ll see the equivalent of the IT hydrogen fuel cell instead.

Time will tell.  What do you think? 

/Hoff

JERICHO FORUM AND CLOUD SECURITY ALLIANCE JOIN FORCES TO ADDRESS CLOUD COMPUTING SECURITY

May 27th, 2009

At the RSA conference I left the Cloud Security Alliance launch early in order to attend the Jericho Forum’s session on Cloud Computing.  It seems we haven’t solved the teleportation issue yet.  Maybe in the next draft…

We had a great session at the Jericho event with myself, Rich Mogull and Gunnar Peterson discussing Jericho’s COA and Cloud Cube work.  The conclusion of the discussion was ultimately that Jericho and the CSA should join forces.

Voila:

JERICHO FORUM AND CLOUD SECURITY ALLIANCE JOIN FORCES TO ADDRESS CLOUD COMPUTING SECURITY

London and San Francisco, 21 May 2009 – Jericho Forum, the high-level independent security expert group, and the Cloud Security Alliance, a not-for-profit group of information security and cloud computing security leaders, announced today that they are working together to promote best practices for secure collaboration in the cloud.  Both groups have a single goal: to help business understand the opportunity posed by cloud computing and encourage common and secure cloud practices.  Within the framework of the new partnership, both groups will continue to provide practical guidance on how to operate securely in the cloud while actively aiming to align the outcomes of their work.

“This is good news for the industry” said Adrian Seccombe, CISO and Senior Enterprise Information Architect at Eli Lilly and Jericho Forum board member.  “The Cloud represents a compelling opportunity to achieve more with less but at the same time presents considerable security challenges.  For business to get the most out of it, this new development must be addressed responsibly and with eyes fully open.  Working together we believe that the Cloud Security Alliance and Jericho Forum can bring clear leadership in this important area and dispel some of the hype and confusion stirred up in the cloud.”

"The Cloud represents a fundamental shift in computing with limitless potential.  Solving the new set of risk issues it introduces is a shared responsibility of cloud provider and customer alike," said Jim Reavis, Co-founder of the Cloud Security Alliance (CSA).  "The Jericho Forum has shown early leadership in articulating and addressing the de-perimeterisation concept.  We are proud to join forces with them to provide pragmatic guidance for safely leveraging the cloud today as well as a clear vision for a future of pervasive and secure cloud computing."

Jericho Forum has led the way for the last five years in the way de-perimeterisation is tackled and more recently in developing secure collaborative architectures. Last year the group published a Collaboration Oriented Architectures framework presenting a set of design principles allowing businesses to protect themselves against the security challenges posed by increased collaboration and to realize the business potential offered by Web 2.0.  The Cloud Security Alliance has engaged noted and well-recognised experts within crucial areas such as governance, law, network security, audit, application security, storage, cryptography, virtualization and risk management to provide authoritative guidance on how to adopt cloud computing solutions securely.

Both groups recently published initial guidelines for cloud computing.  The Jericho Forum published a Cloud Cube Model designed to be an essential first tool to help business evaluate the risk and opportunity associated with moving in to the cloud.  A video presentation of this is available on YouTube (see http://www.youtube.com/jerichoforum) and an accompanying Cloud Cube Model positioning paper is downloadable from the Jericho Forum Web site (http://www.opengroup.org/jericho/cloud_cube_model_v1.0.pdf).  At RSA in San Francisco, the Cloud Security Alliance announced its formation and published an inaugural whitepaper, “Guidance for Critical Areas of Focus in Cloud Computing”, downloadable from http://www.cloudsecurityalliance.org/guidance/.

About Jericho Forum

Jericho Forum is an international IT security thought-leadership group dedicated to defining ways to deliver effective IT security solutions that will match the increasing business demands for secure IT operations in our open, Internet-driven, globally networked world.  Members include many leading organisations from both the user and vendor community including IBM, Symantec, Boeing, AstraZeneca, Qualys, BP, Eli Lilly, KLM, Cap Gemini, Motorola and Hewlett Packard.  

Together their aim is to:

·  Drive and influence development of new architectures, inter-workable technology solutions, and implementation approaches for securing our de-perimeterizing world

·  Support development of open standards that will underpin these technology solutions.

A full list of member organisations can be seen at http://www.opengroup.org/jericho/memberCompany.htm.

About Cloud Security Alliance

The Cloud Security Alliance is a not-for-profit organization with a mission to promote the use of best practices for providing security assurance within Cloud Computing, and to provide education on the uses of Cloud Computing to help secure all other forms of computing. The Cloud Security Alliance is led by industry practitioners and supported by founding charter companies PGP Corporation, Qualys, Inc. and Zscaler, Inc. For further information, the Cloud Security Alliance website is www.cloudsecurityalliance.org

It’s great to see things moving along.  Previously we also announced that the CSA and ISACA have joined forces to promote security best practices in Cloud Computing.

In case you’ve not seen it, we’re looking for volunteers to work on specific areas of the v2.0 guidance targeted for October, 2009.  You can also contribute your thoughts on the existing guidance via our CSA Google Group.

Quick Bit: Virtual & Cloud Networking – Where It ISN’T Going…

May 26th, 2009

In my Four Horsemen presentation, I made reference to one of the challenges with how the networking function is being integrated into virtual environments.  I’ve gone on to highlight how this is exacerbated in Cloud networking, also.

Specifically, as it comes to understanding how the network plays in virtual and Cloud architectures, it’s not where the network *is* in the increasingly complex virtualized, converged and unified computing architectures, it’s where networking *isn’t.*

What do I mean by this?  Here’s a graphical representation that I built about a year ago.  It’s well out-of-date and overly-simplified, but you get the picture:

[Image: where networking lives across the physical and virtual stack]

There’s networking at almost every substrate level — in the physical and virtual construct.  In our never-ending quest to balance performance, agility, resiliency and security, we’re ending up with a trade-off I call simplexity: the most complex simplicity in networking we’ve ever seen.   I wrote about this in a blog post last year titled “The Network Is the Computer…(Is the Network, Is the Computer…)”
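Since the image is dated anyway, here’s a rough sketch (my own enumeration, not the original diagram) of just how many substrate levels a single packet might touch between a guest application and the physical wire:

```python
# Where networking shows up in a virtualized stack, roughly ordered from
# the guest outward. A simplified, hypothetical enumeration of my own;
# the original diagram was more detailed.

NETWORK_TOUCHPOINTS = [
    ("guest OS",         "virtual NIC / guest TCP-IP stack"),
    ("hypervisor",       "vSwitch (plus any third-party virtual switches)"),
    ("virtual security", "virtual appliances / VMM introspection hooks"),
    ("server hardware",  "physical NICs, CNAs, embedded switching"),
    ("chassis",          "blade-chassis switch modules"),
    ("access layer",     "top-of-rack / end-of-row switches"),
    ("aggregation",      "converged data/storage fabrics"),
    ("core",             "datacenter core and interconnect"),
]

for layer, function in NETWORK_TOUCHPOINTS:
    print(f"{layer:<17} -> {function}")

# Eight-ish places to configure, secure and troubleshoot: "simplexity."
```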

If you take a look at some of the more recent blips to appear on the virtual and Cloud networking  radar, you’ll see examples such as:

This list is far from inclusive.  Yes, I know I’ve left off blade server manufacturers and other players like HP (ProCurve) and Juniper, etc., as well as ADC vendors like f5.  It’s not that I don’t appreciate their solutions; it’s just that I have a couple of free cycles to write this, and the list above appears at the top of my stack.

I plan on writing in more detail about the impact some of these technologies are having on next generation datacenters and Cloud deployments, as it’s a really interesting subject for me coming from my background at Crossbeam.

The most startling differences are in the approach of either putting the networking (and all its attendant capabilities) back in the hands of the network folks or allowing the server/virtual server admins to continue to leverage their foothold in the space and manage the network as a component of the converged and virtualized solution as a whole.

My friend @aneel (Twitter) summed it up really well this morning when comparing the Blade Network Technologies VMready offering and the Cisco Nexus 1000v:

huh.. where cisco uses nx1kv to put net control more in hands of net ppl, bnt uses vmready to put it further in server/virt admin hands

Looking at just the small sampling of solutions above, we see the diversity in integrated networking, external fabrics, converged fabrics (including storage) and add-on network processors.

It’s going to be a wild ride, kids.  Buckle up.

/Hoff

Incomplete Thought: Cloud Workloads – Really?

May 22nd, 2009

Over the last few weeks I’ve listened intently to many Cloud Computing discussions and presentations.  I’ve read many blogs.   I’ve listened to many podcasts.

There’s a residual theme surfacing and I have repeatedly observed a certain word creeping into the colloquy: workloads.

“Balancing workloads,” “distributing workloads,” “managing workloads,” “dynamic workloads…” it just keeps coming up.

I have a question:

Can someone please kindly define “workload” for me within the context of what the Cloud offers TODAY and how it relates to a typical large enterprise?

The reason I ask is that these conversations suggest we’ve reached some capability maturity wherein one can actually quantify Cloud workloads the way one might in a Grid or HPC context.  I mean this beyond the simple scope of something like network load-balancing.  I qualify this within the context of how an enterprise with a set of applications and information might respond were it considering moving to the Cloud: “We don’t have ‘workloads,’ we have applications.”

I dare say that from a provider perspective you’d have different definitions also: providers of SaaS would look at things differently than those of PaaS and IaaS.  In fact, the further down the stack you go from SaaS to IaaS, the less granular the definitions get, as for the most part the packaging of an application/information gets more coarse from the provider’s perspective: a VM is a good example.  Is THAT a workload, or is it the processes, apps and data within it?  See what I mean?

The way “workloads” is being bandied about seems to suggest that TODAY, using “Cloud,” one is able to enable and trust autonomics and governance capabilities for any and all elements of compute, network and storage. It also implies that the points of integration are well defined and that applications and data are separated.  Nice theory, wrong Universe.

To me, we’re certainly not there in the enterprise space — in fact, we’ve skipped over this critical step (Real Time Infrastructure/Adaptive Enterprise, etc.) in the hopes that we’ll mythically achieve it in the Cloud.  I won’t bother to poke the bear with the arguments related to portability and interoperability.

(Ab)using this term and suggesting that we’re anywhere near being able to do anything with “workloads,” when we can barely even get our arms around application and service definition in the Cloud, is laughable.

It’s writing checks the Cloud can’t cash.

Maybe it’s just me, but discussing “workloads” is another exercise in Shark Jumping today.

/Hoff