Archive

Archive for the ‘Cloud Computing’ Category

ENISA launches Cloud Computing Security Risk Assessment Document

November 20th, 2009 1 comment

ENISA (the European Network and Information Security Agency) today launched its 124-page report on Cloud Computing Security Risk Assessment.

At first glance it’s an excellent read and will be a fantastic accompaniment to the CSA’s guidance.  I plan to dig into it more over the weekend.  I really appreciate the risk assessment approach, which allows folks to prioritize their efforts on understanding the relevant high-level issues associated with Cloud.

Very well done.  I look forward to seeing how CSA and ENISA can further work together on upcoming projects!  I think the European perspective will help bring some balance and alternative views on Cloud in regards to legal and compliance issues specifically.

You can find the document here.

ENISA, supported by a group of subject matter experts comprising representatives from industry, academia and governmental organizations, has conducted, in the context of the Emerging and Future Risk Framework project, a risk assessment of the cloud computing business model and technologies. The result is an in-depth and independent analysis that outlines some of the information security benefits and key security risks of cloud computing. The report also provides a set of practical recommendations.


Cloud Security: Dilbert Style

November 19th, 2009 1 comment

[Dilbert comic on cloud security]

From: http://dilbert.com/strips/comic/2009-11-19/

Just A Reflective Bookmark: Microsoft’s Azure…The Dark Horse Emergeth…

November 17th, 2009 3 comments

I’ve said it before, I’ll say it again:

Don’t underestimate Microsoft and the potential disruption Azure will deliver.*

You might not get Microsoft’s strategy for Azure. Heck, much of Microsoft may not get Microsoft’s strategy for Azure, but one thing is for sure: Azure will be THE platform for products, solutions and services across all mediums from Redmond moving forward. Ray Ozzie said it best at PDC:

The vision of Azure, said Ozzie, is “three screens and a cloud,” meaning internet-based data and software that plays equally well on PCs, mobile devices, and TVs.

I think the underlying message here is that while we often think of Cloud from the perspective of interacting with “data,” we should not overlook how mobility, voice and video factor into the equation…

According to Ozzie, Azure will go live in production on January 1st, and “six data centers in North America, Europe, and Asia will come online.” (I wonder when Amazon will announce APAC support…)

Azure will be disruptive, especially for Windows-heavy development shops, and the notion of secure data access/integration between public and private clouds is not lost on them, either:

Microsoft also announced another of its city-based code names. Sydney is a security mechanism that lets businesses exchange data between their servers and the Azure cloud. Entering testing next year, Sydney should allow a local application to talk to a cloud application. It will help businesses that want to run most of an application in Microsoft’s data center, but that want to keep some sensitive parts running on their own servers.

It will be interesting to see how “Sydney” manifests itself as compared to AWS’s Virtual Private Cloud.

Competitors know that Azure is no joke, either, which is why we see a certain IaaS provider adding .NET framework support, as well as Cloud Brokers (bridges) such as RightScale announcing support for Azure. Heck, even GoGrid demo’d “interoperability” with Azure. Many others are announcing support, including the Federal Government via Vivek Kundra, who joined Ozzie to announce that the 2009 Pathfinder Innovation Challenge will be hosted on Azure.

Stir in the fact that Microsoft is also extending its ecosystem of supported development frameworks and languages: at PDC, Matt Mullenweg from WordPress (Automattic, to be specific) announced he is developing on Azure. This shows how Azure will support things like PHP and MySQL as well as .NET Services (now called AppFabric Access Control).

Should be fun.

Hey, I wonder (*wink*) if Microsoft will be interested in participating in the A6 Working Group to provide transparency and visibility that some of their IaaS/PaaS competitors (*cough* Amazon *cough*) who are clawing their way up the stack do not…

/Hoff

*To be fair, a year ago when Azure was announced, I don’t think any of us got Azure, and I simply ignored it for the most part. Not the case any longer; it makes a ton of sense if they can execute.

Silent Lucidity: IaaS — Already A Dinosaur? The Evolution of PaaSasaurus Rex…

November 12th, 2009 8 comments

Sitting in an impressive room at the Google campus in Mountain View last month, I asked the collective group of brainpower a slightly rhetorical question:

How much longer do you feel pure-play Infrastructure-As-A-Service will be a relevant service model within the spectrum of cloud services?

I couched the question with previous “incomplete thoughts*” relating to the move “up-stack” by IaaS providers — providing value-added, at-cost services to both differentiate and soften the market for what I call the “PaaSification” of the consumer.  I also highlighted the move “down-stack” by SaaS vendors building out platforms to support a broader ecosystem and value proposition.

In the long term, I think ultimately the trichotomy of the SPI model will dissolve thanks to commoditization and the need for providers to differentiate — even at mass scale.  We’ll ultimately just talk about service delivery and the platform(s) used to deliver them.  Infrastructure will enable these services, of course, but that’s not where the money will come from.

Just look at the approach of providers such as Amazon, Terremark and Savvis and how they are already clawing their way up the PaaS stack, adding more features and functions that either equalize public cloud capabilities with those of the enterprise or even differentiate from it.  Look at Microsoft’s Azure.  How about Heroku, Engine Yard, Joyent?  How about VMware and SpringSource?  All platform plays. Develop, click, deploy.

As I mention in my Cloudifornication presentation, I think that from a security perspective, PaaS offers the potential of eliminating entire classes of vulnerabilities in the application development lifecycle by enforcing sanitary programmatic practices across the derivative works built upon them.  I look forward also to APIs and standards that allow for consistency across providers. I think PaaS has the greatest potential to deliver this.

There are clearly trade-offs here, but as we start to move toward the two key differentiators (at least for public clouds) — management and security — I think the value of PaaS will really start to shine.

Probably just another bout of obviousness, but if I were placing bets, this is where I’d sink my nickels.

You?

/Hoff

* The most relevant “incomplete thought” is the one titled “Incomplete Thought: Virtual Machines Are the Problem, Not the Solution…” in which I kicked around the notion that virtualization-enabled IaaS and the VM containers they enable are simply an ugly solution to an uglier problem…

Dear Santa: All I Want For Christmas On My Amazon Wishlist Is a Straight Answer…

October 31st, 2009 4 comments

A couple of weeks ago amidst another interesting Amazon Web Services announcement featuring the newly-arrived Relational Database Service, Werner Vogels (Amazon CTO) jokingly retweeted a remark that someone made suggesting he was like “…Santa for nerds.”

All I want for Christmas is my elastic IP...


So, now that I have Werner following me on Twitter and a confirmed mailing address (clearly the North Pole) I thought I’d make my Christmas wish early this year.  I’ve put a lot of thought into this.

Just when I had settled on a shiny new gadget from the bookstore side of the house, I saw Amazon’s response to Eran Tromer’s (et al) research on Cloud Cartography featured in this Computerworld article written by my old friend Jaikumar Vijayan titled “Amazon downplays report highlighting vulnerabilities in its cloud service.”

I feature Eran and his team’s work in my Cloudifornication presentation.  You can read more about it on Craig’s blog here.

I quickly cast aside my yuletide treasure list and instead decided to ask Santa (Werner/AWS) for a most important present: a straight answer from AWS, not delivered by a PR spokeshole, that speaks openly, transparently and in an engaging fashion with customers and the security community.

Here’s what torqued me (emphasis is mine):

In response, Amazon spokeswoman Kay Kinton said today that the report describes cloud cartography methods that could increase an attacker’s probability of launching a rogue virtual machine (VM) on the same physical server as another specific target VM.

What remains unclear, however, is how exactly attackers would be able to use that presence on the same physical server to then attack the target VM, Kinton told Computerworld via e-mail.

The research paper itself described how potential attackers could use so-called “side-channel” attacks to try to steal information from a target VM. The researchers argued that a VM sitting on the same physical server as a target VM could monitor shared resources on the server to make highly educated inferences about the target VM.

By monitoring CPU and memory cache utilization on the shared server, an attacker could determine periods of high-activity on the target servers, estimate high-traffic rates and even launch keystroke timing attacks to gather passwords and other data from the target server, the researchers had noted.

Such side-channel attacks have proved highly successful in non-cloud contexts, so there’s no reason why they shouldn’t work in a cloud environment, the researchers postulated.

However, Kinton characterized the attack described in the report as “hypothetical,” and one that would be “significantly more difficult in practice.”

“The side channel techniques presented are based on testing results from a carefully controlled lab environment with configurations that do not match the actual Amazon EC2 environment,” Kinton said.

“As the researchers point out, there are a number of factors that would make such an attack significantly more difficult in practice,” she said.

So while the Amazon spokesperson admits the vulnerability/capability exists, rather than rationally addressing the issue, thanking the researchers for pointing it out, and providing customers some level of detail regarding how this vulnerability is mitigated, we get handwaving that attempts to have us focus not on the vulnerability but rather on the difficulty of a hypothetical exploit.  That example isn’t the point of the paper. The fact that I could deliver a targeted attack is.
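For readers who want the intuition behind the cache side channel, here is a deliberately crude prime-and-probe sketch of my own; the buffer and line sizes are assumptions, and Python timing is far noisier than the researchers’ lab setup. The idea: fill the shared cache, let the co-resident workload run, then time a re-read; evictions show up as a slower pass.

```python
import time

# Assumed sizes -- real values depend on the CPU; these are illustrative.
CACHE_BYTES = 8 * 1024 * 1024   # last-level cache size (assumption)
LINE = 64                       # cache line size (typical)

buf = bytearray(CACHE_BYTES)

def prime():
    # "Prime": touch every cache line so our buffer fills the shared cache.
    s = 0
    for i in range(0, CACHE_BYTES, LINE):
        s += buf[i]
    return s

def probe():
    # "Probe": re-read the buffer and time the pass. Lines evicted by a
    # co-resident workload must be refetched from memory, slowing it down.
    start = time.perf_counter_ns()
    s = 0
    for i in range(0, CACHE_BYTES, LINE):
        s += buf[i]
    return time.perf_counter_ns() - start, s

prime()
baseline, _ = probe()            # warm-cache timing
# ... in the attack, the victim VM's activity happens here ...
elapsed, _ = probe()
print("second probe slower than baseline:", elapsed > baseline)
```

Repeating the probe loop over time is what lets an attacker infer activity patterns (and, in the keystroke-timing variant, inter-arrival times) on the neighboring VM.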

Earth to Amazon: this sort of thing doesn’t work. It’s a lousy tactic.  It simply says that either you think we’re all stupid or you’re suffering from a very bad case of incident-handling immaturity. Take a look around you: there are plenty of companies doing this right. You’re not one of them.  Consistently.

Tromer and crew gave a single example of how this vulnerability might be exploited, which the AWS spokesperson latched on to as a way of defusing the seriousness of the underlying vulnerability by downplaying this sample exploit.  There are potentially dozens of avenues to be explored here.  Craig talked about many of them in his blog (above).  What we got instead was this:

At the same time, Amazon takes all reports of vulnerabilities in its cloud infrastructure very seriously, she said. The company will continue to investigate potential exploits thoroughly and continue to develop features that bolster security for users of its cloud service, she said.

Amazon Web Services has already rolled out safeguards that prevent potential attackers from using the cartography techniques described in the paper, Kinton said without offering any details.

She also pointed to the recently launched Amazon Web Service Multi-Factor Authentication (AWS MFA) as another example of the company’s continuing effort to bolster cloud security. AWS MFA is designed to provide an extra layer of access control to a customer’s Web services account, Kinton said.

Did you catch “…without offering any details” or were you simply overwhelmed by the fact that you can use a token to authenticate your single-key driven AWS console instead?
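For context on what that token actually does: AWS MFA devices are OATH-style one-time-password tokens, and the general mechanism is easy to sketch. This is a from-scratch illustration of RFC 4226 (HOTP) and RFC 6238 (TOTP), not AWS’s actual implementation, and the secret below is the RFC’s published test key:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the 8-byte big-endian counter,
    # "dynamic truncation" to 31 bits, then reduce modulo 10^digits.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP keyed on the current 30-second time window.
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226's published test secret yields "755224" at counter 0.
print(hotp(b"12345678901234567890", 0))
```

The point, of course, is that this protects the front door of the account; it has nothing to do with the co-residency issue above.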

I’m not interested in getting into a “full disclosure” battle here, but being dismissive, not providing clear-cut answers and being evasive without regard for transparency about issues like this or the DDoS attacks we saw with Bitbucket, etc. are going to backfire.  I posted about this before in previous blogs here and here.

If you want to be taken seriously by large enterprises and government agencies that require real answers to issues like this, you can engage with the security community or ignore us and get focused on by it (and me) until you decide that it’s a much better idea to do the former.  You’ll gain much more credibility and an eagerness to work with you instead of against you if you choose to use the force wisely 😉

Until then, may I suggest this?  I found it in the Amazon.com bookstore:

…You can download it to your Kindle in under a minute.

/Hoff

Cloud/Cloud Computing Definitions – Why they Do(n’t) Matter…

October 28th, 2009 No comments

A couple of weeks ago I wrote a piece titled Cloud: The Other White Meat…On Service Failures & Hysterics in which I summarized why Cloud/Cloud Computing (or what I now refer to as Cloudputing 😉) has become such a definitional Superfund cleanup site:

To me, cloud is the “other white meat” to the Internet’s array of widely-available chicken parts.  Both are tasty and if I order parmigiana made with either, they may even look or taste the same.  If someone orders it in a restaurant, all they say they care about is how it tastes and how much they paid for it.  They simply trust that it’s prepared properly and hygienically.   The cook, on the other hand, cares about the ingredients that went into making it, its preparation and delivery.  Expectations are critical on both sides of the table.

It’s all a matter of perspective.

and

It occurs to me that the explanation for this arises from two main perspectives that frame the way in which people discuss cloud computing:

  1. The experiential consumer’s view where anything past or present connected via the Internet to someone/thing where data and services are provided and managed remotely on infrastructure by a third party is cloud, or
  2. The operational provider’s view where the service architecture, infrastructure, automation and delivery models matter and fitting within a taxonomic box for the purpose of service description and delivery is important.

The consumer’s view is emotive and perceptive: “I just put my data in The Cloud” without regard to what powers it or how it’s operated. This is a good thing. Consumers shouldn’t have to care *how* it’s operated. They should ultimately just know it works, as advertised, and that their content is well handled.  Fair enough.

The provider’s view, however, is much more technical, clinical, operationally-focused and defined by architecture and characteristics that consumers don’t care about: infrastructure, provisioning, automation, governance, orchestration, scale, programmatic models, etc…this is the stuff that makes the magical cloud tick but is ultimately abstracted from view.  Fair enough.

However, context switching between “marketing” and “architecture” is folly; it’s an invalid argument, as is speaking from the consumer’s perspective to represent that of a provider and vice-versa.

Here are the graphical representations of those statements from my Cloudifornication presentation:

Cloud-Provider's View


Cloud-Consumer's View


Don’t Hassle the Hoff: Recent Press & Podcast Coverage & Upcoming Speaking Engagements

October 26th, 2009 1 comment


Here is some of the recent coverage from the last month or so on topics relevant to content on my blog, presentations and speaking engagements.  No particular order or priority and I haven’t kept a good record, unfortunately.

Press/Technology & Security eZines/Website/Blog Coverage/Meaningful Links:

Podcasts/Webcasts/Video:

Recent Speaking Engagements/Confirmed to speak at the following upcoming events:

  • Enterprise Architecture Conference, D.C.
  • Intel Security Summit 2009, Hillsboro OR
  • SecTor 2009, Toronto CA
  • EMC Innovation Forum, Franklin MA
  • NY Technology Forum, NY, NY
  • Microsoft Bluehat v9, Redmond WA
  • Office of the Comptroller of the Currency, San Antonio TX
  • Intercloud Working Group, GooglePlex CA 😉
  • CSC Leading Edge Forum, VA
  • DojoCon, VA

I also forgot to thank Eric Siebert for putting together the VMware Top 20 blog list and putting me on it as well as the fact that Rational Survivability made the Datamation 2009 Top 200 Tech Blogs list.

/Hoff

Can We Secure Cloud Computing? Can We Afford Not To?

October 22nd, 2009 2 comments

[The following is a re-post of the piece I did for the Microsoft (TechNet) blog as a lead-up to my Cloudifornication presentation at Bluehat v9, which I’ll be posting after I deliver the revised edition tomorrow.]

There have been many disruptive innovations in the history of modern computing, each of them in some way impacting how we create, interact with, deliver, and consume information. The platforms and mechanisms used to process, transport, and store our information likewise endure change, some in subtle ways and others profoundly.

Cloud computing is one such disruption whose impact is rippling across the many dimensions of our computing experience. Cloud – in its various forms and guises — represents the potential cauterization of wounds which run deep in IT; self-afflicted injuries of inflexibility, inefficiency, cost inequity, and poor responsiveness.

But cost savings, lessening the environmental footprint, and increased agility aren’t the only things cited as benefits. Some argue that cloud computing offers the potential for not only equalling what we have for security today, but bettering it. It’s an interesting argument, really, and one that deserves some attention.

Addressing it requires a shift in perspective relative to the status quo.

We’ve been at this game for nearly forty years. With each new (r)evolutionary period of technological advancement and the resultant punctuated equilibrium that follows, we’ve done relatively little to solve the security problems that plague us, including entire classes of problems we’ve known about, known how to fix, but have been unable or unwilling to fix for many reasons.

With each pendulum swing, we attempt to pay the tax for the sins of our past with technology of the future that never seems to arrive.

Here’s where the notion of doing better comes into play.

Cloud computing is an operational model that describes how combinations of technology can be utilized to better deliver service; it’s a platform shuffle that is enabling a fierce and contentious debate on the issues surrounding how we secure our information and instantiate trust in an increasingly open and assumed-hostile operating environment which is in many cases directly shared with others, including our adversaries.

Cloud computing is the natural progression of the reperimeterization, consumerization, and increasing mobility of IT we’ve witnessed over the last ten years. Cloud computing is a forcing function that is causing us to shine light on the things we do and defend not only how we do them, but who does them, and why.

To set a little context and simplify discussion, if we break down cloud computing into a visual model that depicts bite-sized chunks, it looks like this:

Infostructure/Metastructure/Infrastructure


At the foundation of this model is the infrastructure layer that represents the traditional computer, network and storage hardware, operating systems, and virtualization platforms familiar to us all.

Cresting the model is the infostructure layer that represents the programmatic components such as applications and service objects that produce, operate on, or interact with the content, information, and metadata.

Sitting in between infrastructure and infostructure is the metastructure layer. This layer represents the underlying set of protocols and functions such as DNS, BGP, and IP address management, which “glue” together and enable the applications and content at the infostructure layer to in turn be delivered by the infrastructure.

We’ve made incremental security progress at the infrastructure and infostructure layers, but the technology underpinnings at the metastructure layer have been weighed, measured, and found lacking. The protocols that provide the glue for our fragile Internet are showing their age; BGP, DNS, and SSL are good examples.
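As one illustration of how thin that metastructure glue is: a DNS response is tied to its query by nothing stronger than a 16-bit transaction ID (plus the source port). A minimal query builder, with the packet layout per RFC 1035 and the name purely illustrative, shows how little an off-path attacker has to guess:

```python
import struct

def build_dns_query(name: str, txid: int) -> bytes:
    # Header: id, flags (recursion desired), QDCOUNT=1, zero other counts.
    # Note what's absent: no signature, no secret, no channel binding.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

query = build_dns_query("example.com", 0x1234)
# The txid is the only token a spoofed answer must echo to be accepted
# by a classic (pre-DNSSEC) resolver.
print(query[:2].hex())
```

Everything a forged answer must match fits in those first two bytes plus a port number; the rest of the trust model is, as I said, politeness.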

Ultimately the most serious cloud computing concern is presented by way of the “stacked turtles” analogy: layer upon layer of complex interdependencies predicated upon fragile trust models framed upon nothing more than politeness and with complexities and issues abstracted away with additional layers of indirection. This is “cloudifornication.”

The dynamism, agility and elasticity of cloud computing are, in all their glory, still predicated upon protocols and functions that were never intended to deal with these essential characteristics of cloud.

Without re-engineering these models and implementing secure protocols and the infrastructure needed to support them, we run the risk of cloud computing simply obfuscating the fragility of the supporting layers until the stack of turtles topples as something catastrophic occurs.

There are many challenges associated with the unique derivative security issues surrounding cloud computing, but we have the ability to remedy them should we so desire.

Cloud computing is a canary in the coal mine and it’s chirping wildly for now but that won’t last.  It’s time to solve the problems, not the symptoms.

/Hoff

[Edited the last sentence for clarity]


Incomplete Thought: The Cloud Software vs. Hardware Value Battle & Why AWS Is Really A Grid…

October 18th, 2009 2 comments

In discussing the role and long-term sustainable value of infrastructure versus software in cloud, some suggest that software will marginalize bespoke infrastructure and that the latter will simply commoditize.

I find that an interesting assertion, given that it tends to ignore the realities that both hardware and software ultimately suffer from a case of Moore’s Law — from speeds and feeds to the multi-core crisis, this will continue ad infinitum.  It’s important to keep that perspective.

In discussing this, proponents of software domination almost exclusively highlight Amazon Web Services as their lighthouse illustration.  For the purpose of simplicity, let’s focus on compute infrastructure.

Here’s why pointing to Amazon Web Services (AWS) as representative of all cloud offerings in general to anchor the hardware versus software value debate is not a reasonable assertion:

  1. AWS delivers a well-defined set of services designed to be deployed without modification across a massive number of customers; leveraging a common set of standardized capabilities across these customers differentiates the service and enables low cost
  2. AWS enjoys general non-variability in workload from *their* perspective since they offer fixed increments of compute and memory allocation per unit measure of exposed abstracted and virtualized infrastructure resources, so there’s a ceiling on what workloads per unit measure can do. It’s predictable.
  3. From AWS’ perspective (the lens of the provider), regardless of the “custom stuff” running within these fixed-sized containers, the main focus of their core “cloud” infrastructure actually functions like a grid — performing what amounts to a few tasks on a finely-tuned platform to deliver them
  4. This yields the ability for homogeneity in infrastructure and a focus on standardized and globalized power efficient, low cost, and easy-to-replicate components since the problem of expansion beyond a single unit measure of maximal workload capacity is simply a function of scaling out to more of them (or stepping up to one of the next few rungs on the scale-up ladder)
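Points 2 through 4 above boil down to a bin-packing story: fixed instance sizes placed onto identical hosts, so capacity planning reduces to adding more of the same box. A toy first-fit sketch makes the economics concrete; the unit sizes are invented for illustration and have nothing to do with AWS’s real allocations:

```python
# Invented unit sizes for illustration -- not AWS's actual numbers.
HOST_UNITS = 8                      # compute units per identical host
SIZES = {"small": 1, "large": 4}    # fixed per-instance allocations

def place(requests, host_units=HOST_UNITS):
    """First-fit placement: put each fixed-size request on the first
    host with room, provisioning another identical host when none fits."""
    hosts = []                      # free units remaining per host
    for name in requests:
        units = SIZES[name]
        for i, free in enumerate(hosts):
            if free >= units:
                hosts[i] -= units
                break
        else:
            hosts.append(host_units - units)
    return len(hosts)

# Doubling demand just doubles the count of identical hosts.
print(place(["small"] * 8), place(["small"] * 16))
```

Because every workload fits a known unit size, the provider never has to reason about an individual customer’s behavior, only about how many identical boxes to rack; that predictability is exactly the grid-like property described above.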

Yup, I just said that AWS is actually a grid whose derivative output is a set of cloud services.

Why does this matter?  Because not all IaaS cloud providers are architected to achieve this — by design — and this has dramatic impact on where hardware and software, leveraged independently or as a total solution, play in the debate.

This is because AWS built and owns the entire “CloudOS” stack, from customized hardware through to the VMM, management and security planes (much as Google does), versus other providers who use what amounts to more generic software offerings from the likes of VMware and lean on APIs and an ecosystem to extend its capabilities, as well as big iron to power it.  This will yield more customizable offerings that likely won’t scale as highly as AWS.

That’s because they’re not “grids” and were never designed to be.

Many other IaaS providers that have evolved from hosting are building their next-generation offerings on unified fabric and unified computing platforms (so-called “big iron”), which are the furthest thing from “commodity” hardware you can get.  Further, SaaS and PaaS providers generally tend to do the same based on design goals and business models.  Remember, IaaS is not representative of all things cloud — it’s only one of the service models.

Comparing AWS to most other IaaS cloud providers is a false argument upon which to anchor the hardware versus software debate.

/Hoff

“Open” means more than just an API…Google’s Data Liberation Project Ponies Up

October 16th, 2009 2 comments

This is chewy goodness.

Short and sweet from the Googleborg via a Webmonkey article titled “Pack Up Your Data and Leave Whenever You Want, It’s the New Rule of the Cloud”:

Users should be able to control the data they store in any of Google’s products. Our team’s goal is to make it easier for them to move data in and out.

Bravo.  Brian Fitzpatrick and his team gain major street cred here; they’re up-front about the benefits to both end users and Google.  Openness and transparency benefit everyone.

Read more about the project at dataliberation.org

/Hoff