Author Archive

Pimping My Friends: One Of My Favorite NonCons – Troopers

April 8th, 2009 No comments

One of my favorite international security conferences is happening April 22nd/23rd in Munich, Germany. It’s run by my good friend Enno Rey and his team at ERNW:

TROOPERS09 – WHAT IS IT?
Troopers09 is an international IT-Security Conference on the 22nd and 23rd of April 2009 in Munich, Germany. This event is created for CISOs, ISOs, IT-Auditors, IT-Sec-Admins, IT-Sec Consultants and everyone who is involved with IT-Security on a professional basis. The goal is to share in-depth knowledge about the aspects of attacking and defending information technology infrastructure and applications. The featured presentations and demonstrations represent the latest discoveries and developments of the global hacking scene and will provide the audience with valuable practical know-how.

Troopers09 is hosted by ERNW GmbH, an independent IT-Security consultancy from Heidelberg, Germany. In the past years, speakers from ERNW were invited all around the world to present their latest IT-Sec research results and to share their knowledge within the global hacking community. With this global experience in mind, ERNW decided to launch an international conference in Germany in 2008. After last year’s success of Troopers08 we’re thrilled to do it again. Once more it’s going to be an event unlike all other „Security Conferences“ we have seen in Germany so far: No product presentations, no marketing blabla, no bull*ht-bingo – just pure practical IT-Security. Real answers and practical benefits to meet today’s and tomorrow’s threats.

Troopers08 was a fantastic event, so I can only imagine that this year’s will be just as good if not better.

Check it out here.

/Hoff

Categories: Security Conferences Tags:

HyTrust: An Elegant Solution To a Messy Problem

April 6th, 2009 8 comments

I had a pre-release briefing with the folks from HyTrust on Friday and was impressed with their solution.  I had previously met with the VCs within whose portfolio HyTrust sits, and they were bullish on the team and the technology approach.  Here’s why.

“Security” solutions in virtualized environments are becoming less about “pure” security functions like firewalls and IDP and much more about increasing the management and visibility of virtualization and keeping pace with the velocity of change, configuration control and compliance.  I’ve talked about that a lot recently.

HyTrust approaches this problem in a very elegant manner, based on the old adage that “you cannot manage that which you cannot see.”

In the case of VMware, there are numerous vectors for managing and configuring the platform; from the various host and platform management interfaces to the guests and virtual networking components.

There are many tools on the market which address these issues. Reflex, Third Brigade and Catbird come to mind, with the last of these being the most similar.

The difference between HyTrust and their competitors is how they integrate their solution to provide visibility and protect the management network.  

HyTrust’s answer is to sit both physically and logically in front of the virtualization platform’s management network and actually proxy each configuration request, whether that’s an SSH session to the service console or a VirtualCenter configuration change through the GUI.

These requests are mapped to roles which are in turn authenticated against an Enterprise’s Active Directory service, so fine-grained, role-based access to specific functions can be enforced via templates. Further, since every request is proxied, logging is robust and every action can be mapped back directly to a single user.
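
To make that concrete, here is a minimal, purely illustrative Python sketch of what proxy-side role mapping might look like. The policy templates, AD group names and operation names are invented for the example; this is not HyTrust’s actual policy language or API:

    # Illustrative sketch only (not HyTrust's policy format or API). It models the
    # general idea: every proxied management request is matched against a role
    # template resolved from the requesting user's directory groups.

    POLICY_TEMPLATES = {
        "vm-operator": {"allowed": {"PowerOnVM", "PowerOffVM"}},
        "net-admin":   {"allowed": {"ModifyVirtualSwitch", "ModifyPortGroup"}},
    }

    AD_GROUP_TO_ROLE = {
        "CN=VM-Operators,OU=IT,DC=example,DC=com": "vm-operator",
        "CN=Net-Admins,OU=IT,DC=example,DC=com":   "net-admin",
    }

    def authorize(user, user_groups, operation):
        """Decide a single proxied request and return a log record tied to the user."""
        roles = {AD_GROUP_TO_ROLE[g] for g in user_groups if g in AD_GROUP_TO_ROLE}
        allowed = any(operation in POLICY_TEMPLATES[r]["allowed"] for r in roles)
        decision = "allow" if allowed else "deny"
        # Because every request passes through the proxy, every decision is
        # attributable to one named user rather than a shared admin account.
        return {"user": user, "operation": operation, "roles": sorted(roles), "decision": decision}

    print(authorize("jdoe", ["CN=VM-Operators,OU=IT,DC=example,DC=com"], "ModifyVirtualSwitch"))
    # -> {'user': 'jdoe', 'operation': 'ModifyVirtualSwitch', 'roles': ['vm-operator'], 'decision': 'deny'}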

The policy engine and templates appeared quite easy to use in the demo I saw, and the logging and reporting look good.

Actions that violate policy can be allowed or denied, and can either be simply logged or even remediated should a violation occur.

This centralized approach is very elegant. It has its downsides, of course, inasmuch as it becomes a single point of failure, so performance and high availability deserve close attention.

The HyTrust offering will be available as both a hardware appliance and a virtual appliance. They will also release what they call a FREE “Community Edition,” a full-featured version that is limited to securing three VMware ESX hosts.

Check them out here.

/Hoff

Categories: Virtualization Security, VMware Tags:

The Vagaries Of Cloudcabulary: Why Public, Private, Internal & External Definitions Don’t Work…

April 5th, 2009 19 comments

Updated again at 13:43 EST – Please see bottom of post

Hybrid, Public, Private, Internal and External.

The HPPIE model; you’ve heard these terms used to describe and define the various types of Cloud.

What’s always disturbed me about using these terms singly is that, separately, they address orthogonal concerns, and yet they’re often used to compare and contrast one service/offering with another.

The short story: Hybrid, Public, and Private denote ownership and governance whilst Internal and External denote location.

The longer story: Hybrid, Public, Private, Internal and External seek to summarily describe no less than five different issues and categorize a cloud service/offering into one dumbed-down term for convenience.  In terms of a Cloud service/offering, using one of the HPPIE labels actually attempts to address in one word:

  1. Who manages it
  2. Who owns it
  3. Where it’s located
  4. Who has access to it
  5. How it’s accessed

That’s a pretty tall order.  I know we’re aiming for simplicity of description by using a label analogous to LAN, WAN, Intranet or Internet, but unfortunately what we’re often describing here is evolving to be much more complex.

Don’t get me wrong, I’m not aiming for precision but rather accuracy.  I don’t find that these labels do a good enough job when used by themselves.

Further, you’ll find most people using the service deployment models (Hybrid, Public, Private) in the absence of the service delivery models (SPI – SaaS/PaaS/IaaS), while at the same time intertwining the location of the asset (internal, external), usually relative to a perimeter firewall (more on this in another post).

This really lends itself to confusion.

I’m not looking to rename the HPPIE terms.  I am looking to use them more accurately.

Here’s a contentious example.  I maintain you can have an IaaS service that is Public and Internal.  WHAT!?  HOW!?

Let’s take a look at a summary table I built to think through use cases by looking at the three service deployment models (Hybrid, Public and Private):

The HPPIE Table

THIS TABLE IS DEPRECATED – PLEASE SEE UPDATE BELOW!

The blue separators in the table designate derivative service offerings and not just a simple and/or; they represent an actual branching of the offering.

Back to my contentious example, wherein I maintain you can have an IaaS offering which is Public and yet also Internal. Again, how?

Remember how I said “Hybrid, Public, and Private denote ownership and governance whilst Internal and External denote location?” That location refers to both the physical location of the asset as well as the logical location relative to an organization’s management umbrella which includes operations, security, compliance, etc.

Thus, if you look at a managed infrastructure service (name one) that utilizes Cloud Computing principles, there’s no reason a third-party MSP could not deploy said service internally on customer-premises equipment which the third party owns but operates and manages on behalf of the organization, bringing the scale and pay-by-use model of Cloud in-house, with access from trusted OR untrusted sources, is there?

Some might call it a perversion of the term “Public.” I highlight it to illustrate that “Public” is a crappy word for the example, because just as it’s valid here, it’s equally valid to suggest that Amazon’s EC2 can also share the “Public” moniker, despite being External.

In the same light, one can easily derive examples of SaaS:Private:Internal offerings…You see my problem with these terms?
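
To illustrate the point, here is a minimal toy sketch (my own illustration, not a formal taxonomy) that treats the delivery model, ownership/governance and location as separate attributes rather than one label:

    # Toy illustration: deployment (ownership/governance) and location are
    # independent attributes, so "Public" and "Internal" can coexist.

    from dataclasses import dataclass

    @dataclass
    class CloudService:
        delivery: str    # "SaaS" | "PaaS" | "IaaS"
        deployment: str  # "Public" | "Private" | "Hybrid" (ownership/governance)
        location: str    # "Internal" | "External" (physical/logical location)

    # The contentious example: a third-party-managed IaaS offering deployed on
    # customer premises: Public by governance, Internal by location.
    managed_onprem = CloudService("IaaS", "Public", "Internal")

    # Amazon EC2 shares the "Public" label yet differs entirely on location.
    ec2 = CloudService("IaaS", "Public", "External")

    # And the reverse case mentioned above.
    private_saas = CloudService("SaaS", "Private", "Internal")

    print(managed_onprem, ec2, private_saas, sep="\n")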

Moreover, the “consumer” focus of the traditional HPPIE models means that these broad terms generally imply a human operating a web browser to access a service/offering, and they do not take into account access to services/offerings via things like APIs or programmatic interfaces.

This is a little goofy, too. I don’t generally use a web browser (directly) to access Amazon’s S3 Storage-as-a-Service offering, just as I don’t use a web browser to make API calls in Google Gears.  Other non-interactive elements of the AppStack do that.
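
A typical S3 interaction looks more like the following minimal sketch. It uses the boto3 SDK purely as a present-day illustration (an assumption on my part, not something from this post), and the bucket and key names are made up:

    # Illustrative only: S3 is consumed programmatically, not through a browser.
    # Requires AWS credentials in the environment and an existing bucket.
    import boto3

    s3 = boto3.client("s3")
    s3.put_object(Bucket="example-bucket", Key="backups/db.dump", Body=b"...")
    obj = s3.get_object(Bucket="example-bucket", Key="backups/db.dump")
    print(obj["ContentLength"])  # the "consumer" here is code, not a person at a browser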

I don’t expect people to stop using these dumbed down definitions, but this is why it makes me nuts when people compare “Private” Cloud offerings with “Internal” ones. It’s like comparing apples and buffalo.

What I want is for people to at least not include Internal and External as Cloud models, but rather use them as parameters, as I have in the table above.

Does this make any sense to you?


Update: In a great set of discussions regarding this on Twitter with @jamesurquhart from Cisco and @zhenjl from VMware, @zhenjl came up with a really apt solution to the issues surrounding the redefinition of Public Clouds and their ability to be deployed “internally.”  His idea, which addresses the “third party managed” example I gave, is to add a new category/class called “Managed,” which is essentially the example I highlighted in boldface above:

managed-clouds

This means that we would modify the table above to look more like this (updated again based on feedback on Twitter & comments) — ultimately revised as part of the work I did for the Cloud Security Alliance in alignment with the NIST model, abandoning the ‘Managed’ section:

Revised Model

This preserves the notion of how people generally define “Public” clouds while also carving out what amounts to managed Cloud services provided by third parties using infrastructure/services located on-premise. It also still allows for the notion of Private Clouds, which remain distinct.

Thoughts?

Categories: Cloud Computing, Cloud Security Tags:

The Cloud Is a Fickle Mistress: DDoS&M…

April 2nd, 2009 6 comments

It’s interesting to see how people react when they are reminded that the “Cloud” still depends upon much of the same infrastructure and underlying protocols that we have been using for years.

BGP, DNS, VPNs, routers, switches, firewalls…

While it’s fun to talk about new attack vectors and sexy exploits, it’s the oldies and goodies that will come back to haunt us:

Simplexity

Building more and more of our businesses’ ability to remain a going concern on infrastructure that was never designed to support it is a scary proposition.  We’re certainly being afforded more opportunity to fix some of these problems as the technology improves, but it’s a patchwork solution to an endemic problem, I’m afraid.  We’ve got two ways to look at Cloud:

  • Skipping over the problems we have and “fixing” crappy infrastructure and applications by simply adding mobility and orchestration to move around an issue, or
  • Actually starting to use Cloud as a forcing function to fundamentally change the way we think about, architect, deploy and manage our computing capabilities in a more resilient, reliable and secure fashion

If I were a betting man…

Remember that just because it’s in the “Cloud” doesn’t mean someone’s sprinkled magic invincibility dust on your AppStack…

That web service still has IP addresses and open sockets. It still gets transported over MANY levels of shared infrastructure, from the telcos to the DNS infrastructure…you’re always at someone else’s mercy.

Dan Kaminsky has done a fabulous job reminding us of that.

A more poignant reminder of our dependency on the Same Old Stuff™ is the recent DDoS attack against Cloud provider GoGrid:

ONGOING DDoS ATTACK

Our network is currently the target of a large, distributed DDoS attack that began on Monday afternoon.   We took action all day yesterday to mitigate the impact of the attack, and its targets, so that we could restore service to GoGrid customers.  Things were stabilized by 4 PM PDT and most customer servers were back online, although some of you continued to experience intermittent loss in network connectivity.

This is an unfortunate thing.  It’s also a good illustration of the sorts of things you ought to ask your Cloud service providers about.  With whom do they peer? What is their bandwidth? How many datacenters do they have, and where? What DoS/DDoS countermeasures do they have in place? Have they actually dealt with this before?  Do they drill disaster scenarios like this?

We’re told we shouldn’t have to worry about the underlying infrastructure with Cloud, that it’s abstracted and someone else’s problem to manage…until it’s not.

This is where engineering, architecture and security meet the road.  Your provider’s ability to sustain an attack like this is critical.  Further, how you’ve designed your BCP/DR contingency plans is pretty important, too.  Until we get true portability/interoperability between Cloud providers, it’s still up to you to figure out how to make this all work.  Remember that when you’re assuming those TCO calculations accurately reflect reality.

Big providers like eBay, Amazon, and Microsoft invest huge sums of money and manpower to ensure they are as survivable as they can be during attacks like this.  Do you?  Does your Cloud provider? How many providers do you have?

Again, even Amazon goes down.  At this point, it’s largely been operational issues on their end and not the result of a massive attack. Imagine, however, if someday it is.  What would that mean to you?

As more and more of our applications and information are moved from inside our networks to beyond the firewalls and exposed to a larger audience (or even co-mingled with others’ data), the need for innovation and advancement in security is only going to skyrocket in order to deal with many of these problems.

/Hoff

Categories: Cloud Computing, Cloud Security Tags:

Introducing the Cloud Security Alliance

March 31st, 2009 5 comments

I’m a founding member and serve as the technical advisor for the Cloud Security Alliance (CSA).  This is an organization you may not have heard of yet, so I wanted to introduce you.

The more formal definition of the role and goals of the CSA appears below, but it’s most easily described as a member-driven forum for industry, providers and “consumers” of Cloud Computing services to discuss issues and opportunities for security in this emerging space and to help craft awareness, guidance and best practices for secure Cloud adoption.  It’s not a standards body. It’s not a secret cabal of industry-only players shuffling for position.

It’s a good mix of vendors, practitioners and interested parties who are concerned with framing the most pressing concerns related to Cloud security and working together to bring ideas to life on how we can address them. 

From the website, here’s the more formal definition:

The CSA is a non-profit organization formed to promote the use of best practices for providing security assurance within Cloud Computing, and provide education on the uses of Cloud Computing to help secure all other forms of computing.

The Cloud Security Alliance is comprised of many subject matter experts from a wide variety of disciplines, united in our objectives:

  • Promote a common level of understanding between the consumers and providers of cloud computing regarding the necessary security requirements and attestation of assurance.
  • Promote independent research into best practices for cloud computing security.
  • Launch awareness campaigns and educational programs on the appropriate uses of cloud computing and cloud security solutions.
  • Create consensus lists of issues and guidance for cloud security assurance.

The Cloud Security Alliance will be launched at the RSA Conference 2009 in San Francisco, April 20-24, 2009.

It’s clear that people will likely draw parallels between the CSA and the Open Cloud Manifesto given the recent announcement of the latter.  

The key difference between the two efforts relates to the CSA’s engagement and membership by both providers and consumers of Cloud Services and the organized non-profit structure of the CSA.  The groups are complementary in nature and goals.

You can see who is participating in the CSA now based upon the pre-release of the working draft of our initial whitepaper.  Full attribution of company affiliation will be posted as the website is updated:

 

Co-Founders

Nils Puhlmann
Jim Reavis

Founding Members and Contributors

Todd Barbee
Alan Boehme
Jon Callas
Sean Catlett
Shawn Chaput
Dave Cullinane
Ken Fauth
Pam Fusco
Francoise Gilbert
Christofer Hoff
Dennis Hurst
Michael Johnson
Shail Khiyara
Subra Kumaraswamy
Paul Kurtz
Mark Leary
Liam Lynch
Tim Mather
Scott Matsumoto
Luis Morales
Dave Morrow
Izak Mutlu
Jean Pawluk
George Reese
Jeff Reich
Jeffrey Ritter
Ward Spangenberg
Jeff Spivey
Michael Sutton
Lynn Terwoerds
Dave Tyson
John Viega
Dov Yoran
Josh Zachry

Founding Charter Companies

PGP, Qualys, Zscaler

If you’d like to get involved, here’s how:

Individuals

Individuals with an interest in cloud computing and expertise to help make it more secure receive a complimentary individual membership based on a minimum level of participation. If you are interested in becoming a member, apply to join our LinkedIn Group

Affiliates

Not-for-profit associations and industry groups may form an affiliate partnership with the Cloud Security Alliance to collaborate on initiatives of mutual concern. Contact us at affiliates@cloudsecurityalliance.org for more information.

Corporate

Information on corporate memberships and sponsorship programs will be available soon. Contact info@cloudsecurityalliance.org for more information.

/Hoff


Meditating On the Manifesto: It’s Good To Be King…

March 29th, 2009 6 comments

By now you’ve heard of ManifestoGate, no?  If not, click on that link and read all about it as James Urquhart does a nice job summarizing it all.

In the face of all of this controversy, tonight Reuven Cohen twittered that the opencloudmanifesto.org website was live.

So I moseyed over to take a look at the promised list of supporters of said manifesto, since I’ve been waiting for a definition of the “we” who developed/support it.

It’s a very interesting list.

There are lots of players. Some of them are just starting to bring their Cloud visions forward.

But clearly there are some noticeable absences, namely Google, Microsoft, Salesforce and Amazon — the four largest established Cloud players in the Cloudusphere.

I think it’s been said in so many words before, but let me make it perfectly clear why, despite the rhetoric both acute and fluffy from both sides, these four Cloud giants aren’t listed as supporters.

Here are the listed principles of the Open Cloud from the manifesto itself:

Of course, many clouds will continue to be different in a number of important ways, providing unique value for organizations. It is not our intention to define standards for every capability in the cloud and create a single homogeneous cloud environment.

Rather, as cloud computing matures, there are several key principles that must be followed to ensure the cloud is open and delivers the choice, flexibility and agility organizations demand:

1. Cloud providers must work together to ensure that the challenges to cloud adoption (security, integration, portability, interoperability, governance/management, metering/monitoring) are addressed through open collaboration and the appropriate use of standards.

2. Cloud providers must not use their market position to lock customers into their particular platforms and limit their choice of providers.

3. Cloud providers must use and adopt existing standards wherever appropriate. The IT industry has invested heavily in existing standards and standards organizations; there is no need to duplicate or reinvent them.

4. When new standards (or adjustments to existing standards) are needed, we must be judicious and pragmatic to avoid creating too many standards. We must ensure that standards promote innovation and do not inhibit it.

5. Any community effort around the open cloud should be driven by customer needs, not merely the technical needs of cloud providers, and should be tested or verified against real customer requirements.

6. Cloud computing standards organizations, advocacy groups, and communities should work together and stay coordinated, making sure that efforts do not conflict or overlap.


Fact is, from a customer’s point of view, I find all of these principles agreeable, and despite it being called a manifesto, I could see using it as a nice set of discussion points for talking about my needs from the Cloud.  It’s interesting to note that, given the audience as stated in the manifesto, the listed supporters are all vendors and not “customers.”

I think the more discussion we have on the matter, the better.  Personally, I grok and support the principles herein.  I’m sure this point will be missed as I play devil’s advocate, but so be it.  

However, from the “nice theory, wrong universe” vendor’s point-of-view, why/how could I sign it?

See #2 above?  It relates to exactly the point made by James when he said “Those who have publicly stated that they won’t sign have the most to lose.”

Yes they do.  And the last time I looked, all four of them have notions of what the Cloud ought to be, and how and to what degree it ought to interoperate and with whom.

I certainly expect they will leverage every ounce of “lock-in” (er, enhanced customer experience through a tightly-coupled relationship) they can muster and capitalize on the de facto versus de jure “standardization” that naturally occurs in a free market when you’re in the top 4.  Someone telling me I ought to sign a document to the contrary would likely not get offered a free coffee at the company cafe.

Trying to socialize (in every meaning of the word) goodness works wonders if you’re a kibbutz.  With billions up for grabs in a technology land-grab, not so much.

This is where the ever-hopeful consumer, the idealist integrator, and the vendor-realist personalities in me begin to battle.

Oh, you should hear the voices in my head…

/Hoff

Categories: Cloud Computing Tags:

Incomplete Thought: Looking At An “Open & Interoperable Cloud” Through Azure-Colored Glasses

March 29th, 2009 4 comments

As with the others in my series of “incomplete thoughts,” this one is focused on an issue that has been banging around in my skull for a few days.  I’m not sure how to properly articulate my thought completely, so I throw this up for consideration, looking for your discussion to clarify my thinking.

You may have heard of the little dust-up involving Microsoft and the folk(s) behind the Open Cloud Manifesto. The drama here reminds me of the Dallas episode where everyone tried to guess who shot J.R., and it’s really not the focus of this post.  I use it here for color.

What is the focus of this post is the notion of “open(ness),” portability and interoperability as they relate to Cloud Computing — or, more specifically, how these terms relate to the infrastructure and enabling platforms of Cloud Computing solution providers.

I put “openness” in quotes because definitionally, there are as many representations of this term as there are for “Cloud,” which is a big part of the problem.  Just to be fair, before you start thinking I’m unduly picking on Microsoft, I’m not. I challenged VMware on the same issues.

So here’s my question as it relates to Microsoft’s strategy regarding Azure given an excerpt from Microsoft’s Steven Martin as he described his employer’s stance on Cloud in regard to the Cloudifesto debacle above in his blog post titled “Moving Toward an Open Process On Cloud Computing Interoperability“:

From the moment we kicked off our cloud computing effort, openness and interop stood at the forefront. As those who are using it will tell you, the  Azure Services Platform is an open and flexible platform that is defined by web addressability, SOAP, XML, and REST.  Our vision in taking this approach was to ensure that the programming model was extensible and that the individual services could be used in conjunction with applications and infrastructure that ran on both Microsoft and non-Microsoft stacks. 

What got me going was this ZDNet interview by Mary Jo Foley with Julius Sinkevicius, Microsoft’s Director of Product Management for Windows Server, in which she loosely references/compares Cisco’s Cloud strategy to Microsoft’s and points to an apparent lack of interoperability between Microsoft’s own virtualization and Cloud Computing platforms:

MJF: Did Cisco ask Microsoft about licensing Azure? Will Microsoft license all of the components of Azure to any other company?

Sinkevicius: No, Microsoft is not offering Windows Azure for on premise deployment. Windows Azure runs only in Microsoft datacenters. Enterprise customers who wish to deploy a highly scalable and flexible OS in their datacenter should leverage Hyper-V and license Windows Server Datacenter Edition, which has unlimited virtualization rights, and System Center for management.

MJF: What does Microsoft see as the difference between Red Dog (Windows Azure) and the OS stack that Cisco announced?

Sinkevicius: Windows Azure is Microsoft’s runtime designed specifically for the Microsoft datacenter. Windows Azure is designed for new applications and allows ISVs and Enterprises to get geo-scale without geo-cost.  The OS stack that Cisco announced is for customers who wish to deploy on-premise servers, and thus leverages Windows Server Datacenter and System Center.

The source of the on-premise Azure hosting confusion appears to be this: All apps developed for Azure will be able to run on Windows Server, according to the Softies. However — at present — the inverse is not true: Existing Windows Server apps ultimately may be able to run on Azure. For now only some can do so, and only with a fairly substantial amount of tweaking.

Microsoft’s cloud pitch to enterprises who are skittish about putting their data in the Microsoft basket isn’t “We’ll let you host your own data using our cloud platform.” Instead, it’s more like: “You can take some/all of your data out of our datacenters and run it on-premise if/when you want — and you can do the reverse and put some/all of your data in our cloud if you so desire.”

What confuses me is how Azure, as a platform, will be limited to deployment only in Microsoft’s operating environment (i.e. their datacenters) and not be available for use outside of that environment, and how that squares with Martin’s statements above regarding interoperability.

Doesn’t the proprietary nature of the Azure runtime platform, “open” or not via API, by definition limit its openness and interoperability? If I can’t take my applications and information and operate them anywhere without major retooling, how does that imply openness, portability and interoperability?

If one cannot do that fully between Windows Server and Azure — both from the same company —  what chance do we have between instances running across different platforms not from Microsoft?

The issue at hand to consider is this:

If you do not have one-to-one parity between the infrastructure that provides your cloud “externally” versus “internally,” (and similarly public versus private clouds) can you truly claim openness, portability and interoperability?

What do you think?

Pimping My Friends: Joshua Corman on Virtualization Security

March 29th, 2009 1 comment
Josh Corman - Virtualization Security Tutorial

Joshua Corman is IBM/ISS’ Principal Security Strategist and a longtime friend.

Josh has a great virtualization security tutorial up at the Internet Evolution “macro-site.”

I like the layout and functionality as well as the content; there is a ton of great information here.

Check it out.

/Hoff

Update on the Cloud (Ontology/Taxonomy) Model…

March 28th, 2009 3 comments

A couple of months ago I kicked off a community-enabled project to build an infrastructure-centric ontology/taxonomy model of Cloud Computing.

You can see the original work with all the comments here.  Despite the distracting haggling over the use of the words “ontology” and “taxonomy,” the model (as I now call it) has been well received by those for whom it was created.

Specifically, my goal was to be able to help a network or security professional do these things:

  1. Settle on a relevant and contextual technology-focused definition of Cloud Computing and its various foundational elements beyond the existing academic & 30,000 foot-view models
  2. Understand how this definition maps to the classical SPI (SaaS, PaaS, IaaS) models with which many people are aware
  3. Deconstruct the SPI model and present it in a layered format similar to the OSI model showing how each of the SPI components interact with and build atop one another
  4. Provide a further relevant mapping of infrastructure, applications, etc. at each model so as to relate well-understood solutions experiences to each
  5. Map a set of generally-available solutions from a catalog of compensating controls (from the security perspective) to each layer
  6. Ultimately map the SPI layers to the compensating controls and in turn to a set of governance and regulatory requirements (SOX, PCI, HIPAA, etc.)

This is very much, and unapologetically so, a controls-based model.  I assume that there exists no utopic state of proper architectural design, secure development lifecycle, etc. Just like the real world.  So rather than assume that we’re going to have universal goodness thanks to proper architecture, design and execution, I thought it more reasonable to think about plugging the holes (sadly) and going from there.

At the end of the day, I wanted an IT/Security professional to be able to use the model like an “Annie Oakley Secret Decoder Ring”: rapidly assess offerings, map them to the specific model layers, and understand what controls they or a vendor need to have in place by mapping those, in turn, to compliance requirements.  This allows for a quick and accurate gap analysis, which in turn can feed a risk assessment/analysis.
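
As a rough illustration of that “decoder ring” workflow, here is a toy sketch in Python; the layer, control and regulation names are invented examples rather than the model’s actual catalog of compensating controls:

    # Toy illustration of the mapping: SPI layer -> compensating controls -> regulations,
    # plus a trivial gap analysis. All names are made up for the example.

    MODEL = {
        "IaaS": {"network segmentation", "hypervisor hardening", "log collection"},
        "PaaS": {"secure SDLC", "identity federation", "log collection"},
        "SaaS": {"data encryption", "identity federation", "log collection"},
    }

    REGULATION_NEEDS = {
        "PCI":   {"network segmentation", "data encryption", "log collection"},
        "HIPAA": {"data encryption", "log collection"},
    }

    def gap_analysis(layer, controls_in_place, regulation):
        """Controls the regulation expects at this layer that are not yet in place."""
        expected = MODEL[layer] & REGULATION_NEEDS[regulation]
        return expected - set(controls_in_place)

    # Example: an IaaS offering with only log collection, assessed against PCI.
    print(gap_analysis("IaaS", {"log collection"}, "PCI"))
    # -> {'network segmentation'}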

We went through 5 versions in a relatively short period of time and arrived at a solid fundamental model based upon feedback from the target audience:

cloudtaxonomyontology_v15

The model is CLEARLY not complete.  The next steps for improving it are:

  1. Reference other solution taxonomies to complete the rest of the examples and expand upon the various layers with key elements and offerings from vendors/solutions providers.  See here.
  2. Polish up the catalog of compensating controls
  3. Start mapping to various regulatory/compliance requirements
  4. Find a better way of interactively presenting this whole mess.

For my Frogs presentation, I presented the first stab at the example controls mapping and it seemed to make sense given the uptake/interest in the model. Here’s an example:

Frogs: Cloud Model Aligned to Security Controls Model

This still has a ways to go, but I’ve been able to present this to C-levels, bankers, technologists and lay people (with explanation) and it’s gone over well.

I look forward to making more progress on this shortly and would welcome the help, commentary, critique and collaboration.

I’ll start adding more definition to each of the layers so people can feedback appropriately.

Thanks,

/Hoff

P.S. A couple of days ago I discovered that Kevin Jackson had published an extrapolation of the UCSB/IBM version titled “A Tactical Cloud Computing Ontology.”

Kevin’s “ontology” is at the 20,000 foot view compared to the original 30,000 foot UCSB/IBM model but is worth looking at.

Categories: Cloud Computing, Cloud Security Tags:

The Most Comprehensive Review Of the Open Cloud Computing Manifesto Debacle, Ever…

March 28th, 2009 3 comments

[This Page Intentionally Left Blank]

That is all.

/Hoff

Categories: Cloud Computing, Cloud Security Tags: