Archive for the ‘Virtualization’ Category

The Emotion of VMotion…

September 29th, 2009 8 comments
VMotion - Here's Where We Are Today

A lot has been said about the wonders of workload VM portability.

Within the construct of virtualization, and especially VMware, an awful lot of time is spent on VM Mobility, but as numerous polls and direct customer engagements have shown, the majority (50% and higher) do not use VMotion.  I talked about this in a post titled “The VM Mobility Myth”:

…the capability to provide for integrated networking and virtualization coupled with governance and autonomics simply isn’t mature at this point. Most people are simply replicating existing zoned/perimeterized non-virtualized network topologies in their consolidated virtualized environments and waiting for the platforms to catch up. We’re really still seeing the effects of what virtualization is doing to the classical core/distribution/access design methodology as it relates to how shackled much of this mobility is to critical components like DNS and IP addressing and layer 2 VLANs.  See Greg Ness and Lori MacVittie’s scribblings.

Furthermore, workload distribution (Ed: today) is simply impractical for anything other than monolithic stacks because the virtualization platforms, the applications and the networks aren’t at a point where, from a policy or intelligence perspective, they can easily and reliably self-orchestrate.

That last point about “monolithic stacks” described what I talked about in my last post “Virtual Machines Are the Problem, Not the Solution,” in which I bemoaned the bloat associated with VMs and the general-purpose OSes included within them, and the fact that VMs continue to hinder the notion of achieving true workload portability within the construct of how one might programmatically architect a distributed application using an SOA approach of loosely coupled services.

Combine the VM bloat — which simply makes these “workloads” too large to practically move in real time — with the annoying laws of physics and the current constraints of virtualization driving the return to big, flat layer 2 network architectures — collapsing core/distribution/access designs and dissolving classical n-tier application architectures — and one might argue that the proposition of VMotion is really a move backward, not forward, as it relates to true agility.

That’s a little contentious, but in discussions with customers and other Social Media venues, it’s important to think about other designs and options; the fact is that the Metastructure (as it pertains to supporting protocols/services such as DNS which are needed to support this “infrastructure 2.0”) still isn’t where it needs to be in regards to mobility and even with emerging solutions like long-distance VMotion between datacenters, we’re butting up against laws of physics (and costs of the associated bandwidth and infrastructure.)

While we do see advancements in network-driven policy stickiness with the development of elements such as distributed virtual switching, port profiles, software-based vSwitches and virtual appliances (most of which are good solutions in their own right,) this is a network-centric approach.  The policies really ought to be defined by the VMs themselves (similar to SOA service contracts — see here) and enforced by the network, not the other way around.
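
To make that notion a bit more concrete, here is a rough, purely illustrative sketch (the field names and enforcement hook are invented for this example) of a policy “contract” that travels with the workload itself and gets re-applied by whatever network attachment point it lands on:

```python
# Purely illustrative: field names and the enforcement hook are invented. The idea is
# that policy is a property of the workload (a "contract"), re-applied wherever it lands.
from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkloadPolicy:
    workload_id: str
    tier: str                    # e.g. "web", "app", "db"
    allowed_inbound: List[str]   # service-level rules, not bindings to a VLAN or port
    allowed_outbound: List[str]
    compliance_tags: List[str] = field(default_factory=list)

def enforce_on_attach(policy: WorkloadPolicy, attachment_point: str) -> None:
    """The network reads the VM's contract and enforces it at the local vSwitch port."""
    print(f"Applying '{policy.tier}' policy for {policy.workload_id} at {attachment_point}")
    # ...derive and push ACL/QoS state to the enforcement point here...

web = WorkloadPolicy("vm-042", "web", ["tcp/443 from any"], ["tcp/8443 to app tier"], ["pci"])
enforce_on_attach(web, "vswitch-port-17")   # re-applied identically after a VMotion event
```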

Further, what isn’t talked about much is something that @joe_shonk brought up, which is that the SAN volumes/storage from which most of these virtual machines boot, upon which their data is stored and in some cases against which they are archived, don’t move, many times for the same reasons.  In many cases we’re waiting on the maturation of converged networking and advances in networked storage to deliver solutions to some of these challenges.

In the long term, the promise of mobility will be delivered by a split into four camps which have overlapping and potentially competitive approaches depending upon who is doing the design:

  1. The quasi-realtime chunking approach of VMotion via the virtualization platform [virtualization architect,]
  2. Integrated distribution and “mobility” at the application/OS layer [application architect,] or
  3. The more traditional network-based load balancing of traffic to replicated/distributed images [network architect.]
  4. Moving or redirecting pointers to large pools of storage where all the images/data(bases) live [Ed. forgot to include this from above]

Depending upon the need and capability of your application(s), virtualization/Cloud platform, and network infrastructure, you’ll likely need a mash-up of all four.  This model really mimics the differences today in architectural approach between SaaS and IaaS models in Cloud and further suggests that folks need to take a more focused look at PaaS.

Don’t get me wrong, I think VMotion is fantastic and the options it can ultimately deliver are intensely useful, but we’re hamstrung by what is really the requirement to forklift — network design, network architecture and the laws of physics.  In many cases we’re fascinated by VM Mobility, but a lot of that romanticization plays on emotion rather than utilization.

So what of it?  How do you use VM mobility today?  Do you?

/Hoff

Redux: Patching the Cloud

September 23rd, 2009 3 comments

Back in 2008 I wrote a piece titled “Patching the Cloud” in which I highlighted the issues associated with the black box ubiquity of Cloud and what that means to patching/upgrading processes:

Your application is sitting atop an operating system and underlying infrastructure that is managed by the cloud operator.  This “datacenter OS” may not be virtualized or could actually be sitting atop a hypervisor which is integrated into the operating system (Xen, Hyper-V, KVM) or perhaps reliant upon a third party solution such as VMware.  The notion of cloud implies shared infrastructure and hosting platforms, although it does not imply virtualization.

A patch affecting any one of the infrastructure elements could cause a ripple effect on your hosted applications.  Without understanding the underlying infrastructure dependencies in this model, how does one assess risk and determine what any patch might do up or down the stack?  How does an enterprise that has no insight into the “black box” model of the cloud operator, setup a dev/test/staging environment that acceptably mimics the operating environment?

What happens when the underlying CloudOS gets patched (or needs to be) and blows your applications/VMs sky-high (in the PaaS/IaaS models?)

How does one negotiate the process for determining when and how a patch is deployed?  Where does the cloud operator draw the line?   If the cloud fabric is democratized across constituent enterprise customers, however isolated, how does a cloud provider ensure consistent distributed service?  If an application can be dynamically provisioned anywhere in the fabric, consistency of the platform is critical.

I followed this up with a practical example when Microsoft’s Azure services experienced a hiccup due to this very thing.  We see wholesale changes that can be instantiated on a whim by Cloud providers that could alter service functionality and service availability such as this one from Google (Published Google Documents to appear in Google search) — have you thought this through?

So now as we witness ISPs starting to build Cloud service offerings from common Cloud OS platforms and espouse the portability of workloads (*ahem* VMs) from “internal” Clouds to Cloud Providers — and potentially multiple Cloud providers — what happens when the enterprise is at v3.1 of Cloud OS, ISP A is at version 2.1a and ISP B is at v2.9? Portability is a cruel mistress.
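
As a purely hypothetical sketch of why that skew hurts, consider a pre-flight portability check against each target’s Cloud OS capabilities (the feature names and versions below are invented to mirror the example above):

```python
# Hypothetical pre-flight check; feature names and versions are invented to mirror
# the v3.1 / v2.1a / v2.9 example above.
REQUIRED = {"ovf-1.1", "vnic-hotplug", "storage-vmotion"}

targets = {
    "internal (v3.1)": {"ovf-1.1", "vnic-hotplug", "storage-vmotion"},
    "ISP A (v2.1a)":   {"ovf-1.0", "vnic-hotplug"},
    "ISP B (v2.9)":    {"ovf-1.1", "storage-vmotion"},
}

for name, features in targets.items():
    missing = REQUIRED - features
    status = "portable" if not missing else f"blocked, missing {sorted(missing)}"
    print(f"{name}: {status}")
```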

Pair that little nugget with the fact that even “global” Cloud providers such as Amazon Web Services have not maintained parity in terms of functionality/services across their regions*. The US has long had features/functions that the European region has not.  Today, in fact, AWS announced bringing infrastructure capabilities to parity for things like elastic load balancing and auto-scale…

It’s important to understand what happens when we squeeze the balloon.

/Hoff

*corrected – I originally said “availability zones” which was in error as pointed out by Shlomo in the comments. Thanks!

Incomplete Thought: Forget VM Sprawl, Worry More About SaaSprawl…

September 19th, 2009 17 comments

A lot of fuss has been made about run-away VM sprawl in enterprises that are heavily virtualized, due to the ease with which a VM can be constructed and operationalized.

I’m not convinced about the reality versus the potential of VM Sprawl, meaning that I have no evidence from anyone facing this issue to date.  I wrote about this a while ago here.

As virtualization and the attendant vendors push more from enterprise virtualization to enterprise Clouds, what I’m actually more concerned with is SaaSprawl.

This scenario describes how enterprises will deal with managing what could amount to dozens of “CloudSourced” SaaS vendors as companies edge toward Cloud adoption by cherry-picking applications for externalization using SaaS as the platforms, technologies and standards catch up to allow those pesky workloads that used to run internally to do the same externally…

Outsource email, security, CRM, ERP, Legal/HR, Purchasing, Desktop apps — all from different vendors, each with different contracts, SLAs, data integration issues, security concerns, audit constraints and regulatory compliance hiccups.

What we likely could end up with is another illustration of a “squeezing the balloon” problem; trading off CapEx for what I call OopsEx — realizing what might amount to substituting one problem for another as you trade reduced upfront (and on-going) capital investment for what amounts to on-going management, security, compliance and service-level management issues in the long term.

Thoughts?

Quick Question: Any Public Cloud Providers Using Intel TXT?

September 15th, 2009 3 comments

Does anyone know of any Public Cloud Provider (or Private for that matter) that utilizes Intel’s TXT?

Specifically, does anyone know if Amazon makes use of Intel’s TXT via their Xen-derivative VMM?

Anyone care to share whether they know of any Cloud provider that PLANS to?

Thanks in advance.

Email responses welcome also [hoff @ packetfilter .com]

/Hoff

Variety & Darwinism In Solutions Is Innovation, In Standards It’s A War?

September 5th, 2009 6 comments

I find it quite interesting that in the last few months or so, as Cloud has emerged as a full-fledged business opportunity, we’ve seen the rise of many new companies, strategies and technologies. For the most part, hype aside, people praise this as innovation and describe it as a natural evolutionary process.

Strangely enough, with the emergence of new opportunity comes the ever-present push to standards.  Many see standards introduced too early as an innovation squasher; it inhibits free market evolution, crams down the smaller players, and lets the big fish take over — especially when the standards are backed by said big fish.  The open versus proprietary debate is downright religious.

Cloud Computing is no different.

We’ve seen many “standards” float to the surface recently — some backed by vendors, others by groups of concerned citizenry.  Many Cloud providers have published their APIs in an attempt to standardize interfacing to their offerings.  Some are open, some are proprietary.  Some are even open-sourced.  Some are simply de facto based upon the deployment of a set of technology, solutions and an ecosystem built around supporting it.  Professional standards organizations are also now getting involved.

In J. Nicholas Hoover’s blog post titled “Groups Seek Cloud Computing Standards,” Gartner’s David Cearley said:

“Community participation, deliberate action, and planning must be a vital part of any successful standards process…Otherwise, he said, cloud standards efforts could fail miserably.”

“Standards is one of those things that could absolutely strangle and kill everything we want to do in cloud computing if we do it wrong,” he said. “We need to make sure that as we’re approaching standards, we’re approaching standards more as they were approached in the broader internet, just in time.”

I suppose that depends upon how you measure success…

Tom Nolle wrote an interesting piece titled “Multiple Standards Could Spoil Cloud Computing” in which he lists 7 standards bodies “competing” for Cloud, wondering out loud why, if they all have similar interests, they exist separately.  After he talks about the difference between those focused on Public and Private Clouds, he bemoans the bifurcation and then plugs the one he finds best 😉

So now we have live public cloud services with incomplete standards and evolving private cloud standards with no implementations.

The best hope for a unification is the Cloud Computing Interoperability Forum. Its Unified Cloud Architecture tackles standards by making public cloud computing interoperable. Their map of cloud computing shows the leading public cloud providers and a proposed Unified Cloud Interface that the body defines, with a joking reference to Tolkien’s Lord of the Rings, as “One API to Rule them All.”

So make that 8 players…

This week we’ve seen the release of the VMware-sponsored and DMTF-submitted vCloud. We also saw Red Hat introduce their Deltacloud API.  We have the Open Cloud Computing Interface (OCCI) standards work which is getting underway within the Open Grid Forum (OGF.)  There’s a veritable plethora of groups, standards and efforts at play.

Some of it is likely duplicative.

Some of it is likely vendor-fed.

The reality is that unlike others, I find it refreshing.

I think it’s great that we have multiple efforts.

It would, for sure, be nice if we could all agree and have one focused set of work, but that’s simply not reality.  It will be confusing for all concerned in the short term.

The Open vs. mostly-open debates will continue, but this is NORMAL.  In the end, we end up with survival of the marketed-fittest.  The standards that win are the standards that are most optimally muscled, marketed and adopted.

Simon Wardley wrote a piece called “The Cloud Computing War” which to me read like an indictment of the process (I admit my review may be colored by what I perceive as FUD regarding VMware’s vCloud,) but I can’t help but shrug it off and instead focus on where and with whom I will pitch my tent.

I’ve already done so with the Cloud Security Alliance (not a standards body) and I’m looking at using vCloud to find a home for my A6 concept.

A Cloud standards war?  War is such an ugly term.  It’s just the normal activity associated with disruptive innovation and the markets sorting themselves out.  The standards arena is simply where the dirty laundry gets exposed.  Get used to it, there’s enough mud/FUD flinging that you can expect several loads 😉

/Hoff

NESSessary Question: Will Virtualization Undermine Network Equipment Vendors?

August 30th, 2009 1 comment

Greg Ness touched off an interesting discussion when he asked “Will Virtualization Undermine Network Equipment Vendors?”  It’s a great read summarizing how virtualization (and Cloud) are really beginning to accelerate how classical networking equipment vendors are re-evaluating their portfolios in order to come to terms with these disruptive innovations.

I’ve written so much about this over the last three years and my response is short and sweet:

Virtualization has actually long been an enabler for network equipment vendors — not server virtualization, mind you, but network virtualization.  The same goes in the security space. The disruption caused by server virtualization is only acting as an accelerant — pushing the limits of scale, redefining organizational and operational boundaries, and acting as a forcing function causing wholesale reconsideration of archetypal network (and security) topologies.

The compressed timeframe associated with the disruption caused by virtualization and its adoption, in conjunction with the arrival of Cloud Computing, may seem unnatural, but when one takes the longer-term view, it’s quite natural.  We’ve seen it before in vignettes across the evolution of computing, but the convergence of economics, culture, technology and consumerism has amplified its relevance.

To answer Greg’s question, Virtualization will only undermine those network equipment vendors who were not prepared for it in the first place.  Those that were building highly virtualized, context-enabled routing, switching and security products will embrace this swing in the hardware/software pendulum and develop hybrid solutions that span the physical and virtual manifestations of what the “network” has become.

As I mentioned in my blog titled “Quick Bit: Virtual & Cloud Networking – Where It ISN’T Going…”:

Specifically, as it comes to understanding how the network plays in virtual and Cloud architectures, it’s not where the network *is* in the increasingly complex virtualized, converged and unified computing architectures, it’s where networking *isn’t.*

Where ISN'T The Network?

Take a look at your network equipment vendors.  Where do they play in that stack above?  Compare and contrast that with what is going on with vendors like Citrix/Xen with the Open vSwitch, Vyatta, Arista with vEOS and Cisco with the Nexus 1000v*…interesting times for sure.

/Hoff

*Disclosure: I work for Cisco.

On Appirio’s Prediction: The Rise & Fall Of Private Clouds

August 18th, 2009 9 comments

I was invited to add my comments to Appirio’s corporate blog in response to my opinions of their 2009 prediction “Rise and Fall of the Private Cloud,” but as I mentioned in kind on Twitter, debating a corporate talking point on a company’s blog is like watching two monkeys trying to screw a football; it’s messy and nobody wins.

However, in light of the fact that I’ve been preaching about the realities of phased adoption of Cloud — with Private Cloud being a necessary step — I thought I’d add my $0.02.  Of course, I’m doing so while on vacation, sitting on an ancient lava flow with my feet in the ocean in Hawaii, so it’s likely to be tropical in nature.

Short and sweet, here’s Appirio’s stance on Private Cloud:

Here’s the rub: Private clouds are just an expensive data center with a fancy name. We predict that 2009 will represent the rise and fall of this over-hyped concept. Of course, virtualization, service-oriented architectures, and open standards are all great things for every company operating a data center to consider. But all this talk about “private clouds” is a distraction from the real news: the vast majority of companies shouldn’t need to worry about operating any sort of data center anymore, cloud-like or not.

It’s clear that we’re talking about very different sets of companies. If we’re referring to SME/SMB’s, then I think it’s fair to suggest the sentiment above is valid.

If we’re talking about a large, heavily-regulated enterprise (pick your industry/vertical) with sunk costs and the desire/need to leverage the investment they’ve made in the consolidation, virtualization and enterprise modernization of their global datacenter footprints and take it to the next level, leveraging capabilities like automation, elasticity, and chargeback, it’s poppycock.

Further, it’s pretty clear that the hybrid model of Cloud will ultimately win in this space with the adoption of BOTH Public and Private Clouds where and when appropriate.

The idea that somehow companies can use “private cloud” technology to offer their employees web services similar to Google, Amazon, or salesforce.com will lead to massive disappointment.

So now the definition of “Cloud” is limited to “web services” and is defined by “Google, Amazon, or Salesforce.com?”

I call this MyopiCloud.  If this is the only measure of Cloud success, I’d be massively disappointed, also.

Onto the salient points:

Here’s why:

  • Private clouds are sub-scale: There’s a reason why most innovative cloud computing providers have their roots in powering consumer web technology—that’s where the numbers are. Very few corporate data centers will see anything close to the type of volume seen by these vendors. And volume drives cost—the world has yet to see a truly “at scale” data center.

Interesting. If we hang the definition of “at scale” solely on Internet-based volume, I can see how this rings true.  However, large enterprises with LANs and WANs with multi-gigabit connectivity feeding server farms and client bases of internal constituents (not to mention extranet connections) need to be accounted for in that assessment, especially if we’re going to be honest about volume.  Limiting connectivity to only the Internet is unreasonable.

Certainly most enterprises are not autonomically elastic (neither are most Cloud providers today) but that’s why comparing apples to elephants is a bit silly, even with the benefits that virtualization is beginning to deliver in the compute, network and storage realms.

I know of an eCommerce provider who reports trafficking in (on average) 15 Gb/s of sustained HTTP traffic via its Internet feeds.  Want to guess what the internal traffic levels are inside what amounts to its Private Cloud at that level of ingress/egress?  Oh, did I just suggest that this “enterprise” is already running a “Private Cloud?”  Why yes, yes I did.  See James Watters’ interesting blog on something similar titled “Not So Fast Public Cloud: Big Players Still Run Privately.”

  • There’s no secret sauce: There’s no simple set of tricks that an operator of a data center can borrow from Amazon or Google. These companies make their living operating the world’s largest data centers. They are constantly optimizing how they operate based on real-time performance feedback from millions of transactions. (check out this presentation from Jeff Barr and Peter Coffee at the Architecture and Integration Summit). Can other operators of data centers learn something from this experience? Of course. But the rate of innovation will never be the same—private data centers will always be many, many steps behind the cloud.
  • Really? So technology such as Eucalyptus or VMware’s vCloud/Project Redwood doesn’t play here?  Certainly leveraging the operational models and technology underpinnings (regardless of volume) should allow an enterprise to scale massively, even if it’s not at the same levels, no?  The ability to scale to the needs of the business is important, even if you never do so at the scale of an AWS.  I don’t really understand this point.  My bandwidth is bigger than your bandwidth?

  • You can’t teach an old dog new tricks: What do you get when you move legacy applications as-is to a new and improved data center? Marginal improvements on your legacy applications. There’s only so much you can achieve without truly re-platforming your applications to a cloud infrastructure… you can’t teach an old dog new tricks. Now that’s not entirely fair…. You can certainly teach an old dog to be better behaved. But it’s still an old dog.
  • Woof! It’s really silly to suggest that the only thing an enterprise will do is simply move “legacy applications as-is to a new and improved data center” without any enterprise modernization, any optimization or the ability to more efficiently migrate to new and improved applications as the agility, flexibility and mobility issues are tackled.  Talk about pissing on fire hydrants!

  • On-premise does not equal secure: the biggest driver towards private clouds has been fear, uncertainty, and doubt about security. For many, it just feels more secure to have your data in a data center that you control. But is it? Unless your company spends more money and energy thinking about security than Amazon, Google, and Salesforce, the answer is probably “no.” (Read Craig Balding walk through “7 Technical Security Benefits of Cloud Computing”)
  • I’ve got news for you, just as on-premise does “…not equal secure,” neither does off-premise assure such.  I offer you this post as an example, with all its related posts for color.

    Please show me empirically that Amazon, Google or Salesforce spends “…more money and energy thinking about security” than, say, a Fortune 100 company.  Better yet, please show me how I can be, say, PCI compliant using AWS?  Oh, right…Please see the aforementioned posts…especially the one that demonstrates how the most public security gaffes thus far in Cloud are related to the providers you cite in your example.

    May I suggest that being myopic and mixing metaphors broadly by combining the needs and business drivers of the SME/SMB and representing them as those of large enterprises is intellectually dishonest.

    Let’s be real, Appirio is in the business of “Enabling enterprise adoption of on-demand for Salesforce.com and Google Enterprise” — two examples of externally hosted SaaS offerings that clearly aren’t aimed at enterprises who would otherwise be thinking about Private Cloud.

    Oops, the luau drums are sounding.

    Aloha.

    Cloud Computing [Security] Architectural Framework

    July 19th, 2009 3 comments

    For those of you who are not in the security space and may not have read the Cloud Security Alliance’s “Guidance for Critical Areas of Focus,” you may have missed the “Cloud Architectural Framework” section I wrote as a contribution.

    We are working on improving the entire guide, but I thought I would re-publish the Cloud Architectural Framework section and solicit comments here as well as “set it free” as a stand-alone reference document.

    Please keep in mind, I wrote this before many of the other papers such as NIST’s were officially published, so the normal churn in the blogosphere and general Cloud space may mean that  some of the terms and definitions have settled down.

    I hope it proves useful, even in its current form (I have many updates to make as part of the v2 Guidance document.)

    /Hoff


    Problem Statement

    Cloud Computing (“Cloud”) is a catch-all term that describes the evolutionary development of many existing technologies and approaches to computing that at its most basic, separates application and information resources from the underlying infrastructure and mechanisms used to deliver them with the addition of elastic scale and the utility model of allocation.  Cloud computing enhances collaboration, agility, scale, availability and provides the potential for cost reduction through optimized and efficient computing.

    More specifically, Cloud describes the use of a collection of distributed services, applications, information and infrastructure comprised of pools of compute, network, information and storage resources.  These components can be rapidly orchestrated, provisioned, implemented and decommissioned using an on-demand utility-like model of allocation and consumption.  Cloud services are most often, but not always, utilized in conjunction with and enabled by virtualization technologies to provide dynamic integration, provisioning, orchestration, mobility and scale.

    While the very definition of Cloud suggests the decoupling of resources from the physical affinity to and location of the infrastructure that delivers them, many descriptions of Cloud go to one extreme or another by either exaggerating or artificially limiting the many attributes of Cloud.  This is often purposely done in an attempt to inflate or marginalize its scope.  Some examples include the suggestions that for a service to be Cloud-based, that the Internet must be used as a transport, a web browser must be used as an access modality or that the resources are always shared in a multi-tenant environment outside of the “perimeter.”  What is missing in these definitions is context.

    From an architectural perspective given this abstracted evolution of technology, there is much confusion surrounding how Cloud is both similar and differs from existing models and how these similarities and differences might impact the organizational, operational and technological approaches to Cloud adoption as it relates to traditional network and information security practices.  There are those who say Cloud is a novel sea-change and technical revolution while others suggest it is a natural evolution and coalescence of technology, economy, and culture.  The truth is somewhere in between.

    There are many models available today which attempt to address Cloud from the perspective of academicians, architects, engineers, developers, managers and even consumers. We will focus on a model and methodology that is specifically tailored to the unique perspectives of IT network and security professionals.

    The keys to understanding how Cloud architecture impacts security architecture are a common and concise lexicon coupled with a consistent taxonomy of offerings by which Cloud services and architecture can be deconstructed, mapped to a model of compensating security and operational controls, risk assessment and management frameworks and in turn, compliance standards.

    Setting the Context: Cloud Computing Defined

    Understanding how Cloud Computing architecture impacts security architecture requires an understanding of Cloud’s principal characteristics, the manner in which cloud providers deliver and deploy services, how they are consumed, and ultimately how they need to be safeguarded.

    The scope of this area of focus is not to define the specific security benefits or challenges presented by Cloud Computing as these are covered in depth in the other 14 domains of concern:

    • Information lifecycle management
    • Governance and Enterprise Risk Management
    • Compliance & Audit
    • General Legal
    • eDiscovery
    • Encryption and Key Management
    • Identity and Access Management
    • Storage
    • Virtualization
    • Application Security
    • Portability & Interoperability
    • Data Center Operations Management
    • Incident Response, Notification, Remediation
    • “Traditional” Security impact (business continuity, disaster recovery, physical security)

    We will discuss the various approaches and derivative offerings of Cloud and how they impact security from an architectural perspective using an in-process model developed as a community effort associated with the Cloud Security Alliance.

    Principal Characteristics of Cloud Computing

    Cloud services are based upon five principal characteristics that demonstrate their relation to, and differences from, traditional computing approaches:

    1. Abstraction of Infrastructure
      The compute, network and storage infrastructure resources are abstracted from the application and information resources as a function of service delivery. Where, and by what physical resources, data is processed, transmitted and stored becomes largely opaque from the perspective of an application’s or service’s ability to deliver it.  Infrastructure resources are generally pooled in order to deliver service regardless of the tenancy model employed – shared or dedicated.  This abstraction is generally provided by means of high levels of virtualization at the chipset and operating system levels or enabled at the higher levels by heavily customized filesystems, operating systems or communication protocols.
    2. Resource Democratization
      The abstraction of infrastructure yields the notion of resource democratization – whether infrastructure, applications, or information – and provides the capability for pooled resources to be made available and accessible to anyone or anything authorized to utilize them using standardized methods for doing so.
    3. Services Oriented Architecture
      As the abstraction of infrastructure from application and information yields well-defined and loosely-coupled resource democratization, the notion of utilizing these components in whole or part, alone or with integration, provides a services oriented architecture where resources may be accessed and utilized in a standard way.  In this model, the focus is on the delivery of service and not the management of infrastructure.
    4. Elasticity/Dynamism
      The on-demand model of Cloud provisioning coupled with high levels of automation, virtualization, and ubiquitous, reliable and high-speed connectivity provides for the capability to rapidly expand or contract resource allocation to service definition and requirements using a self-service model that scales to as-needed capacity.  Since resources are pooled, better utilization and service levels can be achieved.
    5. Utility Model Of Consumption & Allocation
      The abstracted, democratized, service-oriented and elastic nature of Cloud combined with tight automation, orchestration, provisioning and self-service then allows for dynamic allocation of resources based on any number of governing input parameters.  Given the visibility at an atomic level, the consumption of resources can then be used to provide an “all-you-can-eat” but “pay-by-the-bite” metered utility-cost and usage model. This facilitates greater cost efficiencies and scale as well as manageable and predictive costs.
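
    As a quick, purely hypothetical illustration of the “all-you-can-eat but pay-by-the-bite” model, the arithmetic behind a metered utility bill is simply rate times measured consumption per resource (the rates and meter names below are invented):

```python
# Invented rates and meters, purely to illustrate "pay-by-the-bite" metering:
# cost is rate x measured consumption per resource, summed for the billing period.
rates = {"cpu_hours": 0.10, "gb_storage_month": 0.15, "gb_transferred": 0.17}
metered_usage = {"cpu_hours": 720, "gb_storage_month": 50, "gb_transferred": 320}

line_items = {meter: qty * rates[meter] for meter, qty in metered_usage.items()}
print(line_items, f"total=${sum(line_items.values()):.2f}")
```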

    Cloud Service Delivery Models

    Three archetypal models and the derivative combinations thereof generally describe cloud service delivery.  The three individual models are often referred to as the “SPI Model,” where “SPI” refers to Software, Platform and Infrastructure (as a service) respectively and are defined thusly[1]:

    1. Software as a Service (SaaS)
      The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure and accessible from various client devices through a thin client interface such as a Web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
    2. Platform as a Service (PaaS)
      The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., java, python, .Net). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage, but the consumer has control over the deployed applications and possibly application hosting environment configurations.
    3. Infrastructure as a Service (IaaS)
      The capability provided to the consumer is to rent processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).

    Understanding the relationship and dependencies between these models is critical.  IaaS is the foundation of all Cloud services with PaaS building upon IaaS, and SaaS – in turn – building upon PaaS.  We will cover this in more detail later in the document.

    The OpenCrowd Cloud Solutions Taxonomy shown in Figure 1 provides an excellent reference that demonstrates the swelling ranks of solutions available today in each of the models above.

    Narrowing the scope or specific capabilities and functionality within each of the *aaS offerings or employing the functional coupling of services and capabilities across them may yield derivative classifications.  For example, “Storage as a Service” is a specific sub-offering within the IaaS “family,” “Database as a Service” may be seen as a derivative of PaaS, etc.

    Each of these models yields significant trade-offs in the areas of integrated features, openness (extensibility) and security.  We will address these later in the document.

    Figure 1 - The OpenCrowd Cloud Taxonomy

    Cloud Service Deployment and Consumption Modalities

    Regardless of the delivery model utilized (SaaS, PaaS, IaaS,) there are four primary ways in which Cloud services are deployed and are characterized:

    1. Private
      Private Clouds are provided by an organization or their designated service provider and offer a single-tenant (dedicated) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.  The physical infrastructure may be owned by and/or physically located in the organization’s datacenters (on-premise) or that of a designated service provider (off-premise) with an extension of management and security control planes controlled by the organization or designated service provider respectively.

      The consumers of the service are considered “trusted.”  Trusted consumers of service are those who are considered part of an organization’s legal/contractual umbrella including employees, contractors, & business partners.  Untrusted consumers are those that may be authorized to consume some/all services but are not logical extensions of the organization.

    2. Public
      Public Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the  accountability/utility model of Cloud.
      The physical infrastructure is generally owned by and managed by the designated service provider and located within the provider’s datacenters (off-premise.)  Consumers of Public Cloud services are considered to be untrusted.
    3. Managed
      Managed Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the accountability/utility model of Cloud.  The physical infrastructure is owned by and/or physically located in the organization’s datacenters with an extension of management and security control planes controlled by the designated service provider.  Consumers of Managed Clouds may be trusted or untrusted.

    4. Hybrid
      Hybrid Clouds are a combination of public and private cloud offerings that allow for transitive information exchange and possibly application compatibility and portability across disparate Cloud service offerings and providers utilizing standard or proprietary methodologies regardless of ownership or location.  This model provides for an extension of management and security control planes.  Consumers of Hybrid Clouds may be trusted or untrusted.

    The difficulty in using a single label to describe an entire service/offering is that it actually attempts to describe the following elements:

    • Who manages it
    • Who owns it
    • Where it’s located
    • Who has access to it
    • How it’s accessed

    The notion of Public, Private, Managed and Hybrid when describing Cloud services really denotes the attribution of management and the availability of service to specific consumers of the service.

    It is important to note that the characterizations that describe how Cloud services are deployed are often used interchangeably with the notion of where they are provided; as such, you may often see public and private clouds referred to as “external” or “internal” clouds.  This can be very confusing.

    The manner in which Cloud services are offered and ultimately consumed is then often described relative to the location of the asset/resource/service owner’s management or security “perimeter” which is usually defined by the presence of a “firewall.”

    While it is important to understand where within the context of an enforceable security boundary an asset lives, the problem with interchanging or substituting these definitions is that the notion of a well-demarcated perimeter separating the “outside” from the “inside” is an anachronistic concept.

    It is clear that the impact of the re-perimeterization and the erosion of trust boundaries we have seen in the enterprise is amplified and accelerated due to Cloud.  This is thanks to ubiquitous connectivity provided to devices, the amorphous nature of information interchange, the ineffectiveness of traditional static security controls which cannot deal with the dynamic nature of Cloud services and the mobility and velocity at which Cloud services operate.

    Thus the deployment and consumption modalities of Cloud should be thought of not only within the construct of “internal” or “external” as it relates to asset/resource/service physical location, but also by whom they are being consumed and who is responsible for their governance, security and compliance to policies and standards.

    This is not to suggest that the on- or off-premise location of an asset/resource/information does not affect the security and risk posture of an organization, because it does, but it also depends upon the following:

    • The types of application/information/services being managed
    • Who manages them and how
    • How controls are integrated
    • Regulatory issues

    Table 1 illustrates the summarization of these points:

    Table 1 - Cloud Computing Service Deployment

    As an example, one could classify a service as IaaS/Public/External (Amazon’s AWS/EC2 offering is a good example) as well as SaaS/Managed/Internal (an internally-hosted, but third party-managed custom SaaS stack using Eucalyptus, as an example.)
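
    As a minimal sketch (the record shape and values are illustrative only), such a classification can be captured as a simple tuple of attributes that mirrors Table 1 and the two examples above:

```python
# Hypothetical classification record mirroring Table 1; values are illustrative.
from dataclasses import dataclass

@dataclass
class CloudServiceClassification:
    name: str
    delivery: str      # "IaaS" | "PaaS" | "SaaS"
    deployment: str    # "Public" | "Private" | "Managed" | "Hybrid"
    location: str      # "Internal" (on-premise) | "External" (off-premise)
    managed_by: str    # organization, provider or third party
    consumers: str     # "Trusted" | "Untrusted" | "Both"

ec2 = CloudServiceClassification("Amazon AWS/EC2", "IaaS", "Public", "External",
                                 managed_by="Provider", consumers="Untrusted")
custom_saas = CloudServiceClassification("Custom SaaS stack on Eucalyptus", "SaaS",
                                         "Managed", "Internal",
                                         managed_by="Third party", consumers="Trusted")
```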

    Thus when assessing the impact a particular Cloud service may have on one’s security posture and overall security architecture, it is necessary to classify the asset/resource/service within the context of not only its location but also its criticality and business impact as it relates to management and security.  This means that an appropriate level of risk assessment is performed prior to entrusting it to the vagaries of “The Cloud.”

    Which Cloud service deployment and consumption model is used depends upon the nature of the service and the requirements that govern it.  As we demonstrate later in this document, there are significant trade-offs in each of the models in terms of integrated features, extensibility, cost, administrative involvement and security.

    Figure 2 - Cloud Reference Model

    It is therefore important to be able to classify a Cloud service quickly and accurately and compare it to a reference model that is familiar to an IT networking or security professional.

    Reference models such as that shown in Figure 2 allow one to visualize the boundaries of *aaS definitions, how and where a particular Cloud service fits, and also how the discrete *aaS models align and interact with one another.  This is presented in an OSI-like layered structure with which security and network professionals should be familiar.

    Considering each of the *aaS models as a self-contained “solution stack” of integrated functionality with IaaS providing the foundation, it becomes clear that the other two models – PaaS and SaaS – in turn build upon it.

    Each of the abstract layers in the reference model represents elements which when combined, comprise the services offerings in each class.

    IaaS includes the entire infrastructure resource stack from the facilities to the hardware platforms that reside in them. Further, IaaS incorporates the capability to abstract resources (or not) as well as deliver physical and logical connectivity to those resources.  Ultimately, IaaS provides a set of APIs which allow for management and other forms of interaction with the infrastructure by the consumer of the service.

    Amazon’s AWS Elastic Compute Cloud (EC2) is a good example of an IaaS offering.

    PaaS sits atop IaaS and adds an additional layer of integration with application development frameworks, middleware capabilities and functions such as database, messaging, and queuing that allows developers to build applications which are coupled to the platform and whose programming languages and tools are supported by the stack.  Google’s AppEngine is a good example of PaaS.

    SaaS in turn is built upon the underlying IaaS and PaaS stacks and provides a self-contained operating environment used to deliver the entire user experience including the content, how it is presented, the application(s) and management capabilities.

    SalesForce.com is a good example of SaaS.

    It should therefore be clear that there are significant trade-offs in each of the models in terms of features, openness (extensibility) and security.

    Figure 3 - Trade-offs Across *aaS Offerings

    Figure 3 demonstrates the interplay and trade-offs between the three *aaS models:

    • Generally, SaaS provides a large amount of integrated features built directly into the offering with the least amount of extensibility and a relatively high level of security.
    • PaaS generally offers fewer integrated features since it is designed to enable developers to build their own applications on top of the platform and is therefore more extensible than SaaS by nature, but this balance trades off security features and capabilities.
    • IaaS provides few, if any, application-like features and provides for enormous extensibility, but generally fewer security capabilities and less functionality beyond protecting the infrastructure itself, since it expects operating systems, applications and content to be managed and secured by the consumer.

    The key takeaway from a security architecture perspective in comparing these models is that the lower down the stack the Cloud service provider stops, the more security capabilities and management the consumer is responsible for implementing and managing themselves.
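
    To illustrate that takeaway (this is a rough sketch derived from the SPI definitions above, not a normative mapping), the consumer-managed surface grows as you move down the stack:

```python
# Rough sketch, derived from the SPI definitions quoted earlier: what the *consumer*
# retains control of (and therefore must secure) under each delivery model.
consumer_managed = {
    "SaaS": ["limited user-specific application configuration settings"],
    "PaaS": ["deployed applications",
             "application hosting environment configuration (possibly)"],
    "IaaS": ["operating systems", "storage", "deployed applications",
             "select networking components, e.g. host firewalls (possibly)"],
}
# The longer the list, the more security capability the consumer must implement itself.
for model, scope in consumer_managed.items():
    print(f"{model}: consumer secures {len(scope)} layer(s): {', '.join(scope)}")
```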

    This is critical because once a Cloud service can be classified and referenced against the model, mapping the security architecture, business and regulatory or other compliance requirements against it becomes a gap-analysis exercise to determine the general “security” posture of a service and how it relates to the assurance and protection requirements of an asset.

    Figure 4 below shows an example of how mapping a Cloud service can be compared to a catalog of compensating controls to determine which controls exist and which do not, as provided by either the consumer, the Cloud service provider or another third party.

    Figure 4 - Mapping the Cloud Model to the Security Model

    Once this gap analysis is complete, as governed by the requirements of any regulatory or other compliance mandates, it becomes much easier to determine what needs to be done in order to feed back into a risk assessment framework and decide how the gaps, and ultimately the risk, should be addressed: accept, transfer, mitigate or ignore.
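
    A minimal sketch of that exercise (the control names and the three-party split are invented for illustration) might look like this:

```python
# Illustrative only: control names and the three-party split are invented.
required_controls = {"encryption-at-rest", "network-ids", "log-retention", "admin-2fa"}

provided_by = {
    "consumer":       {"admin-2fa"},
    "cloud_provider": {"encryption-at-rest"},
    "third_party":    {"network-ids"},
}

covered = set().union(*provided_by.values())
gaps = required_controls - covered          # here: {"log-retention"}

for control in sorted(gaps):
    # Each gap feeds back into the risk framework: accept, transfer, mitigate or ignore.
    print(f"GAP: {control} -> accept / transfer / mitigate / ignore?")
```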

    Conclusion

    Understanding how architecture, technology, process and human capital requirements change or remain the same when deploying Cloud Computing services is critical.   Without a clear understanding of the higher-level architectural implications of Cloud services, it is impossible to address more detailed issues in a rational way.

    The keys to understanding how Cloud architecture impacts security architecture are a common and concise lexicon coupled with a consistent taxonomy of offerings by which Cloud services and architecture can be deconstructed, mapped to a model of compensating security and operational controls, risk assessment and management frameworks and in turn, compliance standards.


    [1] Credit: Peter M. Mell, NIST

    Incomplete Thought: The Opportunity For Desktop As a Service – The Client Cloud?

    June 16th, 2009 8 comments

    Please excuse me if I’m late to the party bringing this up…

    We talk a lot about the utility of Public Clouds to enable the cost-effective and scalable implementation of “server” functionality; whether that’s the SaaS, PaaS, or IaaS model, the concept is pretty well understood: use someone else’s infrastructure to host your applications and information.

    As it relates to the desktop/client side of Cloud, we normally think about hosting the desktop/client capabilities as a function of Private Cloud capabilities; behind the firewall.  Whether we’re talking about terminal-service-like capabilities or VDI, it seems to me people continue to think of this as a predominantly “internal” opportunity.

    I don’t think people are talking enough about the client side of Cloud and desktop as a service (DaaS) and what this means:

    If the physical access methods continue to get skinnier (smart phones, thin clients, client hypervisors, virtual machines, etc.) is there an opportunity for providers of Infrastructure as a Service to host desktop instances outside a corporate firewall?  If I can take advantage of all of the evolving technology in the space and couple it with the same sorts of policy advancements, networking and VPN functionality to connect me to IaaS server resources running in Private or Public Clouds, isn’t that a huge opportunity for further cost savings, distributed availability and potentially better security?

    There are companies such as Desktone looking to do this very thing in a way to offset the costs of VDI and further the efforts of consolidation.  It makes a lot of sense for lots of reasons and, despite my lack of hands-on exposure to the technology, it sure looks like we have the technical capability to do this today.  Dana Gardner wrote about this back in 2007, and his points are as valid now as they were then — albeit with a much bigger uptake in Cloud:

    The stars and planets finally appear to be aligning in a way that makes utility-oriented delivery of a full slate of client-side computing and resources an alternative worth serious consideration. As more organizations are set up as service bureaus — due to such  IT industry developments as ITIL and shared services — the advent of off the wire everything seems more likely in many more places

    I could totally see how Amazon could offer the same sorts of workstation utility as they do for server instances.

    Will DaaS be the next frontier of consolidation in the enterprise?

    If you’re considering hosting your service instances elsewhere, why not your desktops?  Citrix and VMware (as examples) seem to think you might…

    /Hoff

    Hey, Uh, Someone Just Powered Off Our Firewall Virtual Appliance…

    June 11th, 2009 11 comments

    I’ve covered this before in more complex terms, but I thought I’d reintroduce the topic due to a very relevant discussion I just had recently (*cough cough*).

    So here’s an interesting scenario in virtualized and/or Cloud environments that make use of virtual appliances to provide security capabilities*:

    Since virtual appliances (VAs) are just virtual machines (VMs), what happens when a SysAdmin spins down or moves one that happens to be your shiny new firewall protecting your production VMs behind it, accidentally or maliciously?  Brings new meaning to the phrase “failing closed.”

    Without getting into the vagaries of vendor specific mobility-enabled/enabling technologies, one of the issues with VMs/VAs is that there’s not really a good way of designating one as being “more important” or functionally differentiated such as “security” or “critical application” that would otherwise ensure a higher priority for service availability (read: don’t spin this down unless…) or provide a topological dependency hierarchy in virtualized network constructs.

    Unlike physical environments where system administrators (servers) are segregated from access to network and security appliances, this isn’t the case in virtual environments. In Cloud environments (especially public, multi-tenant) where we are often reliant only upon virtual security capabilities since we have no option for physical alternatives, this is an interesting corner case.
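
    Purely as a thought experiment (no current hypervisor API is implied here), this is roughly what a power-off guard might look like if the platform carried a notion of functional role and topological dependency for each VM/VA:

```python
# Thought experiment only: no hypervisor API is implied. What if each VM/VA carried
# a functional role and a list of the workloads that depend on it?
vm_metadata = {
    "fw-edge-01": {"role": "security", "protects": ["web-01", "web-02", "db-01"]},
    "web-01":     {"role": "application", "protects": []},
}

def safe_to_power_off(vm_name: str) -> bool:
    meta = vm_metadata.get(vm_name, {})
    if meta.get("role") == "security" and meta.get("protects"):
        # "Failing closed": powering this off strands everything behind it.
        print(f"Refusing: {vm_name} is a security VA protecting {meta['protects']}")
        return False
    return True

print(safe_to_power_off("fw-edge-01"))  # False
print(safe_to_power_off("web-01"))      # True
```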

    We’ve talked a lot about visibility, audit and policy management in virtual environments and this is a poignant example.

    /Hoff

    *Despite the Google dudes’ silly suggestion that I equate virtualization with Cloud as one and the same, I don’t.