
Archive for the ‘Cloud Computing’ Category

CloudSQL – Accessing Datastores in the Sky using SQL…

December 2nd, 2008 5 comments
I think this is definitely a precursor of things to come, and it introduces some really interesting discussions to be had regarding the portability, privacy and security of datastores in the cloud.

Have you heard of Zoho?  No?  Zoho is a SaaS vendor that describes itself thusly:

Zoho is a suite of online applications (services) that you sign up for and access from our Website. The applications are free for individuals and some have a subscription fee for organizations. Our vision is to provide our customers (individuals, students, educators, non-profits, small and medium sized businesses) with the most comprehensive set of applications available anywhere (breadth); and for those applications to have enough features (depth) to make your user experience worthwhile.

Today, Zoho announced the availability of CloudSQL, which is middleware that allows customers who use Zoho's SaaS apps to "…access their data on Zoho SaaS applications using SQL queries."

From their announcement:

Zoho CloudSQL is a technology that allows developers to interact with business data stored across Zoho Services using the familiar SQL language. In addition, JDBC and ODBC database drivers make writing code a snap – just use the language construct and syntax you would use with a local database instance. Using the latest Web technology no longer requires throwing away years of coding and learning.

Zoho CloudSQL allows businesses to connect and integrate the data and applications they have in Zoho with the data and applications they have in house, or even with other SaaS services. Unlike other methods for accessing data in the cloud, CloudSQL capitalizes on enterprise developers’ years of knowledge and experience with the widely‐used SQL language. This leads to faster deployments and easier (read: less expensive) integration projects.

Basically, CloudSQL is interposed between the suite of Zoho applications and the backend datastores, functioning as an intermediary that receives SQL queries against the pooled data sets using standard SQL commands and dialects. Click on the diagram below for a better idea of what this looks like.

[Diagram: Zoho CloudSQL architecture]
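To make the developer experience concrete, here's a rough sketch of what querying a Zoho datastore through CloudSQL's ODBC driver might look like from Python. Everything here is hypothetical: the DSN, the credentials, the table and the column names are illustrative placeholders, since all the announcement tells us is that standard JDBC/ODBC drivers and familiar SQL syntax are the interface.

```python
# Hypothetical sketch: querying a Zoho CloudSQL datastore over ODBC.
# The DSN, credentials, table and column names are illustrative only.
import pyodbc

conn = pyodbc.connect("DSN=ZohoCloudSQL;UID=api_user;PWD=api_key")
cursor = conn.cursor()

# Standard SQL against a cloud-hosted datastore, exactly as you would
# issue it against a local database instance.
cursor.execute(
    "SELECT customer_name, open_invoices "
    "FROM crm_accounts "
    "WHERE region = ?",
    ("EMEA",),
)

for customer_name, open_invoices in cursor.fetchall():
    print(customer_name, open_invoices)

conn.close()
```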
What's really interesting about allowing native SQL access is the ability to then allow much easier information interchange between apps/databases on an enterprise's "private cloud(s)" and the Zoho "public" cloud.

Further, it means that your data is more "portable" as it can be backed up, accessed, and processed by applications other than Zoho's.  Imagine if they were to extend the SQL exposure to other cloud/SaaS providers…this is where it will get really juicy. 

This sort of thing *will* happen.  Customers will see the absolute utility of exposing their cloud-based datastores and sharing them amongst business partners, much in the spirit of how it's done today, but with the datastores (or chunks of them) located off-premises.

That's all good and exciting, but obviously security questions/concerns immediately surface regarding such things as: authentication, encryption, access control, input sanitization, privacy and compliance…

Today our datastores typically live inside the fortress with multiple layers of security and proxied access from applications, shielded from direct access, and yet we still have basic issues with attacks such as SQL injection.  Imagine how much fun we can have with this!
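To make the injection concern concrete, here's a minimal, self-contained sketch (using sqlite3 purely as a stand-in for whatever datastore sits behind an exposed SQL interface; the table and values are made up) contrasting string-concatenated SQL with a parameterized query:

```python
# Hypothetical illustration of the injection concern; sqlite3 stands in
# for whatever datastore actually sits behind a cloud-exposed SQL interface.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crm_accounts (owner TEXT, balance REAL)")
conn.execute("INSERT INTO crm_accounts VALUES ('alice', 100.0)")

user_input = "nobody' OR '1'='1"

# Dangerous: attacker-controlled input becomes part of the SQL text itself,
# so this returns every row instead of none.
rows = conn.execute(
    "SELECT * FROM crm_accounts WHERE owner = '" + user_input + "'"
).fetchall()
print("concatenated query returned:", rows)

# Safer: the value is bound as data, not parsed as SQL syntax.
rows = conn.execute(
    "SELECT * FROM crm_accounts WHERE owner = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)
```

The point isn't the driver; it's that once the query path is exposed to the cloud, the same old discipline about never splicing untrusted input into SQL text applies, only now with someone else's multi-tenant datastore on the other end.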

The best I could find regarding security and Zoho came from their FAQ, which doesn't exactly inspire confidence given that they address logical/software security by suggesting that anti-virus software is the best line of defense for protecting your data, that "data encryption" will soon be offered as an "option," and that (implied) SSL will make you secure:

6. Is my data secured?

Many people ask us this question. And rightly so; Zoho has invested alot of time and money to ensure that your information is secure and private. We offer security on multiple levels including the physical, software and people/process levels; In fact your data is more secure than walking around with it on a laptop or even on your corporate desktops.

Physical: Zoho servers and infrastructure are located in the most secure types of data centers that have multiple levels of restrictions for access including: on-premise security guards, security cameras, biometric limited access systems, and no signage to indicate where the buildings are, bullet proof glass, earthquake ratings, etc.

Hardware: Zoho employs state of the art firewall protection on multiple levels eliminating the possibility of intrusion from outside attacks

Logical/software protection: Zoho deploys anti-virus software and scans all access 24 x7 for suspicious traffic and viruses or even inside attacks; All of this is managed and logged for auditing purposes.

Process: Very few Zoho staff have access to either the physical or logical levels of our infrastructure. Your data is therefore secure from inside access; Zoho performs regular vulnerability testing and is constantly enhancing its security at all levels. All data is backed up on multiple servers in multiple locations on a daily basis. This means that in the worst case, if one data center was compromised, your data could be restored from other locations with minimal disruption. We are also working on adding even more security; for example, you will soon be able to select a "data encryption" option to encrypt your data en route to our servers, so that in the unlikely event of your own computer getting hacked while using Zoho, your documents could be securely encrypted and inaccessible without a "certificate" which generally resides on the server away from your computer.

Fun times ahead, folks.

/Hoff

Virtual Jot Pad: The Cloud As a Fluffy Offering In the Consumerization Of IT?

December 2nd, 2008 1 comment

This is a post that's bigger than a thought on Twitter but almost doesn't deserve a blog post; for some reason, I just felt the need to write it down.  This may be one of those "well, duh" sorts of posts, but I can't quite verbalize what is tickling my noggin here.

As far as I can tell, the juicy bits stem from the intersection of cloud cost models, cloud adopter profile by company size/maturity and the concept of the consumerization of IT.

I think 😉

This thought was spawned by a couple of interesting blog posts:

  1. James Urquhart's blog titled "The Enterprise barrier-to-exit in cloud computing" and "What is the value of IT convenience" which led me to…
  2. Billy Marshall from rPath and his blog titled "The Virtual Machine Tsunami."

These blogs are about different things entirely but come full circle around to the same point.

James first shed some interesting light on the business taxonomy, the sorts of IT use cases and classes of applications and operations that drive businesses and their IT operations to the cloud, distinguishing between what can be described as the economically-driven early adopters of the cloud in SMBs versus mature, larger enterprises, in his discussion with George Reese from O'Reilly via Twitter:

George and I were coming at the problem from two different angles. George was talking about many SMB organizations, which really can't justify the cost of building their own IT infrastructure, but have been faced with a choice of doing just that, turning to (expensive and often rigid) managed hosting, or putting a server in a colo space somewhere (and maintaining that server). Not very happy choices.

Enter the cloud. Now these same businesses can simply grab capacity on demand, start and stop billing at their leisure and get real world class power, virtualization and networking infrastructure without having to put an ounce of thought into it. Yeah, it costs more than simply running a server would cost, but when you add the infrastructure/managed hosting fees/colo leases, cloud almost always looks like the better deal.

I, on the other hand, was thinking of medium to large enterprises which already own significant data center infrastructure, and already have sunk costs in power, cooling and assorted infrastructure. When looking at this class of business, these sunk costs must be added to server acquisition and operation costs when rationalizing against the costs of gaining the same services from the cloud. In this case, these investments often tip the balance, and it becomes much cheaper to use existing infrastructure (though with some automation) to deliver fixed capacity loads. As I discussed recently, the cloud generally only gets interesting for loads that are not running 24X7.

This existing investment in infrastructure therefore acts almost as a "barrier-to-exit" for these enterprises when considering moving to the cloud. It seems to me highly ironic, and perhaps somewhat unique, that certain aspects of the cloud computing market will be blazed not by organizations with multiple data centers and thousands upon thousands of servers, but by the little mom-and-pop shop that used to own a couple of servers in a colo somewhere that finally shut them down and turned to Amazon. How cool is that

That's a really interesting differentiation that hasn't been made as much as it should, quite honestly.  In the marketing madness that has ensued, you get the feeling that everyone, including large enterprises, is rushing willy-nilly to the cloud and outsourcing the majority of their compute loads, rather than just the cloudbursting overflow.

Billy Marshall's post offers some profound points including one that highlights the oft-reported and oft-harder-to-prove concept of VM sprawl and the so-called "frictionless" model of IT, but with a decidedly cloud perspective. 

What was really interesting was the little incandescent bulb that began to glow when I read the following after reading James' post:

Amazon EC2 demand continues to skyrocket. It seems that business units are quickly sidestepping those IT departments that have not yet found a way to say “yes” to requests for new capacity due to capital spending constraints and high friction processes for getting applications into production (i.e. the legacy approach of provisioning servers with a general purpose OS and then attempting to install/configure the app to work on the production implementation which is no doubt different than the development environment).

I heard a rumor that a new datacenter in Oregon was underway to support this burgeoning EC2 demand. I also saw our most recent EC2 bill, and I nearly hit the roof. Turns out when you provide frictionless capacity via the hypervisor, virtual machine deployment, and variable cost payment, demand explodes. Trust me.

I've yet to figure out whether the notion of frictionless capacity is a good thing if your ability to plan capacity is outpaced by a consumption model and a capacity yield that can just continue to climb without constraint.  At what point do the cost savings over infrastructure whose costs were bounded by the resource constraints of physical servers become eclipsed by runaway use?
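As a back-of-the-envelope way to frame that crossover question, here's a hedged sketch with entirely made-up numbers (none of these rates reflect any provider's actual pricing) comparing a fixed, capacity-capped physical estate against an uncapped pay-per-use model:

```python
# Illustrative arithmetic only; every number here is made up.
# Physical estate: a fixed monthly cost and a hard capacity ceiling.
FIXED_MONTHLY_COST = 5000.0      # sunk cost of an owned rack of servers
PHYSICAL_CAPACITY_HOURS = 7200   # e.g. 10 servers x ~720 hours/month, and no more

# Cloud: no ceiling, you pay for whatever gets spun up.
RATE_PER_INSTANCE_HOUR = 0.50    # hypothetical all-in hourly rate

def cloud_bill(instance_hours: float) -> float:
    """Usage-based monthly bill for the given number of instance-hours."""
    return instance_hours * RATE_PER_INSTANCE_HOUR

# Break-even point: below this the cloud is cheaper than the sunk cost;
# above it, "frictionless" consumption starts eclipsing the old fixed spend.
break_even_hours = FIXED_MONTHLY_COST / RATE_PER_INSTANCE_HOUR
print(f"physical ceiling: {PHYSICAL_CAPACITY_HOURS:,} instance-hours (it simply can't serve more)")
print(f"break-even at {break_even_hours:,.0f} instance-hours/month")

for hours in (2000, break_even_hours, 20000, 50000):
    print(f"{hours:>10,.0f} instance-hours -> ${cloud_bill(hours):>10,.2f}")
```

The physical estate can never cost more than its fixed spend because it can never serve more than its ceiling; the cloud happily serves everything, and the bill follows.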

I guess I'll have to wait to see his bill 😉

Back to James' post, he references an interchange on Twitter with George Reese (whose post on "20 Rules for Amazon Cloud Security" I am waiting to fully comment on) in which George commented:

"IT is a barrier to getting things done for most businesses; the Cloud reduces or eliminates that barrier."

…which is basically the same thing Billy said in a Nick Carr kind of way.  The key question here is for whom?  As it relates to the SMB, I'd agree with this statement, but the thing that really sank in was that the statement just doesn't yet jibe for larger enterprises.  In James' second post, he drives this home:

I think these examples demonstrate an important decision point for IT organizations, especially during these times of financial strife. What is the value of IT convenience? When is it wise to choose to pay more dollars (or euros, or yen, or whatever) to gain some level of simplicity or focus or comfort? In the case of virtualization, is it always wise to leverage positive economic changes to expand service coverage? In the case of cloud computing, is it always wise to accept relatively high price points per CPU hour over managing your own cheaper compute loads?

Is the cloud about convenience or true business value?  Is any opportunity to eliminate a barrier — whether that barrier actually acts as a logical check and balance within the system — simply enough to drive business to the cloud?

I know the side-stepping IT bit has been spoken about ad nauseum within the context of cloud; namely when describing agility, flexibility, and economics, but it never really occurred to me that the cloud — much in the way you might talk about an iPhone — is now being marketed itself as another instantiation of the democratization, commoditization and consumerization of IT — almost as an application — and not just a means to an end.

I think the thing that was interesting to me in looking at this issue from two perspectives is the differentiation between the SMB and the larger enterprise: their respective "how, what and why" cloud use cases are very much different.  That's probably old news to most, but I usually don't think about the SMB in my daily goings-on.

Just like the iPhone and its adoption for "business use," the larger enterprise is exercising discretion in what's being dumped onto the cloud with a more measured approach due, in part, to managing risk and existing sunk costs, while the SMB is running to embrace it at full speed, not necessarily realizing the hidden costs.

/Hoff


Application Delivery Control: More Hardware Or Function Of the Hypervisor?

December 1st, 2008 3 comments

Update: Ooops.  I forgot to announce that I'm once again putting on my Devil's Advocacy cap. It fits nicely and the contrasting color makes my eyes pop. ;)

It should be noted that obviously I recognize that dedicated hardware offers performance and scale capabilities that in many cases are difficult (if not impossible) to replicate in virtualized software instantiations of the same functionality.

However, despite spending the best part of two years raising awareness as to the issues surrounding scalability, resiliency, performance, etc. of security software solutions in virtualized environments via my Four Horsemen of the Virtualization Security Apocalypse presentation, perception is different than reality, and many network capabilities will simply consolidate into the virtualization platforms until the next big swing of the punctuated equilibrium.

This is another classic example of "best of breed" versus "good enough," and in many cases this debate becomes a corner-case argument of speeds and feeds and the context/location of the network topology you're talking about. There's simply no way to sprinkle enough specialized hardware around to get the pervasive autonomics across the entire fabric/cloud without a huge chunk of it existing in the underlying virtualization platform or underlying network infrastructure.

THIS is the real scaling problem that software can address (by penetration) that specialized hardware cannot.

There will always be a need for dedicated hardware for specific needs, and if you have an infrastructure service issue that requires massive hardware to support traffic loads until the sophistication and technology within the virtualization layer catches up, by all means use it!  In fact, just today after writing this piece Joyent announced they use f5 BigIPs to power their IaaS cloud service…

In the longer term, however, application delivery control (ADC) will ultimately become a feature of the virtual networking stack provided by software as part of a larger provisioning/governance/autonomics challenge provided by the virtualization layer.  If you're going to get as close to this new atomic unit of measurement in the VM, you're going to have to decide where the network ends and the virtualization layer begins…across every cloud you expect to host your apps and those they may transit.


I've been reading Lori McVittie's f5 DevCentral blog for quite some time.  She and Greg Ness have been feeding off one another's commentary in their discussion on "Infrastructure 2.0" and the unique set of challenges that the dynamic nature of virtualization and cloud computing place on "the network" and the corresponding service layers that tie applications and infrastructure together.

The interesting thing to me is that while I do not disagree that the infrastructure must adapt to the liquidity, agility and flexibility enabled by virtualization and become more instrumented as to the things running atop it, much of the functionality Greg and Lori allude to will ultimately become a function of the virtualization and cloud layers themselves.

One of the more interesting memes is the one Lori summarized this morning in her post titled "Managing Virtual Infrastructure Requires an Application Centric Approach," wherein she lays out the case for infrastructure becoming "application" centric based upon the "highly dynamic" nature of virtualized and cloud computing environments:

…when applications are decoupled from the servers on which they are deployed and the network infrastructure that supports and delivers them, they cannot be effectively managed unless they are recognized as individual components themselves.

Traditional infrastructure and its associated management intrinsically ties applications to servers and servers to IP addresses and IP addresses to switches and routers. This is a tightly coupled model that leaves very little room to address the dynamic nature of a virtual infrastructure such as those most often seen in cloud computing models.

We've watched as SOA was rapidly adopted and organizations realized the benefits of a loosely coupled application architecture. We've watched the explosion of virtualization and the excitement of de-coupling applications from their underlying server infrastructure. But in the network infrastructure space, we still see applications tied to servers tied to IP addresses tied to switches and routers.

That model is broken in a virtual, dynamic infrastructure because applications are no longer bound to servers or IP addresses. They can be anywhere at any time, and infrastructure and management systems that insist on binding the two together are simply going to impede progress and make managing that virtual infrastructure even more painful.

It's all about the application. Finally.

…and yet the applications themselves, despite how integrated they may be, suffer from the same horizontal management problem as the network today does.  So I'm not so sure about the finality of the "it's all about the application" because we haven't even solved the "virtual infrastructure management" issues yet.

Bridging the gap between where we are today and the infrastructure 2.0/application-centric focus of tomorrow is illustrated nicely by Billy Marshall from rPath in his post titled "The Virtual Machine Tsunami," in which he describes how we're really still stuck being VM-centric as the unit measure of application management:

Bottom line, we are all facing an impending tsunami of VMs unleashed by an unprecedented liquidity in system capacity which is enabled by hypervisor based cloud computing. When the virtual machine becomes the unit of application management, extending the legacy, horizontal approaches for management built upon the concept of a physical host with a general purpose OS simply will not scale. The costs will skyrocket.

The new approach will have vertical management capability based upon the concept of an application as a coordinated set of version managed VMs. This approach is much more scalable for 2 reasons. First, the operating system required to support an application inside a VM is one-tenth the size of an operating system as a general purpose host atop a server. One tenth the footprint means one tenth the management burden – along with some related significant decrease in the system resources required to host the OS itself (memory, CPU, etc.). Second, strong version management across the combined elements of the application and the system software that supports it within the VM eliminates the unintended consequences associated with change. These unintended consequences yield massive expenses for testing and certification when new code is promoted from development to production across each horizontal layer (OS, middleware, application). Strong version management across these layers within an isolated VM eliminates these massive expenses.

So we still have all the problems of managing the applications atomically, but I think there's some general agreement between these two depictions.

However, where it gets interesting is where Lori essentially paints the case that "the network" today is unable to properly provide for the delivery of applications:

And that's what makes application delivery focused solutions so important to both virtualization and cloud computing models in which virtualization plays a large enabling role.

Because application delivery controllers are more platforms than they are devices; they are programmable, adaptable, and internally focused on application delivery, scalability, and security. They are capable of dealing with the demands that a virtualized application infrastructure places on the entire delivery infrastructure. Where simple load balancing fails to adapt dynamically to the ever changing internal network of applications both virtual and non-virtual, application delivery excels.

It is capable of monitoring, intelligently, the availability of applications not only in terms of whether it is up or down, but where it currently resides within the data center. Application delivery solutions are loosely coupled, and like SOA-based solutions they rely on real-time information about infrastructure and applications to determine how best to distribute requests, whether that's within the confines of a single data center or fifteen data centers.

Application delivery controllers focus on distributing requests to applications, not servers or IP addresses, and they are capable of optimizing and securing both requests and responses based on the application as well as the network.

They are the solution that bridges the gap that lies between applications and network infrastructure, and enables the agility necessary to build a scalable, dynamic delivery system suitable for virtualization and cloud computing.
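Before I get to my objection, the "route to the application, wherever it currently lives, rather than to a fixed server or IP" idea she's describing can be boiled down to a toy sketch (purely illustrative, with made-up names, and in no way how any real ADC is implemented):

```python
# Toy sketch of application-centric request routing; purely illustrative,
# not how any actual application delivery controller works internally.
import itertools

# The registry an "ADC" maintains: app name -> wherever instances live *now*.
# In a dynamic/virtualized environment these endpoints change constantly.
app_registry = {
    "crm": ["10.0.4.12:8080", "10.2.9.40:8080"],
    "billing": ["192.168.7.5:8443"],
}
_round_robin = {app: itertools.cycle(eps) for app, eps in app_registry.items()}

def route(app_name: str) -> str:
    """Pick a current endpoint for the named application, not a fixed server."""
    return next(_round_robin[app_name])

def relocate(app_name: str, new_endpoints: list[str]) -> None:
    """Called when a VM moves: the application stays addressable by name."""
    app_registry[app_name] = new_endpoints
    _round_robin[app_name] = itertools.cycle(new_endpoints)

print(route("crm"))                    # e.g. 10.0.4.12:8080
relocate("crm", ["10.9.1.77:8080"])    # the VM migrated somewhere else
print(route("crm"))                    # requests follow the app, not the address
```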

This is where I start to squint a little because Lori's really taking the notion of "application intelligence" and painting what amounts to a router/switch in an application delivery controller as a "platform" as she attempts to drive a wedge between an ADC and "the network."

Besides the fact that "the network" is also rapidly evolving to adapt to this more loosely-coupled model, and that the traditional networking functions and infrastructure service layers are becoming more integrated and aware thanks to the homogenizing effect of the hypervisor, I'll ask the question I asked Lori on Twitter this morning:

[Screenshot: my Twitter question to Lori]

Why won't this ADC functionality simply show up in the hypervisor?  If you ask me, that's exactly the goal.  vCloud, anyone?  Amazon EC2?  Azure?

If we take the example of Cisco and VMware, the coupled vision of the networking and virtualization 800 lb gorillas is exactly the same as she pens above; but it goes further because it addresses the end-to-end orchestration of infrastructure across the network, compute and storage fabrics.

So, why do we need yet another layer of network routers/switches called "application delivery controllers" as opposed to having this capability baked into the virtualization layer or ultimately the network itself?

That's the whole point of cloud computing and virtualization, right?  To decouple the resources from the hardware delivering them while putting more and more of that functionality into the virtualization layer?

So, can you really make the case for deploying more "application-centric" routers/switches (which is what an application delivery controller is) regardless of how aware it may be?

/Hoff

Cloud Computing Security: From DDoS (Distributed Denial Of Service) to EDoS (Economic Denial of Sustainability)

November 27th, 2008 12 comments

It's Thanksgiving here in the U.S., so in between baking, roasting and watching Rick Astley rickroll millions in the Macy's Thanksgiving Day Parade, I had a thought about how the utility and agility of the cloud computing models such as Amazon AWS (EC2/S3) and the pricing models that go along with them can actually pose a very nasty risk to those who use the cloud to provide service.

That thought — in between vigorous whisking during cranberry sauce construction — got me noodling about how the pay-as-you-go model could be used for nefarious means.

Specifically, this usage-based model potentially enables $evil_person who knows that a service is cloud-based to manipulate service usage billing in orders of magnitude that could be disguised easily as legitimate use of the service but drive costs to unmanageable levels. 

If you take Amazon's AWS usage-based pricing model (check out the cost calculator here), one might envision that instead of worrying about a lack of resources, the elasticity of the cloud could actually provide a surplus of compute, network and storage utility that could be just as bad as a deficit.

Instead of worrying about Distributed Denial of Service (DDoS) attacks from botnets and the like, imagine having to worry about delicately balancing forecasted need with capabilities like Cloudbursting to deal with a botnet designed to make seemingly legitimate requests for service to generate an economic denial of sustainability (EDoS) — where the dynamism of the infrastructure allows scaling of service beyond the economic means of the vendor to pay their cloud-based service bills.
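To put some illustrative (and entirely made-up) numbers on that, here's a quick sketch of how a usage-based bill scales when a botnet generates traffic that looks legitimate; the rates and request profile are hypothetical, not anyone's actual price list:

```python
# Illustrative EDoS arithmetic; all rates below are made up.
REQUESTS_PER_DAY_NORMAL = 1_000_000
ATTACK_MULTIPLIER = 50                  # botnet drives 50x the normal, legit-looking traffic

KB_TRANSFERRED_PER_REQUEST = 50
PRICE_PER_GB_TRANSFER = 0.17            # hypothetical usage-based bandwidth rate
INSTANCE_HOURS_PER_MILLION_REQS = 24    # capacity "burst" needed to serve 1M requests/day
PRICE_PER_INSTANCE_HOUR = 0.10

def monthly_bill(requests_per_day: float) -> float:
    """Approximate monthly bill under a pure pay-as-you-go model."""
    transfer_gb = requests_per_day * 30 * KB_TRANSFERRED_PER_REQUEST / (1024 * 1024)
    instance_hours = requests_per_day / 1_000_000 * INSTANCE_HOURS_PER_MILLION_REQS * 30
    return transfer_gb * PRICE_PER_GB_TRANSFER + instance_hours * PRICE_PER_INSTANCE_HOUR

normal = monthly_bill(REQUESTS_PER_DAY_NORMAL)
under_attack = monthly_bill(REQUESTS_PER_DAY_NORMAL * ATTACK_MULTIPLIER)
print(f"normal month:        ${normal:,.2f}")
print(f"under 'legit' flood: ${under_attack:,.2f}")
# Availability never suffers because the cloud happily scales, but the bill
# scales right along with it, and that's the sustainability problem.
```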

Imagine the shock of realizing that the outsourcing to the cloud to reduce CapEx and move to an OpEx model just meant that while availability, confidentiality and integrity of your service and assets are solid, your sustainability and survivability are threatened.

I know there exists the ability to control instance sprawl and constrain costs, but imagine if this is managed improperly or inexactly because we can't distinguish between legitimate and targeted resource consumption from an "attack."

If you're in the business of ensuring availability of service for large events on the web for a timed event, are you really going to limit service when you think it might drive revenues?

I think this is where service governors will have to get much more intelligent regarding how services are being consumed and how that affects the transaction supply chain and embed logic that takes into account financial risk when scaling to ensure EDoS doesn't kill you.

I can't say that I haven't had similar concerns when dealing with scalability and capacity planning in hosted or dedicated co-location environments, but generally storage and compute were not billable service options I had to worry about, only network bandwidth.

Back to the mashed potatoes.

/Hoff

The Big Four Cloud Computing Providers: Security Compared (Part I)

November 26th, 2008 1 comment

James Urquhart posted a summary a week or so ago of what he described as the "Big 4" players in Cloud Computing.  It was a slightly humorous pass at describing their approaches and offerings:

Below is a table that lists these key players, and compares their offerings from the perspective of four core defining aspects of clouds. As this is a comparison of apples to oranges to grapefruit to perhaps pastrami, it is not meant to be a ranking of the participants, nor a judgement of when to choose one over the other. Instead, what I hope to do here is to give a working sysadmin's glimpse into what these four clouds are about, and why they are each unique approaches to enterprise cloud computing in their own right.

James provided quite a bit more (serious) detail in the text below his table, which I present to you here, tarted up with a column that James left off and I've added, titled "Security."

It's written in the same spirit as James' original, so feel free to take this with an equally well-provisioned grain of NaCl.  I'll be adding my own perfunctory comments with a little more detail shortly:

[Table: James' Big 4 comparison, with my added "Security" column]

The point here is that the quantification of what "security" means in the cloud is as abstracted and varied as the platforms that provide the service.  We're essentially being asked to take for granted and trust that the underlying mechanicals are sound and secure while not knowing where or what they are.

We don't do that with our physically-tethered operating systems today, so why should we do so with virtualization platform hypervisors and the infrastructure "data center operating systems" of the cloud?  The transparency provided by dedicated infrastructure is being obscured by virtualization and the fog of the cloud.  It's a squeezing the balloon problem.

And so far as the argument goes toward suggesting that this is no different than what we deal with in terms of SaaS today, the difference between what we might define as legacy SaaS and "cloud" is that generally it's someone else's apps and your data in the former (ye olde ASP model.)

In the case of the "cloud," it could be a mixture of applications and data, some of which you own, some you don't and some you're simply not even aware of, perhaps running in part on your infrastructure and someone else's.

It should be noted also that not all cloud providers (excluding those above) even own and operate the platforms they provide you service on…they, in turn, could be utilizing shared infrastructure to provide you service, so cross-pollination of service provisioning could affect portability, reliability and security.

That is why the Big4 above stand up their own multi-billion dollar data centers; they keep the architecture proprietary so you don't have to; lots of little clouds everywhere.

/Hoff

P.S. If you're involved with platform security from any of the providers above, do contact me because I'm going to be expounding upon the security "layers" of each of these providers in as much detail as I have here shortly.  I'd suggest you might be interested in assuring it's as complete and accurate as possible 😉

Cloud Providers Are Better At Securing Your Data Than You Are…

November 21st, 2008 4 comments

"Cloud Providers Are Better At Securing Your Data Than You Are…"

To some, this is a contentious point while to others it seems entirely logical.

I must tell you that I've witnessed this very assertion as it has been raised more times in the last few days than I can count.

Before I get into any more juicy bits regarding this topic, I wonder if you wouldn't mind popping over and reading a blog post I wrote in August, 2007 titled "On-Demand SaaS Vendors Able to Secure Assets Better than Customers?"

Come to think of it, you can read the follow-on post to that one which clearly indicated my point when Salesforce.com and Monster.com (you know, those so-called "Cloud" providers) were breached.

Forgot about those breaches, did you?  Oh, that must have been because they were SaaS providers and not Cloud providers at the time.  Gotcha.

As you read these posts, first do so within the context of what we've come to know as software as a service (SaaS.)  Then kindly re-read them and substitute 'SaaS' with 'Cloud,' won't you?

Thanks.

I have more, but I'll wait till you're done.

/Hoff

PDP Says “The Cloud Is Not That Insecure” & Implies Security Concerns Are Trivial…

November 21st, 2008 No comments

I haven't been whipped into this much of a frenzy since Hormel changed the labels on the SPAM cans in Hawaii.

PDP (of gnucitizen fame) masterfully stitched together a collection of mixed metaphors, generalizations, reductions to the ridiculous and pejoratives to produce his magnum opus on cloud computing (in)security titled "The Cloud Is Not That Insecure."

Oh.

Since I have spent the better part of my security career building large "cloud-like" services and the products that help, at a minimum, to secure them, I feel at least slightly qualified to dispute many of his points, the bulk of which are really focused on purely technology-driven mechanical analogies and platforms rather than items such as the operational, trust, political, jurisdictional, regulatory, organizational and economical issues that really go toward the "security" (or lack thereof) of "cloud-based" service.

Speaking of which, PDP's definition of the cloud is about as abstract as you can get:

"Cloud technologies are in fact no different than non-cloud technologies. Practically they are the same. I mean the term cloud computing
is quite broad and perhaps it is even a buzword rather than a
well-thought term which describes a particular study of the IT field.
To me cloud computing refers to the process of outsourcing computer cycles and memory keeping scalability in mind."

Well, I'm glad we cleared that up.

At any rate, it's a seriously humorous read that would have me taking apart many of his contradictory assumptions and assertions were it not for the fact that I have actual work to do.  So, in the interest of time, I'll offer up his conclusion and you can go back and read the rest:

So, is the cloud secure? I would say yes if you know what you are doing. A couple of posts back I mentioned that cloud security matters. It still does. Cloud technologies are quite secure because we tend not to trust them. However, because cloud computing can be quite confusing, you still need to spend time in making sure that all the blocks fit together nicely.

So, there you have it.  Those of you who "know what you are doing" are otay and thanks to security by obscurity due to a lack of trust, cloud computing is secure.  That's not confusing at all…

This probably won't end well, but…

Sigh.

/Hoff

CohesiveFT VPN-Cubed: Not Your Daddy’s Encrypted Tunnel

November 14th, 2008 4 comments

I had a briefing with Patrick Kerpan and Craig Heimark of CohesiveFT last week in response to some comments that Craig had left on my blog post regarding PCI compliance in the Cloud, here.

I was intrigued, albeit skeptically, with how CohesiveFT was positioning the use of VPNs within the cloud and differentiating their solution from the very well established IPSec and SSL VPN capabilities we all know and love.  What's so special about tunnels across the Intertubes?  Been there, done that, bought the T-Shirt, right?

So I asked the questions…

I have to say that unlike many other companies rushing to associate their brand by rubber cementing the word "cloud" and "security" onto their product names and marketing materials, CohesiveFT spent considerable time upfront describing what they do not do so as to effectively establish what they are and what they do.  I'll let one of their slides do the talking:


[Slide: what CohesiveFT is and is not]

Okay, so they're not a cloud, but they provide cloud and virtualization services…and VPN-Cubed provides some layer of security along with their products and services…check. But…

Digging a little deeper, I still sought to understand why I couldn't just stand up an IPSec tunnel from my corporate datacenter to one or more cloud providers where my assets and data were hosted.  I asked for two things: (1) A couple of sentences summarizing their elevator pitch for VPN-Cubed and (2) a visual representation of what this might look like.

Here's what I got as an answer to question (1):

"VPN-Cubed is a novel implementation of VPN concepts which provides a customer controlled security perimeter within a third-party controlled (cloud, utility infra, hosting provider) computing facility, across multiple third-party controlled computing facilities, or for use in private infrastructure.  It enables customer control of their infrastructure topology by allowing control of the network addressing scheme, encrypted communications, and the use of what might normally be unrouteable protocols such as UDP Multicast."

Here are two great visuals to address question (2):


[Diagram: VPN-Cubed VPN topology]

[Diagram: VPN-Cubed clusters extended across clouds]

So the difference between a typical VPN and VPN-Cubed comes down to being able to securely extend your "internal cloud infrastructure" in your datacenters (gasp! I said it) in a highly-available manner to include your externally hosted assets which in turn run on infrastructure you don't own.  You can't stand up an ASA or Neoteris box on a network you can't get to.  The VPN-Cubed Managers are VM's/VA's that run as a complement to your virtual servers/machines hosted by your choice of one or multiple cloud providers.

They become the highly-available, multiprotocol arbiters of access via standardized IPSec protocols but do so in a way that addresses the dynamic capabilities of the cloud which includes service governance, load, and "cloudbursting" failover between clouds — in a transparent manner.

Essentially you get secure access to your critical assets utilizing an infrastructure independent solution, extending the VLAN segmentation, isolation and security capabilities your providers may put in place while also controlling your address spaces within the cloudspaces encompassing your VM's "behind" the VPN Managers.

VPN-Cubed is really a prime example of the collision space of dynamic application/service delivery, virtualization, security and cloud capacity governance.  It speaks a lot to re-perimeterization that I've been yapping about for quite some time and hints at Greg Ness' Infrastructure 2.0 meme.

Currently VPN-Cubed is bundled as a package which includes both product and services and supports the majority of market-leading virtualization formats, operating systems and cloud providers such as Amazon EC2, Flexiscale, GoGrid, Mosso, etc.

It's a very slick offering.

/Hoff

Cloud Security Macro Layers? I’ll Take “It’ll Never Sell” For $1000, Alex…

November 13th, 2008 2 comments

Mogull commented yesterday on my post regarding TCG's IF-MAP and remarked that in discussing cloud security and security models, the majority of folks, myself included, were focusing on the network:

Chris’s posting, and most of the ones I’ve seen, are heavily focused on network security concepts as they relate to the cloud. But if we look at cloud computing at the macro level, there are additional layers which are just as critical (in no particular order):


  • Network: The usual network security controls.
  • Service: Security around the exposed APIs and services.
  • User: Authentication- which in the cloud world, needs to move to more adaptive authentication, rather than our current static username/password model.
  • Transaction: Security controls around individual transactions- via transaction authentication, adaptive authorization, and other approaches.
  • Data: Information-centric security controls for cloud based data. How’s that for buzzword bingo? Okay, this actually includes security controls for the back-end data, distributed data, and any content exchanged with the user.

I'd say that's a reasonable assertion and a valid set of additional "layers."  They're also not especially unique, and as such, I think Rich is himself a little disoriented by the fog of the cloud because, as you'll read, the same could be said of any networked technology.

The reason we start with the network and usually find ourselves back where we started in this discussion is because the other stuff Rich mentions is just too damned hard, costs too much, is difficult to sell, isn't standardized, is generally platform dependent and is really intrusive.  See this post (Security Will Not End Up In the Network) as an example.

Need proof of how good ideas like this get mangled?  How about Web 2.0 or SOA, which are, for lack of a better description, exactly what Rich described in his model above: loosely coupled functional components of a modular architecture. 

We haven't even gotten close to having this solved internally on our firewalled enterprise LANs so it's somewhat apparent why it might appear to be missing in conversations regarding "the cloud."  It shouldn't be, but it is. 

It should be noted, however, that there is a ton of work, solutions and initiatives that exist and continue to address these topics; it's just not a priority, as is evidenced by how people exercise their wallets.

And finally:

Down the road we’ll dig into these in more detail, but any time we start distributing services and functionality over an open public network with no inherent security controls, we need to focus on the design issues and reduce design flaws as early as possible. We can’t just look at this as a network problem- our authentication, authorization, information, and service (layer 7) controls are likely even more important.

I believe we call this thing of which he speaks, "the Internet."  I think we're about 20 years late. 😉

/Hoff


I Can Haz TCG IF-MAP Support In Your Security Product, Please…

November 10th, 2008 3 comments

In my previous post titled "Cloud Computing: Invented By Criminals, Secured By ???" I described the need for a new security model, methodology and set of technologies in the virtualized and cloud computing realms built to deal with the dynamic and distributed nature of evolving computing:

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.

Greg Ness from Infoblox reminded me in the comments of that post of something I was very excited about when it became news at InterOp this last April: the Trusted Computing Group's (TCG) extension to the Trusted Network Connect (TNC) architecture called IF-MAP.

IF-MAP is a standardized real-time publish/subscribe/search mechanism which utilizes a client/server, XML-based SOAP protocol to provide information about network security objects and events, including their state and activity:

IF-MAP extends the TNC architecture to support standardized, dynamic data interchange among a wide variety of networking and security components, enabling customers to implement multi-vendor systems that provide coordinated defense-in-depth.
 
Today’s security systems – such as firewalls, intrusion detection and prevention systems, endpoint security systems, data leak protection systems, etc. – operate as “silos” with little or no ability to “see” what other systems are seeing or to share their understanding of network and device behavior. 

This limits their ability to support coordinated defense-in-depth.  In addition, current NAC solutions are focused mainly on controlling network access, and lack the ability to respond in real-time to post-admission changes in security posture or to provide visibility and access control enforcement for unmanaged endpoints.  By extending TNC with IF-MAP, the TCG is providing a standard-based means to address these issues and thereby enable more powerful, flexible, open network security systems.

While the TNC was initially designed to support NAC solutions, extending the capabilities to any security product to subscribe to a common telemetry and information exchange/integration protocol is a fantastic idea.

[Diagram: TNC architecture with IF-MAP]

I'm really interested in how many vendors outside of the NAC space are including IF-MAP in their roadmaps. While IF-MAP has potential in conventional non-virtualized infrastructure, I see a tremendous need for it in our move to Infrastructure 2.0 with virtualization and Cloud Computing. 

Integrating, for example, IF-MAP with VM-Introspection capabilities (in VMsafe, XenAccess, etc.) would be fantastic as you could tie the control planes of the hypervisors, management infrastructure, and provisioning/governance engines with those of security and compliance in near real-time.
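As a rough sketch of that coordination pattern, here's a toy, in-memory analogue of a publish/subscribe/search metadata map. To be clear, this is not the actual IF-MAP XML/SOAP wire protocol or schema, and the introspection agent and firewall are hypothetical, but it illustrates why a shared, subscribable view of security state is so appealing:

```python
# Toy in-memory analogue of the publish/subscribe/search pattern that IF-MAP
# standardizes. Illustrative only; the real protocol is XML/SOAP with a
# defined schema, not Python callbacks.
from collections import defaultdict
from typing import Callable

class MetadataMap:
    def __init__(self) -> None:
        self._metadata: dict[str, dict] = {}
        self._subscribers: dict[str, list[Callable[[str, dict], None]]] = defaultdict(list)

    def publish(self, identifier: str, metadata: dict) -> None:
        """A sensor (e.g. a hypervisor introspection agent) publishes state."""
        self._metadata[identifier] = metadata
        for callback in self._subscribers[identifier]:
            callback(identifier, metadata)

    def subscribe(self, identifier: str, callback: Callable[[str, dict], None]) -> None:
        """Another control point asks to be told when that identifier changes."""
        self._subscribers[identifier].append(callback)

    def search(self, key: str, value) -> list[str]:
        """Find every identifier whose published metadata matches key == value."""
        return [ident for ident, md in self._metadata.items() if md.get(key) == value]

mapserver = MetadataMap()

def firewall_policy(ident: str, md: dict) -> None:
    # A hypothetical firewall reacting to posture changes published by a
    # hypothetical VM-introspection agent.
    if md["posture"] == "compromised":
        print(f"firewall: quarantine {ident}")
    else:
        print(f"firewall: allow {ident}")

mapserver.subscribe("vm-42", firewall_policy)
mapserver.publish("vm-42", {"posture": "healthy", "ip": "10.0.4.12"})
mapserver.publish("vm-42", {"posture": "compromised", "ip": "10.0.4.12"})
print(mapserver.search("posture", "compromised"))   # ['vm-42']
```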

You can read more about the TCG's TNC IF-MAP specification here.

/Hoff