
Archive for the ‘Virtualization’ Category

CloudSQL – Accessing Datastores in the Sky using SQL…

December 2nd, 2008 5 comments
I think this is definitely a precursor of things to come and introduces some really interesting security discussions to be had regarding the portability, privacy and security of datastores in the cloud.

Have you heard of Zoho?  No?  Zoho is a SaaS vendor that describes itself thusly:

Zoho is a suite of online applications (services) that you sign up for and access from our Website. The applications are free for individuals and some have a subscription fee for organizations. Our vision is to provide our customers (individuals, students, educators, non-profits, small and medium sized businesses) with the most comprehensive set of applications available anywhere (breadth); and for those applications to have enough features (depth) to make your user experience worthwhile.

Today, Zoho announced the availability of CloudSQL, which is middleware that allows customers who use Zoho's SaaS apps to "…access their data on Zoho SaaS applications using SQL queries."

From their announcement:

Zoho CloudSQL is a technology that allows developers to interact with business data stored across Zoho Services using the familiar SQL language. In addition, JDBC and ODBC database drivers make writing code a snap – just use the language construct and syntax you would use with a local database instance. Using the latest Web technology no longer requires throwing away years of coding and learning.

Zoho CloudSQL allows businesses to connect and integrate the data and applications they have in Zoho with the data and applications they have in house, or even with other SaaS services. Unlike other methods for accessing data in the cloud, CloudSQL capitalizes on enterprise developers’ years of knowledge and experience with the widely‐used SQL language. This leads to faster deployments and easier (read: less expensive) integration projects.

Basically, CloudSQL is interposed between the suite of Zoho applications and the backend datastores and functions as an intermediary receiving SQL queries against the pooled data sets using standard SQL commands and dialects. Click on the diagram below for a better idea of what this looks like.

[Diagram: Zoho CloudSQL sitting between the Zoho application suite and its backend datastores]
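To make the developer-experience claim concrete, here's a minimal sketch of what querying a Zoho datastore through the advertised ODBC driver might look like from Python using pyodbc. The driver name, credentials and the "Invoices" table are purely hypothetical placeholders (the announcement only promises JDBC/ODBC drivers, not these specifics); the point is that the code is indistinguishable from querying a local database, parameterized query and all.

```python
# Hypothetical example: querying a Zoho CloudSQL datastore over ODBC.
# The driver/DSN string and the "Invoices" table are assumptions for
# illustration only; consult the actual CloudSQL driver documentation.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={Zoho CloudSQL};"          # hypothetical ODBC driver name
    "UID=your_zoho_account;"
    "PWD=your_api_key_or_password;"
)

cursor = conn.cursor()

# Parameterized query: the same habit that guards against SQL injection
# locally matters at least as much when the datastore lives in the cloud.
cursor.execute(
    "SELECT customer, amount FROM Invoices WHERE amount > ?",
    (1000,),
)

for customer, amount in cursor.fetchall():
    print(customer, amount)

conn.close()
```

The interesting part isn't the code itself; it's that this familiar snippet now executes against data living entirely in someone else's cloud, which is exactly what makes the security questions below worth asking.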
What's really interesting about allowing native SQL access is that it enables much easier information interchange between apps/databases on an enterprise's "private cloud(s)" and the Zoho "public" cloud.

Further, it means that your data is more "portable" as it can be backed up, accessed, and processed by applications other than Zoho's.  Imagine if they were to extend the SQL exposure to other cloud/SaaS providers…this is where it will get really juicy. 

This sort of thing *will* happen.  Customers will see the absolute utility of exposing their cloud-based datastores and sharing them amongst business partners, much in the spirit of how it's done today, but with the datastores (or chunks of them) located off-premises.

That's all good and exciting, but obviously security questions/concerns immediately surface regarding such things as: authentication, encryption, access control, input sanitization, privacy and compliance…

Today our datastores typically live inside the fortress with multiple layers of security and proxied access from applications, shielded from direct access, and yet we still have basic issues with attacks such as SQL injection.  Imagine how much fun we can have with this!

The best I could find regarding security and Zoho came from their FAQ, which doesn't exactly inspire confidence given that they address logical/software security by suggesting that anti-virus software is the best line of defense for protecting your data, that "data encryption" will soon be offered as an "option," and (implied) that SSL will make you secure:

6. Is my data secured?

Many people ask us this question. And rightly so; Zoho has invested alot of time and money to ensure that your information is secure and private. We offer security on multiple levels including the physical, software and people/process levels; In fact your data is more secure than walking around with it on a laptop or even on your corporate desktops.

Physical: Zoho servers and infrastructure are located in the most secure types of data centers that have multiple levels of restrictions for access including: on-premise security guards, security cameras, biometric limited access systems, and no signage to indicate where the buildings are, bullet proof glass, earthquake ratings, etc.

Hardware: Zoho employs state of the art firewall protection on multiple levels eliminating the possibility of intrusion from outside attacks

Logical/software protection: Zoho deploys anti-virus software and scans all access 24 x7 for suspicious traffic and viruses or even inside attacks; All of this is managed and logged for auditing purposes.

Process: Very few Zoho staff have access to either the physical or logical levels of our infrastructure. Your data is therefore secure from inside access; Zoho performs regular vulnerability testing and is constantly enhancing its security at all levels. All data is backed up on multiple servers in multiple locations on a daily basis. This means that in the worst case, if one data center was compromised, your data could be restored from other locations with minimal disruption. We are also working on adding even more security; for example, you will soon be able to select a "data encryption" option to encrypt your data en route to our servers, so that in the unlikely event of your own computer getting hacked while using Zoho, your documents could be securely encrypted and inaccessible without a "certificate" which generally resides on the server away from your computer.

Fun times ahead, folks.

/Hoff

Application Delivery Control: More Hardware Or Function Of the Hypervisor?

December 1st, 2008 3 comments

Update: Oops.  I forgot to announce that I'm once again putting on my Devil's Advocacy cap. It fits nicely and the contrasting color makes my eyes pop. ;)

It should be noted that obviously I recognize that dedicated hardware offers performance and scale capabilities that in many cases are difficult (if not impossible) to replicate in virtualized software instantiations of the same functionality.

However, despite spending the best part of two years raising awareness as to the issues surrounding scalability, resiliency, performance, etc. of security software solutions in virtualized environments via my Four Horsemen of the Virtualization Security Apocalypse presentation, perception is different than reality and many network capabilities will simply consolidate into the virtualization platforms until the next big swing of the punctuated equilibrium.

This is another classic example of "best of breed" versus "good enough" and in many cases this debate becomes a corner-case argument of speeds and feeds and the context/location of the network topology you're talking about. There's simply no way to sprinkle enough specialized hardware around to get the pervasive autonomics across the entire fabric/cloud without a huge chunk of it existing in the underlying virtualization platform or underlying network infrastructure.

THIS is the real scaling problem that software can address (by penetration) that specialized hardware cannot.

There will always be a need for dedicated hardware for specific needs, and if you have an infrastructure service issue that requires massive hardware to support traffic loads until the sophistication and technology within the virtualization layer catches up, by all means use it!  In fact, just today after writing this piece Joyent announced they use f5 BigIP's to power their IaaS cloud service…

In the longer term, however, application delivery control (ADC) will ultimately become a feature of the virtual networking stack provided by software as part of a larger provisioning/governance/autonomics challenge provided by the virtualization layer.  If you're going to get as close to this new atomic unit of measurement in the VM, you're going to have to decide where the network ends and the virtualization layer begins…across every cloud you expect to host your apps and those they may transit.


I've been reading Lori McVittie's f5 DevCentral blog for quite some time.  She and Greg Ness have been feeding off one another's commentary in their discussion on "Infrastructure 2.0" and the unique set of challenges that the dynamic nature of virtualization and cloud computing place on "the network" and the corresponding service layers that tie applications and infrastructure together.

The interesting thing to me is that while I do not disagree that the infrastructure must adapt to the liquidity, agility and flexibility enabled by virtualization and become more instrumented as to the things running atop it, much of the functionality Greg and Lori allude to will ultimately become a function of the virtualization and cloud layers themselves*.

One of the more interesting memes is the one Lori summarized this morning in her post titled "Managing Virtual Infrastructure Requires an Application Centric Approach," wherein she lays out the case for infrastructure becoming "application" centric based upon the "highly dynamic" nature of virtualized and cloud computing environments:

…when applications are decoupled from the servers on which they are deployed and the network infrastructure that supports and delivers them, they cannot be effectively managed unless they are recognized as individual components themselves.

Traditional infrastructure and its associated management intrinsically ties applications to servers and servers to IP addresses and IP addresses to switches and routers. This is a tightly coupled model that leaves very little room to address the dynamic nature of a virtual infrastructure such as those most often seen in cloud computing models.

We've watched as SOA was rapidly adopted and organizations realized the benefits of a loosely coupled application architecture. We've watched the explosion of virtualization and the excitement of de-coupling applications from their underlying server infrastructure. But in the network infrastructure space, we still see applications tied to servers tied to IP addresses tied to switches and routers.

That model is broken in a virtual, dynamic infrastructure because applications are no longer bound to servers or IP addresses. They can be anywhere at any time, and infrastructure and management systems that insist on binding the two together are simply going to impede progress and make managing that virtual infrastructure even more painful.

It's all about the application. Finally.

…and yet the applications themselves, despite how integrated they may be, suffer from the same horizontal management problem as the network today does.  So I'm not so sure about the finality of the "it's all about the application" because we haven't even solved the "virtual infrastructure management" issues yet.

Bridging the gap between where we are today and the infrastructure 2.0/application-centric focus of tomorrow is illustrated nicely by Billy Marshall from rPath in his post titled "The Virtual Machine Tsunami," in which he describes how we're really still stuck with the VM as the unit of application management:

Bottom line, we are all facing an impending tsunami of VMs unleashed by an unprecedented liquidity in system capacity which is enabled by hypervisor based cloud computing. When the virtual machine becomes the unit of application management, extending the legacy, horizontal approaches for management built upon the concept of a physical host with a general purpose OS simply will not scale. The costs will skyrocket.

The new approach will have vertical management capability based upon the concept of an application as a coordinated set of version managed VMs. This approach is much more scalable for 2 reasons. First, the operating system required to support an application inside a VM is one-tenth the size of an operating system as a general purpose host atop a server. One tenth the footprint means one tenth the management burden – along with some related significant decrease in the system resources required to host the OS itself (memory, CPU, etc.). Second, strong version management across the combined elements of the application and the system software that supports it within the VM eliminates the unintended consequences associated with change. These unintended consequences yield massive expenses for testing and certification when new code is promoted from development to production across each horizontal layer (OS, middleware, application). Strong version management across these layers within an isolated VM eliminates these massive expenses.

So we still have all the problems of managing the applications atomically, but I think there's some general agreement between these two depictions.

However, where it gets interesting is where Lori essentially paints the case that "the network" today is unable to properly provide for the delivery of applications:

And that's what makes application delivery focused solutions so important to both virtualization and cloud computing models in which virtualization plays a large enabling role.

Because application delivery controllers are more platforms than they are devices; they are programmable, adaptable, and internally focused on application delivery, scalability, and security. They are capable of dealing with the demands that a virtualized application infrastructure places on the entire delivery infrastructure. Where simple load balancing fails to adapt dynamically to the ever changing internal network of applications both virtual and non-virtual, application delivery excels.

It is capable of monitoring, intelligently, the availability of applications not only in terms of whether it is up or down, but where it currently resides within the data center. Application delivery solutions are loosely coupled, and like SOA-based solutions they rely on real-time information about infrastructure and applications to determine how best to distribute requests, whether that's within the confines of a single data center or fifteen data centers.

Application delivery controllers focus on distributing requests to applications, not servers or IP addresses, and they are capable of optimizing and securing both requests and responses based on the application as well as the network.

They are the solution that bridges the gap that lies between applications and network infrastructure, and enables the agility necessary to build a scalable, dynamic delivery system suitable for virtualization and cloud computing.
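To ground what "distributing requests to applications, not servers or IP addresses" means, here is a deliberately tiny, hypothetical sketch of an application-centric registry: endpoints register and deregister as VMs appear, migrate or die, and requests are routed against whatever the application name currently maps to. Every name in it is invented for illustration; it sketches the pattern Lori describes, not anyone's product.

```python
# Toy sketch of application-centric request distribution: requests are
# addressed to an application name, and the mapping from application to
# live endpoints is updated as VMs appear, move, or disappear.
import itertools
from collections import defaultdict

class AppRegistry:
    def __init__(self):
        self._endpoints = defaultdict(list)   # app name -> [(ip, port), ...]
        self._rr = {}                          # app name -> round-robin iterator

    def register(self, app, endpoint):
        """Called when a VM hosting `app` comes up (or migrates)."""
        self._endpoints[app].append(endpoint)
        self._rr[app] = itertools.cycle(self._endpoints[app])

    def deregister(self, app, endpoint):
        """Called when a VM goes away; the app itself is unaffected."""
        self._endpoints[app].remove(endpoint)
        self._rr[app] = itertools.cycle(self._endpoints[app]) if self._endpoints[app] else None

    def route(self, app):
        """Pick a current endpoint for `app`, wherever it happens to live."""
        if not self._endpoints[app]:
            raise LookupError(f"no live instances of {app}")
        return next(self._rr[app])

registry = AppRegistry()
registry.register("billing", ("10.0.1.14", 8080))
registry.register("billing", ("10.0.7.3", 8080))   # same app, different host
print(registry.route("billing"))                    # caller never names a server
```

Whether that bookkeeping lives in a dedicated ADC appliance, in the hypervisor's virtual switch, or in "the network" itself is exactly the question at hand.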

This is where I start to squint a little, because Lori's really taking the notion of "application intelligence" and painting what amounts to a router/switch (an application delivery controller) as a "platform" as she attempts to drive a wedge between an ADC and "the network."

Besides the fact that "the network" is also rapidly evolving to adapt to this more loosely-coupled model and the virtualization layer, and that the traditional networking functions and infrastructure service layers are becoming more integrated and aware thanks to the homogenizing effect of the hypervisor, I'll ask the question I asked Lori on Twitter this morning:


Why won't this ADC functionality simply show up in the hypervisor?  If you ask me, that's exactly the goal.  vCloud, anyone?  Amazon EC2?  Azure?

If we take the example of Cisco and VMware, the coupled vision of the networking and virtualization 800 lb gorillas is exactly the same as she pens above; but it goes further because it addresses the end-to-end orchestration of infrastructure across the network, compute and storage fabrics.

So, why do we need yet another layer of network routers/switches called "application delivery controllers" as opposed to having this capability baked into the virtualization layer or ultimately the network itself?

That's the whole point of cloud computing and virtualization, right?  To decouple the resources from the hardware delivering them by putting more and more of that functionality into the virtualization layer?

So, can you really make the case for deploying more "application-centric" routers/switches (which is what an application delivery controller is) regardless of how aware they may be?

/Hoff

Cloud Computing Security: From DDoS (Distributed Denial Of Service) to EDoS (Economic Denial of Sustainability)

November 27th, 2008 12 comments

It's Thanksgiving here in the U.S., so in between baking, roasting and watching Rick Astley rickroll millions in the Macy's Thanksgiving Day Parade, I had a thought about how the utility and agility of cloud computing models such as Amazon AWS (EC2/S3), and the pricing models that go along with them, can actually pose a very nasty risk to those who use the cloud to provide service.

That thought — in between vigorous whisking during cranberry sauce construction — got me noodling about how the pay-as-you-go model could be used for nefarious means.

Specifically, this usage-based model potentially enables $evil_person who knows that a service is cloud-based to manipulate service usage billing in orders of magnitude that could be disguised easily as legitimate use of the service but drive costs to unmanageable levels. 

If you take Amazon's AWS usage-based pricing model (check out the cost calculator here), one might envision that instead of worrying about a lack of resources, the elasticity of the cloud could actually provide a surplus of compute, network and storage utility that could be just as bad as a deficit.

Instead of worrying about Distributed Denial of Service (DDoS) attacks from botnets and the like, imagine having to worry about delicately balancing forecasted need with capabilities like Cloudbursting to deal with a botnet designed to make seemingly legitimate requests for service to generate an economic denial of sustainability (EDoS), where the dynamism of the infrastructure allows scaling of service beyond the economic means of the vendor to pay their cloud-based service bills.

Imagine the shock of realizing that the outsourcing to the cloud to reduce CapEx and move to an OpEx model just meant that while availability, confidentiality and integrity of your service and assets are solid, your sustainability and survivability are threatened.

I know there exists the ability to control instance sprawl and constrain costs, but imagine if this is managed improperly or inexactly because we can't distinguish between legitimate and targeted resource consumption from an "attack."

If you're in the business of ensuring availability of service for a large, timed event on the web, are you really going to limit service when you think it might drive revenues?

I think this is where service governors will have to get much more intelligent regarding how services are being consumed and how that affects the transaction supply chain, and embed logic that takes into account financial risk when scaling to ensure EDoS doesn't kill you.
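As a purely illustrative sketch of what "embedding financial risk into the scaling decision" might look like, here's a toy governor that weighs projected spend against a budget ceiling before adding instances. The prices, thresholds and function names are all invented for the example; the point is simply that the scale-out decision consults economics, not just utilization.

```python
# Toy "EDoS-aware" scaling governor: scale out on load, but refuse (and
# alert) once the projected hourly spend exceeds a budget ceiling.
# All numbers and names here are illustrative assumptions.
HOURLY_COST_PER_INSTANCE = 0.40     # e.g. a small on-demand instance
HOURLY_BUDGET_CEILING    = 50.00    # what the business can sustain per hour

def alert_operations(projected_spend):
    print(f"EDoS guard: projected spend ${projected_spend:.2f}/hr exceeds budget")

def scaling_decision(current_instances, avg_utilization):
    """Return the number of instances to add (0 means hold steady)."""
    if avg_utilization < 0.75:
        return 0                                    # no pressure, do nothing

    desired = current_instances + max(1, current_instances // 4)
    projected_spend = desired * HOURLY_COST_PER_INSTANCE

    if projected_spend > HOURLY_BUDGET_CEILING:
        # Legitimate surge and EDoS look identical at this point; a human
        # (or a smarter traffic/fraud classifier) has to break the tie.
        alert_operations(projected_spend)
        return 0

    return desired - current_instances

print(scaling_decision(current_instances=100, avg_utilization=0.92))
```

The hard part, of course, is the tie-break inside that guard: a marketing-driven surge and a botnet politely consuming your wallet look identical to a utilization metric.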

I can't say that I haven't had similar concerns when dealing with scalability and capacity planning in hosted or dedicated co-location environments, but generally storage and compute were not billable service options I had to worry about, only network bandwidth.

Back to the mashed potatoes.

/Hoff

The Big Four Cloud Computing Providers: Security Compared (Part I)

November 26th, 2008 1 comment

James Urquhart posted a summary a week or so ago of what he described as the "Big 4" players in Cloud Computing.  It was a slightly humorous pass at describing their approaches and offerings:

Below is a table that lists these key players, and compares their offerings from the perspective of four core defining aspects of clouds. As this is a comparison of apples to oranges to grapefruit to perhaps pastrami, it is not meant to be a ranking of the participants, nor a judgement of when to choose one over the other. Instead, what I hope to do here is to give a working sysadmin's glimpse into what these four clouds are about, and why they are each unique approaches to enterprise cloud computing in their own right.

James provided quite a bit more (serious) detail in the text below his table, which I present to you here, tarted up with a column titled "Security" that James left off and I've added.

It's written in the same spirit as James' original, so feel free to take this with an equally well-provisioned grain of NaCl.  I'll be adding my own perfunctory comments with a little more detail shortly:

[Table: the "Big 4" comparison with the added "Security" column]

The point here is that the quantification of what "security" means in the cloud is as abstracted and varied as the platforms that provide the service.  We're essentially being asked to take for granted and trust that the underlying mechanicals are sound and secure while not knowing where or what they are.

We don't do that with our physically-tethered operating systems today, so why should we do so with virtualization platform hypervisors and the infrastructure "data center operating systems" of the cloud?  The transparency provided by dedicated infrastructure is being obscured by virtualization and the fog of the cloud.  It's a squeezing the balloon problem.

And so far as the argument goes toward suggesting that this is no different than what we deal with in terms of SaaS today, the difference between what we might define as legacy SaaS and "cloud" is that generally it's someone else's apps and your data in the former (ye olde ASP model).

In the case of the "cloud," it could be a mixture of applications and data, some of which you own, some you don't and some you're simply not even aware of, perhaps running in part on your infrastructure and in part on someone else's.

It should be noted also that not all cloud providers (excluding those above) even own and operate the platforms they provide you service on…they, in turn, could be utilizing shared infrastructure to provide you service, so cross-pollination of service provisioning could affect portability, reliability and security.

That is why the Big4 above stand up their own multi-billion dollar data centers; they keep the architecture proprietary so you don't have to; lots of little clouds everywhere.

/Hoff

P.S. If you're involved with platform security at any of the providers above, do contact me, because I'm going to be expounding upon the security "layers" of each of these providers in as much detail as I can shortly.  I'd suggest you might be interested in ensuring it's as complete and accurate as possible 😉

PDP Says “The Cloud Is Not That Insecure” & Implies Security Concerns Are Trivial…

November 21st, 2008 No comments

I haven't been whipped into this much of a frenzy since Hormel changed the labels on the SPAM cans in Hawaii.

PDP (of gnucitizen fame) masterfully stitched together a collection of mixed metaphors, generalizations, reductions to the ridiculous and pejoratives to produce his magnum opus on cloud computing (in)security titled "The Cloud Is Not That Insecure."

Oh.

Since I have spent the better part of my security career building large "cloud-like" services and the products that help, at a minimum, to secure them, I feel at least slightly qualified to dispute many of his points, the bulk of which are really focused on purely technology-driven mechanical analogies and platforms rather than items such as the operational, trust, political, jurisdictional, regulatory, organizational and economical issues that really go toward the "security" (or lack thereof) of "cloud-based" service.

Speaking of which, PDP's definition of the cloud is about as abstract as you can get:

"Cloud technologies are in fact no different than non-cloud technologies. Practically they are the same. I mean the term cloud computing
is quite broad and perhaps it is even a buzword rather than a
well-thought term which describes a particular study of the IT field.
To me cloud computing refers to the process of outsourcing computer cycles and memory keeping scalability in mind."

Well, I'm glad we cleared that up.

At any rate, it's a seriously humorous read that would have me taking apart many of his contradictory assumptions and assertions were it not for the fact that I have actual work to do.  So, in the interest of time, I'll offer up his conclusion and you can go back and read the rest:

So, is the cloud secure? I would say yes if you know what you are doing. A couple of posts back I mentioned that cloud security matters. It still does. Cloud technologies are quite secure because we tend not to trust them. However, because cloud computing can be quite confusing, you still need to spend time in making sure that all the blocks fit together nicely.

So, there you have it.  Those of you who "know what you are doing" are otay and thanks to security by obscurity due to a lack of trust, cloud computing is secure.  That's not confusing at all…

This probably won't end well, but…

Sigh.

/Hoff

CohesiveFT VPN-Cubed: Not Your Daddy’s Encrypted Tunnel

November 14th, 2008 4 comments

I had a briefing with Patrick Kerpan and Craig Heimark of CohesiveFT last week in response to some comments that Craig had left on my blog post regarding PCI compliance in the Cloud, here.

I was intrigued, albeit skeptically, with how CohesiveFT was positioning the use of VPNs within the cloud and differentiating their solution from the very well established IPSec and SSL VPN capabilities we all know and love.  What's so special about tunnels across the Intertubes?  Been there, done that, bought the T-Shirt, right?

So I asked the questions…

I have to say that unlike many other companies rushing to associate their brands with the cloud by rubber-cementing the words "cloud" and "security" onto their product names and marketing materials, CohesiveFT spent considerable time upfront describing what they do not do so as to effectively establish what they are and what they do.  I'll let one of their slides do the talking:


[Slide: what CohesiveFT is and is not]

Okay, so they're not a cloud, but they provide cloud and virtualization services…and VPN-Cubed provides some layer of security along with their products and services…check. But…

Digging a little deeper, I still sought to understand why I couldn't just stand up an IPSec tunnel from my corporate datacenter to one or more cloud providers where my assets and data were hosted.  I asked for two things: (1) A couple of sentences summarizing their elevator pitch for VPN-Cubed and (2) a visual representation of what this might look like.

Here's what I got as an answer to question (1):

"VPN-Cubed is a novel implementation of VPN concepts which provides a customer controlled security perimeter within a third-party controlled (cloud, utility infra, hosting provider) computing facility, across multiple third-party controlled computing facilities, or for use in private infrastructure.  It enables customer control of their infrastructure topology by allowing control of the network addressing scheme, encrypted communications, and the use of what might normally be unrouteable protocols such as UDP Multicast."

Here are two great visuals to address question (2):


[Diagram: CohesiveFT VPN-Cubed topology]

[Diagram: CohesiveFT clusters extended across providers]

So the difference between a typical VPN and VPN-Cubed comes down to being able to securely extend your "internal cloud" infrastructure in your datacenters (gasp! I said it) in a highly-available manner to include your externally hosted assets, which in turn run on infrastructure you don't own.  You can't stand up an ASA or Neoteris box on a network you can't get to.  The VPN-Cubed Managers are VMs/VAs that run as a complement to your virtual servers/machines hosted by your choice of one or multiple cloud providers.

They become the highly-available, multiprotocol arbiters of access via standardized IPSec protocols, but do so in a way that addresses the dynamic capabilities of the cloud, including service governance, load, and "cloudbursting" failover between clouds, in a transparent manner.

Essentially you get secure access to your critical assets utilizing an infrastructure-independent solution, extending the VLAN segmentation, isolation and security capabilities your providers may put in place while also controlling your address spaces within the cloudspaces encompassing your VMs "behind" the VPN Managers.
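One way to picture the "normally unrouteable protocols such as UDP Multicast" point from the pitch above: cluster software that relies on multicast for discovery simply doesn't work between providers across the raw Internet, but inside an overlay of the kind described it can. The sketch below is a generic, standard-library Python multicast listener that assumes some such overlay is already carrying the traffic between members; the group address and port are arbitrary examples and nothing here is VPN-Cubed specific.

```python
# Generic UDP multicast receiver; the kind of discovery traffic that only
# works across clouds if an overlay network carries it between members.
# Group/port are arbitrary example values.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the multicast group on all interfaces (including the overlay's
# virtual interface, which is what makes this work across providers).
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(1024)
    print(f"heard {data!r} from {sender}")
```

A peer in another provider would send to the same group address over the overlay; without the encrypted overlay in the path, that datagram never leaves the first cloud.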

VPN-Cubed is really a prime example of the collision space of dynamic application/service delivery, virtualization, security and cloud capacity governance.  It speaks a lot to the re-perimeterization I've been yapping about for quite some time and hints at Greg Ness' Infrastructure 2.0 meme.

Currently VPN-Cubed is bundled as a package which includes both product and services and supports the majority of market-leading virtualization formats, operating systems and cloud providers such as Amazon EC2, Flexiscale, GoGrid, Mosso, etc.

It's a very slick offering.

/Hoff

Cloud Security Macro Layers? I’ll Take “It’ll Never Sell” For $1000, Alex…

November 13th, 2008 2 comments

Mogull commented yesterday on my post regarding TCG's IF-MAP and remarked that in discussing cloud security and security models, the majority of folks, myself included, were focusing on the network:

Chris’s posting, and most of the ones I’ve seen, are heavily focused on network security concepts as they relate to the cloud. But if we look at cloud computing at the macro level, there are additional layers which are just as critical (in no particular order):


  • Network: The usual network security controls.
  • Service: Security around the exposed APIs and services.
  • User: Authentication- which in the cloud world, needs to move to more adaptive authentication, rather than our current static username/password model.
  • Transaction: Security controls around individual transactions- via transaction authentication, adaptive authorization, and other approaches.
  • Data: Information-centric security controls for cloud based data. How’s that for buzzword bingo? Okay, this actually includes security controls for the back-end data, distributed data, and any content exchanged with the user.

I'd say that's a reasonable assertion and a valid set of additional "layers."  They're also not especially unique, and as such, I think Rich is himself a little disoriented by the fog of the cloud because, as you'll read, the same could be said of any networked technology.

The reason we start with the network and usually find ourselves back where we started in this discussion is because the other stuff Rich mentions is just too damned hard, costs too much, is difficult to sell, isn't standardized, is generally platform dependent and is really intrusive.  See this post (Security Will Not End Up In the Network) as an example.

Need proof of how good ideas like this get mangled?  How about Web 2.0 or SOA, which is, for lack of a better description, exactly what Rich described in his model above: loosely coupled functional components of a modular architecture.

We haven't even gotten close to having this solved internally on our firewalled enterprise LANs so it's somewhat apparent why it might appear to be missing in conversations regarding "the cloud."  It shouldn't be, but it is. 

It should be noted, however, that there is a ton of work, solutions and initiatives that exist and continue to be developed on these topics; it's just not a priority, as evidenced by how people exercise their wallets.

And finally:

Down the road we’ll dig into these in more detail, but any time we start distributing services and functionality over an open public network with no inherent security controls, we need to focus on the design issues and reduce design flaws as early as possible. We can’t just look at this as a network problem- our authentication, authorization, information, and service (layer 7) controls are likely even more important.

I believe we call this thing of which he speaks, "the Internet."  I think we're about 20 years late. 😉

/Hoff

Categories: Cloud Computing, Virtualization

When The Carrot Doesn’t Work, Try a Stick: VMware Joins PCI SSC…

November 12th, 2008 1 comment

I've made no secret of my displeasure with the PCI Security Standards Council's lack of initiative when it comes to addressing the challenges and issues associated with virtualization and PCI compliance*. 

My last post on the topic  brought to light an even more extreme example of the evolution of virtualization's mainstream adoption and focused on the implications that cloud computing brings to bear when addressing the PCI DSS.

I was disheartened to find that upon inquiring as to the status of the formation of and participation in a virtualization-specific special interest group (SIG), the SSC's email response to me was as follows:

On Oct 29, 2008, at 1:24 PM, PCI Participation wrote:

Hello Christofer,

Thank you for contacting the PCI Security Standards Council. At this time, there is currently no Virtualization SIG. The current SIGs are Pre-Authorization and Wireless.

Please let us know if you are interested in either of those groups.

Regards,
The PCI Security Standards Council

—–Original Message—–
From: Christofer Hoff [mailto:choff@packetfilter.com]
Sent: Wednesday, October 29, 2008 12:58 PM
To: PCI Participation
Subject: Participation in the PCI DSS Virtualization SIG?

How does one get involved in the PCI DSS Virtualization SIG?

Thanks,

Christofer Hoff

The follow-on email to that said there were no firm plans to form a virtualization SIG. <SIGh>

So assuming that was the carrot approach, I'm happy to see that VMware has taken the route that only money, influence and business necessity can bring: the virtualization vendor 'stick.'  To wit (and a head-nod to David Marshall):

VMware is Joining PCI Security Standards Council as Participating Organization

VMware, the global leader in virtualization solutions from the desktop to the datacenter, announced today that it is joining the PCI Security Standards Council. As a participating organization, VMware will work with the council to evolve the PCI Data Security Standard (DSS) and other payment card data protection standards. This will help those VMware customers in the retail industry who are required to meet these standards to remain compliant while leveraging VMware virtualization. VMware has also launched the VMware Compliance Center Web site, an initiative to help educate merchants and auditors about how to achieve, maintain and demonstrate compliance in virtual environments to meet a number of industry standards, including the PCI DSS.

As a participating organization, VMware will now have access to the latest payment card security standards from the council, be able to provide feedback on the standards and become part of a growing community that now includes more than 500 organizations. In an era of increasingly sophisticated attacks on systems, adhering to the PCI DSS represents a significant aspect of an entity's protection against data criminals. By joining as a participating organization, VMware is adding its voice to the process.

"The PCI Security Standards Council is committed to helping everyone involved in the payment chain protect consumer payment data," said Bob Russo, general manager of the PCI Security Standards Council. "By participating in the standards setting process, VMware demonstrates it is playing an active part in this important end goal."

Let's see if this leads to the formation of a virtualization SIG or at least a timetable for when the DSS will be updated with virtualization-specific guidelines.   I'd like to see other virtualization vendors also become participating organizations in the PCI SSC.

/Hoff

* Here are a couple of my other posts on PCI compliance and virtualization:


Categories: Virtualization, VMware

I Can Haz TCG IF-MAP Support In Your Security Product, Please…

November 10th, 2008 3 comments

In my previous post titled "Cloud Computing: Invented By Criminals, Secured By ???" I described the need for a new security model, methodology and set of technologies in the virtualized and cloud computing realms built to deal with the dynamic and distributed nature of evolving computing:

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.

Greg Ness from Infoblox reminded me in the comments of that post of something I was very excited about when it became news at InterOp this last April: the Trusted Computing Group's (TCG) extension to the Trusted Network Connect (TNC) architecture called IF-MAP.

IF-MAP is a standardized real-time publish/subscribe/search mechanism which utilizes a client/server, XML-based SOAP protocol to provide information about network security objects and events, including their state and activity:

IF-MAP extends the TNC architecture to support standardized, dynamic data interchange among a wide variety of networking and security components, enabling customers to implement multi-vendor systems that provide coordinated defense-in-depth.

Today’s security systems – such as firewalls, intrusion detection and prevention systems, endpoint security systems, data leak protection systems, etc. – operate as “silos” with little or no ability to “see” what other systems are seeing or to share their understanding of network and device behavior.

This limits their ability to support coordinated defense-in-depth. In addition, current NAC solutions are focused mainly on controlling network access, and lack the ability to respond in real-time to post-admission changes in security posture or to provide visibility and access control enforcement for unmanaged endpoints. By extending TNC with IF-MAP, the TCG is providing a standard-based means to address these issues and thereby enable more powerful, flexible, open network security systems.

While TNC was initially designed to support NAC solutions, extending the capability for any security product to subscribe to a common telemetry and information exchange/integration protocol is a fantastic idea.

[Diagram: TNC architecture with IF-MAP]

I'm really interested in how many vendors outside of the NAC space are including IF-MAP in their roadmaps. While IF-MAP has potential in conventional non-virtualized infrastructure, I see a tremendous need for it in our move to Infrastructure 2.0 with virtualization and Cloud Computing.

Integrating, for example, IF-MAP with VM-introspection capabilities (in VMsafe, XenAccess, etc.) would be fantastic, as you could tie the control planes of the hypervisors, management infrastructure, and provisioning/governance engines to those of security and compliance in near-real-time.
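To make the publish/subscribe/search idea concrete without pretending to reproduce the IF-MAP wire format (which is an XML/SOAP protocol defined by the TCG specification), here's a deliberately simplified, in-process sketch of the pattern: producers publish metadata about an identifier, consumers subscribe to changes, and anything can search current state. Every class and method name below is invented for illustration.

```python
# Conceptual sketch of IF-MAP-style metadata sharing: publish, subscribe,
# search. This is NOT the IF-MAP protocol itself (that is XML/SOAP over
# the network); it only illustrates the coordination pattern.
from collections import defaultdict

class MetadataMap:
    def __init__(self):
        self._store = defaultdict(dict)        # identifier -> {key: value}
        self._subscribers = defaultdict(list)  # identifier -> [callback]

    def publish(self, identifier, **metadata):
        """A sensor/agent publishes metadata about an identifier (IP, MAC, VM...)."""
        self._store[identifier].update(metadata)
        for callback in self._subscribers[identifier]:
            callback(identifier, dict(self._store[identifier]))

    def subscribe(self, identifier, callback):
        """A consumer (firewall, IPS, hypervisor policy engine) asks to be notified."""
        self._subscribers[identifier].append(callback)

    def search(self, **criteria):
        """Return identifiers whose current metadata matches all criteria."""
        return [ident for ident, md in self._store.items()
                if all(md.get(k) == v for k, v in criteria.items())]

mapsrv = MetadataMap()
mapsrv.subscribe("10.0.3.7", lambda ident, md: print("policy update for", ident, md))

# An IDS publishes a behavioral verdict; a VM-introspection agent adds context.
mapsrv.publish("10.0.3.7", behavior="port-scan", posture="unknown")
mapsrv.publish("10.0.3.7", vm_id="web-042", posture="out-of-compliance")

print(mapsrv.search(posture="out-of-compliance"))
```

In the real architecture the map is a network service (the MAP server) and the publishers/subscribers are the firewalls, IPS, NAC components and, ideally, hypervisor introspection agents, all speaking the standardized protocol.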

You can read more about the TCG's TNC IF-MAP specification here.

/Hoff


 

Cloud Computing: Invented By Criminals, Secured By ???

November 3rd, 2008 10 comments

I was reading Reuven Cohen's "Elastic Vapor: Life In the Cloud" blog yesterday and he wrote an interesting piece on what is being coined "Fraud as a Service."  Basically, Reuven describes the rise of botnets as the origin of "cloud" based service utilities, as chronicled in Uri Rivner's talk at RSA Europe:

I hate to tell you this, it wasn't Amazon, IBM or even Sun who invented cloud computing. It was criminal technologists, mostly from eastern Europe who did. Looking back to the late 90's and the use of decentralized "warez" darknets. These original private "clouds" are the first true cloud computing infrastructures seen in the wild. Even way back then the criminal syndicates had developed "service oriented architectures" and federated id systems including advanced encryption. It has taken more then 10 years before we actually started to see this type of sophisticated decentralization to start being adopted by traditional enterprises.

The one sentence that really clicked for me was the following:

In this new world order, cloud computing will not just be a requirement for scaling your data center but also protecting it.

Amen. 

One of the obvious benefits of cloud computing is the distribution of applications, services and information.  The natural by-product of this is additional resiliency from operational downtime caused by error or malicious activity.

This benefit is also a forcing function; it will require new security methodologies and technologies to allow the security (policies) to travel with the applications and data, as well as to enforce them.

I wrote about this concept back in 2007 as part of my predictions for 2008 and highlighted it again in a post titled: "Thinning the Herd and Chlorinating the Malware Gene Pool" based on some posts by Andy Jaquith:

Grid and distributed utility computing models will start to creep into security

A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn't care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function 'X' is so last Tuesday. Virtualization already tells us this.  So, imagine if your security processing isn't performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.  Check out Red Lambda's cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.

It will be interesting to watch companies, established and emerging, grapple with this new world.

/Hoff