
Joanna Rutkowska: Making Invisible Things Visible…

March 25th, 2009

I’ve had my issues in the past with Joanna Rutkowska, the majority of which had nothing to do with the technical content of her work, but more with how it was marketed.  That was then; this is now.

Recently, the Invisible Things Lab team released some really interesting work regarding attacking the SMM.  What I’m really happy about is that Joanna and her team are making an effort to communicate the relevance and impact of the team’s research and exploits in ways they weren’t before.  As much as I was critical previously, I must acknowledge and thank her for that, too.

I’m reasonably sure that Joanna couldn’t care less what I think, but I think the latest work is great and really does indicate the profoundly shaky foundation upon which we’ve built our infrastructure, and I am thankful for what this body of work points out.

Here’s a copy of Joanna’s latest blog post, titled “The Sky is Falling?”, explaining such:

A few reporters asked me if our recent paper on SMM attacking via CPU cache poisoning means the sky is really falling now?

Interestingly, not many people seem to have noticed that this is the 3rd attack against SMM our team has found in the last 10 months. OMG 😮

But anyway, does the fact that we can easily compromise the SMM today, and write SMM-based malware, mean that the sky is falling for the average computer user?

No! The sky has actually fallen many years ago… Default users with admin privileges, monolithic kernels everywhere, most software unsigned and downloadable over plaintext HTTP — these are the main reasons we cannot trust our systems today. And those pathetic attempts to fix it, e.g. via restricting admin users on Vista, but still requiring full admin rights to install any piece of stupid software. Or selling people illusion of security via A/V programs, that cannot even protect themselves properly…

It’s also funny how so many people focus on solving the security problems by “Security by Correctness” or “Security by Obscurity” approaches — patches, patches, NX and ASLR — all good, but it is not gonna work as an ultimate protection (if it could, it would have worked out already).

On the other hand, there are some emerging technologies out there that could allow us to implement effective “Security by Isolation” approach. Such technologies as VT-x/AMD-V, VT-d/IOMMU or Intel TXT and TPM.

So we, at ITL, focus on analyzing those new technologies, even though almost nobody uses them today. Because those technologies could actually make the difference. Unlike A/V programs or Patch Tuesdays, those technologies can change the level of sophistication required for the attacker dramatically.

The attacks we focus on are important for those new technologies — e.g. today Intel TXT is pretty much useless without protection from SMM attacks. And currently there is no such protection, which sucks. SMM rootkits sound sexy, but, frankly, the bad guys are doing just fine using traditional kernel mode malware (due to the fact that A/V is not effective). Of course, SMM rootkits are just yet another annoyance for the traditional A/V programs, which is good, but they might not be the most important consequence of SMM attacks.

So, should the average Joe Dow care about our SMM attacks? Absolutely not!

I really appreciate the way this is being discussed; I think the ITL work is (now) moving the discussion forward by framing the issues instead of merely focusing on sensationalist exploits that whip people into a frenzy and cause them to worry about things they can’t control instead of the things they unfortunately choose not to 😉 

I very much believe that we can and will see advancements with the “security by isolation” approach; a lot of other bad habits and classes of problems can be eliminated (or at least significantly reduced) with the benefit of virtualization technology. 

/Hoff

Cloud Catastrophes (Cloudtastrophes?) Caused by Clueless Caretakers?

March 22nd, 2009
You'll ask "How?" Every time...

Enter the dawn of the Cloudtastrophe…

I read a story today penned by Maureen O’Gara titled “Carbonite Loses Cloud-Based Data, Sues Storage Vendor.”

I thought this was going to be another story regarding a data breach (loss) of customer data by a Cloud Computing service vendor.

What I found, however, was another hyperbolic illustration of how vendors’ messaging of the Cloud has set expectations for service and reliability that are out of alignment with reality when you combine a lack of sound enterprise architecture, proper contingency planning, solid engineering and common sense with the economic lubricant of the Cloud.

Stir in a little journalistic sensationalism, and you’ve got CloudWow!

Carbonite, the online backup vendor, says it lost data belonging to over 7,500 customers in a number of separate incidents in a suit filed in Massachusetts charging Promise Technology Inc with supplying it with $3 million worth of defective storage, according to a story in Saturday’s Boston Globe.

The catastrophe is the latest in a series of cloud failures.

The widgetry was supposed to detect disk failures and transfer the data to a working drive. It allegedly didn’t.

The story says Promise couldn’t fix the errors and “Carbonite’s senior engineers, senior management and senior operations personnel…spent enormous amounts of time dealing with the problems.”

Carbonite claims the data losses caused “serious damage” to its business and reputation for reliability. It’s demanding unspecified damages. Promise told the Globe there was “no merit to the allegations.”

Carbonite, which sells largely to consumers and small businesses and competes with EMC’s Mozy, tells its customers: “never worry about losing your files again.”

The abstraction of infrastructure and democratization of applications and data that Cloud Computing services can bring does not mean that all services are created equal.  It does not make our services or information more secure (or less for that matter.)  Just because a vendor brands themselves as a “Cloud” provider does not mean that “their” infrastructure is any more implicitly reliable, stable or resilient than traditional infrastructure or that proper enterprise architecture as it relates to people, process and technology is in place.  How the infrastructure is built and maintained is just as important as ever.

If you take away the notion of Carbonite being a “Cloud” vendor, would this story read any differently?

We’ve seen a few failures recently of Cloud-based services, most of them sensationally lumped into the Cloud pile: Google, Microsoft, and even Amazon; most of the stories about them relate the impending doom of the Cloud…

Want another example of how Cloud branding, the Web2.0 experience and blind faith makes for another FUDtastic “catastrophe in the cloud?”  How about the well-known service Ma.gnolia?

There was a meltdown at bookmark sharing website Ma.gnolia Friday morning. The service lost both its primary store of user data, as well as its backup. The site has been taken offline while the team tries to reconstruct its databases, though some users may never see their stored bookmarks again.

The failure appears to be catastrophic. The company can’t say to what extent it will be able to restore any of its users’ data. It also says the data failure was so extensive, repairing the loss will take “days, not hours.”

So we find that a one-man shop was offering a service that people liked, and it died a horrible death.  Because it was branded as a Cloud offering, it “seemed” bigger than it was.  This is where perception definitely was not reality, and now we’re left with a fluffy bad taste in our mouths.

Again, what this illustrates is that just because a service is “Cloud-based” does not mean it’s any more reliable or resilient than one that is not. As enterprises look to move to the Cloud, it’s just as important that they perform as much due diligence on their providers as makes sense. We’ll see a weeding out of the ankle-biters in Cloud Computing.

Nobody ever gets fired for buying IBM…

What we’ll also see is that even though we’re not supposed to care what our Cloud providers’ infrastructure is powered by and how, we absolutely will in the long term and the vendors know it.

This is where people start to freak about how standards and consolidation will kill innovation in the space but it’s also where the realities of running a business come crashing down on early adopters.

Large enterprises will move to providers who can demonstrate that their services are solid by co-branding with reputable infrastructure providers, coupled with compliance to “standards.”

The big players like IBM see this as an opportunity and as early as last year introduced a Cloud certification program:

IBM to Validate Resiliency of Cloud Computing Infrastructures

Will Consult With Businesses of All Sizes to Ensure Resiliency, Availability, Security; Drive Adoption of New Technology

ARMONK, NY – 24 Nov 2008: In a move that could spur the rise of the nascent computing model known as “cloud,” IBM (NYSE: IBM) today said it would introduce a program to validate the resiliency of any company delivering applications or services to clients in the cloud environment. As a result, customers can quickly and easily identify trustworthy providers that have passed a rigorous evaluation, enabling them to more quickly and confidently reap the business benefits of cloud services.

Cloud computing is a model for network-delivered services, in which the user sees only the service and does not view the implementation or infrastructure required for its delivery. The success to date of cloud services like storage, data protection and enterprise applications, has created a large influx of new providers. However, unpredictable performance and some high-profile downtime and recovery events with newer cloud services have created a challenge for customers evaluating the move to cloud.

IBM’s new “Resilient Cloud Validation” program will allow businesses who collaborate with IBM on a rigorous, consistent and proven program of benchmarking and design validation to use the IBM logo: “Resilient Cloud” when marketing their services.

Remember the “Cisco Powered Network” program?  How about a “Cisco Powered Cloud?”  See how GoGrid advertises that their load balancers are F5s?

In the long term, much like the Capital One credit card commercials that challenge the company providing your credit card services by asking “What’s in your wallet?”, you can expect to start asking the same thing about your Cloud providers’ offerings.

/Hoff

 

Azure Users Seeing Red: When Patching the Cloud Causes Cracks

March 19th, 2009

No, this isn’t one of those posts that suggests we can’t depend on the Cloud just because of one (ok, many) outages of note lately.  That’s so dystopic.  Besides, everyone else is already doing that.

I mean just because Azure was offline for 22 hours isn’t cause for that much concern, right?  It’s a beta community technology preview, anyway… 😉  Just like Google’s a beta.

What I found interesting, however, was what Microsoft reported as the root cause of the outage:

 

The Windows Azure Malfunction This Weekend

First things first: we’re sorry.  As a result of a malfunction in Windows Azure, many participants in our Community Technology Preview (CTP) experienced degraded service or downtime.  Windows Azure storage was unaffected.

In the rest of this post, I’d like to explain what went wrong, who was affected, and what corrections we’re making.

What Happened?

During a routine operating system upgrade on Friday (March 13th), the deployment service within Windows Azure began to slow down due to networking issues.  This caused a large number of servers to time out and fail.

You catch that bit about “…a routine operating system upgrade?”  Sometimes we call those things “patches.”  Even if this wasn’t a patch, let’s call it one for argument’s sake, okay?

As such, I was reminded of a blog post that I wrote last year titled: “Patching the Cloud” in which I squawked about my concerns regarding patching and change management/roll-back in Cloud services.  It seems apropos:

 

Your application is sitting atop an operating system and underlying infrastructure that is managed by the cloud operator.  This “datacenter OS” may not be virtualized or could actually be sitting atop a hypervisor which is integrated into the operating system (Xen, Hyper-V, KVM) or perhaps reliant upon a third party solution such as VMware.  The notion of cloud implies shared infrastructure and hosting platforms, although it does not imply virtualization.

A patch affecting any one of the infrastructure elements could cause a ripple effect on your hosted applications.  Without understanding the underlying infrastructure dependencies in this model, how does one assess risk and determine what any patch might do up or down the stack?  …

Huh.  Go figure.  
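Purely as an illustration of the question in that excerpt — what does a patch do up the stack? — the blast radius of a change can be sketched as a walk over a dependency graph. This is a toy model; every layer name and edge below is hypothetical:

```python
# Toy model of the "patch ripple effect" question from the quoted post:
# given a dependency graph of infrastructure layers, which hosted
# components are potentially affected when one layer is patched?
# All layer names and edges are made up for illustration.

from collections import defaultdict, deque

# edges point from a component to the things built on top of it
DEPENDS_ON_ME = defaultdict(list)
for lower, upper in [
    ("datacenter-os", "hypervisor"),
    ("hypervisor", "guest-os"),
    ("guest-os", "app-billing"),
    ("guest-os", "app-frontend"),
    ("load-balancer", "app-frontend"),
]:
    DEPENDS_ON_ME[lower].append(upper)

def blast_radius(patched):
    """Everything transitively stacked on top of the patched component."""
    seen, queue = set(), deque([patched])
    while queue:
        node = queue.popleft()
        for upper in DEPENDS_ON_ME[node]:
            if upper not in seen:
                seen.add(upper)
                queue.append(upper)
    return seen

# Patching the "datacenter OS" touches everything above it...
print(sorted(blast_radius("datacenter-os")))
# → ['app-billing', 'app-frontend', 'guest-os', 'hypervisor']
```

Without visibility into those edges — which is exactly what a Cloud customer lacks — you can’t compute this set, which was the point of the original post.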

/Hoff

 

Bypassing the Hypervisor For Performance & Network “Simplicity” = Bypassing Security?

March 18th, 2009

As part of his coverage of Cisco’s UCS, Alessandro Perilli from virtualization.info highlighted this morning something I’ve spoken about many times since it was a one-slider at VMworld (latest, here) but that we’ve not had a lot of details about: the technology evolution of Cisco’s Nexus 1000v & VN-Link to the “Initiator:”

Chad Sakac, Vice President of VMware Technology Alliance at EMC, adds more details on his personal blog:

…[The Cisco] VN-Link can apply tags to ethernet frames — and is something Cisco and VMware submitted together to the IEEE to be added to the ethernet standards.

It allows ethernet frames to be tagged with additional information (VN tags) that mean that the need for a vSwitch is eliminated. The vSwitch is required by definition as you have all these virtual adapters with virtual MAC addresses, and they have to leave the vSphere host on one (or at most a much smaller number) of ports/MACs. But, if you could somehow stretch that out to a physical switch, that would mean that the switch now has “awareness” of the VM’s attributes in network land – virtual adapters, ports and MAC addresses. The physical world is adapting to and gaining awareness of the virtual world…

 

Bundle that with Scott Lowe’s interesting technical exploration of some additional elements of UCS as it relates to abstracting — or more specifically completely removing virtual networking from the hypervisor — and things start to get heated.  I’ve spoken about this in my Four Horsemen presentation:

Today, in the VMware space, virtual machines are connected to a vSwitch because connecting them directly to a physical adapter just isn’t practical. Yes, there is VMDirectPath, but for VMDirectPath to really work it needs more robust hardware support. Otherwise, you lose useful features like VMotion. (Refer back to my VMworld 2008 session notes from TA2644.) So, we have to manage physical switches and virtual switches—that’s two layers of management and two layers of switching. Along comes the Cisco Nexus 1000V. The 1000V helps to centralize management but we still have two layers of switching.

That’s where the “Palo” adapter comes in. Using VMDirectPath “Gen 2” (again, refer to my TA2644 notes) and the various hardware technologies I listed and described above, we now gain the ability to attach VMs directly to the network adapter and eliminate the virtual switching layer entirely. Now we’ve both centralized the management and eliminated an entire layer of switching. And no matter how optimized the code may be, the fact that the hypervisor doesn’t have to handle packets means it has more cycles to do other things. In other words, there’s less hypervisor overhead. I think we can all agree that’s a good thing.

 

So here’s what I am curious about. If we’re clawing back networking from the hosts and putting it back into the network, regardless of flow/VM affinity, AND we’re bypassing the VMM (where the dvfilters/fastpath drivers live for VMsafe), do we just lose all the introspection capabilities and the benefits of VMsafe that we’ve been waiting for?  Does this basically leave us having to shunt all traffic back out to the physical switches (and thus physical appliances) in order to secure it?  Note, this doesn’t necessarily impact the other components of VMsafe (memory, CPU, disk, etc.), but the network portion, it would seem, is obviated.

Are we trading off security once again for performance and “efficiency?”  How much hypervisor overhead (as Scott alluded to) are we really talking about here for network I/O?

Anyone got any answers?  Is there a simple answer to this, or if I use this option, do I just give up what I’ve been waiting two years for in VMsafe/vNetworking?

/Hoff


Google and Privacy: an EPIC Fail…

March 18th, 2009

“I do not think this means what you think it means…”

This isn’t a post about Google’s struggles with privacy per se, but rather about the Electronic Privacy Information Center’s (EPIC) tactics in a complaint/petition filed with the FTC, in which EPIC claims that the privacy and security safeguards surrounding Google’s “Cloud Computing Services” are inadequate and injurious to consumers, and that Google has engaged in “unfair and/or deceptive trade practices.”

EPIC is petitioning the FTC to “..enjoin Google from offering such services until safeguards are verifiably established” as well as to compel them to “…contribute $5,000,000 to a public fund that will help support research concerning privacy enhancing technologies.”

In reading the petition, which you can find here, you will notice that parallels are drawn and overtly called out likening Google’s recent issues to those of TJX and ChoicePoint.  The report is a rambling mess of hyperbolic references and footnotes that appears meant to froth the FTC into action, especially by the overt comparison to the breaches of confidential information at the aforementioned companies.

EPIC suggests that Google’s inadequate security is both an unfair business practice and a deceptive trade practice, and while these two claims make up the meat of the complaint, they represent the smallest amount of text in the report with the most emotive melodrama: “…consumer’s justified privacy expectations were dashed…” “…the Google Docs Data Breach exposed consumers’ personal information…”  I can haz evidence of these claims, please?

While I’m not happy with some of Google’s practices as they relate to privacy, nor am I pleased with hiccups they’ve had with services like GMail and the most recent “privacy pollution” issue surrounding Google Docs, here’s an interesting factoid that EPIC seems to have missed:

Google Apps like those mentioned are FREE. We consumers are not engaging in “Trade” when we don’t pay for said services. Further, we as consumers must accept the risk associated with said offerings when we agree to the terms of service. Right, wrong, or indifferent, you get what you pay for and should expect NO privacy despite Google’s best efforts to provide it (or not.)

I could tolerate this pandering to the FTC if it were not for what amounts to the jumping the shark on the part of EPIC by plastering Cloud Computing as the root of all evil (with Google as the ringmaster) and the blatant publicity stunt and fundraising attempt by demanding that the FTC “compel” Google to bleed out $5,000,000 to a fund that would likely feed more of this sort of drivel.

If we want privacy advancements with Google or any Cloud Computing service provider, this isn’t the way to do it.

As my good friend David Mortman said, “EPIC apparently thinks it’s all about publicity. They are turning into the PETA of privacy.”

I agree. What’s next?  Will we rename personally identifiable information to “information kittens?”

/Hoff

P.S. Again, I am not trying to downplay any concerns with privacy in Cloud Computing because EPIC’s report does do a reasonable job of highlighting issues.  My friend Zach Lanier (@quine) did a great job summarizing his reaction to the post here:

It’s almost as though EPIC need to remind everyone that they still exist and haven’t become entirely decrepit and overshadowed by the EFF. The document is well assembled, citing examples that most users *don’t* consider when using Google services (or just about any *aaS, for that matter). Incidentally, the complaint references a recently published report from the World Privacy Forum on privacy risks in Cloud Computing[1]. Both documents raise a few similar points.

For example, how many of us actually read, end-to-end, the TOS and privacy policy of the Provider? How many of us validate claims like “your data are safe from unauthorized access when you store it on our Cumulonimbus Mega Awesome Cloud Storage Platform”?

I, for one, laud EPIC’s past efforts and the heart whence this complaint emerges. However, like a few others, the request for enjoinment basically negated my support for the complaint in its entirety.

[1] http://www.worldprivacyforum.org/pdf/WPF_Cloud_Privacy_Report.pdf

— Zach Lanier | http://n0where.org/ | (617) 606-3451 FP: 7CC5 5DEE E46F 5F41 9913 1577 E320 1D64 A200 AB49

The Frogs Who Desired a King: A Virtualization & Cloud Computing Fable [Slides]

March 17th, 2009

I’m loath to upload this presentation because really the slides accompany me (not the other way around) and there’s a ton of really important subtext and dialog that goes along with them, but I’m getting hammered with requests to release the deck, so here it is.

I will be giving this presentation at various venues over the next few months as well as the second in the series titled “Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure.”  

At any rate, it’s another rather colorful presentation. It’s in PDF format and is roughly 12MB.

Click here to download it.

Enjoy

/Hoff

The UFC and UCS: Cisco Is Brock Lesnar

March 17th, 2009

My favorite sport is mixed martial arts (MMA).

MMA is a combination of various arts and features athletes who come from a variety of backgrounds and combine many disciplines that they bring to the ring.

You’ve got wrestlers, boxers, kickboxers, muay thai practitioners, jiu jitsu artists, judoka, grapplers, freestyle fighters and even the odd karate kid.

Mixed martial artists are often better versed in one style/discipline than another given their strengths and background but as the sport has evolved, not being well-rounded means you run the risk of being overwhelmed when paired against an opponent who can knock you out, take you down, ground and pound you, submit you or wrestle/grind you into oblivion.  

The UFC (Ultimate Fighting Championship) is an organization which has driven the popularity and mainstream adoption of MMA as a recognizable and sanctioned sport and has given rise to some of the most notable MMA match-ups in recent history.

One of those match-ups included the introduction of Brock Lesnar — an extremely popular “professional” wrestler — who has made the transition to MMA.  It should be noted that Brock Lesnar is an aberration of nature.  He is an absolute monster: 6’3″ and 276 pounds.  He is literally a wall of muscle, a veritable 800-pound gorilla.

In his first match, he was paired up against a veteran in MMA and former heavyweight champion, Frank Mir, who is an amazing grappler known for vicious submissions.  In fact, he submitted Lesnar with a nasty kneebar as Lesnar’s ground game had not yet evolved.  This is simply part of the process.  Lesnar’s second fight was against another veteran, Heath Herring, who he manhandled to victory.  Following the Herring fight, Lesnar went on to fight one of the legends of the sport and reigning heavyweight champion, Randy Couture.  

Lesnar’s skills had obviously progressed and he looked great against Couture and ultimately won by a TKO.

So what the hell does the UFC have to do with the Unified Computing System (UCS)?

Cisco UCS Components

Cisco is to UCS as Lesnar is to the UFC.

Everyone wrote Lesnar off after he entered the MMA world and especially after the first stumble against an industry veteran.

Imagine the surprise when his mass, athleticism, strength, intelligence and tenacity combined with a well-versed strategy paid off as he’s become an incredible force to be reckoned with in the MMA world as his skills progressed.  Oh, did I mention that he’s the World Heavyweight Champion now?

Cisco comes to the (datacenter) cage much as Lesnar did: an 800-pound gorilla incredibly well-versed in one set of disciplines, looking to expand into others and become just as versatile and skilled in a remarkably short period of time.  Cisco comes to win, not compete. Yes, Lesnar stumbled in his first outing.  Now he’s the World Heavyweight Champion.  Cisco will have their hiccups, too.

The first elements of UCS have emerged.  The solution suite with the help of partners will refine the strategy and broaden the offerings into a much more well-rounded approach.  Some of Cisco’s competitors who are bristling at Cisco’s UCS vision/strategy are quick to criticize them and reduce UCS to simply an ill-executed move “…entering the server market.”  

I’ve stated my opinions on this short-sighted perspective:

Yes, yes. We’ve talked about this before here. Cisco is introducing a blade chassis that includes compute capabilities (heretofore referred to as a ‘blade server.’)  It also includes networking, storage and virtualization all wrapped up in a tidy bundle.

So while that looks like a blade server (quack!) and walks like a blade server (quack! quack!), that doesn’t mean it’s going to be positioned, talked about or sold like a blade server (quack! quack! quack!)

What’s my point?  What Cisco is building is just another building block of virtualized INFRASTRUCTURE. Necessary infrastructure to ensure control and relevance as their customers’ networks morph.

My point is that what Cisco is building is the natural by-product of converged technologies with an approach that deserves attention.  It *is* unified computing.  It’s a solution that includes integrated capabilities that otherwise customers would be responsible for piecing together themselves…and that’s one of the biggest problems we have with disruptive innovation today: integration.

 

The knee-jerk dismissals witnessed since yesterday by the competition downplaying the impact of UCS are very similar to how many people reacted to Lesnar wherein they suggested he was one dimensional and had no core competencies beyond wrestling, discounting his ability to rapidly improve and overwhelm the competition.  

Everyone seems to be focused on the 5100 — the “blade server” — and not the solution suite of which it is a single piece; a piece of a very innovative ecosystem, some Cisco, some not.  Don’t get lost in the “but it’s just a blade server and HP/IBM/Dell can do that” diatribe.  It’s the bigger picture that counts.

The 5100 is simply that — one very important piece of the evolving palette of tools which offer the promise of an integrated solution to a radically complex set of problems.

Is it complete?  Is it perfect?  Do we have all the details? Can they pull it off themselves?  The answer right now is a simple “No.”  But it doesn’t have to be.  It never has.

There’s a lot of work to do, but much like a training camp for MMA, that’s why you bring in the best partners with which to train and improve and ultimately you get to the next level.

All I know is that I’d hate to be in the Octagon with Cisco just like I would with Lesnar.

/Hoff

BeanSec! Wednesday, March 18, 2009 – 6PM to ?

March 15th, 2009

Yo!  BeanSec! is once again upon us.  Wednesday, March 18, 2009.

Middlesex Lounge: 315 Massachusetts Ave, Cambridge 02139. 

BeanSec! is an informal meetup of information security professionals, researchers and academics in the Greater Boston area that meets the third Wednesday of each month.

I say again, BeanSec! is hosted the third Wednesday of every month.  Add it to your calendar.

Come get your grub on and have a drink.  Lots of good people show up.  Really.

Unlike other meetings, you will not be expected to pay dues, “join up”, present a zero-day exploit, or defend your dissertation to attend.

Don't worry about being "late" because most people just show up when they can. 6:30 is a good time to aim for. We'll try and save you a seat. There is plenty of parking around, or take the T.

The food selection is basically high-end finger-food appetizers and the drinks are really good; an attentive staff and eclectic clientèle make the joint fun for people watching. Zach and I will generally annoy you into participating somehow, even if it's just fetching napkins. 😉

This week's BeanSec refreshments sponsored by: IOActive

We often retire across the street to Asgard for more substantive fare after the event and then to Tosci's for coffee…

A little administrivia note: After 2 years, we're finally getting the beansec.org domain, blog, email, etc. setup…expect completion in about a week.

See you there!

/Hoff


How To Be PCI Compliant in the Cloud…

March 15th, 2009

I kicked off a bit of a dust storm some months ago when I wrote a tongue-in-cheek post titled "Please Help Me: I Need a QSA to Assess PCI/DSS Compliance In the Cloud."  It may have been a little contrived, but it asked some really important questions and started some really good conversations on my blog and elsewhere.

At SourceBoston I sat in on Mike Dahn's presentation titled "Cloud Compliance and Privacy," in which he did an excellent job outlining the many issues surrounding PCI and compliance and their relevance to Cloud Computing.

Shortly thereafter, I was speaking to Geva Perry and James Urquhart on their "Overcast" podcast and the topic of PCI and Cloud came up. 

Geva asked me whether, after my rant on PCI and the Cloud, I was saying that one could never be PCI compliant in the Cloud.  I basically answered that one could be PCI compliant in the Cloud, depending upon the services used/offered by the provider and what sort of data you trafficked in.

Specifically, Geva made reference to the latest announcement by Rackspace regarding their Mosso Cloud offering and PCI compliance, in which they tout that by using Mosso, a customer can be "PCI Compliant."  Since I hadn't seen the specifics of the offering, I deferred my commentary, but here's what I found:

Cloud Sites, Mosso|The Rackspace Cloud’s Flagship offering, is officially the very first cloud hosting solution to enable an Internet merchant to pass PCI Compliance scans for both McAfee’s PCI scans and McAfee Secure Site scans. 

This achievement occurred just after Computer World published an article where some CIO’s shared their concern that Cloud Computing is still limited to “things that don’t require full levels of security.”  This landmark breakthrough may be the beginning of an answer to those fears, as Mosso leads Cloud Hosting towards a solid future of trust and reliability.

Mosso's blog featured an example of a customer — The Spreadsheet Store — who allegedly attained PCI compliance by using Mosso's offering. Pay very close attention to the bits below:

“We are making the Cloud business-ready.  Online merchants, like The Spreadsheet Store can now benefit from the scalability of the Cloud without compromising the security of online transactions,” says Emil Sayegh, General Manager of Mosso|The Rackspace Cloud.  “We are thrilled to have worked with The Spreadsheet Store to prepare the Cloud for their online transactions.”

The Spreadsheet Store set up their site using aspdotnetstorefront, “Which is, in our opinion, the best shopping cart solution on the market today,” says Murphy.  “It also happens to be fully compatible with Mosso.”  Using Authorize.Net, a secure payment gateway, to handle credit card transaction, The Spreadsheet Store does not store any credit card information on the servers.  Murphy and team use MaxMind for fraud prevention, Cardinal Commerce for MasterCard Secure Code and Verified by Visa, McAfee for PCI and daily vulnerability scans, and Thawte for SSL certification.

So after all of those lofty words relating to "…preparing the Cloud for…online transactions," what you can decipher is that Mosso doesn't seem to provide services to The Spreadsheet Store which are actually in scope for PCI in the first place!*

The Spreadsheet Store redirects that functionality to a third-party card processor!

So what this really means is that if you utilize a Cloud-based offering, don't traffic in data that is within PCI scope, and instead redirect to/use someone else's service to process and store credit card data, then it's much easier to become PCI compliant.  Um, duh.
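To make that pattern concrete, here's a minimal, purely hypothetical sketch of the scope reduction described above: the card number goes straight to the payment gateway (mocked below — nothing here is Authorize.Net's real API), and the merchant's cloud-hosted app only ever sees an opaque transaction token, so its servers never receive, process, store or transmit cardholder data:

```python
# Hypothetical sketch of the scope-reducing pattern: card data is handled
# only by the gateway (mocked here), never by the merchant's cloud servers.
# MockGateway and MerchantApp are illustrative names, not a real API.

class MockGateway:
    """Stands in for a hosted payment processor."""
    def __init__(self):
        self._vault = {}       # card data lives only at the gateway
        self._next_id = 0

    def charge(self, pan, amount):
        self._next_id += 1
        token = f"txn-{self._next_id}"
        self._vault[token] = (pan, amount)
        return token           # merchant receives a token, never the PAN

class MerchantApp:
    """The cloud-hosted storefront: stores tokens only."""
    def __init__(self):
        self.orders = {}

    def record_order(self, order_id, token):
        self.orders[order_id] = token

gateway = MockGateway()
app = MerchantApp()

# The shopper's card number is submitted to the gateway directly...
token = gateway.charge(pan="4111111111111111", amount=49.95)
# ...and only the opaque token ever reaches the merchant's servers.
app.record_order("order-1001", token)

print(app.orders)              # → {'order-1001': 'txn-1'}
```

The merchant's infrastructure in this sketch holds no PAN at all, which is exactly why the golden PCI question — is *their* Cloud receiving, processing, storing or transmitting card data? — matters so much.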

The goofiest bit here is that in Mosso's own "PCI How-To" (warning: PDF) primer, they basically establish that you cannot be PCI compliant by using them if you traffic in credit card information:

Cloud Sites is not currently designed for the storage or archival of credit card information.  In order to build a PCI compliant e-commerce solution, Cloud Sites needs to be paired up with a payment gateway partner.

Doh!

I actually wrote quite a detailed breakdown of this announcement for this post yesterday, but I awoke to find my buddy Craig Balding had already done a stellar job of that (curses, timezones!)  I'll refer you to his post on the matter, but here's the gem in all of this.  Craig summed it up perfectly:

The fact that Mosso is seeking ways to help their customers off-load as much PCI compliance requirements to other 3rd parties is fine – it makes business sense for them and their merchant customers.  It’s their positioning of the effort as a “landmark breakthrough” and that they are somehow pioneers which leads to generalisations rooted in misunderstandings that is the problem.
Next time you hear someone say ‘Cloud Provider X is PCI compliant’, ask the golden PCI question: is their Cloud receiving, processing, storing or transmitting Credit Card data (as defined by the PCI DSS)?  If they say ‘No’, you’ll know what that really means…marketecture.

There's some nifty marketing for you, eh?

* Except for the fact that the web servers housed at Mosso must undergo regularly-scheduled vulnerability scans — which Mosso doesn't do, either.

On the Overcast Podcast with Geva Perry and James Urquhart

March 13th, 2009

Geva and James were kind (foolish?) enough to invite me onto their Overcast podcast today:

In this podcast we talk to Christopher Hoff, renowned information security expert, and especially security in the context of virtualization and cloud computing. Chris is the author of the Rational Survivability blog, and can be followed as @Beaker on Twitter.
Show Notes:

    • Chris talks about some of the myths and misconceptions about security in the cloud. He addresses the claim that Cloud Providers Are Better At Securing Your Data Than You Are and the benefits and shortcomings of security in the cloud.
    • We talk about Chris's Taxonomy of Cloud Computing (excuse me, model of cloud computing)
    • Chris goes through some specific challenges and solutions for PCI-compliance in the cloud
    • Chris examines some of the security issues associated with multi-tenant architecture and virtualization
Check it out here.

/Hoff