Author Archive

The Cloud As a (Hax0r’s) Calculator. Yawn…

January 11th, 2011

If I see another news story in which a “hacker” demonstrates that “utilizing the cloud” to harness compute on demand can do what one might otherwise use a botnet or specialized hardware for, and then suggests that this somehow compromises an entire branch of technology, I’m going to…

Meh.

Yeah, cloud makes this cheap and accessible…rainbow table cracking using IaaS images via a cloud provider…passwords, wifi creds, credit card numbers, pi…

Please.
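
To put the yawn in perspective, here is a rough back-of-the-envelope sketch in Python. Every number in it (per-node guess rate, hourly price, keyspace) is an illustrative assumption rather than a benchmark or anyone’s actual pricing; the only point is that renting the compute is a calculator exercise, not a breakthrough.

```python
# Back-of-the-envelope only: every number below is an illustrative assumption,
# not a measured benchmark or any provider's actual pricing.

KEYSPACE = 26 ** 8                  # exhaustive search of 8-char lowercase passphrases
GUESSES_PER_SEC_PER_NODE = 50_000   # assumed per-node rate for a slow hash (WPA-PSK-ish)
COST_PER_NODE_HOUR = 2.10           # assumed on-demand hourly price, USD
NODES = 100                         # rent capacity on demand instead of herding a botnet

seconds = KEYSPACE / (GUESSES_PER_SEC_PER_NODE * NODES)
hours = seconds / 3600
cost = hours * NODES * COST_PER_NODE_HOUR

print(f"~{hours:.1f} wall-clock hours across {NODES} rented nodes, roughly ${cost:,.0f}")
```

Swap in rainbow tables or GPU instances and the numbers move around, but the conclusion doesn’t: it’s the same math a botnet operator does, only with a credit card.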

See:

Researcher cracks Wi-Fi passwords with Amazon cloud

Using Cloud Computing To Crack Passwords – Amazon’s EC2


Incomplete Thought: Why Security Doesn’t Scale…Yet.

January 11th, 2011

There are lots of reasons one might use to illustrate why operationalizing security — both from the human and technology perspectives — doesn’t scale.

I’ve painted numerous pictures highlighting the cyclical nature of technology transitions, the supply/demand curve related to threats, vulnerabilities, technology and compensating controls and even relevant anecdotes involving the intersection of Moore’s and Metcalfe’s laws.  This really was a central theme in my Cloudinomicon presentation; “idempotent infrastructure, building survivable systems and bringing sexy back to information centricity.”

Here are some other examples of things I’ve written about in this realm.

Batting around how public “commodity” cloud solutions force us to re-evaluate how, where, why and who “does” security was an interesting journey.  Ultimately, it comes down to architecture and poking at the sanctity of models hinged on an operational premise that may or may not be as relevant as it used to be.

However, I think the most poignant and yet potentially obvious answer to the “why doesn’t security scale?” question is that security products, by design, don’t scale: they simply weren’t built to allow for automation across almost any aspect of their architecture.

Automation and the interfaces (read: APIs) by which security products ought to be provisioned, orchestrated, and deployed are simply lacking in most security products.

Yes, there exist security products that are distributed, but they are still managed, provisioned and deployed manually — generally using a hub-and-spoke management model that doesn’t lend itself to automating anything that does not otherwise rely upon bubble-gum and baling wire scripting…

Sure, we’ve had things like SNMP as a “standard interface” for “management” for a long while 😉  We’ve had common ways of describing threats and vulnerabilities.  Recently we’ve seen XML-based APIs emerge as a function of the latest generation of (mostly virtualized) firewall technologies, but most products still rely upon stand-alone GUIs, CLIs, element managers and a meat cloud of operators to push the go button (or reconfigure.)

Really annoying.
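
For contrast, here is a strawman of what “API-driven” ought to look like for something as basic as pushing a firewall policy. The endpoint, payload schema and token below are entirely hypothetical (no particular vendor’s interface is implied), but this is the kind of call an orchestration system should be able to make without a human clicking through a GUI:

```python
import requests  # any HTTP client works; the interface is the point, not the library

# Hypothetical management endpoint and schema -- not any real vendor's API.
FIREWALL_API = "https://fw-mgmt.example.internal/api/v1/policies"

rule = {
    "name": "allow-web-tier-to-db",
    "source": "web-tier",        # logical groups rather than IPs, so the policy
    "destination": "db-tier",    # survives instances being re-provisioned
    "ports": [3306],
    "action": "allow",
}

resp = requests.post(
    FIREWALL_API,
    json=rule,
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=10,
)
resp.raise_for_status()
print("policy created:", resp.json().get("id"))
```

Until a call like that, plus a standard way to express the policy itself, is table stakes, “automation” mostly means scripting screen-scrapes of element managers.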

Alongside the lack of standard API-based management planes, control planes are largely proprietary and the output for correlated event-driven telemetry at all layers of the stack is equally lacking.  Of course the applications and security layers that run atop infrastructure are still largely discrete thus making the problem more difficult.

The good news is that virtualization in the enterprise and the emergence of cultural and operational models predicated upon automation are starting to influence product roadmaps in ways that will positively affect the problem space described above, but we’ve got a long haul ahead as we make this transition.

Security vendors are starting to realize that they must retool many of their technology roadmaps to deal with the impact of dynamism and automation.  Some, though not all, are painfully discovering that simply creating a virtualized version of a physical appliance doesn’t make it a virtual security solution (or cloud security solution), in the same way that moving an application directly to cloud doesn’t necessarily make it a “cloud application.”

In the same way that one must often re-write or specifically design applications for cloud, we have to do the same for security.  Arguably there are things that can and should be preserved; basic underpinnings such as firewalls, for example, don’t need to change at their core, but their “packaging” does.

I’m privy to lots of the underlying mechanics of these activities — from open source to highly-proprietary — and I’m heartened by the fact that we’re beginning to make progress.  We shouldn’t have to make a distinction between crafting and deploying security policies in physical or virtual environments.  We shouldn’t be held hostage by the separation of application logic from the underlying platforms.

In the long term, I’m optimistic we won’t have to.

/Hoff


The Curious Case Of the MBO Cloud

December 23rd, 2010

I was speaking to an enterprise account manager the other day regarding strategic engagements in Cloud Computing in very large enterprises.

He remarked on the non-surprising parallelism occurring as these companies build and execute on cloud strategies that involve both public and private cloud initiatives.

Many of them are still trying to leverage the value of virtualization and are thus often conservative about their path forward.  Many are blazing new trails.

We talked about the usual barriers to entry for even small PoCs: compliance, security, lack of expertise, budget, etc., and then he shook his head solemnly, stared at the ground and mumbled something about a new threat to the overall progress toward enterprise cloud adoption.

MBO Cloud.

We’ve all heard of public, private, virtual private, hybrid, and community clouds, right? But “MBO Cloud?”

I asked. He clarified:

Cloud computing is such a hot topic, especially with its promise of huge cost savings, agility, and reduced time-to-market for services and goods, that many large companies who might otherwise be unable or unwilling to pilot using a public cloud provider, and who also don’t want to risk much (if any) capital outlay for software and infrastructure to test private cloud, are taking an interesting turn.

They’re trying to replicate Amazon or Google but not for the right reasons or workloads. They just look at “cloud” as some generic low-cost infrastructure platform that requires some open source and a couple of consultants — or even a full-time team of “developers” assigned to make it tick.

They rush out, buy 10 off-the-shelf white-label commodity multi-CPU/multi-core servers, acquire a plain vanilla NAS or low-end SAN storage appliance, sprinkle on some Xen or KVM, load on some unremarkable random set of open source software packages to test with a tidy web front-end and call it “cloud.”  No provisioning, no orchestration, no self-service portals, no chargeback, no security, no real scale, no operational re-alignment, no core applications…

It costs them next to nothing and it delivers about the same because they’re not designing for business cases that are at all relevant, they’re simply trying to copy Amazon and point to a shiny new rack as “cloud.”

Why do they do this? To gain experience and expertise? To dabble cautiously in an emerging set of technological and operational models?  To offload critical workloads that scale up and down?

Nope.  They do this for two reasons:

  1. Now that they have proven they can “successfully” spin up a “cloud” — however useless it may be — that costs next to nothing, it gives them leverage to squeeze vendors on pricing when and if they are able to move beyond this pile of junk, and
  2. Management By Objectives (MBO) — or a fancy way of saying, “bonus.”  Many C-levels and their ops staff are compensated via bonus on hitting certain objectives. One of them (for all the reasons stated above) is “deliver on the strategy and execution for cloud computing.” This half-hearted effort sadly qualifies.

So here’s the problem…when these efforts flame out and don’t deliver, they will impact the success of cloud in general — everywhere from a private cloud vendor to even potentially public cloud offerings like AWS.  Why?  Because as we already know, *anything* that smells at all like failure gets reflexively blamed on cloud these days, and as these craptastic “cloud” PoCs fail to deliver — even on minimal cash outlay — it’s going to be hard to get a second chance given the bad taste left in the mouths of the business and management.

The opposite point could also be made in regard to public cloud services — that these truly “false cloud*” trials, based on poorly architected and executed bubble gum and baling wire, will drive companies to public cloud (however long that may take as compliance and security catch up.)

It will be interesting to see which happens first.

Either way, beware the actual “false cloud” but realize that the motivation behind many of them isn’t the betterment of the business or evolution of IT, it’s the fattening of wallets.

/Hoff

* I’m leveraging “false cloud” here purely to illustrate a point; it’s a term unfortunately levied on all private cloud initiatives, even genuinely useful ones, by certain public cloud providers.


CloudSwitch: Traitor To the [Public Cloud] Cause…

December 15th, 2010

Ellen Rubin and John Considine from CloudSwitch chuckled when I muttered this toward them in some sort of channeled pantomime of what an evaluation of their offering might bring from public-only cloud apologists.

After all, simply taking an application and moving it to a cloud doesn’t make it a “cloud application.”  Further, to fully leverage the automation, scale, provisioning and orchestration of “true” cloud platforms, one must re-write one’s applications and deal with the associated repercussions of doing so.  Things like security, networking, compliance, operations, etc.  Right?

Well…

CloudSwitch’s solutions — which defy this fundamental rearchitecture requirement — enable enterprises to securely encapsulate and transport enterprise datacenter-hosted, VM-based applications as-is and run them atop public cloud provider environments such as Amazon Web Services and Terremark in a rather well-designed, security-conscious manner.

The reality is that their customer base — large enterprises in many very demanding verticals — seek to divine strategic technologies that allow them to pick and choose what, how and when to decide to “cloudify” their environments.  In short, CloudSwitch TODAY offers these customers a way to leverage the goodness of public utility in cloud without the need to fundamentally rearchitect the applications and accompanying infrastructure stacks — assuming they are already virtualized.  CloudSwitch seeks to do a lot more as they mature the product.

I went deep on current product capabilities and then John and I spent a couple of hours going off the reservation discussing what the platform plans are — both roadmapped and otherwise.

It was fascinating.

The secure isolation and network connectivity models touch on overlay capabilities from third parties, hypervisor/cloud stack providers like VMware (vCloud Director) as well as offers from folks like Amazon and their VPC, but CloudSwitch provides a way to solve many of the frustrating and sometimes show-stopping elements of application migration to cloud.  The preservation of bridged/routed networking connectivity back to the enterprise LAN is well thought out.

This is really an audit and compliance-friendly solution…pair a certified cloud provider (like AWS, as an example) up with app stacks in VMs that the customer is responsible for getting certified (see the security/compliance=shared responsibility post) and you’ve got something sweeter’n YooHoo…

It really exemplifies the notion of what people think of when they envision Hybrid Cloud today.  “Native” cloud apps written specifically for cloud environments, “transported” cloud apps leveraging CloudSwitch, and on-premises enterprise datacenters all interconnected.  Sweet.  More than just networking…

For the sake of not treading on FrieNDA elements that weaved their way in and out of our conversation, I’m not at liberty to discuss many of the things that really make this a powerful platform both now and in future releases.  If you want more technical detail on how it works, call ’em up, visit their website or check out Krishnan’s post.

Let me just say that the product today is impressive — it has some features from a security, compliance, reporting and auditing perspective that I think can be further improved, but if you are an enterprise looking for a way to make graceful use of public cloud computing today in a secure manner, I’d definitely take a look at CloudSwitch.

/Hoff


Incomplete Thought: Why We Have The iPhone and AT&T To Thank For Cloud…

December 15th, 2010

I’m not sure this makes any sense whatsoever, but that’s why it’s labeled “incomplete thought,” isn’t it? 😉

A few weeks ago I was delivering my Cloudinomicon talk at the Cloud Security Alliance Congress in Orlando and as I was describing the cyclical nature of computing paradigms and the Security Hamster Sine Wave of Pain, it dawned on me — out loud — that we have Apple’s iPhone and its U.S. carrier, AT&T, to thank for the success of Cloud Computing.

My friends from AT&T perked up when I said that.  Then I explained…

So let me set this up. It will require some blog article ping-pong in order to reference earlier scribbling on the topic, but here’s the very rough process:

  1. I’ve pointed out that there are two fundamental perspectives when describing Cloud and Cloud Computing: the operational provider’s view and the experiential consumer’s view.  To the provider, the IT-centric, empirical and clinical nuances are what matters. To the consumer, anything that connects to the Internet via any computing platform using any app that interacts with any sort of information is also cloud.  There’s probably a business/market view, but I’ll keep things simple for purpose of illustration.  I wrote about this here:  Cloud/Cloud Computing Definitions – Why they Do(n’t) Matter…
  2. As we look at the adoption of cloud computing, the consumption model ultimately becomes more interesting than how the service is delivered (as it commoditizes.) My presentation “The Future of Cloud” focused on the fact that the mobile computing platforms (phones, iPads, netbooks, thin(ner) clients, etc) are really the next frontier.  I pointed out that we have the simultaneous mass re-centralization of applications and data in massive cloud data centers (however distributed ethereally they may be) and the massive distribution of the same applications and data across increasingly more intelligent, capable and storage-enabled mobile computing devices.  I wrote about this here: Slides from My Cloud Security Alliance Keynote: The Cloud Magic 8 Ball (Future Of Cloud)
  3. The iPhone isn’t really that remarkable a piece of technology in and of itself; in fact it capitalizes on and cannibalizes many innovations and technologies that came before it.  However, as I mentioned in my post “Cloud Maturity: Just Like the iPhone, There’s An App For That…”: “The thing I love about my iPhone is that it’s not a piece of technology I think about but rather, it’s the way I interact with it to get what I want done.  It has its quirks, but it works…for millions of people.  Add in iTunes, the community of music/video/application artists/developers and the ecosystem that surrounds it, and voila…Cloud.”

  4. At each and every compute paradigm shift, we’ve seen the value of the network waffle between “intelligent” platform and simple transport depending upon where we were with the intersection of speeds/feeds and ubiquity/availability of access (the collision of Moore’s and Metcalfe’s laws?)  In many cases, we’ve had to rely on workarounds that have hindered the adoption of really valuable and worthwhile technologies and operational models because the “network” didn’t deliver.

I think we’re bumping up against point #4 today.  So here’s where I find this interesting.  If we see the move to the consumerized view of accessing resources from mobile platforms to resources located both on-phone and in-cloud, you’ll notice that even in densely-populated high-technology urban settings, we have poor coverage, slow transit and congested, high-latency, low-speed access — wired and wireless for that matter.

This is a problem. In fact it’s such a problem that if we look backward to about 4 years ago when “cloud computing” and the iPhone became entries in the lexicon of popular culture, this issue completely changed the entire application deployment model and use case of the iPhone as a mobile platform.  Huh?

Do you remember when the iPhone first came out? It was a reasonably capable compute platform with a decent amount of storage. Its network connectivity, however, sucked.

Pair that with the application strategy: per Steve Jobs, there were emphatically not going to be native applications on the iPhone, for many reasons including security.  Every application was basically just a hyperlink to a web application located elsewhere.  The phone was nothing more than a web browser that delivered applications running elsewhere (for the most part, especially when things like Flash weren’t present.) Today we’d call that “The Cloud.”

Interestingly, at this point, the value of the iPhone as an application platform was diminished since it was not highly differentiated from any other smartphone that had a web browser.

Time went by and connectivity was still so awful and unreliable that Apple reversed direction to drive value and revenue in the platform, engaged a developer community, created the App Store and provided for a hybrid model — apps both on-platform and off — in order to deal with this lack of ubiquitous connectivity.  Operating systems, protocols and applications were invented/deployed to handle the synchronization of on- and off-line application and information usage, because we don’t have cellular or WiFi connectivity pervasive and fast enough that we otherwise wouldn’t care.

So this gets back to what I meant when I said we have AT&T to thank for Cloud.  If you can imagine that we *did* have amazingly reliable and ubiquitous connectivity from devices like our iPhones — those consumerized access points to our apps and data — perhaps the demand for and/or use patterns of cloud computing would be wildly different from where they are today. Perhaps they wouldn’t, but if you think back to each of those huge compute paradigm shifts — mainframe, mini, micro, P.C., Web 1.0, Web 2.0 — the “network” in terms of reliability, ubiquity and speed has always played a central role in adoption of technology and operational models.

Same as it ever was.

So, thanks AT&T — you may have inadvertently accelerated the back-end of cloud in order to otherwise compensate, leverage and improve the front-end of cloud (and vice versa.)  Now, can you do something about the fact that I have no signal at my house, please?

/Hoff


From the Concrete To The Hypervisor: Compliance and IaaS/PaaS Cloud – A Shared Responsibility

December 6th, 2010

* Update: A few hours after writing this last night, AWS announced they had achieved Level 1 PCI DSS Compliance. If you pay attention to how the announcement is worded, you’ll find a reasonable treatment of what PCI compliance means to an IaaS cloud provider – it’s actually the first time I’ve seen this honestly described:

Merchants and other service providers can now run their applications on AWS PCI-compliant technology infrastructure to store, process and transmit credit card information in the cloud. Customers can use AWS cloud infrastructure, which has been validated at the highest level (Level 1) of PCI compliance, to build their cardholder environment and achieve PCI certification for their applications.

Note how they phrased this, then read my original post below.

However, pay no attention to the fact that they chose to make this announcement on Pearl Harbor Day 😉

Here’s the thing…

A cloud provider can achieve compliance (such as PCI — yes v2.0 even) such that the in-scope elements of that provider which are audited and assessed can ultimately contribute to the compliance of a customer operating atop that environment.  We’ve seen a number of providers assert compliance across many fronts, but they marketed their way into a yellow card by over-reaching…

It should be clear already, but for a service to be considered compliant, the customer’s in-scope elements running atop a cloud provider must also undergo assessment and achieve compliance.

That means compliance is elementally additive, in the same way “security” is when someone else has direct operational control over elements in the stack that you don’t.

In the case of an IaaS cloud provider who may achieve compliance from the “concrete to the hypervisor” (let’s use PCI again), the customer in turn must have the contents of the virtual machine (OS, applications, operations, controls, etc.) independently assessed and shown to meet PCI compliance in order that the entire stack of in-scope elements can be described as compliant.

Thus security — and more specifically compliance — in IaaS (and PaaS) is a shared responsibility.
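
A toy way to picture “elementally additive,” with invented requirement names (real PCI DSS scoping is far more granular): the stack can only be described as compliant where the provider-assessed and customer-assessed layers together cover everything in scope.

```python
# Toy model only: requirement names are invented and wildly simplified.
IN_SCOPE = {
    "physical-security", "hypervisor", "provider-network-segmentation",  # provider side
    "guest-os-hardening", "cardholder-data-encryption", "app-logging",   # customer side
}

provider_assessed = {"physical-security", "hypervisor", "provider-network-segmentation"}
customer_assessed = {"guest-os-hardening", "cardholder-data-encryption"}

gaps = IN_SCOPE - (provider_assessed | customer_assessed)

if gaps:
    print("Stack is NOT compliant; unassessed in-scope elements:", sorted(gaps))
else:
    print("Stack is compliant only because BOTH layers were independently assessed.")
```

Neither slice alone gets you there; “inheriting” the provider’s certification covers only the provider’s slice.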

I’ve spent many a blog battling marketing dragons from cloud providers that assert or imply that by only using said provider’s network which has undergone and passed one or more audits against a compliance framework, that any of its customers magically inherit certification by default. I trust this is recognized as completely false.

As compliance frameworks catch up to the unique use-cases that multi-tenancy and technologies such as virtualization bring, we’ll see more “compliant cloud” offerings spring up, easing customer pain related to the underlying moving parts.  This is, for example, what FedRAMP is aiming to provide with “pre-approved” cloud offerings.  We’ve got visibility and transparency issues to solve, as well as temporal issues such as the frequency and period of compliance audits, but there’s progress.

We’re going to see more and more of this as infrastructure- and platform-as-a-service vendors look to mutually accelerate compliance to achieve that which software-as-a-service can more organically deliver as a function of stack control.

/Hoff

* Note: It’s still a little unclear to me how some of the PCI requirements are met in an environment like an IaaS cloud provider, where the “applications” we typically think of as trafficking in PCI in-scope data don’t exist (but the infrastructure does), but I would assume that AWS leverages other certifications such as SAS 70 and ISO cumulatively to petition the QSA for consideration during certification.  I’ll ask this question of AWS and see what I get back.


EMC [Private] Cloud Architect Certifications…Interesting.

December 6th, 2010

EMC today launched two new private cloud architect certifications.  I find it intriguing that the certification is described as “Leverag[ing] ‘open’ curriculum training and certification focused on technology concepts and principles applicable to any vendor environment.”  I’ll be interested to see how applicable to Citrix and Hyper-V environments the courseware is… 😉

From their community blog:

Today we announced two new EMC Proven Professional certification tracks. These advanced-level tracks are targeted toward architects, designers, and consultants who are, or will be, responsible for designing highly virtualized cloud-ready infrastructures leading to the design of IT-as-a-Service environments for Private Cloud as well as for Service Providers.

  • Cloud Architect (EMCCA) certification is targeted toward architects who deliver virtualization and cloud designs based on business strategies encompassing all key technical domains (compute, storage, networking, applications, etc).
  • Data Center Architect (EMCDCA) certification is for architects and designers who provide detailed designs for information storage specific technical domains to complement, expand, and complete their overall virtualization and cloud design.

Both tracks are based on ‘open’ curriculum where the focus is on technology/principles rather than specific products (similar to ISM design).

We strongly believe that both these tracks meet a necessary requirement for organizations and individual professionals as they plan extensive virtualization and adoption of cloud computing…please do take some time and review the details of these new exciting tracks by visiting the EMC Education Services Portal or downloading this brochure (pdf).

It’s taking me a while to click through the various PDFs which explain the various levels and requirements for the EMC Cloud Architect (EMCCA) certifications:

  • EMCISA PREREQUISITE: INFORMATION STORAGE ASSOCIATE CERTIFICATION
  • EMCCA VIRTUALIZED INFRASTRUCTURE – SPECIALIST-LEVEL CERTIFICATION
  • EMCCAe IT-AS-A-SERVICE – EXPERT-LEVEL CERTIFICATION
I look forward to digesting this all and seeing where the Cloud Security Alliance’s CCSK (Certificate of Cloud Security Knowledge) aligns.

/Hoff

I’ll Say It Again: Security Is NOT the Biggest Barrier To Cloud…

December 6th, 2010

Nope.

Security is not the biggest barrier to companies moving to applications, information and services delivered using cloud computing.

What is?

Compliance.

See Cloud: Security Doesn’t Matter (Or, In Cloud, Nobody Can Hear You Scream) and You Can’t Secure The Cloud…

That means what one gives up in terms of direct operational control, one must gain back in terms of visibility and transparency (sort of like what www.cloudaudit.org is after.)
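
As a sketch of what “gaining back visibility” could look like mechanically, imagine a provider publishing its assertions somewhere an assessor can script against. The URL layout and JSON fields below are invented for illustration and are not the actual CloudAudit namespace:

```python
import requests  # sketch only: URL layout and fields are invented, not the CloudAudit spec

BASE = "https://provider.example.com/.well-known/compliance"  # hypothetical location

for artifact in ("pci-dss/attestation.json", "iso-27001/certificate.json"):
    resp = requests.get(f"{BASE}/{artifact}", timeout=10)
    if resp.ok:
        doc = resp.json()
        print(f"{artifact}: {doc.get('status')} (expires {doc.get('expires')})")
    else:
        print(f"{artifact}: not published (HTTP {resp.status_code})")
```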

Discuss.

/Hoff


On Security Conference Themes: Offense *Versus* Defense – Or, Can You Code?

November 22nd, 2010

This morning’s dialog on Twitter from @wmremes and @singe reminded me of something that’s been bouncing around in my head for some time.

Wim blogged about a tweet Jeff Moss made regarding Black Hat DC in which he suggested CFP submissions should focus on offense (versus defense.)

Black Hat (and Defcon) have long focused on presentations which highlight novel emerging attacks.  There are generally not a lot of high-profile “defensive” presentations/talks because, for the most part, they’re just not sexy; they tend to involve hard work, cultural realignment, and the reality that as hard as we try, attackers will always out-innovate and out-pace defenders.

More realistically, offense is sexy and offense sells — and it often sells defense.  That’s why vendors sponsor those shows in the first place.

Along these lines, one will notice that within our industry, the defining criterion for the attack-versus-defend talks, and those who give them, is one’s ability to write code and produce tools that demonstrate the vulnerability via exploit.  Conceptual vulnerabilities paired with non-existent exploits are generally thought of as fodder for academia.  Only when a tool that weaponizes an attack shows up do people pay attention.

Zero days rule by definition. There’s no analog on the defensive side unless you buy into marketing like “…ahead of the threat.” *cough* Defense for offense that doesn’t exist generally doesn’t get the majority of the funding 😉

So it’s no wonder that security “rockstars” in our industry are generally those who produce attack/offensive code which illustrate how a vector can be exploited.  It’s tangible.  It’s demonstrable.  It’s sexy.

On the other hand, most defenders are reconciled to using tools that others wrote — or become specialists in the integration of them — in order to parlay some advantage over the ever-increasing wares of the former.

Think of those folks who represent the security industry in terms of mindshare and get the most amount of press.  Overwhelmingly it’s those “hax0rs” who write cool tools — tools that are more offensive in nature, even if they produce results oriented toward allowing practitioners to defend better (or at least that’s how they’re sold.)  That said, there are also some folks who *do* code and *do* create things that are defensive in nature.

I believe the answer lies in balance; we need flashy exploits (no matter how impractical/irrelevant they may be to a large portion of the population) to drive awareness.  We also need more practitioner/governance talks to give people platforms upon which they can start to architect solutions.  We need more defenders to be able to write code.

Perhaps that’s what Richard Bejtlich meant when he tweeted: “Real security is built, not bought.”  That’s an interesting statement on lots of fronts. I’m selfishly taking Richard’s statement out of context to support my point, so hopefully he’ll forgive me.

That said, I don’t write code.  More specifically, I don’t write code well.  I have hundreds of ideas of things I’d like to do but can’t bridge the gap between ideation and proof-of-concept because I can’t write code.

This is why I often “invent” scenarios I find plausible, talk about them, and then get people thinking about how we would defend against them — usually in the vacuum of either offensive or defensive tools being available, or at least realized.

Sometimes there aren’t good answers.

I hope we focus on this balance more at shows like Black Hat — I’m lucky enough to get to present my “research” there despite it being defensive in nature but we need more defensive tools and talks to make this a reality.

/Hoff


The Future Of Audit & Compliance Is…Facebook?

November 20th, 2010

I’ve had an epiphany.  The future is coming wherein we’ll truly have social security…

As the technology and operational models of virtualization and cloud computing mature and become operationally ubiquitous, ultimately delivering on the promise of agile, real-time service delivery via extreme levels of automation, the ugly necessities of security, audit and risk assessment will also require an evolution via automation to leverage the same.

At some point, that means the automated collection and overall assessment of posture (from a security, compliance, and risk perspective) will occur automagically (lest we continue to be the giant speed bump we’re described as being) and gleefully pop out an end result of “good”/“bad” or “pass”/“fail,” not unlike one of those in-the-flesh turkey thermometers that pops up to indicate doneness once a pre-set temperature is reached.
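
A minimal sketch of that turkey thermometer, assuming the genuinely hard part (automated, trustworthy collection of control evidence) is already solved, which it is not: feed in check results, pop out done or not-done.

```python
# Minimal sketch: assumes evidence collection is already automated and trustworthy,
# which is precisely the hard part. Control names are invented.
def posture(checks: dict[str, bool], required: set[str]) -> str:
    missing = required - checks.keys()
    failed = {name for name in required & checks.keys() if not checks[name]}
    if not missing and not failed:
        return "PASS"
    return f"FAIL (missing={sorted(missing)}, failed={sorted(failed)})"

print(posture(
    checks={"disk-encryption": True, "patch-level": False, "mfa-enabled": True},
    required={"disk-encryption", "patch-level", "mfa-enabled", "log-forwarding"},
))
# -> FAIL (missing=['log-forwarding'], failed=['patch-level'])
```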

What does that have to do with Facebook?

Simple.

When we’ve all been sucked into the collective hive of the InterCloud matrix, the CISO/assessor/auditor/regulator will look at the score, the resultant assertions and the supporting artifacts gathered via automation and simply click a button.

You see, the auditor/regulator really is your friend. 😉

It’s a cruel future.  We’re all Zuck’d.

/Hoff
