Six Year Old Rationalizes the Cloud

February 22nd, 2010 6 comments

My youngest, Olivia, was interested in a video promo I was filming today for the RSA Security Conference on Cloud Computing.  She mentioned that she wanted to film a spot on Cloud, too.  Who am I to argue?

Direct link here.  Embedded below.

…she gets rather upset about people’s poor password practices around 6:25 or so.  Way to make a security daddy proud! 😉

Next up, virtualization!

/Hoff


Don’t Hassle the Hoff: Recent Press & Podcast Coverage & Upcoming Speaking Engagements

February 19th, 2010 No comments

Here is some of the coverage from the last couple of months on topics relevant to my blog content, presentations and speaking engagements. It's in no particular order or priority, and unfortunately I haven't kept a thorough record.

Important Stuff I’m Working On:

Press/Technology & Security eZines/Website/Blog Coverage/Meaningful Links:

Recent Speaking Engagements/Confirmed to speak at the following upcoming events:

  • Govt Solutions Forum, Feb 1-2 (panel in DC)
  • Govt Solutions Forum, Feb 24, D.C.
  • ESAF, San Francisco, March 1
  • Cloud Security Alliance Summit, San Francisco, March 1
  • RSA Security Conference, March 1-5, San Francisco
  • Microsoft BlueHat, Buenos Aires, Argentina – March 16-19th
  • ISSA General Assembly, Belgium
  • Infosec.be, Belgium
  • Codegate, South Korea, April 7-8
  • SOURCE Boston, April 21-23
  • Shot the Sheriff – Brazil – May 17th
  • Gluecon, Denver, May 26/27
  • FIRST, Miami, FL, June 13-18
  • SANS DC – August 19th-20th

Conferences I am tentatively attending, trying to attend and/or working on logistics for speaking:

  • InterOp April 25-29 Vegas
  • Cisco Live – June 27th – July 1st Vegas
  • Blackhat 2010 – July 24-29 Vegas
  • Defcon
  • Notacon

Oh, let us not forget these top honors (buahahaha!)

  • Top 10 Sexy InfoSec Geeks (link)
  • The ThreatPost “All Decade Interview Team” (link)
  • ‘Cloud Hero’ and ‘Best Cloud Presentation’ – 2009 Cloudies Awards (link), and
  • 2010 RSA Social Security Bloggers Award nomination (link) 😉

[I often get a bunch of guff as to why I make these lists: ego, horn-tooting, self-aggrandizement. I wish I thought I were that important. 😉 The real reason is that it helps me keep track of useful stuff focused not only on my participation, but that of the rest of the blogosphere.]

/Hoff

Comments on the PwC/TSB Debate: The cloud/thin computing will fundamentally change the nature of cyber security…

February 16th, 2010 2 comments

I saw a very interesting post on LinkedIn with the title PwC/TSB Debate: The cloud/thin computing will fundamentally change the nature of cyber security…

PricewaterhouseCoopers are working with the Technology Strategy Board (part of BIS) on a high profile research project which aims to identify future technology and cyber security trends. These statements are forward looking and are intended to purely start a discussion around emerging/possible future trends. This is a great chance to be involved in an agenda setting piece of research. The findings will be released in the Spring at Infosec. We invite you to offer your thoughts…

The cloud/thin computing will fundamentally change the nature of cyber security…

The nature of cyber security threats will fundamentally change as the trend towards thin computing grows. Security updates can be managed instantly by the solution provider so every user has the latest security solution, the data leakage threat is reduced as data is stored centrally, systems can be scanned more efficiently and if Botnets capture end-point computers, the processing power captured is minimal. Furthermore, access to critical data can be centrally managed and as more email is centralised, malware can be identified and removed more easily. The key challenge will become identity management and ensuring users can only access their relevant files. The threat moves from the end-point to the centre.

What are your thoughts?

My response is simple.

Cloud Computing or "Thin Computing" as described above doesn't change the "nature" of (gag) "cyber security"; it simply changes its efficiency, investment focus, capital model and modality. As to the statement regarding threats moving "…from the end-point to the centre," the attack surface really becomes amorphous and, given the potential monoculture introduced by the virtualization layers underpinning these operations, perhaps expands.

Certainly the benefits described in the introduction above do mean changes to who, where and when risk mitigation might be applied, but those activities are, in most cases, still the same as in non-Cloud and "thick" computing.  That's not a "fundamental change" but rather an adjustment to a platform shift, just like when we went from mainframe to client/server.  We are still dealing with the remnant security issues (identity management, AAA, PKI, encryption, etc.) from prior computing inflection points that we've yet to fix.  Cloud is a great forcing function to help nibble away at them.

But if you substitute "client/server" (in relation to its evolution from the "mainframe era") for "cloud/thin computing" above, it all sounds quite familiar.

As I alluded to, there are some downsides to this re-centralization, but it is important to note that if we look at what PaaS/SaaS offerings and VDI/Thin/Cloud computing offer, I do believe they make us focus on protecting our information and building more survivable systems.

However, there’s a notable bifurcation occurring. Whilst the example above paints a picture of mass re-centralization, incredibly powerful mobile platforms are evolving.  These platforms (such as the iPhone) employ a hybrid approach featuring both native/local on-device applications and storage of data combined with the potential of thin client capability and interaction with distributed Cloud computing services.*

These hyper-mobile and incredibly powerful platforms — and the requirements to secure them in this mixed-access environment — mean that the efficiency gains on one hand are compromised by the need to once again secure diametrically-opposed computing experiences.  It's a "squeezing the balloon" problem.

The same exact thing is occurring in the Private versus Public Cloud Computing models.

/Hoff

* P.S. Bernard Golden also commented via Twitter regarding the emergence of Sensor nets which also have a very interesting set of implications on security as it relates to both the examples of Cloud and mobile computing elements above.


The Automated Audit, Assertion, Assessment, and Assurance API (A6) Becomes: CloudAudit

February 12th, 2010 No comments

I’m happy to announce that the Automated Audit, Assertion, Assessment, and Assurance API (A6) working group is organizing under the brand of “CloudAudit.”  We’re doing so to enable reaching a broader audience, ensure it is easier to find us in searches and generally better reflect the mission of the group.  A6 remains our byline.

We’ve refined how we are describing and approaching solving the problems of compliance, audit, and assurance in the cloud space and part of that is reflected in our re-branding.  You can find the original genesis for A6 here in this series of posts. Meanwhile, you can keep track of all things CloudAudit at our new home: http://www.CloudAudit.org.

The goal of CloudAudit is to provide a common interface that allows Cloud providers to automate the Audit, Assertion, Assessment, and Assurance (A6) of their environments and allow authorized consumers of their services to do likewise via an open, extensible and secure API.  CloudAudit is a volunteer cross-industry effort from the best minds and talent in Cloud, networking, security, audit, assurance, distributed application and system architecture backgrounds.

Our execution mantra is to:

  • Keep it simple, lightweight and easy to implement; offer primitive definitions & language structure using HTTP(S)
  • Allow for extension and elaboration by providers and choice of trusted assertion validation sources, checklist definitions, etc.
  • Not require adoption of other platform-specific APIs
  • Provide interfaces to Cloud naming and registry services
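To make the "primitive definitions & language structure using HTTP(S)" point concrete, here is a minimal sketch of what a CloudAudit-style consumer might do: fetch an assertion with a plain HTTP GET against a predictable, namespace-based URL tree. The `.well-known/cloudaudit` root, the dotted namespace layout, and the `manifest.xml` entry name shown here are illustrative assumptions, not the working group's final specification.

```python
# Illustrative sketch of a CloudAudit-style client. Assertions are fetched
# with plain HTTP GETs against a predictable, namespace-based URL tree.
# The ".well-known/cloudaudit" root and namespace layout are assumptions
# made for illustration only.
from urllib.parse import quote


def assertion_url(provider_base, namespace, entry="manifest.xml"):
    """Build the URL a consumer would GET to retrieve an assertion.

    A compliance requirement might live under a dotted namespace that
    maps directly onto a directory tree on the provider's side.
    """
    path = "/".join(quote(part) for part in namespace.split("."))
    return "%s/.well-known/cloudaudit/%s/%s" % (
        provider_base.rstrip("/"), path, entry)


url = assertion_url("https://cloud.example.com",
                    "org.cloudaudit.compliance.pcidss.1-2.requirement-1")
print(url)
# https://cloud.example.com/.well-known/cloudaudit/org/cloudaudit/compliance/pcidss/1-2/requirement-1/manifest.xml
```

Because the whole interface is just HTTP(S) and a naming convention, a provider can implement it with static files behind any web server, which is exactly the "simple, lightweight and easy to implement" property the mantra above calls for.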

The benefits to the cloud provider are clear: a single reference model that allows automation of many functions that today incur large costs in both manpower and time.  The base implementation is being designed to require little to no programmatic change to adopt.  For the consumer and interested/authorized third parties, it allows on-demand examination of the same set of functions.

Mapping to compliance, regulatory, service level, configuration, security and assurance frameworks as well as third party trust brokers is part of what A6 will also deliver.  CloudAudit is working closely with other alliance and standards body organizations such as the Cloud Security Alliance and ENISA.

If you want to know who's working on making this a reality, there are hundreds of interested parties, consumers as well as providers, such as Akamai, Amazon Web Services, Microsoft, NetSuite, Rackspace, Savvis, Terremark, Sun, VMware, and many others.

If you would like to get involved, please join the CloudAudit Working Group or visit the homepage here.

Here is the slide deck from the 2/12/10 working group call (our second) and a link to the WebEx playback of the call.


Pimping the Security Non-Cons: Troopers 2010

February 12th, 2010 No comments

My friends at ERNW in Germany are putting on another fantastic security conference this year. I was lucky enough to attend Troopers ’08 in Munich and this year it’s in Heidelberg.  Check out the details here.

TROOPERS10 – This time it’s a home match.

This year we’re bringing back the action right to the place where everything started: Heidelberg, Germany.

In 2007 the idea of a security conference without the usual product presentations, marketing blabla, and bull*ht-bingo was born – just pure practical IT security. After an enthusiastic response from our audiences in Munich we decided to evolve the concept into a full-blown conference combined with a series of workshops and round tables.

We're inviting (C)ISOs, IT auditors, sysadmins, security consultants and everyone who is involved with IT security to come to Heidelberg and get in touch with leading experts from all over the world. A number of workshops on Monday and Tuesday cover highly relevant topics in detail; on Wednesday and Thursday you'll learn about the latest developments, threats and achievements from world-class security evangelists, experts and hackers. And on Friday we seat you at round tables right next to the speakers and fellow experts. You'll be able to discuss your own strategies and concerns with them face-to-face. You will be listened to, because at the end of the day we're all the same: TROOPERS in the infosec world.

I’ll be posting a couple of other excellent conferences shortly.
/Hoff


Microsoft Azure Going “Down Stack,” Adding IaaS Capabilities. AWS/VMware WAR!

February 4th, 2010 4 comments

It's very interesting to see that, now that infrastructure-as-a-service (IaaS) players like Amazon Web Services are clawing their way "up the stack" and adding more platform-as-a-service (PaaS) capabilities, Microsoft is going "down stack" and providing IaaS capabilities by adding RDP and VM support to Azure.

From Carl Brooks’ (@eekygeeky) article today:

Microsoft is expected to add support for Remote Desktops and virtual machines (VMs) to Windows Azure by the end of March, and the company also says that prices for Azure, now a baseline $0.12 per hour, will be subject to change every so often.

Prashant Ketkar, marketing director for Azure, said that the service would be adding Remote Desktop capabilities as soon as possible, as well as the ability to load and run virtual machine images directly on the platform. Ketkar did not give a date for the new features, but said they were the two most requested items.

This move begins a definite trend away from the original concept for Azure in design and execution. It was originally thought of as a programming platform only: developers would write code directly into Azure, creating applications without even being aware of the underlying operating system or virtual instances. It will now become much closer in spirit to Amazon Web Services, where users control their machines directly. Microsoft still expects Azure customers to code for the platform and not always want hands on control, but it is bowing to pressure to cede control to users at deeper and deeper levels.

One major reason for the shift is that there are vast arrays of legacy Windows applications users expect to be able to run on a Windows platform, and Microsoft doesn’t want to lose potential customers because they can’t run applications they’ve already invested in on Azure. While some users will want to start fresh, most see cloud as a way to extend what they have, not discard it.

This sets the path to allow those enterprise customers running Hyper-V internally to take those VMs and run them on (or in conjunction with) Azure.

Besides the obvious competition with AWS in the public cloud space, there's also a private cloud element. As it stands now, one of the primary differentiators for VMware from the private-to-public cloud migration/portability/interoperability perspective is the concept that if you run vSphere in your enterprise, you can take the same VMs without modification and move them to a service provider who runs vCloud (based on vSphere).

This is a very interesting and smart move by Microsoft.

/Hoff


Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where…

January 31st, 2010 15 comments

Allan Leinwand from GigaOm wrote a great article asking “Where are the network virtual appliances?” This was followed up by another excellent post by Rich Miller.

Allan sets up the discussion describing how we’ve typically plumbed disparate physical appliances into our network infrastructure to provide discrete network and security capabilities such as load balancers, VPNs, SSL termination, firewalls, etc.  He then goes on to describe the stunted evolution of virtual appliances:

To be sure, some networking devices and appliances are now available in virtual form.  Switches and routers have begun to move toward virtualization with VMware’s vSwitch, Cisco’s Nexus 1000v, the open source Open vSwitch and routers and firewalls running in various VMs from the company I helped found, Vyatta.  For load balancers, Citrix has released a version of its Netscaler VPX software that runs on top of its virtual machine, XenServer; and Zeus Systems has an application traffic controller that can be deployed as a virtual appliance on Amazon EC2, Joyent and other public clouds.

Ultimately I think it prudent, for discussion's sake, to separate routing, switching and load balancing (connectivity) from functions such as DLP, firewalls, and IDS/IPS (security), as lumping them together obscures the real problem: the latter is completely dependent upon the capabilities and functionality of the former.  This is what Allan almost gets to when describing his lament with the virtual appliance ecosystem today:

Yet the fundamental problem remains: Most networking appliances are still stuck in physical hardware — hardware that may or may not be deployed where the applications need them, which means those applications and their associated VMs can be left with major gaps in their infrastructure needs. Without a full-featured and stateful firewall to protect an application, it’s susceptible to various Internet attacks.  A missing load balancer that operates at layers three through seven leaves a gap in the need to distribute load between multiple application servers. Meanwhile, the lack of an SSL accelerator to offload processing may lead to performance issues and without an IDS device present, malicious activities may occur.  Without some (or all) of these networking appliances available in a virtual environment, a VM may find itself constrained, unable to take full advantage of the possible economic benefits.

I’ve written about this many, many times. In fact almost three years ago I created a presentation called  “The Four Horsemen of the Virtualization Security Apocalypse” which described in excruciating detail how network virtual appliances were a big ball of fail and would be for some time. I further suggested that much of the “best-of-breed” products would ultimately become “good enough” features in virtualization vendor’s hypervisor platforms.

Why?  Because there are some very real problems with virtualization (and Cloud) as it relates to connectivity and security:

  1. Most of the virtual network appliances, especially those "ported" from versions that usually run on dedicated physical hardware (COTS or proprietary), do not provide feature, performance, scale or high-availability parity; most are hobbled or require per-platform customization or re-engineering in order to function.
  2. The resilience and high-availability options of today's off-the-shelf virtual connectivity do not pair well with the mobility and dynamism of de-coupled virtual machines; VMs are ultimately temporal, and networks don't like the topological instability caused by key components moving or disappearing.
  3. The performance and scale of virtual appliances still suffer when competing for I/O and resources on the same physical hosts as the guests they attempt to protect.
  4. Virtual connectivity is generally a function of the VMM (or a loadable module/domain therein). The architecture of the VMM has dramatic impact upon the architecture of the software designed to provide the connectivity, and vice versa.
  5. Security solutions are incredibly topology-sensitive.  Given the scenario in #1, when a VM moves or is distributed across the pooled infrastructure, unless the security capabilities are already present on the physical host or the connectivity and security layers share a control plane (or at least can exchange telemetry), things will simply break.
  6. Many virtualization (and especially cloud) platforms do not support protocols or topologies that many connectivity and security virtual appliances require to function (such as multicast for load balancing).
  7. It's very difficult to mimic the in-line path requirements in virtual networking environments that would otherwise force traffic passing through the connectivity layers (layers 2 through 7) up through various policy-driven security layers (virtual appliances).
  8. There is no common methodology to express what security requirements the connectivity fabrics should ensure are available prior to allowing a VM to spool up, let alone move.
  9. Virtualization vendors who provide solutions for the enterprise have rich networking capabilities natively as well as with third-party connectivity partners, including VM and VMM introspection capabilities. As I wrote about here, mass-market Cloud providers such as Amazon Web Services or Rackspace Cloud have severely crippled networking.
  10. Virtualization and cloud vendors generally force many security-versus-performance tradeoffs when implementing introspection capabilities in their platforms: third-party code running in the kernel, scheduler prioritization issues, I/O limitations, etc.
  11. Much of the basic networking capability is being pushed lower into silicon (into the CPUs themselves), which puts virtual appliances even further removed from the guts that enable them.
  12. Physical appliances (in the enterprise) exist en masse.  Many of them provide highly scalable solutions for the specific functions that Allan refers to.  The need exists, given the limitations I describe above, to provide for integration/interaction between them, the VMM and any virtual appliances in order to offload certain functions as well as provide coverage between the physical and the logical.
What does this mean?  It means that ultimately, to ensure their own survival, virtualization and cloud providers will depend less upon virtual appliances and add more of the basic connectivity AND security capabilities into the VMMs themselves, as it's the only way to guarantee performance, scalability and resilience and satisfy the security requirements of customers. New generations of protocols, APIs and control planes will emerge to provide for this capability, but this will drive the same old integration battles we're supposed to be absolved from with virtualization and Cloud.

Connectivity and security vendors will offer virtual replicas of their physical appliances to gain a foothold in virtualized/cloud environments, intercepting traffic (think basic traps/ACLs) and then interacting with higher-performing physical appliance security service overlays or embedded line cards in service chassis.  This is especially true in enterprises, but it poses many challenges in software-only, mass-market cloud environments, where what you'll continue to get is simply basic connectivity and security with limited networking functionality.  This implies more and more security will be pushed into the guest and application logic layers to deal with this disconnect.

This is exactly where we are today with Cloud providers like Amazon Web Services: basic ingress-only filtering with a very simplistic, limited and abstracted set of both connectivity and security capabilities.  See "Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye."  Will they add more functionality?  Perhaps. The question is whether they can afford to, in order to limit the impact that connectivity and security variability/instability can bring to an environment.
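To see how limited "basic ingress-only filtering" really is, consider this toy allow-list, loosely in the spirit of the EC2 security-group model of the time. This is an illustration of the concept, not Amazon's implementation or API; the class and method names are invented for the sketch.

```python
# Toy model of ingress-only filtering: an allow-list consulted only for
# inbound connections. This is NOT AWS's API; it just illustrates why
# egress controls and layer-7 policy are simply out of scope in this model.

class IngressFilter:
    def __init__(self):
        self.rules = []  # (protocol, port) pairs permitted inbound

    def allow(self, protocol, port):
        self.rules.append((protocol, port))

    def permits_inbound(self, protocol, port):
        return (protocol, port) in self.rules

    def permits_outbound(self, protocol, port):
        # No egress rules exist in this model: everything outbound is
        # allowed, which is exactly the limited capability lamented above.
        return True

fw = IngressFilter()
fw.allow("tcp", 443)
print(fw.permits_inbound("tcp", 443))    # True
print(fw.permits_inbound("tcp", 22))     # False
print(fw.permits_outbound("tcp", 6667))  # True: no way to restrict egress
```

Everything beyond match-on-protocol-and-port (stateful inspection, egress policy, in-line IDS/IPS, layer-7 awareness) has to be bolted on inside the guest, which is the disconnect the preceding paragraphs describe.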

That said, if you are willing and able, it's certainly achievable to construct a completely software-based networking environment. But these environments require a complete approach and stack re-write, with an operational expertise that will be hard to support for those who have spent the last 20 years working in a different paradigm, and that's a huge piece of this problem.

The connectivity layer — however integrated into the virtualized and cloud environments they seem — continues to limit how and what the security layers can do and will for some time, thus limiting the uptake of virtual network and security appliances.

Situation normal.

/Hoff


Hacking Exposed: Virtualization & Cloud Computing…Feedback Please

January 30th, 2010 26 comments

Craig Balding, Rich Mogull and I are working on a book due out later this year.

It’s the latest in the McGraw-Hill “Hacking Exposed” series.  We’re focusing on virtualization and cloud computing security.

We have a very interesting set of topics to discuss but we’d like to crowd/cloud-source ideas from all of you.

The table of contents reads like this:

Part I: Virtualization & Cloud Computing:  An Overview
Case Study: Expand the Attack Surface: Enterprise Virtualization & Cloud Adoption
Chapter 1: Virtualization Defined
Chapter 2: Cloud Computing Defined

Part II: Smash the Virtualized Stack
Case Study: Own the Virtualized Enterprise
Chapter 3: Subvert the CPU & Chipsets
Chapter 4: Harass the Host, Hypervisor, Virtual Networking & Storage
Chapter 5: Victimize the Virtual Machine
Chapter 6: Conquer the Control Plane & APIs

Part III: Compromise the Cloud
Case Study: Own the Cloud for Fun and Profit
Chapter 7: Undermine the Infrastructure
Chapter 8: Manipulate the Metastructure
Chapter 9: Assault the Infostructure

Part IV: Appendices

We’ll have a book-specific site up shortly, but if you’d like to see certain things covered (technology, operational, organizational, etc.) please let us know in the comments below.

Also, we’d like to solicit a few critical folks to provide feedback on the first couple of chapters. Email me/comment if interested.

Thanks!

/Hoff, Craig and Rich.


MashSSL – An Excellent Idea You’ve Probably Never Heard Of…

January 30th, 2010 No comments

I've been meaning to write about MashSSL for a while, as it occurs to me that this is a particularly elegant solution to some very real challenges we have today.  Trusting the browser, the operator of said browser, or a web service when using multi-party web applications is a fatal flaw.

We’re struggling with how to deal with authentication in distributed web and cloud applications. MashSSL seems as though it’s a candidate for the toolbox of solutions:

MashSSL allows web applications to mutually authenticate and establish a secure channel without having to trust the user or the browser. MashSSL is a Layer 7 security protocol running within HTTP in a RESTful fashion. It uses an innovation called “friend in the middle” to turn the proven SSL protocol into a multi-party protocol that inherits SSL’s security, efficiency and mature trust infrastructure

Make sure you check out the sections on “Why and How,” especially the “MashSSL Overview” section which explains how it works.

I should mention the code is also open source.

/Hoff

Cloud: Security Doesn’t Matter (Or, In Cloud, Nobody Can Hear You Scream)

January 25th, 2010 9 comments

In the Information Security community, many of us have long come to the conclusion that we are caught in what I call my “Security Hamster Sine Wave Of Pain.”  Those of us who have been doing this awhile recognize that InfoSec is a zero-sum game; it’s about staving off the inevitable and trying to ensure we can deal with the residual impact in the face of being “survivable” versus being “secure.”

While we can (and do) make incremental progress in certain areas, the collision of disruptive innovation, massive consumerization of technology, the slow churn of security vendor roadmaps, dissolving budgets, natural marketspace commoditization and the unfortunate velocity of attacker innovation yields the constant realization that we're not motivated or incentivized to do the right thing or manage risk.

Instead, we’re poked in the side and haunted by the four letter word of our industry: compliance.

Compliance is often dismissed as irrelevant in the consumer space and associated instead with government or large enterprise. But as privacy continues to erode and breaches make the news, the fact that we're putting more and more of our information — of all sorts — in the hands of others to manage is again beginning to stoke an upsurge in efforts to measure and manage visibility against a standardized baseline of general, common-sense, minimal efforts to guard against badness.

Ultimately, it doesn't matter how "secure" Cloud providers suggest they are.  It doesn't matter what breakthroughs in technology sprout up in the face of this new model of compute. The only measure that counts in the long run is how compliant you are.  That's what will determine the success of Cloud.  Don't believe me? Look at how the leading vendors in Cloud are responding today to their biggest (potential) customers — taking the "one size fits all" model of mass-market Cloud and beginning to chop it up and create one-offs in order to satisfy…compliance.

Why?  Because it’s easier to deal with the vagaries of trust and isolation and multi-tenant environments by eliminating the latter to increase the former. If an auditor/examiner doesn’t understand or cannot measure your compliance to those things he/she is tasked to evaluate you against, you’re sunk.

The only thing that will budge the needle on this issue is how agile those who craft the regulatory guidelines are, or how clearly you can demonstrate that your compensating controls mitigate the risk when the service provider's cannot. Given the nature and behavior of those involved in this space, and where we are with putting our eggs in a vaporous basket, I wouldn't hold my breath.  Movement in this area is glacial at best and in many cases out of touch with the realities of just how disruptive Cloud Computing is.  All it will take is one monumental cock-up due to a true Cloudtastrophe and the Cloud will hit the fan.

As I have oft suggested, the core issue we need to tackle in Cloud is trust, since the graceful surrender of such is at the heart of what Cloud requires.  Trust is comprised of Security, Control, Service Levels and Compliance.  It’s relatively easy to establish where we are today with the first three, but the last one is MIA.  We’re just *now* seeing movement in the form of SIGs to deal with virtualization.  Cloud?

When the best you have is a SAS-70, it’s time to weep.  Conversely, wishing for more regulation will simply extend the cycle.

What can you do?  Simple. Help educate your auditors and examiners. Read the Cloud Security Alliance’s guidelines. Participate in making the Automated Audit, Assertion, Assessment, and Assurance API (A6) a success so we can at least gain back some visibility and transparency which helps demonstrate compliance, since that’s how we’re measured.  Ultimately, if you’re able, focus on risk assessment in helping to advise your constituent business customers on how to migrate to Cloud Computing safely.

There are TONS of things one can do in order to make up for the shortcomings of Cloud security today.  The problem is, most of them erode the benefits of Cloud: agility, flexibility, cost savings, and dynamism.  We need to make the business aware of these tradeoffs as well as our auditors because we’re stuck.  We need the regulators and examiners to keep pace with technology — as painful as that might be in the short term — to guarantee our success in the long term.

Manage compliance; don't let it manage you, because a Cloud is a terrible thing to waste.

/Hoff
