On Stacked Turtles & the AWS Outage…
The best summary I could come up with:
Chugging right along on the feature enhancement locomotive, following the extension of the networking capabilities of their Virtual Private Cloud (VPC) offering last week (see: AWS’ New Networking Capabilities – Sucking Less 😉), Amazon Web Services today announced the availability of dedicated (both on-demand and reserved) compute instances within a VPC:
Dedicated Instances are Amazon EC2 instances launched within your Amazon Virtual Private Cloud (Amazon VPC) that run on hardware dedicated to a single customer. Dedicated Instances let you take full advantage of the benefits of Amazon VPC and the AWS cloud – on-demand elastic provisioning, pay only for what you use, and a private, isolated virtual network, all while ensuring that your Amazon EC2 compute instances will be isolated at the hardware level.
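For the curious, here’s roughly what requesting that hardware isolation looks like programmatically. A minimal sketch using the modern boto3 SDK (which obviously post-dates this announcement); the AMI and subnet IDs are placeholders, and the key bit is the tenancy attribute:

```python
# Minimal sketch: launching a Dedicated Instance inside a VPC with boto3.
# The AMI ID and subnet ID below are placeholders, not real resources.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-00000000",              # placeholder AMI
    InstanceType="m1.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-00000000",          # places the instance inside your VPC
    Placement={"Tenancy": "dedicated"},  # hardware dedicated to one customer
)
print(response["Instances"][0]["InstanceId"])
```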
That’s interesting, isn’t it? I remember chuckling when AWS announced VPC back in 2009 and writing the post “Calling All Private Cloud Haters: Amazon Just Peed On Your Fire Hydrant…” in which I suggested what VPC really meant for the private cloud crowd.
That got some hackles up.
So this morning, people immediately started squawking on Twitter about how this looked remarkably like (or didn’t) private cloud or dedicated hosting. This is why, about two years ago, I generated a taxonomy that pointed out the gray area of “private cloud” — the notion of who manages it, who owns the infrastructure, where it’s located, and who it’s consumed by.
I did a lot of this work well before I used it in the original Cloud Security Alliance Guidance architecture chapter I wrote, but that experience refined what I meant a little more clearly. This version was produced PRIOR to the NIST guidance, which is why you don’t see mention of “community cloud”:
* Note: the benefits of elasticity don’t imply massive scale, which in many cases is not a relevant requirement for an enterprise. Also, I ultimately deprecated the “managed” designation because it was a variation on a theme, but you can tell that the distinction I was going for between private and hybrid is the notion of OR versus AND designations across the various criteria.
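To make that OR-versus-AND distinction concrete, here’s a toy sketch (my own illustration, not part of the original taxonomy) that treats each criterion as a boolean: “private” requires every criterion to point at the enterprise, while “hybrid” only needs some of them to:

```python
# Toy illustration of the OR vs. AND distinction in the taxonomy.
# Field names are paraphrases of the criteria, not canonical terms.
from dataclasses import dataclass

@dataclass
class Deployment:
    managed_by_enterprise: bool   # who manages it
    owned_by_enterprise: bool     # who owns the infrastructure
    located_on_premises: bool     # where it's located
    sole_consumer: bool           # who it's consumed by

def classify(d: Deployment) -> str:
    criteria = [d.managed_by_enterprise, d.owned_by_enterprise,
                d.located_on_premises, d.sole_consumer]
    if all(criteria):   # AND: every criterion points at the enterprise
        return "private"
    if any(criteria):   # OR: at least one does
        return "hybrid"
    return "public"

# An AWS Dedicated Instance: you're the sole consumer of the hardware,
# but Amazon manages, owns, and houses it.
print(classify(Deployment(False, False, False, True)))  # -> hybrid
```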
AWS’ dedicated VPC options now give you another ‘OR’ when thinking about who manages and who owns the infrastructure your workloads run on, and, more importantly, where it’s located. More to the point, the notion of a ‘virtual’ cloud becomes less and less important as the hybrid interconnectedness of resources starts to make more sense, regardless of whether you get there with overlay solutions like CloudSwitch, “integrated” solutions from vendors like VMware or Citrix, or from AWS itself. In the long term, the answer will probably be “D) all of the above.”
Providing dedicated compute atop a hypervisor for which you are the only tenant will be attractive to many enterprises who have trouble coming to terms with sharing memory/CPU resources with other customers. This dedicated functionality costs a pretty penny – $87,600 a year – and, as Simon Wardley pointed out, it has an interesting effect inasmuch as it puts a price tag on isolation.
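(If you’re wondering where that figure comes from: if memory serves, it’s the $10/hour dedicated per-region fee annualized: $10/hour × 8,760 hours/year = $87,600.)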
Here’s the interesting thing that goes to the title of this post:
Is this a capability that AWS really expects to be utilized as they further blur the lines between public, private and hybrid cloud models OR is it a defensive strategy hinged on the exorbitant costs to further push enterprises into shared compute and overlay security models?
In other words: is this a strategically defensive or an offensive move?
A single tenant atop a hypervisor atop dedicated hardware — that will go a long way toward addressing one concern: noisy (and nosy) neighbors.
Now, keep in mind that if an enterprise’s threat modeling and risk management frameworks are reasonably rational, they’ll realize that this is compute/memory isolation only. Clearly the network and storage infrastructure is still shared, but the “state of the art” of overlay encryption in today’s cloud (encrypted file systems and SSL/IPSec VPNs) will likely address those issues. Shared underlying cloud management/provisioning/orchestration is still an interesting area of concern.
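As a trivial example of what I mean by overlay encryption, here’s a sketch (using Python’s cryptography package as a stand-in for full-blown encrypted file systems or VPN overlays) of encrypting at the host so the shared storage layer only ever sees ciphertext:

```python
# Sketch: host-side overlay encryption so shared storage only ever sees
# ciphertext. Uses the PyPI "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: held in the enterprise's KMS/HSM,
f = Fernet(key)               # never stored alongside the data

plaintext = b"sensitive workload data"
ciphertext = f.encrypt(plaintext)

# Only ciphertext is written to the (shared) cloud storage layer.
with open("blob.enc", "wb") as fh:
    fh.write(ciphertext)

# Decryption happens back inside the trust boundary.
assert f.decrypt(ciphertext) == plaintext
```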
So this will be an interesting play for AWS. Whether they’re using this to take a hammer to the existing private cloud models or just to add another dimension to their service offering (logical either way), I think in many cases enterprises will pay this tax to satisfy compliance requirements by removing the compute multi-tenancy boogeyman.
/Hoff
This is a revisitation of a blog I wrote last year: Incomplete Thought: Cloud Security IS Host-Based…At The Moment
I use my “Security Hamster Sine Wave of Pain” to illustrate the cyclical nature of security investment and deployment models over time and how disruptive innovation and technology impacts the flip-flop across the horizon of choice.
To wit: most mass-market Public Cloud providers such as Amazon Web Services rely on highly-abstracted and limited exposure of networking capabilities. This means that most traditional network-based security solutions are impractical or non-deployable in these environments.
Network-based virtual appliances which expect generally to be deployed in-line with the assets they protect are at a disadvantage given their topological dependency.
So what we see are security solution providers simply re-marketing their network-based solutions as host-based solutions instead…or confusing things with Barney announcements.
Take a press release today from SourceFire:
Snort and Sourcefire Vulnerability Research Team(TM) (VRT) rules are now available through the Amazon Elastic Compute Cloud (Amazon EC2) in the form of an Amazon Machine Image (AMI), enabling customers to proactively monitor network activity for malicious behavior and provide automated responses.
Leveraging Snort installed on the AMI, customers of Amazon Web Services can further secure their most critical cloud-based applications with Sourcefire’s leading protection. Snort and Sourcefire(R) VRT rules are also listed in the Amazon Web Services Solution Partner Directory, so that users can easily ensure that their AMI includes the latest updates.
As far as I can tell, this means you can install a ‘virtual appliance’ of Snort/Sourcefire as a standalone AMI, but there’s no real description of how one might actually implement it in an environment that isn’t topologically friendly to this sort of network-based implementation constraint.*
Since you can’t easily “steer traffic” through an IPS in AWS’ model and can’t leverage promiscuous mode or taps, what does this packaging actually mean? Also, if one has a few hundred AMIs containing applications spread out across multiple availability zones/regions, how does a solution like this scale (from both a performance and a management perspective)?
I’ve spoken/written about this many times:
Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where… and
Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye
Ultimately, expect that Public Cloud will force the return to host-based HIDS/HIPS deployments — the return to agent-based security models. This poses just as many operational challenges as those I allude to above. We *must* have better ways of tying together network and host-based security solutions in these Public Cloud environments that make sense from an operational, cost, and security perspective.
/Hoff
* I “spoke” with Marty Roesch on the Twitter and he filled in the gaps associated with how this version of Snort works – there’s a host-based packet capture element with a “network” redirect to a stand-alone AMI:
@Beaker AWS->Snort implementation is IDS-only at the moment, uses software packet tap off customer app instance, not topology-dependent
and…
they install our soft-tap on their AMI and send the traffic to our AMI for inspection/detection/reporting.
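To make that concrete, the redirect model amounts to something like the sketch below: capture on the protected instance and ship the packets off to a separate sensor instance for inspection. This is my own illustration (using scapy; the sensor address and port are placeholders), not Sourcefire’s actual agent:

```python
# Illustrative "soft tap": capture packets on the app instance and forward
# them to a standalone sensor AMI for inspection. NOT Sourcefire's code --
# just a sketch of the redirect model. Requires scapy (pip install scapy)
# and root privileges to sniff.
import socket
from scapy.all import sniff

SENSOR_IP = "10.0.0.99"   # placeholder: the Snort sensor instance
SENSOR_PORT = 4789        # placeholder encapsulation port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def forward(pkt):
    # Encapsulate the raw captured frame in UDP and send it to the sensor.
    sock.sendto(bytes(pkt), (SENSOR_IP, SENSOR_PORT))

# Sniff everything on eth0; note that every captured packet traverses the
# network a second time on its way to the sensor, which is exactly why the
# performance question below matters.
sniff(iface="eth0", prn=forward, store=False)
```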
It will be interesting to see how performance nets out using this redirect model.
Allan Leinwand from GigaOm wrote a great article asking “Where are the network virtual appliances?” This was followed up by another excellent post by Rich Miller.
Allan sets up the discussion describing how we’ve typically plumbed disparate physical appliances into our network infrastructure to provide discrete network and security capabilities such as load balancers, VPNs, SSL termination, firewalls, etc. He then goes on to describe the stunted evolution of virtual appliances:
To be sure, some networking devices and appliances are now available in virtual form. Switches and routers have begun to move toward virtualization with VMware’s vSwitch, Cisco’s Nexus 1000v, the open source Open vSwitch and routers and firewalls running in various VMs from the company I helped found, Vyatta. For load balancers, Citrix has released a version of its Netscaler VPX software that runs on top of its virtual machine, XenServer; and Zeus Systems has an application traffic controller that can be deployed as a virtual appliance on Amazon EC2, Joyent and other public clouds.
Ultimately I think it prudent for discussion’s sake to separate routing, switching and load balancing (connectivity) from functions such as DLP, firewalls, and IDS/IPS (security), as lumping them together actually obscures the problem, which is that the latter is completely dependent upon the capabilities and functionality of the former. This is what Allan almost gets to in his lament about today’s virtual appliance ecosystem:
Yet the fundamental problem remains: Most networking appliances are still stuck in physical hardware — hardware that may or may not be deployed where the applications need them, which means those applications and their associated VMs can be left with major gaps in their infrastructure needs. Without a full-featured and stateful firewall to protect an application, it’s susceptible to various Internet attacks. A missing load balancer that operates at layers three through seven leaves a gap in the need to distribute load between multiple application servers. Meanwhile, the lack of an SSL accelerator to offload processing may lead to performance issues and without an IDS device present, malicious activities may occur. Without some (or all) of these networking appliances available in a virtual environment, a VM may find itself constrained, unable to take full advantage of the possible economic benefits.
I’ve written about this many, many times. In fact, almost three years ago I created a presentation called “The Four Horsemen of the Virtualization Security Apocalypse” which described in excruciating detail how network virtual appliances were a big ball of fail and would be for some time. I further suggested that many of the “best-of-breed” products would ultimately become “good enough” features in virtualization vendors’ hypervisor platforms.
Why? Because there are some very real problems with virtualization (and Cloud) as they relate to connectivity and security.
What does this mean? It means that ultimately, to ensure their own survival, virtualization and cloud providers will depend less upon virtual appliances and add more of the basic connectivity AND security capabilities into the VMMs themselves, as it’s the only way to guarantee performance, scalability and resilience and to satisfy the security requirements of customers. New generations of protocols, APIs and control planes will emerge to provide for this capability, but this will drive the same old integration battles we’re supposed to be absolved of with virtualization and Cloud.
Connectivity and security vendors will offer virtual replicas of their physical appliances to gain a foothold in virtualized/cloud environments, intercepting traffic (think basic traps/ACLs) and then interacting with higher-performing physical appliance security service overlays or embedded line cards in service chassis. This is especially true in enterprises, but it poses many challenges in software-only, mass-market cloud environments, where what you’ll continue to get is simply basic connectivity and security with limited networking functionality. This implies more and more security will be pushed into the guest and application logic layers to deal with this disconnect.
This is exactly where we are today with Cloud providers like Amazon Web Services: basic ingress-only filtering with a very simplistic, limited and abstracted set of both connectivity and security capability. See “Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye.” Will they add more functionality? Perhaps. The question is whether they can afford to in order to limit the impact that connectivity and security variability/instability can bring to an environment.
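To illustrate just how basic that filtering is, here’s roughly the entirety of the network security policy you can express against a vanilla EC2 instance: a security group with ingress rules. (A sketch using the modern boto3 SDK for illustration; the group name and parameters are made up.)

```python
# Sketch of AWS' basic ingress-only filtering: a security group permitting
# TCP/443 in and nothing else. At the time of writing, EC2 security groups
# filtered inbound traffic only. Group name and rule are illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

sg = ec2.create_security_group(
    GroupName="web-tier",
    Description="Allow HTTPS in; little other network policy is expressible",
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
# That's it: no inline inspection, no promiscuous mode, no traffic steering.
```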
That said, it’s certainly achievable, if you’re willing and able, to construct a completely software-based networking environment. But these environments require a complete re-write of the approach and the stack, along with an operational expertise that will be hard to come by for those who have spent the last 20 years working in a different paradigm, and that’s a huge piece of this problem.
The connectivity layer — however integrated into the virtualized and cloud environments they seem — continues to limit how and what the security layers can do and will for some time, thus limiting the uptake of virtual network and security appliances.
Situation normal.
/Hoff