On Stacked Turtles & the AWS Outage…
The best summary I could come up with:
There are lots of examples one might use to illustrate why operationalizing security — both from the human and technology perspectives — doesn’t scale.
I’ve painted numerous pictures highlighting the cyclical nature of technology transitions, the supply/demand curve related to threats, vulnerabilities, technology and compensating controls, and even relevant anecdotes involving the intersection of Moore’s and Metcalfe’s laws. This was a central theme of my Cloudinomicon presentation: “Idempotent Infrastructure, Building Survivable Systems and Bringing Sexy Back to Information Centricity.”
Here are some other examples of things I’ve written about in this realm.
Batting around how public “commodity” cloud solutions force us to re-evaluate how, where, why and who “does” security was an interesting journey. Ultimately, it comes down to architecture and poking at the sanctity of models hinged on an operational premise that may or may not be as relevant as it used to be.
However, I think the most poignant and yet potentially obvious answer to the “why doesn’t security scale?” question is that security products, by design, don’t scale: they simply have not been built to allow for automation across almost any aspect of their architecture.
Automation and the interfaces (read: APIs) by which security products ought to be provisioned, orchestrated, and deployed are simply lacking in most security products.
Yes, there exist security products that are distributed, but they are still managed, provisioned and deployed manually — generally using a hub-and-spoke management model that doesn’t lend itself to automated “anything” that does not otherwise rely upon bubble-gum and baling-wire scripting…
Sure, we’ve had things like SNMP as a “standard interface” for “management” for a long while 😉 We’ve had common ways of describing threats and vulnerabilities. Recently, XML-based APIs have emerged as a function of the latest generation of (mostly virtualized) firewall technologies, but most products still rely upon stand-alone GUIs, CLIs, element managers and a meat cloud of operators to push the go button (or reconfigure).
Really annoying.
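For contrast, here’s a minimal sketch of what API-driven security provisioning could look like. The endpoint, XML schema and function names below are invented purely for illustration; they don’t belong to any particular vendor’s product:

```python
# A minimal sketch of API-driven firewall provisioning -- the endpoint and
# XML payload are hypothetical, invented for illustration only.
import requests

FW_MGMT = "https://fw-mgmt.example.com/api/policy"  # hypothetical endpoint

def push_rule(src_cidr: str, dst_cidr: str, port: int, action: str = "allow") -> None:
    """Push a single firewall rule via a (hypothetical) XML management API."""
    rule_xml = (
        f"<rule><source>{src_cidr}</source>"
        f"<destination>{dst_cidr}</destination>"
        f"<port>{port}</port><action>{action}</action></rule>"
    )
    resp = requests.post(FW_MGMT, data=rule_xml,
                         headers={"Content-Type": "application/xml"},
                         timeout=30)
    resp.raise_for_status()  # fail loudly rather than silently misconfigure

# An orchestrator could provision hundreds of rules this way as part of a
# deployment pipeline -- no GUI, no CLI session, no meat cloud.
push_rule("10.0.1.0/24", "10.0.2.10/32", 443)
```

When every rule push is a callable function, provisioning becomes something a pipeline can do at scale instead of something an operator clicks through.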
Alongside the lack of standard API-based management planes, control planes are largely proprietary, and the output of correlated, event-driven telemetry at all layers of the stack is equally lacking. Of course, the application and security layers that run atop infrastructure are still largely discrete, making the problem more difficult.
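To illustrate what’s missing, consider that there’s no common way for, say, a hypervisor, a virtual firewall and an application to emit events that a single correlation engine can consume. A hypothetical sketch of such a layer-spanning schema (the field names are my own invention, not drawn from any standard):

```python
# A hypothetical common event schema spanning layers of the stack; the
# field names are invented for illustration, not drawn from any standard.
import json
import time

def emit_event(layer: str, source: str, event_type: str, detail: dict) -> str:
    """Normalize telemetry from any layer of the stack into one schema."""
    return json.dumps({
        "ts": time.time(),   # common timestamp for correlation
        "layer": layer,      # e.g., hypervisor, network, security, app
        "source": source,    # emitting component
        "type": event_type,  # normalized event type
        "detail": detail,    # layer-specific payload
    })

# Events from disparate layers can now be correlated on shared fields:
print(emit_event("hypervisor", "esx-host-01", "vm.migrate", {"vm": "web-01"}))
print(emit_event("security", "vfw-edge-02", "policy.violation", {"rule": 17}))
```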
The good news is that virtualization in the enterprise and the emergence of cultural and operational models predicated upon automation are starting to influence product roadmaps in ways that will positively affect the problem space described above, but we’ve got a long haul ahead as we make this transition.
Security vendors are starting to realize that they must retool many of their technology roadmaps to deal with the impact of dynamism and automation. Some, not all, are painfully discovering that simply creating a virtualized version of a physical appliance doesn’t make it a virtual security solution (or cloud security solution), in the same way that moving an application directly to the cloud doesn’t necessarily make it a “cloud application.”
In the same way that applications must often be re-written or specifically designed for cloud, we have to do the same for security. Arguably, there are things that can and should be preserved; basic underpinnings such as firewalls, for example, don’t need to change at their core, but their “packaging” does.
I’m privy to lots of the underlying mechanics of these activities — from open source to highly-proprietary — and I’m heartened by the fact that we’re beginning to make progress. We shouldn’t have to make a distinction between crafting and deploying security policies in physical or virtual environments. We shouldn’t be held hostage by the separation of application logic from the underlying platforms.
In the long term, I’m optimistic we won’t have to.
/Hoff
After my initial post yesterday (How To Wield the New vShield (Edge, App & Endpoint)) remarking on the general sessions I sat through on vShield, I thought I’d add some additional color given my hands-on experience in the labs today.
I will reserve more extensive technical analysis of vShield Edge and App (I didn’t get to play with Endpoint, as there was no lab for it) until I’ve spent some additional quality time with the products as they emerge.
Because people always desire for me to pop out of the cake quickly, here you go:
You should walk away from this post understanding that I think the approach holds promise within the scope of what VMware is trying to deliver. I think it can and will offer customers choice and flexibility in their security architecture and I think it addresses some serious segmentation, security and compliance gaps. It is a dramatically impactful set of solutions that is disruptive to the security and networking ecosystem. It should drive some interesting change. The proof, as they say, will be in the vPudding.
Let me first say that from VMware’s perspective I think vShield “2.0” (which logically represents many technologies and adjusted roadmaps both old and new) is clearly an important and integral part of both vSphere and vCloud Director’s future implementation strategies. It’s clear that VMware took a good, hard look at their security solution strategy and made some important and strategically-differentiated investments in this regard.
All things told, I think it’s a very good strategy for them and ultimately their customers. However, there will be some very interesting side-effects from these new features.
vShield Edge is as disruptive to the networking space (it provides L3+ networking, VPN, DHCP and NAT capabilities at the vDC edge) as it is to the security arena. When coupled with vShield App (and ultimately Endpoint), you can expect VMware’s aggressive retooling of its offerings here to hasten organic development, investment, and consolidation via M&A in the security space as other vendors seek to play and to complement the reabsorption of critical security capabilities back into the platform itself.
Now, all of the goodness that this renewed security strategy brings also has some warts. I’ll get into some of them as I gain more hands-on experience and get some questions answered, but here’s the CliffsNotes version with THREE really important points:
While I’ll cover items #1 and #2 in a follow-on post, here’s what VMware can do in the short term to remedy what I think is a huge issue going forward with item #3, usability and management.
Specifically, in the same way vCloud Director sits above vCenter and abstracts away much of the “unnecessary internals” to present a simplified service catalog of resources/services to a consumer, VMware needs to provide a dedicated security administrator’s “portal” or management plane which unites the creation, management and deployment of policy — from a SECURITY perspective — across the various disparate functions offered by vShield App, Edge and Endpoint. [ED: This looks as though it might be what vShield Manager will address. There were no labs covering this and no session I saw gave any details on this offering (UI or API).]
If you expect a security administrator to have the in-depth knowledge of how to administer the entire (complex) virtualization platform in order to manage security, this model will break and cause tremendous friction. A security administrator shouldn’t have access to vCenter directly or even the vCloud Director interfaces.
Since much of the capability for automation and configuration is made available via API, the notion of building a purposed security interface to do so shouldn’t be that big of a deal. Some people might say that VMware should focus on building API capabilities and allow the ecosystem to fill the void with solutions that take advantage of the interfaces. The problem is that this strategy has not produced solutions that have enjoyed traction today and it’s quite clear that VMware is interested in controlling their own destiny in terms of Edge and App while allowing the rest of the world to play with Endpoint.
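As a thought experiment, here’s a rough sketch of what such a purposed security interface might look like: a thin facade over the platform’s APIs that exposes only security operations. The endpoint paths, payloads and class names are my own illustrative assumptions, not the actual vShield Manager API:

```python
# A rough sketch of a "purposed security interface": a thin facade over a
# platform management API that exposes only security operations. All
# endpoint paths here are hypothetical, not the real vShield/vCenter API.
import requests

class SecurityAdminPortal:
    """Security-only facade; no vCenter or vCloud Director access needed."""

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {token}"

    def list_zones(self) -> list:
        # Read-only view of security zones; platform internals stay hidden.
        resp = self.session.get(f"{self.base_url}/security/zones")
        resp.raise_for_status()
        return resp.json()

    def apply_policy(self, zone_id: str, policy: dict) -> dict:
        # The only write operation the security administrator is granted.
        resp = self.session.post(
            f"{self.base_url}/security/zones/{zone_id}/policy", json=policy)
        resp.raise_for_status()
        return resp.json()
```

The point isn’t the specific calls; it’s that the security admin’s surface area is scoped to policy, not to the whole platform.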
I’m sure I’m missing things, and given the exposure I’ve had (without any in-depth briefings) there may be material issues attributable to the products’ early status, but I think it important to get these thoughts out of my head so I can chart their accuracy; it also gives me a good reference point to direct the product managers to when they want to scalp me for heresy.
There’s an enormous amount of detail that I want to/can get into. The last time I did that, it ended up in a 150-slide presentation I delivered at Black Hat…
Allow me to reiterate what I said in the beginning:
You should walk away from this post understanding that I think the approach holds promise within the scope of what VMware is trying to deliver. I think it can and will offer customers choice and flexibility in their security architecture and I think it addresses some serious segmentation, security and compliance gaps. It is a dramatically impactful set of solutions that is disruptive to the security and networking ecosystem. It should drive some interesting change. The proof, as they say, will be in the vPudding.
…and we all love vPudding.
/Hoff
Today at VMworld I spent my day in and out of sessions focused on the security of virtualized and cloud environments.
Many of these security sessions hinged on the release of VMware‘s new and improved suite of vShield product offerings, which can be summarized by a deceptively simple set of descriptions:
These solutions promise quite a well-rounded set of capabilities from a network and security perspective, but there are many interesting things to consider as one looks at the melding of the VMsafe API, vShield Zones and the nepotistic relationship enjoyed between the vCloud (née VMware vCloud Director) and vSphere platforms.
There is a series of capabilities emerging which seek to solve many of the constraints associated with the multi-tenancy and scale challenges of heavily virtualized enterprise and service provider virtual data center environments. However, many of the issues I raised in the Four Horsemen of the Virtualization Security Apocalypse still stand (performance, resilience/scale, management and cost) — especially since many of these features are delivered in the form of a virtual appliance.
Many of the issues I raise above (and asked about again today in session) don’t have satisfactory answers, which just shows you how immature we still are in our solution portfolios.
I’ll be diving deeper into each of the components as the week proceeds (and as more details around vCloud Director are made available), but one thing is certain — there’s a very interesting amplification of the existing tug-of-war between the security capabilities/functionality provided by the virtualization/cloud platform providers and the network/security ecosystem trying to find relevance and alignment with them.
There is going to be a wringing out of the last few smaller virtualization/Cloud security players who have not yet been consolidated via M&A or attrition (Altor Networks, Catbird, HyTrust, Reflex, etc.) as the three technologies above either further highlight an identified gap or demonstrate irrelevance in the face of capabilities “built in” (even if you have to pay for them) by VMware themselves.
Further, the uneasy tension between the classical physical networking vendors and the virtualization/cloud platform providers is going to come to a boil, especially as it relates to configuration management, compliance, and reporting, and as the differentiators between simple integration at the API level of control and data plane capabilities and things like virtual firewalling (and AV, and overlay VPNs, and policy zoning) begin to commoditize.
As I’ve mentioned before, it’s not where the network *is* in a virtualized environment, it’s where it *isn’t* — the definition of where the network starts and stops is getting more and more abstracted. This in turn drives the same conversation as it relates to security. How we’re going to define, provision, orchestrate, and govern these virtual data centers concerns me greatly as there are so many touchpoints.
Hopefully this becomes a little clearer as more and more of the infrastructure (virtual and physical) becomes manageable via API, such that ultimately you won’t care WHAT tool is used to manage networking/security, or even HOW, so long as policy can be defined consistently and implemented/instantiated via API across all levels transparently, regardless of what’s powering the moving parts.
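To sketch that end state: policy gets defined once, in one place, and is instantiated by pluggable drivers for whatever happens to power the moving parts. The policy schema and driver interface below are invented purely for illustration:

```python
# A sketch of "define policy once, instantiate via API everywhere."
# The policy schema and driver interface are invented for illustration.
from abc import ABC, abstractmethod

POLICY = {
    "name": "web-tier-ingress",
    "allow": [{"src": "any", "dst": "web", "port": 443}],
    "deny_all_else": True,
}

class EnforcementDriver(ABC):
    """Anything that can instantiate policy -- physical, virtual, or cloud."""
    @abstractmethod
    def apply(self, policy: dict) -> None: ...

class PhysicalFirewallDriver(EnforcementDriver):
    def apply(self, policy: dict) -> None:
        print(f"[physical fw] pushing '{policy['name']}' via vendor API")

class VirtualEdgeDriver(EnforcementDriver):
    def apply(self, policy: dict) -> None:
        print(f"[virtual edge] pushing '{policy['name']}' via platform API")

# The consumer of policy doesn't care WHAT enforces it or HOW:
for driver in (PhysicalFirewallDriver(), VirtualEdgeDriver()):
    driver.apply(POLICY)
```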
This goes back to the discussions (video) I had with Simon Crosby on who should own security in virtualized environments and why (blog).
Now, all this near-term confusion and mess isn’t necessarily a bad thing, because it’s going to force further investment, innovation and focus on problem solving that’s simply been stalled in the absence of technology readiness, customer appetite and compliance alignment.
More later this week. [Ed: You can find the follow-on to this post here: “VMware’s (New) vShield: The (Almost) Bottom Line”]
/Hoff
You’ll forgive my impertinence, but the last time I saw a similar claim of a PCI-compliant Cloud offering, it turned out rather anticlimactically for RackSpace/Mosso, so I just want to make sure I understand what is really being said. I may be mixing things up in asking my questions, so hopefully someone can shed some light.
This press release announces that:
“…Verizon’s On-Demand Cloud Computing Solution First to Achieve PCI Compliance” and the company’s cloud computing solution called Computing as a Service (CaaS) which is “…delivered from Verizon cloud centers in the U.S. and Europe, is the first cloud-based solution to successfully complete the Payment Card Industry Data Security Standard (PCI DSS) audit for storing, processing and transmitting credit card information.”
It’s unclear to me (at least) what’s considered in scope and what level/type of PCI certification we’re talking about here since it doesn’t appear that the underlying offering itself is merchant or transactional in nature, but rather Verizon is operating as a service provider that stores, processes, and transmits cardholder data on behalf of another entity.
Here’s what the article says about what Verizon undertook for DSS validation:
To become PCI DSS-validated, Verizon CaaS underwent a comprehensive third-party examination of its policies, procedures and technical systems, as well as an on-site assessment and systemwide vulnerability scan.
I’m interested in the underlying mechanicals of the CaaS offering. Specifically, it would appear that the platform — compute, network, and storage — is virtualized. What is unclear is whether the [physical] resources allocated to a customer are dedicated or shared (multi-tenant), regardless of virtualization.
According to this article in The Register (dated 2009), the infrastructure is composed like this:
The CaaS offering from Verizon takes x64 server from Hewlett-Packard and slaps VMware’s ESX Server hypervisor and Red Hat Enterprise Linux instances atop it, allowing customers to set up and manage virtualized RHEL partitions and their applications. Based on the customer portal screen shots, the CaaS service also supports Microsoft’s Windows Server 2003 operating system.
Some details emerge from the Verizon website that describes the environment more:
Every virtual farm comes securely bundled with a virtual load balancer, a virtual firewall, and defined network space. Once the farm is designed, built, and named – all in a matter of minutes through the CaaS Customer Management Portal – you can then choose whether you want to manage the servers in-house or have us manage them for you.
If the customer chooses to manage the “servers…in-house” (sic), are the customer’s network, staff and practices now in scope as part of Verizon’s CaaS validation? Where does the line start/stop?
I’m very interested in the virtual load balancer (Zeus ZXTM, perhaps?) and the virtual firewall (vShield? Altor? Reflex? A VMsafe-API-enabled virtual appliance?). What about other controls (preventative or detective, such as IDS, IPS, AV, etc.)?
The reason for my interest is how, if these resources are indeed shared, they are partitioned/configured and kept isolated especially in light of the fact that:
Customers have the flexibility to connect to their CaaS environment through our global IP backbone or by leveraging the Verizon Private IP network (our Layer 3 MPLS VPN) for secure communication with mission critical and back office systems.
It’s clear that Verizon has no dominion over what’s contained in the VMs atop the hypervisor, but what about the network to which these virtualized compute resources are connected?
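To make the isolation question concrete: in a shared environment, per-farm segmentation presumably reduces to something like a default-deny matrix between tenant network spaces. The sketch below is purely illustrative, an assumption about what such isolation could look like rather than a description of Verizon’s actual controls:

```python
# An illustrative sketch of inter-tenant isolation in a shared environment:
# explicit deny rules between every pair of tenant network spaces. This is
# an assumption for illustration, not Verizon's actual configuration.
TENANT_NETS = {
    "tenant-a": "10.1.0.0/16",
    "tenant-b": "10.2.0.0/16",
    "tenant-c": "10.3.0.0/16",
}

def inter_tenant_rules(tenants: dict) -> list:
    """Generate explicit deny rules between every pair of tenant networks."""
    return [
        {"src": src_net, "dst": dst_net, "action": "deny"}
        for src_name, src_net in tenants.items()
        for dst_name, dst_net in tenants.items()
        if src_name != dst_name
    ]

for rule in inter_tenant_rules(TENANT_NETS):
    print(rule)
```

Whether anything like this exists, and whether an assessor actually verified it per tenant, is exactly the scope question.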
So for me, this all comes down to scope. I’m trying to figure out what is actually included in this certification, what components in the stack were audited and how. It’s not clear I’m going to get answers, but I thought I’d ask anyway.
Oh, by the way, transparency and auditability would be swell for an environment such as this. How about CloudAudit? We even have a PCI DSS CompliancePack 😉
Question for my QSA peeps: Are service providers also required to adhere to sections like 6.6 (WAF/binary analysis) for their offerings even if they are not acting as a merchant?
/Hoff
In advance of publishing a more consolidated compilation of various recordings of my presentations, I thought I’d post this one.
This is from Microsoft’s BlueHat v9 and is from my “Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure” presentation.
The direct link is here in case you have scripting disabled.
The follow-on to this is my latest presentation – “Cloudinomicon: Idempotent Infrastructure, Building Survivable Systems, and Bringing Sexy Back To Information Centricity.”
Just to point out a fact many/most of you may not be aware of: Amazon Web Services hired (or rather transferred, since he was already an AWS insider) Stephen Schmidt as their CISO earlier this year. He has a team that goes along with him, too.
That’s a very, very good thing. I, for one, am very glad to see it. Combine that with folks like Steve Riley and I’m enthusiastic that AWS will make some big leaps when it comes to visibility, transparency and interaction with the security community.
See? Christmas wishes can come true! Thanks, Santa! 😉
You can find more about Mr. Schmidt by checking out his LinkedIn profile.
/Hoff
When I interact with folks and they bring up the notion of “Cloud Security,” I often find it quite useful to stop and ask them what they mean. I thought perhaps it might be useful to describe why.
In the same way that I differentiated “Virtualizing Security, Securing Virtualization and Security via Virtualization” in my Four Horsemen presentation, I ask people to consider these three models when discussing security and Cloud:
At any rate, I combine these with other models and diagrams I’ve constructed to make sense of Cloud deployment and use cases. This seems to make things clearer. I use it internally at work to help ensure we’re all speaking the same language.
/Hoff
That’s right. You can’t secure “The Cloud” and the real shocker is that you don’t need to.
You can and should, however, secure your assets and the elements within your control that are delivered by cloud services and cloud service providers — assuming, of course, that the delivery/deployment model makes interfaces available to do so and that you’ve appropriately assessed them against your requirements and appetite for risk.
That doesn’t mean it’s easy, cheap or agile, and lest we forget, just because you can “secure” your assets does not mean you’ll achieve “compliance” with those mandates against which you might be measured.
Even if you’re talking about making investments primarily in software-based solutions thanks to the abstraction of cloud (and/or virtualization), as well as adjusting processes and procedures due to operational impact, you can generally effect compensating controls (preventative and/or detective) that give you security on par with what you might deploy today in a non-Cloud-based offering.
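To make one flavor of that concrete, here’s a sketch of a detective compensating control: polling for drift from an approved baseline. The baseline format is invented, and fetch_current_rules() stands in for whatever API your provider actually exposes — treat both as assumptions:

```python
# A sketch of a detective compensating control: detect drift between an
# approved firewall baseline and what's actually deployed. The baseline
# format and the fetch function are assumptions for illustration.
APPROVED_BASELINE = {("0.0.0.0/0", 443), ("10.0.0.0/8", 22)}

def fetch_current_rules() -> set:
    # Placeholder: in practice, call your provider's API to list the
    # current rules and normalize them into (cidr, port) pairs.
    return {("0.0.0.0/0", 443), ("0.0.0.0/0", 22)}  # example drifted state

def detect_drift(baseline: set, current: set) -> set:
    """Return rules present in the environment but absent from the baseline."""
    return current - baseline

drift = detect_drift(APPROVED_BASELINE, fetch_current_rules())
if drift:
    print(f"ALERT: unapproved rules detected: {sorted(drift)}")
```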
Yes, it’s true. It’s absolutely possible to engineer solutions across most cloud services today that meet or exceed the security provided within the walled gardens of your enterprise today.
The realities of that statement come crashing down, however, when people confuse possibility with the capability to execute whilst not disrupting the business and not requiring wholesale re-architecture of applications, security, privacy, operations, compliance, economics, organization, culture and governance.
Not all of that is bad. In fact, most of it is long overdue.
I think what is surprising is how many people (or at least vendors) simply suggest or expect the “platform” or service providers to do all of this for them across an enterprise’s entire portfolio of services. In my estimation that will never happen, at least not if one expects anything more than commodity-based capabilities at a cheap price while simultaneously being “secure.”
Vendors conflate the various value propositions of cloud (agility, low cost, scalability, security) and suggest you can achieve all four simultaneously and in equal proportions. This is the fallacy of Cloud Computing. There are trade-offs to be found with every model and Cloud is no different.
If we’ve learned anything from enterprise modernization over the last twenty years, it’s that nothing comes for free — and that even when it appears to, there’s always a tax to pay on the back-end of the delivery cycle. Cloud computing is a series of compromises; it’s all about gracefully losing control over certain elements of the operational constructs of the computing experience. That’s not a bad thing, but it’s a painful process for many.
I really enjoy the forcing function of Cloud Computing; it makes us re-evaluate and sharpen our focus on providing service — at least it’s supposed to. I look forward to using Cloud Computing as a lever to continue to help motivate industry, providers and consumers to begin to fix the material defects that plague IT and move the ball forward.
This means not worrying about securing the cloud, but rather understanding what you should do to secure your assets regardless of where they call home.
/Hoff
I just stumbled upon this YouTube video (link here, embedded below), an interview I did right after my talk at Black Hat 2008 titled “The 4 Horsemen of the Virtualization Security Apocalypse (PDF).” [There’s a better narrative to the PDF that explains the 4 Horsemen here.]
I found it interesting because, while it was rather “new” and interesting back then, if you ‘s/virtualization/cloud’ it’s even more relevant today, especially from the perspective of heavily virtualized or cloud computing environments! Virtualization and the abstraction it brings to network architecture, design and security make for interesting challenges. Not much has changed in two years, sadly.
We need better networking, security and governance capabilities! 😉
Same as it ever was.
/Hoff