Author Archive

AWS’ New Networking Capabilities – Sucking Less ;)

March 15th, 2011

I still haven’t had my coffee and this is far from being complete analysis, but it’s pretty darned exciting…

One of the biggest challenges facing public Infrastructure-as-a-Service cloud providers has been balancing the flexibility and control of their networking capabilities against those present in traditional data center environments.

I’m not talking about complex L2/L3 configurations or converged data/storage networking topologies; I’m speaking of basic addressing and edge functionality (routing, VPN, firewall, etc.). Furthermore, interconnecting public cloud compute/storage resources (in a private, non-Internet-facing role) to a corporate datacenter has been less than easy.

Today Jeff Barr asploded another of his famous blog announcements, which goes a long way toward solving not only these two issues but also clearly puts AWS on track to continue pressing VMware on the overlay networking capabilities present in its vCloud Director vShield Edge/App model.

The press release (and Jeff’s blog) were a little confusing because they really focus on VPC, but the reality is that this runs much, much deeper.

I rather liked Shlomo Swidler’s response to that same comment to me on Twitter 🙂

This announcement is fundamentally about the underlying networking capabilities of EC2:

Today we are releasing a set of features that expand the power and value of the Virtual Private Cloud. You can think of this new collection of features as virtual networking for Amazon EC2. While I would hate to be innocently accused of hyperbole, I do think that today’s release legitimately qualifies as massive, one that may very well change the way that you think about EC2 and how it can be put to use in your environment.

The features include:

  • A new VPC Wizard to streamline the setup process for a new VPC.
  • Full control of network topology including subnets and routing.
  • Access controls at the subnet and instance level, including rules for outbound traffic.
  • Internet access via an Internet Gateway.
  • Elastic IP Addresses for EC2 instances within a VPC.
  • Support for Network Address Translation (NAT).
  • Option to create a VPC that does not have a VPN connection.

You can now create a network topology in the AWS cloud that closely resembles the one in your physical data center including public, private, and DMZ subnets. Instead of dealing with cables, routers, and switches you can design and instantiate your network programmatically. You can use the AWS Management Console (including a slick new wizard), the command line tools, or the APIs. This means that you could store your entire network layout in abstract form, and then realize it on demand.
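As a trivial illustration of that last point, here’s what “network layout in abstract form” could look like. To be clear, the layout format and tier names below are mine, not AWS’s; actually realizing the subnets would be a matter of feeding the resulting CIDRs to the console, command line tools, or APIs:

```python
import ipaddress

# A hypothetical abstract network layout -- the schema and tier names
# are illustrative only, not any actual AWS construct.
LAYOUT = {
    "vpc_cidr": "10.0.0.0/16",
    "tiers": ["public", "private", "dmz"],
}

def carve_subnets(layout, new_prefix=24):
    """Deterministically carve one subnet per tier out of the VPC block."""
    vpc = ipaddress.ip_network(layout["vpc_cidr"])
    blocks = vpc.subnets(new_prefix=new_prefix)
    return {tier: str(next(blocks)) for tier in layout["tiers"]}

print(carve_subnets(LAYOUT))
# {'public': '10.0.0.0/24', 'private': '10.0.1.0/24', 'dmz': '10.0.2.0/24'}
```

The point being: once the layout is just data, you can version it, review it, and re-instantiate it on demand, which is exactly the shift being described above.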

That’s pretty bad-ass and goes a long way toward giving enterprises a not-so-gentle kick in the butt regarding getting over the lack of network provisioning flexibility. This will also shine when combined with the VMDK import capabilities, which are admittedly limited today in terms of preserving networking configuration. Check out Christian Reilly’s great post “AWS – A Wonka Surprise” regarding how the VMDK-import and overlay networking elements collide. This gets right to the heart of what we were discussing.

Granted, I have not dug deeply into the routing capabilities (support for dynamic protocols, multiple next-hop gateways, etc.) or how this maps (if at all) to VLAN configurations. Shlomo did comment that there are limitations around VPC today that are pretty significant: “AWS VPC gotcha: No RDS, no ELB, no Route 53 in a VPC,” “AWS VPC gotcha: multicast and broadcast still doesn’t work inside a VPC,” and “No Spot Instances, no Tiny Instances (t1.micro), and no Cluster Compute Instances (cc1.*).” Still, it’s an awesome first step toward answering the pleas I highlighted in my blog titled “Dear Public Cloud Providers: Please Make Your Networking Capabilities Suck Less. Kthxbye.”

Thank you, Santa. 🙂

On Twitter, Dan Glass’ assessment was concise, more circumspect and slightly less enthusiastic — though I’m not exactly sure I’d describe my reaction as bordering on fanboi:

…to which I responded that clearly there is room for improvement in L3+ and security.  I expect we’ll see some 😉

In the long term, regardless of how this was framed from an announcement perspective, AWS’ VPC as a standalone “offer” should just go away; it will simply become another networking configuration option.

While many of these capabilities are basic in nature, it just shows that AWS is paying attention to the fact that if it wants enterprise business, it’s going to have to start offering service capabilities that make the transition to its platforms more like what enterprises are used to.

Great first step.

Now, about security…while outbound filtering via ACLs is cool and all…call me.

/Hoff

P.S. As you’ll likely see emerging in the comments, there are other interesting solutions to this overlay networking/connectivity solution – CohesiveF/T and CloudSwitch come to mind…


Incomplete Thought: Cloud Capacity Clearinghouses & Marketplaces – A Security/Compliance/Privacy Minefield?

March 11th, 2011

With my focus on cloud security, I’m always fascinated when derivative business models arise that take the issues associated with “mainstream” cloud adoption and really bring issues of security, compliance and privacy into even sharper focus.

To wit, Enomaly recently launched SpotCloud – a Cloud Capacity Clearinghouse & Marketplace in which cloud providers can sell idle compute capacity and consumers may purchase said capacity based upon “…location, cost and quality.”

Got a VM-based workload?  Want to run it cheaply for a short period of time?

…Have any security/compliance/privacy requirements?

To me, “quality” means something different than simply availability…it means service levels, security, privacy, transparency and visibility.

Whilst you can select the geographic location where your VM will run, as part of offering an “opaque inventory,” the identity of the cloud provider is not disclosed.  This begs the question of how the suppliers are vetted and assessed for security, compliance and privacy.  According to the SpotCloud FAQ, the answer is only a vague “We fully vet all market participants.”

There are two very interesting question/answer pairings on the SpotCloud FAQ that relate to security and service availability:

How do I secure my SpotCloud VM?

User access to VM should be disabled for increased security. The VM package is typically configured to automatically boot, self configure itself and phone home without the need for direct OS access. VM examples available.

Are there any SLA’s, support or guarantees?

No, to keep the costs as low as possible the service is offered without any SLA, direct support or guarantees. We may offer support in the future. Although we do have a phone and are more than happy to talk to you…

:: shudder ::

For now, I would assume that this means that if your workloads are at all mission critical, sensitive, subject to compliance requirements or traffic in any sort of sensitive data, this sort of exchange option may not be for you. I don’t have data on the use cases for the workloads being run using SpotCloud, but perhaps we’ll see Enomaly make this information more available as time goes on.

I would further assume that the criteria for provider selection might be expanded to include certification, compliance and security capabilities — all the more reason for these providers to consider something like CloudAudit which would enable them to provide supporting materials related to their assertions. (*wink*)

To be clear, from a marketplace perspective, I think this is a really nifty idea — sort of the cloud-based SETI-for-cost version of the Mechanical Turk.  It takes the notion of “utility” and really makes one think of the options.  I remember thinking the same thing when Zimory launched their marketplace in 2009.

I think ultimately this further amplifies the message that we need to build survivable systems, write secure code and continue to place an emphasis on the security of information deployed using cloud services. Duh-ja vu.

This sort of use case also begs an interesting set of questions as to what these monolithic apps are intended to provide — surely they traffic in some sort of information, information that comes from somewhere?  The oft-touted massively scalable compute “front-end” overlay of public cloud oftentimes means that the scale-out architectures leveraged to deliver service connect back to something else…

You likely see where this is going…

At any rate, I think these marketplace offerings will, for the foreseeable future, serve a specific type of consumer trafficking in specific types of information/service — it’s yet another vertical service offering that cloud can satisfy.

What do you think?

/Hoff


App Stores: From Mobile Platforms To VMs – Ripe For Abuse

March 2nd, 2011

This CNN article titled “Google pulls 21 apps in Android malware scare” describes an alarming trend in which malicious code is embedded in applications which are made available for download and use on mobile platforms:

Google has just pulled 21 popular free apps from the Android Market. According to the company, the apps are malware aimed at getting root access to the user’s device, gathering a wide range of available data, and downloading more code to it without the user’s knowledge.

Although Google has swiftly removed the apps after being notified (by the ever-vigilant “Android Police” bloggers), the apps in question have already been downloaded by at least 50,000 Android users.

The apps are particularly insidious because they look just like knockoff versions of already popular apps. For example, there’s an app called simply “Chess.” The user would download what he’d assume to be a chess game, only to be presented with a very different sort of app.

Wow, 50,000 downloads.  Most of those folks are likely blissfully unaware they are owned.

In my Cloudifornication presentation, I highlighted that the same potential for abuse exists for “virtual appliances” which can be uploaded for public consumption to app stores and VM repositories such as those from VMware and Amazon Web Services:

The feasibility of this vector was deftly demonstrated shortly afterward by the guys at SensePost (Clobbering the Cloud, Blackhat), who uploaded a non-malicious “phone home” VM to AWS that was promptly downloaded and launched…

This is going to be a big problem in the mobile space and potentially just as impactful in cloud/virtual datacenters, as people routinely download and put into production virtual machines/virtual appliances whose provenance and integrity are questionable.  Who’s going to police these stores?

(update: I loved Christian Reilly’s comment on Twitter regarding this: “Using a public AMI is the equivalent of sharing a syringe”)

/Hoff


Video Of My CSA Presentation: “Commode Computing: Relevant Advances In Toiletry & I.T. – From Squat Pots to Cloud Bots – Waste Management Through Security Automation”

February 19th, 2011

This is probably my favorite presentation I have ever given.  It was really fun.  I got so much positive feedback on what amounts to a load of crap. 😉

This video is from the Cloud Security Alliance Summit at the 2011 RSA Security Conference in San Francisco.  I followed Marc Benioff from Salesforce and Vivek Kundra, CIO of the United States.

Here is a PDF of the slides if you are interested.

Part 1:

Part 2:


My Warm-Up Acts at the RSA/Cloud Security Alliance Summit Are Interesting…

February 8th, 2011

Besides a panel or two and another circus-act talk with Rich Mogull, I’m thrilled to be invited to present again at the Cloud Security Alliance Summit at RSA this year.

One of my previous keynotes at a CSA event was well received: Cloudersize – A cardio, strength & conditioning program for a firmer, more toned *aaS

Normally when negotiating to perform at such a venue, I have my people send my diva list over to the conference organizers.  You know, the normal stuff: only red M&M’s, Tupac walkout music, fuzzy blue cashmere slippers and Hoffaccinos on tap in the green room.

This year, understanding we are all under pressure given the tough economic climate, I relaxed my requirements and instead took a deal for a couple of ace warm-up speakers to goose the crowd prior to my arrival.

Here’s who Jim managed to scrape up:

9:00AM – 9:40AM // Keynote: “Cloud 2: Proven, Trusted and Secure”
Speaker: Marc Benioff, CEO, Salesforce.com

9:40AM – 10:10AM // Keynote: Vivek Kundra, CIO, White House

10:10AM – 10:30AM // Presentation: “Commode Computing: Relevant Advances In Toiletry – From Squat Pots to Cloud Bots – Waste Management Through Security Automation”
Presenting: Christofer Hoff, Director, Cloud & Virtualization Solutions, Cisco Systems

I guess I can’t complain 😉

See you there. Bring rose petals and Evian as token gifts to my awesomeness, won’t you?

/Hoff


CloudPassage & Why Guest-Based Footprints Matter Even More For Cloud Security

February 1st, 2011

Every day for the last week or so since their launch, I’ve been asked left and right whether I’d spoken to CloudPassage and what my opinion was of their offering.  In full disclosure, I spoke with them when they were in stealth almost a year ago and offered some guidance, and again the day before their launch last week.

Disappointing as it may be to some, this post isn’t really about my opinion of CloudPassage directly; it is, however, the reaffirmation of the deployment & delivery models for the security solution that CloudPassage has employed.  I’ll let you connect the dots…

Specifically, in public IaaS clouds where homogeneity of packaging, standardization of images and uniformity of configuration enable scale, security has lagged.  This is mostly due to the fact that, for a variety of reasons, security itself does not scale (well).

In an environment where the underlying platform cannot be counted upon to provide “hooks” to integrate security capabilities in at the “network” level, all that’s left is what lies inside the VM packaging:

  1. Harden and protect the operating system [and thus the stuff atop it],
  2. Write secure applications and
  3. Enforce strict, policy-driven information-centric security.

My last presentation, “Cloudinomicon: Idempotent Infrastructure, Building Survivable Systems and Bringing Sexy Back to Information Centricity” addressed these very points. [This one is a version I delivered at the University of Michigan Security Summit]

If we focus on the first item in that list, you’ll notice that generally to effect policy in the guest, you must have a footprint on said guest — however thin — to provide the hooks that are needed to either directly effect policy or redirect back to some engine that offloads this functionality.  There’s a bit of marketing fluff associated with using the word “agentless” in many applications of this methodology today, but at some point, the endpoint needs some sort of “agent” to play.*
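To make the thin-footprint idea concrete, here’s a deliberately simplified sketch of what such a guest-resident agent might do: take an abstract zoning policy (as a cloud-hosted back-end might push it down) and render it into host firewall rules. The policy schema and rule shapes here are purely hypothetical and not any vendor’s actual format or mechanism:

```python
# Hypothetical guest-resident policy agent. The policy schema below is
# invented for illustration -- it is not any vendor's actual format.
POLICY = [
    {"port": 443,  "source": "0.0.0.0/0",   "action": "ACCEPT"},  # web tier
    {"port": 3306, "source": "10.0.1.0/24", "action": "ACCEPT"},  # db, app subnet only
]

def render_rules(policy, default_drop=True):
    """Render the abstract policy into iptables command strings.

    A real agent would execute these (or talk to netfilter directly);
    producing the strings keeps the logic inspectable and testable.
    """
    rules = [
        f"iptables -A INPUT -p tcp --dport {r['port']} "
        f"-s {r['source']} -j {r['action']}"
        for r in policy
    ]
    if default_drop:
        rules.append("iptables -P INPUT DROP")  # default-deny posture
    return rules

for rule in render_rules(POLICY):
    print(rule)
```

The interesting part isn’t the few lines of code; it’s everything around it — distributing policy at scale, attesting the agent’s integrity, and converging this with physical-world policy — which is precisely where the challenges described below come in.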

So that’s where we are today.  The abstraction offered by virtualized public IaaS cloud platforms is pushing us back to the guest-centric-based models of yesteryear.

This will bring challenges with scale, management, efficacy, policy convergence between physical and virtual, and the overall API-driven telemetry demanded by true cloud solutions.

You can read more about this in some of my other posts on the topic:

Finally, since I used them for eyeballs, please do take a look at CloudPassage — their first (free) offerings are based upon leveraging small-footprint Linux agents and a cloud-based SaaS “grid” to provide vulnerability management and firewall/zoning in public cloud environments.

/Hoff

* There are exceptions to this rule depending upon *what* you’re trying to do, such as anti-malware offload via a hypervisor API, but this is not generally available to date in public cloud.  This will, I hope, one day soon change.


George Carlin, Lenny Bruce & The Unspeakable Seven Dirty Words of Cloud Security

January 26th, 2011

I have an upcoming cloud security presentation in which I map George Carlin’s “Seven Dirty Words” to cloud security challenges.  This shall accompany my presentation at the Cloud Security Alliance Summit at the RSA Conference, titled “Commode Computing: Relevant Advances In Toiletry – From Squat Pots to Cloud Bots – Waste Management Through Security Automation.”

I’ll leave it as an exercise for the reader to relate my 7 dirty words to George’s originals:

Scalability
Portability
Fungibility
Compliance
Cost
Manageability
Trust

Of course I could have modeled the talk after Lenny Bruce’s original nine dirty words that spawned George’s, but seven of nine appealed to the geek in me.

/Hoff

P.S. George looks remarkably like Vint Cerf in that picture above…uncanny.


Past Life Regressions & Why Security Is a Petunia (Or a Whale) Depending Upon Where You Stand

January 26th, 2011

In Douglas Adams’ epic “The Hitchhiker’s Guide to the Galaxy,” we read about an organism experiencing a bit of an identity crisis at 30,000 feet:

It is important to note that suddenly, and against all probability, a Sperm Whale had been called into existence, several miles above the surface of an alien planet and since this is not a naturally tenable position for a whale, this innocent creature had very little time to come to terms with its identity. This is what it thought, as it fell:

The Whale: Ahhh! Woooh! What’s happening? Who am I? Why am I here? What’s my purpose in life? What do I mean by who am I? Okay okay, calm down calm down get a grip now. Ooh, this is an interesting sensation. What is it? Its a sort of tingling in my… well I suppose I better start finding names for things. Lets call it a… tail! Yeah! Tail! And hey, what’s this roaring sound, whooshing past what I’m suddenly gonna call my head? Wind! Is that a good name? It’ll do. Yeah, this is really exciting. I’m dizzy with anticipation! Or is it the wind? There’s an awful lot of that now isn’t it? And what’s this thing coming toward me very fast? So big and flat and round, it needs a big wide sounding name like ‘Ow’, ‘Ownge’, ‘Round’, ‘Ground’! That’s it! Ground! Ha! I wonder if it’ll be friends with me? Hello Ground!
[dies]

Curiously the only thing that went through the mind of the bowl of petunias, as it fell, was, ‘Oh no, not again.’ Many people have speculated that if we knew exactly *why* the bowl of petunias had thought that we would know a lot more about the nature of the universe than we do now.

“Security” is facing a similar problem.

To that end, and without meaning to, Gunnar Peterson and Lenny Zeltser* wrote about this whale of a problem in two thought-provoking blogs describing what they portray as the sorry state of security today; specifically, the inappropriate mission focus and misallocation of investment (Gunnar) and the need to remedy the skills gap and broaden the “information security toolbox” (Lenny) in the overly infrastructure-centric model used today.

Gunnar followed up with another post titled: “Is infosec busy being born or busy dying?”  Fitting.

Both gents suggest that we need to re-evaluate what, why and how we do what we do and where we invest by engaging in a more elevated service delivery role with a focus on enablement, architecture and cost-efficiency based on models that align spend to a posture I can only say reflects the mantra of survivability (see: A Primer on Information Survivability: Changing Your Perspective On Information Security):

[Gunnar] The budget dollars in infosec are not based on protecting the assets the company needs to conduct business, they are not spent on where the threats and vulnerabilities lie, rather they are spent on infrastructure which happens to be the historical background and hobby interest of the majority of technical people in the industry.

[Lenny] When the only tool you have is a hammer, it’s tempting to treat everything as if it were a nail, wrote Abraham Maslow a few decades ago. Given this observation, it’s not surprising that most of today’s information security efforts seem to focus on networks and systems.

Hard to disagree.

It’s interesting that both Gunnar and Lenny refer to this condition as being a result of our “information security” efforts since, as defined, it would appear to me that their very point is that we don’t practice “information security.”  In fact, I’d say what they really mean is that we primarily practice “network security” and pitter-patter around the other elements of the “stack”:

This is a “confused discipline” indeed.  Fact is, we need infrastructure security. We need application security.  We need information security.  We need all of these elements addressed by a comprehensive architecture and portfolio management process driven by protecting the things that matter most at the points where the maximum benefit can be applied to manage risk for the lowest cost.

Yes.

That’s. Freaking. Hard.

This is exactly why we have the Security Hamster Sine Wave of Pain…we cyclically iterate between host, application, information, user, and network-centric solutions to problems that adapt at a pace that far exceeds our ability to adjust to them let alone align to true business impact:

Whales and Petunias…

The problem is that people like to put things in neat little boxes, which is why we have neat little boxes and the corresponding piles of cash and people distributed to each of them (however unfortunate the ratio).  Further, the industry that provides solutions across this stack is not incentivized to solve long-term problems, and innovative solutions brought to bear on emerging problems are often victims of poor timing.  People don’t buy solutions that solve problems that are 5 years out; they buy solutions that fix short-term problems, even if those problems are themselves predicated on 20-year-old issues.

Fixing stuff in infrastructure has been easy up until now; buy another box.

Infrastructure has been pretty much static, and thus the apps and information have been buoyed about, tethered to the anchor of a static infrastructure.  Now that the infrastructure itself is becoming more dynamic, fixing problems upstack in dynamic applications and information — woohoo, that’s HARD, especially when we’re not organized to do any one of those things well, let alone all of them at once!

Frankly, the issue is one where the tactical impacts of the blending and convergence of new threats, vulnerabilities and socio-economic, political, cultural and technology curves chip away at our ability to intelligently respond without an overall re-engineering of what we do.  We’d have to completely blow up the role of “security” as we know it to deliver what Gunnar and Lenny suggest.

This isn’t a bad idea, it’s just profoundly difficult.  I ought to know. I’ve done it.  It took years to even get to the point where we could chip away at the PEOPLE who were clinging on to what they know as the truth…it’s as much generational and cultural as it is technical.

The issue I have is that it’s important to also realize that we’ve been here before and we’ll be here again and more importantly WHY.  I don’t think it’s a vast conspiracy theory but rather an unfortunate side-effect of our past lives.

I don’t disagree with the need to improve and/or reinvent ourselves as an industry, whether from the perspective of the suppliers of solutions, the operators or the architects.  We do it every 5 years anyway, with every “next big thing” that hits.

To round this back to the present, new “phase shifts” like Cloud computing are great forcing functions that completely change our perspective on where, how, who, and why we practice “security.”  I’d suggest that we leverage this positively and march to that drum beat Lenny and Gunnar are banging away on, but without the notion that we’re all somehow guilty of doing the wrong things.

BTW, has anyone seen my Improbability Drive?

/Hoff


Revisiting Virtualization & Cloud Stack Security – Back to the Future (Baked In Or Bolted On?)

January 17th, 2011

[Like a good w[h]ine, this post goes especially well with a couple of other courses such as Hack The Stack Or Go On a Bender With a Vendor?, Incomplete Thought: Why Security Doesn’t Scale…Yet, What’s The Problem With Cloud Security? There’s Too Much Of It…, Incomplete Thought: The Other Side Of Cloud – Where The (Wild) Infrastructure Things Are… and Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where…]

There are generally three schools of thought when it comes to the notion of how much security should fall to the provider of the virtualization or cloud stack versus that of the consumer of their services or a set of third parties:

  1. The virtualization/cloud stack provider should provide a rich tapestry of robust security capabilities “baked in” to the platform itself, or
  2. The virtualization/cloud stack provider should provide security-enabling hooks to enable an ecosystem of security vendors to provide the bulk of security (beyond isolation) to be “bolted on,” or
  3. The virtualization/cloud stack provider should maximize the security of the underlying virtualization/cloud platform and focus on API security, isolation and availability of service only, while pushing the bulk of security up into the higher-level programmatic/application layers.

So where are we today?  How much security does the stack itself need to provide?  The answer, however polarized, lies somewhere in the murkiness dictated by the delivery models, deployment models, who owns what part of the real estate, and the use cases of both the virtualization/cloud stack provider and, ultimately, the consumer.

I’ve had a really interesting series of debates with the likes of Simon Crosby (of Xen/Citrix fame) on this topic, and we even had a great debate at RSA with Steve Herrod from VMware.  These two “infrastructure” companies and their solutions typify the diametrically opposed first two approaches to answering this question, while cloud providers who own their respective custom-rolled “stacks” at either end of the IaaS and SaaS spectrum, such as Amazon Web Services and Salesforce, bring up the third.

As with anything, this is about the tenuous balance of “security,” compliance, cost, core competence and maturity of solutions coupled with the sensitivity of the information that requires protection and the risk associated with the lopsided imbalance that occurs in the event of loss.

There’s no single best answer, which explains why we have three very different approaches to what many, unfortunately, view as the same problem.

Today’s “baked in” security capabilities aren’t altogether that mature or differentiated.  The hooks and APIs that allow for diversity and “defense in depth” provide new and interesting ways to instantiate security, but also add complexity, driving us back to an integration play.  The third approach is looked upon as proprietary and limiting in terms of visibility and transparency, and doesn’t solve problems such as application and information security any more than the other two do.

Will security get “better” as we move forward with virtualization and cloud computing?  Certainly.  Perhaps because of it, perhaps in spite of it.

One thing’s for sure, it’s going to be messy, despite what the marketing says.


Why I Don’t Speak At Security B-Sides…

January 13th, 2011

Security B-Sides has long since emerged from the “Indie” shadow it was born from and now represents and produces some of the most amazing content and speakers in the security industry, mainstream and otherwise.

So why don’t I speak at any of them?

Two reasons.

1) Many of the B-Sides get spun up quickly and without much notice.  Those that I might be able to travel to/attend take place alongside the bigger conferences which I am required to attend and/or have committed to speak at far in advance, and…

2) I speak at 30-40 conferences a year. People don’t need to hear me prattle on about the same things I’ve spoken about elsewhere.  Further, many of the folks who respond with awesome CFP submissions to B-Sides don’t (for a number of reasons) speak at the larger conferences…so why should I take up space when others should be given this amazing opportunity?

So there you have it.

Support B-Sides.  One day I’ll get to one live. Until then, I’ll watch the live streams.

/Hoff