Author Archive

QuickQuip: Don’t run your own data center if you’re a public IaaS < Sorta...

January 10th, 2012 7 comments

Patrick Baillie, the CEO of Swiss IaaS provider CloudSigma, wrote a very interesting blog post, published on GigaOm, titled “Don’t run your own data center if you’re a public IaaS.”

Baillie leads off by citing AWS’ recent outage as evidence of why running facilities (data centers) is a complexity best segregated from the services running atop them and outsourced to third parties in the business of such things:

Why public IaaS cloud providers should outsource their data centers

While there are some advantages for cloud providers operating data centers in-house, including greater control, capacity, power and security, the challenges, such as geographic expansion, connectivity, location, cost and lower-tier facilities can often outweigh the benefits. In response to many of these challenges, an increasing number of cloud providers are realizing the benefits of working with a third-party data center provider.

It’s a very interesting blog, sprinkled throughout with pros and cons of rolling your own versus outsourcing, but it falls down in being able to carry the burden of logic in some of the assertions.

Perhaps I misunderstood, but the article seemed to focus on single DC availability as though (per my friend @CSOAndy’s excellent summarization) “…he missed the obvious reason: you can arbitrage across data centers” and “…was focused on single DC availability. Arbitrage means you just move your workloads automagically.”

I’ll let you read the setup in its entirety, but check out the conclusion:

In reality, taking a look at public cloud providers, those with legacy businesses in hosting, including Rackspace and GoGrid, tend to run their own facilities, whereas pure-play cloud providers, like my company CloudSigma, tend to let others run the data centers and host the infrastructure.

The business of operating a data center versus operating a cloud is very different, and it’s crucial for such providers to focus on their core competency. If a provider attempts to do both, there will be sacrifices and financial choices with regards to connectivity, capacity, supply, etc. By focusing on the cloud and not the data center, public cloud IaaS providers don’t need to make tradeoffs between investing in the data center over the cloud, thereby ensuring the cloud is continually operating at peak performance with the best resources available.

The points above were punctuated as part of a discussion on Twitter where @georgereese commented “IaaS is all about economies of scale. I don’t think you win at #cloud by borrowing someone else’s”

Fascinating.  It’s times like these that I invoke the wisdom of the Intertubes and ask “WWWD” (or What Would Werner Do?)

If we weren’t artificially limited in this discussion to IaaS only, it would have been interesting to compare this to SaaS providers like Google or Salesforce or better yet folks like Zynga…or even add supporting examples like Heroku (who run atop AWS but are now a part of Salesforce o_O)

I found many of the points raised in the discussion intriguing and good food for thought but I think that if we’re talking about IaaS — and we leave out AWS which directly contradicts the model proposed — the logic breaks almost instantly…unless we change the title to “Don’t run your own data center if you’re a [small] public IaaS and need to compete with AWS.”

Interested in your views…

/Hoff


QuickQuip: “Networking Doesn’t Need a VMWare” < tl;dr

January 10th, 2012 1 comment

I admit I was enticed by the title of the blog and the introductory paragraph certainly reeled me in with the author creds:

This post was written with Andrew Lambeth.  Andrew has been virtualizing networking for long enough to have coined the term “vswitch”, and led the vDS distributed switching project at VMware

I can only assume that this is the same Andrew Lambeth who is currently employed at Nicira.  I had high expectations given the title, so I sat down, strapped in and prepared for a fire hose.

Boy did I get one…

27 paragraphs amounting to 1,601 words’ worth, which basically concluded that server virtualization is not the same thing as network virtualization, that stateful L2 & L3 network virtualization at scale is difficult, and that ultimately virtualizing the data plane is the easy part while the hard part of getting the mackerel out of the tin is virtualizing the control plane – statefully.*

*[These are clearly *my* words as the only thing fishy here was the conclusion…]

It seems the main point here, besides that mentioned above, is to stealthily and diligently distance Nicira as far from the description of “…could be to networking something like what VMWare was to computer servers” as possible.

This is interesting given that it’s exactly how they were described in a NY Times blog some months ago.  Indeed, this is the very description I could have sworn *used* to appear on Nicira’s own about page…it certainly shows up in Google searches of their partners o_O

In his last section titled “This is all interesting … but why do I care?,” I had selfishly hoped for that very answer.

Sadly, at the end of the day, Lambeth’s potentially excellent post appears more concerned about culling marketing terms than hammering home an important engineering nuance:

Perhaps the confusion is harmless, but it does seem to effect how the solution space is viewed, and that may be drawing the conversation away from what really is important, scale (lots of it) and distributed state consistency. Worrying about the datapath is worrying about a trivial component of an otherwise enormously challenging problem

This smacks of positioning against both OpenFlow (addressed here) as well as other network virtualization startups.

Bummer.


When A FAIL Is A WIN – How NIST Got Dissed As The Point Is Missed

January 2nd, 2012 4 comments

Randy Bias (@randybias) wrote a very interesting blog post titled Cloud Computing Came To a Head In 2011, sharing his year-end perspective on the emergence of Cloud Computing and, most interestingly, the “…gap between ‘enterprise clouds’ and ‘web-scale clouds’.”

Given that I very much agree with the dichotomy between “web-scale” and “enterprise” clouds and the very different sets of respective requirements and motivations, Randy’s post left me confused when he basically skewered the early works of NIST and their Definition of Cloud Computing:

This is why I think the NIST definition of cloud computing is such a huge FAIL. It’s focus is on the superficial aspects of ‘clouds’ without looking at the true underlying patterns of how large Internet businesses had to rethink the IT stack.  They essentially fall into the error of staying at my ‘Phase 2: VMs and VDCs’ (above).  No mention of CAP theorem, understanding the fallacies of distributed computing that lead to successful scale out architectures and strategies, the core socio-economics that are crucial to meeting certain capital and operational cost points, or really any acknowledgement of this very clear divide between clouds built using existing ‘enterprise computing’ techniques and those using emergent ‘cloud computing’ technologies and thinking.

Nope. No mention of CAP theorem or socio-economic drivers, yet strangely the context of what the document was designed to convey renders this rant moot.

Frankly, NIST’s early work did more to further the discussion of *WHAT* Cloud Computing meant than almost any person or group evangelizing Cloud Computing…especially to a world of users whose most difficult challenge is trying to understand the differences between traditional enterprise IT and Cloud Computing.

As I reacted to this point on Twitter, Simon Wardley (@swardley) commented in agreement with Randy’s assertions, but what I again found odd was the misplaced criterion of “success” vs. “FAIL” against which both Simon and Randy were measuring the NIST document:

Both Randy and Simon seem to be judging NIST’s efforts against their failure to extol the virtues, the “WHY” versus the “WHAT,” of Cloud, and by that measure NIST was basically doing a disservice by perpetuating aged concepts rooted in archaic enterprise practices rather than stretching boundaries, trailblazing and promoting the enlightened stance of “web-scale” cloud.

Well…

The thing is, as NIST stated in both the purpose and audience sections of their document, the “WHY” of Cloud was not the main intent (and frankly best left to those who make a living talking about it…)

From the NIST document preface:

1.2 Purpose and Scope

Cloud computing is an evolving paradigm. The NIST definition characterizes important aspects of cloud computing and is intended to serve as a means for broad comparisons of cloud services and deployment strategies, and to provide a baseline for discussion from what is cloud computing to how to best use cloud computing. The service and deployment models defined form a simple taxonomy that is not intended to prescribe or constrain any particular method of deployment, service delivery, or business operation.

1.3 Audience

The intended audience of this document is system planners, program managers, technologists, and others adopting cloud computing as consumers or providers of cloud services.

This was an early work (the initial draft was released in 2009, the final in late 2011,) and when it was written, many people — Randy, Simon and myself included — were still finding the best route, words and methodology to verbalize the “WHY.” And now it’s skewered as “mechanistic drivel.”*

At the time NIST was developing their document, I was working in parallel writing the “Architecture” chapter of the first edition of the Cloud Security Alliance’s Guidance for Cloud Computing.  I started out including my own definitional work in the document but later aligned to the NIST definitions because it was generally in line with mine and was well on the way to engendering a good deal of conversation around standard vocabulary.

This blog post summarized the work at the time (prior to the NIST inclusion).  I think you might find the author of the second comment on that post rather interesting, especially given how much of a FAIL this was all supposed to be… 🙂

It’s only now, with the clarity of hindsight, that it’s easier to take the “WHY” and utilize the “WHAT” (from NIST and others, especially practitioners like Randy) in a complementary manner, so we can talk less about the “what and why” and more about the “HOW.”

So while the NIST document wasn’t, isn’t and likely never will be “perfect,” and does not address every use case or even eloquently frame the “WHY,” it still serves as a very useful resource upon which many people can start a conversation regarding Cloud Computing.

It’s funny, really…the first tenet of “web-scale” cloud preached by AWS — the “Kings of Cloud” Randy speaks about constantly — is “PLAN FOR FAIL.”  So if the NIST document truly meets this seal of disapproval and is a FAIL, then I guess it’s a win ;p

Your opinion?

/Hoff

*N.B. I’m not suggesting that critiquing a document is somehow verboten or that NIST is somehow infallible or sacrosanct — far from it.  In fact, I’ve been quite critical and vocal in my feedback with regard to both this document and efforts like FedRAMP.  However, I offered that feedback during the documents’ construction, with the intent to make them better within the context for which they were designed, rather than from the rear view mirror.


Stuff I’ve Really Wanted To Blog About But Haven’t Had the Time…

December 13th, 2011 1 comment

This is more a post-it note to the Universe, simultaneously admitting blogging bankruptcy and declaring my intention to circle back to these reminders and write the damned things:

  1. @embrane launches out of stealth and @ioshints, @etherealmind and @bradhedlund all provide very interesting perspectives on the value proposition of Heleos – their network service virtualization solution. One thing emerges: SDN is the next vocabulary battleground after Cloud and Big Data.
  2. With the unintentional assistance of @swardley, who warned me about diffusion S-curves and evolution vs. revolution, I announce my plan to launch a new security presentation series around the juxtaposition and overlay of Metcalfe’s + HD Moore’s + (Gordon) Moore’s + (Geoffrey) Moore’s Laws. I call it the “Composite Calculus of Cloud Computing Causality.”  I’m supposed to add something about Everett Rogers.
  3. Paul Kedrosky posts an interesting graphic reflecting a Gartner/UBS study on cloud revenues through 2015. Interesting on many fronts: http://twitpic.com/7rx1y7
  4. Ah, FedRAMP. I’ve written about it here. @danphilpott does his usual bang-on job summarizing what it means — and what it doesn’t in “New FedRAMP Program: Not Half-Baked but Not Cooked Through”
  5. This Layer7-supplied @owasp presentation by Adam Vincent on Web Services Hacking and Hardening is a good basic introduction to the topic (PDF.)
  6. via @hrbrmstr, Dan Geer recommends “America the Vulnerable” from Joel Brenner on “the next great battleground; Digital Security.” Good read.
  7. I didn’t know this: @ioshints blogs about the (Cisco) Nexus 1000V and vMotion.  Sad summary: you cannot vMotion across two vDS (and thus two NX1KV domains/VSMs).
  8. The AWS patchocalypse causes galactic panic as they issue warnings and schedules associated with the need to reboot images due to an issue that required remediation.  Funny because of how much attention needing to patch a platform can bring when people set their expectations that it won’t happen (or need to.)  Can’t patch that… ;(
  9. @appirio tries to make me look like a schmuck in the guise of a “publicly nominated award for worst individual cloudwasher.” This little gimmick backfired when the Twitterverse exploited holes in the logic of the polling engine they selected and I got over 800,000 votes for first place over Larry Ellison and Steve Ballmer.  Vote for Pedro

More shortly as I compile my list.


Enter the Data Huggers…

November 27th, 2011 6 comments
[Image via Wikipedia: VM (operating system)]

A tale of yore:

And from whence the light of the heavens did shimmer upon them from the clouds above, yea the Server Huggers did at first protest.

Yet, as the virtual supplanted the corporeal — these shells of the buzzing contrivance —  they did wilt wearily as the compression of the rings unyieldingly did set upon them.

And strangely, once shod of their vestigial confines, they learned once again to rejoice and verily did they, of their own accord, assume the mantle of the make-believe server clan, and march forward lofting their standards high  to readily bear their mark of the VM.

But as the once earth-bound, now ethereal, ascended their offers to the heavens and their clouds, the vessels of their capital dissolved their curtailment once more.

The VMs became vapor, their service, the yield.  This tribe, once coupled to the shrines of their wanton packaging, became unencumbered once more.

Yet when these offers were set free upon the land, their services — compliant, supple and malleable — were embraced as a newfound purse, and the Clan of the App guarded them jealously once again, hoarding their prize from the undeserving.

Their reign, alas, did not last; their bastions eroded as the platforms that once gained them allegiance crumbled with the surfeit of consumption — their dispersion widespread and resources taxed.

Thus became the rule of the Data Clan whose merit lay in wait as the pretenders of the Cloud were forced to kneel in subservience.

For their data was big and they clutched it to their bosom as they once did their apps, VMs and carnal heat pumps…

It’s interesting to see how in such a short time we’ve seen the following progression:

Server Huggers > VM Huggers > App Huggers > Data Huggers

I wonder what’s next in the lineage of hugs?

/Hoff


802.bah – Beware the SiriSheep Attack!

November 21st, 2011 1 comment

On the heels of a French group reverse-engineering the Siri protocol by intercepting requests to the Internet-based server that Apple sends Siri requests to, Pete Lamonica, a first-time Ruby developer, has produced another innovative hack.

Lamonica has created an extensible proxy server that enables not only interception of Siri requests but also connectivity/interfacing to other devices, such as his Wi-Fi-enabled thermostat.

Check it out here:
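Since the plumbing is the fun part, here’s a rough sketch of the intercept-and-dispatch pattern at work. Caveat: Lamonica’s actual project is Ruby with its own plugin API, so everything below (names, structure, the thermostat client) is hypothetical Python, purely to illustrate how an extensible proxy might match a transcribed phrase and hand it to a device handler:

    # Hypothetical sketch only: this is NOT Lamonica's Ruby plugin API.
    # Names and structure are invented to illustrate the pattern:
    # intercept -> match transcribed text -> dispatch to a device handler.
    import re
    from typing import Optional

    class Plugin:
        """A plugin claims a transcribed phrase and returns a spoken reply."""
        pattern: re.Pattern

        def handle(self, match: re.Match) -> str:
            raise NotImplementedError

    class ThermostatPlugin(Plugin):
        """Maps 'set the thermostat to 68' onto a Wi-Fi thermostat's API."""
        pattern = re.compile(r"set (?:the )?thermostat to (\d+)", re.I)

        def __init__(self, thermostat_client):
            # thermostat_client stands in for whatever actually speaks to
            # the device (e.g., a small HTTP wrapper); also hypothetical.
            self.thermostat = thermostat_client

        def handle(self, match: re.Match) -> str:
            target = int(match.group(1))
            self.thermostat.set_temperature(target)
            return f"Thermostat set to {target} degrees."

    def dispatch(transcription: str, plugins: list[Plugin]) -> Optional[str]:
        """Called by the proxy once it has the recognized text of a request.

        Returns a reply to speak back to the phone, or None to pass the
        request through to Apple's servers untouched.
        """
        for plugin in plugins:
            match = plugin.pattern.search(transcription)
            if match:
                return plugin.handle(match)
        return None

The point being: once you own the proxy, “Siri-enabling” a device is just another plugin.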

What I think might be interesting is if, in the future, we see Siri modified/deployed the same way Microsoft’s Kinect is used today to control all sorts of originally-unintended devices and software.

Can you imagine if $evil_person deployed (via Proxy) the Siri version of the once famed Starbucks pwnership tool, FireSheep?  SiriSheep.  I call it…

Your house, your car, your stock trades, emails, etc…all Siri-enabled.  All Siri-pwned.

I have to go spend some time with the original code — it’s unclear to me if the commands to Siri are sent via SSL and if they are, how gracefully (or ungracefully) errors are thrown/dealt with should one MITM the connection.  It seems like it doesn’t give a crap…
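For what it’s worth, the crude first test is simple enough to sketch: present the client a self-signed certificate and see whether the TLS handshake even completes. If it does, the client isn’t validating the server certificate and a MITM is trivial. A minimal, standard-library Python sketch (assuming you’ve already minted selfsigned.crt/.key with openssl and steered the device’s traffic at your box; both left as exercises):

    # Crude cert-validation probe: if a client completes a TLS handshake
    # against a self-signed certificate, it isn't checking the chain and
    # is trivially MITM-able. Assumes selfsigned.crt/selfsigned.key exist
    # and the client's traffic has been redirected here (DNS/DHCP games).
    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="selfsigned.crt", keyfile="selfsigned.key")

    with socket.create_server(("0.0.0.0", 443)) as server:
        conn, addr = server.accept()
        try:
            tls_conn = context.wrap_socket(conn, server_side=True)
            # Handshake completed: the client swallowed a bogus cert.
            print(f"{addr[0]}: handshake completed -- no cert validation!")
            tls_conn.close()
        except ssl.SSLError as exc:
            # A well-behaved client aborts the handshake right here.
            print(f"{addr[0]}: handshake rejected ({exc}) -- cert validated")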

Thanks to @JDeLuccia, here’s the github link to the original code.

/Hoff


Cloud: The Turducken Of Computing? [Oh, and Happy Thanksgiving]

November 20th, 2011 3 comments

In these here United States Of ‘Merica, we’re closing in on our National Day of Gluttony, Thanksgiving.

As we approach #OccupyStuffedGullet, I again find myself quizzically introspective, however frightening the prospect of navel gazing combined with fatal doses of Tryptophan leaves me.

Reviewing the annual list of consumables for said celebration in conjunction with yet another interminable series of blog posts on “What Cloud *really* means for [insert target market here,]” I am compelled to suggest that what Cloud *really* means and *really* represents is…

A Turducken.

According to the Oracle of Mr. Wales, a Turducken is as follows:

A turducken is a dish consisting of a de-boned chicken stuffed into a de-boned duck, which itself is stuffed into a de-boned turkey. The word turducken is a portmanteau of turkey, duck, and chicken or hen.

The thoracic cavity of the chicken/game hen and the rest of the gaps are stuffed, sometimes with a highly seasoned breadcrumb mixture or sausage meat, although some versions have a different stuffing for each bird.

I’ll leave it to you to map the visual to the…Oh, who am I kidding?

What does this *really* have to do with Cloud? Nothing, really, but I’m rather bloated with the resurgence of metaphors and analogs which seek to clarify new computing recipes in order to justify more gut-busting consumption, warranted or not, diets-be-damned.  But really…sometimes a dish is delicious in its simplicity and doesn’t need any garnish. 😉

In fact, this comparison isn’t really at all that accurate or interesting, but I found it funny nonetheless…

Frankly, it’s *really* just an excuse to wish you all a very merry ClouDucken Day 🙂

May your IT be stuffed and your waistline elastic.

/Hoff


Oh, c’mon…

October 28th, 2011 1 comment

Story here from Network World.

Frankly, XML Signature-Wrapping and XSS don’t represent “massive security flaws in cloud architectures.”

They represent unfortunate vulnerabilities in authentication mechanisms and web app security, but “cloud architecture?”

These vulnerabilities were also fixed.  Quickly.

Further, while the attack vector will continue to play an important role when using Cloud (publicly) as a delivery model (that is, APIs,) this story is being overplayed.

Will this/could this/is this type of vulnerability pervasive? Certainly there are opportunities for abuse of Internet-facing APIs and authentication schemes, especially given the reliance on vulnerable protocols and security models.

Perhaps.

Is it scary?

Yes.

See: Cloudifornication and Cloudinomicon.

 


The Killer App For OpenFlow and SDN? Security.

October 27th, 2011 8 comments

I spent yesterday at the PacketPushers/TechFieldDay OpenFlow Symposium. The event provided a good overview of what OpenFlow [currently] means, how it fits into the overall context of software-defined networking (SDN) and where it might go from here.

I’d suggest reading Ethan Banks’ (@ecbanks) overview here.

Many of us left the event, however, still wondering about what the “killer app” for OpenFlow might be.

Chatting with Ivan Pepelnjak (@ioshints) and Derrick Winkworth (@CloudToad,) I reiterated that selfishly, I’m still thrilled about the potential that OpenFlow and SDN can bring to security.  This was a topic only briefly skirted during the symposium as the ACL-like capabilities of OpenFlow were discussed, but there’s so much more here.

I wrote about this back in May (OpenFlow & SDN – Looking forward to SDNS: Software Defined Network Security):

… “security” needs to be as programmatic/programmable, agile, scaleable and flexible as the workloads (and stacks) it is designed to protect. “Security” in this context extends well beyond the network, but the network provides such a convenient way of defining templated containers against which we can construct and enforce policies across a wide variety of deployment and delivery models.

So as I watch OpenFlow (and Software Defined Networking) mature, I’m really, really excited to recognize the potential for a slew of innovative ways we can leverage and extend this approach to networking [monitoring and enforcement] in order to achieve greater visibility, scale, agility, performance, efficacy and reduced costs associated with security.  The more programmatic and instrumented the network becomes, the more capable our security options will become also.

I had to chuckle at a serendipitous tweet from a former Cisco co-worker (Stefan Avgoustakis, @savgoust) because it’s really quite apropos for this topic:

…I think he’s oddly right!

Frankly, if you look at what OpenFlow and SDN (and network programmability in general) give an operator — the visibility and effective graph of connectivity as well as the multiple-tuple flow action capabilities — there are numerous opportunities to leverage the separation of control/data plane across both virtual and physical networks to provide better security capabilities in response to threats, and at a pace/scale/radius commensurate with said threat.

To be able to leverage telemetry and flow tables in the controllers “centrally” and then “dispatch” the necessary security response on an as-needed basis only to the network location that needs it really does start to sound a lot like the old “immune system” analogy that SDN (self-defending networks) promised.

The ability to distribute security capabilities more intelligently as a service layer which can be effected when needed — without the heavy shotgunned footprint of physical in-line devices or the sprawl of virtualized appliances — is truly attractive.  Automation for detection and ultimately prevention is FTW.
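To make that concrete, here’s the shape of the thing. This is a sketch only: the controller interface below (locate_host, install_flow) is hypothetical shorthand for whatever a given OpenFlow controller actually exposes, not any product’s real API. An IDS alert arrives, the controller’s topology view pins the offending host to a single switch/port, and a short-lived drop flow gets pushed to that one location instead of shotgunned fabric-wide:

    # Sketch of "dispatch the response only where needed": on an IDS
    # alert, look up the offending host's single attachment point and
    # push a short-lived drop flow there. Everything on the controller
    # object (locate_host, install_flow) is a hypothetical stand-in for
    # a real OpenFlow controller's API, not any particular product's.
    from dataclasses import dataclass

    @dataclass
    class Alert:
        src_ip: str    # offending host, as reported by the IDS
        severity: int  # 1 (informational) .. 10 (critical)

    def quarantine(controller, alert: Alert, ttl_seconds: int = 300):
        """Push a targeted containment flow in response to an IDS alert."""
        if alert.severity < 7:
            return  # below threshold: keep watching, leave the network alone

        # The controller's topology/telemetry view gives us the one
        # switch and port where the suspect host attaches.
        switch, port = controller.locate_host(alert.src_ip)

        # Match only this host's traffic at its ingress port; the rest of
        # the switch (and fabric) is untouched, and the flow ages out on
        # its own so containment isn't forever by accident.
        controller.install_flow(
            switch=switch,
            match={"in_port": port, "nw_src": alert.src_ip},
            actions=[],  # an empty action list means "drop" in OpenFlow
            hard_timeout=ttl_seconds,
        )

Swap the empty action list for an output action toward a scrubber or IPS and the same pattern does redirection instead of dropping, which is exactly where “security as a dispatchable service layer” starts to get interesting.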

Bundling the capabilities delivered via programmatic interfaces and coupling that with ways of integrating the “network” and “applications” (of which security is one) produces some really neat opportunities.

Now, this isn’t just a classical “data center core” opportunity, either. How about the WAN/Edge?  Campus, branch…? Anywhere you have the need to deliver security as a service.

For non-security examples, check out Dave Ward’s (my Juniper colleague) presentation “Programmable Networks are SFW” where he details interesting use cases such as “service engineered paths,” “service appliance pooling,” “service specific topology,” “content request routing,” and “bandwidth calendaring” for example.

…think of the security ramifications and opportunities linked to those capabilities!

I’ve mocked up a couple of very interesting security prototypes using OpenFlow and some open source security components, from IDP to anti-malware, and the potential is exciting because OpenFlow, in conjunction with other protocols and solutions in the security ecosystem, could provide some of the missing glue necessary to deliver a constant but abstracted security command/control (née API-like) capability across heterogeneous infrastructure.

NOTE: I’m not suggesting that OpenFlow itself provide these security capabilities, but rather enable security solutions to take advantage of the control/data plane separation to provide for more agile and effective security.

If the potential exists for OpenFlow to effectively allow choice of packet forwarding engines and network virtualization across the spectrum of supporting vendors’ switches, it occurs to me that we could utilize it for firewalls, intrusion detection/prevention engines, WAFs, NAC, etc.

Thoughts?


A Contentious Question: The Value Proposition & Target Market Of Virtual Networking Solutions?

September 28th, 2011 26 comments

I have, what I think, is a simple question I’d like some feedback on:

Given the recent influx of virtual networking solutions, many of which are OpenFlow-based, what possible in-roads and value can they hope to offer in heavily virtualized enterprise environments wherein the virtual networking is owned and controlled by VMware?

Specifically, if the only third-party VMware virtual switch to date is Cisco’s and access to this platform is limited (if at all available) to startup players, how on Earth do BigSwitch, Nicira, vCider, etc. plan to insert themselves into an already contentious environment effectively doing mindshare and relevance battle with the likes of mainline infrastructure networking giants and VMware?

If your answer is “OpenFlow and OpenStack will enable this access,” I’ll follow along with a question that asks how long a runway these startups have, hanging their shingle on relatively new efforts (mainly open source) that the enterprise is not typically an early adopter of.

I keep hearing notional references to the problems these startups hope to solve for the “Enterprise,” but just how (and who) do they think they’re going to get to consider their products at a level that gives them reasonable penetration?

Service providers, maybe?

Enterprises…?

It occurs to me that most of these startups are being built to be acquired by traditional networking vendors who will (or will not) adopt OpenFlow when significant enterprise dollars materialize in stacks that are not VMware-centric.

Not meaning to piss anyone off, but many of these startups’ business plans are shrouded in the mystical veil of “wait and see.”

So I do.

/Hoff

Ed: To be clear, this post isn’t about “OpenFlow” specifically (that’s only one of many protocols/approaches,) but rather the penetration of a virtual networking solution into a “closed” platform environment dominated by a single vendor.

If you want a relevant analog, look at the wasteland that represents the virtual security startups that tried to enter this space (and even the larger vendors’ solutions) and how long this has taken/fared.

If you read the comments below, you’ll see people start to accidentally tease out the real answer to the question I was asking…about the value of these virtual networking solutions providers.  The funny part is that despite the lack of comments from most of the startups I mention, it took Brad Hedlund (from Cisco) to recognize why I wrote the post, which is the following:

“The *real* reason I wrote this piece was to illustrate that really, these virtual networking startups are really trying to invade the physical network in virtual sheep’s clothing…”

…in short, the problem space they’re trying to solve is actually in the physical network, or more specifically to bridge the gap between the two.
