DDoS – A Moose On Cloud’s Table Or A Pea Under The Mattress?

September 7th, 2009

Readers of my blog will no doubt be familiar with Roland Dobbins.  He’s commented on lots of posts here and whilst we don’t always see eye-to-eye, I really respect both his intellect and his style.

So it’s fair to say that Roland is not a shy lad.  Formerly at Cisco and now at Arbor, he’s made his position (and likely his living) on dealing with a rather unpleasant issue in the highly distributed and networked InterTubes: Distributed Denial of Service (DDoS) attacks.

A recent article in ITWire titled “DDoS, the biggest threat to Cloud Computing” sums up Roland’s focus:

“According to Roland Dobbins, solutions architect for network security specialist Arbor Networks, distributed denial of service attacks are one of the most under-rated and ill-guarded against security threats to corporate IT, and in particular the biggest threat facing cloud computing.”

DDOS, Dobbins claims, is largely ignored in many discussions around network and cloud computing security. “Most discussions around cloud security are centred around privacy, confidentiality, the separation of data from the application logic, but the security elephant in the room that very few people seem to want to talk about is DDOS. This is the number one security threat facing the cloud model,” he told last week’s Ausnog conference in Sydney.

“In cloud computing where infrastructure is shared by potentially millions of users, DDOS attacks have the potential to have much greater impact than against single tenanted architectures,” Dobbins argues. Yet, he says, “The cloud providers emerging as leaders don’t tend to talk much about their resiliency to DDOS attacks.”

Depending upon where you stand — especially if we’re talking about Public Clouds, and large Public Cloud providers such as Google, Amazon, Microsoft, etc. — you might cock your head to one side, raise an eyebrow and focus on the sentence fragment “…and in particular the biggest threat facing cloud computing.”  One of the reasons DDoS is under-appreciated is that — in relative frequency, and in the stable of solutions and skill sets available to deal with it — it is a long-tail event.

With unplanned outages afflicting almost all major Cloud providers today, the moose on the table seems to be good ol’ internal operational issues at the moment…that’s not to say DDoS won’t become a bigger problem as the models for networked Cloud resources change, but as the model changes, so will the defensive options in the stable.

With the decentralization of data but the mass centralization of data centers featured by these large Cloud providers, one might see how this statement could strike fear into the hearts of potential Cloud consumers everywhere and Roland is doing his best to serve us a warning — a Public (denial of) service announcement.

Sadly, at this point, however, I’m not convinced that DDoS is “the biggest threat facing Cloud Computing” and whilst providers may not “…talk much about their resiliency to DDoS attacks,” some of that may likely be due to the fact that they don’t talk much about security at all.  It may also be due to the fact that, in many cases, what one can do to respond to these attacks is directly proportional to the size of one’s wallet.

Large network and service providers have been grappling with DDoS for years; so have large enterprises.  Folks like Roland have been on the front lines.

Cloud will certainly amplify the issues of DDoS because of how resources — even when distributed and resiliently load balanced in elastic and “perceptively infinitely scalable” ways — are ultimately organized, offered and consumed.  This is a valid point.

But if we look at the heart of most criminal elements exploiting the Internet today (and what will become Cloud,) you’ll find that the great majority want — no, *need* — victims to be available.  If they’re not, there’s no exploiting them.  DDoS is blunt force trauma — with big, messy, bloody blows that everybody notices.  That’s simply not very good for business.

At the end of the day, I think DDoS is important to think about.  I think variations of DDoS are, too.

I think that most service providers are thinking about it and investing in technology from companies such as Cisco and Arbor to deal with it, but as Roland points out, most enterprises are not — and if Cloud has its way, they shouldn’t have to:

Paradoxically, although Dobbins sees DDOS as the greatest threat to cloud computing, he also sees it as the potential solution for organisations grappling with the complexities of securing the network infrastructure.

“One answer is to get rid of all IT systems and hand them over to an organisation that specialises in these things. If the cloud providers are following best practice and have the visibility to enable them to exert control over their networks it is possible for organisation to outsource everything to them.”

For those organisations that do run their own data centres, he suggests they can avail themselves of ‘clean pipe’ services which protect against DDOS attacks. According to Nick Race, head of Arbor Networks Australia, Telstra, Optus and Nextgen Networks all offer such services.
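For a sense of what a ‘clean pipe’ boils down to at the packet level, here’s a toy sketch — real scrubbing centers use flow telemetry, traffic diversion and attack signatures, but the core act of shedding excess volume per source looks something like this (rates and names invented for illustration):

```python
import time
from collections import defaultdict

# Toy per-source token bucket: each source may send RATE requests/sec,
# bursting up to BURST. Carrier-grade scrubbing is far more sophisticated,
# but "drop what exceeds a source's allowance" is the essential move.
RATE, BURST = 10.0, 20.0

buckets = defaultdict(lambda: {"tokens": BURST, "ts": time.monotonic()})

def allow(src_ip: str) -> bool:
    b, now = buckets[src_ip], time.monotonic()
    b["tokens"] = min(BURST, b["tokens"] + (now - b["ts"]) * RATE)
    b["ts"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0
        return True
    return False  # shed: source exceeded its allowance
```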

So what about you?  Moose on the table or pea under the mattress?

/Hoff

Proof Of How I Almost Took The Internet Down…

September 5th, 2009

I’ve tripped over it a couple of times.

I’ve done things to it and with it that perhaps I shouldn’t have.

I’ve even rebooted it once or twice.

On Thursday, I tried — unsuccessfully — to once and for all take down the Internet.

He’s just too damned resilient for his own good. 😉

[Image: Boy and his Turtle]

One of my heroes…and an awesome person. Thank you, Vint.

You can read about the exploits of the Infrastructure 2.0 Working Group at SRI from Greg Ness’ blog here.

/Hoff

Variety & Darwinism In Solutions Is Innovation, In Standards It’s A War?

September 5th, 2009

I find it quite interesting that in the last few months or so, as Cloud has emerged as a full-fledged business opportunity, we’ve seen the rise of many new companies, strategies and technologies. For the most part, hype aside, people praise this as innovation and describe it as a natural evolutionary process.

Strangely enough, with the emergence of new opportunity comes the ever-present push to standards.  Many see standards introduced too early as an innovation squasher: premature standardization inhibits free-market evolution, crams down the smaller players, and lets the big fish take over — especially when the standards are backed by said big fish.  The open versus proprietary debate is downright religious.

Cloud Computing is no different.

We’ve seen many “standards” float to the surface recently — some backed by vendors, others by groups of concerned citizenry.  Many Cloud providers have published their APIs in an attempt to standardize interfacing to their offerings.  Some are open, some are proprietary.  Some are even open-sourced.  Some are simply de facto, based upon the deployment of a set of technology, solutions and an ecosystem built around supporting it.  Professional standards organizations are also now getting involved.

In J. Nicholas Hoover’s blog post titled “Groups Seek Cloud Computing Standards,” Gartner’s David Cearley said:

“Community participation, deliberate action, and planning must be a vital part of any successful standards process…Otherwise, he said, cloud standards efforts could fail miserably.”

“Standards is one of those things that could absolutely strangle and kill everything we want to do in cloud computing if we do it wrong,” he said. “We need to make sure that as we’re approaching standards, we’re approaching standards more as they were approached in the broader internet, just in time.”

I suppose that depends upon how you measure success…

Tom Nolle wrote an interesting piece titled: “Multiple Standards Could Spoil Cloud Computing” in which he lists 7 standards bodies “competing” for Cloud, wondering out loud why, if they all have similar interests, they exist separately.  After he talks about the difference between those focused on Public and Private Clouds, he bemoans the bifurcation and then plugs the one he finds best 😉

So now we have live public cloud services with incomplete standards and evolving private cloud standards with no implementations.

The best hope for a unification is the Cloud Computing Interoperability Forum. Its Unified Cloud Architecture tackles standards by making public cloud computing interoperable. Their map of cloud computing shows the leading public cloud providers and a proposed Unified Cloud Interface that the body defines, with a joking reference to Tolkien’s Lord of the Rings, as “One API to Rule them All.”

So make that 8 players…

This week we’ve seen the release of the VMware-sponsored and DMTF-submitted vCloud. We also saw Red Hat introduce their Deltacloud API.  We have the Open Cloud Computing Interface (OCCI) standards work which is getting underway within the Open Grid Forum (OGF).  There’s a veritable plethora of groups, standards and efforts at play.

Some of it is likely duplicative.

Some of it is likely vendor-fed.

The reality is that unlike others, I find it refreshing.

I think it’s great that we have multiple efforts.

It would, for sure, be nice if we could all agree and have one focused set of work, but that’s simply not reality.  It will be confusing for all concerned in the short term.

The Open vs. mostly-open debates will continue, but this is NORMAL.  In the end, we end up with survival of the marketed-fittest.  The standards that win are the standards that are most optimally muscled, marketed and adopted.

Simon Wardley wrote a piece called “The Cloud Computing War” which to me read like an indictment of the process (I admit my review may be colored by what I perceive as FUD regarding VMware’s vCloud), but I can’t help but shrug it off and instead decide to focus on where and with whom I will pitch my tent.

I’ve already done so with the Cloud Security Alliance (not a standards body) and I’m looking at using vCloud to find a home for my A6 concept.

A Cloud standards war?  War is such an ugly term.  It’s just the normal activity associated with disruptive innovation and the markets sorting themselves out.  The standards arena is simply where the dirty laundry gets exposed.  Get used to it; there’s enough mud/FUD flinging that you can expect several loads 😉

/Hoff

NESSessary Question: Will Virtualization Undermine Network Equipment Vendors?

August 30th, 2009

Greg Ness touched off an interesting discussion when he asked “Will Virtualization Undermine Network Equipment Vendors?”  It’s a great read summarizing how virtualization (and Cloud) are really beginning to accelerate how classical networking equipment vendors are re-evaluating their portfolios in order to come to terms with these disruptive innovations.

I’ve written so much about this over the last three years and my response is short and sweet:

Virtualization has actually long been an enabler for network equipment vendors — not server virtualization, mind you, but network virtualization.  The same goes in the security space. The disruption caused by server virtualization is only acting as an accelerant — pushing the limits of scale, redefining organizational and operational boundaries, and acting as a forcing function causing wholesale reconsideration of archetypal network (and security) topologies.

The compressed timeframe associated with the disruption caused by virtualization — and its adoption in conjunction with the arrival of Cloud Computing — may seem unnatural, but when one takes the longer-term view, it’s quite natural.  We’ve seen it before in vignettes across the evolution of computing, but the convergence of economics, culture, technology and consumerism has amplified its relevance.

To answer Greg’s question: virtualization will only undermine those network equipment vendors who were not prepared for it in the first place.  Those that were building highly virtualized, context-enabled routing, switching and security products will embrace this swing in the hardware/software pendulum and develop hybrid solutions that span the physical and virtual manifestations of what the “network” has become.

As I mentioned in my blog titled “Quick Bit: Virtual & Cloud Networking – Where It ISN’T Going…”:

Specifically, as it comes to understanding how the network plays in virtual and Cloud architectures, it’s not where the network *is* in the increasingly complex virtualized, converged and unified computing architectures, it’s where networking *isn’t.*

[Image: Where ISN'T The Network?]

Take a look at your network equipment vendors.  Where do they play in that stack above?  Compare and contrast that with what is going on with vendors like Citrix/Xen with the Open vSwitch, Vyatta, Arista with vEOS and Cisco with the Nexus 1000v*…interesting times for sure.

/Hoff

*Disclosure: I work for Cisco.

A Note On Multitenancy As A ‘Defining’ Cloud Attribute…

August 30th, 2009

Balakrishna Narasimh and I were discussing the recent hoohaa on Public and Private Clouds when he made an observation on Twitter:

Starting to think public vs private clouds is misleading terminology. more meaningful distinction is single-tenant vs multi-tenant clouds.

I suggested that multitenancy can certainly be an attribute of Cloud deployment, but that I don’t see it as being a differentiator.  I responded thusly:

So different business units in an enterprise don’t represent different “tenants?” They can be governed w/ diff. SLA, policy, $

My point here was that trying to use multitenancy as a way to distinguish between Public and Private Cloud deployments ignores the reality that many large enterprises — many of whom are beginning to architect and deploy Private Clouds — think of their business constituencies as individual “tenants.”  Each of these “tenants” often has different business requirements, service level requirements, cost structures and chargeback rates, policies, etc.
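To make that concrete, here’s a minimal sketch (all names and values invented) of internal business units modeled as tenants of a Private Cloud, each with its own SLA, chargeback rate and policy set:

```python
from dataclasses import dataclass

# Illustrative only: business units as "tenants," governed exactly like
# the tenants of a Public Cloud -- distinct SLAs, chargeback and policy.
@dataclass(frozen=True)
class Tenant:
    name: str
    sla_uptime: float         # contractual availability target
    chargeback_per_vm: float  # internal $/VM/month
    policies: tuple           # governing security/compliance policies

tenants = [
    Tenant("Finance",   0.9999, 220.0, ("PCI", "SOX")),
    Tenant("Marketing", 0.999,  140.0, ("AUP",)),
    Tenant("R&D",       0.99,    90.0, ("IP-protection",)),
]
```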

Food for thought.

/Hoff

Calling All Private Cloud Haters: Amazon Just Peed On Your Fire Hydrant…

August 26th, 2009

Werner Vogels brought a smile to my face today with his blog titled “Seamlessly Extending the Data Center – Introducing Amazon Virtual Private Cloud.”  In short:

We have developed Amazon Virtual Private Cloud (Amazon VPC) to allow our customers to seamlessly extend their IT infrastructure into the cloud while maintaining the levels of isolation required for their enterprise management tools to do their work.

In one fell swoop, AWS has:

  • Legitimized Private Cloud as a reasonable, needed, and prudent step toward Cloud adoption for enterprises,
  • Substantiated the value proposition of Private Cloud as a way of removing a barrier to Cloud entry for enterprises, and
  • Validated the ultimate vision toward hybrid Clouds and Inter-Cloud

They made this announcement from the vantage point of operating as a Public Cloud provider — in many cases THE Public Cloud provider of choice for those arguing from an exclusionary perspective that Public Cloud is the only way forward.

Now, AWS’ position on Private Cloud is pretty clear; straight from the horse’s mouth, Werner says “Private Cloud is not the Cloud” (see below) — but it’s also clear they’re willing to sell you some 😉

The cost for VPC isn’t exorbitant, but it’s not free, either, so the business case is clearly there (see the official VPC site) – VPN connectivity is $0.05 per VPN connection-hour, with data transfer rates of $0.10 per GB inbound and from $0.17 per GB down to $0.10 per GB outbound depending upon volume (with heavy data replication or intensive workloads, people are going to need to watch the odometer.)
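To illustrate the odometer point, a quick back-of-the-envelope calculation, with an invented replication-heavy workload (one always-on VPN connection, 5 TB in and 5 TB out per month at the top outbound tier):

```python
HOURS_PER_MONTH = 730

vpn = 0.05 * HOURS_PER_MONTH      # $0.05 per VPN connection-hour
gb_in, gb_out = 5_000, 5_000      # invented monthly volumes
xfer = gb_in * 0.10 + gb_out * 0.17

print(f"VPN: ${vpn:.2f}, transfer: ${xfer:.2f}, total: ${vpn + xfer:.2f}")
# -> VPN: $36.50, transfer: $1350.00, total: $1386.50
```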

I’m going to highlight a couple of nuggets from his post:

We continuously listen to our customers to make sure our roadmap matches their needs. One important piece of feedback that mainly came from our enterprise customers was that the transition to the cloud of more complex enterprise environments was challenging. We made it a priority to address this and have worked hard in the past year to find new ways to help our customers transition applications and services to the cloud, while protecting their investments in their existing IT infrastructure. …

Private Cloud Is Not The Cloud – These CIOs know that what is sometimes dubbed “private cloud” does not meet their goal as it does not give them the benefits of the cloud: true elasticity and capex elimination. Virtualization and increased automation may give them some improvements in utilization, but they would still be holding the capital, and the operational cost would still be significantly higher.

We have been listening very closely to the real requirements that our customers have and have worked closely with many of these CIOs and their teams to understand what solution would allow them to treat the cloud as a seamless extension of their datacenter, where their standard management practices can be applied with limited or no modifications. This needs to be a solution where they get all the benefits of cloud as mentioned above [Ed: eliminates cost, elastic, removes “undifferentiated heavy lifting”] while treating it as a part of their datacenter.

We have developed Amazon Virtual Private Cloud (Amazon VPC) to allow our customers to seamlessly extend their IT infrastructure into the cloud while maintaining the levels of isolation required for their enterprise management tools to do their work.

With Amazon VPC you can:

  • Create a Virtual Private Cloud and assign an IP address block to the VPC. The address block needs to be a CIDR block such that it will be easy for your internal networking to route traffic to and from the VPC instance. These are addresses you own and control, most likely as part of your current datacenter addressing practice.
  • Divide the VPC addressing up into subnets in a manner that is convenient for managing the applications and services you want to run in the VPC.
  • Create a VPN connection between the VPN Gateway that is part of the VPC instance and an IPSec-based VPN router on your own premises. Configure your internal routers such that traffic for the VPC address block will flow over the VPN.
  • Start adding AWS cloud resources to your VPC. These resources are fully isolated and can only communicate to other resources in the same VPC and with those resources accessible via the VPN router. Accessibility of other resources, including those on the public internet, is subject to the standard enterprise routing and firewall policies.

Amazon VPC offers customers the best of both the cloud and the enterprise managed data center:

  • Full flexibility in creating a network layout in the cloud that complies with the manner in which IT resources are managed in your own infrastructure.
  • Isolating resources allocated in the cloud by only making them accessible through industry standard IPSec VPNs.
  • Familiar cloud paradigm to acquire and release resources on demand within your VPC, making sure that you only use those resources you really need.
  • Only pay for what you use. The resources that you place within a VPC are metered and billed using the familiar pay-as-you-go approach at the standard pricing levels published for all cloud customers. The creation of VPCs, subnets and VPN gateways is free of charge. VPN usage and VPN traffic are also priced at the familiar usage-based structure.

All the benefits from the cloud with respect to scalability and reliability, freeing up your engineers to work on things that really matter to your business.
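Those four steps map almost one-to-one onto API calls. As a rough sketch using today’s boto3 SDK — which long post-dates this 2009 announcement, so take it as illustrative of the workflow rather than the original API — with the CIDR blocks, gateway address, ASN and AMI ID all invented:

```python
import boto3  # AWS SDK for Python (post-dates this announcement)

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create the VPC with an address block you own/control (invented here).
vpc_id = ec2.create_vpc(CidrBlock="10.20.0.0/16")["Vpc"]["VpcId"]

# 2. Carve the block into subnets per application/service grouping.
subnet_id = ec2.create_subnet(VpcId=vpc_id,
                              CidrBlock="10.20.1.0/24")["Subnet"]["SubnetId"]

# 3. IPsec VPN back to your premises: a VPN gateway on the AWS side plus a
#    customer gateway describing your on-prem router (IP/ASN invented).
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw, VpcId=vpc_id)
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="198.51.100.7", BgpAsn=65000
)["CustomerGateway"]["CustomerGatewayId"]
ec2.create_vpn_connection(Type="ipsec.1", CustomerGatewayId=cgw,
                          VpnGatewayId=vgw)

# 4. Add resources to the VPC; they are reachable only over the VPN.
ec2.run_instances(ImageId="ami-00000000", MinCount=1, MaxCount=1,
                  InstanceType="t3.micro", SubnetId=subnet_id)
```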

Jeff Barr did a great job of giving a little more detail on his blog but also brought up a couple of points I need to noodle on from a security perspective:

Because the VPC subnets are used to isolate logically distinct functionality, we’ve chosen not to immediately support Amazon EC2 security groups. You can launch your own AMIs and most public AMIs, including Microsoft Windows AMIs. You can’t launch Amazon DevPay AMIs just yet, though.

The Amazon EC2 instances are on your network. They can access or be accessed by other systems on the network as if they were local. As far as you are concerned, the EC2 instances are additional local network resources — there is no NAT translation. EC2 instances within a VPC do not currently have Internet-facing IP addresses.

We’ve confirmed that a variety of Cisco and Juniper hardware/software VPN configurations are compatible; devices meeting our requirements as outlined in the box at right should be compatible too. We also plan to support Software VPNs in the near future.

The notion of the VPC and associated VPN connectivity, coupled with the “software VPN” statement above, reminds me of CohesiveFT’s VPN-Cubed solution.  While this is an IaaS-focused discussion, it’s only fair to bring up Google’s Secure Data Connector, which was announced some moons ago, from a SaaS/PaaS perspective, too.

I would be remiss in my musings were I not to also suggest that Cloud brokers and Cloud service providers such as RightScale, GoGrid, Terremark, etc. were on the right path in responding to customers’ needs well before this announcement.

Further, it should be noted that now that the 800lb gorilla has staked a flag, this will bring up all sorts of additional auditing and compliance questions, as any sort of broad connectivity into and out of security zones and asset groupings always does.  See the PCI debate (How to Be PCI Compliant In the Cloud).

At the end of the day, this is a great step forward — one I am happy to say I’ve been talking about and presenting (see my Frogs presentation) for the last two years.

/Hoff

On Appirio’s Prediction: The Rise & Fall Of Private Clouds

August 18th, 2009

I was invited to add my comments to Appirio’s corporate blog in response to their 2009 prediction “Rise and Fall of the Private Cloud,” but as I mentioned in kind on Twitter, debating a corporate talking point on a company’s blog is like watching two monkeys trying to screw a football; it’s messy and nobody wins.

However, in light of the fact that I’ve been preaching about the realities of phased adoption of Cloud — with Private Cloud being a necessary step — I thought I’d add my $0.02.  Of course, I’m doing so while on vacation, sitting on an ancient lava flow with my feet in the ocean in Hawaii, so it’s likely to be tropical in nature.

Short and sweet, here’s Appirio’s stance on Private Cloud:

Here’s the rub: Private clouds are just an expensive data center with a fancy name. We predict that 2009 will represent the rise and fall of this over-hyped concept. Of course, virtualization, service-oriented architectures, and open standards are all great things for every company operating a data center to consider. But all this talk about “private clouds” is a distraction from the real news: the vast majority of companies shouldn’t need to worry about operating any sort of data center anymore, cloud-like or not.

It’s clear that we’re talking about very different sets of companies. If we’re referring to SME/SMB’s, then I think it’s fair to suggest the sentiment above is valid.

If we’re talking about a large, heavily-regulated enterprise (pick your industry/vertical) with sunk costs and the desire/need to leverage the investment they’ve made in the consolidation, virtualization and enterprise modernization of their global datacenter footprints and take it to the next level, leveraging capabilities like automation, elasticity, and chargeback, it’s poppycock.

Further, it’s pretty clear that the hybrid model of Cloud will ultimately win in this space with the adoption of BOTH Public and Private Clouds where and when appropriate.

The idea that somehow companies can use “private cloud” technology to offer their employees web services similar to Google, Amazon, or salesforce.com will lead to massive disappointment.

So now the definition of “Cloud” is limited to “web services” and is defined by “Google, Amazon, or Salesforce.com?”

I call this MyopiCloud.  If this is the only measure of Cloud success, I’d be massively disappointed, also.

Onto the salient points:

Here’s why:

  • Private clouds are sub-scale: There’s a reason why most innovative cloud computing providers have their roots in powering consumer web technology—that’s where the numbers are. Very few corporate data centers will see anything close to the type of volume seen by these vendors. And volume drives cost—the world has yet to see a truly “at scale” data center.

Interesting. If we hang the definition of “at scale” solely on Internet-based volume, I can see how this rings true.  However, large enterprises with LANs and WANs with multi-gigabit connectivity feeding server farms and client bases of internal constituents (not to mention extranet connections) need to be accounted for in that assessment, especially if we’re going to be honest about volume.  Limiting connectivity to only the Internet is unreasonable.

Certainly most enterprises are not autonomically elastic (neither are most Cloud providers today) but that’s why comparing apples to elephants is a bit silly, even with the benefits that virtualization is beginning to deliver in the compute, network and storage realms.

I know of an eCommerce provider who reports trafficking in (on average) 15 Gb/s of sustained HTTP traffic via its Internet feeds.  Want to guess what the internal traffic levels are inside what amounts to its Private Cloud at that level of ingress/egress?  Oh, did I just suggest that this “enterprise” is already running a “Private Cloud?”  Why yes, yes I did.  See James Watters’ interesting blog on something similar titled “Not So Fast Public Cloud: Big Players Still Run Privately.”

  • There’s no secret sauce: There’s no simple set of tricks that an operator of a data center can borrow from Amazon or Google. These companies make their living operating the world’s largest data centers. They are constantly optimizing how they operate based on real-time performance feedback from millions of transactions. (check out this presentation from Jeff Barr and Peter Coffee at the Architecture and Integration Summit). Can other operators of data centers learn something from this experience? Of course. But the rate of innovation will never be the same—private data centers will always be many, many steps behind the cloud.
  • Really? So technology such as Eucalyptus or VMware’s vCloud/Project Redwood doesn’t play here?  Certainly leveraging the operational models and technology underpinnings (regardless of volume) should allow an enterprise to scale massively, even if it’s not at the same levels, no?  The ability to scale to the needs of the business is important, even if you never do so at the scale of an AWS.  I don’t really understand this point.  My bandwidth is bigger than your bandwidth?

  • You can’t teach an old dog new tricks: What do you get when you move legacy applications as-is to a new and improved data center? Marginal improvements on your legacy applications. There’s only so much you can achieve without truly re-platforming your applications to a cloud infrastructure… you can’t teach an old dog new tricks. Now that’s not entirely fair…. You can certainly teach an old dog to be better behaved. But it’s still an old dog.
  • Woof! It’s really silly to suggest that the only thing an enterprise will do is simply move “legacy applications as-is to a new and improved data center” without any enterprise modernization, any optimization or the ability to more efficiently migrate to new and improved applications as the agility, flexibility and mobility issues are tackled.  Talk about pissing on fire hydrants!

  • On-premise does not equal secure: the biggest driver towards private clouds has been fear, uncertainty, and doubt about security. For many, it just feels more secure to have your data in a data center that you control. But is it? Unless your company spends more money and energy thinking about security than Amazon, Google, and Salesforce, the answer is probably “no.” (Read Craig Balding walk through “7 Technical Security Benefits of Cloud Computing”)
  • I’ve got news for you: just as on-premise does “…not equal secure,” neither does off-premise assure such.  I offer you this post as an example, with all its related posts for color.

Please show me empirically that Amazon, Google or Salesforce spends “…more money and energy thinking about security” than, say, a Fortune 100 company.  Better yet, please show me how I can be, say, PCI compliant using AWS?  Oh, right…Please see the aforementioned posts…especially the one that demonstrates how the most public security gaffes thus far in Cloud are related to the providers you cite in your example.

May I suggest that being myopic and mixing metaphors broadly by combining the needs and business drivers of the SME/SMB and representing them as those of large enterprises is intellectually dishonest.

Let’s be real, Appirio is in the business of “Enabling enterprise adoption of on-demand for Salesforce.com and Google Enterprise” — two examples of externally hosted SaaS offerings that clearly aren’t aimed at enterprises who would otherwise be thinking about Private Cloud.

Oops, the luau drums are sounding.

Aloha.

Do We Need CloudNAPs? It’s A Virtually Certain Maybe.

August 16th, 2009

Allan Leinwand from GigaOm wrote a really interesting blog the other day titled: “Do Enterprises Need a Toll Road to the Cloud?” in which he suggested that perhaps what is needed to guarantee high-performance and high-security Cloud connectivity is essentially a middleman that maintains dedicated aggregate connectivity between “…each of the public cloud providers:”

One solution would be for cloud services providers to offer dedicated leased line connections to their clouds. Though for many enterprises the cost of these leased lines over large geographies would be enough to eat into any savings they’d be getting by using the cloud in the first place. Another solution would come in the form of a service provider that aggregated dedicated connections to each of the public cloud providers.

This new provider — let’s call it CloudNAP (Cloud Network Access Point) — would solely be in the business of providing a toll road between the enterprise and the public cloud providers. The business of selling connectivity to the Internet, or transit, is a common ISP offering.  The CloudNAP transit service would be different, however, in that it would be focused on delivering connectivity solely between enterprises and cloud services providers and not between enterprises or between clouds.

The CloudNAP network could guarantee performance between the enterprise and the cloud by working with the service providers to enable the use of quality-of-service techniques that are not available over the public Internet, such as Multiprotocol Label Switching (MPLS) classes for WAN connections or IEEE 802.1p priorities for LAN connections. Perhaps CloudNAP could even restrict the use of connections to cloud service protocols and services like REST (representational state transfer) or HTTPS (Hypertext Transfer Protocol Secure) – thus preserving the network for its intended use by the enterprise.

While I have many opinions on multiple points within the article, I’ll focus briefly on just a couple, starting with the boldfaced section (emphasis is mine) above.  Specifically, monetizing connectivity between providers as a sole value add seems quite limited in terms of a business model.  Furthermore, I really see this as just another feature of what the emerging class of service brokers will offer.

As to the notion of privatizing transport for the purpose of applying QoS, that’s really just a fancy way of describing private Cloud peering and interconnects on the backside of Public Cloud service providers.  The challenge will come when these service providers (with the SPs directly or brokers) end up managing what amounts to massive numbers of “extranet” connections in current-day parlance; it’s simply taking the overlay architectures of DMZs as we know them today and flipping them outward.  I’m not going to tackle the issue of Net Neutrality in this piece because, well, I’m on vacation in Hawaii and I want to keep my blood pressure down 😉
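As a concrete (if deliberately trivial) illustration of the QoS point: an endpoint can ask for priority by marking its packets, but over the public Internet nothing obliges transit networks to honor the marking — which is precisely the gap a CloudNAP-style toll road would fill. A minimal sketch in Python, with the endpoint invented:

```python
import socket

# An endpoint *requesting* priority: DSCP EF (46) rides in the old TOS
# byte as 46 << 2 = 0xB8. On the public Internet this is merely a
# suggestion; a private "toll road" is a network that contractually
# honors such markings end to end.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)  # DSCP EF
s.connect(("cloud-provider.example.com", 443))        # invented endpoint
```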

The blog mentioned many times the lack of “…standard products that allow enterprises to install private network connections (either paid, dedicated leased lines or VPNs) that would provide predictable network performance and security,” but I’d suggest that’s wholly inaccurate — depending upon your definition of a “standard product.”

In the long term, the notion of an open market for hybrid Cloud connectivity — the Inter-Cloud — will take form, and much of the evolving work being done with open protocols, and that in the works by loose federations of suppliers with common goals and technology underpinnings, will emerge.

In the long term, do we need CloudNAPs? No. Will we get something similar by virtue of what we already do today? Probably.

/Hoff

Follow-On: The Audit, Assertion, Assessment, and Assurance API (A6)

August 16th, 2009

Update 2/1/10: The A6 effort is in full swing.  You can find out more about it at the Google Groups here.

A few weeks ago I penned a blog discussing an idea I presented at a recent Public Sector Cloud gathering that later inherited the name “Audit, Assertion, Assessment, and Assurance API (A6).”

The case for A6 is straightforward:

…take the capabilities of something like SCAP and embed a standardized and open API layer into each IaaS, PaaS and SaaS offering [Ed: At the API layer of each deployment model] to provide not only a standardized way of scanning for network vulnerabilities, but also configuration management, asset management, patch remediation, compliance, etc.

This way you win two ways: automated audit and security management capability for the customer/consumer and a streamlined, cost-effective, and responsive way of automating the validation of said controls in relation to compliance, SLA and legal requirements for service providers.
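To make the idea concrete, here is a minimal sketch of what querying such an interface might look like. Everything in it — host, endpoint paths, response fields — is a hypothetical illustration of the concept, not drawn from Ben’s draft documentation:

```python
import requests  # third-party HTTP client

# Hypothetical A6-style endpoint; the URL scheme and fields are invented.
BASE = "https://provider.example.com/a6/v1"

def get_assertions(scope: str, token: str) -> dict:
    """Ask the provider to assert its control state for a compliance
    scope (e.g. 'pci-dss'): machine-readable evidence on demand, rather
    than a point-in-time auditor's PDF."""
    r = requests.get(f"{BASE}/assertions/{scope}",
                     headers={"Authorization": f"Bearer {token}"},
                     timeout=10)
    r.raise_for_status()
    # e.g. {"scope": "pci-dss", "controls": [...], "attested_at": "..."}
    return r.json()
```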

Much discussion ensued on Twitter and via email/blogs explaining A6 in better detail and with more specificity.

The idea has since grown legs and I’ve started to have some serious discussions with “people” (*wink wink*) who are very interested in making this a reality, especially in light of business and technical use cases bubbling to the surface of late.

To that end, Ben (@ironfog) has taken the conceptual mumblings and begun work on a RESTful interface for A6. You can find the draft documentation here.  You can find his blog and awesome work on making A6 a reality here.  Thank you so much, Ben.

NOTE: The documentation/definitions below are conceptual and stale. I’ve left them here because they are important and relevant but are likely not representative of the final work product.

A6 API Documentation – Draft 0.11

I’m thinking of pulling together a more formalized working group for A6 and pushing hard with some of those “people” above to get better definition around its operational realities, as well as to understand the best way to create an open and extensible standard going forward.

If you’re interested in participating, please contact me ( choff @ packetfilter . com ) and let’s capitalize on the momentum, need and fortuitous timing to make A6 work.

Thanks,

/Hoff


Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure

August 9th, 2009

The talk I was scheduled to give at Black Hat in Vegas had that title.  Due to a timing issue, I couldn’t make Vegas.

The summary of CI^6 goes something like this:

What was in is now out.

This metaphor holds true not only as an accurate analysis of what happens to our data with the adoption trends of disruptive technology and innovation in the enterprise, but also parallels the amazing velocity with which our datacenters are being re-perimeterized and quite literally turned inside out thanks to Cloud computing and virtualization.

One of the really interesting things happening with the massive convergence of virtualization and cloud computing is its effect on security models, the corresponding compensating controls and the information they are designed to protect.

Where and how our data is created, processed, accessed, stored, backed up and destroyed in what is sure to become massively overlaid cloud-based services — and by whom and using whose infrastructure — yields significant concerns related to security, privacy, compliance and survivability.

Further, the “stacked turtle” problem becomes more visible as the notion of nested clouds becomes reality: cloud SaaS providers depending on Cloud IaaS providers which rely on Cloud network providers. It’s a house of, well, turtles.

The fragile application layer of infostructure, sitting atop infrastructure and held together with the baling wire and bubble gum of outdated metastructure, yields unintended information intercourse.

We will show multiple cascading levels of failure associated with relying on cloud-on-cloud infostructure/metastructure/infrastructure, including exposing flawed assumptions and untested theories as they relate to security, privacy and confidentiality in the Cloud, with some unique attack vectors.
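Before getting to the gist, the stacked-turtle problem can be framed with one line of arithmetic: if a SaaS rides on an IaaS which rides on a network provider, and failures are independent, availabilities multiply. A sketch with invented figures:

```python
# Three "three nines" layers stacked do not yield a three-nines service:
# assuming independent failures, composite availability is the product.
layers = {"SaaS": 0.999, "IaaS": 0.999, "Network": 0.999}

composite = 1.0
for availability in layers.values():
    composite *= availability

print(f"Composite availability: {composite:.6f}")               # ~0.997003
print(f"Expected downtime: {(1 - composite) * 8760:.1f} h/yr")  # ~26.3
```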

The gist of the talk shows examples of the fragility at each of the largely independent info-/meta-/infra-structure layers and then as a whole.

I spend quite a bit of time on the Metastructure layer.

While I plan to give the talk publicly soon at a venue which I will announce shortly, thematically, the talk’s content is already playing itself out in the real world.  If you need good examples of what I am talking about, I’ll use the two I focus in on in the presentation: DNS and BGP.

You need only look at the latest set of DDoS attacks on social media sites to see how relevant this continues to be.

Much of what holds the Internet and our Intranets together is based upon protocols and architecture never designed to scale to the levels they are going to get pushed to with Cloud.  Further, the inherent trust in the models used to frame fair play is equally as kaput.

The canaries in the coal mine are starting to chirp very loudly…

I find that people spend a lot of time criticizing the styles of delivery and presentation around securing the Metastructure layer.

They say there’s nothing new.  They say it’s just a way of seeking attention.

I’d suggest listening to the message regardless of what you think of the messengers.*

Talk amongst yourselves.

/Hoff

*Lori MacVittie has an interesting post highlighting this.