Author Archive

Virtual Routing – The Anti-Matter of Network SECURITY…

December 16th, 2008 10 comments

Here's a nod to Rich Miller, who pointed me (route the node, not the packet) to a blog entry from Andreas Antonopoulos titled "Virtual Routing – The anti-matter of network routing."

The premise, as brought up by Doug Gourlay from Cisco at the C-Scape conference, was seemingly innocuous but quite cool:

"How about using netflow information to re-balance servers in a data center"

Routing: Controlling the flow of network traffic along an optimal path between two nodes

Virtual-Routing or Anti-Routing: VMotioning nodes (servers) to optimize the flow of traffic on the network.

Using netflow information, identify those nodes (virtual servers) that have the highest traffic "affinity" from a volume perspective (or some other desired metric, like desired latency, etc.) and move (VMotion, XenMotion) the nodes around to re-balance the network. For example, bring the virtual servers exchanging the most traffic to hosts on the same switch or even to the same host to minimize traffic crossing multiple switches. Create a whole-data-center mapping of traffic flows, solve for least switch hops per flow and re-map all the servers in the data center to optimize network traffic.
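To make the idea concrete, here's a minimal sketch, purely illustrative and not anything Cisco or VMware has shipped, of a greedy placement pass that takes pairwise traffic volumes (NetFlow-style) and co-locates the chattiest VM pairs on the same host. The host names, capacities and byte counts are made up.

```python
# Illustrative sketch only: greedy co-location of VM pairs by observed traffic
# volume. Real placement would also weigh CPU/memory headroom, affinity rules,
# and (as argued below) security and compliance constraints.
from collections import defaultdict

def rebalance(flows, hosts, capacity):
    """flows: {(vm_a, vm_b): bytes observed}, hosts: list of host names,
    capacity: max VMs per host. Returns a {vm: host} placement."""
    placement = {}
    load = defaultdict(int)
    # Work through VM pairs from heaviest to lightest traffic.
    for (a, b), _volume in sorted(flows.items(), key=lambda kv: kv[1], reverse=True):
        for vm in (a, b):
            if vm in placement:
                continue
            partner = b if vm == a else a
            target = placement.get(partner)
            # Co-locate with the partner if it's placed and there's room,
            # otherwise fall back to the least-loaded host.
            if target is None or load[target] >= capacity:
                target = min(hosts, key=lambda h: load[h])
            placement[vm] = target
            load[target] += 1
    return placement

# Example: web1 and db1 exchange the most traffic, so they land on the same host.
flows = {("web1", "db1"): 900, ("web2", "db2"): 700, ("web1", "web2"): 50}
print(rebalance(flows, hosts=["esx01", "esx02"], capacity=2))
```

Note what's absent from that objective function: nothing in it knows or cares that web1 and db1 might live in different security zones, which is exactly the point of the rest of this post.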

My first reaction was, yup, that makes a lot of sense from a network point of view, and given who made the comment, it does make sense. Then I choked on my own tongue as the security weenie in me started in on the throttling process, reminding me that while this is fantastic from an autonomics perspective, it's missing some serious input variables.

Latency of the "network" and VM spin-up aside, the dirty little secret is that what's being described here is a realistic and necessary component of real time (or adaptive) infrastructure.  We need to get ultimately to the point where within context, we have the ability to do this, but I want to remind folks that availability is only one leg of the stool.  We've got the other nasty bits to concern ourselves with, too.

Let's look at this from two perspectives: the network plumber's and the security wonk's.

From the network plumbers' purview, this sounds like an awesome idea; do what is difficult in non-virtualized environments and dynamically adjust and reallocate the "location" of an asset (and thus flows to/from it) in the network based upon traffic patterns and arbitrary metrics.  Basically, optimize the network for the lowest latency and best performance or availability by moving VM's around and re-allocating them across the virtual switch fabric (nee DVS) rather than adjusting how the traffic gets to the static nodes.

It's a role reversal: the nodes become dynamic and the network becomes more static and compartmentalized.  Funny, huh?

The security wonk is unavailable for comment.  He's just suffered a coronary event.  Segmented network architectures based upon business policy, security, compliance and risk tolerances make it very difficult to perform this level of automation via service governors today, especially where segmentation reflects asset criticality, role or function as expressed as a function of (gulp) compliance, let's say.

Again, the concept works great in a flat network where asset grouping is, for the most part, irrelevant (hopefully governed by a policy asserting such) and what you're talking about is balancing compute with network and storage, but the moment you introduce security, compliance and risk management as factors into the decision fabric, things get very, very difficult.

Now, if you're Cisco and VMware, the models for how security engines can apply policy consistently across these fluid virtualized networks are starting to take shape, but what we're missing is the set of compacts or contracts that consistently define and enforce these policies no matter where workloads move (and control *if* they can move), and how these requirements get factored into the governance layer.

The standardization of governance approaches — even at the network layer — is lacking. There are lots of discrete tools available, but the level of integration and the input streams and output telemetry are not complete.

If you take a look, as an example, at CIRBA's exceptional transformational analytics and capacity management solution, replete with their multi-dimensional array of business process, technical infrastructure and resource mapping, they have no input for risk assessment data, compliance or "security" as variables.

When you look at the utility brought forward by the dynamic, agile and flexible capabilities of virtualized infrastructure, it's hard not to extrapolate all the fantastic things we could do. 

Unfortunately, the crushing weight of what happens when we introduce security, compliance and risk management to the dance means we have a more sobering discussion about those realities.

Here's an example reduced to the ridiculous: we have an interesting time architecting networks to maximize throughput, reduce latency and maximize resilience in the face of what can happen with convergence issues and flapping when we have a "routing" problem.

Can you imagine what might happen when you start bouncing VM's around the network in response to maximizing efficiency while simultaneously making unavailable the very resources we seek to maximize the availability of based upon disassociated security policy violations?  Fun, eh?

While we're witnessing a phase shift in how we design and model our networks to support more dynamic resources and more templated networks, we can't continue to mention the benefits and simply assume we'll catch up on the magical policy side later.

So for me, Virtual Routing is the anti-matter of network SECURITY, not network routing…or maybe more succinctly, perhaps security doesn't matter at all?

/Hoff

Categories: Cisco, Virtualization, VMware

Beyond the Sumo Match: Crosby, Herrod, Skoudis and Hoff…VirtSec Death Match @ RSA!

December 15th, 2008 2 comments

Besides the sumo suit wrestling match I'm organizing between myself and Simon Crosby at the upcoming RSA 2009 show, I'm really excited to announce that there will be another exciting virtualization security (VirtSec) event happening at the show.

Thanks to Tim Mather at RSA, much scheming and planning has paid off:

"In this verbal cage match session, two well known critics of virtualization security take on two virtualization company CTOs as they spar over how best to secure virtualization platforms: who should be responsible for securing it, and how that ultimately impacts customers and attackers.  We have Hoff and Skoudis versus Crosby and Herrod.  Refereeing will be respected analyst, Antonopoulos."

Simon Crosby (Citrix CTO), Steve Herrod (VMware CTO), Ed Skoudis (InGuardians) and I will have a lively debate moderated by Andreas Antonopoulos (Nemertes) that is sure to entertain and educate folks as to the many fascinating issues surrounding the present and future of VirtSec.  I expect to push the discussion toward cloud security as well…

WAR! 😉

Stay tuned for further announcements.

/Hoff

GigaOm’s Alistair Croll on Cloud Security: The Sky Is Falling!…and So Is My Tolerance For Absurdity

December 14th, 2008 3 comments
I just read the latest blog post by Alistair Croll of GigaOm titled "Cloud Security: The Sky Is Falling!" in which he suggests that we pillow-hugging security wonks ought to loosen our death grips on our data because not only are we flapping our worry feathers for nothing, but security in "the Cloud" will result in better security than we have today.

It's an interesting assertion, really: that despite no innovative changes in the underpinnings of security technology, no advances in security architecture or models and no fundamental security operational enhancements besides the notion of enhanced "monitoring," simply outsourcing infrastructure to a third party "in the cloud" will in some way make security "better," whatever version of "the Cloud" you may be describing:

I don’t believe that clouds themselves will cause the security breaches and data theft they anticipate; in many ways, clouds will result in better security. Here’s why:

    • Fewer humans – Most computer breaches are the result of human error; only 20-40 percent stem from technical malfunctions. Cloud operators that want to be profitable take humans out of the loop whenever possible.
    • Better tools – Clouds can afford high-end data protection and security monitoring tools, as well as the experts to run them. I trust Amazon’s operational skills far more than my own.
    • Enforced processes – You could probably get a co-worker to change your company’s IT infrastructure. But try doing it with a cloud provider without the proper authorization: You simply won’t be able to.
    • Not your employees — Most security breaches are committed by internal employees. Cloud operators don’t work for you. When it comes to corporate espionage, employees are a much more likely target.

Of course it takes people to muck things up; it always has and always will.  Rushing to embrace a "new" computing model without introducing appropriately compensating controls and adapted risk assessment/management methodologies and practices will absolutely introduce new threats, vulnerabilities and risk at a pace driven by supposed economic incentives that have people initially foaming at their good fortune and then fuming when it all goes bad.

    This comes down to the old maxim: "guns don't kill people, people kill people."  Certainly "the Cloud" alone won't increase breaches and data theft, but using it without appropriate safeguards will.

    This is an issue of squeezing the balloon.  The problem doesn't change in volume, it just changes shape.

Those of us concerned about security and privacy in cloud computing models have good reason to be concerned; we live with, and have lived with, these sorts of disruptive innovations and technologies before, and they really, really screw things up, because the security models and technology we lean on to manage risk are not adapted to this at all and the velocity of change eclipses our ability to do our jobs competently.

    Further bonking things up is the very definition of "the Cloud(s)" in the first place.

Despite the obvious differences in business models, use cases and technical architecture, as well as the non-existence of a singularity called "The Cloud," this article generalizes and marginalizes the security challenges of cloud computing regardless.  In fact, it emphasizes one leg of the IT stool (people) to the point of suggesting, via the suspension of disbelief, that the other two (process and technology) are problems less deserving of attention, as though they were magically addressed.

    To be fair, I can certainly see Alistair's argument holding water within the context of an SME/SMB with no dedicated expertise in security and little or no existing cost burden in IT infrastructure.  The premise: let your outsourced vendor provide you with the expertise in security you don't have as they have a vested interest to do so and can do it better than you.  

The argument hinges on two things: that insiders intent on malicious tampering with "infrastructure" are your biggest risk and are eliminated by "the cloud," and that infrastructure and business automation (heretofore highly sought-after elements of enterprise modernization efforts) are readily available now and floating about in the cloud despite their general lack of availability in the enterprise.

    So here's what's amusing to me:
1. It takes humans to operate the cloud infrastructure.  These human operators, despite automation, still suffer from the same scale and knowledge limitations as those in the real world.  Further, the service governance layers that translate business process, context and risk into enforceable policy across a heterogeneous infrastructure aren't exactly mature.
        
2. The notion that better tools exist in the cloud that haven't as yet been deployed in the larger enterprise seems a little unbelievable.  Again, I agree that this may be the case in the SME/SMB, but it's simply not the case in larger enterprises.  Given issues such as virtualization (which not all cloud providers depend upon, but bear with me), which can actually limit visibility and reach, I'd like to understand what these tools are and why we haven't heard of them before.
3. The notion that you can get a co-worker to "…change your company's IT infrastructure" but you can't get the same impact to occur in the cloud is ludicrous.  Besides the fact that the bulk of breaches result from abuse or escalation of privilege in operating systems and applications, not general "infrastructure," the Cloud, having abstracted this general infrastructure from view, leaves bare the ability to abuse the application layer just as ripely.
4. Finally, Alistair's premise that the bulk of attacks originate internally is misleading. His article was written a few days ago, yet the Intranet Journal article he cites to bolster his first point was written in 2006 and is based upon a study done by CompTIA in 2005.  2005!  That's a lifetime by today's standards. Has he read the Verizon breach study that empirically refutes many of his points? (*See below in the extended post)
As someone who has been on the receiving end of, as well as designed and operated, managed (nee Cloud) security as a service for customers globally, I can point out a number of exceptions to Alistair's assertions regarding the operational security prowess in "the Cloud," this being the most important:

    As "the Cloud" provider adds customers, the capability to secure the infrastructure and the data transiting it, ultimately becomes an issue of scale, too. The more automation that is added, the more false positives show up, especially in light of the fact that the service provider has little or no context of the information, business processes or business impact that their monitoring tools observe.  You can get rid of the low-hanging fruit, but when it comes down to impacting the business, some human gets involved.

The automation that Alistair asserts is one of the most important reasons why Cloud security will be better than non-Cloud security ultimately suffers from the same lack-of-eyeballs problem that the enterprise supposedly has in the first place.

    For all the supposed security experts huddled around glowing monitors in CloudSOC's that are vigilantly watching over "your" applications and data in the Cloud, the dirty little secret is that they rely on basically the same operational and technical capabilities as enterprises deploy today, but without context for what it is they are supposedly protecting.  Some rely on less.  In fact, in some cases, unless they're protecting their own infrastructure, they don't do it at all — it's still *your* job to secure the stacks, they just deal with the "pipes."

We're not all Chicken Littles, Alistair.  Some of us recognize the train when it's heading toward us at full speed and prefer not to be flattened by it, is all.

    /Hoff

    Read more…

    Oh Great Security Spirit In the Cloud: Have You Seen My WAF, IPS, IDS, Firewall…

    December 10th, 2008 4 comments

I'm working on the sequel to my Four Horsemen of the Virtualization Security Apocalypse presentation.

    It's called "The Frogs Who Desperately Wanted a King: An Information Security Fable of Virtualization, RTI and Cloud Computing Security." (Okay, it also has the words "interpretive dance" in it, but that's for another time…)

Many of the interesting issues from the Four Horsemen regarding the deficiencies of security solutions and models in virtualized environments carry over directly to operationalizing security in the Cloud.

As a caveat, let's focus on a cost-burdened "large" enterprise that's involved in moving from physical to virtual to cloud-based services.

    I'm not trying to make a habit of picking on Amazon AWS, but it's just such a fantastic example for my point, which is quite simply:

    While the cloud allows you to obviate the need for physical compute, network and storage infrastructure, it requires a concerted build-out and potential reinvestment in a software-based security infrastructure which, for most large enterprises, does not consist of the same solutions deployed today.

    Why?  Let me paint the picture…

In non-virtualized environments, we generally use dedicated appliances or integrated solutions that provide one or more discrete security functions.

    These solutions are built generally on hardened OS's, sometimes using custom hardware and sometimes COTS boxes which are tarted up.  They are plumbed in between discretely segregated (physical or logical) zones to provide security boundaries defined by arbitrary policies based upon asset classification, role, function, access, etc.  We've been doing this for decades.  It's the devil we know.

    In virtualized environments, we currently experience some real pain when it comes to replicating the same levels of security, performance, resiliency, and scale using software-based virtual appliances to take the place of the hardware versions in our physical networks when we virtualize the interconnects within these zones.

    There are lots of reasons for this, but the important part is realizing that many of the same hardware solutions are simply not available as virtual appliances and even when they are, they are often not 1:1 in terms of functionality or capability.  Again, I've covered this extensively in the Four Horsemen.

So if we abstract this to its cloudy logical conclusion, and use AWS as the "platform" example, we start to face a real problem for an enterprise that has a decade (or more) of security solutions, processes and talent focused on globalizing, standardizing and optimizing its existing security infrastructure and is now being forced to re-evaluate not only the technology selection but the overall architecture and operational model to support it.

    Now, it's really important that you know, dear reader, that I accept that one can definitely deploy security controls instantiated as both network and host-based instances in AWS.  There are loads of options, including the "firewall" provided by AWS.
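As a minimal sketch of that AWS-provided "firewall," here's what driving EC2 security groups from code might look like, assuming the boto Python library; the group name, ports and address ranges are illustrative only, not a recommended policy.

```python
# Sketch only: opening a narrow set of ports via an EC2 security group,
# assuming the boto library. Names, ports and CIDR blocks are illustrative.
import boto

conn = boto.connect_ec2()  # credentials taken from the environment/config

web_sg = conn.create_security_group('web-dmz', 'Front-end web instances')
web_sg.authorize(ip_protocol='tcp', from_port=80, to_port=80, cidr_ip='0.0.0.0/0')
web_sg.authorize(ip_protocol='tcp', from_port=443, to_port=443, cidr_ip='0.0.0.0/0')

# Management access limited to a (hypothetical) corporate address block.
web_sg.authorize(ip_protocol='tcp', from_port=22, to_port=22, cidr_ip='192.0.2.0/24')
```

That covers basic port filtering at the instance boundary; it is not a WAF, IPS or IDS, which is the gap the rest of this post is about.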

However, the problem is that in the case of building an AMI for AWS supporting a particular function (firewall, WAF, IPS, IDS, etc.) you may not have the same solutions available to you given the lack of support for a particular distro, the lack of a "port" to a VA/VM, or issues surrounding custom kernels, communication protocols, hardware, etc.  You may be limited in many cases to relying on open source solutions.

    In fact, when one looks at most of the examples given when describing securing AWS instances, they almost always reference OSS solutions such as Snort, OSSEC, etc.  There's absolutely NOTHING wrong with that, but it's only one dimension.

    That's going to have a profound effect across many dimensions.  In many cases, enterprises have standardized on a particular solution not because it's special from a "security" perspective, but because it's the easiest to manage when you have lots of them and they are supportable 24/7 by vendors with global reach and SLA's that represent the criticality of their function.

    That is NOT to say that OSS solutions are not easy to manage or supportable in this fashion, but I believe it's a valid representation of the state of things.

(Why am I getting so defensive about OSS? 😉)

    Taking it further, and using my favorite PCI in the Cloud argument, what if the web application firewall that you've spent hundreds of thousands of dollars purchasing, tuning and deploying in support of PCI DSS throughout the corporate enterprise simply isn't available as a software module installable in an AMI in the cloud?  Or the firewall?  Or the IPS? 

    In the short term this is a real problem for customers.  In the long term, it's another potential life preserver for security ISV's and an opportunity for emerging startups to think about new ways of solving this problem.

    /Hoff

    Infrastructure 2.0 and Virtualized/Cloud Networking: Wait, Where’s My DNS/DHCP Server Again?

    December 8th, 2008 5 comments

    I read James Urquhart's first blog post written under the Cisco banner today titled "The network: the final frontier for cloud computing" in which he describes the evolving role of "the network" in virtualized and cloud computing environments.

    The gist of his post, which he backs up with examples from Greg Ness' series on Infrastructure 2.0, is that in order to harness the benefits of virtualization and cloud computing, we must automate; from the endpoint to the underlying platforms — including the network — manual processes need to be replaced by automated capabilities:

When was the last time you thought “network” when you heard “cloud computing”? How often have you found yourself working out exactly how you can best utilize network resources in your cloud applications?  Probably never, as to date the network hasn’t registered on most peoples’ cloud radars.

This is understandable, of course, as the early cloud efforts try to push the entire concept of the network into a simple “bandwidth” bucket.  However, is it right? Should the network just play dumb and let all of the intelligence originate at the endpoints?

The writing is on the wall. The next frontier to get explored in depth in the cloud world will be the network, and what the network can do to make cloud computing and virtualization easier for you and your organization.

    If you walked away from James' blog as I did initially, you might be left with the impression that this isn't really about "the network" gaining additional functionality or innovative capabilities, but rather just tarting up the ability to integrate with virtualization platforms and automate it all.

    Doesn't really sound all that sexy, does it.  Well, it's really not, which is why even today in non-virtualized environments we don't have very good automation and most processes still come down to Bob at the helpdesk. Virtualization and cloud are simply giving IT a swift kick in the ass to ensure we get a move on to extract as much efficiency and remove as much cost from IT as possible.

    Don't be fooled by the simplicity of James' post, however, because there's a huge moose lurking under the table instead of on top of it and it goes toward the fundamental crux of the battle brewing between all those parties interested in becoming your next "datacenter OS" provider.

    There exists one catalytic element that produces very divergent perspectives in IT around what, where, why and who automates things and how, and that's the very definition of "the network" in virtualized and cloud models.

    How someone might describe "the network" as either just a "bandwidth bucket" of feeds and speeds or an "intelligent, aware, sentient platform for service delivery" depends upon whether you're really talking about "the network" as a subset or a superset of "the infrastructure" at large.

Greg argues that core network services such as IP address management, DNS, DHCP, etc. are part of the infrastructure and I agree, but given what we see today, I would say that they are decidedly NOT a component of "the network" — they're generally separate and run atop the plumbing.  There's interaction, for sure, but one generally relies upon these third-party service functions to deliver service.  In fact, that's exactly the sort of thing that Greg's company, Infoblox, sells.

    This contributes to part of this definitional quandary.

    Now we have this new virtualization layer injected between the network and the rest of the infrastructure which provides a true lever and frictionless capability for some of this automation but further confuses the definition of "the network" since so much of the movement and delivery of information is now done at this layer and it's not integrated with the traditional hardware-based network.*

    See what I mean in this post titled "The Network Is the Computer…(Is the Network, Is the Computer…)"

    This is exactly why you see Cisco's investment in bringing technologies such as VN-Link and the Nexus-1000v virtual switch to virtualized environments; it homogenizes "the network." It claws back the access layer so they can allow the network teams to manage the network again (and "automate" it) while also getting their hooks deeper into the virtualization layer itself. 

    And that's where this gets interesting to me because in order to truly automate virtualized and cloud computing environments, this means one of three things as it relates to where core/critical infrastructure services live:

    1. They  will continue to be separate as stand-alone applications/appliances or bundled atop an OS
    2. They become absorbed by "the (traditional) network" and extend into the virtualization layer
    3. They get delivered as part of the virtualization layer

    So if you're like most folks and run Microsoft-based "core network services" for things (at least internally) like DNS, DHCP, etc., what does this mean to you?  Well, either you continue as-is via option #1, you transition to integrated services in "the network" via option #2 or you end up with option #3 by the very virtue that you'll upgrade to Windows Server 2008 and Hyper-V anyway.

    SO, this means that the level of integration between, say, Cisco and Microsoft will have to become as strong as it is with VMware in order to support the integration of these services as a "network" function, else they'll continue — in those environments at least — as being a "bandwidth bucket" that provides an environment that isn't really automated.

In order to hit the sweet spot here, Cisco (and other network providers) need to start offering core network services as part of "the network."  This means wrestling them away from the integrated OS solutions or simply buying their way in by acquiring and then integrating these services ($10 says Cisco buys Infoblox…)

    We also see emerging vendors such as Arista Networks who are entering the grid/utility/cloud computing network market with high density, high-throughput, lower cost "cloud networking" switches that are more about (at least initially) bandwidth bucketing and high-speed interconnects rather than integrated and virtualized core services.  We'll see how the extensibility of Arista's EOS affects this strategy in the long term.

    There *is* another option and that's where third party automation, provisioning, and governance suites come in that hope to tame this integration wild west by knitting together this patchwork of solutions. 

    What's old is new again.

    /Hoff

*It should be noted, however, that not all things can or should be virtualized, so physical non-virtualized components pose another interesting challenge, because automating 99% of a complex process isn't a win if the last 1% is a gating function that requires human interaction… you haven't solved the problem, you've just made it a shorter process that still requires Bob at the helpdesk.

     

Categories: Cloud Computing, Virtualization

    CloudSQL – Accessing Datastores in the Sky using SQL…

    December 2nd, 2008 5 comments
    I think this is definitely a precursor of things to come and introduces some really interesting security discussions to be had regarding the portability, privacy and security of datastores in the cloud.

Have you heard of Zoho?  No?  Zoho is a SaaS vendor that describes itself thusly:

    Zoho is a suite of online applications (services) that you sign up for and access from our Website. The applications are free for individuals and some have a subscription fee for organizations. Our vision is to provide our customers (individuals, students, educators, non-profits, small and medium sized businesses) with the most comprehensive set of applications available anywhere (breadth); and for those applications to have enough features (depth) to make your user experience worthwhile.

Today, Zoho announced the availability of CloudSQL, which is middleware that allows customers who use Zoho's SaaS apps to "…access their data on Zoho SaaS applications using SQL queries."

    From their announcement:

    Zoho CloudSQL is a technology that allows developers to interact with business data stored across Zoho Services using the familiar SQL language. In addition, JDBC and ODBC database drivers make writing code a snap – just use the language construct and syntax you would use with a local database instance. Using the latest Web technology no longer requires throwing away years of coding and learning.

    Zoho CloudSQL allows businesses to connect and integrate the data and applications they have in Zoho with the data and applications they have in house, or even with other SaaS services. Unlike other methods for accessing data in the cloud, CloudSQL capitalizes on enterprise developers’ years of knowledge and experience with the widely‐used SQL language. This leads to faster deployments and easier (read: less expensive) integration projects.

    Basically, CloudSQL is interposed between the suite of Zoho applications and the backend datastores and functions as an intermediary receiving SQL queries against the pooled data sets using standard SQL commands and dialects. Click on the diagram below for a better idea of what this looks like.

[Diagram: Zoho CloudSQL sits between the Zoho application suite and the backend datastores]
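To make the "familiar SQL language" claim concrete, here's a minimal sketch of what querying a Zoho-hosted datastore through the advertised ODBC driver might look like from Python; the DSN name, credentials, table and columns are hypothetical and not taken from Zoho's documentation.

```python
# Hypothetical sketch: querying a cloud-hosted Zoho datastore over ODBC as if
# it were a local database. DSN, credentials, table and column names are made up.
import pyodbc

conn = pyodbc.connect("DSN=ZohoCloudSQL;UID=apiuser;PWD=secret")
cursor = conn.cursor()

# Parameterized queries matter just as much (arguably more) when the datastore
# lives off-premises -- SQL injection doesn't care where the tables are hosted.
cursor.execute(
    "SELECT invoice_id, amount FROM Invoices WHERE customer = ?", ("ACME",)
)
for invoice_id, amount in cursor.fetchall():
    print(invoice_id, amount)

conn.close()
```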
What's really interesting about allowing native SQL access is the ability to then allow much easier information interchange between apps/databases in an enterprise's "private cloud(s)" and the Zoho "public" cloud.

    Further, it means that your data is more "portable" as it can be backed up, accessed, and processed by applications other than Zoho's.  Imagine if they were to extend the SQL exposure to other cloud/SaaS providers…this is where it will get really juicy. 

    This sort of thing *will* happen.  Customers will see the absolute utility of exposing their cloud-based datastores and sharing them amongst business partners, much in the spirit of how it's done today, but with the datastores (or chunks of them) located off-premises.

That's all good and exciting, but obviously security questions/concerns immediately surface regarding such things as authentication, encryption, access control, input sanitization, privacy and compliance…

Today our datastores typically live inside the fortress with multiple layers of security and proxied access from applications, shielded from direct access, and yet we still have basic issues with attacks such as SQL injection.  Imagine how much fun we can have with this!

The best I could find regarding security and Zoho came from their FAQ, which doesn't exactly inspire confidence given that they address logical/software security by suggesting that anti-virus software is the best line of defense for protecting your data, that "data encryption" will soon be offered as an "option" and that (implied) SSL will make you secure:

    6. Is my data secured?

    Many people ask us this question. And rightly so; Zoho has invested alot of time and money to ensure that your information is secure and private. We offer security on multiple levels including the physical, software and people/process levels; In fact your data is more secure than walking around with it on a laptop or even on your corporate desktops.

    Physical: Zoho servers and infrastructure are located in the most secure types of data centers that have multiple levels of restrictions for access including: on-premise security guards, security cameras, biometric limited access systems, and no signage to indicate where the buildings are, bullet proof glass, earthquake ratings, etc.

    Hardware: Zoho employs state of the art firewall protection on multiple levels eliminating the possibility of intrusion from outside attacks

    Logical/software protection: Zoho deploys anti-virus software and scans all access 24 x7 for suspicious traffic and viruses or even inside attacks; All of this is managed and logged for auditing purposes.

    Process: Very few Zoho staff have access to either the physical or logical levels of our infrastructure. Your data is therefore secure from inside access; Zoho performs regular vulnerability testing and is constantly enhancing its security at all levels. All data is backed up on multiple servers in multiple locations on a daily basis. This means that in the worst case, if one data center was compromised, your data could be restored from other locations with minimal disruption. We are also working on adding even more security; for example, you will soon be able to select a "data encryption" option to encrypt your data en route to our servers, so that in the unlikely event of your own computer getting hacked while using Zoho, your documents could be securely encrypted and inaccessible without a "certificate" which generally resides on the server away from your computer.

    Fun times ahead, folks.

    /Hoff

    Virtual Jot Pad: The Cloud As a Fluffy Offering In the Consumerization Of IT?

    December 2nd, 2008 1 comment

This is a post that's bigger than a thought on Twitter but almost doesn't deserve a blog post, yet for some reason I just felt the need to write it down.  This may be one of those "well, duh" sorts of posts, but I can't quite verbalize what is tickling my noggin here.

    As far as I can tell, the juicy bits stem from the intersection of cloud cost models, cloud adopter profile by company size/maturity and the concept of the consumerization of IT.

    I think 😉

    This thought was spawned by a couple of interesting blog posts:

    1. James Urquhart's blog titled "The Enterprise barrier-to-exit in cloud computing" and "What is the value of IT convenience" which led me to…
    2. Billy Marshall from rPath and his blog titled "The Virtual Machine Tsunami."

    These blogs are about different things entirely but come full circle around to the same point.

    James first shed some interesting light on the business taxonomy, the sorts of IT use cases and classes of applications and operations that drive businesses and their IT operations to the cloud, distinguishing between what can be described as the economically-driven early adopters of the cloud in SMB's versus mature larger enterprises in his discussion with George Reese from O'Reilly via Twitter:

    George and I were coming at the problem from two different angles. George was talking about many SMB organizations, which really can't justify the cost of building their own IT infrastructure, but have been faced with a choice of doing just that, turning to (expensive and often rigid) managed hosting, or putting a server in a colo space somewhere (and maintaining that server). Not very happy choices.

    Enter the cloud. Now these same businesses can simply grab capacity on demand, start and stop billing at their leisure and get real world class power, virtualization and networking infrastructure without having to put an ounce of thought into it. Yeah, it costs more than simply running a server would cost, but when you add the infrastructure/managed hosting fees/colo leases, cloud almost always looks like the better deal.

    I, on the other hand, was thinking of medium to large enterprises which already own significant data center infrastructure, and already have sunk costs in power, cooling and assorted infrastructure. When looking at this class of business, these sunk costs must be added to server acquisition and operation costs when rationalizing against the costs of gaining the same services from the cloud. In this case, these investments often tip the balance, and it becomes much cheaper to use existing infrastructure (though with some automation) to deliver fixed capacity loads. As I discussed recently, the cloud generally only gets interesting for loads that are not running 24X7.

    This existing investment in infrastructure therefore acts almost as a "barrier-to-exit" for these enterprises when considering moving to the cloud. It seems to me highly ironic, and perhaps somewhat unique, that certain aspects of the cloud computing market will be blazed not by organizations with multiple data centers and thousands upon thousands of servers, but by the little mom-and-pop shop that used to own a couple of servers in a colo somewhere that finally shut them down and turned to Amazon. How cool is that

    That's a really interesting differentiation that hasn't been made as much as it should, quite honestly.  In the marketing madness that has ensued, you get the feeling that everyone, including large enterprises, are rushing willy-nilly to the cloud and outsourcing the majority of their compute loads, not the cloudbursting overflow.

    Billy Marshall's post offers some profound points including one that highlights the oft-reported and oft-harder-to-prove concept of VM sprawl and the so-called "frictionless" model of IT, but with a decidedly cloud perspective. 

    What was really interesting was the little incandescent bulb that began to glow when I read the following after reading James' post:

Amazon EC2 demand continues to skyrocket. It seems that business units are quickly sidestepping those IT departments that have not yet found a way to say “yes” to requests for new capacity due to capital spending constraints and high friction processes for getting applications into production (i.e. the legacy approach of provisioning servers with a general purpose OS and then attempting to install/configure the app to work on the production implementation which is no doubt different than the development environment).

I heard a rumor that a new datacenter in Oregon was underway to support this burgeoning EC2 demand. I also saw our most recent EC2 bill, and I nearly hit the roof. Turns out when you provide frictionless capacity via the hypervisor, virtual machine deployment, and variable cost payment, demand explodes. Trust me.

I've yet to figure out if the notion of frictionless capacity is a good thing or not when your ability to capacity plan is outpaced by a consumption model and a capacity yield that can just continue to climb without constraint.  At what point do the cost savings of infrastructure whose costs were bounded by the resource constraints of physical servers become eclipsed by runaway use?
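As a crude back-of-the-envelope sketch of that crossover question, with entirely hypothetical rates rather than actual AWS or colo pricing:

```python
# Hypothetical numbers only: when does a usage-metered bill blow past the
# fixed cost of owned/colo capacity whose spend was bounded by the hardware?
COLO_FIXED_MONTHLY = 400.00        # owned/colo server: cost is capped, capacity is too
CLOUD_PER_INSTANCE_HOUR = 0.10     # metered rate per instance-hour
HOURS_PER_MONTH = 720

def cloud_bill(avg_instances):
    return avg_instances * CLOUD_PER_INSTANCE_HOUR * HOURS_PER_MONTH

for avg_instances in (1, 5, 20, 100):
    print(f"{avg_instances:>3} avg instances -> ${cloud_bill(avg_instances):>8,.2f}/month "
          f"(vs ${COLO_FIXED_MONTHLY:,.2f} fixed)")
```

With the made-up rates above, the metered bill crosses the fixed cost at roughly five or six average instances, and unlike the physical box, nothing but policy stops it from climbing.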

    I guess I'll have to wait to see his bill 😉

    Back to James' post, he references an interchange on Twitter with George Reese (whose post on "20 Rules for Amazon Cloud Security" I am waiting to fully comment on) in which George commented:

    "IT is a barrier to getting things done for most businesses; the Cloud reduces or eliminates that barrier."

…which is basically the same thing Billy said, in a Nick Carr kind of way.  The key question here is: for whom?  As it relates to the SMB, I'd agree with this statement, but the thing that really sunk in was that the statement just doesn't yet jibe for larger enterprises.  In James' second post, he drives this home:

    I think these examples demonstrate an important decision point for IT organizations, especially during these times of financial strife. What is the value of IT convenience? When is it wise to choose to pay more dollars (or euros, or yen, or whatever) to gain some level of simplicity or focus or comfort? In the case of virtualization, is it always wise to leverage positive economic changes to expand service coverage? In the case of cloud computing, is it always wise to accept relatively high price points per CPU hour over managing your own cheaper compute loads?

    Is the cloud about convenience or true business value?  Is any opportunity to eliminate a barrier — whether that barrier actually acts as a logical check and balance within the system — simply enough to drive business to the cloud?

I know the side-stepping-IT bit has been spoken about ad nauseam within the context of cloud, namely when describing agility, flexibility and economics, but it never really occurred to me that the cloud — much in the way you might talk about an iPhone — is now being marketed itself as another instantiation of the democratization, commoditization and consumerization of IT — almost as an application — and not just a means to an end.

    I think the thing that was interesting to me in looking at this issue from two perspectives is that differentiation between the SMB and the larger enterprise and their respective "how, what and why" cloud use cases are very much different.  That's probably old news to most, but I usually don't think about the SMB in my daily goings-on.

Just like the iPhone and its adoption for "business use," the larger enterprise is exercising discretion in what's being dumped onto the cloud with a more measured approach due, in part, to managing risk and existing sunk costs, while the SMB is running to embrace it at full speed, not necessarily realizing the hidden costs.

    /Hoff

Categories: Cloud Computing

    Application Delivery Control: More Hardware Or Function Of the Hypervisor?

    December 1st, 2008 3 comments

Update: Ooops.  I forgot to announce that I'm once again putting on my Devil's Advocacy cap. It fits nicely and the contrasting color makes my eyes pop. ;)

It should be noted that obviously I recognize that dedicated hardware offers performance and scale capabilities that in many cases are difficult (if not impossible) to replicate in virtualized software instantiations of the same functionality.

However, despite spending the best part of two years raising awareness as to the issues surrounding scalability, resiliency, performance, etc. of security software solutions in virtualized environments via my Four Horsemen of the Virtualization Security Apocalypse presentation, perception is different than reality, and many network capabilities will simply consolidate into the virtualization platforms until the next big swing of the punctuated equilibrium.

This is another classic example of "best of breed" versus "good enough," and in many cases this debate becomes a corner-case argument of speeds and feeds and the context/location of the network topology you're talking about. There's simply no way to sprinkle enough specialized hardware around to get the pervasive autonomics across the entire fabric/cloud without a huge chunk of it existing in the underlying virtualization platform or underlying network infrastructure.

    THIS is the real scaling problem that software can address (by penetration) that specialized hardware cannot.

    There will always be a need for dedicated hardware for specific needs, and if you have an infrastructure service issue that requires massive hardware to support traffic loads until the sophistication and technology within the virtualization layer catches up, by all means use it!  In fact, just today after writing this piece Joyent announced they use f5 BigIP's to power their IaaS cloud service…

    In the longer term, however, application delivery control (ADC) will ultimately become a feature of the virtual networking stack provided by software as part of a larger provisioning/governance/autonomics challenge provided by the virtualization layer.  If you're going to get as close to this new atomic unit of measurement in the VM, you're going to have to decide where the network ends and the virtualization layer begins…across every cloud you expect to host your apps and those they may transit.


    I've been reading Lori McVittie's f5 DevCentral blog for quite some time.  She and Greg Ness have been feeding off one another's commentary in their discussion on "Infrastructure 2.0" and the unique set of challenges that the dynamic nature of virtualization and cloud computing place on "the network" and the corresponding service layers that tie applications and infrastructure together.

The interesting thing to me is that while I do not disagree that the infrastructure must adapt to the liquidity, agility and flexibility enabled by virtualization and become more instrumented as to the things running atop it, much of the functionality Greg and Lori allude to will ultimately become a function of the virtualization and cloud layers themselves*.

One of the more interesting memes is the one Lori summarized this morning in her post titled "Managing Virtual Infrastructure Requires an Application Centric Approach," wherein she lays out the case for infrastructure becoming "application" centric based upon the "highly dynamic" nature of virtualized and cloud computing environments:

    …when applications are decoupled from the servers on which they are deployed and the network infrastructure that supports and delivers them, they cannot be effectively managed unless they are recognized as individual components themselves.

    Traditional infrastructure and its associated management intrinsically ties applications to servers and servers to IP addresses and IP addresses to switches and routers. This is a tightly coupled model that leaves very little room to address the dynamic nature of a virtual infrastructure such as those most often seen in cloud computing models.

    We've watched as SOA was rapidly adopted and organizations realized the benefits of a loosely coupled application architecture. We've watched the explosion of virtualization and the excitement of de-coupling applications from their underlying server infrastructure. But in the network infrastructure space, we still see applications tied to servers tied to IP addresses tied to switches and routers.

    That model is broken in a virtual, dynamic infrastructure because applications are no longer bound to servers or IP addresses. They can be anywhere at any time, and infrastructure and management systems that insist on binding the two together are simply going to impede progress and make managing that virtual infrastructure even more painful.

    It's all about the application. Finally.

    …and yet the applications themselves, despite how integrated they may be, suffer from the same horizontal management problem as the network today does.  So I'm not so sure about the finality of the "it's all about the application" because we haven't even solved the "virtual infrastructure management" issues yet.

    Bridging the gap between where we are today and the infrastructure 2.0/application-centric focus of tomorrow is illustrated nicely by Billy Marshall from rPath in his post titled "The Virtual Machine Tsunami," in which he describes how we're really still stuck being VM-centric as the unit measure of application management:

Bottom line, we are all facing an impending tsunami of VMs unleashed by an unprecedented liquidity in system capacity which is enabled by hypervisor based cloud computing. When the virtual machine becomes the unit of application management, extending the legacy, horizontal approaches for management built upon the concept of a physical host with a general purpose OS simply will not scale. The costs will skyrocket.

The new approach will have vertical management capability based upon the concept of an application as a coordinated set of version managed VMs. This approach is much more scalable for 2 reasons. First, the operating system required to support an application inside a VM is one-tenth the size of an operating system as a general purpose host atop a server. One tenth the footprint means one tenth the management burden – along with some related significant decrease in the system resources required to host the OS itself (memory, CPU, etc.). Second, strong version management across the combined elements of the application and the system software that supports it within the VM eliminates the unintended consequences associated with change. These unintended consequences yield massive expenses for testing and certification when new code is promoted from development to production across each horizontal layer (OS, middleware, application). Strong version management across these layers within an isolated VM eliminates these massive expenses.

    So we still have all the problems of managing the applications atomically, but I think there's some general agreement between these two depictions.

    However, where it gets interesting is where Lori essentially paints the case that "the network" today is unable to properly provide for the delivery of applications:

    And that's what makes application delivery focused solutions so important to both virtualization and cloud computing models in which virtualization plays a large enabling role.

Because application delivery controllers are more platforms than they are devices; they are programmable, adaptable, and internally focused on application delivery, scalability, and security. They are capable of dealing with the demands that a virtualized application infrastructure places on the entire delivery infrastructure. Where simple load balancing fails to adapt dynamically to the ever changing internal network of applications both virtual and non-virtual, application delivery excels.

    It is capable of monitoring, intelligently, the availability of applications not only in terms of whether it is up or down, but where it currently resides within the data center. Application delivery solutions are loosely coupled, and like SOA-based solutions they rely on real-time information about infrastructure and applications to determine how best to distribute requests, whether that's within the confines of a single data center or fifteen data centers.

    Application delivery controllers focus on distributing requests to applications, not servers or IP addresses, and they are capable of optimizing and securing both requests and responses based on the application as well as the network.

    They are the solution that bridges the gap that lies between applications and network infrastructure, and enables the agility necessary to build a scalable, dynamic delivery system suitable for virtualization and cloud computing.

This is where I start to squint a little, because Lori's really taking the notion of "application intelligence" and painting what amounts to a router/switch in an application delivery controller as a "platform" as she attempts to drive a wedge between an ADC and "the network."

    Besides the fact that "the network" is also rapidly evolving to adapt to this more loosely-coupled model and the virtualization layer, the traditional networking functions and the infrastructure service layers are becoming more integrated and aware thanks to the homgenizing effect of the hypervisor, I'll ask the question I asked Lori on Twitter this morning:


    Why won't this ADC functionality simply show up in the hypervisor?  If you ask me, that's exactly the goal.  vCloud, anyone?  Amazon EC2?  Azure?

    If we take the example of Cisco and VMware, the coupled vision of the networking and virtualization 800 lb gorillas is exactly the same as she pens above; but it goes further because it addresses the end-to-end orchestration of infrastructure across the network, compute and storage fabrics.

    So, why do we need yet another layer of network routers/switches called "application delivery controllers" as opposed to having this capability baked into the virtualization layer or ultimately the network itself?

That's the whole point of cloud computing and virtualization, right?  To decouple the resources from the hardware delivering them while putting more and more of that functionality into the virtualization layer?

    So, can you really make the case for deploying more "application-centric" routers/switches (which is what an application delivery controller is) regardless of how aware it may be?

    /Hoff

    Cloud Computing Security: From DDoS (Distributed Denial Of Service) to EDoS (Economic Denial of Sustainability)

    November 27th, 2008 12 comments

It's Thanksgiving here in the U.S., so in between baking, roasting and watching Rick Astley rickroll millions in the Macy's Thanksgiving Day Parade, I had a thought about how the utility and agility of cloud computing models such as Amazon AWS (EC2/S3), and the pricing models that go along with them, can actually pose a very nasty risk to those who use the cloud to provide service.

    That thought — in between vigorous whisking during cranberry sauce construction — got me noodling about how the pay-as-you-go model could be used for nefarious means.

Specifically, this usage-based model potentially enables $evil_person, who knows that a service is cloud-based, to manipulate service usage billing by orders of magnitude in ways that could easily be disguised as legitimate use of the service but drive costs to unmanageable levels.

    If you take Amazon's AWS usage-based pricing model (check out the cost calculator here,) one might envision that instead of worrying about a lack of resources, the elasticity of the cloud could actually provide a surplus of compute, network and storage utility that could be just as bad as a deficit.

Instead of worrying about Distributed Denial of Service (DDoS) attacks from botnets and the like, imagine having to worry about delicately balancing forecasted need against capabilities like Cloudbursting to deal with a botnet designed to make seemingly legitimate requests for service in order to generate an economic denial of sustainability (EDoS) — where the dynamism of the infrastructure allows scaling of service beyond the economic means of the vendor to pay their cloud-based service bills.
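A toy model of that arithmetic, using made-up rates rather than Amazon's published pricing, shows how quickly "seemingly legitimate" requests turn into a bill:

```python
# Toy EDoS arithmetic with hypothetical rates: the same meter that makes
# pay-as-you-go attractive lets an attacker spend your money for you.
REQS_PER_BOT_PER_HOUR = 600        # slow enough per bot to look like real users
GB_PER_REQUEST = 0.0005            # ~500 KB response
PRICE_PER_GB_OUT = 0.17            # hypothetical data-transfer rate
PRICE_PER_INSTANCE_HOUR = 0.10     # hypothetical compute rate
REQS_PER_INSTANCE_HOUR = 100000    # autoscaling adds instances as load grows

def monthly_edos_cost(bots, hours=720):
    requests = bots * REQS_PER_BOT_PER_HOUR * hours
    bandwidth_cost = requests * GB_PER_REQUEST * PRICE_PER_GB_OUT
    compute_cost = (requests / REQS_PER_INSTANCE_HOUR) * PRICE_PER_INSTANCE_HOUR
    return bandwidth_cost + compute_cost

for bots in (1000, 10000, 100000):
    print(f"{bots:>6} bots -> ${monthly_edos_cost(bots):>12,.0f} per month")
```

Every one of those requests looks like revenue-generating traffic to the autoscaler; nothing in the billing meter distinguishes a customer from a bot.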

    Imagine the shock of realizing that the outsourcing to the cloud to reduce CapEx and move to an OpEx model just meant that while availability, confidentiality and integrity of your service and assets are solid, your sustainability and survivability are threatened.

    I know there exists the ability to control instance sprawl and constrain costs, but imagine if this is managed improperly or inexactly because we can't distinguish between legitimate and targeted resource consumption from an "attack."

    If you're in the business of ensuring availability of service for large events on the web for a timed event, are you really going to limit service when you think it might drive revenues?

I think this is where service governors will have to get much more intelligent regarding how services are being consumed and how that affects the transaction supply chain, and embed logic that takes into account financial risk when scaling to ensure EDoS doesn't kill you.

    I can't say that I haven't had similar concerns when dealing with scalability and capacity planning in hosted or dedicated co-location environments, but generally storage and compute were not billable service options I had to worry about, only network bandwidth.

    Back to the mashed potatoes.

    /Hoff

    The Big Four Cloud Computing Providers: Security Compared (Part I)

    November 26th, 2008 1 comment

    James Urquhart posted a summary a week or so ago of what he described as the "Big 4" players in Cloud Computing.  It was a slightly humorous pass at describing their approaches and offerings:

    Below is a table that lists these key players, and compares their offerings from the perspective of four core defining aspects of clouds. As this is a comparison of apples to oranges to grapefruit to perhaps pastrami, it is not meant to be a ranking of the participants, nor a judgement of when to choose one over the other. Instead, what I hope to do here is to give a working sysadmin's glimpse into what these four clouds are about, and why they are each unique approaches to enterprise cloud computing in their own right.

    James provided quite a bit more (serious) detail in the text below his table which I present to you here, tarted up with a column I've added and James left off titled "Security." 

It's written in the same spirit as James' original, so feel free to take this with an equally well-provisioned grain of NaCl.  I'll be adding my own perfunctory comments with a little more detail shortly:

[Table: the Big 4 cloud providers compared, with a "Security" column added]

The point here is that the quantification of what "security" means in the cloud is as abstracted and varied as the platforms that provide the service.  We're essentially being asked to take for granted and trust that the underlying mechanicals are sound and secure while not knowing where or what they are.

    We don't do that with our physically-tethered operating systems today, so why should we do so with virtualization platform hypervisors and the infrastructure "data center operating systems" of the cloud?  The transparency provided by dedicated infrastructure is being obscured by virtualization and the fog of the cloud.  It's a squeezing the balloon problem.

And as far as the argument goes toward suggesting that this is no different than what we deal with in terms of SaaS today, the difference between what we might define as legacy SaaS and "cloud" is that generally it's someone else's apps and your data in the former (ye olde ASP model.)

    In the case of the "cloud," it could be a mixture of applications and data, some of which you own, some you don't and some you're simply not even aware of, perhaps running in part on your infrastructure and someone elses'.

    It should be noted also that not all cloud providers (excluding those above) even own and operate the platforms they provide you service on…they, in turn, could be utilizing shared infrastructure to provide you service, so cross-pollination of service provisioning could affect portability, reliability and security.

    That is why the Big4 above stand up their own multi-billion dollar data centers; they keep the architecture proprietary so you don't have to; lots of little clouds everywhere.

    /Hoff

P.S. If you're involved with platform security at any of the providers above, do contact me, because I'm going to be expounding upon the security "layers" of each of these providers in as much detail as I can shortly.  I'd suggest you might be interested in assuring it's as complete and accurate as possible 😉