Jeff Bardin over on the CSO blog pitched an interesting stake in the ground when he posited "Connectivity As A Utility: Where are My Clean Pipes?"
Specifically, Jeff expects that his (corporate?) Internet service functions in the same manner as his telephone service via something similar to a "do not call list." Basically, he opts out by placing himself on the no-call list and telemarketers cease to call. Others might liken it to turning on a tap and getting clean, potable water; you pay for a utility and expect it to be usable. All of it.
Many telecommunications providers want to charge you for having clean pipes, deploying a suite of DDoS services that you have to buy to enhance your security posture. Protection of last mile bandwidth is very key to network availability as well as confidentiality and integrity. If I am subscribing for a full T1, shouldn't I get the full T1 as part of the price and not just a segment of the T1? Why do I have to pay for the spam, probes, scans, and malicious activity that my telecommunications service provider should prevent at 3 miles out versus my having to subscribe to another service to attain clean pipes at my doorstep?
I think that most people would agree with the concept of clean pipes in principle. I can't think of any other utility where the service levels delivered are taken with such a lackadaisical, best-effort approach and where the consumer can almost always expect that some amount (if not the majority) of the utility is unusable.
Over the last year, I've met with many of the largest ISPs, MSSPs, telcos and mobile operators on the planet, and all are in some phase of deploying some sort of clean pipes variant. Gartner even predicts that a large amount of security functionality will move "into the cloud."
In terms of adoption, EMEA is leaps and bounds ahead of the US and APAC in these sorts of services and will continue to be. The relative oligopolies associated with smaller nation states allow for much more agile and flexible service definition and roll-outs — no less complex, mind you. It's incredible to see just how wide the gap is between what consumers (SME/SMB/Mobile as well as large enterprise) are offered in EMEA as opposed to the good-ol' U S of A.
However, the stark reality is that the implementation of clean pipes by your service provider(s) comes down to a balance of two issues: efficacy and economics, with each varying dramatically with the market being served; the large enterprise’s expectations and requirements look very, very different from the SME/SMB.
Let’s take a look at both of these elements.
ECONOMICS
If you had asked most service providers about so-called clean pipes as recently as a year ago, you could have expected an answer based upon a "selfish" initiative aimed at stopping wasteful bandwidth usage upstream in the service provider's network, not really protecting the consumer.
The main focus here is really on DDoS and viri/worm propagation. Today, the closest you’ll come to "clean pipes" is usually some combination of the following services deployed both (still) at the customer premises as well as somewhere upstream:
- DoS/DDoS
- Anti-Virus
- Anti-Spam
- URL Filtering/Parental Controls
- Managed Firewall/IDS/IPS
What is interesting about these services is that they basically define the same functions you can now get in those small little UTM boxes that consolidate security functionality at the "perimeter." The capital cost of these devices and the operational levies associated with their upkeep are pretty comparable in the SME/SMB space, and when you balance what you get in "good enough" services for this market against the overall availability of these "in the cloud" offerings, UTM makes more sense for many in the near term.
For the large enterprise, the story is different. Outsourcing some level of security to an MSSP (or perhaps even the entire operation), or moving some amount of it upstream, is a matter of core competence: letting internal teams focus on the things that matter most while the low-hanging fruit gets filtered out and monitored by someone else. I describe that as filtering out the lumps. Some enormous companies have outsourced not only their security functions but their entire IT operations and data center assets in this manner. It's not pretty, but it works.
I’m not sure they are any more secure than they were before, however. The risk simply was transferred whilst the tolerance/appetite for it didn’t change at all. Puzzling.
Is it really wrong to think that companies (you’ll notice I said companies, not "people" in the general sense) should pay for clean pipes? I don’t think it is. The reality is that for non-commercial subscribers such as home users, broadband or mobile users, some amount of bandwidth hygiene should be free — the potable water approach.
I think, however, that should a company which expects elevated service levels and commensurate guarantees of such, want more secure connectivity, they can expect to ante up. Why? Because the investment required to deliver this sort of service costs a LOT of money — both to spin up and to instantiate over time. You’re going to have to pay for that somewhere.
I very much like Jeff’s statistics:
We stop on average for our organization nearly 600 million malicious emails per year at our doorstep averaging 2.8 gigabytes of garbage per day. You add it up and we are looking at nearly a terabyte of malicious email we have to stop. Now add in probes and scans against HTTP and HTTPS sites and the number continues to skyrocket.
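As a quick back-of-the-envelope sanity check (my own arithmetic, not Jeff's), the numbers do hang together:

```python
# Rough check of the figures quoted above (2.8 GB/day of junk, ~600M messages/year).
malicious_emails_per_year = 600_000_000
garbage_gb_per_day = 2.8

garbage_gb_per_year = garbage_gb_per_day * 365               # ~1,022 GB
garbage_tb_per_year = garbage_gb_per_year / 1024             # ~1.0 TB, i.e. "nearly a terabyte"
avg_kb_per_message = garbage_gb_per_year * 1024 * 1024 / malicious_emails_per_year

print(f"~{garbage_tb_per_year:.2f} TB of garbage per year")   # ~1.00 TB
print(f"~{avg_kb_per_message:.1f} KB per malicious message")  # ~1.8 KB
```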
Again, even though Jeff's organization isn't small by any means, the stuff he's complaining about here is really the low-hanging fruit. It doesn't make a dent in the targeted, malicious and financially-impacting security threats that really demand a level of service no service provider will be able to deliver without a huge cost premium.
I won't bore you with the details, but the level of high availability, resilience, performance, manageability, and provisioning required to deliver even this sort of service is enormous. Most vendors simply can't do it, and most service providers are slow to invest in proprietary solutions that won't scale economically with the operational models in place.
Interestingly, vendors such as McAfee announced, as recently as 2005 and with much fanfare, that they were going to deliver technology, services and a united consortium of participating service providers with the following lofty clean pipe goals (besides selling more product, that is):
The initiative is one part of a major product and services push from McAfee, which is developing its next generation of carrier-grade security appliances and ramping up its enterprise security offerings with NAC and secure content management product releases planned for the first half of next year, said Vatsal Sonecha, vice president of market development and strategic alliances at McAfee, in Santa Clara, Calif.

Clean Pipes will be a major expansion of McAfee's managed services offerings. The company will sell managed intrusion prevention; secure content management; vulnerability management; malware protection, including anti-virus, anti-spam and anti-spyware services; and mobile device security, Sonecha said.

McAfee is working with Cable and Wireless PLC, British Telecommunications PLC (British Telecom), Telefónica SA and China Network Communications (China Netcom) to tailor its offerings through an invitation-only group it calls the Clean Pipes Consortium.
http://www.eweek.com/article2/0,1895,1855188,00.asp
Look at all those services! What have they delivered as a service in the cloud or clean pipes? Nada.
The chassis-based products which were to deliver these services never materialized and neither did the services. Why? Because it's really damned hard to do correctly. Just ask Inkra, Nexsi, CoSine, etc. Or you can ask me. The difference is, we're still in business and they're not. It's interesting to note that every one of those "consortium members," with the exception of Cable and Wireless, is a Crossbeam customer. Go figure.
EFFICACY
Once the provider starts filtering at the ingress/egress, one must trust that the filtering itself won't have an impact on performance — or on confidentiality, integrity and availability. Truth be told, as simple as it seems, it's not just about raw bandwidth. Service levels must be maintained, and the moment something that is expected doesn't make its way down the pipe, someone will be screaming bloody murder for "slightly clean" pipes.
Ask me how I know. I’ve lived through inconsistent application of policies, non-logged protocol filtering, dropped traffic and asymmetric issues introduced by on-prem and in-the-cloud MSSP offerings. Once the filtering moves past your prem. as a customer, your visibility does too. Those fancy dashboards don’t do a damned bit of good, either. Ever consider the forensic impact?
Today, if you asked a service provider what constitutes their approach to clean pipes, most will refer you back to the same list I referenced above:
- DoS/DDoS
- Anti-Virus
- Anti-Spam
- URL Filtering/Parental Controls
- Managed Firewall/IDS/IPS
The problem is that most of these solutions are disparate point products run by different business units at different parts of the network. Most are still aimed at the perimeter service — it’s just that the perimeter has moved outward a notch in the belt.
Look, for the SME/SMB (or mobile user), "good enough" is, for the most part, good enough. Having an upstream provider filter out a bunch of spam and viri is a good thing, and most firewall rules in place in the SME/SMB block everything but a few inbound ports to DMZ hosts (if there are any) and allow everything from the inside to go out. It's not very complicated, and it doesn't take a rocket scientist to see, from the perspective of what is at risk, that this service doesn't pay off handsomely.
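To show just how simple that typical policy really is, here's a rough sketch of the kind of ruleset I mean, boiled down to code. This is my own illustration; the hosts and ports are hypothetical, not any particular customer's configuration:

```python
# Illustrative only: the "typical" SME/SMB edge policy described above.
# Default-deny inbound except a few published DMZ services; allow everything outbound.
DMZ_ALLOWED = {
    ("203.0.113.10", 80),    # DMZ web server, HTTP
    ("203.0.113.10", 443),   # DMZ web server, HTTPS
    ("203.0.113.25", 25),    # DMZ mail relay, SMTP
}

def allowed(direction: str, dst_ip: str, dst_port: int) -> bool:
    if direction == "outbound":
        return True                               # everything from the inside goes out
    return (dst_ip, dst_port) in DMZ_ALLOWED      # inbound: only the published DMZ services

print(allowed("inbound", "203.0.113.10", 443))    # True  -- published DMZ service
print(allowed("inbound", "198.51.100.7", 3389))   # False -- default deny
print(allowed("outbound", "8.8.8.8", 53))         # True  -- anything outbound
```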
For the large enterprise, I'd say that if you are going to expect that operational service levels will be met, think again. What happens when you introduce web services, SOA and heavy XML onto externally-exposed network stubs? What happens when Web 2/3/4.x technologies demand more and more security layers deployed alongside the mechanics and messaging of the service?
You can expect problems, and the lack of transparency will be an issue in all but the simplest of cases.
Think your third party due diligence requirements are heady now? Wait until this little transference of risk gets analyzed when something bad happens — and it will. Oh how quickly the pendulum will swing back to managing this stuff in-house again.
This model doesn’t scale and it doesn’t address the underlying deficiencies in the most critical elements of the chain: applications, databases and end-point threats such as co-opted clients as unwilling botnet participants.
But to Jeff’s point, if he didn’t have to spend money on the small stuff above, he could probably spend it elsewhere where he needs it most.
I think services in the cloud/clean pipes makes a lot of sense. I’d sure as hell like to invest less in commoditizing functions at the perimeter and on my desktop. I’m just not sure we’re going to get there anytime soon.
/Hoff
*Image Credit: CleanPipes
Thomas and I were barking at each other regarding something last night and today he left a salient and thought-provoking comment that provided a very concise, pragmatic and objective summation of the embedded vs. overlay security quagmire:
I couldn’t agree more. Most of the security components today, including those that run in our little security ecosystem, really don’t intercommunicate. There is no shared understanding of telemetry or instrumentation and there’s certainly little or no correlation of threats, vulnerabilities, risk or disposition.
The problem is bad inasmuch as even best-of-breed solutions usually require box sprawl and stacking and don't necessarily provide for a more secure posture, especially within the context of another of Thomas' interesting posts on defense in depth/mesh…
That's changing, however. Our latest generation of NPMs (Network Processing Modules) allows discrete security ISVs (which run on intelligently load-balanced Application Processor Modules — Intel blades in the same chassis) to interact with and control the network hardware through defined APIs. This provides the first step toward that common telemetry: while application A doesn't need to know about the specifics of application B, they can functionally interact based upon the common output of disposition and/or classification of the flows between them.

Later, they'll perhaps be able to control each other through the same set of APIs.
So, I don't think we're going to solve the interoperability issue completely anytime soon, in the sense that we won't go from 0 to 100% overnight, but I think that the consolidation of these functions into smaller footprints that allow for intelligent traffic classification and disposition is a good first step.
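To make the idea of a shared disposition/classification hand-off concrete, here's a toy sketch of what that kind of contract between co-resident security applications might look like. This is purely illustrative on my part: the names and structure are hypothetical, not the actual NPM/APM APIs.

```python
# Hypothetical illustration of a shared flow-disposition contract between
# security applications on a consolidated platform. All names are invented.
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    FORWARD = "forward"        # pass the flow along
    DROP = "drop"              # discard at the network layer
    QUARANTINE = "quarantine"  # divert for deeper inspection

@dataclass
class FlowVerdict:
    flow_id: str
    classification: str        # e.g. "smtp", "http", "unknown-binary"
    disposition: Disposition
    source_app: str            # which application rendered the verdict

def firewall_verdict(flow_id: str) -> FlowVerdict:
    # Application A publishes only its classification and disposition...
    return FlowVerdict(flow_id, "smtp", Disposition.FORWARD, "fw")

def antispam_handle(verdict: FlowVerdict) -> FlowVerdict:
    # ...so application B can act on that shared output without knowing
    # anything about application A's internals.
    if verdict.classification == "smtp" and verdict.disposition is Disposition.FORWARD:
        return FlowVerdict(verdict.flow_id, "smtp", Disposition.QUARANTINE, "antispam")
    return verdict

print(antispam_handle(firewall_verdict("flow-42")))
```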
I don’t expect Thomas to agree or even resonate with my statements below, but I found his explanation of the problem space to be dead on. Here’s my explanation of an incremental step towards solving some of the bigger classes of problems in that space which I believe hinges on consolidation of security functionality first and foremost.
The three options for reducing this footprint are as follows:
Option 1: Security embedded in the network/switching infrastructure

Pros: Supposedly fewer boxes, better communication between components and good coverage, given the fact that the security stuff is in the infrastructure. One vendor from which you get your infrastructure and your protection. Correlation across the network "fabric" will ultimately allow for near-time zoning and quarantine. Single management pane across the enterprise for availability and security. Did I mention the platform is already there?
Cons: You rely on a single vendor's version of the truth, and you get closer to a monoculture wherein the safeguards protecting the network put at risk the very assets they seek to protect, because there is no separation of "church and state." Also, the expertise and coverage, as well as the agility of product development based upon evolving threats, are hampered by the many moving parts in this machine. Utility vs. security? Utility wins. Good enough vs. best of breed? Probably somewhere in between.
Option 2: Security overlaid via standalone/UTM appliances

Pros: Reduced footprint, consolidated functionality, single management pane across multiple security functions within the box. Usually excels in one specific area like AV and can add "good enough" functionality as the needs arise. Software moves up and down the scalability stack depending upon the performance needed.
Cons: You again rely on a single vendor’s version of the truth. These boxes tend to want to replace switching infrastructure. Many of these platforms utilize ASICs to accelerate certain functions with the bulk of functionality residing in pure software with limited application or network-level intelligence. You pay the price in terms of performance and scale given the architectures of these boxes which do not easily allow for the addition of new classes of solutions to thwart new threats. Not really routers/switches.
Option 3: Security consolidated on an open, blade-based platform

Pros: The customer defines best of breed and can rapidly add new security functionality at a speed that keeps pace with the threats the customer needs to mitigate. Utilizing a scalable and high-performance switching architecture combined with all the benefits of an open blade-based security application/appliance delivery mechanism gives the best of all worlds: self-healing, highly resilient, high performance and highly available, utilizing a hardened Linux OS across load-balanced, virtualized security applications running on optimized hardware.
Cons: Currently based upon proprietary (even though Intel reference design) hardware for the application processing, while also utilizing a proprietary network switching fabric and load balancing. Can only offer software as quickly as it can be adapted and tested on the platforms. No ASICs means small-packet performance at 64-byte zero loss isn't as high as that of ASIC-based packet-forwarding engines. No single pane of management.
I think that option #3 is a damned good start towards solving the consolidation issues whilst balancing the need to overlay synergistically with the network infrastructure. You're not locked into a single vendor's version of the truth, and although the hardware may be "proprietary," the operating system and choice of software are not. You can choose from COTS, Open Source or write your own, all on a scalable platform that is just as much a collapsed switching/routing platform as it is a consolidated blade server.
I think it has the best chance of evolving to solve more classes of problems than the other two at a rate and level of cost-effectiveness balanced with higher efficacy due to best of breed.
This, of course, depends upon how high the level of integration is between the apps — or at least their dispositions. We’re working very, very hard on that.
At any rate, Thomas ended with:
I like NAT. I think this is Paul Francis. The IETF has been hijacked by aliens, actually, and I’m getting a new tattoo: