I Think Cobia’s a Great Idea…Despite Shimel’s Rabid Frothing to the Contrary…
[Ed: I want to add something here. I think people should pay attention to Cobia for lots of reasons; some of them are apparent and others cause eyebrows and shoulders to shrug. Just as when Astaro announced their "Virtual Security Appliance" (which I barfed all over because of its egregiously overarching claims of revolutionary impact on the security market), one must consider the audience and motivation for creating a "product" like this.

I think folks should pay attention to Cobia because it continues to provoke discussion and debate surrounding where, how and why security is positioned in the network, not to mention stirring interesting discussions regarding the definition of Open Source…]
—
Look, I think Cobia is compelling, creative, valuable and very interesting and I think people should pay attention to it. I think it’s a great idea and I know that Mitchell, Alan and Martin (and the rest of the team) will make it successful.
Alan’s statements to the contrary are just wrong and needlessly controversial, unfortunately at the expense of a reasonable debate on an issue central to security today. I love him, but I suggest he needs Ritalin today!
The SME/SMB market is ripe for this sort of utility, but while the packaging and components are put together in new and interesting ways, the underlying framework is not. That’s not a bad thing, but forging yet another market classification in an already fractured industry is potentially difficult for everyone.
The WhistleJet from 1999 was built on a very similar model. Sure, it wasn’t open source and it didn’t run on a VM, but the model was much the same.
I really didn’t want to bring this up, because it seems contrived and snarky at this point, but it’s interesting that much of what is being presented with Cobia is already done in our boxes. I have no interest in starting a pissing match, because there’s no reason to: Cobia serves a different marketspace than we do, and blending utility applications (even though we can) with dedicated security applications isn’t in our interest or business model.
Mitchell even sees some value in running Cobia on Crossbeam.
Again, I think Cobia is an interesting idea and well-timed for the SME/SMB. I think it’s very cool and if you’re in the market for this solution you should definitely look at it.
I’m done arguing about something I wasn’t arguing about in the first place.
/Hoff
On Flying Pigs, DNSSEC, and embedded versus overlaid security…
I found Thomas Ptacek’s comments regarding DNSSEC deliciously ironic, not for anything directly related to secure DNS, but rather for a point he made while substantiating his position on DNSSEC and describing the intelligence (or lack thereof) of the network and application layers.
This may have just been oversight on his part, but it occurs to me that I’ve witnessed something on the order of a polar magnetic inversion of sorts. Or not. Maybe it’s the coffee. Ethiopian Yirgacheffe does that to me.
Specifically, Thomas and I have debated previously about this topic and my contention is that the network plumbing ought to be fast, reliable, resilient and dumb whilst elements such as security and applications should make up a service layer of intelligence running atop the pipes.
Thomas’ assertions focus on the manifest destiny that Cisco will rule the interconnected universe and that security, amongst other things, will — and more importantly should — become absorbed into and provided by the network switches and routers.
While Thomas’ arguments below are admittedly regarding the "Internet" versus the "Intranet," I maintain that the issues are the same. It seems that his statements below, which appear to endorse the "…end-to-end argument in system design" as the "…fundamental design principle of the Internet," are at odds with his previous aspersions regarding my belief. Check out the exchange quoted below.
Here’s what Thomas said in "A Case Against DNSSEC" (A Matasano Miniseries):
…You know what? I don’t even agree in principle. DNSSEC is a bad thing, even if it does work.

How could that possibly be?

It violates a fundamental design principle of the Internet.

Nonsense. DNSSEC was designed and endorsed by several of the architects of the Internet. What principle would they be violating?

The end-to-end argument in system design. It says that you want to keep the Internet dumb and the applications smart. But DNSSEC does the opposite. It says, "Applications aren’t smart enough to provide security, and end-users pay the price. So we’re going to bake security into the infrastructure."
I could have sworn that the last bit quoted above is exactly what Thomas used to say. Beautiful. If Thomas truly agrees with this axiom, and that indeed the Internet (the plumbing) is supposed to be dumb and the applications (service layer) smart, then I suggest he revisit his rants about how embedding security in the network is a good idea, since it invalidates the very "foundation" of the Internet.
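For the curious, the "bake security into the infrastructure" part is, on the wire, remarkably small: a client signals that it wants DNSSEC material back by setting the DO (DNSSEC OK) flag in an EDNS0 OPT record appended to an ordinary query. Here’s a minimal sketch (standard library only, no network I/O, function names are mine) that builds such a query by hand:

```python
import struct

def build_dnssec_query(qname: str, txid: int = 0x1234) -> bytes:
    """Build a raw DNS A-record query carrying an EDNS0 OPT record
    with the DO (DNSSEC OK) bit set -- the wire-level signal that
    the client wants DNSSEC material (RRSIGs) in the response."""
    # Header: id, flags (RD=1), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=1
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 1)
    # Question: length-prefixed labels, then QTYPE=A(1), QCLASS=IN(1)
    labels = b"".join(
        bytes([len(part)]) + part.encode("ascii")
        for part in qname.rstrip(".").split(".")
    )
    question = labels + b"\x00" + struct.pack(">HH", 1, 1)
    # OPT RR: root name, TYPE=OPT(41), CLASS=UDP payload size (4096),
    # then ext-RCODE=0, version=0, flags=0x8000 (the DO bit), RDLEN=0
    opt = b"\x00" + struct.pack(">HHBBHH", 41, 4096, 0, 0, 0x8000, 0)
    return header + question + opt

def do_bit_set(query: bytes) -> bool:
    """Read the DO bit back out of the trailing OPT record's flags
    (the two bytes just before the zero RDLEN)."""
    flags = struct.unpack(">H", query[-4:-2])[0]
    return bool(flags & 0x8000)
```

Send that datagram to port 53 of a validating resolver and you get RRSIGs back; the point is simply that "security in the infrastructure" here boils down to one flag bit, with all the cryptographic plumbing hiding behind it.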
I wonder what that’ll do to internal networks?
That’s all. CSI is on.
/Hoff
(Written @ Home drinking Yirgacheffe watching UFC re-runs)
If it walks like a duck, and quacks like a duck, it must be…?
Seriously, this really wasn’t a thread about NAC. It’s a great soundbite to get people chatting (arguing) but there’s a bit more to it than that. I didn’t really mean to offend those NAC-Addicts out there.
My last post was an exploration of security functions and their status (or even migration/transformation) as either a market or a feature included in a larger set of features. Alan Shimel responded to my comments; specifically, to my opinion that NAC is now rapidly becoming a feature and won’t be a competitive market for much longer.
Always the quick wit, Alan suggested that UTM is a "technology" that is going to become a feature, much like my description of NAC’s fate. Besides the fact that UTM isn’t a technology but rather a consolidation of lots of other technologies that won’t stand alone, a completely orthogonal statement that Alan made caused my head to spin as a security practitioner.
My reaction stems from the repeated belief that there should be separation of delivery between the network plumbing, the security service layers and ultimately the application(s) that run across them. Note well that I’m not suggesting that common instrumentation, telemetry and disposition shouldn’t be collaboratively shared, but their delivery and execution ought to be discrete. Best tool for the job.
Of course, this very contention is the source of much of the disagreement between me and many others who believe that security will just become absorbed into the "network." It seems now that Alan is suggesting that the model of combining all three is going to be something in high demand (at least in the SME/SMB) — much in the same way Cisco does:
The day is rapidly coming when people will ask why would they buy a box that all it does is a bunch of security stuff. If it is going to live on the network, why would the network stuff not be on there too or the security stuff on the network box.
Firstly, multi-function devices that blend security and other features on the "network" aren’t exactly new.
That’s what the Cisco ISR platform is becoming now, what with the whole Branch Office battle waging. And back in ’99 (the first thing that pops into my mind), a bunch of my customers bought and deployed WhistleJet multi-function servers which had DHCP, print server, email server, web server, file server, and security functions such as a firewall/NAT baked in.
But that’s neither here nor there, because the thing I’m really, really interested in is Alan’s decidedly non-security-focused approach to prioritizing utility over security. Given that he works for a security company, that is.
I’m all for bang for the buck, but I’m really surprised that he would make a statement like this within the context of a security discussion.
That is what Mitchell has been talking about in terms of what we are doing and we are going to go public Monday. Check back then to see the first small step in the leap of UTM’s becoming a feature of Unified Network Platforms.
Virtualization is a wonderful thing. It’s also got some major shortcomings. Just because you *can* run everything under the sun on a platform doesn’t mean that you *should*, and often it means you very much get what you pay for. This is what I meant when I quoted Lee Iacocca: "People want economy and they will pay any price to get it."
How many times have you tried to consolidate all those multi-function devices (PDA, phone, portable media player, camera, etc.) down into one device? It never works out, does it? Ultimately you get fed up with inconsistent quality levels, so you buy the next megapixel camera that comes out with image stabilization. Then you get the new video iPod, then…
Alan’s basically agreed with me on my original point discussing features vs. markets and the UTM vs. UNP thing is merely a handwaving marketing exercise. Move on folks, nothing to see here.
’nuff said.
/Hoff
(Written sitting in front of my TV watching Bill Maher drinking a Latte)
Thomas and I were barking at each other regarding something last night and today he left a salient and thought-provoking comment that provided a very concise, pragmatic and objective summation of the embedded vs. overlay security quagmire:
I couldn’t agree more. Most of the security components today, including those that run in our little security ecosystem, really don’t intercommunicate. There is no shared understanding of telemetry or instrumentation and there’s certainly little or no correlation of threats, vulnerabilities, risk or disposition.
The problem is bad inasmuch as even best-of-breed solutions usually require box sprawl and stacking and don’t necessarily provide for a more secure posture, especially within context of another of Thomas’ interesting posts on defense in depth/mesh…
That’s changing, however. Our latest generation of NPMs (Network Processing Modules) allows discrete security ISVs (which run on intelligently load-balanced Application Processor Modules, Intel blades in the same chassis) to interact with and control the network hardware through defined APIs. This provides the first step toward that common telemetry: while application A doesn’t need to know about the specifics of application B, they can functionally interact based upon the common output of disposition and/or classification of the flows between them.
Later, perhaps, they’ll be able to control each other through the same set of APIs.
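To sketch that interaction pattern (every name here is a hypothetical illustration, not the actual NPM API): application A publishes its disposition for a flow through a common interface, and application B reacts to the verdict without knowing anything about how, or by whom, the flow was classified.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Disposition(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    QUARANTINE = "quarantine"

@dataclass(frozen=True)
class Flow:
    src: str
    dst: str
    dport: int

class DispositionBus:
    """Hypothetical shared-telemetry API: applications publish
    per-flow dispositions; other applications (or the switch
    fabric itself) subscribe to act on them."""

    def __init__(self) -> None:
        self._subscribers: list[Callable[[Flow, Disposition], None]] = []
        self.dispositions: dict[Flow, Disposition] = {}

    def subscribe(self, handler: Callable[[Flow, Disposition], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, flow: Flow, verdict: Disposition) -> None:
        self.dispositions[flow] = verdict
        for handler in self._subscribers:
            handler(flow, verdict)

bus = DispositionBus()

# "Application B": firewall-ish logic that drops whatever any other
# app flagged, with no knowledge of who flagged it or why.
dropped: set[Flow] = set()
bus.subscribe(lambda f, d: dropped.add(f) if d is not Disposition.ALLOW else None)

# "Application A": say, an IPS classifying one flow as hostile.
hostile = Flow(src="10.0.0.5", dst="192.168.1.10", dport=445)
bus.publish(hostile, Disposition.QUARANTINE)
bus.publish(Flow("10.0.0.6", "192.168.1.10", 80), Disposition.ALLOW)
```

The design point is the decoupling: the only shared vocabulary between A and B is the flow tuple and the disposition, which is exactly the "common output" described above.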
So, I don’t think we’re going to solve the interoperability issue completely anytime soon; we won’t go from 0 to 100% overnight. But I think that the consolidation of these functions into smaller footprints that allow for intelligent traffic classification and disposition is a good first step.
I don’t expect Thomas to agree or even resonate with my statements below, but I found his explanation of the problem space to be dead on. Here’s my explanation of an incremental step towards solving some of the bigger classes of problems in that space, which I believe hinges first and foremost on the consolidation of security functionality.
The three options for reducing this footprint are as follows:

Option #1: Embed the security functions in the network infrastructure itself (the switches and routers).

Pros: Supposedly fewer boxes, better communication between components and good coverage, given the fact that the security stuff is in the infrastructure. One vendor from which you get your infrastructure and your protection. Correlation across the network "fabric" will ultimately allow for near-time zoning and quarantine. Single management pane across the Enterprise for availability and security. Did I mention the platform is already there?
Cons: You rely on a single vendor’s version of the truth and you get closer to a monoculture wherein the safeguards protecting the network put at risk the very assets they seek to protect, because there is no separation of "church and state." Also, the expertise and coverage, as well as the agility for product development based upon evolving threats, are hampered by the many moving parts in this machine. Utility vs. Security? Utility wins. Good enough vs. Best of breed? Probably somewhere in between.
Option #2: Consolidate the security functions into a dedicated multi-function security appliance (the UTM model).

Pros: Reduced footprint, consolidated functionality, single management pane across multiple security functions within the box. Usually excels in one specific area like AV and can add "good enough" functionality as the needs arise. Software moves up and down the scalability stack depending upon the performance needed.

Cons: You again rely on a single vendor’s version of the truth. These boxes tend to want to replace switching infrastructure. Many of these platforms utilize ASICs to accelerate certain functions, with the bulk of functionality residing in pure software with limited application- or network-level intelligence. You pay the price in terms of performance and scale given the architectures of these boxes, which do not easily allow for the addition of new classes of solutions to thwart new threats. Not really routers/switches.
Option #3: Consolidate the security functions onto an open, blade-based security application/appliance platform overlaid on the network.

Pros: The customer defines best of breed and can rapidly add new security functionality at a speed that keeps pace with the threats the customer needs to mitigate. Utilizing a scalable and high-performance switching architecture combined with all the benefits of an open blade-based security application/appliance delivery mechanism gives the best of all worlds: self-healing, highly resilient, high-performance and highly-available, while utilizing a hardened Linux OS across load-balanced, virtualized security applications running on optimized hardware.
Cons: Currently based upon proprietary (even though Intel reference design) hardware for the application processing, while also utilizing proprietary network switching fabric and load balancing. Can only offer software as quickly as it can be adapted and tested on the platforms. No ASICs means small-packet performance @ 64-byte zero loss isn’t as high as ASIC-based packet-forwarding engines. No single pane of management.
I think that option #3 is a damned good start towards solving the consolidation issues whilst balancing the need to overlay synergistically with the network infrastructure. You’re not locked into a single vendor’s version of the truth, and although the hardware may be "proprietary," the operating system and choice of software are not. You can choose from COTS, Open Source or write your own, all on a scalable platform that is just as much a collapsed switching/routing platform as it is a consolidated blade server.
I think it has the best chance of evolving to solve more classes of problems than the other two at a rate and level of cost-effectiveness balanced with higher efficacy due to best of breed.
This, of course, depends upon how high the level of integration is between the apps — or at least their dispositions. We’re working very, very hard on that.
At any rate, Thomas ended with:
I like NAT. I think this is Paul Francis. The IETF has been hijacked by aliens, actually, and I’m getting a new tattoo: