Security Application Instrumentation: Reinventing the Wheel?
Two of my favorite bloggers engaged in a trackback love-fest recently on the topic of building security into applications; specifically, enabling applications, as a service-delivery function, to innately detect, respond to, and report attacks.
Richard Bejtlich wrote a piece called Security Application Instrumentation and Gunnar Peterson chimed in with Building Coordinated Response In – Learning from the Anasazis. As usual, these are two extremely well-written pieces, and they arrive at a well-constructed conclusion: we need a standard methodology and protocol for this reporting. I think this exquisitely important point will be missed by most of the security industry — specifically vendors.
While security vendors' hearts are in the right place (stop laughing), the "security is the center of the universe" approach to telemetry and instrumentation will continue to fall on deaf ears, because there are no widely adopted, standard ways of reporting across platforms, operating systems, and applications that truly integrate into a balanced scorecard/dashboard demonstrating security's contribution to service availability across the enterprise. I know what you're thinking…"Oh God, he's going to talk about metrics! Ack!" No. That's Andy's job, and he does it much better than I do.
This mess is exactly why the SIEM market emerged: to clean up the cesspool of log dumps that spew forth from devices that are, by all appearances, utterly unaware of the rest of the ecosystem in which they participate. Take all these crappy log dumps via Syslog and SNMP (which can still be proprietary), normalize where possible, correlate "stuff," and communicate that something "bad" or "abnormal" has occurred.
How does that communicate what this really means to the business, its ability to function and deliver service, and ultimately the impact on risk posture? It doesn't, because security reporting is the little kid wearing a dunce hat standing in the corner because it doesn't play well with others.
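For the record, the normalize-and-correlate plumbing itself isn't rocket science. Here's a rough, hypothetical sketch (the log lines, field names, regex, and threshold are all mine, not any vendor's schema or rule language) of roughly what a SIEM does to raw syslog before it can correlate anything:

```python
# Hypothetical sketch of SIEM-style normalization and correlation.
# Log lines, field names, regex, and threshold are illustrative only,
# not any vendor's actual schema or rule language.
import re
from collections import Counter

RAW_SYSLOG = [
    "Jun 14 10:01:02 fw01 kernel: DROP SRC=10.1.1.9 DST=192.168.1.5 DPT=445",
    "Jun 14 10:01:03 fw01 kernel: DROP SRC=10.1.1.9 DST=192.168.1.6 DPT=445",
    "Jun 14 10:01:04 fw01 kernel: DROP SRC=10.1.1.9 DST=192.168.1.7 DPT=445",
]

PATTERN = re.compile(
    r"^(?P<ts>\w{3}\s+\d+ [\d:]+) (?P<host>\S+) \S+ DROP "
    r"SRC=(?P<src>\S+) DST=(?P<dst>\S+) DPT=(?P<dport>\d+)"
)

def normalize(line):
    """Map one raw log line onto a common event schema (None if unparsable)."""
    m = PATTERN.match(line)
    if not m:
        return None
    return {"time": m["ts"], "device": m["host"], "action": "drop",
            "src_ip": m["src"], "dst_ip": m["dst"], "dst_port": int(m["dport"])}

def correlate(events, threshold=3):
    """Toy rule: the same source dropped N times in one batch smells like a scan."""
    hits = Counter(e["src_ip"] for e in events if e)
    return [f"possible scan from {ip} ({n} drops)" for ip, n in hits.items() if n >= threshold]

events = [normalize(line) for line in RAW_SYSLOG]
print(correlate(events))   # -> ['possible scan from 10.1.1.9 (3 drops)']
```

Useful plumbing, sure, but notice that nothing in that output says a single word about service delivery or risk.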
Gunnar stated this well:
Coordinated detection and response is the logical conclusion to defense in depth security architecture. I think the reason that we have standards for authentication, authorization, and encryption is because these are the things that people typically focus on at design time. Monitoring and auditing are seen as runtime operational activities, but if there were standards-based ways to communicate security information and events, then there would be an opportunity for the tooling and processes to improve, which is ultimately what we need.
So, is the call for "security application instrumentation" doomed to fail because we in the security industry will try to reinvent the wheel with proprietary solutions and suggest that the current toolsets and frameworks, available as part of a much larger enterprise management and reporting strategy, are not enough?
Bejtlich remarked that mechanisms for reporting application state must be built into the application itself and must report more than just performance:
Today we need to talk about applications defending themselves. When they are under attack they need to tell us, and when they are abused, subverted, or breached they would ideally also tell us.

I would like to see the next innovation be security application instrumentation, where you devise your application to report not only performance and fault logging, but also security and compliance logging. Ideally the application will be self-defending as well, perhaps offering less vulnerability exposure as attacks increase (being aware of DoS conditions of course).
I would agree, but I get the feeling that without integrating this telemetry and its output metrics into response systems whose primary role is to talk about delivery and service levels — of which "security" is a huge factor — the relevance of this data is lost within the single pane of glass of enterprise management.
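For illustration only, here is a rough sketch of what Bejtlich's security application instrumentation could look like from inside the application: security events emitted through the same structured channel as performance and fault telemetry, plus a crude self-defense reflex that reduces exposure when attacks spike. The event fields, categories, and thresholds below are my own assumptions, not anything from a standard:

```python
# Hypothetical sketch of "security application instrumentation": the app emits
# security events through the same structured channel as performance/fault
# telemetry, and throttles its exposure when attacks spike. Event fields,
# categories, and thresholds are illustrative assumptions, not a standard's schema.
import json, logging, time
from collections import deque

log = logging.getLogger("app.telemetry")
logging.basicConfig(level=logging.INFO, format="%(message)s")

class InstrumentedApp:
    def __init__(self, attack_threshold=5, window_seconds=60):
        self._recent_attacks = deque()
        self._threshold = attack_threshold
        self._window = window_seconds
        self.hardened = False          # reduced-exposure mode

    def emit(self, category, event, **fields):
        """One structured channel for performance, fault, AND security events."""
        record = {"ts": time.time(), "category": category, "event": event, **fields}
        log.info(json.dumps(record))

    def report_attack(self, kind, source):
        """Report an attack and harden the app if the attack rate gets too high."""
        self.emit("security", "attack_detected", kind=kind, source=source)
        now = time.time()
        self._recent_attacks.append(now)
        while self._recent_attacks and now - self._recent_attacks[0] > self._window:
            self._recent_attacks.popleft()
        if len(self._recent_attacks) >= self._threshold and not self.hardened:
            self.hardened = True       # e.g. disable risky optional features
            self.emit("security", "self_defense_engaged",
                      reason="attack rate exceeded threshold")

app = InstrumentedApp()
app.emit("performance", "request_served", latency_ms=42)             # classic telemetry
app.report_attack("sql_injection_attempt", source="203.0.113.7")     # security telemetry
```

The hard part isn't emitting the events; it's getting them consumed by the same systems that already speak the language of service delivery.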
So, rather than reinvent the wheel and incrementally "innovate," why don't we take something like the Open Group's Application Response Measurement (ARM) standard, subscribe to a telemetry/instrumentation format that speaks to the real issues, enable these systems to massage our output into the language of business (risk?), and work to extend what is already a well-defined and accepted enterprise response management toolset to include security?
To wit:
The Application Response Measurement (ARM) standard describes a common method for integrating enterprise applications as manageable entities. The ARM standard allows users to extend their enterprise management tools directly to applications, creating a comprehensive end-to-end management capability that includes measuring application availability, application performance, application usage, and end-to-end transaction response time.
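Purely as a thought experiment (the class and field names below are invented; this is not the Open Group's actual ARM API), extending that model to security could be as simple as letting the standard transaction measurement carry security findings alongside response time:

```python
# Hypothetical illustration of extending ARM-style transaction measurement with
# a security outcome. This is NOT the Open Group's actual ARM API; class names
# and fields are invented to show the shape of the idea.
import time
from dataclasses import dataclass, field

@dataclass
class TransactionReport:
    app: str
    transaction: str
    response_time_ms: float = 0.0
    status: str = "good"                                    # good / failed
    security_findings: list = field(default_factory=list)   # the proposed extension

class MeasuredTransaction:
    """Context manager that times a transaction and collects security findings."""
    def __init__(self, app, name):
        self.report = TransactionReport(app=app, transaction=name)

    def flag(self, finding):
        """Record a security observation alongside the performance data."""
        self.report.security_findings.append(finding)

    def __enter__(self):
        self._start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.report.response_time_ms = (time.perf_counter() - self._start) * 1000
        if exc_type is not None:
            self.report.status = "failed"
        # Hand the combined performance + security record to whatever
        # enterprise management pipeline is already consuming this data.
        print(self.report)
        return False

with MeasuredTransaction("billing", "submit_invoice") as txn:
    txn.flag("input validation rejected suspicious payload from 198.51.100.4")
    time.sleep(0.01)   # stand-in for the real work
```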
Or how about something like EMC’s Smarts:
Maximize availability and performance of mission-critical IT resources—and the business services they support. EMC Smarts software provides powerful solutions for managing complex infrastructures end-to-end, across technologies, and from the network to the business level. With EMC Smarts innovative technology you can:
- Model components and their relationships across networks, applications, and storage to understand effect on services.
- Analyze data from multiple sources to pinpoint root cause problems—automatically, and in real time.
- Automate discovery, modeling, analysis, workflow, and updates for dramatically lower cost of ownership.
…add security into these and you’ve got a winner.
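To make "add security into these" a little less hand-wavy, here is a hypothetical sketch (the component names, relationships, and event format are all mine and have nothing to do with EMC's actual product) of pushing a security finding through a Smarts-style service model so it comes out the other side expressed as business-service impact rather than log noise:

```python
# Hypothetical sketch of feeding a security event into a Smarts-style service
# model: walk the dependency graph from the affected component up to the
# business services it supports. Component names and the event format are
# invented for illustration.

# "supports" edges: component -> things that depend on it
SERVICE_MODEL = {
    "switch-07":    ["app-server-3"],
    "app-server-3": ["billing-app"],
    "billing-app":  ["Online Billing (business service)"],
    "db-cluster-1": ["billing-app", "crm-app"],
    "crm-app":      ["Customer Portal (business service)"],
}

def impacted_services(component, model=SERVICE_MODEL):
    """Return every downstream business service affected by trouble on `component`."""
    seen, stack, services = set(), [component], []
    while stack:
        node = stack.pop()
        for dependent in model.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                stack.append(dependent)
                if "business service" in dependent:
                    services.append(dependent)
    return services

# A security event expressed in service-impact terms rather than log-speak:
security_event = {"component": "db-cluster-1", "finding": "SQL injection attempts"}
print(f"{security_event['finding']} on {security_event['component']} "
      f"threatens: {impacted_services(security_event['component'])}")
```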
There are already industry standards (or at least huge market momentum) around intelligent, automated IT infrastructure, resource management, and service-level reporting. We should get behind a standard that elevates the perspective of how security contributes to service delivery (and dare I say risk management) instead of trying to reinvent the wheel…unless you happen to like the Hamster Wheel of Pain…
/Hoff
Those of you who know me realize that no matter where I go, who I work for or who’s buying me drinks, I am going to passionately say what I believe at the expense of sometimes being perceived as a bit of a pot-stirrer.
I’m far from being impartial on many topics — I don’t believe that anyone is truly impartial about anything — but at the same time, I have an open mind and will gladly listen to points raised in response to anything I say. I may not agree with them, but I’ll also tell you why.
What I have zero patience for, however, is when I get twisted semantic marketing spin responses. It makes me grumpy. That’s probably why Rothman, Shimmy and I get along so well.
Some of you might remember grudge match #1 between me and Alex Neihaus, the former VP of Marketing for Astaro (coincidence?). This might become grudge match #2. People will undoubtedly roll their eyes and dismiss this as vendors sniping at one another. So be it. Please see paragraphs #1 and 2 above.
My recent interchange with Richard Stiennon is an extension of arguments we’ve been having for a year or so, dating back to when Richard was still an independent analyst. He is now employed as the Chief Marketing Officer at Fortinet.
Our disagreements have intensified for what can only be described as obvious reasons, but I’m starting to get as perturbed as I did with Alex Neihaus when the marketing sewage obfuscates the real issues with hand-waving and hyperbole.
I called Richard out recently for what I believed to be complete doubletalk on his stance on UTM, and he responded here in a comment. Comments get buried, so I want to bring this back up to the top of the stack for all to see. Don’t mistake this as a personal attack against Richard; it’s a dissection of what Richard says. I think it’s just gobbledygook.
To be honest, I think it took a lot of guts to respond, but his answer makes my head spin as much as Anna Nicole Smith in a cheesecake factory. Yes, I know she’s dead, but she loved cheesecake and I’m pressed for an analogy.
The beauty of blogging is that the instant you say something, it becomes a record of "fact." That can be good or bad depending upon what you say.
I will begin to respond to Richard’s retort wherein he first summarily states:
I also assume that this means Richard hates the bit buckets that Firewall, IPS, NAC, VA/VM, and Patch Management (as examples) have become, too? This trend is the natural by-product of marketers and strategists scrambling to find a place to hang their hat in a very crowded space. So what.
UTM is about solving applied sets of business problems. You can call it what you like, but whether marketeers love or hate UTM usually depends upon where they sit in the rankings. This intrigues me, Richard, because (as you mention further on) Fortinet pays to be a part of IDC’s UTM Tracker, and they rank Fortinet as #1 in at least one of the product price ranges, so someone at Fortinet seems to think UTM is a decent market to hang a shingle on.
Hate it or not, Fortinet is a UTM vendor, just like Crossbeam. Both companies hang their shingles on this market because it’s established and tracked.
You’re right. Lumping Crossbeam with Fortinet and Astaro is the wrong thing to do. 😉
Arguing the viability of a market which has tremendous coverage and validated presence seems a little odd. Crafting a true strategy that differentiates you within that market, however, is a good thing.
So what you’re saying is that you like the nebulous and ill-defined blob that is Gartner’s view, don’t like IDC, but you’ll gladly pay for their services to declare you #1 in a market you don’t respect?
You mean besides when you said:
Just in case you’re interested, you can find that quote here. There are many, many other examples of you saying this, by the way. Podcasts, blog entries, etc.
Also, are you suggesting that Fortinet does not consider itself a UTM player? Someone better tell the Marketing department. Look at one of your news pages on your website. Say, this one, for example: 10 articles have UTM in the title, and one of them quotes your own vice-president of Fortinet Japan: "'The UTM market was pioneered by us,' says Mr. Okamoto, the vice-president of Fortinet Japan. Mr. Okamoto explains how Fortinet created the UTM category, the initial popularity of UTM solutions with SMBs…"
Yes, I understand how much you dislike IDC. Can you kindly point to where you previously commented on how Fortinet was executing on your vision for Secure Network Fabric? I can show you where you did for Crossbeam — it was at our Sales Meeting two years ago where you presented. I can even upload the slide presentation if you like.
Richard, I’m not really looking for the renewal of your Crossbeam Fan Club membership…really.
Oh, now it’s on! I’m fixin’ to get "Old Testament" on you!
Just so we’re clear: ISV applications that run on Crossbeam, such as XML gateways, web-application firewalls, database firewalls, and next-generation converged network security services like session border controllers, are all UTM "legacy applications"!?
So besides an ASIC for AV, what "new" non-legacy apps does Fortinet bring to the table? I mean now. From the Fortinet homepage, please demonstrate which novel new applications Firewall, IPS, VPN, Web filtering, and Antispam represent.
It must suck to have to craft a story around boat-anchor ASICs that can’t extend past AV offload. That means you have to rely on software and innovation in that space. Cobbling together a bunch of "legacy" applications with a nice GUI doesn’t necessarily represent innovation and "next generation."
It’s clear you have a very ~~deluded~~ interesting perspective on security applications. The "innovation" you suggest differentiates you is really just what has classically been described as the natural evolution of converging marketspaces. That over-played Snort analogy is crap. The old "signature" vs. "anomaly detection" argument paired with "deep packet inspection" is tired. Fortinet doesn’t really do anything that anyone else can’t/doesn’t already do. Except for violating the GPL, that is. I suppose now that Check Point has acquired NFR, their technology is crap, too? Marcus would be proud.
Oh come on, Richard. First of all, the answer to your question is that many, many large enterprises and service providers utilize a layered defense and place an IPS before or after their firewall. Some have requirements for firewall/IDS/IPS pairs from different vendors. Others require defense in depth and do not trust the competence of a solutions provider that claims to "do it all."
Best of breed is what the customer defines as best of breed. Just to be clear, would you consider Fortinet to be best of breed?
If you use a Crossbeam, by the way, it’s not a separate device, and you’re not limited to just using the firewall or IPS "in front of" or "behind" one another. You can virtualize placement wherever you desire. Also, in many large enterprises, using IPSs and firewalls from separate vendors is not only good practice but also required.
How does Fortinet accomplish that?
Your "payload inspection" is leveraging a bunch of OSS-based functionality paired with an ASIC that is used for AV — you know, signatures — with heuristics and a nice GUI. Whilst the Cosine IP Fortinet acquired represents some very interesting technology for provisioning and such, it ain’t in your boxes.
You’re really trying to pick a fight with me about Check Point when you choose to also ignore the fact that we run up to 15 other applications such as SourceFire and ISS on the same platform? We all know you dislike Check Point. Get over it.
Really? So since you don’t have separate products to address these (Fortinet sells UTM, after all), that means you had nothing to offer them? Convergence is driving UTM adoption. You can call it what you want, but you’re whitewashing to prove a flawed theorem.
…and what the heck is the difference between that and UTM, exactly? People don’t buy IPS; they buy network-level protection to defend against attack. IPS is just the product category, as is UTM.
I don’t like Scotch, Richard. It leaves a bad taste in my mouth…sort of like your response 😉