
Archive for the ‘General Rants & Raves’ Category

Third Party Patching — Why Virtual Patch Emulation is the Host-est with the Most-est…

September 27th, 2006 3 comments

All this hubbub about third party patching is enough to make one cross-eyed…(read on for the ironic analog)

I’ve written about this twice before…once last month here and the original post from my prior blog written over a year ago!  It’s a different approach (that inevitably and incorrectly gets called an IPS) to solving the patching dilemma — by not touching the host but instead performing virtualized patch emulation in real-time via the network.

Specifically, I make reference to a product and service from Blue Lane Technologies (the PatchPoint gateway) which so very elegantly provides a layer of protection that is a NETWORK-BASED third party patching solution.

You don’t have to touch the host — no ridiculous rush to apply patches where the hurry to deploy them might introduce more operational risk than the likelihood of the vulnerability actually being exploited…

You can deploy the virtual (third party) patch and THEN execute your rational and controlled approach towards regression testing those servers you’re about to add software to…
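To make the concept concrete, here’s a toy sketch of what "virtually patching" a flaw in-line at a gateway looks like.  This has nothing to do with Blue Lane’s actual, proprietary implementation — the signature, the field name and the truncation behavior are all invented for illustration:

```python
import re

# Hypothetical vulnerability: an overlong "user" parameter that would
# overflow a buffer on the unpatched host. The pattern and field name
# are illustrative inventions, not any vendor's real signatures.
VULN_PATTERN = re.compile(rb"user=([^&\s]{256,})")

def virtual_patch(request: bytes) -> bytes:
    """Emulate the fix in-line at the gateway: sanitize traffic that
    would trigger the vulnerability so the unpatched host never sees it."""
    match = VULN_PATTERN.search(request)
    if match:
        # The "patched" behavior: truncate the field to a safe length
        # instead of passing the exploit payload through to the host.
        return VULN_PATTERN.sub(b"user=" + match.group(1)[:255], request)
    return request  # benign traffic passes through untouched
```

The point is that the mitigation lives in the network path, so the host itself can be regression-tested and patched on a sane schedule.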

Rather than re-hash the obvious and get Alan Shimel designing book covers to attack my post like he did with Ross Brown from eEye (very cool, Shimmy!) you can just read the premise based upon the link above in the first sentence.

I don’t own any Blue Lane stock, but I did happen to buy one of the first of their magical boxes 2 years ago and it saved my ass on many an occasion.  Patch Tuesday became a non-event (when combined with the use of Skybox’s amazing risk management toolset…another post.)

Keep your mitts off my servers….

The Immune System Analogous to Security?…SUCKS.

September 10th, 2006 3 comments

I find it oddly ironic that vendors such as Cisco maintain that the human immune system is a good model for how "network" security ought to function.  Now, I know that John Chambers’ parents are doctors, so perhaps he can’t help it…

In a recent blog entry, Richard Stiennon reviews John Chambers’ recent keynote at the Security Standards show, wherein he summarizes:

The human body is a good metaphor for the way security should be. You hardly ever notice when your body is attacked because the majority of attacks are warded off. It is the exception when you catch a cold or have to go to the doctor.

It’s an unfortunate analog because PEOPLE DIE.

In my Unified Risk Management Part I whitepaper, I specifically suggested that this idea sucks:

Networks of the future are being described as being able to self-diagnose and self-prescribe antigens to cure their ills, all the while delivering applications and data transparently and securely to those who desire it.

It is clear, however, that unfortunately there are infections that humans do not recover from.  The immune system is sometimes overwhelmed by attack from invaders that adapt faster than it can.  Pathogens spread before detection and activate in an overwhelming fashion before anything can be done to turn the tide of infection.

Mutations occur that were unexpected, unforeseen and previously unknown.  The body is used against itself as the defense systems attack both attacker and healthy tissue and the patient is ultimately overcome.  These illnesses are terminal with no cure.

Potent drugs, experimental treatments and radical medical intervention may certainly extend or prolong life for a short time, but the victims still die.  Their immune systems fail. 

If this analogy is to be realistically adopted as the basis for information survivability and risk management best practices, then anything worse than a bad case of the sniffles could potentially cause networks – and businesses — to wither and die if a more reasonable and measured approach is not taken regarding what is expendable should the worst occur.  Lose a limb or lose a life?  What is more important? The autonomic system can’t make that decision.

I’m sick of these industry generalizations and fluffy conference sound bites because they’re always painted with a rosy end, downplaying the realities of the "cons" (pun intended) at the expense of what everyone knows to be the truth.

…and truth be told, this analog is actually the PERFECT model for the Information Security paradigm because of just how spectacularly the immune system fails.

Chris

Get your head out of your UTM – Hardware is a PIECE of the puzzle…

August 28th, 2006 3 comments

You can forget any amount of nicey-nicey in this response.  I don’t mind debating topics, but generally, I do my homework before I post rather than generalizing or debating via analogy. 

Mitchell Ashley just got my goat with his post since he’s poking the bear with a pretty flimsy stick, all things considered.

Mitchell (and an assembled cast of thousands such as Stiennon, Neihaus, etc…) just can’t get past their perception and (mis)understanding of the fact that what makes an Enterprise/Service Provider UTM solution like the Crossbeam X-Series so phenomenally differentiated and VALUABLE to our customers is NOT the hardware!

Talk about making me grumpy.  I’m going to sound like Rothman!

It is about the software!  But despite Mitchell’s pleadings like this:

We need some fresh thinking about UTMs or we run the risk of customers thinking the lunatics are on the grass, or something worse.

…he ignores or chooses to remain ignorant about the fact that these "fresh solutions" do exist and how they work and still he continues to eat his own picnic lunch on the lunatic lawn he so dearly hopes to avoid. 🙂

Enterprise/Provider UTM is about the ability to run the best security products on the market in an amazingly scalable, high-performance, highly-resilient and highly-flexible manner.  It’s about being able to deploy an architecture that allows one to manage risk, improve one’s security posture and deploy on-demand security SOFTWARE solutions where needed, when needed and at a price tag where the risk justifies the cost.

I think it’s fine to be contrarian (God knows I make a career out of it) and I think it’s fine that people continue to generalize about these issues, but I continue to make it my mission to educate those same people that there are solutions that actually solve the very problems they describe in their OTS 1U appliance model hell. 

Of course vendors that base their products on 1U appliances ought to be worried about the future — because people are sick of deploying one of their darn boxes after another (even multi-function boxes) to get the value they need…or worse yet, not get the value they need.

The largest enterprises and service providers on the planet are re-evaluating their security architectures and how and why they deploy.  The largest IT/Security project on the planet has evaluated their choices and looked for "fresh solutions" that offer what UTM promises but at levels that support the confidentiality, integrity and availability service levels that they demand.

These folks are not buying Cisco or Juniper, and they’re driving vendors who would otherwise have no shot at deploying into their networks to run on our boxes.  Our UTM boxes.  They’re buying Crossbeam as an architecture that allows them to deploy a well-planned infrastructure.

The continued chest pounding about the death of Best-of-Breed and diminishing point of return for the integration of said solutions on "big hardware" are just that — chest pounding.  Why?  Because of the following:

  1. Big hardware that scales and does not require forklifts simply provides a stable foundation upon which to deploy and scale your security SOFTWARE.
  2. A modular architecture allows the customer to invest over time and simply add blades, add additional SOFTWARE or even upgrade memory/processors to meet growing compute requirements.
  3. Leveraging Best-In-Breed security SOFTWARE solutions allows the customer to choose what the best solution truly is, not settle for one vendor’s version of the truth like Cisco, Juniper, Fortinet or even StillSecure.

I agree that collaboration, interoperability, better manageability and usability are needed on ALL platforms, not just UTMs.  Furthermore, one of my biggest missions over the next year is to improve just this on our platforms:

Better appliance hardware’s not the only solution to the customer’s problem (Sorry Chris and my other hardware friends.)

You’re right, and I never said it was.  You should probably learn about what I did say, however.

They want solutions that bring needed value by; intelligently identifying and communicating information events, taking action when specific security actions occur, integrating the functions on the box for me, and make it manageable and easy. Will I need a log aggregator software (on a separate box) to analyze the logs of the different parts of my UTM box? Even worse, what if I have multiple UTMs? Integrated doesn’t mean co-located businesses with a common receptionist. Yes, it needs a shiny GUI (well, at least a GUI any way) but the functions really need to be integrated. And what if the customer want to expand what the box can do? Make it run other network software. Our paradigm needs some changing.

Hello!?  Our box runs 25+ applications from vendors our customers asked us to establish partnerships with!  Perhaps "your" paradigm needs changing, but "my" paradigm works just fine — in fact it’s the same one you are promoting!  Have we achieved the level of integration we desire?  No.  We’re working very hard at it, however.

Mitchell, if you’re going to generalize and call for new, "fresh" ideas regarding UTM and basically challenge the facts I’ve put forward, at least spend the 15 minutes as Alan did to learn about my solution before you dismiss it and lump it into the boneyard with the rest of the skeletons, mkay!?

Chris

Best of Breed Says: “Rumors of my death have been greatly exaggerated…”

August 24th, 2006 4 comments

[Editor’s Note: You should also check out Alan Shimel’s blog entry regarding this meme.  I’ll respond to some of his excellent points in a separate entry, but he beat the crap out of Mike’s and my Pink Floyd references!  I guess that comes with age ;)]

Uncle Mike and I today debate his notion that Best Of Breed/Best In Breed is dead — it’s actually a sing-along to Pink Floyd’s "The Wall."  Who knew security could be so lyrical?

By the way, in case you didn’t figure it out, that’s Mark Twain to the right, who was himself once Best In Breed and is credited with the (butchered) quote above.

I think Mike missed my point — or more realistically, I didn’t do a good enough job of making it before he turned/titled the discussion into another rambling argument about the dying "perimeter." 

This really is the first time I’ve had trouble following Senor Rothman’s logic.  I think Stiennon planted a trojan via our IM chat the other night and is typing in his stead 😉

This is also probably my first really Crossbeam-centric post, but I’ve been prodded by Mike into ‘splaining/defending what we do (and how we do it) via BoB/BiB, so here goes:

Here’s my clarification:

Mike says:

It is my belief (and remember I get paid to have opinions) that perimeter best of breed is a dying architecture. Crossbeam even calls what you do UTM. So maybe we are just disagreeing about semantics and words. Ultimately isn’t this abstracted "security services" layer that you evangelize more of what customers are interested in.

Your definition of the "perimeter" no longer interests me 😉 

If you’re talking about the SMB market and their adoption of Perimeter UTM to consolidate separate appliances, then this argument is done. 

However, these customers that suffer from box stacking recognize that they bought the best product they could (perhaps it was more than they could afford) at the time, but what they’re looking for now is "good enough" and "reduced cost."  When you purchase a $500 box that does 8 things, you get a "reduction of (device) complexity" as a side effect.  But it’s silly to suggest that these folks were really BoB/BiB targets in the first place.  That’s why BoB/BiB companies such as Check Point have small UTM boxes in this range.  Please see below. 

This abstracted "security services" layer is exactly what I evangelize; however, it’s comprised of BoB/BiB solutions and functionality at its foundation.  As players commoditize, they move into core technology as a table-stakes play, but then we have distinguished BoB/BiB technology that is truly differentiated for some period of time.  Sometimes this technology becomes a market, sometimes it becomes a feature, but either way, it’s an organic process that is still based upon BoB/BiB.

You bet that Crossbeam is a UTM player.  In fact, despite what Fortinet lies (yes, lies) about in their press releases, Crossbeam continues to be the leader in the high-end ($50K+) UTM market.  However, as I’ve said eleventy-billion times, there is an enormous difference between the small SMB $500 Perimeter UTM solutions and our Enterprise and Provider-Class UTM solutions.

I’m not going to re-hash this here again.  You’ll need to reference this post to get the big picture.  Suffice it to say, we’ve been in business for 6 years with revenue doubling YoY doing the thing that is now called UTM — and we do it in a way that nobody else can because it’s damned hard to do right.

I admit/concede/agree that single-function BoB/BiB solutions intended by their creators to be deployed in a singular fashion on their own appliance, stacked next to or on top of another BoB/BiB solution, are a dying proposition.   This is why you see vendors — even Cisco — combining functionality into a consolidated solution to reduce security sprawl.   That won’t stop them from building BoB/BiB compartmentalized solutions, however.  This is what vendors do.

Typically integrators get to make money from cobbling it all together.  Savvy resellers and integrators don’t have to cobble if they use an architecture that aligns all of these solutions into and onto a platform architecture that is as much a competent networking component as it is a BoB/BiB security layer.  That would be Crossbeam.

That does NOT, however, mean that BoB/BiB itself is dead (at the perimeter or otherwise) because, just like IBM buying ISS (the market leader in BoB/BiB IPS), this will result in the inevitable integration via service of ISS’ components into a more robust suite of security services complemented by infrastructure.   

However, when a single vendor does this, you only get that single vendor’s version of the truth and so I assume this is what Mike means when he says a customer has to "settle" for BoB/BiB.

The dirty little secret is that customers are forcing BoB/BiB vendors to work together — or more specifically, work together on a platform using an architecture that provides for this integration in an amazingly scalable, highly-available, and high performance way.

Here are some pertinent examples:

  • Next Generation Networks de-couple the transport from the service layers.  You have plumbing and intelligence.  The plumbing is dumb, fast and reliable whilst the service layer provides the value in things such as content delivery, security, etc.

    In this model, the plumbing is made up of the BoB/BiB networking components and the intelligence layer is comprised of BoB/BiB service delivery components.

    NGNs are driving the re-architecture of some of the biggest networks on the planet — in fact THE largest IT project in the world, BT’s 21CN, calls for this architecture, where BoB/BiB components have been selected to be consolidated in a single platform in order to deliver BoB/BiB security as a service layer across the entire network — end to end.  They don’t expect switches or routers to be able to deliver this security — they trust that BoB/BiB players will — in one platform. 

    By the way, that includes that little thing called "the perimeter."  I’ve said it once and I’ll say it again:

    The perimeter is not going away.  In fact, it’s multiplying.  However, the diameter is collapsing.

Applying dynamic, on-demand and highly-differentiated combinations of BoB/BiB security services at different areas of the network from a single set of carrier/enterprise-class security switches allows you to secure these micro-perimeters as you best see fit.

You don’t "settle" for anything.  The customer has a choice of which BoB/BiB security software he/she wishes to run and, like a "Security Service Oriented Architecture," can dynamically and at will apply those choices where, when and how they’re needed.  If vendor A changes strategy or goes out of business, you can add or switch to vendor B.

  • Virtualization in both the data center and the "network" is dependent upon BoB/BiB to deliver the functionality required for distributed computing.  Just as servers, storage, networking and processing are virtualized, so is security.

    Since many companies are utilizing VLANs to begin their virtualization efforts and are beginning to abstract the network in VRF terms @ Layer 2/Layer 3, they have two choices: use the still-immature security technology present in clumps in their routers/switches (and hold your breath for SNF — which is really just a product like ours connected to a switch — don’t believe me?  I’ll post one of Richard Stiennon’s slides describing SNF) or choose an architecture that delivers EXACTLY the level of security you need at its most potent level as a combined virtualized service layer across the network using BoB/BiB.

  • Consolidation and Acquisitions will come and go, but you’ll notice that we are able to do things that nobody else can in the BoB/BiB market.  Take this VARBusiness story for example — just published today — in which an established BoB/BiB Firewall player (Check Point) is combined with a BoB/BiB IPS player (SourceFire) on our platform doing something the two companies could not do otherwise.  By the way, and most importantly, the customer can choose from 15+ other BoB/BiB security applications to combine, also, such as ISS, WebSense, Trend Micro, Forum Systems, Imperva, Dragon, etc.
  • Customers (in our world that’s large enterprise and service providers/carriers/mobile operators) are no longer settling for "good enough" and they’re also not settling for having BoB/BiB providers suggest that they need to tear into their networks to integrate their individual wares.  Here’s an interesting one for you:

    While many of them utilize things like FWSM modules in their 6500 series Cisco switches for firewall, or even combine Juniper’s ISG2000 IPS devices with the 6500’s to provide FW and IPS together (and both of those are still considered BoB/BiB solutions, by the way), they tell the BoB/BiB purveyors of Web Services/SOA/XML security, gateway A/V, Content Filtering, Web Application and Database security solutions that while they most definitely want their products, they won’t deploy them unless they run on the big white box.  That would be these.
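As a rough sketch of what running multiple best-of-breed applications as a single logical service layer means — the toy policies and app names below are my illustrative inventions, not Crossbeam’s actual internals:

```python
from typing import Callable, List, Optional

# Each "blade" is just a function: packet in, packet out (or None to drop).
SecurityApp = Callable[[bytes], Optional[bytes]]

def firewall(pkt: bytes) -> Optional[bytes]:
    return None if pkt.startswith(b"BLOCKED") else pkt   # toy FW policy

def ips(pkt: bytes) -> Optional[bytes]:
    return None if b"exploit" in pkt else pkt            # toy IPS signature

class ServiceChain:
    """Compose independently-chosen security apps into one data path.
    Swapping vendor A for vendor B is just replacing a list element."""
    def __init__(self, apps: List[SecurityApp]):
        self.apps = apps

    def process(self, pkt: bytes) -> Optional[bytes]:
        for app in self.apps:
            pkt = app(pkt)
            if pkt is None:
                return None  # dropped by this stage; later stages never see it
        return pkt

chain = ServiceChain([firewall, ips])
```

The design point is that the platform owns the plumbing (flow steering, availability, scale) while each BoB application stays a replaceable unit of software.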

To wrap up, Mike ends with:

To get back to my another brick analogy, you could say that every new best of breed application you add to your box is another brick that makes your box more interesting to customers. No?

Yes, but how does that mean BoB/BiB is dead again?

In the spirit of The Who, here’s an appropriate selection from the Quadrophenia song "I’ve Had Enough":

You were under the impression
That when you were walking forward
You’d end up further onward
But things ain’t quite that simple.

You got altered information
You were told to not take chances
You missed out on new dances
Now you’re losing all your dimples.

Yours wordily, Mr. Dimples…

Chris

On Martin McKeay’s Podcast re: NAC/SNF

August 10th, 2006 3 comments

Our online MobCast featuring Martin McKeay, Mike Rothman, Richard Stiennon, Alan Shimel and me regarding our ongoing debate about NAC and SNF was almost a DNF when we discovered that SkypeCast sucked beyond compare (we jumped to "regular Skype") and then Martin’s recording software decided to dedicate its compute cycles to the SETI project rather than record us.

Many thanks to the esteemed Mr. Stiennon who was luckily committing a felony by recording us all without disclosure. 😉  See, there’s a good side to all that government training. 😉

At any rate, we had a lively debate that needed to go on for about an hour more because (surprise!) we didn’t actually resolve anything — other than the two analysts were late to the call (surprise #2) and the two vendors were loud and obnoxious (not really a surprise.)

It was a great session that got passionately elevated across multiple elements of the discussion.  What we really recognized was that the definition of and expectations from NAC are wildly differing across the board; from the analyst to the vendor to the customer.

Take a listen when Martin posts it and let us know if we should have a second session.  I believe it will show up here:

http://www.securityroundtable.com/

Chris

ICMP = Internet Compromise Malware Protocol…the end is near!

August 9th, 2006 5 comments

Bear with me here as I admire the sheer elegance and simplicity of what this latest piece of malware uses as its covert back channel: ICMP.  I know…nothing fancy, but that’s why I think its simplicity underscores the bigger problem we have in securing this messy mash-up of Internet connected chewy goodness.

When you think about it, even the dopiest of users knows that when they experience some sort of abnormal network access issue, they can just open their DOS (pun intended) command prompt and type "ping…" and then call the helpdesk when they don’t get the obligatory ‘pong’ response.

It’s a really useful little protocol. Good for all sorts of things like out-of-band notifications for network connectivity, unreachable services and even quenching of overly-anxious network hosts. 

Network/security admins like it because it makes troubleshooting easy and it actually forms some of the glue and crutches that folks depend upon (unfortunately) to keep their networks running…

It’s had its fair share of negative press, sure. But who amongst us hasn’t?  I mean, Smurfs are cute and cuddly, so how can you blame poor old ICMP for merely transporting them?  Ping of Death?  That’s just not nice!  Nuke Attacks!?  Floods!?

Really, now.  Aren’t we being a bit harsh?  Consider the utility of it all… here’s a great example:

When I used to go onsite for customer engagements, my webmail/POP3/IMAP and SMTP access was filtered.  Outbound SSH and other ports were also usually blocked, but my old friend ICMP was always there for me…so I tunneled my mail over ICMP using Loki and it worked great… and it always worked because ICMP was ALWAYS open.  Now, today’s IDS/IPS combos usually detect these sorts of tunneling activities, so some of the fun is over.

The annoying thing is that there is really no reason why the entire range of ICMP types needs to be open, and it’s not that difficult to mitigate the risk, but people don’t because they officially belong to the LBNaSOAC (Lazy Bastard Network and Security Operators and Administrators Consortium.)
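The mitigation really isn’t hard: you rarely need more than a handful of ICMP types, and clamping echo payload sizes starves most tunnels of bandwidth.  A sketch of such a policy (type numbers per RFC 792; the payload threshold is an arbitrary illustration, not a standard):

```python
# ICMP types most networks actually need (RFC 792 numbering):
#   0 = echo reply, 3 = destination unreachable,
#   8 = echo request, 11 = time exceeded (traceroute).
ALLOWED_ICMP_TYPES = {0, 3, 8, 11}

# A normal ping payload is tiny; a tunnel like Loki needs far more room.
MAX_ECHO_PAYLOAD = 64  # bytes; illustrative threshold

def allow_icmp(icmp_type: int, payload_len: int) -> bool:
    """Return True if this ICMP message should be permitted."""
    if icmp_type not in ALLOWED_ICMP_TYPES:
        return False  # redirects, timestamps, etc. get dropped outright
    if icmp_type in (0, 8) and payload_len > MAX_ECHO_PAYLOAD:
        return False  # oversized echo payloads smell like a covert channel
    return True
```

The same policy translates directly into a few firewall rules on any platform that can match ICMP type and packet length.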

However, back to the topic @ hand.  I was admiring the simplicity of this newly-found data-stealer trojan that installs itself as an Internet Exploder (IE) browser helper and ultimately captures keystrokes and screen images when accessing certain banking sites and communicates back to the criminal operators using ICMP and a basic XOR encryption scheme.  You can read about it here.

It’s a cool design.  Right, wrong or indifferent, you have to admire the creativity and ubiquity of the back channel…until, of course, you are compromised.
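The published details of the trojan’s wire format are thin, so purely as an illustration of the mechanics — data obfuscated with a single-byte XOR key (the key here is an assumption) and stuffed into the payload of an ordinary ICMP echo request:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 one's-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def xor_obfuscate(data: bytes, key: int = 0x5A) -> bytes:
    """Trivial single-byte XOR; applying it twice recovers the input."""
    return bytes(b ^ key for b in data)

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """ICMP echo request (type 8, code 0) carrying an arbitrary payload."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum = 0 for now
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

On the wire this is just another ping; anything checking only "is ICMP allowed?" waves it straight through, and it takes payload inspection or type/size policy to catch it.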

There are so many opportunities for the creative uses of taken-for-granted infrastructure and supporting communication protocols to suggest that this is going to be one hairy, protracted battle.

Submit your vote for the most "clever" use of common protocols/applications for this sort of thing…

Chris

NAC Attack: Why NAC doesn’t work and SN(i)F doesn’t, either…alone

August 7th, 2006 12 comments

I have to admit that when I read Alan Shimel’s blog entry whereby he calls out Richard Stiennon on his blog entries titled "Don’t Bother with NAC" and "Network Admission Control is a Blind Alley," I started licking my chops as I couldn’t wait to jump in and take some time to throw fuel on the fire.  Of course, Mike Rothman already did that, but my goal in life is to be more inflammatory than he is, so let the dousing begin!

I’ve actually been meaning to ask for clarification on some points from both of these fellas, so no better time than the present. 

Now, given the fact that I know all of the usual suspects in this debate, it’s odd that I’m actually not siding with any of them.  In fact, I sit squarely in the middle because I think that in the same breath both Richard and Alan are as wrong as they are right.  Mike is always right (in his own mind), but rather than suggest there was a KO here, let’s review the tape and analyze the count before we go to the scorecards for a decision.

This bout is under the jurisdiction of and sanctioned by the Nevada Gaming Commission and brought to you by FUD — the official supplier of facts on the Internet. 😉

Tale of the tape:

Richard Stiennon:

  1. Richard Stiennon highlighted some of Ofir Arkin’s NAC "weaknesses" presented at Black Hat and suggests that NAC is a waste of time, based not only on these technical deficiencies but also on the claim that the problems NAC seeks to solve are already ameliorated by proper patching, as machines "…that are patched do not get infected."
  2. Somewhat confusingly, he follows with the statement that "The fear of the zero-day worm or virus has proved ungrounded. And besides, if it is zero-day, then having the latest DAT file from Symantec does you no good."
  3. Richard suggests that integrating host-based and network-based security is a bad idea and that the right thing to do is "de-coupling network and host-based security. Rather than require them to work together let them work alone."
  4. Ultimately he expects that the rest of the problems will be fixed with a paradigm which he has called Secure Network Fabric. 
  5. Richard says the right solution is the concept he calls "Secure Network Fabric (SNF)," wherein "…network security solutions will not require switches, routers, laptops, servers, and vendors to work in concert with each other" but rather "…relies most heavily on a switched network architecture [which] usually involve[s] core switches as well as access switches."
  6. SNF relies on VLANs that "…would be used to provide granularity down to the device-level where needed. The switch enforces policy based on layer 2 and 3 information. It is directed by the NetFlow based behavior monitoring system."
  7. Richard has talked for quite some time about the need to integrate intelligent IDP (Intrusion Detection and Prevention) systems with NBA/NBAD (Network Behavioral Analysis/Network Behavioral Anomaly Detection) and the switching fabric, and this integration is key to SNF functionality.
  8. Furthermore, Richard maintains that relying on the endpoint to report its health back to the network, and the mechanisms designed to allow admission, is a bad idea and unnecessary.
  9. Richard maintains that there is a difference between "Admission Control" and "Access Control" and sums it up thusly: "To keep it simple just remember: Access Control, good. Admission Control, bad."
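SNF is a concept, not shipping code, so the following is strictly my toy rendering of point 6 — a NetFlow-style counter directing a "switch" to enforce at Layer 2 via VLAN assignment.  All thresholds, names and VLAN numbers are invented:

```python
from collections import defaultdict

PRODUCTION_VLAN = 10
QUARANTINE_VLAN = 666
FLOW_RATE_THRESHOLD = 1000  # flows per interval; arbitrary toy criterion

class BehaviorMonitor:
    """Counts flows per source MAC, NetFlow-style. Note what it CAN'T see:
    flow records carry addresses, ports and counters, never payload."""
    def __init__(self):
        self.flows = defaultdict(int)

    def record_flow(self, src_mac: str, dst_ip: str, dst_port: int) -> None:
        self.flows[src_mac] += 1

    def is_anomalous(self, src_mac: str) -> bool:
        return self.flows[src_mac] > FLOW_RATE_THRESHOLD

def assign_vlan(src_mac: str, monitor: BehaviorMonitor) -> int:
    """The 'switch' enforcing at L2: anomalous hosts land in quarantine."""
    return QUARANTINE_VLAN if monitor.is_anomalous(src_mac) else PRODUCTION_VLAN
```

This works fine for the scanning-worm case (thousands of flows in a minute) but, as argued below, a host sending a handful of well-formed, application-level exploit requests never trips a counter like this.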

Alan Shimel:

  1. Alan freely admits that there are some technical "issues" with NAC such as implementation concerns with DHCP, static IP addresses, spoofed MAC addresses, NAT, etc.
  2. Alan points out that Richard did not mention that Ofir Arkin also suggests that utilizing a NAC solution based upon 802.1x is actually a robust solution.
  3. Alan alludes to the fact that people deploy NAC for various reasons and (quoting from a prior article) "…an important point is that NAC is not really geared towards stopping the determined hacker, but rather the inadvertent polluter."  Hence I believe he’s saying that 802.1x is the right NAC solution to use if you can, as it solves both problems, but if the latter is not your reason for deploying NAC, the other issues are not as important.
  4. Alan points to the fact that many of Richard’s references are quite dated (such as comments describing the 2003 acquisition of TippingPoint by 3Com as "recent") and that ultimately SNF is a soapbox upon which Richard can preach his dislike of NAC based upon "…trusting the endpoint to report its health."
  5. De-coupling network and host-based endpoint security is a bad idea, according to Alan, because you miss context and introduce/reinforce the information silos that exist today rather than allow for coordinated, consolidated and correlated security decisions to be made.
  6. Alan wishes to purchase Pot (unless it’s something better) from Richard and go for a long walk on the shore because Mr. Stiennon has "the good stuff" in his opinion, since Richard intimates that patching and configuration management have worked well and that zero-day attacks are a non-entity.
  7. Alan suggests that the technology "…we used to call behavior based IPS’s," which will pass "…to the switch to enforce policy," is ultimately "another failed technology" and that the vendors Richard cites in his BAIPS example (Arbor, Mazu, etc.) are all "…struggling in search of a solution for the technology they have developed."
  8. Alan maintains that the SNF "dream" lacks the ability to deliver any time soon because by de-coupling host and network security, you are hamstrung by the lack of "…context, analytics and network performance."
  9. Finally, Alan is unclear on the difference between Network Access Control (good) and Network Admission Control (bad).
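Since the Admission-vs-Access distinction keeps tripping everyone up, here’s a deliberately naive sketch of the two (every name, field and threshold is hypothetical): admission control is the one-time gate at connect time based on reported-plus-probed health; access control is the continuous, behavior-based enforcement that follows.

```python
from dataclasses import dataclass

REQUIRED_PATCH_LEVEL = 42  # hypothetical "current" baseline

@dataclass
class PostureReport:
    patch_level: int   # what the endpoint *claims* about itself
    av_running: bool

def active_probe_ok(host: str) -> bool:
    """Stand-in for the controller independently probing the endpoint —
    judging how it responds on the wire, not just what it reports.
    Stubbed to True for this sketch."""
    return True

def admit(host: str, report: PostureReport) -> bool:
    """Admission control: a one-time gate when the host joins the network."""
    return (report.patch_level >= REQUIRED_PATCH_LEVEL
            and report.av_running
            and active_probe_ok(host))

def still_allowed(flows_last_minute: int, threshold: int = 500) -> bool:
    """Access control: continuous, behavior-based enforcement afterwards.
    An admitted host that starts misbehaving loses access anyway."""
    return flows_last_minute <= threshold
```

The point of sketching them side by side: admission verifies a claim at one moment, access keeps judging behavior forever after, and neither one substitutes for the other.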

So again, I maintain that they are both right and both wrong.  I am the Switzerland of NAC/SNF! 

I’ll tell you why — not in any particular order or with a particular slant…

(the bout has now turned from a boxing contest into a three-man Mixed-Martial-Arts Octagon cage-match!  I predict a first round submission via tap-out):

  1. Firstly, endpoint and network security MUST be deployed together and communicate with one another to ultimately effect the most visible, transparent, collaborative and robust defense-in-depth security strategy available.  Nobody said it’s a 50/50 split, but having one without the other is silly.  There are things on endpoints that you can’t discover over a network.  Without some intelligence about the endpoint, the network cannot possibly determine as accurately the "intent" of the packets spewing from it.
  2. Network Admission Control solutions don’t necessarily blindly trust the endpoint — whether agent or agentless, the NAC controller takes not only what the host "reports" but also how it "responds" to active probes of its state.  While virtualization and covert rootkits have the capability to potentially hide from these probes, suggesting that an endpoint passes these tests does not mean that the endpoint is no longer subject to any other control on the network…
  3. Once Network Admission Control is accomplished, Network Access Control can be applied (continuously) based upon policy and behavioral analysis/behavioral anomaly detection.
  4. Patching doesn’t work — not because the verb is dysfunctional — but because the people who are responsible for implementing it are.  So long as these systems are not completely automated, we’re at risk.
  5. Zero day exploits are not overblown — they’re getting more surgically targeted and the remediation cycles are too long.  Network-based solutions alone cannot protect against anomalies that are not protocol or exploit/vulnerability signature driven…if the traffic patterns are not abnormal, the flags are OK and the content seemingly benign going to something "normal," it *will* hit the target.
  6. You’ll be surprised just how immature many of the largest networks on this planet are in terms of segmentation via VLANs and internal security…FLAT, FLAT, FLAT.  Scary stuff.  If you think the network "understands" the data that is transported over it or can magically determine what is more important by asset/risk relevance, I too would like some of that stuff you’re smoking.
  7. Relying on the SNF concept wherein the "switch enforces policy based on layer 2 and 3 information," and "is directed by the NetFlow based behavior monitoring system" is wholly shortsighted.  Firstly, layer 2/3 information is incredibly limited since most of the attacks today are application-level attacks and NOT L2/L3, and NetFlow data (even v9) is grossly coarse and doesn’t provide the context needed to effectively detect these sorts of incursions.  That’s why NetFlow today is mostly used in DDoS activities — because you see egregiously spiked usage and/or traffic patterns.  It’s a colander, not a sieve.
  8. Most vendors today are indeed combining IDP functionality with NBA/D to give a much deeper and contextual awareness across the network.  More importantly, big players such as ISS and Cisco include endpoint security and NAC (both "versions") to more granularly define, isolate and ameliorate attacks.  It’s not perfect, but it’s getting better.
  9. Advances in BA/NBA/NBAD are coming (they’re here, actually) and it will produce profound new ways of managing context and actionable intelligence when combined with optimized compute and forwarding engines which are emerging at the same time.   They will begin, when paired with network-wide correlation tools, to solve the holy trinity of issues: context, analytics and performance.
  10. Furthermore, companies such as ISS and StillSecure (Alan’s company) have partnered with switch vendors to actually do just what Richard suggests in concept.  Interestingly enough, despite the new moniker, the SNF concept is not new — Cisco’s SDN (albeit without NAC) heavily leverages the concepts described above from an embedded security perspective, and overlay security vendors such as ISS and Crossbeam also have solutions (in POC or available) in this area.
  11. Let’s be honest, just like the BA vendors Alan described, NAC is in just the same position — "struggling in search of a solution for the technology they have developed."  There are many reasons for deploying NAC: Pre- and Post-inspection/Quarantine/Remediation, and there are numerous ways of doing it: agent-based/agentless/in-line/out-of-band… The scary thing is, with so many vendors jumping in here and the 800 pound Gorilla (Cisco) even having trouble figuring it out, how long before NAC becomes a feature and not a market?  Spending a bunch of money on a solution (even without potential forklifts) to not "… stop the determined hacker, but rather the inadvertent polluter" seems a little odd to me.  Sure it’s part of a well defined network strategy, but it ain’t all that yet, either.
  12. With Cisco’s CSO saying things like "The concept of having devices join a network in which they are posture-assessed and given access to the network in a granular way is still in its infancy" and even StillSecure’s CTO (Mitchell Ashley) saying "…but I think those interested in NAC today are really trying to avoid infection spread by an unsuspecting network user rather than a knowledgeable intruder" it’s difficult to see how NAC can be considered a core element of a network security strategy WITHOUT something like SNF.
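As a quick illustration of point 7 above, consider what a flow record actually contains.  The sketch below is illustrative only (the fields mirror NetFlow v5, but this is not a real collector): with nothing but L3/L4 counters to work with, a detector can flag volume spikes and little else, so the low-and-slow application-level attack sails right through.

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    # A NetFlow-v5-style record: layer 3/4 metadata only.
    # No payload, no application context.
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int   # e.g. 6 = TCP
    packets: int
    bytes_: int

def looks_like_ddos(flows, baseline_bytes, spike_factor=10):
    """Flag a destination only when aggregate volume spikes.
    A surgical, low-volume application-level attack is invisible here."""
    totals = {}
    for f in flows:
        totals[f.dst_ip] = totals.get(f.dst_ip, 0) + f.bytes_
    return {ip for ip, b in totals.items() if b > baseline_bytes * spike_factor}

flows = [
    FlowRecord("10.0.0.5", "192.0.2.1", 40000, 80, 6, 1000000, 900000000),  # volumetric flood
    FlowRecord("10.0.0.9", "192.0.2.2", 40001, 80, 6, 12, 9000),            # single crafted request
]
print(looks_like_ddos(flows, baseline_bytes=1000000))  # only the flood is flagged
```

The crafted request to 192.0.2.2 is statistically indistinguishable from a normal page fetch at this layer, which is exactly why flow analysis needs to be paired with deeper IDP/content inspection.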

So, they’re both right and both wrong.  Oh, and by the way, Enterprise and Provider-class UTM solutions are combining ALL of this in a unified security service layer… FW, IDP, Anti-X, NBA(D) and SNF-like foundations.

[Tap before it breaks, boys!]

We need NAC and we need SNF and I’ve got the answer:

  1. Take some of Richard’s "good stuff"
  2. Puff, puff, pass
  3. Combine some of Alan’s NAC with Richard’s SNF
  4. Stir
  5. Bake for 30 minutes and you have one F’N (good) SNAC (it’s an anagram, get it!)

There you have it.

Chris

The Most Hysterical “Security by Obscurity” Example, Evah!

August 4th, 2006 No comments

Upsidedownebay
For those of you living under a rock for the last 15+ years, you may not have heard of Bruce Schneier.  He’s a brilliantly opinionated cryptographer, privacy advocate, security researcher, businessman, author and inadvertent mentor to many.  I don’t agree with everything he says, but I like the buttons he pushes.

I love reading his blog because his coverage of the issues today are diverse and profound and very much carry forth the flavor of his convictions.  Also, it seems Bruce really likes Squids…which makes this electronically-enabled Cepholopod-inspired security post regarding the theft of someone’s wireless connection that much more funny.

Here’s the gist: A guy finds that his neighbor is "stealing" his wireless Internet access.  Rather than just secure it, he "…runs squid with a trivial redirector that downloads images, uses mogrify to turn them upside down and serves them out of it’s local webserver."  Talk about security by obscurity!

That’s just f’in funny…so much so, I’m going to copy his idea, just like I did Bruce’s blog entry! 😉

Actually the best part is the comment from one "Matthew Skala" who performs an autopsy on the clearly insecure and potentially dangerous implementation of the scripts and potential for "…interesting results."  He’s just sayin’…

I don’t know all the details of how Squid interfaces to redirection
scripts, but I see that that redirection script passes the URL to wget
via a command line parameter without using "–" to terminate option
processing. It first parses out what’s supposed to be the URL using a
regular expression, but not a very cautious one. I wonder if it might
be possible to request a carefully-designed URL that would cause wget
to misbehave by interpreting the URL as an option instead of a URL. I
also see that it’s recognizing images solely by filename, so I wonder
if requesting a URL named like an image but that *wasn’t* an image,
could cause interesting results. Furthermore, it writes the images to
disk before flipping them – and I don’t even see any provision for
clearing out the cache of flipped images – so requesting a lot of very
large images, or images someone wouldn’t want to be caught possessing,
might be interesting.

Posted by: Matthew Skala  at August  4, 2006 08:42 AM

Read the whole thing (with configs.) here.
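Skala’s first point — the URL reaching wget as an unterminated command-line argument — is worth a sketch.  The helper below is hypothetical (the original redirector was a shell script, not Python), but it shows the two cheap fixes: sanity-check the URL’s shape, and terminate wget’s option parsing with "--" so an option-shaped request can’t be misread.

```python
from urllib.parse import urlparse

def build_wget_argv(url: str) -> list[str]:
    """Build a wget command line that can't be hijacked by an
    option-shaped 'URL' such as '--output-document=/etc/passwd'."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"refusing suspicious URL: {url!r}")
    # '--' ends option processing: everything after it is an operand.
    return ["wget", "--quiet", "--", url]

print(build_wget_argv("http://example.com/cat.jpg"))

# An option-shaped request is rejected instead of ever reaching wget:
try:
    build_wget_argv("--output-document=/tmp/evil")
except ValueError as err:
    print(err)
```

His other points deserve the same treatment: trust the returned Content-Type rather than the filename extension, and cap or expire the on-disk cache of flipped images.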

Chris

Categories: General Rants & Raves Tags:

More debate on SSO/Authentication

August 2nd, 2006 1 comment

Mike Farnum and I continue to debate the merits of single-sign-on and his opinion that deploying same makes you more secure. 

Rothman’s stirring the pot, saying this is a cat fight.  To me, it’s just two dudes having a reasonable debate…unless you know something I don’t [but thanks, Mike R., because nobody would ever read my crap unless you linked to it! ;)]

Mike’s position is that SSO does make you more secure and when combined with multi-factor authentication adds to defense-in-depth.   

It’s the first part I have a problem with, not so much the second, and I figured out why.  It’s the order of things that bugged me when Mike said the following:

But here’s s [a] caveat, no matter which way you go: you really need a single-signon solution backing up a multi-factor authentication implementation.

If he had suggested that multi-factor authentication should back up an SSO solution, I’d agree.  But he didn’t, and he continues not to by maintaining (I think) that SSO itself is secure and SSO + multi-factor authentication is more secure.

My opinion is a little different.  I believe that strong authentication *does* add to defense-in-depth, but SSO adds only depth of complexity, obfuscation and more moving parts, but with a single password on the front end.  More on that in a minute.

Let me clarify a point which is that I think from a BUSINESS and USER EXPERIENCE perspective, SSO is a fantastic idea.  However, I still maintain that SSO by itself does not add to defense-in-depth (just the opposite, actually) and does not, quantifiably, make you more "secure."  SSO is about convenience, ease of use and streamlined efficiency.

You may cut down on password resets, sure.  If someone locks themselves out, however, most of the time resets/unlocks involve self-service portals or telephone resets which are just as prone to brute force and social engineering as calling the helpdesk, but that’s a generalization and I would rather argue through analogy… ;)

Here’s the sticky part of why I think SSO does not make you more secure: it merely transfers the risks involved with passwords from one compartment to the next.

While that’s a valid option, it is *very* important to recognize that managing risk does not, by definition, make you more secure…sometimes managing risk means you accept or transfer it.  It doesn’t mean you’ve solved the problem, just acknowledged it and chosen to accept the fact that the impact does not justify the cost involved in mitigating it. ;)

SSO just says "passwords are a pain in the ass to manage; I’m going to find a better solution for managing them that makes my life easier."  SSO vendors claim it makes you more secure, but these systems can get very complex when implemented across an Enterprise with 200 applications, multiple user repositories and the need to integrate or federate identities, and it becomes difficult to quantify how much more secure you really are with all of these moving parts.

Again, SSO adds depth (of complexity, obfuscation and more moving parts) but with a single password on the front end.  Complex passwords on the back-end managed by the SSO system don’t do you a damned bit of good when some monkey writes the single password that unlocks the entire enterprise down on a sticky note.

Let’s take the fancy "SSO" title out of the mix for a second and consider today’s Active Directory/LDAP proxy functions which more and more applications tie into.  This relies on a single password via your domain credentials to authenticate directly to an application.  This is a form of SSO, and the reality is that all we’re doing when adding on an SSO system is supporting web and legacy applications that can’t use AD and proxying that function through SSO.

It’s the same problem all over again except now you’ve just got an uber:proxy.
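To make the "uber:proxy" point concrete, here is a toy model (all names hypothetical, and nothing like a production SSO product) of the trust shape: strong, unique credentials on the back end, all gated by one front-end secret.

```python
class SSOProxy:
    """Toy model of the 'uber:proxy' trust shape: strong, unique
    back-end credentials, all gated by a single front-end secret.
    (Illustrative only; real SSO products add encryption, auditing,
    session brokering, etc., but the shape is the same.)"""

    def __init__(self, master_password: str):
        self._master = master_password
        self._vault = {}  # app name -> complex back-end credential

    def enroll(self, app: str, backend_credential: str) -> None:
        self._vault[app] = backend_credential

    def sign_on(self, app: str, password: str) -> str:
        # One check guards every enrolled application.
        if password != self._master:
            raise PermissionError("bad master password")
        return self._vault[app]

sso = SSOProxy("Summer2006!")  # the one password on the sticky note
sso.enroll("mainframe", "x9$Lq-long-random-1")
sso.enroll("crm", "vB7#t-long-random-2")

# One leaked secret now opens every enrolled application:
for app in ("mainframe", "crm"):
    print(app, "->", sso.sign_on(app, "Summer2006!"))
```

The design observation: every property of the back-end passwords (length, uniqueness, rotation) is now only as strong as the single front-end secret and the proxy process holding the vault.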

Now, if you separate SSO from the multi-factor/strong authentication argument, I will agree that strong authentication (not necessarily multi-factor — read George Ou’s blog) helps mitigate some of the password issue, but the two are orthogonal.

Maybe we’re really saying the same thing, but I can’t tell.

Just to show how fair and balanced I am (ha!) I will let you know that prior to leaving my last employ, I was about to deploy an Enterprise-wide SSO solution.  The reason?  Convenience and cost.

Add to that transference of risk from the AD password policies to the SSO vendor’s, and transparency of process and metrics collection for justifying more heads.  It wasn’t going to make us any more secure, but it would make the users and the helpdesk happy and let us go figure out how we were going to integrate strong authentication to make the damned thing secure.

Chris

On two-factor authentication and Single-Sign-On…

August 1st, 2006 2 comments

Computer_key1_1
I’ve been following with some hand-wringing the on-going debates regarding the value of two-factor and strong authentication systems in addition to, or supplementing, traditional passwords.

I am very intent on seeing where the use cases that best fit strong authentication ultimately surface in the long term.  We’ve seen where they are used today, but I wonder if we, in the U.S., will ever be able to satisfy the privacy concerns raised by something like a smart-card-based national ID system in order to recognize the benefits of this technology.

Today, we see multi-factor authentication utilized for:  Remote-access VPN, disk encryption, federated/authenticated/encrypted identity management and access control, the convergence of physical and logical/information security…

[Editor’s Note: George Ou from ZDNet just posted a really interesting article on his blog relating how banks are "…cheating their way to [FFIEC] web security guidelines" by just using multiple instances of "something the user knows" and passing it off as "multifactor authentication."  His argument regarding multi-factor (supplemental) vs. strong authentication is also very interesting.]

I’ve owned/implemented/sold/evaluated/purchased every kind of two-factor / extended-factor / strong authentication system you can think of:

  • Tokens
  • SMS Messaging back to phones
  • Turing/image fuzzing
  • Smart Cards
  • RFID
  • Proximity
  • Biometrics
  • Passmark-like systems

…and there’s very little consistency in how they are deployed, managed and maintained.  Those pesky little users always seemed to screw something up…and it usually involved losing something, washing something, flushing something or forgetting something.
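For a sense of what’s inside the "Tokens" bullet above: the one-time codes those fobs display are nothing exotic, just an HMAC over a shared, incrementing counter, dynamically truncated to six digits.  A minimal sketch of RFC 4226 HOTP (the OATH event-based algorithm) using only the Python standard library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: the token and the server share `secret` and stay
    loosely synchronized on `counter`; each press yields the next code."""
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC's published test vectors use the ASCII secret "12345678901234567890":
secret = b"12345678901234567890"
print(hotp(secret, 0))  # 755224
print(hotp(secret, 1))  # 287082
```

Note what this implies about the failure modes below: losing, washing or flushing the fob only loses one copy of the secret; the server-side secret and counter are what really need protecting.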

The technology’s great, but like Chandler Howell says, there are a lot of issues that need reconsideration when it comes to their implementation that go well beyond what we think of today as simply the tenets of "strong" authentication and the models of trust we surround them with:

So here are some Real World goals I suggest we should be looking at.

  1. Improved authentication should focus on (cryptographically) strong Mutual Authentication, not just improved assertion of user Identity. This may mean shifts in protocols, it may mean new technology. Those are implementation details at this level.
  2. We need to break the relationship between location & security assumption, including authentication. Do we need to find a replacement for “somewhere you are?” And if so, is it another authentication factor?
  3. How does improved authentication get protection closer to the data? We’re still debating types of deadbolts for our screen door rather than answering this question.

All really good points, and ones that I think we’re just at the tip of discussing. 

Taking these first steps is an ugly and painful experience usually, and I’d say that the first footprints planted along this continuum do belong to the token authentication models of today.  They don’t work for every application, and there’s a lack of cross-pollination when you use one vendor’s token solution and wish to authenticate across boundaries (this is what OATH tries to solve).

For some reason, people tend to evaluate solutions and technology in a very discrete and binary modality: either it’s the "end-all, be-all, silver bullet" or it’s a complete failure.  It’s quite an odd assertion really, but I suppose folks always try to corral security into absolutes instead of relativity.

That explains a lot.

At any rate, there’s no reason to re-hash the fact that passwords suck and that two-factor authentication can provide challenges, because I’m not going to add any value there.  We all understand the problem.  It’s incomplete and it’s not the only answer. 

Defense in depth (or should it be wide and narrow?) is important and any DID strategy of today includes the use of some form of strong authentication — from the bowels of the Enterprise to the eCommerce applications used in finance — driven by perceived market need, "better security," regulations, or enhanced privacy.

However, I did read something on Michael Farnum’s blog here that disturbed me a little.  In his blog, Michael discusses the pros/cons of passwords and two-factor authentication and goes on to introduce another element in the Identity Management, Authentication and Access Control space: Single-Sign-On.

Michael states:

But here’s s [a] caveat, no matter which way you go: you really need a single-signon solution backing up a multi-factor authentication implementation.  This scenario seems to make a lot of sense for a few reasons:

  • It eases the administrative burdens for the IT department because,
    if implemented correctly, your password reset burden should go down to
    almost nil
  • It eases (possibly almost eliminates) password complaints and written down passwords
  • It has the bonus of actually easing the login process to the network and the applications

I know it is not the end-all-be-all, but multi-factor authentication
is definitely a strong layer in your defenses.  Think about it.

Okay, so I’ve thought about it and playing Devil’s Advocate, I have concluded that my answer is: "Why?"

How does Single-Sign-On contribute to defense-in-depth (besides adding another piece of hyphenated industry slang), short of lending itself to convenience for the user and the help desk?  Security is usually 1/convenience, so by that algorithm it doesn’t.

Now instead of writing down 10 passwords, the users only need one sticky — they’ll write that one down too!

Does SSO make you more secure?  I’d argue that in fact it does not — not now that the user has a singular login to every resource on the network via one password. 

Yes, we can shore that up with a strong-authentication solution, and that’s a good idea, but I maintain that SA and SSO are orthogonal, and one does not require the other.  The complexity of these systems can be mind boggling, especially when you consider the types of privileges these mechanisms often require in order to reconcile this ubiquitous access.  It becomes another attack surface.

There’s a LOT of "kludging" that often goes on with these SSO systems in order to support web and legacy applications and in many cases, there’s no direct link between the SSO system, the authentication mechanism/database/directory and ultimately the control(s) protecting as close to the data as you can.

This cumbersome process still relies on the underlying OS functionality and some additional add-ons to mate the authentication piece with the access control piece with the encryption piece with the DRM piece…

Yet I digress.

I’d like to see the RISKS of SSO presented along with the benefits if we’re going to consider the realities of the scenario in terms of this discussion.

That being said, just because it’s not the "end-all-be-all" (what the hell is with all these hyphens!?) doesn’t mean it’s not helpful… 😉

Chris