Author Archive

No Good Deed Goes Unpunished (Or Why NextGen DLP Is a Step On The Information Centric Ladder…)

March 19th, 2008

Rothman wrote a little ditty today commenting on a blog post I scribbled last week titled "The Walls Are Collapsing Around Information Centricity."

Information centricity – Name that tune.

Of course, the Hoff needs to pile on to Rich’s post about information-centric security. He even finds means to pick apart a number of my statements. Now that he is back from down under, maybe he could even show us some examples of how a DLP solution is doing anything like information-centricity. Or maybe I’m just confused by the uber-brain of the Hoff and how he thinks maybe 500 steps ahead of everyone else.

Based on my limited brain capacity, the DLP vendors can profile and maybe even classify the types of data. But that information is neither self-describing, nor is it portable. So once I make it past the DLP gateway, the data is GONE baby GONE.

In my world of information-centricity, we are focused on what the fundamental element of data can do and who can use it. It needs to be enforced anywhere that data can be used. Yes, I mean anywhere. Name that tune, Captain Hoff. I’d love to see something like this in use. I’m not going to be so bold as to say it isn’t happening, but it’s nothing I’ve seen before. Please please, edumacate me.

I’m always pleased when Uncle Mike shows me some blog love, so I’ll respond in kind, if only to defend my honor.  Each time Mike "compliments" me on how forward-looking I am, it’s usually accompanied by a gnawing sense that his use of "uber-brained" is Georgian for "dumbass schlock." 😉

Yes, you’re confused by my "uber-brain…" {roll eyes here}

I believe Mike missed a couple of key words in my post, specifically that the next generation of solutions would start to deliver the functionality described in both my and Rich’s posts.

What I referred to was the evolution of the current generation of DLP solutions, along with the incremental re-tooling of DRM/ERM, ADMP, CMP, and data classification at the point of creation and across the wire, which together get us closer to being able to enforce policy across a greater landscape.

The current generation of technologies/features such as DLP do present useful solutions in certain cases but in their current incarnation are not complete enough to solve all of the problems we need to solve.  I’ve said this many times.  They will, however, evolve, which is what I was describing.

Mike is correct that today data is not self-describing, but that’s a problem that we’ll need standardization to remedy — a common metadata format would be required if cross-solution policy enforcement were to be realized.  Will we ever get there?  It’ll take a market leader to put a stake in the ground to get us started, for sure (wink, wink.)
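To make "self-describing" a bit more concrete, here’s a minimal sketch of the kind of metadata envelope such a standard might define.  To be clear: no such standard exists today (that’s my point), and every field name and the toy HMAC signature below are invented purely for illustration:

```python
import hashlib
import hmac
import json

# Hypothetical sketch of a "self-describing" data envelope. No such
# standard exists today; every field name here is invented, and a real
# implementation would use PKI signatures, not a shared secret.
SIGNING_KEY = b"enterprise-policy-signing-key"

def wrap(payload: bytes, classification: str, allowed_uses: list) -> dict:
    """Bind policy metadata to the data itself so any downstream
    enforcement point (DLP gateway, endpoint agent, CMP engine)
    can read and enforce the same policy."""
    envelope = {
        "classification": classification,       # e.g. "confidential"
        "allowed_uses": allowed_uses,           # e.g. ["view", "print"]
        "payload_sha256": hashlib.sha256(payload).hexdigest(),
    }
    # Sign the metadata so it can't be silently stripped or altered.
    canonical = json.dumps(envelope, sort_keys=True).encode()
    envelope["sig"] = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return envelope

doc = b"Q3 acquisition target list"
print(json.dumps(wrap(doc, "confidential", ["view"]), indent=2))
```

The point is portability: the policy rides along with the data itself rather than living solely in a gateway’s rule base, so any enforcement point that understands the format can apply the same controls.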

As both Mogull and I alluded to in our posts and our SOURCEBoston presentation, we’re keyed into many companies in stealth mode, as well as the roadmaps of many established players in this space, and the solutions emerging at the intersection of these technologies (what is becoming CMP) are very promising.

That shouldn’t be mistaken for near-term success, but since my job is to look 3-5 years out on the horizon, that’s what I wrote about.  Perhaps Mike mistook my statement that companies are beginning to circle the wagons on this issue to mean that solutions are available now.  That’s obviously not the case.

Hope that helps, Mike.

/Hoff

Categories: Information Centricity

Off the Grid For Almost a Week…

March 11th, 2008

After Mogull’s and my presentation tomorrow @ SOURCEBoston, I’m bolting to Logan to catch the first leg of travel that amounts to 27 hours in transit to my sister’s wedding in New Zealand.

I’m making sacrifices to the travel gods this evening in hopes that I catch my connector in L.A., else I’m noodles.

I haven’t seen my sister and her brood in quite some time, so I reckoned why not punctuate a lengthy period of absence by walking her down the "aisle" a continent or so away?

I use the term "aisle" loosely as it’s really likely something akin to an oceanic garden path.  You see, she’s getting hitched on one of New Zealand’s beautiful north island iron-based black sand beaches.  You may have seen the cheap imitations in Hawaii (they’re ground fresh for the tourists daily, )but It’s been far too long since I’ve curled my toes in the onyx bits of old earth.

I grew up in New Zealand and I haven’t been back for any reasonable visit in more than 10 years except for the occasional stop-over on the way to the Rock (Australia.)

So, I’ll likely be off the air until next Tuesday because I’ll be reconnecting with my family, my adopted Maori roots, and my favorite Kiwi beer.  When I come back, I’ll likely be speaking funny.  I’m sure you’ll understand.

For some reason, I’m guessing the world will go on without me.

I just hope Mogull doesn’t try and subvert my sprinkler system.  That probably came out wrong on SO many fronts…but I don’t have time to explain…

/Hoff

Categories: Travel

The Walls Are Collapsing Around Information Centricity

March 10th, 2008

Since Mogull and I collaborate quite a bit on projects and share many thoughts and beliefs, I wanted to make a couple of comments on his last post on Information Centricity and remind the audience at home of a couple of really important points.

Rich’s post was short and sweet regarding the need for Information-Centric solutions with some profound yet subtle guideposts:

For information-centric security to become a reality, in the long term it needs to follow the following principles:

  1. Information (data) must be self-describing and defending.
  2. Policies and controls must account for business context.
  3. Information must be protected as it moves from structured to unstructured, in and out of applications, and changing business context.
  4. Policies must work consistently through the different defensive layers and technologies we implement.

I’m not convinced this is a complete list, but I’m trying to keep to my new philosophy of shorter and simpler. A key point that might not be obvious is that while we have self-defending data solutions, like DRM and label security, for success they must grow to account for business context. That’s when static data becomes usable information.

Mike Rothman gave an interesting review of Rich’s post:


The Mogull just laid out your work for the next 10 years. You just probably don’t know it yet. Yes, it’s all about ensuring that the fundamental elements of your data are protected, however and wherever they are used. Rich has broken it up into 4 thoughts. The first one made my head explode: "Information (data) must be self-describing and defending."

Now I have to clean up the mess. Sure things like DRM are a bad start, and have tarnished how we think about information-centric security, but you do have to start somewhere. The reality is this is a really long term vision of a problem where I’m not sure how you get from Point A to Point B. We all talk about the lack of innovation in security. And how the market just isn’t exciting anymore. What Rich lays out here is exciting. It’s also a really really really big problem. If you want a view of what the next big security company does, it’s those 4 things. And believe me, if I knew how to do it, I’d be doing it – not talking about the need to do it.

The comments I want to make are three-fold:

  1. Rich is re-stating, and Mike’s head is exploding around, the exact concepts that Information Survivability represents and the Jericho Forum trumpets in their Ten Commandments.  In fact, you can read all about that in prior posts I made on the subjects of the Jericho Forum, re-perimeterization, information survivability and information centricity.  I like this post on a process I call ADAPT (Applied Data and Application Policy Tagging) a lot.

    For reference, here are the Jericho Forum’s Ten Commandments. Please see #9:

    [Images: the Jericho Forum’s Ten Commandments]

  2. As Mike alluded, DRM/ERM has received a bad rap because of how it’s implemented — which has left a sour taste in consumers’ mouths.  As a business tool, it is the precursor of information centric policy and will become the lynchpin in how we will ultimately gain a foothold on solving the information resiliency/assurance/survivability problem.
  3. As to the innovation and dialog that Mike suggests is lacking in this space, I’d suggest he’s suffering from a bit of Shitake-ism (a-la mushroom-itis.)  The next generation of DLP solutions that are becoming CMP (Content Monitoring and Protection — a term I coined) are evolving to deal with just this very thing.  It’s happening.  Now.

    Further to that, I have been briefed by some very, very interesting companies that are in stealth mode who are looking to shake this space up as we speak.

So, prepare for Information Survivability, increased Information Resilience and assurance.  Coming to a solution near you…

/Hoff

The Unbearable Lightness of Being…Virtualized

March 10th, 2008

My apologies to Pete Lindstrom for not responding sooner to his comment regarding my "virtualization & defibrillation" post, and a hat-tip to Rothman for the reminder.

Pete was commenting on a snippet from my larger post dealing with the following assertion:

The Hoff-man pontificates yet again:

Firewalls, IDS/IPSs, UTM, NAC, DLP — all of them have limited visibility in this rapidly "re-perimeterized" universe in which our technology operates, and in most cases we’re busy looking at uninteresting and practically non-actionable things anyway.  As one of my favorite mentors used to say, "we’re data rich, but information poor."

In a general sense, I agree with this statement, but in a specific sense, it isn’t clear to me just how significant this need is. After all, we are talking today about 15-20 VMs per physical host max and I defy anyone to suggest that we have these security solutions for every 15-20 physical nodes on our network today – far from it.

Let’s shed a little light on this with some context regarding *where* in the network we are virtualizing as I don’t think I was quite clear enough.

Most companies have begun to virtualize their servers driven by the economic benefits of consolidation and have done so in the internal zones of the networks.  Most of these networks are still mostly flat from a layer 3 perspective, with VLANs providing isolation for mostly performance/QoS reasons.

In discussions with many companies ranging from the SME to the Fortune 10, only a minority are virtualizing hosts and systems placed within traditional DMZ’s facing external networks.

From that perspective, I agree with Pete.  Most companies are still grappling with segmenting their internal networks based on asset criticality, role, function or for performance/security reasons, so it stands to reason that internally we don’t, in general, see "…security solutions for every 15-20 physical nodes on our network today," but we certainly do in externally-facing DMZ’s.

However, that’s changing — and will continue to — as we see more and more basic security functionality extend into switching fabrics and companies begin to virtualize security policies internally.  The next generation of NAC/NAP is a good example.  Internal segmentation will drive the need for logically applying combinations of security functions virtualized across the entire network space.

Furthermore, there are companies who are pushing well past 15-20 VMs per host, and that measure is really not that important.  What is important is the number of VMs on hosts per cluster.  More on that later.

That said, it isn’t necessarily a great argument for me simply to suggest that since we don’t do it today in the physical world that we don’t have to do it in the virtual world. The real question in my mind is whether folks are crossing traditional network zones on the same physical box. That is, do trust levels differ among the VMs on the host? If they don’t, then not having these solutions on the boxes is not that big a deal – certainly useful for folks who are putting highly-sensitive resources in a virtual infrastructure, but not a lights-out problem.

The reality is that as virtualization platforms become more ubiquitous, internal segmentation and stricter definition of host grouping in both virtualized and non-virtualized server clusters will become an absolute requirement, because it’s the only way security policies can be applied given today’s technology.

This will ultimately then drive to the externally-facing DMZ’s over time.  It can’t not.  However, today, the imprecise measurement of quantifiable risk (or lack thereof) combined with exceptionally large investments in "perimeter" security technologies keeps most folks at arm’s length from virtualizing their DMZ’s from a server perspective.  Lest we forget our auditor friends and things like PCI DSS, which complicate the issue.

Others might suggest that with appropriately architected multi-tier environments combined with load balanced/clustered server pools and virtualized security contexts, the only difference between what they have today and what we’re talking about here is simply the hypervisor.  With blade servers starting to proliferate in the DMZ, I’d agree.

Until we have security policies that are, in essence, dynamically instantiated as they are described by the virtual machines and the zones into which they are plumbed, many feel they are still constrained by trying to apply static ACL-like policies to incredibly dynamic compute models.

So we’re left with constructing little micro-perimeter DMZ’s throughout the network. 
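To illustrate what "dynamically instantiated" might look like in practice, here’s a hedged sketch; the zone names, VM attributes and rule format are all invented, and no shipping product works quite this way today:

```python
# Illustrative sketch only: derive per-VM enforcement rules from the
# VM's declared attributes at the moment it's plumbed into a zone.
# Zone names, attributes and the rule format are all invented.

ZONE_POLICIES = {
    "dmz":      {"inbound": ["443/tcp"], "ips_profile": "strict"},
    "internal": {"inbound": ["443/tcp", "1433/tcp"], "ips_profile": "standard"},
}

def instantiate_policy(vm: dict) -> dict:
    """Build a policy from the VM's zone and trust level so the same
    rules follow the VM wherever it migrates."""
    policy = dict(ZONE_POLICIES[vm["zone"]])
    if vm["trust_level"] == "high":
        policy["inbound"] = []        # high-trust data tier: no direct ingress
    policy["vm"] = vm["name"]
    return policy

for vm in ({"name": "web01", "zone": "dmz",      "trust_level": "low"},
           {"name": "db01",  "zone": "internal", "trust_level": "high"}):
    print(instantiate_policy(vm))
```

The design point is that enforcement derives from attributes the VM declares (zone, trust level), not from whatever IP address or physical port it happens to land on after a migration.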

If the VMs do cross zones, then it is much more important to have virtual security solutions. In fact, we don’t recommend using virtual security appliances as zone separators simply because the hypervisor’s additional attack surface introduces unknown levels of risk in an environment. (Similarly, in the physical world we don’t recommend switch-based security solutions as zone separators either).

What my research is finding in discussion with some very aggressive companies who are embracing virtualization is that starting internally — and for some of the reasons like performance which Pete mentions — companies are beginning to set up clusters of VMs that do indeed "cross the streams." 😉

I am told by our virtualization technical expert that there may be performance benefits to commingling resources in this way, so at some point it will be great to have these security solutions available. I suppose we should keep in mind that any existing network security solution that isn’t using proprietary OS and/or hardware can migrate fairly easily into a virtual security appliance.

There are most certainly performance "benefits" — I call them band-aids — to co-mingling resources in a single virtual host.  Given the limitations of today’s virtual networking and the ever-familiar I/O bounding we experience with higher-than-acceptable latency in mainstream Ethernet-based connectivity, sometimes this is the only answer.  This is pushing folks to consider I/O virtualization technologies and connectivity other than Ethernet such as InfiniBand.

The real issue I have with the old "just run your tired old security solutions as a virtual appliance within a virtual host" is the exact reason I alluded to in the quote Pete singled out.  We lose visibility, coherence and context using this model.  This is the exact reason for something like VMsafe, but it will come with a trade-off I’ve already highlighted.

The thinner the hypervisor, the more internals will need to be exposed via API to external functions.  While the VMM gets more compact, the attack surface expands via the API.

Keep in mind that we have essentially ignored the whole de-perimeterization, network without borders, Jericho Forum predisposition to minimize these functions anyway. That is, we can configure the functional VMs themselves with better security capabilities as well.

Yes, we have ignored it…and to our peril.  Virtualization is here. It will soon be in your DMZ if it isn’t already.  If you’re not seriously reconsidering the implications that virtualization (of servers, storage, networking, security, applications, data, etc.) has on your datacenter — both in your externally-facing segments and your internal networks — *now*, you are going to be in a world of hurt within 18 months.

/Hoff

Categories: Virtualization

I Love the Smell of Big Iron In the Morning…

March 9th, 2008

Does Not Compute…

I admit that I’m often fascinated by the development of big iron and I also see how to some this seems at odds with my position that technology isn’t the answer to the "security" problem.  Then again, it really depends on what "question" is being asked, what "problem" we’re trying to solve and when we expect to solve them.

It’s pretty clear that we’re still quite some time off from having secure code, solid protocols, brokered authentication, encryption, and information-centric controls that provide the assurance dictated by the policies described by the information itself.

In between now and then, we see the evolution of some very interesting "solutions" from those focused on the network and host perspectives.  It’s within this bubble that things usually get heated between proponents who argue that innovation in networking and security is constrained to software and those who maintain that higher performance, efficacy and coverage can only be achieved with the horsepower found in hardware.

I always find it interesting that the networking front prompts argument in this vein, but nobody seems to blink when we see continued development in mainframes — even in this age of Web3.0, etc.  Take IBM’s Z10, for example.  What’s funny is that a good amount of the world still ticks due to big iron in the compute arena despite the advances of distributed systems, SOA, etc., so why do we get so wrapped up when it comes to big iron in networking or security?

I dare you to say "value." 😉

I’ve had this argument on many fronts with numerous people and realized that in most cases what we were missing was context.  There is really no argument to "win" here, but rather a need for examination of what most certainly is a manifest destiny of our own design and the "natural" phenomena associated with punctuated equilibrium.

An Example: Cisco’s New Hardware…and Software to Boot [it.]

Both camps in the above debate would do well to consider the amount of time and money a bellwether in this space — Cisco —  is investing in a balanced portfolio of both hardware and software. 

If we start to see how the pieces are being placed on Cisco’s chess board, it makes for some really interesting scenarios:

Many will look at these developments and simply dismiss them as platforms that will only serve the very most demanding of high-end customers, arguing that COTS hardware trumps the price/performance index when compared with specialty high-performance iron such as this.

This is a rather short-sighted perspective and one that cyclically has proven inaccurate.   

The notion of hardware versus software superiority is a short term argument which requires context.  It’s simply silly to argue one over the other in generalities.  If you’d like to see what I mean, I refer you once again to Bob Warfield’s "Multi-Core Crisis" meme.  Once we hit cycle limits on processors we always find that memory, bus and other I/O contention issues arise.  It ebbs and flows based upon semiconductor fabrication breakthroughs and the evolution and ability of software and operating systems to take advantage of them.

Toss a couple of other disruptive and innovative technologies into the mix and the landscape looks a little more interesting. 

It’s All About the Best Proprietary Open Platforms…

I don’t think anyone — including me at this point — will argue that a good amount of "security" will just become a checkbox in (and I’ll use *gasp* Stiennon’s language) the "fabric."  There will always be point solutions to new problems that will get bolted on, but most of the security solutions out there today are becoming features before they mature to markets due to this behavior.

What’s interesting to me is where the "fabric" is and what form it will take.

If we look downrange and remember that Cisco has openly discussed its strategy of de-coupling its operating systems from hardware in order to provide for a more modular and adaptable platform strategy, all this investment in hardware may indeed seem to support this supposition.

If we also understand Cisco’s investment in virtualization (a-la VMware and IOS-XE) as well as how top-side investment trickles down over time, one could easily see how circling the wagons around both hardware for high-end core/service provider platforms [today] and virtualized operating systems for mid-range solutions will ultimately yield greater penetration and coverage across markets.

We’re experiencing a phase shift in the periodic oscillation associated with where in the stack networking vendors see an opportunity to push their agenda, and if you look at where virtualization and re-perimeterization are pushing us, the "network is the computer" axiom is beginning to take shape again. 

I find the battle for the datacenter OS between the software-based virtualization players and the hardware-based networking and security giants absolutely delicious, especially when you consider that the biggest in the latter (Cisco) is investing in the biggest of the former (VMware.)

They’re both right.  In the long term, we’re all going to end up with 4-5 hypervisors in our environments supporting multiple modular, virtualized and distributed "fabrics."  I’m not sure that any of that is going to get us close to solving the real problems, but if you’re in the business of selling tin or the wrappers that go on it, you can smile…

Imagine a blade server from your favorite vendor with embedded virtualization capabilities coupled with dedicated network processing hardware supporting your favorite routing/switching vendor’s networking code and running any set of applications you like — security or otherwise — with completely virtualized I/O functions forming a grid/utility compute model.*

Equal parts hardware, software, and innovation.  Cool, huh?

Now, about that Information-Centricity Problem…

*The reality is that this is what attracted me to Crossbeam: custom-built high-speed networking hardware, generic compute stacks based on Intel-reference designs, both coupled with a Linux-based operating system that supports security applications from multiple sources as an on-demand scalable security services layer virtualized across the network.

Trouble is, others have caught on now…

Don’t Hassle the Hoff: Upcoming Speaking Engagements

March 5th, 2008

Hey y’all.  Here are some of my upcoming speaking engagements.  If you’re in town or going to any of the conferences, look me up:

  • SourceBoston: Boston MA, March 12th, 2008*
  • SecureWorld Expo: Boston MA, March 26th, 2008
  • RSA Security 08: San Francisco CA, April 7-11
  • Troopers08: Munich, Germany, April 23-24, 2008
  • Financial Information Security Decisions: NY NY, June 19-20th
  • IT Security World: San Francisco CA, September 15-17*

Hope to see you there.  I’m sure there will be others between April and June.

* Rich Mogull and I will be co-presenting at these events.

Categories: Press, Speaking Engagements

VMware’s VMsafe: Security Industry Defibrillator…Making Dying Muscle Twitch Again.

March 2nd, 2008

Nurse, 10 cc’s of Adrenalin, stat!

As I mentioned in a prior posting, VMware’s VMsafe has the potential to inject life back into the atrophied and withering heart muscle of the security industry and raise the prognosis from DOA to a vital economic revenue stream once more.

How?  Well, the answer to this question really comes down to whether you believe that keeping a body on assisted life support means that the patient is living or simply alive, and the same perspective goes for the security industry.

With the inevitable consolidation of solutions and offerings in the security industry over the last few years, we have seen the commoditization of many markets as well as the natural emergence of others in response to the ebb and flow of economic, technological, cultural and political forces.

One of the most impactful disruptive and innovative forces that is causing arrhythmia in the pulse of both consumers and providers and driving the emergence of new market opportunities is virtualization.

For the last two years, I’ve been waving my hands about the fact that virtualization changes everything across the information lifecycle.  From cradle to grave, the evolution of virtualization will profoundly change what, where, why and how we do what we do.

I’m not claiming that I’m the only one, but it was sure lonely from a general security practitioner’s perspective up until about six months ago.  In the last four months, I’ve given two keynotes and three decently visible talks on VirtSec, and I have 3-4 more tee’d up over the next 3 months, so somebody’s interested…better late than never, I suppose.

How’s the patient?

For the purpose of this post, I’m going to focus on the security implications of virtualization and simply summarize by suggesting that virtualization up until now has quietly marked a tipping point where we see the disruption stretch security architectures and technologies to their breaking point and in many cases make much of our invested security portfolio redundant and irrelevant.

I’ve discussed why and how this is the case in numerous posts and presentations, but it’s clear (now) to most that the security industry has been out of phase with what has plainly been a well-signaled (r)evolution in computing.

Is anyone really surprised that we are caught flat-footed again?  Sorry to rant, but…

This is such a sorry indicator of why things are so terribly broken with "IT/Information Security" as it stands today; we continue to try and solve short term problems with even shorter term "solutions" that do nothing more than perpetuate the problem — and we do so in such a horrific display of myopic dissonance that it’s a wonder we function at all.  Actually, it’s a perfectly wonderful explanation as to why criminals are always 5 steps ahead — they plan strategically while acting tactically against their objectives and aren’t afraid to respond to the customers proactively.

So, we’ve got this fantastic technological, economic, and cultural transformation occurring over the last FIVE YEARS (at least,) and the best we’ve seen as a response from most traditional security vendors is that they have thinly marketed their solutions as "virtualization ready" or "virtualization aware" when in fact these are simply hollow words for how to make their existing "square" products fit into the "round" holes of a problem space that virtualization exposes and creates.

Firewalls, IDS/IPSs, UTM, NAC, DLP — all of them have limited visibility in this rapidly "re-perimeterized" universe in which our technology operates, and in most cases we’re busy looking at uninteresting and practically non-actionable things anyway.  As one of my favorite mentors used to say, "we’re data rich, but information poor."

The vendors in these example markets — with or without admission — are all really worried about what virtualization will do to their already shrinking relevance.  So we wait.

Doctor, it hurts when I do this…

VMSafe represents a huge opportunity for these vendors to claw their way back to life, making their solutions relevant once more, and perhaps even more so.

Most of the companies who have so far signed on to VMsafe will, as I mentioned previously, need to align roadmaps and release new or modified versions of their product lines to work with the new APIs and management planes.

This is obviously a big deal, but one that is unavoidable for these companies — most of which are clumsy and generally not agile or responsive to third parties.  However, you don’t get 20 of some of the biggest "monoliths" of the security world scrambling to sign up for a program like VMsafe just for giggles — and the reality is that the platform versions of VMware’s virtualization products that will support this technology aren’t even available yet.

I am willing to wager that you will, in extremely short time given VMware’s willingness to sign on new partners, see many more vendors flock to the program.  I further maintain that despite their vehement denial, NAC vendors (with pressure already from the oncoming tidal wave of Microsoft’s NAP) will also adapt their wares to take advantage of this technology for reasons I’ve outlined here.

They literally cannot afford not to.

I am extremely interested in what other virtualization vendors’ responses will be — especially Citrix.  It’s pretty clear what Microsoft has in mind.  It’s going to further open up opportunities for networking vendors such as Cisco, f5, etc., and we’re going to see the operational, technical, administrative, "security" and governance lines  blur even further.

Welcome back from the dead, security vendors, you’ve got a second chance in life.  I’m not sure it’s warranted, but it’s "natural" even though we’re going to end up with a very interesting Frankenstein of a "solution" over the long term.

The Doctor prescribes an active lifestyle, healthy marketing calisthenics, a diet with plenty of roughage, and jumping back on the hamster wheel of pain for exercise.

/Hoff

Why Security Awareness Campaigns Matter

February 29th, 2008

The topic of security awareness training has floated to the surface in a number of related discussions lately, and I’m compelled to comment on what can only be described as a diametrically opposed set of opinions on the matter.

Here’s a perfect illustration taken from some comments on this blog entry where I suggested that many CIOs simply think that "awareness initiatives are good for sexual harassment and copier training, not security."

Firstly, here is someone who thinks that awareness training is a waste of time:

As to educating users, it’s one of the dumbest ideas in security. As Marcus Ranum has famously pointed out, if it was going to work…it would have worked by now. If you are relying on user education as part of your strategy, you are doomed. See "The Six Dumbest Ideas in Security" for a fine explanation of this.

…and here is the counterpoint offered by another reader suggesting a different perspective:

Completely disagree. Of course you’re not going to get through to everyone, but if you get through to maybe 80-90% then that’s an awful lot of attacks you’ve prevented, with actually very little effort. The reason I think it hasn’t worked yet is because people are not doing it effectively, or that they’ll ‘get around to it’ once the CEO has signed off all the important projects, the ones that mean the IT Security team get to play with cool new toys.

What’s my take?

I think this is very much a case of setting the appropriate expectations for what the deliverables and results should be from the awareness training.  I think security awareness and education can bear substantial fruit.  Further, like the second reader, I believe the goals must be appropriately and realistically set; suggesting that 100% of the trainees will yield 100% compliance is simply nonsense.


Again, we see that too often the "success" of a security initiative is only evaluated on a binary scale of 0 or 100%, which is simply stupid. We all know and accept that we’ll never be 100% secure, so why would we suggest that 100% of our employees will remember and act on 100% of their awareness training?

What if I showed (and I have) that the number of tailgates through access-controlled entry points went down over 30% since awareness training?  What if I showed that the number of phishing attempt reports to IT Security increased 62% and click-throughs decreased by the same amount since awareness training?  What if I showed that the number of reports of lost/stolen company property decreased by 18% since awareness training?  How about when all our developers were sent to SDLC training and our software deficiencies per line of code went down double digits?

What if I told you that I spent very little money and time implementing this training, did it both interactively and through group meetings, and that everyone was accountable and felt more empowered because we linked the topics to the things that matter to THEM as well as the company?
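Measuring this stuff is not rocket science, by the way.  Here’s an illustrative sketch of the before/after arithmetic; the raw counts are made up, chosen only so the deltas mirror the percentages I cited above:

```python
# Toy before/after math for awareness-program metrics. The raw counts
# are invented; only the resulting deltas mirror the figures cited above.

def pct_change(before: float, after: float) -> float:
    return (after - before) / before * 100

metrics = {
    "tailgates per month":          (100, 70),  # ~30% drop
    "phishing reports per quarter": (50, 81),   # ~62% rise
    "lost/stolen property reports": (22, 18),   # ~18% drop
}

for name, (before, after) in metrics.items():
    print(f"{name}: {pct_change(before, after):+.0f}%")
```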

As to Marcus’ arguments regarding the efficacy of education/awareness, he’s basically suggesting that the reason awareness doesn’t work is (1) human stupidity and (2) a failure to properly implement technology that should ultimately prevent #1 from even being an issue.

I suggest that as #2 becomes less of an issue, as people get smarter about how they deploy technology (which is also an "awareness" problem) and the technology gets better, then implementing training and education for issue #1 becomes the element that will help reduce the residual gap.

To simply dismiss security awareness training as a waste of time is short-sighted, and I’ve yet to find anyone who relies solely upon awareness training as their only strategy for securing their assets. It’s one of many tools that can effectively be used to manage risk.

What’s your take?

Categories: Security Awareness

VMware’s VMsafe: The Good, the Bad, the Bubbly…

February 28th, 2008

Back in August before VMworld 2007, I wrote about the notion that given Cisco’s investment in VMware, we’d soon see the ability for third parties to substitute their own virtual switches for VMware’s. 

Further, I discussed the news that VMware began to circulate regarding their release of an API originally called Vsafe that promised to allow third party security and networking applications to interact with functions exposed by the Hypervisor. 

Both of these points really put some interesting wrinkles — both positive and potentially negative — in how virtualization (and not just VMware’s) will continue to evolve as the security and networking functions evolve along with it.

What a difference a consonant makes…

Several months later, they’ve added the consonant ‘m’ to the branding and VMsafe is born.  Sort of.  However, it’s become more than ‘just’ an API.  Let’s take a look at what I mean.

What exactly is VMsafe? Well, it’s a marketing and partnership platform, a business development lever, an architecture, a technology that provides an API, and a state of mind:

VMsafe is a new program that leverages the properties of VMware Infrastructure to protect machines in ways previously not possible with physical machines. VMsafe provides a new security architecture for virtualized environments and an application program interface (API)-sharing program to enable partners to develop security products for virtualized environments.

What it also provides is a way to make many existing, older and consolidating security technologies relevant in the virtualized world, given that without it their long-term value is suspect:

The VMsafe Security Architecture provides an open approach that gives security vendors the ability to leverage the inherent properties of virtualization in their security offerings.

Customers running their businesses on VMware Infrastructure will be assured that they are getting the best protection available – even better than what they have on physical infrastructure.

VMsafe adds a new layer of defense that complements existing physical security solutions such as network and host protection, shielding virtual machines from a variety of network, host and applications threats. This additional layer of protection can help enterprise organizations to dramatically increase the security posture of their IT environments.

There is hope, then, that we may get back a good deal of the visibility we lost given the limited exposure existing security solutions have across virtualized environments.

Of course, this good news will be limited to those running VMware’s solutions and re-tooled versions of your favorite security vendor’s software.  What that means for other virtualization platforms is a little more suspect.  Since it requires the third party software to be re-written/adapted to use the VMsafe API, I can’t see many jumping on the wagon to support 3-4 VMM platforms.

Ack!  This is why we need a virtualization standard like OVF and a cross-platform signaling, telemetry and control plane definition!

So how does VMsafe work?

VMsafe enables third-party security products to gain the same visibility as the hypervisor into the operation of a virtual machine to identify and eliminate malware, such as viruses, trojans and key-loggers. For instance, security vendors can leverage VMsafe to detect and eliminate malware that is undetectable on physical machines. This advanced protection is achieved through fine-grained visibility into the virtual hardware resources of memory, CPU, disk and I/O systems of the virtual machine that can be used to monitor every aspect of the execution of the system and stop malware before it can execute on a machine to steal data.

VMsafe enables partners to build a virtualization-aware security solution in the form of a security virtual machine that can access, correlate and modify information based on the following virtual hardware:

  • Memory and CPU: VMsafe provides introspection of guest VM memory pages and CPU states.
  • Networking: Network packet-filtering for both in-hypervisor and within a Security VM.
  • Process execution (guest handling): in-guest, in-process APIs that enable complete monitoring and control of process execution.
  • Storage: Virtual machine disk files (VMDK) can be mounted, manipulated and modified as they persist on storage devices.
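To give a flavor of the model (a security VM inspecting a guest from the outside), here’s a hedged sketch.  The VMsafe API documentation isn’t public yet, so every class and method below is hypothetical, standing in for whatever introspection interface actually ships:

```python
# Hypothetical sketch of a security VM scanning a guest from outside
# via hypervisor introspection. The real VMsafe API is not public;
# every class and method here is invented to illustrate the model.

MALWARE_SIGNATURES = [b"keylogger_v1", b"\x90\x90\xcc\xfa"]  # toy patterns

class GuestIntrospection:
    """Stand-in for a hypervisor-provided introspection handle."""
    def __init__(self, pages):
        self._pages = pages               # simulated guest memory pages

    def read_page(self, n):
        return self._pages[n]

    def page_count(self):
        return len(self._pages)

    def suspend_guest(self):              # enforcement hook
        print("guest suspended pending remediation")

def scan(guest):
    """Sweep guest memory for known signatures without any agent
    running inside the guest itself."""
    for n in range(guest.page_count()):
        page = guest.read_page(n)
        for sig in MALWARE_SIGNATURES:
            if sig in page:
                print(f"hit: signature {sig!r} in page {n}")
                guest.suspend_guest()
                return

scan(GuestIntrospection([b"innocuous page", b"payload keylogger_v1 here"]))
```

The interesting design point is exactly the one the marketing copy glosses over: the scanner runs outside the guest, so malware inside the guest has nothing local to disable.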

So where do we go from here?

With nothing more than a press release and some flashy demos, it’s a little early to opine on the extensibility of VMsafe, but I am encouraged by the fact that we will have some more tools in the arsenal, even if they are, in essence, re-branded versions of many that we already have.

However, engineering better isolation combined with brokered visibility and specific authorized/controlled access to the VMM is a worthy endeavor that yields all sorts of opportunities, but given my original ramblings, it also makes me a bit nervous.

Alessandro from the virtualization.info blog gave us a description of the demo given at VMworld Europe that illustrates some of this new isolation and anti-malware capability:

A Windows XP virtual machine gets attacked with malicious code that copies away corporate documents, but another virtual machine with a security engine is able to transparently recognize the threat (via a virtual memory scan through VMsafe API access) and stop it before it compromises the guest OS.

Steps in the right direction, for sure.  Since details regarding the full extent of anti-malware capabilities are still forthcoming, I will reserve judgment until I get to speak with Nand @ VMware to fully understand the scope of the capabilities.

I am sure we will see more claims surface soon suggesting that technology such as this will produce virtualized environments that are "more secure" than their non-virtualized counterparts.  The proof is in the pudding, as they say.  At this point, what we have is a very tantalizing recipe.

I’d suggest we’ll also see a fresh batch of rootkit discussions popping up soon…I’d really like to see the documentation surrounding the API.  I wonder how much it costs to sign up and be authorized to participate with VMsafe?

Some of the Determina functionality is for sure making its way into VMsafe, and it will be really interesting to see who will be first out of the gate to commercially offer solutions that utilize VMsafe once it’s available — and it’s a little unclear when that will be.

Given who demoed at VMworld Europe, I’d say it’s safe to bet that McAfee will be one of the first along with some of the more agile startups that might find it easier to include in their products.  The initial lineup of vendor support makes up some of the who’s-who in the security space — all except for, um, Cisco.  Also, where the heck is SourceFire?

When do the NAC and DLP vendors start knocking?

More detailed analysis soon.

/Hoff

Categories: Virtualization, VMware

News Flash: If You Don’t Follow Suggested Security Hardening Guidelines, Bad Things Can Happen…

February 26th, 2008

The virtualization security (VirtSec) FUD meter is in overdrive this week…

Part I:
     So, I was at a conference a couple of weeks ago.  I sat in on a lot of talks.
     Some of them had demos.  What amazed me about these demos is that in
     many cases, in order for the attacks to work, it was disclosed that the
     attack target was configured by a monkey with all defaults enabled and no
     security controls in place.  "…of course, if you checked this one box, the
     exploit doesn’t work…" *gulp*

Part II:
     We’ve observed a lot of interesting PoC attack demonstrations such as those at shows being picked
     up by the press and covered in blogs and such.  Many of these stories simply ham it up for the
     sensational title.  Some of the artistic license and innacuracies are just plain recockulous. 
     That’s right.  There’s ridiculous, then there’s recockulous. 

Example:
     Here’s a by-line from an article which details the PoC attack/code that Jon Oberheide used to show
     how, if you don’t follow VMware’s (and the CIS benchmark) recommendations for securing your
     VMotion network, you might be susceptible to interception of traffic and bad things since — as
     VMware clearly states — VMotion traffic (and machine state) is sent in the clear.

     This was demonstrated at BlackHat DC and here’s how the article portrayed it:

Jon Oberheide, a researcher and PhD candidate at the University of Michigan, is releasing a proof-of-concept tool called Xensploit that lets an attacker take over the VM’s hypervisor and applications, and grab sensitive data from the live VMs.

     Really?  Take over the hypervisor, eh?  Hmmmm.  That sounds super-serious!  Oh, the humanity!

However, here’s how the VMTN blog rationally describes the situation in a measured response that does it better than I could:

Recently a researcher published a proof-of-concept called Xensploit which allows an attacker to view or manipulate a VM undergoing live migration (i.e. VMware’s VMotion) from one server to another. This was shown to work with both VMware’s and Xen’s version of live migration. Although impressive, this work by no means represents any new security risk in the datacenter. It should be emphasized this proof-of-concept does NOT “take over the hypervisor” nor present unencrypted traffic as a vulnerability needing patching, as some news reports incorrectly assert. Rather, it is a reminder of how an already-compromised network, if left unchecked, could be used to stage additional severe attacks in any environment, virtual or physical. …

Encryption of all data-in-transit is certainly one well-understood mitigation for man-in-the-middle attacks.  But the fact that plenty of data flows unencrypted within the enterprise – indeed perhaps the majority of data – suggests that there are other adequate mitigations. Unencrypted VMotion traffic is not a flaw, but allowing VMotion to occur on a compromised network can be. So this is a good time to re-emphasize hardening best practices for VMware Infrastructure and what benefit they serve in this scenario.
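The actionable takeaway is mundane: keep VMotion on its own isolated, non-routed segment.  As a toy illustration, here’s the kind of audit you could script against an exported host network inventory; the data layout and field names below are invented, so adapt it to whatever your tooling actually produces:

```python
# Toy audit: flag hosts whose VMotion portgroup shares a VLAN with guest
# or management traffic. The inventory layout is invented; adapt it to
# whatever your environment can actually export.

inventory = [
    {"host": "esx01", "portgroups": [
        {"name": "vmotion",  "vlan": 100, "role": "vmotion"},
        {"name": "prod-vms", "vlan": 100, "role": "guest"},    # shared VLAN: bad
    ]},
    {"host": "esx02", "portgroups": [
        {"name": "vmotion",  "vlan": 250, "role": "vmotion"},  # isolated: good
        {"name": "prod-vms", "vlan": 20,  "role": "guest"},
    ]},
]

for host in inventory:
    vmotion = {pg["vlan"] for pg in host["portgroups"] if pg["role"] == "vmotion"}
    others  = {pg["vlan"] for pg in host["portgroups"] if pg["role"] != "vmotion"}
    shared = vmotion & others
    if shared:
        print(f"{host['host']}: VMotion shares VLAN(s) {sorted(shared)} -- isolate it")
    else:
        print(f"{host['host']}: VMotion segment isolated")
```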

I’m going to give you one guess as to why this traffic is unencrypted…see if you can guess right in the comments.

Now, I will concede that this sort of thing represents a new risk in the datacenter if you happen to not pay attention to what you’re doing, but I think Jon’s PoC is a great example of substantiating why you should both follow common-sense security hardening recommendations and NOT BELIEVE EVERYTHING YOU READ.

If you don’t stand for something, you’ll fall for anything.

/Hoff