
Author Archive

Complimentary Admission to the InfoWorld Virtualization Executive Forum, February 4th, San Francisco…

January 8th, 2008 3 comments


If any of my readers are interested in attending the InfoWorld Virtualization Executive Forum in San Francisco on February 4th, 2008, I have complimentary passes available (a $795 value).

You might remember that when InfoWorld first ran this forum in New York, I was grumpy because there were no topics/speakers focused on security.  I contacted them and expressed my concern.  Alan Shimel doubted anything would come of my kvetching, but lo and behold, the organizers invited me to come and present at this show in February.

So, if you believe in something strongly enough, good things *do* happen 😉

I am speaking (the last speaker of the day — all that stands between the audience and beer…is me!) so come on down and heckle me:


  Addressing Security Concerns in Virtual Environments

Easy to create and easy to move, introducing a new software layer between hardware and operating system, and operating over virtual networks as well as physical ones, virtual servers present a new order of security risks and challenges. This session will explore the impact of virtualization on network and host security, how security solutions providers are beginning to address them, and the best practices that are emerging for securing virtual environments.

If you’re interested in attending free of charge, ping me via email and I’ll give you the details in order to register.

Hope to see you there!

/Hoff

{Edited for proper Yiddish…Thanks, Alan!}

Categories: Virtualization Tags:

Grab the Popcorn: It’s the First 2008 “Ethical Security Marketing” (Oxymoron) Dust-Up…

January 5th, 2008 15 comments

Robert Hansen (RSnake / ha.ckers.org / SecTheory) created a little challenge (pun intended) a couple of days ago titled "The Diminutive XSS worm replication contest":

The diminutive XSS worm replication contest is a week long contest to get some good samples of the smallest amount of code necessary for XSS worm propagation. I’m not interested in payloads for this contest, but rather, the actual methods of propagation themselves. We’ve seen the live worm code and all of it is muddied by obfuscation, individual site issues, and the payload itself. I’d rather think cleanly about the most efficient method for propagation where every character matters.

Kurt Wismer (anti-virus rants blog) thinks this is a lousy idea:

yes, folks… robert hansen (aka rsnake), the founder and ceo of sectheory, felt it would be a good idea to hold a contest to see who could create the smallest xss worm…

ok, so there’s no money changing hands this time, but that doesn’t mean the winner isn’t getting rewarded – there are absolutely rewards to be had for the winner of a contest like this and that’s a big problem because lots of people want rewards and this kind of contest will make people think about and create xss worms when they wouldn’t have before…

Here’s where Kurt diverges from simply highlighting nominal arguments of the potential for misuse of the contest derivatives.  He suggests that RSnake is being unethical and is encouraging this contest not for academic purposes, but rather to reap personal gain from it:

would you trust your security to a person who makes or made malware? how about a person or company that intentionally motivates others to do so? why do you suppose the anti-virus industry works so hard to fight the conspiracy theories that suggest they are the cause of the viruses? at the very least mr. hansen is playing fast and loose with the publics trust and ultimately harming security in the process, but there’s a more insidious angle too…

while the worms he’s soliciting from others are supposed to be merely proof of concept, the fact of the matter is that proof of concept worms can still cause problems (the recent orkut worm was a proof of concept)… moreover, although the winner of the contest doesn’t get any money, at the end of the day there will almost certainly be a windfall for mr. hansen – after all, what do you suppose happens when you’re one of the few experts on some relatively obscure type of threat and that threat is artificially made more popular? well, demand for your services goes up of course… this is precisely the type of shady marketing model i described before where the people who stand to gain the most out of a problem becoming worse directly contribute to that problem becoming worse… it made greg hoglund and jamie butler household names in security circles, and it made john mcafee (pariah though he may be) a millionaire…

I think the following exchange in the comments section of the contest forum offers an interesting position from RSnake’s perspective:

Re: Diminutive XSS Worm Replication Contest
Posted by: Gareth Heyes
Date: January 04, 2008 04:56PM

@rsnake

This contest is just asking for trouble 🙂

Are there any legal issues for creating such a worm in the uk?

————————————————————————————————————

Re: Diminutive XSS Worm Replication Contest
Posted by: rsnake
Date: January 04, 2008 05:11PM

@Gareth Heyes – perhaps, but trouble is my middle name. So is danger. Actually I have like 40 middle names it turns out. 😉 No, I’m not worried, this is academic – it won’t work anywhere without modification of variables, and has no payload. The goal is to understand worm propagation and get to the underlying important pieces of code.

I’m not in the UK and am not a lawyer so I can’t comment on the laws. I’m not suggesting anyone should try to weaponize the code (they could already do that with the existing worm code if they wanted anyway).

So, we’ve got Wismer’s perspective and (indirectly) RSnake’s. 

What’s yours?  Do you think holding a contest to build a POC for a worm is a good idea?  Do the benefits of research and understanding the potential attacks so one can defend against them outweigh the potential for malicious use?  Do you think there are, or will be, legal ramifications from these sorts of activities?

/Hoff

Your InfoSec Dream Job?

January 4th, 2008 9 comments

Assuming you were going to stay in the "Information Security" industry, what would you do if you could pack up your office tomorrow and move into shiny new digs in your dream job?  What would that be?  With whom?  Doing what?

I’ll start:

  • On the vendor side: I’d go to a start-up/up-start (my 5th?) again where I can make a huge difference.  I’d do something with virtualization, information-centric security survivability and converged enterprise architecture.  I’d find my next Crossbeam.
  • In the Enterprise, I’d go to a mid-sized, progressive, services-focused company that understands and "appreciates" the management of risk and investing in security that can be used as a strategic differentiator for the betterment of the business.
  • Venture Capital: I’d love to work in some capacity for a fund with a large and diverse portfolio that would allow me to evaluate technology for investment potential.
  • Research/Analysis: I’d look into a DARPA/NSF-funded long-term research project focused on next generation networking with an integrated security services layer, working to solve long term event-horizon survivability/assurance problems and delivery modality constructs.
  • Independent Consultancy:  I’ve done it before and it became a 7 year rollercoaster ride that was fantastic.  More and more companies need objective "executive steering assistance" for business-aligned, long term strategic risk management, business resilience, information assurance and infrastructure protection guidance.  Just ask Mogull.

You can thank the fine people at St. James’s Gate Brewery for this one.

Your turn.

/Hoff

Categories: Career Tags:

Don’t Hassle the Hoff: Recent Press & Podcast Coverage & Upcoming Speaking Engagements…

January 4th, 2008 1 comment

Here is some recent press, webcast and podcast coverage on topics relevant to content on my blog (slow holiday season):

I’ll be speaking at the upcoming InfoWorld Virtualization Executive Forum (February, San Francisco) and with Rich Mogull at Boston’s new security conference, the Source (March, Boston).  I’ll be posting more details shortly.

/Hoff

Categories: Press Tags:

Thinning the Herd & Chlorinating the Malware Gene Pool…

December 28th, 2007 3 comments

Alan Shimel pointed us to an interesting article written by Matt Hines in his post here regarding the "herd intelligence" approach toward security.  He followed it up here. 

All in all, I think both the original article that Andy Jaquith was quoted in as well as Alan’s interpretations shed an interesting light on a problem solving perspective.

I’ve got a couple of comments on Matt and Alan’s scribbles.

I like the notion of swarms/herds.  The picture to the right from Science News describes the notion of "rapid response," wherein "mathematical modeling is explaining how a school of fish can quickly change shape in reaction to a predator."  If you’ve ever seen this in the wild or even in film, it’s an incredible thing to see in action.

It should then come as no surprise that I think that trying to solve the "security problem" is more efficiently performed (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink" even when executed locally.  This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security

A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.  Check out Red Lambda’s cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.
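To make the signaling idea concrete, here is a minimal sketch of what one such shared telemetry record might look like. The schema, field names, and values are entirely hypothetical (no standard protocol is implied); the point is simply that a small, structured message lets any node or management facility parse what a peer saw and what it did about it:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ThreatSighting:
    """One end node's report of a suspected threat, shared with the herd.

    All fields here are illustrative -- a real deployment would use an
    agreed-upon schema rather than this ad hoc one.
    """
    node_id: str      # which end node observed it
    indicator: str    # e.g., a file hash or suspicious URL
    severity: int     # 0 (informational) .. 10 (critical)
    disposition: str  # what the node did: "observed", "quarantined", ...
    timestamp: float  # Unix time of the sighting

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, raw: str) -> "ThreatSighting":
        return cls(**json.loads(raw))

# A node packages a sighting and puts it on the wire...
msg = ThreatSighting("host-42", "sha256:ab12", 7, "quarantined", time.time())
wire = msg.to_json()

# ...and any peer or management facility can reconstruct it.
received = ThreatSighting.from_json(wire)
assert received == msg
```

The round trip through JSON is what makes the "common telemetry" part work: every participant, regardless of vendor, only needs to agree on the record format, not on each other's internals.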

This is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It may be hard to convince rival vendors to work together because of the perception that it could lessen differentiation between their respective products and services, but if the process clearly aids on the process of quelling the rising tide of new malware strains, the software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By turning every endpoint into a malware collector, the herd network effectively turns into a giant honeypot that can see more than existing monitoring networks," said Jaquith. "Scale enables the herd to counter malware authors’ strategy of spraying huge volumes of unique malware samples with, in essence, an Internet-sized sensor network."

I couldn’t agree more!  This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner regarding using VM’s for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to actually enable the virtualization of a single HoneyPot across an entire collection of VM’s on a single physical host.

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of the ISACs, such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members.  Generally-available services like Symantec’s DeepSight have also tried to accomplish similar goals.

Unfortunately, these offerings generally lack the capacity to garner ubiquitous data gathering and real-time enforcement capabilities.

As Matt pointed out in his article, gaining actionable intelligence on the monstrous amount of telemetric data from participating end nodes means that there is a need to really prune for false positives.  This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition. 
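One simple way to apply intelligence at the point of collection is corroboration: treat an indicator as actionable only after some minimum number of distinct nodes report it within a time window, so a single node's misfire never triggers herd-wide disposition. The sketch below is purely illustrative (the class, quorum, and window values are my own assumptions, not any product's logic):

```python
import time
from collections import defaultdict
from typing import Optional

class HerdCorrelator:
    """Prune likely false positives by requiring corroboration.

    An indicator becomes actionable only once `quorum` distinct nodes
    have reported it within the last `window` seconds. Thresholds are
    illustrative, not recommendations.
    """
    def __init__(self, quorum: int = 3, window: float = 300.0):
        self.quorum = quorum
        self.window = window
        # indicator -> {node_id: time of that node's most recent sighting}
        self.sightings = defaultdict(dict)

    def report(self, node_id: str, indicator: str,
               now: Optional[float] = None) -> bool:
        """Record a sighting; return True if the indicator is now actionable."""
        now = time.time() if now is None else now
        seen = self.sightings[indicator]
        seen[node_id] = now
        # Drop reports that have aged out of the corroboration window.
        for node in [n for n, t in seen.items() if now - t > self.window]:
            del seen[node]
        return len(seen) >= self.quorum

c = HerdCorrelator(quorum=3, window=300.0)
assert not c.report("host-1", "sha256:ab12", now=0.0)  # one node: ignore
assert not c.report("host-1", "sha256:ab12", now=1.0)  # same node: still one
assert not c.report("host-2", "sha256:ab12", now=2.0)  # two nodes: not yet
assert c.report("host-3", "sha256:ab12", now=3.0)      # quorum reached: act
```

Counting distinct nodes rather than raw reports is the trade-off in miniature: a chatty or compromised single sensor can't push the herd over the threshold by itself.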

This requires technology that we’re starting to see emerge with a small enough footprint when paired with the compute power we have in endpoints today. 

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack — at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we shouldn’t actually be worried about responding to every threat but rather only those that might impact the most important assets we seek to protect. 

Ultimately the end-node is really irrelevant from a protection perspective as it should really be little more than a presentation facility; the information is what matters.  As we continue to make progress toward more resilient operating systems leveraging encryption and mutual authentication within communities of interest/trust, we’ll start to become more resilient and information assured.

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.

Mooooooo.

/Hoff

Really Interesting Blog Snippets I Don’t Have Time to Comment On…

December 20th, 2007 1 comment

I’m swamped right now and have about 30 tabs open in Mozilla referencing things I expected to blog about but simply haven’t had the time to.  Rather than bloat Mozilla’s memory consumption further and lose these, I figured I’d jot them down.

Yes, I should use any number of the services available to track these sorts of things for this very purpose, but I’m just old fashioned, I guess…

Perhaps you’ll find these snippets interesting, also.

Sadly I may not get around to blogging about many of these.  I’ve got a ton more from the emerging technology, VC and virtualization space, too. 

I don’t want to become another story summarizer, but perhaps I’ll use this format to cover things I can’t get to every week.

/Hoff

Categories: Uncategorized Tags:

BeanSec! Wednesday, December 19th – 6PM to ?

December 15th, 2007 No comments

This month’s BeanSec!
will be held in a different location due to a facility booking at the usual location.

This month’s meeting will be located at the Middlesex Lounge, 315 Massachusetts Avenue, Cambridge MA  02139 (right down the street.)


Yo!  BeanSec! is once again upon us.  Wednesday, December 19th, 2007.

BeanSec! is an informal meetup of information security professionals, researchers and academics in the Greater Boston area that meets the third Wednesday of each month.

I say again, BeanSec! is hosted the third Wednesday of every month.  Add it to your calendar.

Come get your grub on.  Lots of good people show up.  Really.

Unlike other meetings, you will not be expected to pay dues, “join up”, present a zero-day exploit, or defend your dissertation to attend.  Map to the Enormous Room in Cambridge.

Enormous Room: 567 Mass Ave, Cambridge 02139.  Look for the Elephant on the left door next to the Central Kitchen entrance.  Come upstairs.  We sit on the left hand side… (see above)

Don’t worry about being "late" because most people just show up when they can.  6:30 is a good time to aim for.  We’ll try and save you a seat.  There is a parking garage across the street and 1 block down, or you can try the streets (or take the T).

In case you’re wondering, we’re getting about 30-40 people on average per BeanSec!  Weld, 0Day and I have been at this for just over a year and without actually *doing* anything, it’s turned out swell.

We’ve had some really interesting people of note attend lately (I’m not going to tell you who…you’ll just have to come and find out.)  At around 9:00pm or so, the DJ shows up…as do the rather nice looking people from the Cambridge area, so if that’s your scene, you can geek out first and then get your thang on.

The food selection is basically high-end finger-food appetizers and the drinks are really good; an attentive staff and eclectic clientèle make the joint fun for people watching.  I’ll generally annoy you into participating somehow, even if it’s just fetching napkins. 😉

See you there.

/Hoff

Categories: BeanSec! Tags:

It’s On The Internet, It Must Be True!!

December 15th, 2007 5 comments


Case in point, here.

That is all.

/Hoff

Categories: Jackassery Tags:

Breaking News: Successful SCADA Attack Confirmed – Mogull Is pwned!

December 13th, 2007 31 comments

A couple of weeks ago, right after I wrote my two sets of 2008 (in)security predictions (here and here), Mogull informed me that he was penning an article for Dark Reading on how security predictions are useless.  He even sent me a rough draft to rub it in.

His Dark Reading article is titled "The Perils of Predictions – and Predicting Peril" which you can read here.  The part I liked best was, of course, the multiple mentions that some idiot was going to predict an attack on SCADA infrastructure:


Oh, and there is one specific prediction I’ll make for next year:
Someone will predict a successful SCADA attack, and it won’t happen.
Until it does.

So, I’m obviously guilty as charged.  Yup, I predicted it.  Yup, I think it will happen.

In fact, it already has…

You see, Mogull is a huge geek and has invested large sums of money in his new home, outfitting it with a complete home automation system.  In reality, this home automation system is basically just a scaled-down version of a SCADA (Supervisory Control and Data Acquisition) system: controlling sensors and integrating telemetry with centralized reporting and control…

Rich and I are always IM’ing and emailing one another, so a few days ago before Rich left town for an international junket, I sent him a little email asking him to review something I was working on.  The email contained a link to my "trusted" website.

The page I sent him to was actually trojaned with the 0day POC code for the QT RTSP vulnerability from a couple of weeks ago.  I guess Rich’s Leopard ipfw rules need to be modified because right after he opened it, the trojan executed and then phoned home (to me) and I was able to open a remote shell on TCP/554 right to his Mac which incidentally controls his home automation system.  I totally pwn his house.

So a couple of days ago, Rich went out of town and I waited patiently for the DR article to post.  Now that it’s up, I have exacted my revenge.

I must say that I think Rich’s choice of automation controllers was top-shelf, but I think I might have gone with a better hot tub controller because I seem to have confused it and now it will only heat to 73 degrees.

I also think he should have gone with better carpet.

I’m pretty sure his wife is going absolutely bonkers given the fact that the lights in the den keep blinking to the beat of a Lionel Ritchie song and the garage door opener keeps trying to attack the gardener.  I will let you know that I’m being a gentleman and not peeking at the CCTV images…much.

Let this be a lesson to you all.  When it comes to predicting SCADA attacks, don’t hassle the Hoff!

/Hoff

Categories: Punditry Tags:

Complexity: The Enemy of Security? Or, If It Ain’t Fixed, Don’t Break It…

December 12th, 2007 4 comments

When all you have is a hammer, everything looks like a nail…

A couple of days ago, I was concerned (here) that I had missed Don Weber’s point (here) regarding how he thinks solutions like UTM that consolidate multiple security functions into a single solution increase complexity and increase risk.

I was interested in more detail regarding Don’s premise for his argument, so I asked him for some substantiating background information before I responded:

The question I have for Don is simple: how is it that you’ve arrived at the conclusion that the consolidation and convergence of security functionality from multiple discrete products into a single-sourced solution adds "complexity" and leads to "increased risk?"

Can you empirically demonstrate this by giving us an example of where a single function security device that became a multiple function security product caused this complete combination of events to occur:

  1. Product complexity increased
  2. Led to a vulnerability that was exploitable, and
  3. Increased "risk" based upon business impact and exposure

Don was kind enough to respond to my request with a rather lengthy post titled "The Perimeter Is Dead — Let’s Make It More Complex."  I knew that I wouldn’t get the example I wanted, but I did get what I expected.  I started to write a very detailed response but stopped when I realized a couple of important things in reading his post as well as many of the comments:

  • It’s clear that many folks simply don’t understand the underlying internal operating principles and architectures of security products on the market, and frankly for the most part they really shouldn’t have to.  However, if you’re going to start debating security architecture and engineering implementation of security software and hardware, it’s somewhat unreasonable to start generalizing and creating bad analogs about things you clearly don’t have experience with. 
     
  • Believe it or not, most security companies that create bespoke security solutions do actually hire competent product management and engineering staff with the discipline, processes and practices that result in just a *little* bit more than copy/paste integration of software.  There are always exceptions, but if this were SOP, how many of them would still be in business?
     
  • The FUD that vendors are accused of spreading to supposedly motivate consumers to purchase their products is sometimes outdone by the sheer lack of knowledge illustrated by the regurgitated drivel that is offered by people suggesting why these same products are not worthy of purchase. 

    In markets that have TAMs of $4+ Billion, either we’re all incompetent lemmings (to be argued elsewhere) or there are some compelling reasons for these products.  Sometimes it’s not solely security, for sure, but people don’t purchase security products with the expectations of being less secure with products that are more complex and put them more at risk.  Silliness.
     

  • I find it odd that the people who maintain that they must have diversity in their security solution providers gag when I ask them for proof that they have invested in multiple switch and router vendors across their entire enterprise, that they deliberately deploy critical computing assets on disparate operating systems and that they have redundancy for all critical assets in their enterprise…including themselves. 
     
  • It doesn’t make a lot of sense arguing about the utility, efficacy, usability and viability of a product with someone who has never actually implemented the solution they are arguing about and instead compares proprietary security products with a breadboard approach to creating a FrankenWall of non-integrated open source software on a common un-hardened Linux distro.
     
  • Using words like complexity and risk within a theoretical context that has no empirical data offered to back it up short of a "gut reaction" and some vulnerability advisories in generally-available open source software lacks relevancy and is a waste of electrons.

I have proof points, ROI studies, security assessment results to the code level, and former customer case studies that demonstrate that some of the most paranoid companies on the planet see fit to purchase millions of dollars worth of supposedly "complex risk-increasing" solutions like these…I can tell you that they’re not all lemmings.

Again, not all of those bullets are directed at Don specifically, but I sense we’re really just going to talk past one another on this point, and the emails I’m getting trying to privately debate it are agitating, to say the least.

Your beer’s waiting, but expect an arm wrestle before you get to take the first sip.

/Hoff