Author Archive

Topps Meat Company: O157:H7 E. Coli, Breaches & You…

October 9th, 2007 8 comments

A few days ago, Topps Meat company, a 67-year old company and one of the largest producers of frozen meat products in the country, shut its doors for good.

Why?

They had a breach of the sanitation persuasion.  From the NY Times:

Topps Meat Company, one of the country’s largest manufacturers of frozen hamburgers, said today it was going out of business after it recalled more than 21.7 million pounds of ground beef products last month.

The company, based in Elizabeth, N.J., said a few of its 87 employees will remain at the plant to help the United States Department of Agriculture investigate how the E. coli bacteria tainted frozen hamburger patties made there.


Anthony D’Urso, the chief operating officer at Topps, said the company was unable to withstand the financial burden of the recall.


“This is tragic for all concerned,” Mr. D’Urso said in a statement. “In one week we have gone from the largest U.S. manufacturer of frozen hamburgers to a company that cannot overcome the economic reality of a recall this large.”

On Sept. 25, the United States Department of Agriculture announced a recall of frozen hamburger patties from Topps, saying that the meat was potentially tainted by E. coli bacteria. Officials at the agency conceded that they knew that meat from Topps was contaminated on Sept. 7, when the first positive test results for E. coli came back.

The financial strain associated with a recall of spoiled meat in a single week killed them.

So what does this have to do with data breaches?

When the ChoicePoint scandal hit, we saw CardSystems shutter due to direct economic pressure (they could no longer process credit cards) brought about by the fallout from data breaches. Contrast that with the experience of a more recent "breacher" such as TJX: some might argue that the breach not only hasn’t negatively impacted their P&L, but has made them a better, stronger and more profitable company.  The figures don’t lie:

After the TJX debacle I remember seeing predictions that people would vote with their feet. Of course they didn’t; sales actually went up 9%. The same argument was made for Ruby Tuesday, which lost some credit card numbers. It just doesn’t happen. Lake Chad is drying up and disasters on a global scale continue to plague us due to climate change, yet still people refuse to stop buying SUVs.

Check out the chronology of security breaches from the Privacy Rights Clearinghouse.  The total number of records containing sensitive personal information involved in security breaches:

167,308,738

That number is mounting every day, and some of these breaches you don’t even hear about in the press. Have we become so desensitized to this breach fiasco that it’s become just a mild inconvenience?  Or is it that credit card number losses have been subconsciously classified outside the scope of "identity theft?"

Think about it.  Having your credit card number stolen is really, in the scope of things, not that big of a deal.  You call the CC company, dispute any charges you didn’t make, they close the account and despite the inconvenience, that’s it.  Then a new card shows up in the mail, sometimes with a larger spending limit!  Sweet!

The liability is minimal.  It’s happened to me twice.  My credit wasn’t impacted, my life didn’t end.  In fact, I got a card with a cool Koi on it that matches one of my tattoos.   I’m not saying it goes that "well" for everyone, but what’s the impetus for consumer outrage?

As soon as the liability is shifted away from the banks who suck it up and take the hit (as do the vendors whose merchandise is stolen) and moves closer to the consumer, we’ll see some agitation and consumer outrage.

Until then, I suppose we’re content to just go on eating spoiled meat (as it were) and get a new credit card number every three months until a company like Topps — or rather one that people really care about — goes through the meat grinder and closes its doors.

Where’s the beef?

/Hoff

Categories: PCI Tags:

Poetic Weekly Security Review

October 5th, 2007 1 comment

Security-related news from the week…

Two hundred grand
is what you’ll pay,
for that illegally-scored music
says the RIAA.

Big data breaches make a really bad rap,
Think ABN Amro, eBay and the GAP.
Retailers recovering from a big breach black eye
Tell the Payment Card Council
"We hate PCI"

The Representative’s children
download images of lust
He thrilled some high schoolers
with an eyeful of bust!

The Feds were determined
to save Arnie’s day…
nuked ca dot gov
and the ‘Net went away

Extra screen RC toys,
says the ole TSA
next thing you’ll know
they’ll take your Webkinz away

The poor DHS
they’re feeling quite small
They DDoS’d themselves
with a big "Reply-All"

Microsoft’s looking
to increase their wealth
by putting online
your records of health

You’d think that a government
like that of Big Mass.
wouldn’t send out my social
and show their incompetent ass

The experts are puzzled
they say "Storm’s a bot!"
The one thing they’re sure of
is something it’s not.

It’s not easy to corner
it’s causing us fear
for the nextgen of malware
is already here

The Great Firewall of China
Oy!  Vadda mess!
Now it turns out
they block RSS!

The House Committee on Commerce
probes the wiretapping NSA
While the Air Force tried bombs
to make enemies gay?

And finally a comment
on Ex-czar Richard Clarke
whose ideas on security
leave our rights in the dark

We don’t need any more laws
to control what you can’t,
stick to fiction my friend
I’ll take care of the rants

/Hoff

Categories: Jackassery, Poetry Tags:

IDC Study Suggests Security Drives Open Source Technology Deployment In Asia/Pacific

October 4th, 2007 11 comments

I’m still not sure I’ve fully digested the conclusion that this IDC study suggests and I’m not in a position to currently spend $4500 on the full report to do so.  However, I found the article which summarizes the catalysts of Open Source adoption in APAC countries to be very interesting:

The top most influential factor for deploying open source technology in Australia, Korea, India and the People’s Republic of China is better protection against security breaches, according to a survey by IDC. "The results indicate that organizations perceived open source technology as providing better security compared to proprietary products," said Prianka Srinivasan, a software market analyst with IDC Asia/Pacific.

Huh.  Really?  Security is the top reason?  That’s intriguing but makes my right eyebrow curl.

The survey results also suggest that organizations in India and the People’s Republic of China (PRC) deployed open source technology more than their counterparts in Australia and Korea. Furthermore, as expected, a larger number of small and medium size businesses (SMBs) in all four countries were deploying open source technology compared to large businesses.

The IDC survey measured key factors contributing to the deployment of open source technology. Top factors cited by respondents include:

  • Provides better protection against security breaches
  • Budget constraints
  • Sufficient support from vendors
  • Availability of required functionalities
  • Better management tools and utilities
  • Recommended by fellow industry peers
  • Preference of open standard adoption compared to proprietary products

"Though cost-efficiency remains a key decision factor, the results also suggest that organizations look forward to leverage open source technology to primarily fulfill their requirements for specific functionalities instead of widespread deployment," said Srinivasan.

When segmenting the data by company size, it emerged that SMBs in all four countries deployed open source technology primarily to ensure protection from security threats, which is similar to large organizations in Australia, India and the PRC. Large organizations from Korea, however, cited better management tools and utilities as the leading factor.

I get all that and it sounds reasonable if not somewhat out of order.

The part I’m grappling with is that while security is represented here as the number one reason for adoption, I have this funny feeling that in some of these "developing" nations (from an IT perspective) the word FREE really is the prime motivator and security, management, features, etc. are gravy.  I can’t really argue with the study since I didn’t conduct it, but it just doesn’t jibe for me.

I’m going to (gasp!) step into the role of agent provocateur here and suggest that I’m not convinced that Open Source security software yields a more secure business, especially in the SMB realm.  SMB’s don’t have security experts, so how is it that these folks who can barely install toner cartridges can perform source code analysis?

I think that perhaps the thought of having many people’s eyeballs on the source code may deliver an advantage as an extended QA function from a security perspective, at which point people "feel" more secure, but it’s the monkeys configuring and deploying said software one needs to be worried about.

Let’s be real.  Given a choice to download pre-compiled binaries, ISO’s or virtual appliances versus source code that requires library linking and compiling, which route is an SMB going to take?  Right.

The last paragraph from IDC’s tickler really cements my thinking on this matter:

"IDC believes that open source technology and software will appear in the higher end of the application stack in the coming years. Commercial vendors of open source software will need to provide extensive support and training services, as well as address the issues of interoperability, in order to take advantage of the addressable market for open source technology in the region," added Srinivasan.

Um, yep.  I’m willing to bet that Open Source will continue to be deployed in these developing countries with SMB’s as a way to offset operational expenditures — at least at first.  Then the issue of long term vendor support will rear its ugly head.  Sometimes the security of "free" is outweighed by the insecurity of "unsupported."

Using the security market as an example, we’ve obviously seen the success of companies like Sourcefire, Tenable and StillSecure with their Open Source and Open Source derivative licensing and support mechanisms.  I guess I’d really need to understand how IDC is defining Open Source in their study because I feel it may have made a difference as to how I reacted.

As we move along, I reckon we’ll see a burgeoning market for companies whose offerings focus on providing support for general sets of open source software.  They are around today, but the number and types of applications they cover usually prove to be quite small.

From the opposite angle, I think we’ll also see the proliferation of hosted applications in the SaaS realm which are based on OSS and may have tiered levels of usage and support…sort of like GoogleApps but with Open Source.  If it’s hosted, you’ve got a single neck to choke.

What do you think?  If you were in an SMB’s shoes, would you rank security as the number one reason you’d adopt Open Source? 

/Hoff

 


Categories: Open Source Tags:

Follow-up to Amazon MP3 Watermarking

October 3rd, 2007 1 comment

As a follow-up to my blog entry here regarding Amazon.com and MP3 Watermarking…

Alex Halderman over at the Freedom To Tinker blog yesterday posted an entry that seems to confirm the theory that Amazon.com is not individually tagging each MP3 file purchased and that any file downloaded with the same title is identical to that downloaded by another user:

Last week Amazon.com launched a DRM-free music store. It sells tracks from two major labels and many independents in the unprotected MP3 file format. In addition to being DRM-free, Amazon’s songs are not individually watermarked. This is an important step forward for the music industry.

Some content companies see individualized watermarks as a consumer-friendly alternative to DRM. Instead of locking down files with restrictive technology, individualized watermarking places information in them that identifies the purchasers, who could conceivably face legal action if the files were publicly shared. Apple individually watermarks DRM-free tracks sold on iTunes, but every customer who purchases a particular track from Amazon receives the exact same file.

The company has stated as much, and colleagues and I confirmed this by buying a small number of files with different Amazon accounts and verifying that they were bit-for-bit identical. (As Wired reports, some files on Amazon’s store have been watermarked by the record labels, but each copy sold contains the same mark. The labels could use these marks to determine that a pirated track originated from Amazon, but they can’t trace a file to a particular user.)

This is good news and I thank Alex and his friends for doing the dirty work and actually confirming these statements instead of just parroting them back and taking Amazon’s word for it.  The rest of Alex’s blog entry provides good insight as to the risks — legal, security and otherwise — that swirl around the contentious topic of DRM.  Please read the article in its entirety.
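For the curious, the bit-for-bit comparison Alex’s team performed amounts to nothing more exotic than hashing each downloaded file and comparing digests. A minimal sketch (file paths and function names are mine, purely illustrative):

```python
import hashlib

def file_digest(path, algo="sha256"):
    """Hash a file in chunks so a large MP3 needn't fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def bit_for_bit_identical(path_a, path_b):
    """True if two downloaded tracks are byte-identical."""
    return file_digest(path_a) == file_digest(path_b)
```

Identical digests across purchases made from different accounts strongly suggest there is no individualized watermark, although, as the quote notes, a label-wide mark present in every copy would still hash identically.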

/Hoff

Categories: IP/Data Leakage Tags:

On Bandwidth and Botnets…

October 3rd, 2007 No comments

An interesting story in this morning’s New York Times titled "Unlike U.S., Japanese Push Fiber Over Profit" talked about Japan’s long term investment efforts to build the world’s first all-fiber national network and how Japan leads the world’s other industrialized nations, including the U.S., in low-cost, high speed services centered around Internet access.  Check out this illustration:

[Illustration: 2007 broadband comparison graphic]
The article states that approximately 8 million Japanese subscribe to the fiber-enabled service offerings, which provide performance roughly 30 times that of a corresponding xDSL offering.

For about $55 a month, subscribers have access to up to 100Mb/s download capacity.

France Telecom is rumored to be rolling out services that offer 2.5Gb/s downloads!

I have Verizon FIOS which is delivered via fiber to my home and subscribe at a 20Mb/s download tier.

What I find very interesting about the emergence of this sort of service is that if you look at a typical consumer’s machine, it’s not well hardened, not monitored and usually easily compromised.  At this rate, the bandwidth of some of these compromise-ready consumers’ home connections is eclipsing that of mid-tier ISP’s!

Anecdotal evidence suggests this is even more true of online gamers, who are typically also P2P filesharing participants and early adopters of new shiny kit — it’s a Bot Herder’s dream come true.

At xDSL speeds of a few Mb/s, a couple of infected machines as participants in a targeted synchronized fanning DDoS attack can easily take down a corporate network connected to the Internet via a DS3 (45Mb/s).  Imagine what a botnet of a couple of 60Mb/s connected endpoints could do — how about a couple of thousand?  Hundreds of thousands?
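The back-of-envelope math here is sobering. A toy calculation (the function name and the assumption that each bot can push its full uplink toward the victim are mine, purely illustrative):

```python
import math

# Back-of-envelope botnet math. Assumes, optimistically for the attacker,
# that each compromised host can push its full uplink toward the victim.
DS3_MBPS = 45.0

def bots_to_saturate(uplink_mbps, target_mbps=DS3_MBPS):
    """Minimum number of compromised endpoints needed to fill the target pipe."""
    return math.ceil(target_mbps / uplink_mbps)

print(bots_to_saturate(3.0))   # xDSL-class bots needed to fill a DS3: 15
print(bots_to_saturate(60.0))  # fiber-class bots needed: 1
```

A single fat-piped endpoint can, on paper, do what fifteen xDSL zombies used to, which is exactly the herder economics described below.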

This is great news for some, as this sort of capacity will be economically beneficial to cyber-criminals because it reduces the exposure risk of Botnet Herders; they don’t have to infect nearly the same number of machines to deliver exponentially higher attack yields given the size of the pipes.  Scary.

I’d suggest that the lovely reverse DNS entries service providers use to annotate logical hop connectivity will be even more freely used to target these high-speed users; you know, like (fictional):

bigass20MbpsPipe.vzFIOS-05.bstnma.verizon-gni.net (7x.y4.9z.1)
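To be clear about how trivially such PTR records could be mined: a hypothetical harvester only needs a pattern match over hostnames. The patterns below are illustrative, modeled on the fictional example above, not on any real provider’s naming scheme; in practice an attacker would feed it names resolved via socket.gethostbyaddr():

```python
import re

# Hypothetical harvester heuristic: flag PTR names that advertise big pipes.
FAT_PIPE = re.compile(r"fios|fiber|ftth|\d{2,}\s*mbps", re.IGNORECASE)

def looks_high_speed(ptr_name: str) -> bool:
    """True if the reverse-DNS name hints at a high-bandwidth subscriber."""
    return bool(FAT_PIPE.search(ptr_name))

print(looks_high_speed("bigass20MbpsPipe.vzFIOS-05.bstnma.verizon-gni.net"))  # True
print(looks_high_speed("dialup-pool-12.example-isp.net"))                     # False
```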

As an interesting aside from the service provider perspective, the need for "Clean Pipes" becomes even more important, and the providers will be even more financially motivated to prevent abuse of their backbone long-hauls by infected machines.

This, in turn, will drive the need for much more intelligent, higher throughput infrastructure and security service layers to mitigate the threat, which is forcing folks to take a very hard look at how they architect their networks and apply security.

/Hoff

Opinions Are Like De-Perimeterized Virtualized Servers — Everyone’s Got One, Even Larry Seltzer

October 2nd, 2007 3 comments

Dude, maybe if we put bras on our heads and chant incoherently we can connect directly to the Internet…

Somebody just pushed my grumpy button!  I’m all about making friends and influencing people, but the following article titled "You Wouldn’t Actually Turn Off Your Firewall, Would You?" is simply a steaming heap of unqualified sensationalism, plain and simple. 

It doesn’t really deserve my attention but the FUD it attempts to promulgate is nothing short of Guinness material and I’m wound up because my second Jiu Jitsu class of the week isn’t until tomorrow night and I’ve got a hankering for an arm-bar.

Larry Seltzer from eWeek decided to pen an opinion piece which attempts for no good reason to collapse two of my favorite topics into a single discussion: de-perimeterization (don’t moan!) and virtualization. 

What one really has to do directly with the other within the context of this discussion, I don’t rightly understand, but it makes for good drama I suppose.

Larry starts off with a question we answered in this very blog (here, here, here and here) weeks ago:

Opinion: I’m unclear on what deperimeterization means. But if it means putting company systems directly on the Internet then it’s a big mistake.

OK, that’s a sort of a strange way to state an opinion and hinge an article, Larry. Why don’t you go to the source provided by those who coined the term, here.  Once you’re done there, you can read the various clarifications and debates above. 

But before we start, allow me to just point out that almost every single remote salesperson who has a laptop that sits in a Starbucks or stays in a hotel is often connected "…directly on the Internet."  Oh, but wait, they’re sitting behind some sort of NAT gateway broadband-connected super firewall, ya?  Certainly the defenses at Joe’s Java shack must be as stringent as a corporate firewall, right?  <snore>

For weeks now I’ve been thinking on and off about "deperimeterization," a term that has been used in a variety of ways for years. Some analyst talk got it in the news recently.

So you’ve been thinking about this for weeks and don’t mention if you’ve spoken to anyone from the Jericho Forum (it’s quite obvious you haven’t read their 10 commandments) or anyone mentioned in the article save for a couple of analysts who decided to use a buzzword to get some press?  Slow newsday, huh?

At least the goal of deperimeterization is to enhance security. That I can respect. The abstract point seems to be to identify the resources worth protecting and to protect them. "Resources" is defined very, very broadly.

The overreacting approach to this goal is to say that the network firewall doesn’t fit into it. Why not just put systems on the Internet directly and protect the resources on them that are worthy of protection with appropriate measures?

Certainly the network firewall fits into it.  Stateful inspection firewalls are, for the most part today, nothing more than sieves that filter out the big chunks.  They serve that purpose very nicely.  They allow port 80 and port 443 traffic through unimpeded.  Sweet.  That’s value.

Even the inventors of stateful inspection will tell you so (enter one Shlomo Kramer and Nir Zuk).  Most "firewalls" (in the purest definition) don’t do much more than stateful ACL’s do today and are supplemented with IDS’s, IPS’s, Web Application Firewalls, Proxies, URL Filters, Anti-Virus, Anti-Spam, Anti-Malware and DDoS controls for that very reason.

Yup, the firewall is just swell, Larry.  Sigh.

I hope I’m not misreading the approach, but that’s what I got out of our news article: "BP has taken some 18,000 of its 85,000 laptops off its LAN and allowed them to connect directly to the Internet, [Forrester Research analysts Robert Whiteley and Natalie Lambert] said." This is incredible, if true.

Not for nothing, but rather than depend on a "couple of analysts," did you think to contact someone from BP and ask them what they meant instead of speculating and deriding the effort before you condemned it?  Obviously not:

What does it mean? Perhaps it just means that they can connect to the VPN through a regular ISP connection? That wouldn’t be news. On the other hand, what else can it mean? Whitely and Lambert seem to view deperimeterization as a means to improve performance and lower cost. If you need to protect the data on a notebook computer they say you should do it with encryption and "data access controls." This is the philosophy in the 2001 article in which the term was coined.

Honestly, who in Sam Hill cares what "Whitely and Lambert" seem to view deperimeterization as?  They didn’t coin the term, they butchered its true intent, and you still don’t apparently know how to answer your own question.

Further, you also reference a conceptual document floated back in 2001 ignoring the author’s caution that "The actual concept behind the entire paper never really flew, but you may find that too thought provoking."  Onward.

But of course you can’t just put a system on Comcast and have it access corporate resources. VPNs aren’t just about security, they connect a remote client into the corporate network. So unless everyone in the corporation has subnet mask of 0.0.0.0 there needs to be some network management going on.

Firstly, nobody said that network management should be avoided, where the heck did you get that!?

Secondly, if you don’t have firewalls in the way, sure you can — but that would be cheating the point of the debate.  So we won’t go there.  Yet.  OK, I lied, here we go.

Thirdly, if you look at what you will get with, say, Vista and Longhorn, that’s exactly what you’ll be able to do.  You can simply connect to the Internet and using encryption and mutual authentication, gain access to internal corporate resources without the need for a VPN client at all.  If you need a practical example, you can read about it here, where I saw it with my own eyes.
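For a rough idea of what "encryption and mutual authentication" without a VPN client looks like in code, here is a generic TLS client-certificate sketch. This is not the actual Vista/Longhorn mechanism (that used IPsec); the function name and file paths are placeholders:

```python
import ssl

def corporate_server_context(ca_file=None, cert_file=None, key_file=None):
    """TLS context for a service reachable directly from the Internet:
    everything is encrypted, and the client must prove its identity with
    a certificate issued by the corporate CA (mutual authentication).
    File paths are placeholders for illustration."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if cert_file:
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)
    ctx.verify_mode = ssl.CERT_REQUIRED  # no client cert, no connection
    return ctx
```

A client without a CA-issued certificate fails the handshake before it ever touches the application, which is precisely the "defensible host on a hostile network" property being argued for here.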

Or maybe I’m wrong. Maybe that’s what they actually want to do. This certainly sounds like the idea behind the Jericho Forum, the minds behind deperimeterization. This New York Times blog echoes the thoughts.

Maybe…but we’re just dreamers.  I dare say, Larry, that Bill Cheswick has forgotten more about security than you and I know.  It’s obvious you’ve not read much about information assurance or information survivability but are instead content to myopically center on what "is" rather than that which "should be."

Not everyone has this cavalier attitude towards deperimeterization. This article from the British Computer Society seems a lot more conservative in approach. It refers to protecting resources "as if [they were] directly exposed to the Internet." It speaks of using "network segmentation, strict access controls, secure protocols and systems, authentication and encryption at multiple levels."

Cavalier!?  What’s so cavalier about suggesting that systems ought to be stand-alone defensible in a hostile environment as much as they are behind one of those big, bad $50,000 firewalls!? I bet you money I can put a hardened host on the Internet today without a network firewall in front of it and it will be just as resistant to attack. 

But here’s the rub, nobody said that to get from point A to point B one would not pragmatically apply host-based hardening and layered security such as (wait for it) a host-based firewall or HIPS?  Gasp!

What’s the difference between filtering TCP handshakes or blocking based on the 4/5-tuple at a network level versus doing it at the host, when you’re only interested in scaling to the performance and commensurate security levels of a single host?  Except for tens of thousands of dollars.  How about Nada?  (That’s Spanish for "Damn this discussion is boring…")
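To make the point concrete, here is a toy 5-tuple filter (entirely illustrative, not a real firewall; addresses and ports are made up): the rule shape a network firewall evaluates is exactly what a host-based filter evaluates, just scoped to one machine.

```python
from typing import NamedTuple

class Flow(NamedTuple):
    proto: str
    src: str
    sport: int
    dst: str
    dport: int

# The same shape of rule a network firewall evaluates; "*" is a wildcard.
ALLOW = [
    ("tcp", "*", "*", "10.0.0.5", 80),
    ("tcp", "*", "*", "10.0.0.5", 443),
]

def permitted(flow: Flow, rules=ALLOW) -> bool:
    """Accept the flow if any rule matches every field of the 5-tuple."""
    return any(
        all(pat == "*" or field == pat for field, pat in zip(flow, rule))
        for rule in rules
    )

print(permitted(Flow("tcp", "1.2.3.4", 51515, "10.0.0.5", 80)))  # True
print(permitted(Flow("tcp", "1.2.3.4", 51515, "10.0.0.5", 22)))  # False
```

Whether this match runs in a $50,000 box at the network edge or in a host-based firewall on the endpoint, the decision logic is the same.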

And whilst my point above is in response to your assertions regarding "clients," the same can be said for "servers."  If I use encryption and mutual authentication, short of DoS/DDoS, what’s the difference?

That sounds like a shift in emphasis, moving resources more towards internal protection, but not ditching the perimeter. I might have some gripes with this—it sounds like the Full Employment Act for Security Consultants, for example—but it sounds plausible as a useful strategy.

I can’t see how you’d possibly have anything bad to say about this approach especially when you consider that the folks that make up the Jericho Forum are CISO’s of major corporations, not scrappy consultants looking for freelance pen-testing.

When considering the protection of specific resources, Whitely and Lambert go beyond encryption and data access controls. They talk extensively about "virtualization" as a security mechanism. But their use of the term virtualization sounds like they’re really just talking about terminal access. Clearly they’re just abusing a hot buzzword. It’s true that virtualization can be involved in such setups, but it’s hardly necessary for it and arguably adds little value. I wrote a book on Windows Terminal Server back in 2000 and dumb Windows clients with no local state were perfectly possible back then.

So take a crappy point and dip it in chocolate, eh?   Now you’re again tainting the vision of de-perimeterization and convoluting it with the continued ramblings of a "couple of analysts."  Nice.

Whitely and Lambert also talk in this context about how updating in a virtualized environment can be done "natively" and is therefore better. But they must really mean "locally," and this too adds no value, since a non-virtualized Terminal Server has the same advantage.

What is the security value in this? I’m not completely clear on it, since you’re only really protecting the terminal, which is a low-cost item. The user still has a profile with settings and data. You could use virtual machines to prevent the user from making permanent changes to their profile, but Windows provides for mandatory (static, unchangeable) profiles already, and has for ages. Someone explain the value of this to me, because I don’t get it.

Well, that makes two of us…

And besides, what’s it got to do with deperimeterization? The answer is that it’s a smokescreen to cover the fact that there are no real answers for protecting corporate resources on a client system exposed directly to the Internet.

Well, I’m glad we cleared that up.  Absolutely nothing.  As to the smokescreen comment, see above.  I triple-dog-dare you.  My Linux workstation and Mac are sitting on "the Internet" right now.  Since I’ve accomplished the impossible, perhaps I can bend light for you next?

The reasonable approach is to treat local and perimeter security as a "belt and suspenders" sort of thing, not a zero sum game. Those who tell you that perimeter protections are a failure because there have been breaches are probably just trying to sell you protection at some other layer.

…or they are pointing out to you that you’re treating the symptom and not the problem.  Again, the Jericho Forum is made up of CISO’s of major multinational corporations, not VP’s of Marketing from security vendors or analyst firms looking to manufacture soundbites.

Now I have to set a reminder for myself in Outlook for about two years from now to write a column on the emerging trend towards "reperimeterization."

Actually, Larry, set that appointment back a couple of months…it’s already been said.  De-perimeterization has been called many things already, such as re-perimeterization or radical externalization.

I don’t really give much merit to what you choose to call it, but I call it a good idea that should be discussed further and driven forward in consensus such that it can be used as leverage against the software and OS vendors to design and build more secure systems that don’t rely on band-aids.

…but hey, I’m just a dreamer.

/Hoff

Interviewed for Information Security Magazine – Security 7 Award Winners Article

October 2nd, 2007 2 comments

This month’s Information Security Magazine features the 2007 Security 7 Award Winners.  This year’s winners represent an excellent cross-section of security professionals in seven industries, each with very diverse and interesting backgrounds, approaches and career paths:

  • Michael Assante, Infrastructure Protection Strategist, Idaho National Labs
  • Kirk Bailey, CISO, University of Washington
  • Michael K. Daly, Director Enterprise Security Services, Raytheon
  • Sasan Hamidi, CISO, Interval International
  • Timothy McKnight, VP & CISO, Northrop Grumman
  • Mark Olson, Manager of IS Security and DR, Beth Israel Deaconess Medical Center
  • Simon Riggs, Global Head of Security, Reuters

Congratulations to all of this year’s winners!  I know four of them and they’re all excellent representatives of our profession.

I was one of the inaugural Security 7 award winners back in 2005 in the financial services category when I was a CISO, and was interviewed over the phone recently by Michael Mimoso from the magazine as a "catching up with…" piece that complemented the profiles of this year’s winners.

Please forgive the rather colloquial nature of the transcription of the discussion; it was very much a stream of consciousness as part of a 20-minute conversation that has been edited down for size.  Some of the concatenated sentences seem to contradict one another…I didn’t get to see it before it went to press ;(  Nonetheless, I appreciate the opportunity, Michael.

You can find the entire story here and my blurb here.  Shimmy, as big a pain in the ass as you are, you’ll notice that I appropriately state that I owe my blogging to you.  Thanks, pal.

For reference, here is a listing of the 2005 and 2006 winners:

2005
  Edward Amoroso (Telecommunications)
  Hans-Ottmar Beckmann (Manufacturing)
  Dave Dittrich (Education)
  Patrick Heim (Health care)
  Christofer Hoff (Financial services)
  Richard Jackson (Energy/utilities)
  Charles McGann (Government)
2006
  Stephen Bonner (Financial services)
  Larry Brock (Manufacturing)
  Dorothy Denning (Education)
  Robert Garigue (Telecommunications)
  Andre Gold (Retail)
  Philip Heneghan (Government)
  Craig Shumard (Health care)

I’d also like to call out and pay tribute to one of the 2006 award winners, Robert Garigue, who passed away in January.  May he rest in peace.

/Hoff

Categories: Uncategorized Tags:

Virtualization Security Training?

October 1st, 2007 10 comments

I just read an interesting article written by Patrick Thibodeau from Computerworld which described the difficulties IT managers are having finding staffers with virtualization experience and expertise:

As more organizations adopt server virtualization software, they’re also looking to hire people who have worked with the technology in live applications.

But such workers can be hard to find, as Joel Sweatte, IT manager at East Carolina University’s College of Technology and Computer Science, recently discovered when he placed a help-wanted ad for an IT systems engineer with virtualization skills.

Sweatte received about 40 applications for the job at the Greenville, N.C.-based university, but few of the applicants had any virtualization experience, and he ended up hiring someone who had none. “I’m fishing in an empty ocean,” Sweatte said.

To give his new hire a crash course in virtualization, Sweatte brought him to market leader VMware Inc.’s annual user conference in San Francisco last month. “That’s a major expenditure for a university,” Sweatte said of the conference and travel costs. “[But] I wanted him to take a drink from the fire hose.”

If the industry is having trouble finding IT generalists with virtualization training, I can only imagine the dearth of qualified virtualization security experts in the hopper.  I wonder when the first SANS course in virtualization security will surface?

I’m interested in understanding how folks are approaching security training for their server ops, audit, compliance and security teams.  If you wouldn’t mind, please participate in the poll below.  This is the first time I’ve used Vizu Polls, and you’ll need to enable scripting/Flash to make this work:

Categories: Virtualization Tags:

Poetic Weekly Security Review

September 28th, 2007 2 comments

Another week has come and gone
and still the Internet hums along.
Despite predictions that are quite dour
like taking down our nation’s power.

Government security made the press
vendors, hackers, the DHS.
Google Apps and Cross-Site-Scripting,
through our mail the perps are sifting.

TJX, Canadians found,
deployed Wifi that wasn’t sound.

VMware’s bugs in DHCP
show there’s risk, virtually.

HD Moore’s become quite adroit
at extending the reach of Metasploit
hacking tools found a new home
run ’em on your cool iPhone!

Speaking of iPhone
Apple’s played a trick,
hack your phone
it becomes a brick!

Missile silos for sale, that’s a fact,
but it seems the auctioneer’s been hacked!
Applied to Gap as would-be clerks?
They lost your data, careless jerks!

Microsoft updated computers in stealth
which affected the poor machines’ good health
It seems the risk analysis battle’s won
who needs ISO 2-7-00-1?

Maynor was back in the news,
as his sick days he did abuse.
He claimed to contract Pleurisy,
but was at home with Halo3.

More fun’s in store with M&A
another deal, another day;
Huawei and 3Com getting hitched
who knows if TippingPoint gets ditched?

It’s never boring in InfoSec
Like watching a slow-mo car-crash wreck.
I wish you well my fellow geek
until this time, same place, next week.

/Hoff

Categories: Jackassery Tags:

Opening VMM/HyperVisors to Third Parties via API’s – Goodness or the Apocalypse?

September 27th, 2007 2 comments

Holding_breath
This is truly one of those times that we’re just going to have to hold our breath and wait and see…

Prior to VMworld, I blogged about the expected announcement by Cisco and VMware that the latter would be opening the HyperVisor to third party vendors to develop their virtual switches for ESX.

This is extremely important in the long term because the security vendors who today claim to have solutions for virtualized environments are basically doing nothing more than making virtual appliance versions of their software that run as yet another VM on a host, alongside the critical applications they are meant to protect.

These virtual appliances/applications are the same you might find running on their stand-alone physical appliance counterparts, and they have no native access to the HyperVisor (or the vSwitch).  Most of them therefore rely upon enabling promiscuous mode on vSwitches to gain visibility into inter-VM traffic, which uncorks a nasty security genie of its own.  Furthermore, they impose load and latency on the VM’s as they compete for resources with the very assets they seek to protect.
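To make that cost concrete, here’s a minimal sketch (plain Python with made-up values, not any vendor’s actual code) of the per-frame header parsing a virtual appliance must perform once a promiscuous vSwitch port starts handing it a copy of every inter-VM frame:

```python
import struct

def parse_ethernet(frame: bytes):
    """Parse the 14-byte Ethernet II header from a raw frame.

    An appliance on a promiscuous vSwitch port sees every inter-VM
    frame and must do at least this much work per packet, burning
    CPU cycles it shares with the very guests it is protecting.
    """
    if len(frame) < 14:
        raise ValueError("frame too short for an Ethernet header")
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    mac = lambda b: ":".join(f"{octet:02x}" for octet in b)
    return {"dst": mac(dst), "src": mac(src),
            "ethertype": hex(ethertype), "payload": frame[14:]}

# A synthetic broadcast ARP frame from a guest VM (invented MAC):
frame = (bytes.fromhex("ffffffffffff")        # destination: broadcast
         + bytes.fromhex("005056abcdef")      # source: VMware-style OUI
         + struct.pack("!H", 0x0806)          # EtherType: ARP
         + b"\x00" * 28)                      # zeroed ARP payload
info = parse_ethernet(frame)
```

Multiply that work by every frame between every pair of VM’s on the host and the resource contention described above becomes obvious.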

The only exception to this that I know of currently is Blue Lane, who actually implement their VirtualShield product as a HyperVisor plug-in, which gives them tremendous advantages over products like Reflex and Catbird (I will speak about all of these further in a follow-on post.)  Ed: I have been advised that this statement needs revision based upon recent developments; I will, as mentioned, profile a comparison of Blue Lane, Catbird and Reflex in a follow-on post.  Sorry for the confusion.

At any rate, the specific vSwitch announcement described above was not forthcoming at the show, but a more important rumbling became obvious on the show floor after speaking with several vendors such as Cisco, Blue Lane, Catbird and Reflex: VMware was quietly beginning to provide third parties access to the HyperVisor by exposing API’s, per this ZDNet article titled "VMware shares secrets in security drive":

Virtualization vendor VMware has quietly begun sharing some of its software secrets with the IT security industry under an unannounced plan to create better ways of securing virtual machines.

VMware has traditionally restricted access to its hypervisor code and, while the vendor has made no official announcement about the API sharing program tentatively called "Vsafe," VMware founder and chief scientist Mendel Rosenblum said that the company has started sharing some APIs (application program interfaces) with security vendors.

I know I should be happy about this, and I am, but now that we’re getting closer to the potential for better VM security, the implementation deserves some scrutiny.  We don’t have that yet because most of the vSafe detail is still hush-hush.

This is a double-edged sword.  While it represents a fantastic opportunity to expose functionality and provide visibility into the very guts of the VMM, allowing third party software to interact with and control the HyperVisor and dependent VM/GuestOS’s, opening the kimono also presents a huge new attack surface ripe for malicious use.

"We would like at a high level for (VMware’s platform) to be a better place to run," he said. "To try and realize that vision, we have been partnering with experts in security, like the McAfees and Symantecs, and asking them about the security issues in a virtual world."

I’m not quite sure I follow that logic.  McAfee and Symantec are just as clueless as the bulk of the security world when it comes to security issues in a virtual world.  Their answer is usually "do what you normally do and please make sure to buy a license for our software on each VM!" 

The long-term play for McAfee and Symantec can’t be to continue deploying bloatware on every VM.  Users won’t put up with the performance hit or the hit to their wallets.  They will have to re-architect to take advantage of the VMM API’s just like everyone else, but they face a more desperate timeframe:

Mukil Kesavan, a VMware intern studying at the University of Rochester, demonstrated his research into the creation of a host-based antivirus scanning solution for virtualized servers at the conference. Such a solution would enable people to pay for a single antivirus solution across a box running multiple virtual servers, rather than having to buy an antivirus solution for each virtual machine.

Licensing is going to be very critical to companies like these two very shortly as it’s a "virtual certainty" that the cost savings enjoyed by consolidating physical servers will place pressure on reducing the software licensing that goes along with it — and that includes security.


Rosenblum says that some of the traditional tools used to protect a hardware server work just as well in a virtualized environment, while others "break altogether."


"We’re trying to fix the things that break, to bring ourselves up to the level of security where physical machines are," he said. "But we are also looking to create new types of protection."

Rosenblum said the APIs released as part of the initiative offer security vendors a way to check the memory of a processor, "so they can look for viruses or signatures or other bad things."

Others allow a security vendor to check the calls an application within a virtual machine is making, or look at the packets the machine is sending and receiving, he said.

I think Rosenblum’s statement is interesting in a couple of ways:

  1. His goal, as quoted, is to fix the things that virtualization breaks and bring security up to the level of physical servers.  Unlike every other statement from VMware spokesholes, this statement therefore suggests that virtualized environments are less secure than physical ones.  Huh.
  2. I think this area of focus, combined with the evolution of the Determina acquisition, will yield excellent security gains.  Extending monitoring and visibility into the isolated memory spaces of the virtual processors in a VM means that we may be able to counter attacks without having to depend solely upon the virtual switches; it gives you application-level visibility without the need for another agent.
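Since the vSafe detail is still hush-hush, any concrete example is guesswork, but the memory-visibility idea can be sketched roughly like this (every name and signature below is invented for illustration; none of it is a real VMware API):

```python
# Hypothetical sketch of hypervisor-level memory introspection.
# These names are NOT from VMware's (still unannounced) vSafe APIs;
# they only illustrate scanning a guest's memory from outside the
# guest, with no in-VM agent installed.

SIGNATURES = {
    "demo-marker": b"X5O!P%@AP",   # illustrative byte pattern only
    "nop-sled": b"\x90" * 16,      # classic shellcode prelude
}

def scan_guest_memory(memory: bytes, page_size: int = 4096):
    """Walk a guest memory snapshot page by page and report hits.

    A real implementation would read pages through the VMM rather
    than holding the whole snapshot in one bytes object, and would
    also handle patterns that straddle a page boundary.
    """
    hits = []
    for offset in range(0, len(memory), page_size):
        page = memory[offset:offset + page_size]
        for name, pattern in SIGNATURES.items():
            if pattern in page:
                hits.append((name, offset))
    return hits

# Fake three-page guest snapshot with a NOP sled planted in page 2:
snapshot = bytes(4096) + bytes(2048) + b"\x90" * 16 + bytes(2032) + bytes(4096)
```

The point is that the scan runs outside the guest, through the VMM, so nothing needs to be installed in (or can be tampered with from) the VM itself.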

The Determina acquisition is really a red herring for VMware.  Determina’s "memory firewall" seeks to protect a system "…from buffer overflow attacks, while still allowing the system to run at high speeds. It also developed ‘hot-patching’ technology, which allows servers to be patched on the fly, while they are still running."  I’ve said before that this acquisition was an excellent move.  Let’s hope the integration goes well.

If you imagine this capability built into the VMM, combining exposed VMM API’s with a lightweight HyperVisor embedded natively in server hardware (flash), minus the bloated service console, things really start to head down an interesting path.  This is what ESX Server 3i is designed to provide:

ESX Server 3i has considerable advantages over its predecessors from a security standpoint. In this latest release, which will be available in November, VMware has decoupled the hypervisor from the service console it once shipped with. This console was based on a version of the Red Hat Linux operating system.

As such, ESX 3i is a mere 32MB in size, rather than 2GB.

Some 50 percent of the vulnerabilities that VMware was patching in prior versions of its software were attributable to the Red Hat piece, not the hypervisor.

"Our hope is that those vulnerabilities will all be gone in 3i," Rosenblum said.

Given Kris Lamb’s vulnerability distribution data from last week, I can imagine that everyone hopes that these vulnerabilities will all be gone, too.  I wonder if Kris can go back and isolate how many of the vulns listed as "First Party" were attributable to the service console (the underlying RH Linux OS) accompanying the HyperVisor.  This would be good to know.  Kris? 😉

At any rate, life’s about trade-offs and security’s no different.  I think that as we see the API’s open up, so will more activity designed to start tearing at the fleshy underbelly of the VMM’s.  I wonder if we’ll see attacks specific to flash hardware when 3i comes out?

/Hoff

(P.S. Not to leave XenSource or Viridian out of the mix…I’m sure that their parent companies (Citrix & Microsoft), which have quite a few combined security M&A transactions behind them, are not dragging their feet on security portfolio integration, either.)

 

Categories: Virtualization, VMware Tags: