Archive

Archive for the ‘Information Security’ Category

Unified Risk Management (URM) and the Security Architecture Blueprint

May 6th, 2007 5 comments

Gunnar once again hits home with an excellent post defining what he calls the Security Architecture Blueprint (SAB):

The purpose of the security architecture blueprint is to bring focus to the key areas of concern for the enterprise, highlighting decision criteria and context for each domain. Since security is a system property it can be difficult for Enterprise Security groups to separate the disparate concerns that exist at different system layers and to understand their role in the system as a whole. This blueprint provides a framework for understanding disparate design and process considerations; to organize architecture and actions toward improving enterprise security.

[Diagram: Security Architecture Roadmap]

I appreciated the graphical representation of the Security Architecture Blueprint, as it provides some striking parallels to the diagram I created about a year ago to demonstrate a similar concept that I call the Unified Risk Management (URM) framework.

(Ed.: URM focuses on business-driven information survivability architectures that describe as much risk tolerance as they do risk management.)

Here are both the textual and graphical representations of URM: 

Managing risk is fast becoming a lost art.  As the pace of technology’s evolution and adoption overtakes our ability to assess and manage its impact on the business, the overrun has created massive governance and operational gaps resulting in exposure and misalignment.  This has caused organizations to lose focus on the things that matter most: the survivability and ultimate growth of the business.

Overwhelmed with the escalation of increasingly complex threats, the alarming ubiquity of vulnerable systems and the constant onslaught of rapidly evolving exploits, security practitioners are ultimately forced to choose between the unending grind of tactical practices focused on deploying and managing security infrastructure versus the strategic art of managing and institutionalizing risk-driven architecture as a business process.

URM illustrates the gap between pure technology-focused information security infrastructure and business-driven, risk-focused information survivability architectures, and shows how this gap is bridged using sound risk management practices in conjunction with best-of-breed consolidated Unified Threat Management (UTM) solutions as the technology anchor tenant in a consolidated risk management model.

URM demonstrates how governance organizations, business stakeholders, network and security teams can harmonize their efforts to produce a true business protection and enablement strategy utilizing best of breed consolidated UTM solutions as a core component to effectively arrive at managing risk and delivering security as an on-demand service layer at the speed of business.  This is a process we call Unified Risk Management or URM.

[Diagram: URM model]

(Updated on 5/8/07 with a revised URM model)

The point of URM is to provide a holistic framework against which one may measure and effectively manage risk.  Each one of the blocks above has a set of sub-components that break out the specifics of each section.  Further, my thinking on URM became the foundation of my exploration of the Security Services Oriented Architecture (SSOA) model.
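For those who ask what "measuring against the framework" means in practice, here’s a minimal sketch of the idea in Python. The block names, owners and sub-components are illustrative placeholders I invented, not the actual URM taxonomy; the point is that once the blueprint is a structure rather than a picture, gap checks become trivial queries.

```python
# A minimal sketch (NOT the actual URM model) of making a blueprint
# operational: each block carries sub-components and an accountable owner,
# so reporting which blocks lack an owner becomes a one-line query.
# All block names and owners below are illustrative placeholders.

URM_BLUEPRINT = {
    "Governance":        {"owner": "CISO",     "sub": ["policy", "risk tolerance", "audit"]},
    "Risk Management":   {"owner": "Risk Ops", "sub": ["assessment", "modeling", "metrics"]},
    "Security Services": {"owner": None,       "sub": ["UTM", "identity", "monitoring"]},
    "Infrastructure":    {"owner": "Network",  "sub": ["segmentation", "resilience"]},
}

def unowned_blocks(blueprint):
    """Return the blocks nobody is accountable for -- the governance gaps."""
    return [name for name, block in blueprint.items() if block["owner"] is None]

if __name__ == "__main__":
    print("Blocks with no accountable owner:", unowned_blocks(URM_BLUEPRINT))
```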

You might also want to check out Skybox Security’s Security Risk Management (SRM) Blueprint.

Thanks again to Gunnar; reading his SAB document surfaced some gaps in my own thinking that I need to work through.

/Hoff

NWC’s Wittmann: Security in Virtualized Environments Overstated: Just Do It!

April 30th, 2007 2 comments

In the April 2007 edition of Network Computing magazine, Art Wittmann talks about server virtualization, its impact on data center consolidation and the overall drivers and benefits virtualization offers.

What’s really interesting is that while he rambles on about the benefits of power, cooling and compute-cycle reclamation, he completely befuddled me with the following statement:

    "While the security threat inherent in virtualization is
     real, it’s also overstated."

I’ll get to the meaty bits in a minute as to why I think this is an asinine comment, but first a little more background on the article.

Beyond illustrating everything wrong with the way in which IT has traditionally implemented security — bolting it on after the fact rather than baking it in — the article shows how recklessly the adoption of a technology can be evangelized without an appropriate level of security, and without an overall understanding of the risk such a move creates.

Wittmann manages to do this with an attitude that suggests the speed-bump security folks and evil vendors (or in his words: nattering nabobs of negativity) are just intent on making a mountain out of a molehill.

It seems that NWC approaches the evaluation of technology and products in terms of five areas: performance, manageability, scalability, reliability and security.  He lists how virtualization has proven itself in the first four categories, but oddly sums up the fifth (security) not by addressing what has been or should be done, but by ranting about how it’s all overblown — a conspiracy by security folks to sell more kit and peddle more FUD:

"That leaves security as the final question.  You can bet that everyone who can make a dime on questioning the security of virtualization will be doing so; the drumbeat has started and is increasing in volume. 

…I think it’s funny that he’s intimating that we’re making this stuff up.  Perhaps he’s only read the theoretical security issues and not the practical.  While things like Blue Pill are sexy and certainly add sizzle to an argument, there are some nasty security issues that are unique to the virtualized world.  The drumbeat is increasing because these threats and vulnerabilities are real, and so is the risk that companies who "just do it" are going to discover.

"But while the security threat is real — and you should be concerned about it — it’s also overstated.  If you can eliminate 10 or 20 servers running outdated versions of NT in favor of a single consolidated pair of servers, the task of securing the environment should be simpler or at least no more complex.  If you’re considering a server consolidation project, do it.  Be mindful of security, but don’t be dissuaded by the nattering nabobs of negativity."

As far as I am concerned, this is irresponsible and reckless journalism and displays an ignorance of the impact that technology can have when implemented without appropriate security baked in. 

Look, if we don’t have security that works in non-virtualized environments, replicating the same mistakes in a virtualized world isn’t just as bad, it’s horrific.   While it should be simpler or at least no more complex, the reality is that it is not.  The risk model changes.  Threat vectors multiply.  New vulnerabilities surface.  Controls multiply.  Operational risk increases.

We end up right back where we started: with a mess that the lure of cost and time savings causes us to rush into without doing security right from the start.
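To put toy numbers behind "the risk model changes," consider this deliberately crude back-of-the-napkin model. Every probability and dollar figure below is invented for illustration; the structural point is the new shared-fate term that consolidation introduces.

```python
# Deliberately crude model of why "simpler or at least no more complex"
# doesn't hold. All probabilities and dollar figures are invented for
# illustration -- the point is the structure, not the numbers.

N_GUESTS        = 20       # servers consolidated onto one physical host
P_SERVER        = 0.05     # annual chance any single server is compromised
P_HYPERVISOR    = 0.01     # annual chance the hypervisor layer is compromised
LOSS_PER_SERVER = 100_000  # impact of one compromised server, in dollars

# Before: 20 independent physical servers, independent failure domains.
expected_loss_physical = N_GUESTS * P_SERVER * LOSS_PER_SERVER

# After: the same per-guest risk, PLUS a shared failure domain --
# one hypervisor compromise exposes every guest at once.
expected_loss_virtual = (N_GUESTS * P_SERVER * LOSS_PER_SERVER
                         + P_HYPERVISOR * N_GUESTS * LOSS_PER_SERVER)

print(f"physical: ${expected_loss_physical:,.0f}/yr")
print(f"virtual:  ${expected_loss_virtual:,.0f}/yr  (note the shared-fate term)")
```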

Don’t just do it. Understand the impact that a lack of technology, controls, process, and policies will have on your business before you’re held accountable for what Wittmann suggests you do today with reckless abandon.  Your auditors certainly will.

/Hoff

The Philosophy of Network Security Design

April 3rd, 2007 1 comment

Thomas and I were barking at each other regarding something last night and today he left a salient and thought-provoking comment that provided a very concise, pragmatic and objective summation of the embedded vs. overlay security quagmire:

     "I think the jury is still out on
how much security policy we   
     should be pushing to middleboxes, and how
smart those   
     middleboxes should be. What I know right now is we spend
     way, way too much time, effort, and money on 19" rack
     mountable chasses
that suck in packets and spit them back
     out again without providing any
measurable impact on the
     security of our networks.  Not a fan."

I couldn’t agree more.  Most of the security components today, including those that run in our little security ecosystem, really don’t intercommunicate.  There is no shared understanding of telemetry or instrumentation and there’s certainly little or no correlation of threats, vulnerabilities, risk or disposition.

The problem is bad inasmuch as even best-of-breed solutions usually require box sprawl and stacking, and don’t necessarily provide for a more secure posture, especially within the context of another of Thomas’ interesting posts on defense in depth/mesh…

That’s changing, however.  Our latest generation of NPMs (Network Processing Modules) allows discrete security ISVs (which run on intelligently load-balanced Application Processor Modules — Intel blades in the same chassis) to interact with and control the network hardware through defined APIs.  This provides the first step toward common telemetry: while application A doesn’t need to know about the specifics of application B, the two can functionally interact based upon the common output of disposition and/or classification of the flows between them.

Later, perhaps, they’ll be able to control each other through the same set of APIs.
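For the curious, here’s a rough sketch of what interacting on "common output of disposition" could look like. The real NPM/APM APIs are proprietary and look nothing like this Python toy; it simply shows two security applications rendering verdicts on the same flow without knowing anything about each other’s internals.

```python
# Sketch of the *idea* only -- the real NPM/APM APIs are proprietary.
# The point: application A never needs application B's internals if both
# speak a common flow-disposition vocabulary the fabric can reconcile.

from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    BLOCK = "block"
    QUARANTINE = "quarantine"
    ALLOW = "allow"

@dataclass
class FlowVerdict:
    flow_id: str
    classification: str       # e.g. "http", "sql", "unknown-binary"
    disposition: Disposition
    source_app: str           # which security app rendered the verdict

def reconcile(verdicts):
    """Combine verdicts from independent apps: most restrictive wins."""
    order = [Disposition.BLOCK, Disposition.QUARANTINE, Disposition.ALLOW]
    return min(verdicts, key=lambda v: order.index(v.disposition))

# An IPS and an AV engine weigh in on the same flow without knowing
# anything about each other; the fabric applies the strictest answer.
ips = FlowVerdict("flow-42", "http", Disposition.ALLOW, "ips")
av  = FlowVerdict("flow-42", "http", Disposition.BLOCK, "antivirus")
print(reconcile([ips, av]).disposition)   # Disposition.BLOCK
```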

So, I don’t think we’re going to solve the interoperability issue completely anytime soon (we won’t go from 0 to 100% overnight), but I think that the consolidation of these functions into smaller footprints that allow for intelligent traffic classification and disposition is a good first step.

I don’t expect Thomas to agree or even resonate with my statements below, but I found his explanation of the problem space to be dead on.  Here’s my explanation of an incremental step towards solving some of the bigger classes of problems in that space, which I believe hinges on consolidation of security functionality first and foremost.

The three options for reducing this footprint are as follows:

  1. Proprietary embedded security in routers/switches (Cisco, Juniper)

     Pros: Supposedly fewer boxes, better communication between components, and good coverage given the fact that the security stuff is in the infrastructure.  One vendor from which you get both your infrastructure and your protection.  Correlation across the network "fabric" will ultimately allow for near-real-time zoning and quarantine.  Single management pane across the Enterprise for availability and security.  Did I mention the platform is already there?

     Cons: You rely on a single vendor’s version of the truth, and you get closer to a monoculture wherein the safeguards protecting the network put at risk the very assets they seek to protect because there is no separation of "church and state."  Also, the expertise and coverage, as well as the agility for product development based upon evolving threats, are hampered by the many moving parts in this machine.  Utility vs. security?  Utility wins.  Good enough vs. best of breed?  Probably somewhere in between.

  2. Proprietary overlay security in a consolidated platform (Fortinet 5000, Tipping Point, etc.)

     Pros: Reduced footprint, consolidated functionality, single management pane across multiple security functions within the box.  Usually excels in one specific area like AV and can add "good enough" functionality as the needs arise.  Software moves up and down the scalability stack depending upon the performance needed.

     Cons: You again rely on a single vendor’s version of the truth.  These boxes tend to want to replace switching infrastructure.  Many of these platforms utilize ASICs to accelerate certain functions, with the bulk of functionality residing in pure software with limited application or network-level intelligence.  You pay the price in terms of performance and scale given the architectures of these boxes, which do not easily allow for the addition of new classes of solutions to thwart new threats.  Not really routers/switches.

  3. Open overlay security in a consolidated platform (Crossbeam)

     Pros: The customer defines best of breed and can rapidly add new security functionality at a speed that keeps pace with the threats the customer needs to mitigate.  Utilizing a scalable and high-performance switching architecture combined with all the benefits of an open blade-based security application/appliance delivery mechanism gives the best of all worlds: self-healing, highly resilient, high-performance and highly available, utilizing a hardened Linux OS across load-balanced, virtualized security applications running on optimized hardware.

     Cons: Currently based upon proprietary (even though Intel reference design) hardware for the application processing, while also utilizing a proprietary network switching fabric and load balancing.  Can only offer software as quickly as it can be adapted and tested on the platforms.  No ASICs means small-packet performance (64-byte, zero-loss) isn’t as high as ASIC-based packet-forwarding engines.  No single pane of management.

I think that option #3 is a damned good start towards solving the consolidation issues whilst balancing the need to overlay synergistically with the network infrastructure.  You’re not locked into a single vendor’s version of the truth, and although the hardware may be "proprietary," the operating system and choice of software are not.  You can choose from COTS, open source, or write your own, all in a scalable platform that is just as much a collapsed switching/routing platform as it is a consolidated blade server.

I think it has the best chance of evolving to solve more classes of problems than the other two at a rate and level of cost-effectiveness balanced with higher efficacy due to best of breed.

This, of course, depends upon how high the level of integration is between the apps — or at least their dispositions.  We’re working very, very hard on that.

At any rate, Thomas ended with:

"I am a believer in
freezing development of the core protocols and building new
functionality on top of them. I like NAT. I like Paul Francis. I think
the IETF has been hijacked by the leftovers from the OSI standards
committees. I don’t know what you call that philosophy, besides
"end2end originalist".

I like NAT.  I think this is Paul Francis.  The IETF has been hijacked by aliens, actually, and I’m getting a new tattoo.



If it walks like a duck, and quacks like a duck, it must be…?

April 2nd, 2007 5 comments

Seriously, this really wasn’t a thread about NAC.  It’s a great soundbite to get people chatting (arguing) but there’s a bit more to it than that.  I didn’t really mean to offend those NAC-Addicts out there.

My last post was an exploration of security functions and their status (or even migration/transformation) as either a market or a feature included in a larger set of features.  Alan Shimel responded to my comments, specifically regarding my opinion that NAC is rapidly becoming a feature and won’t be a competitive market for much longer.

Always the quick wit, Alan suggested that UTM is a "technology" that is going to become a feature, much like my description of NAC’s fate.  Besides the fact that UTM isn’t a technology but rather a consolidation of lots of other technologies that won’t stand alone, I found a completely orthogonal statement of Alan’s that caused my head to spin as a security practitioner.

My reaction stems from the repeated belief that there should be separation of delivery between the network plumbing, the security service layers and ultimately the application(s) that run across them.  Note well that I’m not suggesting that common instrumentation, telemetry and disposition shouldn’t be collaboratively shared, but their delivery and execution ought to be discrete.  Best tool for the job.

Of course, this very contention is the source of much of the disagreement between me and many others who believe that security will just become absorbed into the "network."  It seems now that Alan is suggesting that the model of combining all three is going to be something in high demand (at least in the SME/SMB) — much in the same way Cisco does:

The day is rapidly coming when people will ask why would they buy a box that all it does is a bunch of security stuff.  If it is going to live on the network, why would the network stuff not be on there too or the security stuff on the network box.

Firstly, multi-function devices that blend security and other features on the "network" aren’t exactly new.

That’s what the Cisco ISR platform is becoming now what with the whole Branch Office battle waging, and back in ’99 (the first thing that pops into my mind) a bunch of my customers bought and deployed WhistleJet multi-function servers which had DHCP, print server, email server, web server, file server, and security functions such as a firewall/NAT baked in.

But that’s neither here nor there, because the thing I’m really, really interested in is Alan’s decidedly non-security-focused approach of prioritizing utility over security, given that he works for a security company, that is.

I’m all for bang for the buck, but I’m really surprised that he would make a statement like this within the context of a security discussion.

That is what Mitchell has been talking about in terms of what we are doing and we are going to go public Monday.  Check back then to see the first small step in the leap of UTM’s becoming a feature of Unified Network Platforms.

Virtualization is a wonderful thing.  It also has some major shortcomings.  Just because you *can* run everything under the sun on a platform doesn’t mean that you *should*, and often it means you very much get what you pay for.  This is what I meant when I quoted Lee Iacocca saying "People want economy and they will pay any price to get it."

How many times have you tried to consolidate all those multi-function devices (PDA, phone, portable media player, camera, etc.) down into one device?  It never works out, does it?  Ultimately you get fed up with inconsistent quality levels, so you buy the next megapixel camera that comes out with image stabilization.  Then you get the new video iPod, then…

Alan’s basically agreed with me on my original point discussing features vs. markets, and the UTM vs. UNP thing is merely a hand-waving marketing exercise.  Move on folks, nothing to see here.

’nuff said.

/Hoff

(Written sitting in front of my TV watching Bill Maher drinking a Latte)

NAC is a Feature not a Market…

March 30th, 2007 7 comments

I’m picking on NAC in the title of this entry because it will drive Alan Shimel ape-shit, and because NAC has become the most over-hyped hoopla next to Britney’s hair-shaving/rehab incident… besides, the pundits come a-flockin’ when the NAC blood is in the water…

Speaking of chumming for big fish: love ’em or hate ’em, Gartner’s Hype Cycles do a good job of allowing one to visualize where and when a specific technology appears, lives and dies as a function of time, adoption rate and utility.

We’ve recently seen a lot of activity in the security space that I would personally describe as natural evolution along the continuum, but which is often described by others as market "consolidation" due to saturation.

I’m not sure they are the same thing, but really, I don’t care to argue that point.  It’s boring.  I think that anyone arguing either side is probably right.  That means that Lindstrom would disagree with both.

What I do want to do is summarize a couple of points regarding some of this "evolution," because I use my blog as a virtual jot pad against which I can measure my own consistency of thought and opinion.  That and the chicks dig it.

Without my usual doctoral-thesis brevity, here are just a few network security technologies I reckon are already doomed to succeed as features and not markets — those technologies that will, within the next 24 months, be absorbed into other delivery mechanisms that incorporate multiple technologies into a platform for virtualized security service layers:

  1. Network Admission Control
  2. Network Access Control
  3. XML Security Gateways
  4. Web Application Firewalls
  5. NBAD for the purpose of DoS/DDoS
  6. Content Security Accelerators
  7. Network-based Vulnerability Assessment Toolsets
  8. Database Security Gateways
  9. Patch Management (Virtual or otherwise)
  10. Hypervisor-based virtual NIDS/NIPS tools
  11. Single Sign-on
  12. Intellectual Property Leakage/Extrusion Prevention

…there are lots more.  Components like gateway AV, FW, VPN, SSL accelerators, IDS/IPS, etc. are already settling to the bottom of UTM suites as table stakes.  Many other functions are moving to SaaS models.  These are just the ones that occurred to me without much thought.

Now, I’m not suggesting that Uncle Art is right and there will be no stand-alone security vendors in three years, but I do think some of this stuff is being absorbed into the bedrock that will form the next 5 years of evolutionary activity.

Of course, some folks will argue that all of the above will just be absorbed into the "network" (which means routers and switches).  Switch or multi-function device… doesn’t matter.  The "smoosh" is what I’m after, not what color it is when it happens.

What’d I miss?

/Hoff

(Written from SFO Airport sitting @ Peet’s Coffee.  Drinking a two-shot extra large iced coffee)

Breaking News: SOA, Web services security hinge on XML gateways!

March 20th, 2007 No comments

[Image: Captain Obvious]
Bloody Hell!

The article below is dated today, but perhaps this was just the TechTarget AutoBlogCronPoster gone awry from 2004? 

Besides the fact that this revelation garners another vote for the RationalSecurity "Captain Obvious" award (see right), the simple fact that XML gateways are being highlighted here as a stand-alone market is laughable — especially since the article clearly shows that XML security gateways are being consolidated and bundled with application delivery controllers and WAF solutions by vendors such as IBM and Cisco.

XML is, and will be, everywhere.  SOA/Web services is only one element in a greater ecosystem impacted by XML.

Of course the functionality provided by XML security gateways is critical to the secure deployment of SOA environments; it should be considered table stakes, just like secure coding… but of course we know how consistently compensating controls get painted onto network and application architectures.

The dirty little secret is that while they are very useful and ultimately an excellent tool in the arsenal, these solutions are disruptive, difficult to configure and maintain, performance pigs and add complexity to an already complex model.  In many cases, asking a security team to manage this sort of problem introduces more operational risk than it mitigates. 

Can you imagine security, network and developers actually having to talk to one another?!  *gasp*
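For those who haven’t stared at one of these boxes, the table-stakes check is easy to picture. Below is a minimal sketch (my own toy with a made-up schema and messages, not any vendor’s engine) of the kind of inline schema validation an XML security gateway applies before a message ever reaches the application tier.

```python
# Toy illustration of the gateway's core inline check: validate each
# message against the service's schema at the edge, so malformed or
# schema-poisoned documents never reach the application. The schema
# and the messages below are invented for this sketch.

from lxml import etree

SCHEMA = etree.XMLSchema(etree.XML(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="transfer">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="account" type="xs:string"/>
        <xs:element name="amount"  type="xs:decimal"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""))

def gateway_filter(raw_bytes: bytes) -> bool:
    """Return True if the message may pass to the app tier."""
    try:
        doc = etree.fromstring(raw_bytes)
    except etree.XMLSyntaxError:
        return False               # malformed: drop at the edge
    return SCHEMA.validate(doc)    # non-conforming payloads fail here

print(gateway_filter(b"<transfer><account>42</account><amount>9.99</amount></transfer>"))  # True
print(gateway_filter(b"<transfer><account>42' OR '1'='1</account></transfer>"))            # False
```

Now multiply that by every service, every schema revision and every partner feed, and you can see where the "difficult to configure and maintain" part comes from.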

Here is the link to the entire story.  I’ve snipped pieces out for relevant mockery.

ORLANDO, Fla. — Enterprises are moving forward with service oriented architecture (SOA) projects to reduce complexity and increase flexibility between systems and applications, but some security pros fear they’re being left behind and must scramble to learn new ways to protect those systems from Web-based attacks.

<snip>

"Most network firewalls aren’t designed to handle the latest
Web services standards, resulting in new avenues of attack for digital
miscreants, said Tim Bond, a senior security engineer at webMethods
Inc. In his presentation at the Infosec World Conference and Expo, Bond
said a growing number of vendors are selling XML security gateways,
appliances that can be plugged into a network and act as an
intermediary, decrypting and encrypting Web services data to determine
the authenticity and lock out attackers.

"It’s not just passing a message through, it’s actually taking
action," Bond said. "It needs to be customized for each deployment, but
it can be very effective in protecting from many attacks."

Bond said that most SOA layouts further expose applications by placing them just behind an outer layer of defense, rather than placing them within the inner walls of a company’s security defenses along with other critical applications and systems. Those applications are vulnerable because they’re being exposed to partners, customer relationship management and supply chain management systems. Attackers can scan Web services description language (WSDL) — the XML language used in Web service calls — to find out where vulnerabilities lie, Bond said.

<snip>

A whole market has grown around protecting WSDL, Bond said. Canada-based Layer 7 Technologies Inc. and UK-based Vordel are producing gateway appliances to protect XML and SOAP in Web service calls. Reactivity, which was recently acquired by Cisco Systems Inc., and DataPower, now a division of IBM, also address Web services security.

Transaction values will be much higher, and traditional SSL, the security communications protocol for point-to-point communications, won’t be enough to protect transactions, Bond said.

<snip>

In addition to SQL-injection attacks, XML is potentially vulnerable to schema poisoning — a method of attack in which the XML schema can be manipulated to alter processing information. A sophisticated attacker can also conduct an XML routing detour, redirecting sensitive data within the XML path, Bond said.

Security becomes complicated with distributed systems in an SOA environment, said Dindo Roberts, an application security manager at New York City-based MetLife Inc. Web services with active interfaces allow the usage of applications that were previously restricted to using conventional custom authentication. Security pros need new methods, such as an XML security gateway, to protect those applications, Roberts said.

<snip>

Another Virtualized Solution for VM Security…

March 19th, 2007 10 comments

I got an email reminder from my buddy Grant Bourzikas today pointing me to another virtualized security solution for servers from Reflex Security called Reflex VSA.  VSA stands for Virtual Security Appliance and the premise appears to be that you deploy this software within each guest VM and it provides what looks a lot like host-based intrusion prevention functionality per VM.

The functionality is defined thusly:

Reflex VSA solves the problem that traditional network security such as IPS and firewall appliances currently cannot solve: detecting and preventing attacks within a virtual server. Because Reflex VSA runs as a virtualized application inside the virtualized environment, it can detect and mitigate threats between virtual hosts and networks.

Reflex VSA Features:
        • Access firewall for permission enforcement for intra-host and external network communication
        • Intrusion Prevention with inline blocking and filtering for virtualized networks
        • Anomaly, signature, and rate-based threat detection capability
        • Network Discovery to discover and map all virtual machines and applications
        • Reflex Command Center, providing a centralized configuration and management console, comprehensive reporting tools, and real-time event aggregation and correlation

[Diagram: Reflex VSA deployment]
It does not appear to wrap around or plug in to the hypervisor natively, so I’m a little confused as to the difference between deploying VSA and whatever HIPS/NIPS agent a customer might already have deployed on "physical" server instantiations.

Blue Lane’s product addresses this at the hypervisor layer and it would be interesting to me to have the pundits/experts argue the pros/cons of each approach. {Ed. This is incorrect.  Blue Lane’s product runs as a VM/virtual appliance also.  With the exposure of the hypervisor/virtual switches via API, products like Blue Lane and Reflex could take advantage of it to become more flexible, effective and higher performing.}

I’m surprised most of the other "security configuration management" folks haven’t already re-branded their agents as being "Virtualization Compliant" to attack this nascent marketspace. < :rolleyes here: >

It’s good to see that folks are at least owning up to the fact that intra-VM communications via virtual switches are going to drive a spin on risk models, detection and mitigation tools and techniques.  This is what I was getting at in this blog entry here.
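In case the "spin" isn’t obvious: frames between two guests on the same virtual switch never touch the physical wire, so a traditional network IPS never sees them. Here’s a toy sketch of the inline hook a virtual security appliance needs; the frame format and signature rules are invented for illustration, and real products bind to the hypervisor’s or vSwitch’s own interfaces.

```python
# Toy sketch of an inline filter on guest-to-guest traffic. The frame
# representation and signatures are invented for illustration; real
# products hook the hypervisor/vSwitch rather than a Python dict.

def inline_filter(frame, signatures):
    """Inspect one guest-to-guest frame; return it to forward, or None to drop."""
    for sig in signatures:
        if sig in frame["payload"]:
            print(f"dropped {frame['src']} -> {frame['dst']}: matched {sig!r}")
            return None
    return frame

SIGNATURES = [b"\x90\x90\x90\x90", b"cmd.exe /c"]   # toy byte-pattern rules

# Two guests talking across the virtual switch -- invisible to any
# appliance sitting out on the physical network.
frame = {"src": "vm-web01", "dst": "vm-db01",
         "payload": b"GET /..%c0%af../cmd.exe /c dir"}
forwarded = inline_filter(frame, SIGNATURES)
print("forwarded" if forwarded else "blocked inside the host")
```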

I would enjoy speaking to someone from Reflex to understand their positioning and differentiation better, but isn’t this just HIPS per VM?  How’s that different than firewall, AV, etc. per VM?

/Hoff

John Thompson’s (Symantec) Ironic warning of “Conflict of Interest”

March 19th, 2007 3 comments

Infoworld ran an interesting article on John Thompson’s recent CeBIT keynote in which he took a shot at Microsoft by suggesting that there is an inherently "…huge conflict of interest for one company to provide both an operating platform and a security platform."

I suppose that opinion depends upon whether or not said company suggests that its security controls are all that are needed to secure said operating system, or that defense in depth is not needed.

Here’s why I find this statement interesting: I’m going to twist it by agreeing with it within the context of the same argument pertaining to Cisco, as an extension of the many, many articles I have already written on this topic.

Given just the last rash of vulnerabilities in Cisco’s routing, switching and security products a few weeks ago, I believe it’s also a mistake (you can read "conflict of interest" if you desire) for Cisco (le fox) to protect the network (le chicken).  That’s the same argument as the "operating system" versus "security platform" one.

I think it’s simply not relevant or appropriate to shrug off issues like this just because of Cisco’s size and the apparent manifest destiny associated with security "going into the switch."  Just because it does — and more than likely will — does not mean it should, and it does not mean that people will settle for "good enough" security when the network consistently fails to self-defend.

I don’t disagree that more and more security *will* make its way into the network switches, much like I don’t disagree that the sun will rise in the east and set in the west.  But much in the same way that folks don’t just give up and go to sleep once the sun goes down, the lightbulb that goes on in my head suggests there is a better way.

/Hoff

Blue Lane VirtualShield for VMWare – Here we go…

March 19th, 2007 1 comment

Greg Ness from Blue Lane and I have known each other for a while now, and ever since I purchased Blue Lane’s first release of products a few years ago (when I was on the "other" side as a *gasp* customer) I have admired their technology, and I have taken some blog-derived punishment for my position on it.

I have zero interest in Blue Lane other than the fact that I dig their technology and products and think it solves some serious business problems elegantly and efficiently with a security efficacy that is worth its weight in gold.

Vulnerability shielding (or patch emulation…) is a provocative subject, and I’ve gone ’round and ’round with many a fine folk online, wherein the debate normally dissolves into the intricacies of IPS vs. vulnerability shielding rather than the fact that these solutions solve a business problem in a unique way that works and is cost-effective.

That’s what a security product SHOULD do.  Yet I digress.

So, back to Greg @ Blue Lane… he let me know a few weeks ago about Blue Lane’s VirtualShield offering for VMWare environments.  VirtualShield is the first commercial product that I know of that specifically tackles problems that everyone knows exist in VM environments but that everyone has, until now, sat around twirling their thumbs about.

In fact, I alluded to some of these issues in this blog entry regarding the perceived "dangers" of virtualization a few weeks ago.

In short, VirtualShield is designed to protect guest VMs running under a VMWare ESX environment in the following manner (and I quote):

  • Protects virtualized servers regardless of physical location or patch-level;
  • Provides up-to-date protection with no configuration changes and no agent installation on each virtual machine;
  • Eliminates remote threats without blocking legitimate application requests or requiring server reboots; and
  • Delivers appropriate protection for specific applications without requiring any manual tuning.

VS basically sits on top of the hypervisor and performs a similar set of functions to what the PatchPoint solution does for non-VM systems.

Specifically, VirtualShield discovers the virtual servers running on a server and profiles the VMs and the application(s), ports and protocols utilized, to build and provision the specific OS and application protections (vulnerability shielding) required to protect each VM.

I think the next section is really the key element of VirtualShield:

As traffic flows through VirtualShield inside the hypervisor, individual sessions are decoded and monitored for vulnerable conditions. When necessary, VirtualShield can replicate the function of a software security patch by applying a corrective action directly within the network stream, protecting the downstream virtual server.

As new security patches are released by software application vendors, VirtualShield automatically downloads the appropriate inline patches from Blue Lane. Updates may be applied dynamically without requiring any reboots or reconfigurations of the virtual servers, the hypervisor, or VirtualShield.

While one might suggest that vulnerability shielding is not new, and that in some cases similar functionality can be provided by firewalls, IPS, AV, etc., I maintain that the manner and model in which Blue Lane elegantly executes this compensating control is unique and effective.
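To illustrate the shape of the trick (my own toy illustration, not Blue Lane’s implementation), here’s a sketch of applying the semantic effect of a missing patch inline: rather than killing the session, the shield rewrites the stream so the unpatched server downstream never sees the dangerous condition.

```python
# Toy illustration of inline "vulnerability shielding": instead of
# signature-blocking the session, apply the corrective effect of the
# missing patch in the stream -- here, clamping an over-long HTTP
# header that a hypothetical unpatched server would mishandle.

MAX_SAFE_HEADER = 256   # invented limit for the sketch

def shield_http_request(raw: bytes) -> bytes:
    """Rewrite the stream so it is safe for the unpatched server downstream."""
    head, sep, body = raw.partition(b"\r\n\r\n")
    safe_lines = []
    for line in head.split(b"\r\n"):
        if len(line) > MAX_SAFE_HEADER:
            line = line[:MAX_SAFE_HEADER]    # the "inline patch"
        safe_lines.append(line)
    return b"\r\n".join(safe_lines) + sep + body

evil = b"GET / HTTP/1.1\r\nHost: " + b"A" * 5000 + b"\r\n\r\n"
print(len(evil), "->", len(shield_http_request(evil)))  # request shrinks, session survives
```

The interesting part, and what distinguishes the model from plain drop-the-packet IPS, is that the legitimate request continues on its way.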

If you’re running a virtualized server environment under VMWare’s ESX architecture, check out VirtualShield…right after you listen to the virtualization podcast with yours truly from RSA.

/Hoff

RSA Conference Virtualization Panel – Audio Session Available

March 15th, 2007 No comments

According to the folks at RSA, people really wanted the audio recording of the DEPL-107 "Virtualization and Security" panel session I was on @ this year’s RSA show.

The room was filled to the brim and I think ultimately it’s worth the listen.  Good balance of top-down and bottom-up taxonomy of the challenges virtualization brings to the security world.

The kind folks @ RSA decided that rather than charge for it, they would release it for free:

"Demand for these six sessions was so high at RSAR Conference 2007 that we’re providing the audio recordings for all to enjoy for free. Please download the session audio files below, and enjoy!"

If you think I write a lot, I talk a hell of a lot more!  Yikes.

Here is the link to the .mp3 of the DEPL-107 Session.

Enjoy.  /Hoff