Don’t Hassle the Hoff: Recent Press & Podcast Coverage…

December 12th, 2007 4 comments

Here’s a rundown of some recent press and podcast coverage on topics relevant to content on my blog:

/Hoff

Categories: Press

Consolidating Controls Causes Chaos and Certain Complexity?

December 10th, 2007 6 comments

Don Weber wrote a post last week describing his thoughts on the consolidation of [security] controls and followed it up with another today titled "Quit Complicating our Controls – UTM Remix" in which he suggests that the consolidation of controls delivers an end-state of additional "complexity" and "higher risk":

Of course I can see why people desire to integrate the technologies. 

  • It is more cost effective to have two or more technologies on one piece of hardware.
  • You only have to manage one box.
  • The controls can augment each other more effectively and efficiently (according to the advertising on the box).
  • Firewalls usually represent a choke point to external and potentially hostile environments.
  • Vendors can market it as the Silver Bullet (no relation to Gary McGraw’s podcast) of controls.
  • “The next-generation firewall will have greater blocking and visibility into types of protocols,” says Greg Young, research vice president for Gartner.
  • etc

Well, I have a problem with all of this. Why are we making our controls more complex? Complexity leads to vulnerabilities. Vulnerabilities lead to exploits. Exploits lead to compromises. Compromises lead to loss.

…and:

Don’t get me wrong. I am all for developing new technologies that will allow organizations to analyze their traffic so that they get a better picture of what is traversing and exiting their networks. I just think they will be more effective if they are deployed so that they augment each other’s control measures instead of threatening them by increasing the risk through complexity. Controls should reduce risk, not increase it.

Don’s posts have touched on a myriad of topics I have very strong opinions on: complex simplicity, ("magical") risk, UTM and application firewalls.  I don’t agree with Don’s statements regarding any of them. That’s probably why he called me out.

The question I have for Don is simple: how is it that you’ve arrived at the conclusion that the consolidation and convergence of security functionality from multiple discrete products into a single-sourced solution adds "complexity" and leads to "increased risk"?

Can you empirically demonstrate this by giving us an example of where a single-function security device that became a multiple-function security product caused this complete chain of events to occur:

  1. Product complexity increased,
  2. which led to an exploitable vulnerability, and
  3. which increased "risk" based upon business impact and exposure

I’m being open-minded here, and rather than trying to address every corner case I am eager to understand more of the background of Don’s position so I might respond accordingly.

/Hoff

WARNING: Tunneling Traffic Means Filtering On 5-Tuple Insufficient. Welcome to 1995!

December 8th, 2007 4 comments

…just to put your mind at ease, no, that’s not me.  I’m all about the boxers not briefs.  Now you know since you’ve all been wondering, I’m sure…

I really do appreciate it when people dedicate time, energy and expertise to making sure we’re all as informed as we can be about the potential for bad things to happen.  Case in point, hat tip to Mitchell for pointing us to just such a helpful tip from a couple of guys submitting a draft to the IETF regarding the evils of tunneled traffic.

Specifically, the authors are commenting on the "feature" called Teredo, in which IPv6 is tunneled within IPv4 UDP datagrams.

Here’s the shocking revelation, sure to come as a complete surprise to anyone in IT/Security today…if you only look at SrcIP, DstIP, SrcPort, DstPort and Protocol, you’ll miss the fact that nasty bits are traversing your networks in a tunneled/encapsulated death ray!

Seriously, welcome to 1995.  If your security infrastructure relies upon technology that doesn’t inspect "deeper" than the 5-tuple above, you’re surely already aware of this problem, as 90% of the traffic entering and leaving your network is probably "tunneled" within port 80/443.
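To make the point concrete, here’s a minimal sketch (mine, not the draft authors’) of the one extra peek a filter has to take past the 5-tuple to spot Teredo. It assumes you’re handed raw IPv4 bytes, and it ignores the optional origin/authentication indicators Teredo can prepend, so treat it as illustrative rather than production-grade detection:

```python
import struct

TEREDO_PORT = 3544  # IANA-assigned UDP port for Teredo

def looks_like_teredo(ipv4_packet: bytes) -> bool:
    """Peek one layer past the 5-tuple of a raw IPv4 datagram."""
    ihl = (ipv4_packet[0] & 0x0F) * 4           # IPv4 header length in bytes
    if ipv4_packet[9] != 17:                    # protocol 17 == UDP
        return False
    src_port, dst_port = struct.unpack("!HH", ipv4_packet[ihl:ihl + 4])
    payload = ipv4_packet[ihl + 8:]             # skip the 8-byte UDP header
    # A 5-tuple filter stops at the ports; one nibble deeper reveals an
    # encapsulated IPv6 packet (version field == 6).
    return (TEREDO_PORT in (src_port, dst_port)
            and len(payload) > 0
            and payload[0] >> 4 == 6)
```

The point isn’t the dozen lines of Python; it’s that anything still making decisions on the outer header alone never even gets this far.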

Here’s a practical example.  I stuck a Palo Alto Networks box in front of my home network as part of an evaluation I’m doing for a client.  Check out the application profile of the traffic leaving my network via my FIOS connection:
[screenshot: Palo Alto Networks application report for traffic leaving my home network]

Check out that list of applications above.  Care to tell me how many of them are tunneled over/via/through port 80/443?  True, they’re not IPv6 in IPv4, but it’s really the same problem; obfuscating applications and protocols means you need much more precise fidelity and resolution in detecting what’s going through your firewall’s colander.

By the way, I’ve got stuff going through SSH port forwarding, in ICMP payloads, via SSL VPN, via IPSec VPN…can’t wait to see what happens when I shove ’em out using Fragrouter.

I’m all for raising awareness, but does this really require an IETF draft to update the Teredo specification?

/Hoff


Categories: Uncategorized

The Seesaw CISO…Changing Places But Similar Faces…

December 8th, 2007 1 comment

…from geek to business speak…

Dennis Fisher has a nice writeup over at the SearchSecurity Security Bytes Blog about the changing role and reporting structure of the CISO.

Specifically, Dennis notes that he was surprised by the number of CISOs who recently told him that they no longer report to the CIO and aren’t a part of IT at all.  Moreover, these same CISOs noted that the skillset and focus are also changing from a technical to a business role:

In the last few months I’ve been hearing more and more from CEOs, CIOs and CSOs about the changing role of the CSO (or CISO, depending on your org chart) in the enterprise. In the past, the CSO has nearly always been a technically minded person who has risen through the IT ranks and then made the jump to the executive ranks. That lineage sometimes got in the way when it came time to deal with other upper managers who typically had little or no technical knowledge and weren’t interested in the minutiae of authentication schemes, NAC and unified threat management. They simply wanted things to work and to avoid seeing the company’s name in the papers for a security breach.

But that seems to be changing rather rapidly. Last month I was on a panel in Chicago with Howard Schmidt, Lloyd Hession, the CSO of BT Radianz, and Bill Santille, CIO of Uline, and the conversation quickly turned to the ways in which the increased focus on risk management in enterprises has forced CSOs to adapt and expand their skill sets. A knowledge of IDS, firewalls and PKI is not nearly enough these days, and in some cases is not even required to be a CSO. One member of the audience said that the CSO position in his company is rotated regularly among senior managers, most of whom have no technical background and are supported by a senior IT staff member who serves as CISO. The CSO slot is seen as a necessary stop on the management circuit, in other words. Several other CSOs in the audience said that they no longer report to the CIO and are not even part of the IT organization. Instead, they report to the CFO, the chief legal counsel, or in one case, the ethics officer.

I’ve talked about the fact that "security" should be a business function and not a technical one, and frankly what Dennis is hearing has been a trend on the uptick for the last 3-4 years as "information security" becomes less relevant and managing risk becomes the focus.  To wit:

The number of organizations making this kind of change surprised me at the time. But, in thinking more about it, it makes a lot of sense, given that the daily technical security tasks are handled by people well below the CSO’s office. And many of the CSOs I know say they spend most of their time these days dealing with policy issues such as regulatory compliance. Patrick Conte, the CEO of software maker Agiliance, which put on the panel, told me that these comments fit with what he was hearing from his customers, as well. Some of this shift is clearly attributable to the changing priorities inside these enterprises. But some of it also is a result of the maturation of the security industry as a whole, which has translated into less of a focus on technology and more attention being paid to policies, procedures and other non-technical matters.

How this plays out in the coming months and years will be quite interesting. My guess is that as security continues to be absorbed into the larger IT and operations functions, the CSO’s job will continue to morph into more of a business role.

I still maintain that "compliance" is nothing more than a gap-filler.  As I said here, we have compliance as an industry [and measurement] today because we manage technology threats and vulnerabilities and don’t manage risk.  Compliance is simply a way of forcing transparency and plugging the gap between the two.  For most, it’s the best they’ve got.

Once we’ve got our act together organizationally, compliance will become the floor, not the ceiling, and we’ll really start to see the "…maturation of the security industry as a whole."

/Hoff

And Now Some Useful 2008 Information Survivability Predictions…

December 7th, 2007 1 comment

So, after the obligatory dispatch of gloom and doom as described in my 2008 (in)Security Predictions, I’m actually going to highlight some of the more useful things in the realm of Information Security that I think are emerging as we round the corner toward next year.

They’re not really so much predictions as things to watch.

Unlike folks who can only seem to talk about desperation, futility and manifest destiny, or (worse yet) "anti-pundit pundits" who try to suggest that predictions and forecasting are useless (usually because they suck at it), I gladly offer a practical roundup of impending development, innovation and some incremental evolution for your enjoyment.

You know, good news.

As Mogull mentioned, I don’t require a Cray XMP48, chicken bones & voodoo or a prehensile tail to make my picks.  Rather, I grab a nice cold glass of Vitamin G (Guinness), sit down and think for a minute or two, dwelling on my super l33t powers of common sense and pragmatism with just a pinch of futurist wit.

Many of these items have been underway for some time, but 2008 will be a banner year for these topics as well as the previously-described "opportunities for improvement…"

That said, let’s roll with some of the goodness we can look forward to in the coming year.  This is not an exhaustive list by any means, but some examples I thought were important and interesting:

  1. More robust virtualization security toolsets with more native hypervisor/VMM accessibility
    Though it didn’t start with the notion of security baked in, virtualization for all of its rush-to-production bravado will actually yield some interesting security solutions that help tackle some very serious challenges.  As the hypervisors become thinner, we’re going to see the management and security toolsets gain increased access to the guts of the sausage machine in order to effect security appropriately, and this will be the year we see the virtual switch open up to third parties and more robust APIs for security visibility and disposition appear.
     
  2. The focus on information centric security survivability graduates from v1.0 to v1.1
    Trying to secure the network and the endpoint is like herding cats, and folks are tired of dumping precious effort on deploying kitty litter around the Enterprise to soak up the stinky spots.  Rather, we’re going to see folks really start to pay attention to information classification, extensible and portable policy definition, cradle-to-grave lifecycle management, and invest in technology to help get them there.

    Interestingly, the current maturity of features/functions such as NAC and DLP has actually helped us get closer to managing our information and information-related risks.  The next generation of these offerings, in combination with many of the other elements I describe herein and their consolidation into the larger landscape of management suites, will actually start to deliver on the promise of focusing on what matters — the information.
     

  3. Robust role-based policy, identity and access management coupled with entitlement, geo-location and federation…oh, and infrastructure, too!
    We’re getting closer to being able to apply policy holistically, based not just upon source/destination IP address, switch and router topology and the odd entry in Active Directory on a per-application basis, but upon robust, lifecycle-focused, role-based policy engines that allow us to tie in all of the major enterprise components that sit along the information supply-chain.

    Who, what, where, when, how and ultimately why will be the decision points considered by the next generation of solutions in this space. Combine the advancements here with item #2 above, and someone might actually start smiling.

    If you need any evidence of the convergence/collision of the application-oriented with the network-oriented approach and a healthy overlay of user entitlement provisioning, just look at the about-face Cisco just made regarding TrustSec.  Of course, we all know that it’s not a *real* security concern/market until Cisco announces they’ve created the solution for it 😉
     

  4. Next Generation Networks gain visibility as they redefine the compute model of today
    Just as there exists a Moore’s curve for computing, there exists an overlapping version for networking; it just moves more slowly given the footprint.  We’re seeing the slope of this curve starting to trend up this coming year, and it’s much more than bigger pipes, although that doesn’t hurt either…

    These next generation networks will really start to emerge visibly in the next year as the existing networking models start to stretch the capabilities and capacities of existing architecture and new paradigms drive requirements that dictate a much more modular, scalable, resilient, high-performance, secure and open transport upon which to build distributed service layers.

    How networks and service layers are designed, composed, provisioned, deployed and managed — and how that intersects with virtualization and grid/utility computing — will start to really sink home the message that "in the cloud" computing has arrived.  Expect service providers and very large enterprises to adopt these new computing climates first, with a trickle-down to smaller business via SaaS and hosted service operators to follow.

    BT’s 21CN (21st Century Network) is a fantastic example of what we can expect from NGN as the demand for higher speed, more secure, more resilient and more extensible interconnectivity really takes off.
     

  5. Grid and distributed utility computing models will start to creep into security
    A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

    The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute to the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

    Sort of sounds like that "self-defending network" spiel, but not focused on the network, and with common telemetry and distributed processing of the problem.

    Check out Red Lambda’s cGrid technology for an interesting view of this model.
     

  6. Precision versus accuracy will start to legitimize prevention as the technology gives us the confidence to turn the corner beyond detection

    In a sad commentary on the last few years of the security technology grind, we’ve seen the prognostication that intrusion detection is dead and the deadpan urging of the security vendor cesspool convincing us that we must deploy intrusion prevention in its stead.
       
    Since there really aren’t many pure-play intrusion detection systems left anyway, the reality is that most folks who have purchased IPSs seldom put them in in-line mode, and when they do, they seldom turn on the "prevention" policies and instead just have them detect attacks, blink a bit and get on with it.

    Why?  Mostly because while the threats have evolved, the technology implemented to mitigate them hasn’t — we’re either stuck with giant port/protocol colanders or signature-driven IPSs that are nothing more than IDSs with the ability to send RST packets.

    So the "new" generation of technology has
    arrived and may offer some hope of bridging that gap.  This is due to
    not only really good COTS hardware but also really good network
    processors and better software written (or re-written) to take
    advantage of both.  Performance, efficacy and efficiency have begun to
    give us greater visibility as we get away from making decisions based
    on ports/protocols (feel free to debate proxies vs. ACLs vs. stateful
    inspection…) and move to identifying application usage and getting us
    close to being able to make "real time" decisions on content in context
    by examining the payload and data.  See #2 above.

    The precision versus accuracy discussion is focused around being able to really start trusting in the ability of prevention technology to detect, defend and deter against "bad things" with a fidelity and resolution that has very low false positive rates (there’s a quick numerical sketch of the distinction after this list).

    We’re getting closer with the arrival of technology such as Palo Alto Networks’ solutions — you can call them whatever you like, but enforcing both detection and prevention using easy-to-define policies based on application (and telling the difference between any number of apps all using port 80/443) is a step in the right direction.
     

  7. The consumerization of IT will cause security and IT as we know it to die… er, radically change
    I know it’s heretical, but 2008 is going to really push the limits of the existing IT and security architectures to their breaking points, which is going to mean that instead of saying "no," we’re going to have to focus on how to say "yes, but with this incremental risk" and find solutions for an ever more mobile and consumerist enterprise.

    We’ve talked about this before, and most security folks curl up into a fetal position when you start mentioning the adoption by the enterprise of social networking, powerful smartphones, collaboration tools, etc.  The fact is that the favorable economics, agility, flexibility and efficiencies gained with the consumerization of IT outweigh the downsides in the long run.  Let’s not forget the new generation of workers entering the workforce.

    So, since information is going to be leaking from our Enterprises like a sieve, on all manner of devices and by all manner of methods, it’s going to force our hands: we’ll have to focus on being information centric, stop worrying about the "perimeter problem," stop focusing on the network and the host, and start dealing with managing the truly important assets while allowing our employees to do their jobs in the most effective, collaborative and efficient methods possible.

    This disruption will be a good thing, I promise.  If you don’t believe me, ask BP — one of the largest enterprises on the planet.  Since 2006 they’ve put some amazing initiatives into play, like this little gem:

    Oil giant BP is pioneering a "digital consumer" initiative that will give some employees an allowance to buy their own IT equipment and take care of their own support needs.

    The project, which is still at the pilot stage, gives select BP staff an annual allowance — believed to be around $1,000 — to buy their own computing equipment and use their own expertise and the manufacturer’s warranty and support instead of using BP’s IT support team.

    Access to the scheme is tightly controlled and those employees taking part must demonstrate a certain level of IT proficiency through a computer driving licence-style certification, as well as signing a diligent use agreement.

    …combined with this:

    Rather than rely on a strong network perimeter to secure its systems, BP has decided that these laptops have to be capable of coping with the worst that malicious hackers can throw at it, without relying on a network firewall.

    Ken Douglas, technology director of BP, told the UK Technology Innovation & Growth Forum in London on Monday that 18,000 of BP’s 85,000 laptops now connect straight to the internet even when they’re in the office.

  8. Desktop Operating Systems become even more resilient
    The first steps taken by Microsoft and Apple in Vista and OS X (Leopard) have begun to chip away at some of the security holes that have plagued them due to the architectural "feature" that an open execution runtime model delivers.  Honestly, nothing short of a do-over will ultimately mitigate this problem, so instead of suggesting that incremental improvement is worthless, we should recognize that our dark overlords are trying to make things better.

    Elements in Vista such as ASLR, NX, and UAC, combined with integrated firewalling, anti-spyware/anti-phishing, disk encryption, integrated rights management, protected-mode IE, etc., are all good steps in a "more right" direction than previous offerings.  They’re in response to lessons learned.

    On the Mac, we also see ASLR, sandboxing, input management, better firewalling and better disk encryption, which are also notable improvements.  Yes, we’ve got a long way to go, but this means that OS vendors are paying more attention, which will lead to more stable and secure platforms upon which developers can write more secure code.

    It will be interesting to see how these "more secure" OSes intersect with the virtualization security discussed in #1 above.

    Vista SP1 is due to ship in 2008 and will include APIs through which third-party security products can work with kernel patch protection on Vista x64, more secure BitLocker drive encryption and a better Elliptic Curve Cryptography PRNG (pseudo-random number generator).  Follow-on releases to Leopard will likely feature security enhancements beyond those delivered this year.
     

  9. Compliance stops being a dirty word & risk management moves beyond buzzword
    Today we typically see the role of information security described as blocking and tackling, focused on managing threats and vulnerabilities balanced against the need to be "compliant" with some arbitrary set of internal and external policies.  In many people’s assessment, then, compliance equals security.  This is an inaccurate and unfortunate misunderstanding.

    In 2008, we’ll see many of the functions of security — administrative, policy and operational — become much more visible and transparent to the business and we’ll see a renewed effort placed on compliance within the scope of managing risk because the former is actually a by-product of a well-executed risk management strategy.

    We have compliance as an industry today because we manage technology threats and vulnerabilities and don’t manage risk.  Compliance is actually nothing more than a way of forcing transparency and plugging a gap between the two.  For most, it’s the best they’ve got.

    What’s traditionally preventing the transition from threat/vulnerability management to risk management is the principal focus on technology, the lack of a good risk assessment framework and thus a lack of understanding of business impact.

    The availability of mature risk assessment frameworks (OCTAVE, FAIR, etc.), combined with the maturity of IT and governance frameworks (COBIT, ITIL) and the readiness of the business and IT/Security cultures to accept risk management as a language and action set with which they need to be conversant, will yield huge benefits this year.

    Couple that with solutions like Skybox and you’ve got the makings of a strategic risk management practice that can bring security into closer alignment with the business.
     

  10. Rich Mogull will, indeed, move in with his mom and start speaking Klingon
    ’nuff said.
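Since #6 above leans on the difference between precision and accuracy, here’s the numerical sketch promised there.  The detector counts are invented, but the ratios make the point:

```python
# Hypothetical counts from one day of in-line inspection -- all invented.
tp, fp, tn, fn = 950, 10, 1_000_000, 50   # true/false positives & negatives

precision = tp / (tp + fp)                 # of the alerts raised, how many were real?
accuracy  = (tp + tn) / (tp + fp + tn + fn)
fpr       = fp / (fp + tn)                 # benign traffic wrongly flagged

print(f"precision={precision:.3f}  accuracy={accuracy:.6f}  fpr={fpr:.8f}")
# Accuracy looks near-perfect on any skewed traffic mix; it's precision and
# FPR that decide whether you dare flip an IPS from detect to prevent.
```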

So, there we have it.  A little bit of sunshine in your otherwise gloomy day.

/Hoff

2008 Security Predictions — They’re Like Elbows…

December 3rd, 2007 6 comments

Yup.  Security predictions are like elbows.  Most everyone’s got at least two; they’re usually ignored unless rubbed the wrong way, but when used appropriately they can be devastating in a cage match…

So, in the spirit of, well, keeping up with the Joneses, I happily present you with Hoff’s 2008 Information (in)Security Predictions.  Most of them feature attacks/attack vectors.  A couple are ooh-aah trends.  Most of them are sadly predictable.  I’ve tried to be more specific than "cybercrime will increase."

I’m really loath to do these, but being a futurist, the only comfort I can take is that nobody can tell me that I’m wrong today 😉

…and in the words of Carnac the Magnificent, "May the winds of the Sahara blow a desert scorpion up your turban…"

  1. Nasty Virtualization Hypervisor Compromise
    As the Hypervisor gets thinner, more of the guts will need to be exposed via API or shed to management and functionality-extending toolsets, expanding the attack surface with new vulnerabilities.  To wit, a Hypervisor-compromising malware will make its first in-the-wild appearance, not only producing an exploit but obfuscating itself thanks to the magic of virtualization in the underlying chipsets.  Hang on to yer britches: the security vendor product marketing SpecOps Generals are going to scramble the fighters with a shock and awe campaign of epic "I told you so" & "AV isn’t dead, it’s just virtualized" proportions…Security "strategery" at its finest.

  2. Major Privacy Breach of a Social Networking Site
    With the broadening reach of application extensibility and Web2.0 functionality, we’ll see a major privacy breach via social network sites such as MySpace, LinkedIn or Facebook via the usual suspects (CSRF, XSS, etc.) and via host-based Malware that 0wns unsuspecting Millennials and utilizes the interconnectivity offered to turn these services into a "social botnet" platform with a wrath the likes of which only the ungodly lovechild of Storm, Melissa, and Slammer could bring…

  3. Integrity Hack of a Major SaaS Vendor
    Expect a serious bit of sliminess with real financial impact to occur from a SaaS vendor’s offering.  With professional cybercrime on the rise, the criminals will go not only where the money is, but also after the data that describes where that money is.  Since much of the security of the SaaS model counts on the integrity and not just the availability of the hosted service, a targeted attack which holds hostage the (non-portable) data and threatens its integrity could have devastating effects on the companies who rely on it.  SalesForce, anyone?
     
  4. Targeted eBanking Compromise with substantial financial losses
    Get ready for a nasty eBanking-focused compromise that starts to unravel consumer confidence in this convenient utility; not directly because of identity abuse (note I didn’t say identity theft) but because of the business model impact it will bring to the banks.  These types of direct attacks (beyond phishing) will start to push the limits of acceptable loss for the financial institutions and their insurers, and will start to move the accountability/responsibility more heavily down to the eBanking customer.  A tiered service level will surface, with greater functionality/higher transaction limits offered in trade for higher security/less convenience.  Same goes for credit/debit cards…priceless!
  5. A Major state-sponsored espionage and cyberattack w/disruption of U.S. government function
    We saw some of the noisier examples of low-level crack attacks via our Chinese friends recently, but given the proliferation of botnets and the inexcusably poor security of government systems and networks, we’ll see a targeted attack against something significant.  It’ll be big.  It’ll be public.  It’ll bring new legislation…Isn’t there some little election happening soon?  This brings us to…
  6. Be-Afraid-A of a SCADA compromise…the lunatics are running the asylum!
    Remember that leaked DHS "turn your generator into a Roman candle" video that circulated a couple of months ago?  Get ready to see the real thing on prime time news at 11.  We’ve got decades of legacy controls just waiting for the wrong guy to flip the right switch.  We just saw an "insider" at a major water utility do naughty things; imagine if someone really motivated popped some goofy pills and started playing Tetris with the power grid…imagine what all those little SCADA doodads are hooked to…
     
  7. A Major Global Service/Logistics/Transportation/Shipping/Supply-Chain Company will be compromised via targeted attack
    A service we take for granted like UPS, FedEx, or DHL will have its core supply chain/logistics systems interrupted, causing the fragile underbelly of our global economic interconnectedness to show itself, warts and all.  Prepare for huge chargebacks on next-day delivery when all those mofo’s don’t get their self-propelled, remote-controlled flying UFO’s delivered from Amazon.com.

  8. Mobile network attacks targeting mobile broadband
    So, you don’t use WiFi because it’s insecure, eh?  Instead, you fire up that Verizon EVDO card plugged into your laptop or tether to your mobile phone because it’s "secure."  Well, that’s going to be a problem next year.  Expect to see compromise of the RF you hold so dear as we all scramble to find that next piece of spectrum that has yet to be 0wn3d…Google’s 700MHz spectrum, you say? Oh, wait…WiMax will save us all…
     
  9. My .txt file just 0wn3d me!  Is nothing sacred!?  Common file formats and protocols to cause continued unnatural acts
    PDFs, QuickTime, .PPT, .DOC, .XLS.  If you can’t trust the sanctity of the file formats and protocols from Adobe, Apple and Microsoft, who can you trust!?  Expect to see more and more abuse of generic underlying software plumbing providing the conduit for exploit.  Vulnerabilities that aren’t fixed properly, combined with a dependence on OS security functionality that’s only half baked, are going to mean that the "Burros Gone Wild" video you’re watching on YouTube is going to make you itchy in more ways than one…

  10. Converged SensorNets
    In places like the UK, we’ve seen the massive deployment of CCTV monitoring of the populace.  In places like South Central L.A., we have ballistic geo-location and detection systems to track gunshots.  We’ve got GPS in phones.  In airports we have sniffers, RFID passport processing, biometrics and "Total Recall" nudie scanners.  The PoPo have license plate recognition.  Vegas has facial recognition systems.  Our borders have motion, heat and remote sensing pods.  Start knitting this all together and you have massive SensorNets — all networked — able to track you with military precision.  Pair that with GoogleMaps/Streets and I’ll be able to tell what color underwear you had on at the checkout counter of your local Qwik-E-Mart when you bought that mocha slurpaccino last Tuesday…please don’t ask me how I know.

  11. Information Centric Security Phase One
    It should come as no surprise that focusing our efforts on the host and the network has led to the spectacular septic tank of security we have today.  We need to focus on content in context and set policies across platform and transport to dictate who, how, when, where, and why the creation, modification, consumption and destruction of data should occur.  In this first generation of DLP/CMF solutions (which are being integrated into the larger base of "Information" centric "assurance" solutions), we’ve taken the first step along this journey.  What we’ll begin to see in 2008 is the information equivalent of the Mission Impossible self-destructing recording…only with a little more intelligence and less smoke.  Here come the DRM haters…
     
  12. The Attempted Coup to Return to Centralized Computing with the Paradox of Distributed Data
    Despite the fact that data is being distributed to the far reaches of the Universe, the wonders of economics combined with the utility of some well-timed technology are seeing IT & Security (encouraged by the bean counters) attempting to put the genie back in the bottle and re-centralize the computing (desktop, server, application and storage) experience into big boxes tucked safely away in some data center somewhere.  Funny thing is, with utility/grid computing and SaaS, the data center is but an abstraction, too.  Virtualization companies will become our dark overlords, as they will control the very fabric of our digital lives…2008 is when we’ll really start to use the web as the platform for the delivery of all applications, served through streamed desktops on thinner and thinner clients.

So, that’s all I could come up with.  I don’t really have a formulaic empirical model like Stiennon; I just have a Guinness and start complaining.

In more ways than one, I hope I’m terribly wrong on most of these.

/Hoff

[Edit: Please see my follow-on post titled "And Now Some Useful 2008 Information Survivability Predictions," which speaks to some interesting, less gloomy things I predict will happen in 2008.]

Security and Disruptive Innovation Part IV: Embracing Disruptive Innovation by Mapping to a Strategic Innovation Framework

November 29th, 2007 4 comments

This is the last of the series on the topic of "Security and Disruptive Innovation."

In Part I we talked about the definition of innovation, cited some examples of general technology innovation/disruption, discussed technology taxonomies and lifecycles, and looked at what initiatives and technologies CIOs are investing in.

In Parts II and III we started to drill down and highlight some very specific disruptive technologies that were impacting Information Security.

In this last part, we will explore how to take these and future examples of emerging disruptive innovation and map them to a framework which will allow you to begin embracing disruptive innovation rather than reacting to it after the fact.

21. So How Can We Embrace Disruptive Technology?
Most folks in an InfoSec role find themselves overwhelmed juggling the day-to-day operational requirements of the job against the onslaught of evolving technology, business, culture, and economic "progress"  thrown their way.

In most cases this means that they’re rather busy mitigating the latest threats and remediating vulnerabilities in a tactical fashion and find it difficult to think strategically and across the horizon.

What’s missing in many cases is the element of business impact: in conjunction with those threats and vulnerabilities, the resultant business impact should drive the decision of what to focus on, prioritizing actions by whether they actually matter to your most important assets.

Rather than managing threats and vulnerabilities without context and blindly deploying more technology, we need to find a way to better manage risk.

We’ll talk about getting closer to assessing and managing risk in a short while, but if we look at what managing threats and vulnerabilities as described above entails, we usually end up in a discussion focused on technology.  Accepting this common practice today, we need a way to effectively leverage our investment in that technology to get the best bang for our buck.

That means we need to actively invest in and manage a strategic security portfolio — like an investor might buy/sell stocks.  Some items you identify and invest in for the short term and others for the long term.  Accordingly, the taxonomy of those investments would also align to the "foundational, commoditizing, distinguished" model previously discussed, so that the diversity of the solution sets can be associated, timed and managed across the continuum of investment.

This means that we need to understand how technology, business, culture and economics intersect to affect the behavior of adopters of disruptive innovation, so we can understand where, when, how and if to invest.

If this is done rationally, we will be able to demonstrate how a formalized innovation lifecycle management process delivers transparency and provides a RROI (reduction of risk on investment) over the life of the investment strategy. 

It means we will have a much more leveraged ability to proactively invest in the necessary people, process and technology ahead of the mainstream emergence of the disruptor by building a business case to do so.

Let’s see how we can do that…

22. Understand the Technology Adoption Lifecycle
[slide: technology adoption lifecycle model]

This model is what we use to map the classical adoption cycle of disruptive innovation/technology and align it to a formalized strategic innovation lifecycle management process.

If you look at the model on the top/right, it shows how innovators initially adopt "bleeding edge" technologies/products which through uptake ultimately drive early adopters to pay attention.

It’s at this point within the strategic innovation framework that we identify and prioritize investment in these technologies as they begin to evolve and mature.  As business opportunities avail themselves and these identified and screened disruptive technologies are vetted, certain of them are incubated and seeded as they become emerging solutions which add value and merit further investment.

As they mature and "cross the chasm," the early majority begins to adopt them and these technologies become part of the portfolio development process.  Some of these solutions will, over time, go away due to natural product and market behaviors, while others go through the entire area under the curve and are managed accordingly.

Pairing the appetite of the "consumer" with the maturity of the product/technology is a really important point.  Constantly reassessing the value the solution brings to the table, and whether a better, faster, cheaper mousetrap may already be on your radar, is critical.

This isn’t rocket science, but it does take discipline and a formal process.  Understanding how the dynamics of culture, economy, technology and business are changing will only make your decisions more informed and accurate and your investments more appropriately aligned to the business needs.

23. Manage Your Innovation Pipeline
[slide: innovation pipeline stages]

This slide is another example of the various mechanisms of managing your innovation pipeline.  It is a representation of how one might classify and describe a technology over time as it matures into a portfolio solution:

     * Sensing
     * Screening
     * Developing
     * Commercializing

In a non-commercial setting, the last stage might be described as "blessed" or something along those lines.

The inputs to this pipeline are just as important as the outputs; taking cues from customers and internal and external market elements is critical for a rounded decision fabric.  This is where that intersection of forces comes into play again.  Looking at all the elements and formally evaluating your efforts, the portfolio and the business needs yields a really interesting by-product: transparency…

24. Provide Transparency in Portfolio Effectiveness
[slide: portfolio effectiveness bubble chart]

I didn’t invent this graph, but it’s one of my favorite ways of visualizing my investment portfolio by measuring in three dimensions: business impact, security impact and monetized investment.  All of these definitions (as well as how you might measure them) are subjective within your organization.

The Y-axis represents the "security impact" that the solution provides.  The X-axis represents the "business impact" that the solution provides, while the size of the dot represents the capex/opex investment made in the solution.

Each of the dots represents a specific solution in the portfolio.

If you have a solution that is a large dot toward the bottom-left of the graph, one has to question the reason for continued investment, since it provides little in the way of perceived security and business value at high cost.  On the flipside, if a solution is represented by a small dot in the upper-right, the bang for the buck is high, as is the impact it has on the organization.

The goal would be to get as many of your investments in your portfolio from the bottom-left to the top-right with the smallest dots possible.
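If you want to tinker with this visualization yourself, a quick matplotlib sketch reproduces all three dimensions; the solution names, scores and spend figures below are invented purely for illustration:

```python
import matplotlib.pyplot as plt

# Invented portfolio data: (business impact, security impact, annual $K spend).
portfolio = {
    "Legacy IDS":    (2, 3, 400),
    "Web filtering": (5, 4, 250),
    "DLP pilot":     (6, 7, 150),
    "App firewall":  (8, 8, 120),
}

for name, (biz, sec, spend) in portfolio.items():
    plt.scatter(biz, sec, s=spend, alpha=0.5)   # dot area ~ capex/opex
    plt.annotate(name, (biz, sec))

plt.xlabel("Business impact")
plt.ylabel("Security impact")
plt.title("Security portfolio: small dots in the upper-right win")
plt.show()
```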

This transparency and the process by which the portfolio is assessed are delivered as an output of the strategic innovation framework, which is really part art and part science.

25. Balancing Art and Science

Andy Jaquith, champion of all things measured, who is now at Yankee but was previously at security consultancy @Stake, wrote a very interesting paper suggesting that we might learn quite a bit about managing a security portfolio from the investment community on Wall Street.

Andy suggested, as I alluded to above, that this portfolio management concept — while not exactly aligned — is indeed as much art as it is science, and elegantly noted that using a framework to define a security strategy over time is enabled by a mature process:

"While the analogy is imperfect, security managers should be able to use the tools of unique and systematic management to create more-balanced security strategies."

I couldn’t agree more 😉

26. How Are You Doing?

[slide: list of disruptive innovations/technologies]

If your CEO/CIO/CFO came to you today and put in front of you this list of disruptive innovation/technology and asked how these might impact your existing security strategy and what you were doing about it, what would your answer be?

Again, many of the security practitioners I have spoken to can articulate in some form how their existing technology investments might be able to absorb some of the impact this disruption delivers, but many have no formalized process to describe why or how.

Luck?  Serendipity?  Good choices?  Common sense?

Unfortunately, without a formalized process that provides the transparency described above, it becomes very difficult to credibly demonstrate that the appropriate amount of long-term strategic planning has been provided for, and that will likely cause angst and concern in the next budget cycle when monies for new technology are requested.

27. Ranum for President
[slide: Marcus Ranum quote]
At a minimum, what the business wants to know is whether, given the investment made, they are more or less at risk than they were before the investment was made (see here for what they really want to know.)

That’s a heady question and without transparency and process, one most folks would — without relying purely on instinct — have a difficult time answering.  "I guess" doesn’t count.

To make matters worse, people often confuse being "secure" with being less at risk, and I’m not sure that’s always a good thing.  You can be very secure but unfortunately make it very difficult for the business to conduct business.  This elevates risk, which is bad.

What we really seek to do is balance information sharing with the need to manage risk to an acceptable level.  So when folks ask if the future will be more "secure," I love to refer them to Marcus Ranum’s quote in the slide above: "…it will be just as insecure as it possibly can, while still continuing to function.  Just like it is today."

What this really means is that if we’re doing our job in the world of security, we’ll use the lens that a strategic innovation framework provides and pair it with the needs of the business to deliver a "security supply chain" that is just-in-time and provides no less and no more than what is needed to manage risk to an acceptable level.

I do hope that this presentation gives you some ideas as to how you might take a longer term approach to delivering a strategic service even in the face of disruptive innovation/technology.

/Hoff

Categories: Disruptive Innovation

Take5 (Episode #7) – Five Questions for Nir Zuk, Founder & CTO Palo Alto Networks

November 26th, 2007 7 comments

It’s been a while since I’ve done a Take5, and this seventh episode interviews Nir Zuk, Founder & CTO of up-start "next-generation firewall" company Palo Alto Networks.

There’s been quite a bit of hubbub lately about PAN and I thought I’d see what all the frothing was about.  I reached out to Nir and sent him a couple of questions via email which he was kind enough to answer.  PAN is sending me a box to play with so we’ll see how well it holds up on the Rack.  I’m interested in seeing how this approach addresses the current and the next generation network security concerns.

Despite my soapbox antics regarding technology in the security space, having spent the last two years at a network security startup put me at the cutting edge of some of the most unique security hardware and software in the business, and the PAN solution has some very interesting technology and some very interesting people at its core.

If you’ve used market-leading security kit in your day, you’ve probably appreciated some of Nir’s handiwork.

First a little background on the victim:


Nir Zuk brings a wealth of network security expertise and industry experience to Palo Alto Networks.

Prior to co-founding Palo Alto Networks, Nir was CTO at NetScreen Technologies, which was acquired by Juniper Networks in 2004.

Prior to NetScreen, Nir was co-founder and CTO at OneSecure, a pioneer in intrusion prevention and detection appliances.  Nir was also a principal engineer at Check Point Software Technologies and was one of the developers of stateful inspection technology.


Just to reiterate the Take5 ground-rules: I have zero interest in any of the companies who are represented by the folks I interview, except for curiosity.  I send the questions via email and what I get back, I post.  There are no clarifying attempts at messaging or do-overs.  It’s sort of like live radio, but without sound…

Questions:

1) Your background in the security space is well known and as we take a look out at the security industry and the breadth of technologies and products balanced against the needs of the enterprise and service providers, why did you choose to build another firewall product? Don't we have a mature set of competitors in this space? What need is Palo Alto Networks fulfilling? Isn't this just UTM?


The reason I have decided to build a new firewall product is quite similar to the reasons Check Point (one of my previous employers) decided to build a new firewall product back in the early 90's, when people were using packet filters embedded in routers - that reason being that existing firewalls are ineffective. Throughout the years, application developers have learnt how to bypass existing firewalls using various techniques such as port hopping, tunneling and encryption. Retrofitting existing firewalls, which use ports to classify traffic, turned out to be impossible, hence a new product had to be developed from the ground up.

2) As consolidation of security technologies into fewer boxes continues to heat up, vendors in the security space add more and more functionality to their appliances so as not to be replaced as the box-sprinkling madness continues. Who do you see as a competitive threat, and who do you see your box replacing/consolidating in the long term?


I think that a more important trend in network security today is the move from port-centric to application-centric classification technologies. This will make most of the existing products obsolete, similar to the way stateful inspection has made its predecessors disappear from the world... As for device consolidation, I think that existing firewall architectures are too old to support real consolidation, which today is limited to bolting multiple segregated products on the same device with minimal integration. A new architecture, which allows multiple network security technologies to share the same engines, has to emerge before real consolidation happens. The Palo Alto Networks PA-4000 series is, I believe, the first device to offer this kind of architecture.

3) The PA-4000 Series uses some really cutting-edge technologies; can you tell us more about some of them and how the appliance is differentiated from multi-core x86-based COTS appliances? Why did you go down the proprietary hardware route instead of just using standard Intel reference designs and focusing on software?

Intel CPUs are very good at crunching numbers, running Excel spreadsheets and playing high-end 3D games. They are not so good at handling packets. For example, the newest quad-core Intel CPU can handle, maybe, 1,500,000 packets per second, which amounts to about 1 Gbps with small packets. A single network processor, such as one of the many we have in the PA-4000 series, can handle 10 times that: 15,000,000 packets per second. Vendors that claim 10 Gbps throughput with Intel CPUs do so with large packet sizes which do not represent the real world.
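[Edit: Nir’s small-packet arithmetic checks out. Here’s the back-of-envelope version; the 20 bytes of Ethernet preamble/inter-frame gap I use for wire-rate math is my assumption, not his:

```python
# Wire-rate math for minimum-size (64-byte) Ethernet frames plus 20 bytes
# of preamble and inter-frame gap.
pps = 1_500_000
bits_per_small_packet = (64 + 20) * 8        # 672 bits on the wire per packet

print(pps * bits_per_small_packet / 1e9)     # ~1.0 Gbps, as claimed

# At 1,500-byte packets, a 10 Gbps pipe needs only ~820K pps, which is
# why large-packet benchmarks flatter general-purpose CPUs.
print(10e9 / ((1500 + 20) * 8))              # ~822,000 pps
```
]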


4) Your technology focuses on providing extreme levels of application granularity to be able to identify and control the use of specific applications.

Application specificity is important as more and more applications use well-known ports (such as port 80), encryption or other methods to obfuscate themselves to bypass firewalls. Is this going deep enough? Don't you need to inspect and enact dispositions at the content level? After all, it's the information that's being transmitted that is important.


Inspection needs to happen at two levels. The first one is used to identify the application. This, usually, does not require going into the information that's being transmitted but rather merely looking at the enclosing protocol. Once the application is identified, it needs to be controlled and secured, both of which require much deeper inspection into the information itself. Note that simply blocking the application is not enough - applications need to be controlled - some are always allowed, some are always blocked but most require granular policy. The PA-4000 products perform both inspections, on two different purpose-built hardware engines.

5) You've architected the PA-4000 Series to depend upon signatures and you don't use behavioral analysis or behavioral anomaly detection in the decision fabric to determine how to enact a disposition. Given the noise associated with poorly constructed expressions based upon signatures in products like IDS/IPS systems that don't use context as a decision point, are you losing anything by relying just on signatures?


The PA-4000 is not limited to signature-based classification of applications; it is using other techniques as well. As for false-positive issues, these are usually not associated with traffic classification but rather with attack detection. Generally, traffic classification is a very deterministic process that does not suffer from false positives. As for the IDS/IPS functionality in the PA-4000 product line, it is providing full context for the IDS/IPS signatures for better accuracy, but the most important reason the PA-4000 products have better accuracy is that Palo Alto Networks is not a pure IPS vendor and therefore does not need to play the "who has more signatures" game, which leads to competing products having thousands of useless signatures that only create false positives.

BONUS QUESTION:

6) The current version of the software really positions your solution as a client-facing, forward proxy that inspects outbound traffic from an end-user perspective.

Given this positioning, which one would imagine is done mostly at a "perimeter" choke point, can you elaborate on adding features like DLP or NAC? Also, if you're at the "perimeter," what about reverse proxy functionality to inspect inbound traffic to servers on a DMZ?


The current shipping version of PAN-OS provides NAC-like functionality with seamless integration with Active Directory and domain controllers. DLP is not currently a function that our product provides even though the product architecture does not preclude it. We are evaluating adding reverse proxy functionality in one of our upcoming software releases.

Categories: Take5

Answering A Very Difficult Value Question Regarding Information Security

November 24th, 2007 12 comments

Earlier this week I was in Nice, France speaking on the topic of the impact that the consumerization of IT has on security and vice versa.

We had a really diverse set of speakers and customers in attendance.

When you can pool the input and output from very large financial institutions to small law firms against the presentations from business innovation experts, security folk, workforce futurists, industry analysts and practitioners, you’re bound to have some really interesting conversation.

One of the attendees really capped off the first day’s discussion for me whilst at the bar, asking a seemingly innocuous (but completely flammable) question regarding the value that Information Security brings to the table, measured against its ability to provide service without stifling agility, innovation and general business practice.

This really smart person leads the innovation efforts at a very large financial institution in the UK and was quite frankly fed up with the "No Department" (InfoSec group) at his company.  He was rightfully sick of the strong-arming speedbumps that simply got in the way and cost money.

The overtly simplified question he posited was this:

Why can’t you InfoSec folks quite simply come to your constituent customers — the business — and tell them that your efforts will make me x% more or less profitable?

In his organization — which is really good at making decisions based upon risk — he maintained that every business decision had an acceptable loss figure assessed against it.  Sometimes those figures totaled in the billions.

He suggested then that things like firewalls, IPSs, AV, etc. had a near zero-sum impact when measured in cost against these acceptable losses.  Instead of the old axiom about not spending $100,000 to protect a $1,000 asset, he was actually arguing about not spending $100,000 to offset an acceptable loss of $1,000,000,000…

Interesting. 
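The only honest way to even frame a $100,000 control against a $1,000,000,000 acceptable loss is through its effect on annualized loss expectancy, so here’s that framing as a sketch; every number in it is invented:

```python
# The exec's comparison, reframed in expected-loss terms -- numbers invented.
acceptable_loss = 1_000_000_000   # loss the business is prepared to absorb
control_cost    = 100_000         # annual cost of the proposed control

# A control earns its keep only if it cuts annualized loss expectancy (ALE)
# by more than it costs: ALE_before - ALE_after > control_cost.
ale_before = 0.020 * acceptable_loss    # say: 2.0% annual loss likelihood
ale_after  = 0.019 * acceptable_loss    # control shaves 0.1% off that

print(ale_before - ale_after)                  # $1,000,000 of avoided loss
print(ale_before - ale_after > control_cost)   # True: the $100K pays off
```

Of course, defending the 2.0% and the 0.1% is precisely the part nobody at the bar could do, which is rather the point.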

I smiled as I tried to rationalize why, for the most part, nobody I knew could easily demonstrate the answer to his question.  Right, wrong or indifferent, I agreed that this was really a fundamentally crappy topic to bring up without something stronger than wine. 😉

It turned into quite an interesting conversation, during which I often found myself putting on various hats (architecture, security, operations, risk management) in an attempt to explain — but not justify — the status quo.

I demonstrated what I thought were some interesting counter-questions, but for the most part found it increasingly uncomfortable each time we ended up back at his initial question.  The more complex the answers became, the further they diverged from the concept he was focused on.

Imagine if you were the CSO and were being asked this question by your CIO/CFO/CEO as the basis for the on-going funding of your organization: "We can comfortably sustain losses in the hundreds of millions.  Why should I invest in security when you can’t demonstrate that you enable my business to achieve its business goals in a way which can make us more profitable or offset my acceptable losses?"

It’s why businesses exercise any option to swerve around the speedbumps that IT and Security are perceived to be.

Categories: General Rants & Raves Tags:

Security and Disruptive Innovation Part III: Examples of Disruptive Innovation/Technology in the Security Space

November 24th, 2007 2 comments

Continuing on from my last post, Security and Disruptive Innovation Part II: Examples of Disruptive Innovation/Technology in the Security Space, we’re going to finish the tour of security-specific examples, reflecting upon security practices, movements and methodologies, and upon how disruptors, market pressures and technology are impacting what we do and how we do it.

16.  Software as a Service (SaaS)
SaaS is a really interesting disruptive element to the traditional approach of deploying applications and services; so much so that in many cases the business can sidestep IT and Security altogether, spinning up a new offering without involving either group.

There’s no complex infrastructure to buy and install, no obstruction to the business process.  Point, click, deploy.  Reduced time to market, competitive advantage and low costs are very, very sexy concepts.

On the one hand, we have the agility, flexibility and innovation that SaaS brings, but we also need to recognize how SaaS intersects with the lifecycle management of applications.  The natural commoditization of software functionality yielded as a by-product of the "webification" of many older applications makes SaaS even more attractive, since it offers a more cost-effective alternative.  Take WebEx, Microsoft Live, Salesforce.com and Google Apps as examples.

There are a number of other interesting collision spaces that impact information security.  Beyond the issues surrounding the application of general security controls in a hosted model, since the application and data are hosted offsite, understanding how and where data is stored, backed up, consumed, re-used and secured throughout its lifecycle is very important.  As security practitioners, we lose quite a bit of operational visibility in the SaaS model.

Furthermore, one of the most important issues surrounding data security and SaaS is portability: can you take the data and transpose its use from one service to another?  Who owns it?  What format is it in?  If the wager on a SaaS provider does not pay off, what happens to the information?
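As a small thought experiment on the portability point, here’s a minimal sketch, assuming a hypothetical record shape rather than any real SaaS provider’s export API, of keeping your own copy of the data in neutral, self-describing formats so it can outlive the service:

```python
import csv
import json

# Hypothetical records as they might come back from a SaaS provider's export.
records = [
    {"id": 1, "customer": "Acme", "owner": "jdoe", "created": "2007-11-01"},
    {"id": 2, "customer": "Initech", "owner": "pgibbons", "created": "2007-11-12"},
]

# JSON preserves structure; CSV is the lowest common denominator for re-import.
with open("export.json", "w") as fh:
    json.dump(records, fh, indent=2)

with open("export.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
```

The format matters less than the habit: if you can’t periodically pull your information out in a form another service can ingest, you don’t really own it.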

SaaS is one of the elements in combination with virtualization and utility/grid computing that will have a profound impact on the way in which we secure our assets.  See the section on next generation centers of data and information centricity below.

17.  Virtualization
Virtualization is a game-changing technology enabler that provides economic, operational and resilience benefits to the business.  The innovation delivered by this disruptor is plainly visible.

Server virtualization today allows us to realize the first of many foundational building blocks of future operating system architectures and next-generation computing platforms, such as those promised by grid and utility computing models.

While many of the technology advancements related to these "sidelined futures" have been in the works for many years, most have failed to achieve mainstream adoption because, despite being technically feasible, they were not economically viable.  This is changing.  Grid and utility computing are starting to take hold thanks to low-cost compute stacks, high-speed I/O, and distributed processing/virtualization capabilities.

Virtualization is not constrained simply to the physical consolidation of server iron; it extends to all elements of the computing experience: desktops, data, networks, applications, storage, provisioning, deployment and security.

It’s very clear that, like most emerging technologies, we are in the position of playing catch-up with securing the utility that virtualization delivers.  We’re seeing wholesale shifts in the operationalization of IT resources, and it will continue to radically impact the way in which we think about how to secure the assets most important to us.

In many cases, those who were primarily responsible for the visibility and security of information across well-defined boundaries of trust, classification and distribution will find themselves in need of new methods, tools and skillsets when virtualization is adopted in their enterprise.

To generally argue whether virtualization provides "more" or "less" security as compared to non-virtualized environments is an interesting debate, but one that offers little in the way of relevant assistance to those faced with securing virtualized environments today.

Any emerging technology yields new attack surfaces, exposes vulnerabilities and provides new opportunities for managing risk when threats arise.  However, how "more" or "less" secure one is when implementing virtualization is just as subjective a measurement, dependent upon business impact; upon how one provisions, administers and deploys solutions; and upon how one ultimately applies security controls to the environment.

Realistically, if your security is not up to par in non-virtualized, physically-isolated infrastructure, you will be comforted by the lack of change when deploying virtualization; it will be just as good…

There are numerous resources available now discussing the "security" things we should think about when deploying virtualization.  You can find many on my blog here.

18.  De-/Re-Perimeterization
This topic is near and dear to my heart and inspires some very passionate discussion when raised amongst our community. 

Some of the reasons for heated commentary come from the poor marketing of the underlying message as well as the name of the concept. 

Whether you call it de-perimeterization, re-perimeterization or radical externalization, this concept argues that the way in which security is practiced today is outdated and outmoded, and that it requires a new model: one that banishes the notion that the inside and outside of our companies are in any way distinguishable today, rendering our existing solutions ineffective at defending them.

De-/Re-perimeterization does not mean that you should scrap your security program or controls in favor of a new-fangled dogma and set of technology.  It doesn’t mean that one should throw away the firewalls so abundantly prevalent at the "perimeter" borders of the network.

It does, however, suggest you should redefine the notion of the perimeter.  The perimeter, despite its many holes, is like a colander, filtering out the big chunks at the edge.  However, the problem doesn’t lie with an arbitrary line in the sand; it permeates the computing paradigm and access modalities we’ve adopted to provide access to our most important assets.

Trying to draw a "perimeter" box around an amorphous and dynamic abstraction of our intellectual property in any form is a losing proposition.

However, the perimeter isn’t disappearing.  In fact, I maintain that it’s multiplying, but the diameter is collapsing.

Every element in the network is becoming its own "micro-perimeter."  We have to think about how to manage and secure hundreds or thousands of these micro-perimeters by rethinking which problems we actually face today and how we focus on solving them, without being held hostage by vendors who constantly push the equivalent of vinyl siding while the foundations of our houses silently rot away, all in the name of "defense in depth."

"Defense in depth" has really become "defense in width."  As we deploy more and more security "solutions" all wishing to be in-line with one another and do not interoperate, intercommunicate or integrate, we’re not actually solving the problem, we’re treating the symptoms.

We really need endpoints that can survive on their own in assuredly hostile environments, using mutual authentication and encryption, carrying data that can self-describe the security controls needed to protect it.  This is the notion of information survivability versus information security.
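To make "self-describing data" slightly less abstract, here’s a minimal sketch under some loud assumptions: the envelope format is entirely hypothetical, the symmetric encryption leans on the third-party cryptography package, and key distribution is hand-waved away entirely:

```python
import json
from cryptography.fernet import Fernet  # third-party: pip install cryptography

key = Fernet.generate_key()  # in reality, key management is the hard part
fernet = Fernet(key)

# Hypothetical self-describing envelope: the handling policy travels with the
# data, so any endpoint that receives it knows what controls the data expects.
envelope = {
    "policy": {
        "classification": "confidential",
        "require_mutual_auth": True,   # release only over mutually
                                       # authenticated (client-cert TLS) channels
        "allowed_recipients": ["finance-group"],
    },
    "ciphertext": fernet.encrypt(b"Q3 acquisition target list").decode(),
}
wire_form = json.dumps(envelope)

# A receiving endpoint inspects the policy *before* attempting decryption.
received = json.loads(wire_form)
if received["policy"]["require_mutual_auth"]:
    pass  # ...verify the channel was mutually authenticated first...
plaintext = fernet.decrypt(received["ciphertext"].encode())
print(plaintext)
```

The point of the sketch is the shape, not the crypto: the data carries its own statement of the controls it requires, instead of inheriting whatever protection the network segment it lands on happens to offer.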

This is very much about driving progress through pressure on developers and vendors to produce more secure operating systems, applications and protocols.  It will require, in the long term, wholesale architectural changes to our infrastructure.

The reality is that these changes are arriving in the form of things like virtualization, SaaS, and even the adoption of consumer technologies as they force us to examine what, how and why we do what we do.

Progress is being made and will require continued effort to realize the benefits that are to come.

19.  Information Centricity
Building off the themes of SaaS and the de-/re-perimeterization concepts, the notion of what and how we protect our information really comes to light in the topic of information centricity.

You may have heard the term "data-centric" security, but I despise it because, quite frankly, most individuals and companies are overloaded; we’re data rich and information poor.

What we need to do is allow ourselves not to be overwhelmed by the sheer mountains of "data" but rather determine what "information" matters to us most and organize our efforts around protecting it in context.

Today we have networks which cannot provide context and hosts that cannot be trusted to report their status, so it’s no wonder we’re in a heap of trouble.

We need to look at the tenets described in the de-/re-perimeterization topics above and recognize the wisdom of the notion that "…access to data should be controlled by the security attributes of the data itself." 

If we think of controlling the flow or "routing" of information by putting in place classification systems that work (content in context…) we have a fighting chance of ensuring that the right data gets to only the right people at the right time.
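A minimal sketch of what that kind of label-driven flow control might look like; the lattice of labels, the clearance model and the context checks here are all hypothetical simplifications, not any particular product’s policy engine:

```python
# Hypothetical classification lattice, lowest to highest.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def may_flow(data_label: str, clearance: str, context: dict) -> bool:
    """Decide whether information may flow to a recipient based on the
    data's own label plus context, not on network location."""
    if LEVELS[clearance] < LEVELS[data_label]:
        return False  # recipient lacks sufficient clearance ("no read-up")
    if data_label != "public" and not context.get("channel_encrypted"):
        return False  # sensitive content requires a protected channel
    return context.get("business_purpose") is not None  # right reason, right time

print(may_flow("confidential", "restricted",
               {"channel_encrypted": True, "business_purpose": "audit"}))  # True
print(may_flow("confidential", "internal",
               {"channel_encrypted": True, "business_purpose": "audit"}))  # False
```

Notice that the decision function never asks which subnet the request came from; the attributes of the data itself, plus the context of the request, drive the outcome.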

Without blurring the discussion with the taglines of ERM/DRM, controlling information flow and becoming information centric rather than host or network centric is critically important, especially when you consider the fact that your data is not where you think it is…

20.  Next Generation Centers of Data
This concept is clear and concise.

Today the notion of a "data center" is a place where servers go to die.

A "center of data" on the other hand, is an abstraction that points to anywhere where data is created, processed, stored, secured and consumed.  That doesn’t mean a monolithic building with a keypad out front and a chunk of A/C and battery backup.

In short, thanks to innovation such as virtualization, grid/utility services, SaaS, de-/re-perimeterization and the consumerization of IT, can you honestly tell me that you know where your data is and why?  No.

The next-generation centers of data really become the steam that feeds the "data pumps" powering information flow.  Even as the compute stacks become physically consolidated, the processing and information flow become more distributed.

Processing architectures and operational realities are starting to provide radically different approaches to the traditional data center.  Take Sun’s Project Blackbox or Google’s distributed processing clusters, for example.  Combined with grid/utility computing models, instead of fixed resource affinity one looks at pooled sets of resources and distributed computing capacity that are not constrained by the physical brick-and-mortar walls of today.
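To illustrate the difference between fixed affinity and pooled capacity, here’s a toy sketch, with hypothetical node names and load figures, of work landing on whichever node in the pool has headroom rather than on a box it’s pinned to:

```python
import heapq

# Hypothetical pool: (current load, node). Work is not pinned to any one box.
pool = [(0.0, "node-a"), (0.0, "node-b"), (0.0, "node-c")]
heapq.heapify(pool)

def schedule(job: str, cost: float) -> str:
    """Place a job on the least-loaded node in the pool."""
    load, node = heapq.heappop(pool)
    heapq.heappush(pool, (load + cost, node))
    return node

for job, cost in [("web", 1.0), ("batch", 3.0), ("db", 2.0), ("etl", 1.5)]:
    print(f"{job} -> {schedule(job, cost)}")
```

The security implication follows directly: when the next job can land anywhere in the pool, controls can no longer assume they know where the data lives.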

If applications, information, processes, storage, backup, and presentation are all distributed across these pools of resources, how can the security of today provide what we need to ensure even the very basic constructs of confidentiality, integrity and availability?

Next we will explore how to take these and future examples of emerging disruptive innovation and map them to a framework which will allow you to begin embracing them rather than reacting to them after the fact.

Categories: Disruptive Innovation Tags: