Quick Quip: Capability, Reliability and Liability…Security Licensing
Earlier today, I tweeted the following, which drew a comment from Dan Kaminsky (@dakami):
@beaker “In general, security engineers are neither.”
— Dan Kaminsky (@dakami) November 28, 2012
…which I explained with:
We practice and focus on physical infrastructure, network and application security as relevant discipline slices, but “information?”…
— [Christofer] Hoff (@Beaker) November 28, 2012
This led to a very interesting comment by Preston Wood who suggested something very interesting from the perspective of both leadership and accountability:
@beaker @dakami It’s time for a degreed or licensed requirement for security decision makers – just like other critical professions
— Preston Wood (@preston_wood) November 28, 2012
…and that brought forward another insightful comment:
@beaker @dakami would be nice if there were engineers out there that did accept liability. It'd end charlatanism quickly.
— ǝɔʎoſ ʇʇɐW (@openfly) November 28, 2012
Pretty interesting, right? Engineers, architects, medical practitioners, etc. all have degrees/licenses and absorb liability upon failure. What about security?
What do you think about this concept?
/Hoff
Improbable in the extreme. In order to accept liability, knowledge must be reduced to an agreed corpus of core content. Security is not bound by conventional knowledge – it’s a moving target.
Because of this, professional indemnity insurance cannot be accurately assessed by actuaries. Without accurate models for the risk of an infraction and the resultant legal costs and punitive fines, how could these be valued and the risk allocated from an insurance perspective?
After all, would you take on liability without an insurance policy?
@EtherealMind
Interesting that you structured your argument around and concluded with “insurance policy.” You do realize there is a huge (almost $B) market for “cybersecurity” insurance TODAY.
It’s not that large a leap from an organizational insurance policy to extending the named insured to the person(s) responsible (regardless of license/credential)…
/Hoff
I’m having trouble seeing how saddling the “security engineer” with liability for the outcome is going to help matters in this case. Structural engineers don’t have to worry that they’ll be held liable if the client decides to use the office building as an ammunition depot, or that the steel beams they specified will turn out to be made out of cheese after they’re installed. Information security practitioners have to protect inherently insecure systems and processes using buggy and inadequate tools while keeping out of the way of the business. A structural engineer would never be expected to sign off on a design where the CEO demanded that his office was 100% glass on all sides, but IT security people are asked to “secure” things at least that stupid every day.
There’s at least one such licensed scheme where licensees accept personal liability, but it’s niche (right now, anyway; UK Govt specific). Check out http://www.cesg.gov.uk/servicecatalogue/CLAS/Pages/CLAS.aspx .
While it doesn’t appear, at first glance, to have significant entry requirements in terms of skills, it becomes obvious pretty quickly that it’s not something to take on unless you really know what you’re doing.
@beaker The actuarial analysis for the _effect_ of cybersecurity insurance is reasonably simple. Measure the value of a business, set a baseline for total business loss for a breach. Follow through with some research on the numbers and you could build an actuarial model that would survive legal challenges.
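The baseline calculation described in that comment resembles the classic annualized-loss-expectancy (ALE) approach from quantitative risk analysis; a minimal, hypothetical sketch (all figures illustrative, not from the thread) might look like this:

```python
# Sketch of a simple expected-loss baseline, in the spirit of the
# comment above. Uses the standard SLE/ARO/ALE terms from quantitative
# risk analysis; all numbers are made up for illustration.

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """Expected loss from a single breach: asset value x fraction lost."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, annual_rate: float) -> float:
    """Expected yearly loss: per-incident loss x incidents per year."""
    return sle * annual_rate

# Illustrative only: a $10M business, 40% of its value exposed in a
# breach, and one breach expected every five years (rate = 0.2/year).
sle = single_loss_expectancy(10_000_000, 0.40)   # 4,000,000
ale = annualized_loss_expectancy(sle, 0.2)       # 800,000
print(f"SLE: ${sle:,.0f}  ALE: ${ale:,.0f}")
```

An insurer could, in principle, price a premium against the ALE figure; the hard part, as the thread goes on to note, is whether the breach-frequency and exposure inputs can be grounded in real historical data.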
For civil and medical professions, liability is determined through historical data showing the relative risk models. For IT security, I doubt that there is sufficient data to build a sound model.
Secondly, the accreditation is performed by universities and societies. I find it doubtful that we could create IT Security degrees that require 4 years of study and award degree level certifications. For an industry body accreditation, we have many, many of those in IT security already and none have any serious value or credibility.
It’s not impossible, but I can’t envision the political and business environment that would create options for execution.
@Etherealmind:
Again you make an interesting assertion:
“Secondly, the accreditation is performed by universities and societies. I find it doubtful that we could create IT Security degrees that require 4 years of study and award degree level certifications. ”
Since I knew of at least 10 of these off the top of my head, a quick Google query for “university degree information security” turned up more than 150 undergraduate and graduate degrees in “Cybersecurity,” “Information Assurance,” and “Information Security” from accredited higher-learning institutions…
You were saying?
One difference here is that in some of the examples you gave, the licensed professional has final say over a particular risk decision. A lot of the time in the information security field, the security practitioner doesn’t have this authority. Often, the security architect or engineer is ultimately just making a recommendation to business people, who hold the true authority over a purchase/strategy/etc. Responsibility without authority is a failing equation that is all too common in this field.
Although I made a quip about this on Twitter regarding a $130k CISSP, it would not require that everyone have some sort of BA/IS degree. Historically, and still in some jurisdictions today, people may sit for the bar after having completed some amount of apprenticeship work, so I do not see this as the biggest hurdle.
What I do see as the hurdle is the other side of the coin: disallowing those that are not “In” (as JackDaniel put it) from practicing the art. Plus, (as has been mentioned) empowering the practitioners to the extent that not everything on their wish list requires outside consultants to prove to the business that the requirement is sane and/or necessary.
Where’s our Galloping Gertie? Where’s our catalog of failures? Without a set of known failures and case studies from each, how can we decide when we’ve engaged in proper care?