In this Dark Reading post, Peter Tippett, described as the inventor of what is now Norton AntiVirus, suggests that the bulk of InfoSec practices are "…outmoded or outdated concepts that don’t apply to today’s computing environments."
As I read through this piece, I found myself flip-flopping between violent agreement and incredulous eye-rolling from one paragraph to the next, driven in part by the hyperbole in some of his analogies. That was disappointing, but overall I enjoyed the piece.
Let’s take a look at Peter’s comments:
For example, today’s security industry focuses way too much time on vulnerability research, testing, and patching, Tippett suggested. "Only 3 percent of the vulnerabilities that are discovered are ever exploited," he said. "Yet there is a huge amount of attention given to vulnerability disclosure, patch management, and so forth."
I’d agree that the "industry" certainly focuses its efforts on these activities, but that’s exactly the mission of the "industry" he helped create. We, as consumers of security kit, have perpetuated a security economy in which supply drives demand.
There’s a huge amount of attention paid to vulnerabilities, patching, and prevention that doesn’t prevent because, at this point, that’s all we’ve got. Until we start focusing on the root cause rather than the symptoms, this is a cycle we won’t break. See my post titled "Sacred Cows, Meatloaf, and Solving the Wrong Problems" for an example of what I mean.
Tippett compared vulnerability research with automobile safety research. "If I sat up in a window of a building, I might find that I could shoot an arrow through the sunroof of a Ford and kill the driver," he said. "It isn’t very likely, but it’s possible.

"If I disclose that vulnerability, shouldn’t the automaker put in some sort of arrow deflection device to patch the problem? And then other researchers may find similar vulnerabilities in other makes and models," Tippett continued. "And because it’s potentially fatal to the driver, I rate it as ‘critical.’ There’s a lot of attention and effort there, but it isn’t really helping auto safety very much."
What this really means, and what Peter never quite states, is that mitigating vulnerabilities in the absence of threat, impact, or probability is a bad thing. This is why I make such a fuss about managing risk instead of mitigating vulnerabilities. If there were millions of malicious archers firing arrows through the sunroofs of unsuspecting Ford Escort drivers, then the ‘critical’ rating would be relevant given the probability and impact of all those slings and arrows of thine enemies…
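To put numbers behind that fuss, here’s a minimal sketch of ranking findings by risk rather than raw severity; the findings, probabilities, and dollar figures below are entirely hypothetical illustrations, not data from Tippett or anyone else:

```python
# Toy numbers, mine not Tippett's: rank findings by expected annual loss
# (probability of exploitation x impact), not by raw severity rating.
findings = [
    # (finding, annual probability of exploitation, impact in dollars)
    ("arrow through the sunroof ('critical' severity)", 0.0001, 5_000_000),
    ("weak default password ('medium' severity)",       0.25,     200_000),
]

for name, probability, impact in findings:
    risk = probability * impact  # expected annual loss
    print(f"{name}: ${risk:,.0f}/year")

# The 'critical' finding carries ~$500/year of risk; the 'medium' one
# carries $50,000/year. Severity alone ranks them exactly backwards.
```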
Tippett also suggested that many security pros waste time trying to buy or invent defenses that are 100 percent secure. "If a product can be cracked, it’s sometimes thrown out and considered useless," he observed. "But automobile seatbelts only prevent fatalities about 50 percent of the time. Are they worthless? Security products don’t have to be perfect to be helpful in your defense."
I like his analogy and the point he’s trying to underscore. What I find in many cases is that the binary evaluation of security efficacy, in products and in programs, still exists. In the absence of measuring the effect that something actually has on one’s risk posture, people revert to a non-gradient scale: 0% or 100%, insecure or secure. Is being "secure" really important, or is managing to a level of risk that is acceptable, with or without losses, the really relevant measure of success?
This concept also applies to security processes, Tippett said. "There’s a notion out there that if I do certain processes flawlessly, such as vulnerability patching or updating my antivirus software, my organization will be more secure. But studies have shown that there isn’t necessarily a direct correlation between doing these processes well and the frequency or infrequency of security incidents.

"You can’t always improve the security of something by doing it better," Tippett said. "If we made seatbelts out of titanium instead of nylon, they’d be a lot stronger. But there’s no evidence to suggest that they’d really help improve passenger safety."
I would like to see these studies. I think that companies with rigorous, mature, and transparent processes that they execute "flawlessly" may not be more "secure" (a measurement I’d love to see quantified), but they are in a much better position to respond and recover when (not if) an event occurs. Given the established premise that we can’t be 100% "secure" in the first place, we know we’re going to have incidents.
Being able to recover from them or continue to operate while under duress is more realistic and important in my view. That’s the point of information survivability.
Security teams need to rethink the way they spend their time, focusing on efforts that could potentially pay higher security dividends, Tippett suggested. "For example, only 8 percent of companies have enabled their routers to do ‘default deny’ on inbound traffic," he said. "Even fewer do it on outbound traffic. That’s an example of a simple effort that could pay high dividends if more companies took the time to do it."
I agree. Focusing on efforts that eliminate entire classes of problems, and thereby reduce risk, is a more appropriate use of time, money, and resources.
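As a minimal illustration of why ‘default deny’ kills a whole class of problems, here’s a toy packet-filter model (my own sketch, not any vendor’s ACL syntax): anything you forgot to consider gets blocked rather than allowed.

```python
# Toy packet-filter model (my own sketch, not any vendor's ACL syntax).
# Under default deny, anything not explicitly allowed is blocked,
# including services nobody thought to write a rule about.
ALLOWED_INBOUND = {("tcp", 443), ("tcp", 25)}  # hypothetical policy: HTTPS and SMTP only

def permit_inbound(protocol: str, port: int) -> bool:
    """Default deny: traffic passes only if a rule explicitly allows it."""
    return (protocol, port) in ALLOWED_INBOUND

print(permit_inbound("tcp", 443))    # True:  explicitly allowed
print(permit_inbound("tcp", 3389))   # False: RDP was never considered, denied anyway
print(permit_inbound("udp", 31337))  # False: unknown service, denied by default
```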
Security awareness programs also offer a high rate of return, Tippett said. "Employee training sometimes gets a bad rap because it doesn’t alter the behavior of every employee who takes it," he said. "But if I can reduce the number of security incidents by 30 percent through a $10,000 security awareness program, doesn’t that make more sense than spending $1 million on an antivirus upgrade that only reduces incidents by 2 percent?"
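Nod. His back-of-the-envelope math is easy to check. A quick sketch, assuming a hypothetical baseline of 100 incidents a year (the costs and reduction percentages are his; the baseline is mine):

```python
# Checking Tippett's numbers: the costs and reduction percentages are from
# his quote; the baseline of 100 incidents/year is my hypothetical assumption.
baseline_incidents = 100

options = [
    ("$10k awareness program", 10_000,    0.30),
    ("$1M antivirus upgrade",  1_000_000, 0.02),
]

for name, cost, reduction in options:
    avoided = baseline_incidents * reduction
    print(f"{name}: {avoided:.0f} incidents avoided at ${cost / avoided:,.0f} each")

# The awareness program avoids 30 incidents at ~$333 each; the upgrade
# avoids 2 at $500,000 each, roughly a 1,500x difference in cost per
# incident avoided. The ratio holds whatever baseline you assume.
```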
That was the point of the portfolio evaluation process I gave in my disruptive innovation presentation:
24. Provide Transparency in portfolio effectiveness
I didn’t invent this graph, but it’s one of my favorite ways of visualizing my investment portfolio by measuring in three dimensions: business impact, security impact, and monetized investment. All of these definitions are subjective within your organization (as well as how you might measure them).
The Y-axis represents the "security impact" that the solution provides. The X-axis represents the "business impact" that the solution provides, while the size of the dot represents the capex/opex investment made in the solution.
Each of the dots represents a specific solution in the portfolio. If a solution shows up as a large dot toward the bottom-left of the graph, one has to question the case for continued investment, since it costs a lot while providing little in the way of perceived security and business value. On the flip side, if a solution is represented by a small dot in the upper-right, the bang for the buck is high, as is the impact it has on the organization.
The goal would be to move as many of the investments in your portfolio as possible from the bottom-left to the top-right, with the smallest dots possible.
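For anyone who wants to draw this for their own portfolio, a minimal matplotlib sketch follows; the solutions, scores, and dollar figures below are made-up placeholders for whatever subjective scale your organization settles on:

```python
# Minimal matplotlib sketch of the bubble chart described above. The
# solutions, impact scores, and spend figures are made-up placeholders.
import matplotlib.pyplot as plt

# (solution, business impact 0-10, security impact 0-10, capex/opex in $k)
portfolio = [
    ("Legacy IDS",        2, 3,  900),
    ("Default-deny ACLs", 7, 8,   50),
    ("Awareness program", 6, 7,   10),
    ("AV upgrade",        3, 4, 1000),
]

for name, biz, sec, spend in portfolio:
    plt.scatter(biz, sec, s=spend, alpha=0.5)  # dot size tracks investment
    plt.annotate(name, (biz, sec))

plt.xlabel("Business impact")
plt.ylabel("Security impact")
plt.title("Security investment portfolio")
plt.xlim(0, 10)
plt.ylim(0, 10)
plt.show()
```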
This transparency, and the process by which the portfolio is assessed, is delivered as an output of the strategic innovation framework, which is really part art and part science.
All in all, a good read from someone who helped create the monster and is now calling it ugly…
/Hoff