Partial disclosure: Was it a cat I saw?

March 23, 2009
Katie Moussouris
threatpost.com

Quite often in our industry, two (or five) people can look at the same problem from different angles, and see radically different things. Rare is the situation that reads the same to everyone, forwards and backwards. It’s all about perspective.

In my appearance on the ‘Partial Disclosure Dilemma’ panel at SOURCE Boston this year, I found myself surrounded by great minds who most certainly do not think alike. While there was some agreement and common ground among all parties on the dais, namely wanting to make the Internet safer and to protect people, there was little agreement on the best way to accomplish that goal.

The conversation between us friends and colleagues, both on stage and in the audience, wended its way down many tangential paths, most of which I will have to watch again on the video to fully understand how we got from Partial Disclosure to Dan Kaminsky saying “More people have died from windows crashing on them than from Windows crashing.” But I promised my redux of the panel, so I will guide you down the path I think was most interesting.

**Partial disclosure, complete disagreement**

The disclosure issues around Dan Kaminsky’s DNS vulnerability were one seed of the panel idea. If you are reading this blog, then I will assume that you’ve heard of this vulnerability, else you must have been living under an Amish rock in a Luddite colony, high in the brisk, thin air of the Himalayas.

As for the disclosure route he chose and how it played out: he executed the plan he thought best to get vendors to fix a serious issue (they did) and to get as many affected customers as possible protected by the fix (some were) before broadly releasing the technical details. He let a small number of people know those details, in the hope that delivering them to the right people and no one else would best protect the world’s critical infrastructure. Hence the term ‘partial disclosure’ was used to describe his approach. Some notable researchers dismissed it as hype, then retracted once they had spoken to Dan; others pretty much figured it out on their own or chatted about it on DailyDave. The vulnerability was weaponized shortly thereafter, and a couple of weeks after his initial announcement, some affected people had applied the update and some unfortunately hadn’t. There are certainly more details I’m skipping here, but that’s the skinny.

Now that the panel stage was set, here’s one of the topics I thought was interesting.

In our introductions, we each counted ourselves among the security research community. Some of us had also been or still were consultants, all of us had done the startup thing, and some of us had been in charge of running some kind of computing infrastructure.

At the risk of sounding immodest, I believe I brought a unique perspective to the topic of responsible disclosure, as I was the only panel member who had been, at various stages in my career, a vulnerability Finder, a Coordinator, and a Vendor (of both open and closed source software).

Let my official punditry from this pugnacious pulpit begin. 😉

It was interesting to me that the panelist who most strongly endorsed “inflicting pain” in the form of exploit release in order to provide the necessary “wake up call” to vendors had never been responsible for maintaining any kind of infrastructure deployment. We all know how much easier it is to break something than it is to build it or to fix it, yet there is a pervasive attitude among many security researchers that nothing should be more important than security, not even the business itself.

And that’s where our disagreement took a stronger hold on its rocky purchase. Define pain, our moderator asked. Who decides what form the pain will take and how intense or widespread it will be, I asked.

Sure, it took some pain to get the attention of software vendors, to get them to fix their products and build security in from the ground up. But as security folks, aren’t we tired of having to invoke active, widespread exploitation to prove that something needs to be done? Security people often complain that not enough has changed since the epoch began, but if that’s true, why have we not stopped beating our heads against the supposed brick wall of vendors and deployers of technology and instead tried something different to get the right eyes on the right issues at the right time to do the right thing? Doesn’t repeating the same behavior over and over while expecting different results equal insanity? When are we willing to stop the madness?

At the end of the panel we were each asked to describe our security utopia. My Shangri-La was this: I would like to see more cross-over among those of us who say the sky is falling and those of us upon whom the sky will fall.

Communication between two groups with different mindsets requires a lingua franca other than exploitation. One might think that math is the language of the universe, and Proof of Concept serves as the mathematical proof needed for anyone and everyone to arrive at the logical conclusion of “drop everything NOW and create (if you’re a vendor) or apply (if you’re managing infrastructure) the update.” Before I had ever been responsible for building anything or protecting anything, I might have agreed with that, since it made perfect logical sense to me at the time, in the context within which I worked.

But it’s not doing the trick of convincing all vendors and all deployers, not by a long shot, so obviously something needs to change. PoC can and should be part of the conversation between responsible researchers and the people to whom they report issues, but it must be framed appropriately for the listener. Non-security types don’t immediately frame a PoC the way we security people do. And even when they do grok the severity of the situation, they may not be able to move as quickly as a researcher feels they should.

Consider this, researcher-types: if you’ve never managed infrastructure, or been responsible for shipping and maintaining complex and widely deployed code, then you don’t have the context to understand why there are sometimes legitimate reasons to do things more carefully and therefore more slowly. Once the talk recordings are posted, check out the very thorough treatise by our own MSRC on How Microsoft Fixes Security Vulnerabilities: Everything you wanted to know about the MSRC Security Update Engineering Process. Think about how you, as a researcher and security expert, would react if some CTO or IT person or developer who lacks your depth of security knowledge and subject matter expertise came and told you what to hack, how to hack it, and at what pace to hack it. That’s essentially what you’re doing when you say, “You should be able to fix it, and fix it now, and if you don’t do it on my timeline, then you obviously need to be made an example of, so I’m going to release an exploit for it into the wild.”

They don’t swim in your security research toilet :-), so why must you pee in their development or infrastructure pool?

Okay, I couldn’t resist making the joke – and no, I don’t think security research is a cesspool, or I wouldn’t have founded two vulnerability research programs in my career. What I am saying is that all of us should be striving for the delicious harmony of combining your chocolate with my peanut butter, your gin and my tonic, your milk and my shake, in order to make the whole greater than the sum of its parts. As a researcher, one can choose to be the sabot and grind gears to a halt to prove a point, or one can be the grease that moves things along with less friction, earning the trust that will allow each subsequent notification pill to be swallowed more easily. As a developer or deployer, one can choose to stuff up one’s ears until someone firmly inserts an icepick, or one can strive to fix things as quickly and safely as possible and learn from the experience to continually improve and speed up that process over time.

We need a better way to reach our common ground of protecting the computing environment on which we all rely. Researchers need a means of communicating urgency that neither resorts to hyperbole nor causes damage, both of which erode trust. Developers and deployers need a better way to service existing code and infrastructure reliably, safely, and, when necessary, rapidly, to build trust among the researcher and customer communities that they are doing the best they can at any given time. Around here, we’ve done serious work on making this a reality on the development front, with the dual ninjas of the SDL (proactive) and the MSRC (reactive). I’d like to see the SDL someday brought up to a full double-D in the form of a Secure Development and Deployment Lifecycle, to build infrastructure design and servicing models that are resilient in the face of threats to deployments as well as to software. Perhaps I can begin to work on this here at Microsoft, if I can get some of my other work done first. 🙂

After we had each said our piece on what our security utopia looked like, that’s where we left things. No agreement could be reached in the two hours or so we were on stage, which is no surprise. If the tape ran out before the end, then you won’t get to see us literally “hug it out” after all was said and done, disagreements notwithstanding. I continue to have tremendous respect for and camaraderie with my fellow panelists and with researchers around the world. It is my hope that the determination and vision of those on any side of the equation who can see across the role boundaries of researcher, vendor, and deployer will usher us into a new age.

People often ask what more is there to say about disclosure that hasn’t already all been said. I think the real conversation on how to get the results we all desire – to get things fixed in spite of our disagreement – has yet to truly begin.

I’m listening, as well as talking. Are you?

*Katie Moussouris is a senior security program manager on Microsoft’s Secure Development Lifecycle (SDL) team.*

