The process of vulnerability disclosure has improved over the years, but too many security researchers still face threats when trying to report bugs.
Disclosure policies that give ethical hackers clear guidelines vary widely and are seldom universally followed, which adds to the friction between researchers and vendors.
This week, the U.S. government’s cybersecurity agency said it will require federal agencies to implement vulnerability-disclosure policies by next March – a landmark directive for agencies that have long lacked a formal mechanism for disclosing flaws. Also this week, WhatsApp debuted a new dedicated security advisory site aimed at informing its more than 2 billion users about bugs and keeping them updated on app security.
In this week’s Threatpost Podcast, ZDI head Brian Gorenc and communications manager Dustin Childs talk about why the IoT and industrial IoT markets are still a pain point for ethical hackers trying to disclose flaws, as well as what’s considered appropriate when it comes to public disclosure, and share their craziest Pwn2Own bug stories.
Below find a lightly edited transcript of the podcast.
Lindsey O’Donnell Welch: Welcome back to the Threatpost podcast. This is Lindsey O’Donnell Welch with Threatpost. And I’m joined today by Brian Gorenc, senior director of vulnerability research for Trend Micro and head of the Zero Day Initiative. And Dustin Childs, who is the Communications Manager for Trend Micro’s Zero Day Initiative. Brian and Dustin, thank you so much for joining me today.
Dustin Childs: You’re welcome. Thanks for having us.
Brian Gorenc: Yeah, thanks for having us. We’re excited to be here.
Lindsey: Great. So we’re going to talk about ZDI, the Zero Day Initiative, today; it’s been 15 years since ZDI was launched in 2005. And when it was first launched, the plan was to financially reward researchers who found previously unknown software flaws, aka zero days, and disclosed them responsibly. So tell us a little bit about the ZDI program, how it works and how it first got started all those years ago.
Brian: Well, yeah, with the Zero Day Initiative program, back in 2005, what we noticed is that there was a global community out there of independent researchers who were looking for zero-day vulnerabilities and really didn’t have an avenue through which to disclose that information to the vendors. And they also weren’t getting compensated for the work they were doing. We wanted to incorporate that type of research into what we were doing for the TippingPoint IPS. And so we decided we would encourage the reporting of zero-day vulnerabilities to affected vendors by financially rewarding researchers who submitted them to us here at the Zero Day Initiative. This would allow us to get an understanding of the zero-day vulnerabilities being discovered by that community, and it would allow us to amplify the effectiveness of our team by crowdsourcing vulnerability and exploit intelligence from that community. At the same time, once we’d financially compensated those researchers, we would actually work with the vendor to make sure they fixed the vulnerability that had been discovered, kind of educate them on the process of responsible disclosure, and protect customers from harm until the affected vendor had actually shipped a patch.
Lindsey: Right. Yeah, Brian, that’s really interesting. Now, back in 2005, it was a bit of a different vulnerability-disclosure landscape, and there was kind of a perception by some in the information security space that those finding the vulnerabilities were malicious hackers looking to do harm. What did the vulnerability disclosure landscape look like back then, at the time when this program was being launched?
Dustin: It was definitely a different landscape. Back then people were definitely looking for vulnerabilities, but the disclosure process wasn’t as mature as it is now. Even when they were finding stuff, it wasn’t clear what they were supposed to do with it. So there were a lot of misunderstandings between researchers, vendors, bug bounty programs and all of the various players – and really, the ones hurting the most were the customers who ended up getting impacted by this. Our program really helped establish a way to coordinate disclosure and kind of mature that process. But it was definitely very, very different, because there just weren’t rules set up, there weren’t established procedures, there wasn’t experience. So there were a lot of hurt feelings and a lot of misunderstandings from various people throughout the whole community.
Lindsey: Right. Yeah, over the years there have obviously been a couple of incidents where vulnerability disclosure goes wrong – researchers reporting vulnerabilities to companies and then being threatened with lawsuits, or worse. So can you talk a little bit about how the landscape has changed over the past 15 years since that time? Especially now that we’re seeing more bug-bounty service companies gain traction, more bug bounty programs pop up, and even companies creating their own vulnerability disclosure policies.
Dustin: Right. It really has shifted over the last 15 years. When we first started, it was controversial, and bug bounties themselves were controversial. The really bad analogy that people used a lot of the time was, “well, it’s like somebody comes by and paints your porch and knocks on your door and asks to be paid for work you didn’t ask them to do.” Of course, that’s a very poor analogy and not what bug bounties are at all – it’s really outsourcing and crowdsourcing a lot of security research. So over the years, it’s really normalized, and people have begun to understand that bug bounties are a valuable part of any sort of response process. Vendors have also come a long way on this – I mean, Microsoft came out and said, “we won’t sue researchers who report to us legitimately.” Not every vendor has picked up on that; we still occasionally have to deal with a lawyer. We had one company even this year that was a little unhappy to participate in our upcoming Pwn2Own. So that was interesting, working them through the process and letting them know that they don’t really get to choose not to participate. But again, it’s an experience, an understanding and a maturing of the entire process throughout the industry. It really has shifted, so that bug bounties, where they once were controversial, are now normalized – to the point where United Airlines has a bug bounty program, General Motors has a bug bounty program, Tesla has a bug bounty program. All of these companies that you wouldn’t normally associate with bug bounty programs have some formal vulnerability-disclosure process and are financially compensating researchers, or at least recognizing them in some way.
Brian: And I also think it’s interesting that the maturity of the industry varies based on the verticals we operate in. Enterprise software has a certain way of responding: Microsoft, Adobe, Apple – some of the big-name players in the industry – have a much more mature process. But if we look at, say, the ICS or SCADA world, vulnerability disclosure in that part of the market is not as mature as it is at the Microsofts and Adobes of the world. As a result, what we’re seeing is each industry maturing at its own rate, each one learning from the lessons of the other industries out there. And we’re hoping that, as a whole, the entire software community will be able to respond a lot faster in the future, based on some of the work that the bounty programs are doing now in the process of vulnerability disclosure.
Lindsey: Right, that’s a really good point – and I know, too, that ICS and SCADA on their end might have a few more complications in trying to patch, and what that means for downtime of their systems – but it’s a good point to call out that variation in how different industries and companies are responding. Based on the researchers coming to you with vulnerabilities and how companies are responding to those vulnerabilities, has ZDI seen any overarching changes over the past few years in the relationships between the security research and white-hat hacker community and the companies they’re finding and disclosing vulnerabilities in?
Brian: I think if you look at the companies adopting bug bounty programs and reaching out to the research community for guidance, you can see them trying to point researchers in the direction of the products they want research in. Take Microsoft, for example. They’ll have bug bounty programs for Hyper-V and some of the cloud technology they’re running as a vendor. We in ZDI consider ourselves vendor-agnostic, so what we’re looking for is vulnerabilities in enterprise software and things that are affecting our customers. Other companies – the bug-bounty-as-a-service companies – are looking for vulnerabilities in what they’re being paid to look for. So researchers can identify the areas and programs that best fit their skill set, try to maximize the amount of money they’re making from their research, and also help highlight issues in technology that the vendors themselves are very interested in finding.
Dustin: And to go along with that – a few years ago, we saw the enterprise companies really embrace the security-researcher community, and we’re starting to see that spread. At Pwn2Own Miami this year, we had the ICS world start to understand and make some of those connections with the security-researcher community as well. I think we have yet to see that in the IoT world, but that’s probably going to be the next area where they really start to embrace this idea of an outsourced security-research component that they can tap into.
Lindsey: I also wanted to talk about public disclosure and that whole process. I know ZDI has a policy where you allow the vendor 120 days to address the flaw with a security patch or other corrective measure. Can you talk a little bit about this process of public disclosure, why this amount of time versus more or less, and public disclosure in the security space in general?
Brian: Yeah, back when we first started releasing zero-day advisories for bugs that were not fixed, we were actually doing it at 180 days. At that time, there were a lot of vendors out there that weren’t addressing the issues we were reporting to them, and we got tired of them not at least admitting to the issue and releasing fixes. So we placed that timeline on them to kind of force a fix to happen. What we learned over the years is that vendors would actually respond to this type of activity, work against that timeline and start releasing patches right before the timeline was up. These kinds of timelines are actually very valuable for vendors: They give them a set window in which they can work, and they can schedule the fix within the software releases they’re doing. And over time, what we noticed is that vendors were getting better and better. So as we analyzed the data coming in from the ZDI program, we adjusted our timeline down to 120 days to keep the pressure on vendors and ensure that they would release a patch in a timely fashion. You see other bug bounty programs and research organizations taking different approaches: Some have 90-day timelines and some have no timelines. We subscribe to the 120-day timeline based on the data we have in our program and how the industries are responding to vulnerabilities that come from our program.
Dustin: And looking at disclosure as a whole, generically speaking, it’s our opinion that disclosure should drive action. Disclosure takes a lot of forms that sometimes you don’t even realize: Adobe and Microsoft disclose bugs every second Tuesday of the month, and the action there is to apply the security patch. So it’s very simple – disclosure driving action. For us, when we disclose something, we want to encourage an action. Maybe it is changing that timeline from 180 to 120 days. Or maybe it’s getting information out there that people need in order to change a configuration, apply a patch, or put pressure on a vendor to release a patch. We’ve had instances where we’ve had disagreements with vendors on severity and said, “okay, well, we’re going to disclose this,” and we go public with it. And then the vendor has a change of heart and releases a patch a week later. Clearly that’s a time when disclosure is driving action. But if you’re not driving action with your disclosure, you need to ask yourself what you’re doing with that disclosure – if you’re just making noise, we’re kind of against that. So that’s why we think disclosure should drive action. It’s why we have the 120-day disclosure timeline: to make sure that vendors don’t just ignore the reports, while still giving them a reasonable amount of time to actually produce a fix.
Lindsey: Yeah, Dustin, that’s a really good point. And I think it’s interesting, too, that there are these varying ideas of what appropriate bug disclosure is: Some researchers over the past year have released PoC exploit code for unpatched bugs, others have disclosed flaws without giving vendors enough time to respond, and some, as you mentioned before, go by a 90-day disclosure policy, or 120 days, or whatnot. Do you ever see there being more of a consensus within the security space about what is appropriate versus not appropriate?
Dustin: I don’t know that our community will come to an agreement about anything. I would like to think that we’ll continue to have multiple voices and continue to have the conversation; maybe one day the data will say that 90 days is the appropriate timeline, and others can back that up. I really hope we have multiple voices and continue to have the conversation, but have our decisions be driven by data – not by fear, or by looking to make a name for yourself, or whatever. So I don’t know that there ever will truly be a consensus, because there’s a lot of opinion, and a lot of data that is not shared between organizations. We’re not all looking at the same thing, or making decisions off the same pieces of information.
Lindsey: Right, right. That’s fair. Now, beyond bug disclosure, what are some of the top challenges for security researchers who are looking to disclose vulnerabilities?
Dustin: Well, the Wassenaar Arrangement has certainly changed some of the disclosure [policies] – especially if you’re in one of the countries participating in the Wassenaar Arrangement, you have some additional paperwork that you need to fill out, and we’ve learned to deal with that over the years. Also, just as a researcher right now, there’s so much out there. Do you work with a vendor-agnostic program? Do you go for a vendor-specific program? Do you work with bug bounty as a service? Do you participate in CTFs? There are a lot of options, and that’s not even getting into the gray market with bug brokers – and then there’s the question of “what happens to my research after I sell it to this broker who resells it to a government, etc.?” That’s one of the things we’re seeing right now: There’s just so much out there that researchers are having a hard time even knowing where to begin if they’re just getting into it. And then, how do they navigate this now almost saturated field?
Brian: Yeah, that’s one of the value props for a program like the Zero Day Initiative: We’re a single point of disclosure to multiple vendors. We see a lot of researchers who will work with us on disclosures to multiple vendors in different industries, and we’re able to act as a middleman, making sure that their research actually gets to the vendor and actually gets addressed, and that an advisory gets released with the proper acknowledgments and patches get shipped to the customers.
Dustin: Yeah, and going back to the emerging industries: For the established players – the Apples, the Microsofts, the Googles – you know where to send an email saying, “Hey, I found this bug and want to report it.” For a lot of places in ICS, SCADA and IoT, it’s just not very clear where to submit your research.
Brian: Again, this comes back to the maturity of those different verticals we were talking about earlier in the conversation. Hopefully we’ll see over time that the ICS vendors adopt the processes that the other vendors are using.
Lindsey: Shifting gears back to the Zero Day Initiative: Can you talk about some of the more interesting stories you’ve seen, or the most unique bugs that have been discovered through your program over the past 15 years?
Dustin: There have been a lot. Brian, why don’t you start with the IE mitigation bypass?
Brian: Yeah, for myself and a couple of the researchers on the team, we were working back when Microsoft was trying to address the use-after-free vulnerabilities in Internet Explorer. At that time, there was a lot of exploitation happening against those vulnerabilities; we were seeing a lot of zero-day vulnerabilities in Internet Explorer. What happened was that Microsoft implemented a set of mitigations to address use-after-free vulnerabilities – one was called Isolated Heap, and later there was MemGC – and the purpose was to start to make use-after-frees not exploitable anymore. They released these mitigations in the browser without telling anybody – they kind of released them silently on one of the Patch Tuesdays. And what we noticed inside the program was that, all of a sudden, all of the use-after-free vulnerabilities we had cataloged in our system were gone – all had basically been made useless by this mitigation. So we did a little bit of research into how they had actually implemented these mitigations, and we were able to develop a set of attacks against those newly implemented mitigations that allowed us to bypass ASLR and break some of the protections they had put into place. We quickly gathered this information together and submitted it to Microsoft’s bounty program, and that research earned us $125,000 in rewards from Microsoft – at the time one of the highest payouts they had done as part of their bounty program. And being who we are, we donated all of that money to organizations focused on STEM education.
And so that, for me personally, was one of my favorite bugs and pieces of research that we’ve done through the program, just because of the impact it had and who was able to benefit from it.
Dustin: Yeah, another bug that I go back to a lot was from Pwn2Own a couple of years ago; it was a VMware bug. To demonstrate it, they went into a virtual machine, opened a web browser and browsed to a web page – and that was the entirety of the user interaction. From that, the exploit escaped the browser, escalated through the kernel, escaped the guest, and executed code at system level on the underlying hypervisor. That was incredibly impressive to watch, and just really great research to see.
It’s hard for me to think of a single bug, just because we get so many, so I think of trends: the rise and fall of Java bugs after click-to-play was implemented; Flash bugs that were hugely popular and have tapered off over the years – and now Flash is going away at the end of this year; the UAFs, the use-after-frees that Brian mentioned, which have come and gone; and now we’re seeing the rise of deserialization bugs. But as far as individual ones, I look at the Pwn2Own contest, and those are some pretty unique bugs we see there. Even the demonstration last year with the Tesla – that was really neat. I go back to Pwn2Own because those are also the most realistic bugs that we see: fully working exploits that you really have to spend a lot of time on to make work. A lot of fun to watch.
Brian: Yeah, truly pieces of art. When we actually get to go and dig through the exploits at Pwn2Own, it is quite amazing how these researchers can not only find the zero days they’re leveraging in the contest, but then bypass all of the mitigations that have been implemented by the companies out there. It’s truly inspiring, and that’s why a lot of these get talked about on the conference scene or win Pwnie Awards at Black Hat and things like that.
Lindsey: Yeah, that’s very true. And I know, too, that for Pwn2Own you guys have really been diversifying your threat hunting through these different events. It’s really expanded beyond browsers and OS vulnerabilities to also look at mobile and IoT devices and industrial control systems. So it’s always exciting to watch the new areas that are being encompassed there.
Brian: Yeah, I like the latest spins on the Pwn2Own contest. Partnering with the S4 conference to bring Pwn2Own to the ICS community was, at least for this year, one of the more impactful events that we’ve run. When we first launched that contest, we were kind of unsure, because we had never really run a contest in this space before, and we didn’t know how the research community was going to react. What we found out is that when we run Pwn2Own in a new area, people are going to start looking in those areas. We had some very long days at the S4 conference going through all of the exploits – we were basically there the entire conference, running exploits and disclosing vulnerabilities to the different vendors, from multiple international teams. So it created quite a fun event. And I think what we’ve seen since that conference ran is a lot more disclosures in the ICS space. So it does have an impact on the greater ecosystem when programs like the Zero Day Initiative are involved in some of the more high-profile events happening out there, encouraging people to research in certain areas and uncovering some of the new weaknesses that exist in them.
Dustin: Yeah, that’s one of the great things about the contest, and about our program in general: We’re able to guide research into areas that we’re looking for. Virtualization wasn’t a category for Pwn2Own until 2016, and we really wanted to see some of those virtualization bugs; now it’s very common for us to get bugs in VMware and Oracle throughout the year. We’re seeing the same thing with ICS. We’re really trying to guide researchers into areas that interest the community at large.
Lindsey: Right, right. Well, that sounds like it will continue to be very innovative and exciting, both for the researchers and for the new threat areas it will expand into. So Brian and Dustin, thank you so much for coming on today to talk about ZDI and vulnerability disclosure and Pwn2Own.
Dustin: You’re welcome. Thank you for having us.
Brian: Yes, we appreciate the time. Thanks.
Lindsey: Thank you. And thanks again to our listeners for tuning in to today’s Threatpost podcast. If you have any questions or thoughts on vulnerability disclosure, feel free to tweet your comments to our Twitter page @Threatpost, and catch us next time on the Threatpost podcast.
Also, check out our podcast microsite, where we go beyond the headlines on the latest news.