This is the second part of our talk with Daniil Svetlov at his radio show “Safe Environment”, recorded 29.03.2017. In this part we talk about vulnerabilities in Linux and proprietary software, the problems of patch and vulnerability management, and mention some related compliance requirements.
Video with manually transcribed Russian/English subtitles:
Previous part “Programmers are also people who also make mistakes”.
Talking about the fact that if you use fully updated software and do not use self-written scripts or programs, then in theory everything should be safe.
But recently there were statistics showing that critical vulnerabilities stay in the Linux kernel for about 7 years: from the moment they appear as a result of a programmer’s error until the moment they are found by a white hat researcher.
**But it is not clear whether, during these seven years, cybercriminals found and used them, and how many systems were compromised through these vulnerabilities. Not to mention that some government special services may use them too.**
> For example: The latest Linux kernel flaw (CVE-2017-2636), which existed in the Linux kernel for the past seven years, allows a local unprivileged user to gain root privileges on affected systems or cause a denial of service (system crash). The Hacker News
Well, yes. There is such a statistic. There is also some criticism from proprietary software developers. Like you say, “many eyes looking at the code will find any error.” This is a quote from Linus Torvalds, if I’m not mistaken.
> Not exactly. Linus’s Law is a claim about software development, named in honor of Linus Torvalds and formulated by Eric S. Raymond in his essay and book The Cathedral and the Bazaar (1999).[1][2] The law states that “given enough eyeballs, all bugs are shallow”; or more formally: “Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.” Wikipedia
But in practice, yes, there really are old vulnerabilities that come up after many, many years, because apparently nobody was looking for these vulnerabilities hard enough. But we still don’t have anything else except the Linux kernel. Therefore, whatever they say, people will use it anyway. That is the first point.
And secondly, vulnerabilities sometimes appear in Microsoft software too, for example in Windows. It is quite possible that someday researchers will find a vulnerability in some calculator application dating back to the times of Windows XP. All this is normal.
The fact that some vulnerabilities were disclosed is not such a big problem. The big problems appear when vulnerabilities are not patched promptly on particular systems in a particular infrastructure. Why do they get updated slowly?
The reasons can actually be quite different. Why, for example, not update all applications at once?
If we have Linux servers, let’s update them all. Great. But on these Linux servers we run our own applications. Who will guarantee that these applications will not simply break when we update some open-source components they use? They can suddenly stop working and we will need to figure out why. It turns out that before updating any component, you need to go through a complete testing process. This is also expensive.
Plus it probably also slows you down.
Yes, it slows things down too, so there has to be a compromise.
If you scanned your network, detected some vulnerabilities and brought them to the IT administrator saying: “Let’s update!”, the natural questions will be: “Why? How critical are these vulnerabilities? Are they really exploitable in our infrastructure?” And in all companies the software will be updated only when it is really necessary.
Or take Windows workstations. You can update them, but you need to restart the computer, and users really do not like to reboot, because they have some scripts running there.
Yes, or they just opened a document.
Yes, the document is open, they are working with it, and then a window pops up: “restart, you have a critical update.” This is annoying, it interferes with their work, and that delays the whole updating process.
Well, going back a week: we had Sergey Soldatov here in the studio, we discussed the problem of so-called targeted attacks, APT in particular, and at the end we discussed the recommendations of the Australian Department of Defence.
You can read them at <https://www.asd.gov.au/infosec/mitigationstrategies.htm>
They adore articles like “the first 15 measures to increase information security”. And the four main things that you need to do in your infrastructure, in their opinion, if you want to protect yourself from APT, are:

1. application whitelisting;
2. restricting administrative privileges;
3. patching operating systems;
4. patching applications.
Sergey doubted that the first two items are still relevant now, because all the attacks are done not with malware, but with PowerShell, cmd and other common software.
And on the second point, he also said that a lot can be done with ordinary user permissions. If we talk about Trojans and cryptolockers, they do not really need any admin rights: they will encrypt exactly what is available to the user. And yet, the remaining two items, updating operating systems and updating software, are important. But, as I understand it, doing this at the scale of a big organization, when you have thousands of computers, is in principle very difficult.
Yes, indeed. I can agree that the Australian Defence Ministry is following the trend. Basically, the same recommendations can be found in the CIS Critical Security Controls and many, many other standards. Even in PCI DSS.
PCI DSS requires all critical updates to be installed within a month.
> Requirement 6: Develop and maintain secure systems and applications
> …
> 6.2 Protect all system components and software from known vulnerabilities by installing applicable vendor-supplied security patches. Install critical security patches within one month of release.
<https://www.pcisecuritystandards.org/documents/PCIDSS_QRGv3_1.pdf>
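Just for illustration, here is a minimal sketch of how this one-month window could be tracked; the patch names, dates and data structure are hypothetical placeholders, not output of any real patch management tool.

```python
#!/usr/bin/env python3
"""Sketch: flag critical patches that are outside the one-month window
allowed by PCI DSS requirement 6.2. The patch records are hypothetical."""
from datetime import date

patches = [
    {"name": "openssl-security-update", "released": date(2017, 2, 10), "installed": False},
    {"name": "monthly-os-rollup",       "released": date(2017, 3, 14), "installed": True},
]

today = date(2017, 3, 29)  # date of this talk, fixed for a reproducible example

for patch in patches:
    age_days = (today - patch["released"]).days
    if not patch["installed"] and age_days > 30:
        print(f'{patch["name"]}: released {age_days} days ago and still not installed '
              f'-- outside the one-month window')
```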
Yes, there are requirements both about updating and about scanning for vulnerabilities with certified solutions, and about scanning for vulnerabilities with your own scanner: not only on the perimeter, but also inside your network. All this is in PCI DSS. That is, all modern standards really recommend this in one form or another.
Is it difficult or simple? Of course it is difficult. Here is the problem of scale. Let’s say we have an infrastructure with two servers, running Linux for example, and 20 workstations running Windows. Basically, we can manually monitor whether they are updated and whether there are vulnerabilities there, or write our own scripts that will do it (a minimal sketch of such a script is shown below).
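For example, a minimal sketch of such a self-written check might look like this, assuming Debian/Ubuntu servers reachable over SSH with key authentication (the host names are placeholders):

```python
#!/usr/bin/env python3
"""Minimal sketch: list pending security updates on a few Linux hosts over SSH.
Assumes Debian/Ubuntu hosts and passwordless SSH key authentication;
the host names below are placeholders."""
import subprocess

HOSTS = ["server1.example.com", "server2.example.com"]

def pending_security_updates(host):
    # 'apt list --upgradable' prints one line per package that can be upgraded
    result = subprocess.run(
        ["ssh", host, "apt list --upgradable 2>/dev/null"],
        capture_output=True, text=True, timeout=60,
    )
    lines = result.stdout.splitlines()
    # keep only entries coming from the security repositories
    return [line for line in lines if "-security" in line]

if __name__ == "__main__":
    for host in HOSTS:
        updates = pending_security_updates(host)
        print(f"{host}: {len(updates)} pending security updates")
        for line in updates:
            print("  " + line)
```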
Another situation is when you have thousands or tens of thousands of servers, not only with Windows and Linux, but also with proprietary Unix systems, network devices, etc. Certified network devices from local Russian vendors, unknown to the rest of the world, may also be in use. All this greatly complicates the whole Vulnerability Management process.
In fact, for each host on the network it is necessary to detect which software or firmware versions it uses and view the list of its vulnerabilities. To do this, you need to look through the security bulletins of every vendor, and then understand which of the vulnerabilities are really critical in order to prioritize the update recommendations (see the sketch below).
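A rough sketch of this matching and prioritization step might look like the following; the inventory and advisory records are hypothetical and would in practice be collected from the hosts themselves and parsed from each vendor’s security bulletins or a vulnerability data feed.

```python
#!/usr/bin/env python3
"""Sketch: match a host's package inventory against vendor advisories and
prioritize findings by CVSS score. The data below is hypothetical."""

# hypothetical inventory collected from a host (e.g. via 'rpm -qa' or 'dpkg -l')
inventory = {"openssl": "1.0.1e", "bash": "4.2.45", "httpd": "2.4.6"}

# hypothetical advisory records: package, fixed-in version, CVE, CVSS score
advisories = [
    {"package": "openssl", "fixed_in": "1.0.1g", "cve": "CVE-2014-0160", "cvss": 7.5},
    {"package": "bash",    "fixed_in": "4.2.48", "cve": "CVE-2014-6271", "cvss": 9.8},
]

def version_tuple(version):
    # naive comparison for illustration; real package managers have their own rules
    return tuple(version.replace("-", ".").split("."))

findings = [
    adv for adv in advisories
    if adv["package"] in inventory
    and version_tuple(inventory[adv["package"]]) < version_tuple(adv["fixed_in"])
]

# most critical first, so the patching recommendations are prioritized
for adv in sorted(findings, key=lambda a: a["cvss"], reverse=True):
    print(f'{adv["package"]} {inventory[adv["package"]]} -> update to {adv["fixed_in"]} '
          f'({adv["cve"]}, CVSS {adv["cvss"]})')
```

Real package managers have their own version comparison rules (rpm epoch/release, dpkg ordering), so a production script would rely on those instead of the naive comparison above.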
If it is a large organization, it is likely that IT administrators will do the updating. If the organization is small, then usually the IT administrator is the security guy at the same time: he is in charge of everything, and he will have to update the infrastructure too.