Aug 09, 2017 - 5:49 p.m.

Downloading entire database in 5 minutes

Alexander Leonov

Today I once again would like to talk about Vulners and why, in my opinion, it is the best vulnerability database that exists nowadays and a real game-changer.

The main thing is transparency. Using Vulners you can not only search for security content (see “Vulners – Google for hacker”), but also freely download all available content from the database for your own offline analysis. More than that, you can even see how Vulners actually works and evaluate how fresh and complete the content is.

Vulners collections

Why might you need to download the full security content database? For example, you may want to create something like vulnerability quadrants.

Vulnerability Quadrant

For this I needed to get CVE objects with all related security objects of other types. You can see this information on any CVE page on Vulners, e.g. CVE-2017-0144.

Vulners CVE related objects

You can get it using search API: references in JSON

The problem is that you will need to do this for each CVE. That means you will need to make an enormous number of requests, which is not a very efficient process. Of course, it would be good if we had all these links to security objects in the CVE Vulners collection (see “Processing Vulners collections using Python”). But unfortunately this data is dynamic and currently is not available in Vulners archives/collections.
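To put “enormous” in perspective, here is a rough back-of-the-envelope sketch; the CVE count and per-request latency below are illustrative assumptions, not measured values:

```python
# Rough comparison: one API request per CVE vs. a single bulk archive download.
# Both numbers below are illustrative assumptions, not measurements.
cve_count = 100_000          # rough order of magnitude of CVE ids
seconds_per_request = 0.2    # optimistic latency for one sequential API call

total_hours = cve_count * seconds_per_request / 3600
print(round(total_hours, 1))  # roughly 5.6 hours of sequential requests
```

Even with optimistic assumptions, sequential per-CVE requests take hours, while the bulk collection download described below finishes in minutes.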

So, the other option is to download all available security object collections and process them with your own scripts.

First of all, we need to get the available security object types. You can do this with a simple GET request to the Vulners API:

{
  "result": "OK",
  "data": {
    "type_results": {
      "aix": {
        "lastUpdated": [],
        "bulletinFamily": "unix",
        "displayName": "IBM AIX",
        "lastrun": "2017-08-09T19:06:26",
        "count": 108,
        "workTime": "0:00:12.061442"
      },
      ...

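As a minimal sketch, the set of object type names (and per-type document counts) can be extracted from that response like this, using a trimmed sample of the JSON above:

```python
import json

# Trimmed sample of the stats response shown above
raw = '''
{
  "result": "OK",
  "data": {
    "type_results": {
      "aix": {
        "bulletinFamily": "unix",
        "displayName": "IBM AIX",
        "count": 108
      }
    }
  }
}
'''

stats = json.loads(raw)
type_results = stats['data']['type_results']

# The keys of type_results are the collection names used below for downloading
object_names = set(type_results)
for name in sorted(object_names):
    print(name, type_results[name]['count'])  # aix 108
```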
You can download all collections in multiple parallel threads using the Python code below. I created an eventlet.GreenPool() object and ran pool.imap(download, object_names) with a link to a worker function that downloads a Vulners collection, and the set of available object_names (cve, nessus, openvas, redhat, centos, aix, etc.). Parallel launching of the tasks is managed automatically.

# pip install eventlet

import requests
import json
import eventlet
import os

# Get the list of available security object types (the stats request shown above)
response = requests.get('')
objects = json.loads(response.text)

object_names = set()
for name in objects['data']['type_results']:
    object_names.add(name)

# Worker: download one collection archive and report its name and size
def download(name):
    response = requests.get('' + name)
    with open('vulners_collections/' + name + '.zip', 'wb') as f:
        f.write(response.content)
    return name + " - " + str(os.path.getsize('vulners_collections/' + name + '.zip'))

pool = eventlet.GreenPool()
for name in pool.imap(download, object_names):
    print(name)

For each collection, the script prints the collection name and the size of the downloaded zip file. The whole job finished in 5 minutes:

xen - 83878
d2 - 15998
typo3 - 156251
samba - 27866
pentestit - 21909
malwarebytes - 143380
archlinux - 327987
openbugbounty - 12500426
korelogic - 68225
w3af - 229716
gentoo - 899848
fireeye - 449874
pentestnepal - 10921
wired - 7242
debian - 3421840
carbonblack - 101328
metasploit - 4541274
zeroscience - 954521
hackread - 16768
appercut - 474561
thn - 5029563
freebsd - 1303269

The total download time will, of course, depend on your Internet connection speed and the current workload on Vulners.

Downloaded zip archives:

All vulners collections

The whole size:

# du -h vulners_collections/
374M    vulners_collections/
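If you prefer to stay in Python, the same total can be computed with a small stdlib helper; the directory name matches the script above, and the helper itself is just a sketch:

```python
import os

def total_size_mb(directory):
    # Sum the sizes of all regular files in a directory, in megabytes
    total = 0
    for entry in os.listdir(directory):
        path = os.path.join(directory, entry)
        if os.path.isfile(path):
            total += os.path.getsize(path)
    return total / (1024 * 1024)

# Example: print(round(total_size_mb('vulners_collections/')), 'M')
```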

To learn how to work with these collection files, read “Processing Vulners collections using Python”.