Custom-built application for asynchronous forensic data presentation on an Elasticsearch backend. This application is designed to ingest Mandiant Redline “collections” files and provide flexible searching, stacking, and tagging. The application was born out of the inability to manage multiple investigations (or hundreds of endpoints) in a single pane of glass.
To ingest Redline audits, you can use nightHawk.GO, a fully fledged Go application designed to accompany this framework. The source code for the application is available in this repo, and a compiled binary is running inside the ISO, ready to ingest from first boot.
To make things straightforward for users of nightHawk, we built an ISO with everything set up and ready to go. That means you get the following:
Configuration is stored in the /opt/nighthawk/etc/nightHawk.json file. Before building your VM with the supplied ISO, take the following into consideration:
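If you need to inspect or script against that configuration, a minimal sketch follows. Only the path comes from above; the file's keys are not documented here, so none are assumed:

```python
import json

def load_config(path="/opt/nighthawk/etc/nightHawk.json"):
    """Read the nightHawk JSON configuration file and return it as a dict."""
    with open(path) as fh:
        return json.load(fh)
```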
_Pending_: Set up the Elastic service as dual nodes, each with a quarter of the allocated system memory. This means if you give the VM 2 GB of RAM, each ES node gets 512 MB and 1 GB remains for the system to operate.
_If you want to set this up differently, SSH into the box and configure it as desired._
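The memory split described above (a quarter of RAM per node, half left for the system) can be sketched as follows; the function name is illustrative, not part of nightHawk:

```python
def es_node_heap_mb(total_ram_mb, nodes=2, fraction=0.25):
    """Per-node Elasticsearch heap: each node gets a quarter of system RAM,
    so with two nodes half of the RAM remains for the OS and application."""
    return int(total_ram_mb * fraction)
```

With 2048 MB allocated, each node gets 512 MB, leaving 1024 MB for the system.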
A minimum of 20 GB should be considered. Audit files can be large, so it is advised you allocate ample storage to handle ingesting many collections.
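As a back-of-the-envelope check only — the average audit size and the indexing-overhead factor below are illustrative assumptions, not measured figures:

```python
def recommended_storage_gb(collections, avg_audit_gb=1.0, overhead=1.5):
    """Rough disk estimate: raw audit size times an indexing-overhead factor,
    never below the 20 GB minimum. Both default values are guesses."""
    return max(20.0, collections * avg_audit_gb * overhead)
```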
- elasticsearch-dsl.py
- django 1.8
- python requests
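A quick way to verify those dependencies are importable before first run — note the tuple holds import names, which can differ from the package names above:

```python
import importlib.util

def missing_dependencies(modules=("elasticsearch_dsl", "django", "requests")):
    """Return the listed module names that cannot be imported in this environment."""
    return [m for m in modules if importlib.util.find_spec(m) is None]
```

`missing_dependencies()` returns an empty list when everything is installed.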
Download ISO: nightHawk v1.0
Configure the hardware, mount the ISO into the VM, and start the installation script.
Once complete, go to https://192.168.42.173 in your browser (Chrome/Firefox).
If you need to access Kibana, go to https://192.168.42.173:8443.
If you need to SSH into the box, the login details are admin/nightHawk.
If you want to change the IP address (reflected application-wide), run: /opt/nighthawk/bin/nighthawkctl set-ip <new_ipaddress>
The Redline Audit Collection Script can be found in the root of this repo.