Custom-built application for asynchronous forensic data presentation on an Elasticsearch backend.
This application is designed to ingest a Mandiant Redline "collections" file and provide flexibility in searching, stacking, and tagging.
The application was born out of the inability to manage multiple investigations (or hundreds of endpoints) in a single pane of glass.
To ingest Redline audits, we created nightHawk.GO, a fully fledged Go application designed to accompany this framework. The source code to the application is available in this repo; a binary has been compiled and is running inside the ISO, ready to ingest from first boot.
16/07/16 : Version 1.0.2
Bug fixes (tokenization and mapping updates)
Global Search error handling, keyword highlighting
Stacking on URL Domain and DNS, fixed stacking registry
Reindex data utility added (see wiki article for usage)
Upgrade feature added; you can now update the source code from yum without downloading a new ISO (see wiki article for usage)
Rotate /media folder to remove collections older than 1 day (or when the folder size exceeds 2GB)
Added w32system (system info)
Removed static mapping in postController for hostname
Fixed issue with building audit aggs where default_field was not being passed to ES.
Video Demonstration: nightHawk Response Platform
Single view endpoint forensics (multiple audit types).
Interactive process tree view.
Multiple file upload & Named investigations.
To make it straightforward for users of nightHawk, we built an ISO with everything set up and ready to go. That means you get the following;
Latest nightHawk source.
CentOS 7 Minimal with core libs needed to operate nightHawk.
Nginx and uWSGI set up as a reverse proxy (socketed and optimized), SSL enabled.
Latest Elasticsearch/Kibana (Kibana is exposed and useable if desired).
Systemctl (systemd) units for all core services.
Logging (rotated) for all core services.
Configurable system settings; a list of these can be found in the wiki.
Starting the system:
Before building your VM with the supplied ISO, take into consideration the following;
_If you want to set this up differently, SSH into the box and configure it as desired._
Download ISO: nightHawk v1.0.2
Configure the hardware, mount the ISO into the VM, and start the installation script.
Once complete, in your browser (Chrome/Firefox), go to;
If you need to access Kibana, go to;
If you need to SSH into the box, the login details are;
If you want to change the IP address (reflected application wide);
/opt/nighthawk/bin/nighthawkctl set-ip <new_ipaddress>
The Redline Audit Collection Script can be found in the root of this repo. Use this with the standalone Redline collector, as it returns the documents needed to populate nightHawk correctly.
IMPORTANT: Creating an audit zip file to upload (Redline standalone collector):
step_1: Navigate to Sessions\AnalysisSessionX\Audits\<ComputerName>, where X is the analysis number (1 in most cases).
step_2: Create a zip of the folder containing the audit files, e.g. 20160708085733
step_3: Upload 20160708085733.zip
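The three steps above can be sketched in a few lines of Python. This is a convenience helper of our own, not part of nightHawk, and the timestamped folder name is just the example from step_2:

```python
import shutil
from pathlib import Path

def package_audit(audit_dir: str) -> str:
    """Zip a Redline audit folder (e.g. Sessions/AnalysisSession1/Audits/
    <ComputerName>/20160708085733) into <folder_name>.zip for upload."""
    audit_path = Path(audit_dir)
    # shutil.make_archive appends ".zip"; name the archive after the audit
    # folder so the upload matches the folder's timestamp name
    return shutil.make_archive(str(audit_path), "zip", root_dir=str(audit_path))
```

Calling `package_audit(".../Audits/WORKSTATION01/20160708085733")` produces `20160708085733.zip`, which is what you upload in step_3.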
IMPORTANT: Using an existing HX audit file (HX collector): FireEye HX audits end in the .mans extension. An HX audit differs from a Redline collection in that the .mans file it returns is actually a zip file. This means it can be uploaded directly, unlike a Redline audit, which requires the instructions above.
Navigate to the "Upload" icon on the nav bar, select an audit .zip (or multiple), provide a case name (otherwise the system will supply one for you), and submit. If you have used our Redline audit script to build your collection, follow the "Redline Collector" instructions just above.
Once processed, the endpoint will appear in the "Current Investigations" tree node. Under the endpoint you will be presented with all audit types available for that endpoint. The upload feature of this web app spawns Popen subprocesses that call the Go application to parse the Redline audit and push data into Elasticsearch. There are two options for uploading: one is sequential, the other concurrent.
_Please note: concurrent uploads are limited to 5 at a time and can be resource intensive; if you have an underpowered machine, restrict this feature to 2-3._
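As an illustration of the upload pipeline (not the app's actual code), spawning one parser subprocess per audit with a bounded worker pool looks roughly like this. The `PARSER` command here is a stand-in that just echoes its argument; the real app invokes the Go binary instead:

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the Go parser binary: a subprocess that prints its first
# argument. The real command line used by the web app differs.
PARSER = [sys.executable, "-c", "import sys; print(sys.argv[1])"]

def parse_audit(zip_path: str) -> str:
    """Spawn one parser subprocess per audit zip (the upload view does this
    with Popen) and capture its output."""
    result = subprocess.run(PARSER + [zip_path], capture_output=True,
                            text=True, check=True)
    return result.stdout.strip()

def parse_concurrent(zips, workers=5):
    """Concurrent mode: at most `workers` parsers run at once, mirroring
    the app's cap of 5 simultaneous uploads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(parse_audit, zips))
```

Sequential mode is simply calling `parse_audit` in a loop; dropping `workers` to 2-3 matches the advice above for underpowered machines.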
You can click on any row in any table (in response view) to tag that data. Once tagged you can view the comments in the comments view.
There are custom mappings (supplied in the git root) and advisory comments on the following;
Documents are indexed via the Go app in a parent/child relation. This was chosen because it gives a relatively logical path to view documents, i.e. the parent is the endpoint name and the children are audit types. Performing aggregations on parent/child relational documents at scale also makes sense. The stacking framework relies on building parents into an array and then getting all child document aggregations for certain audit types.
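A minimal sketch of what such a parent/child layout looks like, using the Elasticsearch 2.x-era `_parent` mapping that was current for this release. The index, type, and field names below are illustrative, not nightHawk's actual schema:

```python
# Illustrative parent/child layout (Elasticsearch 2.x `_parent` mapping).
# Index/type/field names are examples, not nightHawk's real schema.

def audit_mappings():
    """Mapping with an endpoint parent type and one child type per audit."""
    return {
        "mappings": {
            "hostname": {},  # parent type: one document per endpoint
            "w32registry": {
                # children are routed to (and stored with) their parent
                "_parent": {"type": "hostname"},
                "properties": {
                    "KeyPath": {"type": "string", "index": "not_analyzed"},
                },
            },
        }
    }

def child_doc(endpoint, audit):
    """Bulk-index action for one audit record; a child must carry its
    parent id so it lands on the parent's shard."""
    return {"_index": "investigations", "_type": "w32registry",
            "_parent": endpoint, "_source": audit}
```

Stacking then becomes a matter of aggregating over the child type while filtering by the set of parent endpoints in scope.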
Elasticsearch setups require tuning and careful design. Sharding is important to understand because of the way we link parent/child documents: a child is ALWAYS routed to its parent and cannot exist on its own. This means consideration must be given to how many shards are resident on the index. From what we understand, it may be wise to choose a setup that incorporates many nodes with single shards. To gain performance from this kind of setup, we are working on shard-routed searches.
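The idea behind a shard-routed search is that passing the parent id as the routing value hits only the shard holding that endpoint's children instead of fanning out to every shard. A hypothetical helper (assuming an elasticsearch-py 2.x-style client as `es`; the index name is again illustrative):

```python
def routed_child_search(es, endpoint, audit_type, query):
    """Search one endpoint's child documents on a single shard by routing
    on the parent id. Hypothetical helper, not nightHawk's API; `es` is
    assumed to expose an elasticsearch-py-style search(**kwargs) method."""
    return es.search(
        index="investigations",   # illustrative index name
        doc_type=audit_type,      # child type, e.g. "w32registry"
        routing=endpoint,         # parent id => only that parent's shard
        body={"query": query},
    )
```

Without the `routing` parameter the same query would be broadcast to all shards of the index, which is the cost this optimization avoids.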
We are currently working on designing the best possible configuration for fast searching.
This application is designed to scale immensely. From the initial design concept, we were able to run it smoothly on a single-CPU 2GB Ubuntu VM with 3 ES nodes (MacBook Pro), with about 4 million+ documents (or 50 endpoints ingested). Going into production with a setup of 64/128GB RAM and SAS storage, you would be able to maintain lightning-fast response times on document retrieval while many analysts work on the application at once.
DataTables mixed processing:
Several of the ingested audit types are much too large to return all documents to the table. For example, URL History and Registry may return 15k docs to the DOM, and rendering this would strain the client browser. To combat this, we use server-side processing to page through results for certain audit types. This also means you can search over documents within an audit type using Elasticsearch on the backend.
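Server-side processing boils down to translating the DataTables paging parameters into an Elasticsearch from/size query. A sketch under assumed names (the default field and helper are illustrative, not the app's actual code):

```python
def datatables_to_es(params, default_field="Record.Path"):
    """Translate a DataTables server-side request (start/length/search)
    into an Elasticsearch query body. Illustrative helper; the default
    field name is an assumption, not nightHawk's real schema."""
    body = {
        "from": int(params.get("start", 0)),   # row offset of current page
        "size": int(params.get("length", 25)), # rows per page
    }
    term = params.get("search[value]", "")
    if term:
        # wildcard query_string search over the audit's default field
        body["query"] = {"query_string": {
            "default_field": default_field,
            "query": "*%s*" % term,
        }}
    else:
        body["query"] = {"match_all": {}}
    return body
```

Each page request from the table then fetches only `length` documents, so even a 15k-row Registry audit never hits the DOM all at once.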
Currently we can tag documents, view those comments, and update or change them. The analyst is able to give context such as Date/Analyst Name/Comment to the document.
Dependencies (all preinstalled):
Process Handles (in progress).
Time selection sliders for time based generators (in progress).
Context menu for Current/Previous investigations.
Tagging context. The tagging system will integrate into a websocket loop for live comments across analyst panes (in progress).
Ability to move endpoints between either context.
Potentially redesign node tree to be investigation date driven.
Selective stacking, currently the root node selector is enabled.
Shard routing searches.
Redline Audit script template.
More extensive integration with AngularJS (in progress).
Responsive design. (in progress).
Administrative control page for configuration of core settings (in progress).