rockNSM Version 2.4 as an Incident Response Package


Before starting the installation, make sure you read the hardware requirements here. These are the steps that I followed to get rockNSM running with ESXi 6.5+:


rockNSM Community Questions/Answers and Twitter and rockNSM documentation

Back to main page


Installing rockNSM on ESXi from the ISO (also check out rockNSM’s own install guide)

·         Custom install of Rock

·         Set your timezone

·         You can set your network information now or later (see below)

·         Configure the Installation Destination and pick I will configure partitioning and Done

·         Minimum 50 GB drive; it is recommended to manually configure your primary partitions as follows (type the mount point directly into the Mount Point box, because not all of these are listed):

·         / 25 GB

·         /boot 1 GB

·         /boot/efi 512MB

·         swap 8GB      Note: Make sure you set this to swap if you are manually typing it.

·         /home 5 GB+

·         /data remainder
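For reference, the same layout can be expressed as an Anaconda kickstart fragment. This is a sketch, not from the original guide: sizes are in MiB, and the --grow flag (giving /data the remainder of the disk) and filesystem choices are assumptions.

```
# Hypothetical kickstart equivalent of the manual partition layout above
part /boot     --fstype=xfs --size=1024
part /boot/efi --fstype=efi --size=512
part swap      --size=8192
part /         --fstype=xfs --size=25600
part /home     --fstype=xfs --size=5120
part /data     --fstype=xfs --size=1 --grow
```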

·         Create a user account during the installation process (under User Creation, e.g. admin). It is up to you whether to check Make this user administrator

·         System will reboot automatically

·         Select ‘c’ to continue and get the command prompt

·         Log in with the account created during the CentOS installation

·         sudo su -

·         passwd root (assign a root password if desired)

·         If a root password has been set, you can run all the following commands as root instead of prefixing each with sudo


Note: Optional with VMware: After the installation, shut down the VM and add two more drives to keep the elasticsearch and stenographer (packet) data on separate partitions (I picked 25 GB for each)

Configure New Partitions Now, Before Running the Deployment Scripts, If You Plan to Separate Data Collection


·         dmesg | grep sd (find the extra drives. Likely sdb and sdc)

·         sudo cfdisk /dev/sdb      (create a single Linux partition, sdb1)

·         sudo cfdisk /dev/sdc      (create a single Linux partition, sdc1)

·         sudo mkfs.xfs /dev/sdb1

·         sudo mkfs.xfs /dev/sdc1

·         sudo mkdir -p /data/elasticsearch

·         sudo mkdir -p /data/stenographer

·         sudo vi /etc/fstab and add the following entries:


·         /dev/sdb1               /data/elasticsearch     xfs     defaults        0 0

·         /dev/sdc1               /data/stenographer      xfs     defaults        0 0

·         sudo mount -a     (mount all drives)


Running the rockNSM Installer to Configure Sensor

·         sudo rock setup (If this is the first time running, it will create: /etc/rocknsm/config.yml)

·         Select Interface

·         Set Management IP (Select static IP and set network configuration)

·         Set hostname

·         Select online or offline installation

·         Choose components (default is all)

·         Choose services to start on boot (default is all)

·         Review configuration

·         Write configuration changes (This saves your configuration to /etc/rocknsm/config.yml)

·         Now we are ready to run the installer

When the installation completes successfully, you will see a success banner

Updating and patching CentOS

·         yum clean all && yum check-update

·         yum -y update

·         yum -y install open-vm-tools ntp bind-utils net-tools python-pip (open-vm-tools is only needed on VMware; bind-utils is optional if you need nslookup or dig)

·         pip install --upgrade pip

·         pip install oletools            Note: This is required for the fsf package

·         Before rebooting, see note below

Now you are ready to finish the installation and configuration of rockNSM


·         sudo vi /etc/suricata/suricata.yaml and change the HOME_NET

·         sudo vi /etc/suricata/rocknsm-overrides.yaml and verify capture is the correct one. Mine was "interface: ens34"
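As a guide for the HOME_NET edit above, the relevant section of suricata.yaml looks roughly like this; the RFC 1918 ranges shown are illustrative examples, so substitute your own monitored networks.

```yaml
vars:
  address-groups:
    # Example only: set this to the networks your sensor watches
    HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"
    EXTERNAL_NET: "!$HOME_NET"
```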

·         Configuring Bro Network Collection

Edit the Bro config file networks.cfg and make sure the correct networks are listed for collection (your RFC 1918 ranges and/or your public Internet network ranges):

·         sudo vi /etc/bro/networks.cfg
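A minimal networks.cfg might look like the sketch below: one CIDR per line followed by a free-text description. The ranges shown are the standard RFC 1918 blocks and are only an example.

```
# /etc/bro/networks.cfg - networks Bro should treat as local
10.0.0.0/8        Private IP space
172.16.0.0/12     Private IP space
192.168.0.0/16    Private IP space
```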

Note: If you are running multiple interfaces, follow these instructions to enable all of them before rebooting.

·         reboot

Note: After rebooting rockNSM, note the default time is UTC not your local time.

After the system is rebooted, check the services to make sure they are all running:

·         sudo rockctl status (check if everything is working)


Accessing your sensor

·         The username/password to log in to the interface is located in the home directory of the user account you created during setup, in a file called KIBANA_CREDS.README

·         To add your own custom username and password to Kibana do: kibanapw USER PASSWORD

·         https://IPADDRESS - to access Kibana (I recommend a static address)

·         To query packets directly from the sensor: https://IPADDRESS/app/docket/#/query

·         To view and download previous packet queries: https://IPADDRESS/app/docket/#/jobs


Troubleshooting Tips


If Docket won’t start, the lighttpd web service is likely missing mod_openssl. To fix it do:

$ sudo vi /etc/lighttpd/modules.conf

Find server.modules =

Verify that "mod_openssl" is inserted in the configuration and if not, add it to the configuration.
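After the edit, the server.modules list should look roughly like the sketch below. The other module names are illustrative placeholders; your file will list different modules, and only the "mod_openssl" entry matters here.

```
server.modules = (
  "mod_access",
  "mod_openssl",
  "mod_redirect"
)
```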


Manipulating All or Individual Services


The primary command that controls all rockNSM services is rockctl. These are the options available:

$ sudo rockctl: Usage: rockctl {start|stop|status|reset-failed}

$ sudo rock destroy (This will wipe all the sensor data, i.e. logs, indices and pcaps)


You can also find the status of each service with the following commands (start, stop, restart, try-restart, reload, force-reload, status):

$ sudo service zookeeper status

$ sudo service kafka status

$ sudo service bro status

$ sudo service suricata status

$ sudo service filebeat status

$ sudo service elasticsearch status

$ sudo service logstash status

$ sudo service stenographer status

$ sudo service fsf status

$ sudo service lighttpd status                                     Note: This service must run for Docket to work. Check the troubleshooting tips above if Docket isn’t running and add mod_openssl to fix it.

$ sudo service docket status

$ sudo service stenographer@ens34 status         Note: Replace @ens34 with your traffic capture interface
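The per-service checks above can be scripted. This is a minimal sketch using systemctl; the service names are taken from the list above, and you should adjust them (plus the stenographer@<iface> instance) to match your deployment.

```shell
# Print a one-line running/not-running summary for each ROCK service.
for svc in zookeeper kafka bro suricata filebeat elasticsearch logstash \
           fsf lighttpd docket stenographer; do
  if systemctl is-active "$svc" >/dev/null 2>&1; then
    echo "$svc: running"
  else
    echo "$svc: not running"
  fi
done
```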


How Suricata Rule Management Works


Directory Locations

Docket packet directory: /var/spool/docket/      Note: Watch this directory’s disk usage if you are querying lots of packets

Data directories: /data → bro  elasticsearch  fsf  kafka  stenographer  suricata


Zeek (formerly Bro) File Parsing Configuration

·         Zeek (Bro) FSF scripts directory location: /usr/share/bro/site/scripts/rock/frameworks/files

·         Zeek (Bro) FSF scripts configuration location: /usr/share/bro/site/scripts/rock/frameworks/files/extraction/plugins/

To change the default FSF file parsing (extract-common-exploit-types.bro & extract-executable-types.bro) to all (or any of the other options) do:

$ sudo vi __load__.bro

@load ./extract-common-exploit-types.bro   → Default

@load ./extract-executable-types.bro       → Default

@load ./extract-java.bro

@load ./extract-ms-office.bro

@load ./extract-pdf.bro

@load ./extract-pe.bro

Zeek (formerly Bro) Loading Intel Data

Note: Loading a large amount of intel into Zeek may have a performance impact. Also, “The text files need to reside only on the manager if running in a cluster.”

Directory to load intel data into Zeek (Bro): /usr/share/bro/site/scripts/rock/frameworks/intel

Add the appropriate intel to the file below:

$ sudo vi intel-1.dat

#fields	indicator	indicator_type	meta.source	meta.desc	meta.url

<ip-address>	Intel::ADDR	source1	Sending phishing email	-

<domain>	Intel::DOMAIN	source2	Name used for data exfiltration	-

Here is an example adding Malc0de hostnames to Zeek:

cat ZONES | sed 's/zone "\([[:alpha:]].*\)".*type.*/\1 \tIntel::DOMAIN\tMalware\tmalc0de\tblockeddomain\thttp:\/\/\/bl\/ZONES/g'> intel-malc0de.dat

sudo chown bro:bro intel-malc0de.dat
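To see what the sed expression does, here it is applied to one fabricated ZONES line. The BIND-style zone line is an assumption based on the command above, and the meta.url field is replaced with "-" here because the URL in the original command appears truncated. This assumes GNU sed (as on CentOS), where \t in the replacement expands to a tab.

```shell
# One hypothetical line in the malc0de ZONES (BIND zone) format:
line='zone "evil.example.com"  {type master; file "/etc/namedb/blockeddomain.hosts";};'
# Extract the domain and emit a tab-separated Zeek intel record:
echo "$line" | sed 's/zone "\([[:alpha:]].*\)".*type.*/\1 \tIntel::DOMAIN\tMalware\tmalc0de\tblockeddomain\t-/g'
# → evil.example.com 	Intel::DOMAIN	Malware	malc0de	blockeddomain	-
```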

$ sudo vi intel.bro

                redef Intel::read_files += {

                    fmt("%s/intel-1.dat", @DIR),

                    fmt("%s/intel-malc0de.dat", @DIR),

                };