Security Onion & Splunk: Alert Analysis Workflow/Examples

Published: May 5, 2020

Security Onion & Splunk is set up successfully, everything is ingesting and alerting properly, but now what? That largely depends on your individual situation, but I can assume you’ll see some alerts and need to investigate them. So this article will address how to use Security Onion & Splunk to perform an investigation on your alerts.

The first thing we need to decide is where we’re going to get our alerts from. We have a couple of options when using Security Onion, but my go-to apps are Sguil and Squert. Sguil requires a remote client to connect to the Security Onion server, but offers a bit more in terms of alerting. If you’re doing an investigation directly on the Security Onion server, I recommend using Sguil. Squert is an awesome web GUI for investigating Snort and OSSEC alerts. It doesn’t require a client like Sguil does, and can be accessed by browsing to the web GUI. It’s almost as full-featured as Sguil, but its convenience makes it an easy choice for this article.

We’ll start off by logging into Squert (if you’re not up to this point, check out my series: Intro to Security Analysis).

If you can’t access Squert from your host machine, you’ll need to open Security Onion’s local firewall. Run “so-allow” and follow the prompts to allow your host machine to access port 443 on Security Onion.
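If you want to confirm the change took effect before opening a browser, a quick port check from your host machine works. This is just a sketch; the hostname is a placeholder for your Security Onion server’s address:

# Verify the analyst host can now reach Security Onion's web interface on 443.
# "securityonion.local" is a placeholder - use your server's IP or hostname.
import socket

try:
    with socket.create_connection(("securityonion.local", 443), timeout=5):
        print("Port 443 is reachable - Squert should load in the browser.")
except OSError as e:
    print(f"Still blocked or unreachable: {e}")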

Here’s a shot of my welcome screen after logging in:

I have honeypots running on my network, so that’s why Security Onion is lit up like a Christmas tree. This will give us some good alerts to use for examples.

In the top right of the screen you’ll see our alerts broken out by severity:

If you click on one of the numbers, we can filter the results by that severity. Filtering by “Critical” darkens the block and only shows critical alerts, as well as some OSSEC alerts.

Most of my alerts are from IPs with bad reputations connecting to my honeypot, but I was able to grab a couple of interesting alerts.

ELF File Inbound Likely Command Execution

ET WEB_SPECIFIC_APPS ELF file magic plain Inbound Web Servers Likely Command Execution 12:

If you click on the red icon (labeled “4” in my case), it will expand the finding to show additional details.

We can now see the Snort rule that triggered the alert, listed below the red button:

alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS any (msg:"ET WEB_SPECIFIC_APPS ELF file magic plain Inbound Web Servers Likely Command Execution 12"; flow:established,to_server; content:"|5c 5c|x7f|5c 5c|x45|5c 5c|x4c|5c 5c|x46|5c 5c|"; metadata: former_category WEB_SPECIFIC_APPS; classtype:attempted-user; sid:2025869; rev:2; metadata:affected_product Linux, attack_target Web_Server, deployment Datacenter, signature_severity Major, created_at 2018_07_19, updated_at 2018_07_19;)

We also have the source and destination IP addresses involved in the alert.


We can continue to drill down into the alert by clicking again on the red button below the one we initially clicked. Mine shows “4” inside the red button again.

This expands the alert so we can see all the individual instances where it fired. This is especially helpful if the alert is coming from multiple IPs.

Clicking on one of the red “RT” buttons will expand the alert further, showing the actual payload that was flagged on:

If you scroll to the bottom of the alert, Squert gives you a nice ASCII output of the section of the packet it alerted on:

Why is Snort flagging on this traffic? Let’s take another look at the Snort rule (this is just a portion of the rule):

alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS any (msg:"ET WEB_SPECIFIC_APPS ELF file magic plain Inbound Web Servers Likely Command Execution 12"; flow:established,to_server; content:"|5c 5c|x7f|5c 5c|x45|5c 5c|x4c|5c 5c|x46|5c 5c|";

This alert says that any EXTERNAL_NET IP address communicating with our HTTP_SERVERS with the content “|5c 5c|x7f|5c 5c|x45|5c 5c|x4c|5c 5c|x46|5c 5c|” should be alerted on and identified as “ET WEB_SPECIFIC_APPS ELF file magic plain Inbound Web Servers Likely Command Execution 12.” The EXTERNAL_NET and HTTP_SERVERS variables are defined in the snort.conf file.

If we look at the payload, we can see the hex bytes being flagged (without null bytes in between them):

|5c 5c|x7f|5c 5c|x45|5c 5c|x4c|5c 5c|x46|5c 5c|
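To make sense of that content string, here’s a small sketch that decodes it. In Snort content syntax, bytes between pipes are hex (0x5c is a literal backslash) and everything outside the pipes is matched as literal text:

# The rule's content option, decoded: |5c 5c| is two literal backslashes,
# so the rule is searching the payload for this exact text:
matched_text = r"\\x7f\\x45\\x4c\\x46\\"
print(matched_text)                # \\x7f\\x45\\x4c\\x46\\

# Turn those hex escapes back into raw bytes to see what they spell:
print(bytes.fromhex("7f454c46"))   # b'\x7fELF' - the ELF file magic

# In other words, the rule fires when the ELF magic shows up written out as
# escaped hex in plain text - which is exactly how it appears once the
# honeypot's JSON logs (with their doubled backslashes) cross the wire.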

If you need even more detail about what generated the alert, you can fetch the full packet capture. Each alert has a corresponding event ID; clicking on it will bring up the full packet capture.

CapMe pops up and loads the events:

Here’s the header of the full packet capture. Based on the data, it appears 172.16.0.2 (Src IP) is sending logs to Dst IP 192.168.0.150:8777.  We can see file paths being referenced (/home/cowrie/cowrie/var/log/cowrie/cowrie.json).

We also see a number of metrics logs and references to /opt/splunkforwarder/var/log/splunk/splunkd.log.

Here’s similar data to the payload that’s being flagged on:

We need to know more about the hosts in question. We can do some nslookups, ask our system admins, or maybe we already know where these resources are. Based on my knowledge, we’re able to determine that source IP 172.16.0.2 is my Cowrie honeypot. Malicious commands were logged on the honeypot, and these logs were sent to IP 192.168.0.150 over port 8777. This is my Splunk server, and we can assume port 8777 is a listening port it’s ingesting data on.
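If you’d rather script those lookups than run nslookup by hand, something like this works as a starting point (a sketch; it assumes your internal DNS has PTR records for these hosts):

# Reverse-resolve the IPs from the alert to help identify the hosts involved.
import socket

for ip in ("172.16.0.2", "192.168.0.150"):
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
        print(f"{ip} -> {hostname}")
    except socket.herror:
        print(f"{ip} -> no PTR record found")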

It’s interesting to note that Snort had no visibility into the encrypted SSH commands as they were issued by the remote attacker. Once the cleartext commands were logged and sent to Splunk, Snort could finally see the contents and was able to alert.

If this were an actual production resource, we could confidently say it’s compromised. We could limit its network access and continue to observe its behavior, or pull the plug, image and rebuild.

Malicious outbound CURL request

Let’s take a look at one more example. A compromised host is attempting to pull down a malicious payload via a curl request. This again happened on one of my honeypots. Here’s the alert it generated:

A curl request in your environment might not be seen as malicious, and may occur often. It’s hard to distinguish a false positive from a true positive by the alert alone.

If we expand the alert, we get a bit more info. Our alert is flagging on the content “User-Agent curl”, so our resource must be issuing a curl request to an IP address in Japan.

Clicking on the red button “7”, then again on the red “RT”, we can see the portion of the packet Snort alerted on:

We can see source 172.16.0.2 issue a GET request to 133.167.105.83 for “gtop.sh” using the curl user agent.

Click the event ID for detailed packet capture:

The packet headers give us info about the source and destination. We can see the source issue the GET request:

So, did the machine download the file? Nope! It received a 403 Forbidden error:

If we keep scrolling down, we can see the content was blocked:

It looks like the UTM web protection is blocking the honeypot from downloading the file:

Good validation of the UTM Web Protection, but not what I was hoping to do with the honeypot, so I disabled AV scanning in web protection 🙂

What if the download was successful, and we want to grab a copy of the payload? Can do! Click on the .pcap file link at the top of the full packet capture header to download it:

We can now import it into Wireshark. We can see the same requests in Wireshark as in CapMe when we follow the TCP stream.

If the file was successfully downloaded, we could select File -> Export Objects -> HTTP.

The file in the TCP stream is listed, and we can “Save” it.

If the gtop.sh file had successfully downloaded, you’d see its contents. Instead, since the UTM blocked the file from being downloaded, the output contains the web response from the UTM:
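If you prefer the command line over Wireshark, a quick pass over the downloaded pcap with scapy gets you most of the same information. This is a rough sketch: it assumes scapy is installed, doesn’t reassemble TCP streams, and the filename is just a placeholder for whatever CapMe handed you:

# Print any TCP payloads in the pcap that look like HTTP requests/responses,
# so we can see the GET for gtop.sh and the UTM's block page without the GUI.
from scapy.all import rdpcap, TCP, Raw

for pkt in rdpcap("capme_download.pcap"):   # placeholder filename
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        if payload.startswith((b"GET ", b"HTTP/")):
            print(payload.decode("utf-8", errors="replace"))
            print("-" * 60)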

Security Onion App for Splunk

Let’s take a look at the same thing in the Security Onion App for Splunk. We don’t have the ability to pivot to full packet capture or see packet contents, but we can use Splunk as an initial alerting/investigation point.

If we look at our Sguil events, we can see the outbound curl request.

We can also drill into this by clicking on the row containing the alert name, then clicking on an event in the “Select Event Details by Source IP” panel.

You can then take action on each alert entry, querying values via CIF, IR, Robtex, or a custom lookup that we can define.

The “Software” tab is also interesting. Finding rogue software via user agents is always worth investigating.
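If you want to run the same hunt outside of the app, the user agents are also sitting in Bro’s http.log on the sensor. Here’s a minimal sketch; the log path and tab-separated layout are assumptions based on a default Security Onion install:

# Tally user agents from Bro's http.log so rare (possibly rogue) software
# stands out. Adjust the path for your sensor.
from collections import Counter

counts = Counter()
fields = []
with open("/nsm/bro/logs/current/http.log") as f:
    for line in f:
        if line.startswith("#fields"):
            fields = line.strip().split("\t")[1:]
        elif not line.startswith("#") and fields:
            row = dict(zip(fields, line.rstrip("\n").split("\t")))
            counts[row.get("user_agent", "-")] += 1

for agent, count in counts.most_common():
    print(f"{count:6}  {agent}")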

One other good set of dashboards is the “Bro(wser)” group. It’s an easy list of Bro logs to scan through for any anomalies: notice.log,

dns.log, etc…

Client SSH Connections, etc…

The Security Onion App is a great resource to get ideas for building dashboards of your own. You probably won’t use the default app out of the box for analysis, but I still recommend installing it to give you the ability to integrate Splunk into your workflow. 

If I don’t have Snort alerts to investigate, I’m usually developing a Splunk alert, report, or analysis dashboard to uncover anomalies and further drive analysis. While this article’s examples didn’t fully highlight the value of Splunk, imagine if we did not have an alert to pivot from, but only an IP address that a user reports is acting funny. Without alerts to generate full packet capture, we have to rely on additional host- and network-based logging to determine what’s going on. This is why Splunk is an ideal tool for our task: it can correlate data from many sources at once and tell a more complete story.
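As a concrete example of that kind of pivot, here’s a sketch that asks Splunk’s REST search API which indexes and sourcetypes have seen the reported IP over the last day. The IP, credentials, and server address are placeholders for your own deployment:

# Ask Splunk where a suspicious IP has appeared, across every index we have.
import requests

SPLUNK = "https://192.168.0.150:8089"   # management port, not the web UI
SEARCH = 'search index=* "10.0.0.50" earliest=-24h | stats count by index, sourcetype'

resp = requests.post(
    f"{SPLUNK}/services/search/jobs/export",
    auth=("admin", "changeme"),          # placeholder credentials
    data={"search": SEARCH, "output_mode": "csv"},
    verify=False,                        # lab box with a self-signed cert
)
print(resp.text)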