
Prepping your Data for Data Collection

Written by Steve Litras

July 23, 2020

With the advent of data collection, new logging data workflows become possible. If your retention requirements are served by archiving data off to a cheap storage mechanism like S3 or Glacier, you can drastically reduce what’s in your logstore to just what you need for normal troubleshooting, or even only metrics, using data collection to retrieve relevant data only when you need it.

However, it’s critical that the data you retain be partitioned in a way that supports the workflow you’re looking for. For example, if you want to be able to filter data collection by hostname, you’ll need the hostname in the partitioning structure, so it can be extracted (as sketched just below). This post covers a number of things you’ll want to take into account when planning your archive configuration.
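To illustrate the hostname case, a partitioning expression along these lines (a minimal sketch, assuming each event carries a host field) would put the hostname into the path, so a collection run could later filter on it:

${C.Time.strftime(_time, '%Y/%m/%d')}/${host}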

Partitioning Scheme

The most important decision to make when setting up your retention store is the partitioning scheme for the archive. A partitioning scheme, in the case of either file systems or an S3 bucket, is really just a directory plan. Well, S3 doesn’t actually have directories, but the key scheme there mimics a directory structure, so for our purposes, it works. With Cribl LogStream, a “Partitioning Expression” field in both the S3 and Filesystem destinations facilitates this by allowing you to include specified fields in the outgoing data. 

For example, say I have firewall log data that I want to write to the archive, and at archive time I have a number of fields already parsed out of that data:

Field        Purpose
sourcetype   The type of data being written
src_ip       IP address of the source of the traffic
src_zone     Firewall zone that the traffic came from
dest_ip      IP address of the destination of the traffic
dest_zone    Firewall zone for the destination
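
To make that concrete, a single parsed event headed for the archive might carry something like this (the values are purely illustrative):

{
  "_time": 1593626400,
  "sourcetype": "pan:traffic",
  "src_zone": "trusted",
  "src_ip": "10.0.4.85",
  "dest_zone": "trusted",
  "dest_ip": "172.16.3.182",
  "_raw": "..."
}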

We’ll want to include date information in the partitioning scheme, and we want to structure it so that the filter expression can significantly narrow down the files to include. For example, if we use the following partitioning expression:

${C.Time.strftime(_time, '%Y/%m/%d/%H')}/${sourcetype}/${src_zone}/${src_ip}/${dest_zone}/${dest_ip}

The data we see in the S3 bucket will look like this:

2020/07/01/18/pan:traffic/trusted/10.0.4.85/trusted/172.16.3.182/CriblOut-DqlA77.1.json
2020/07/01/18/pan:traffic/trusted/10.0.1.213/trusted/10.0.2.127/CriblOut-RcYp4E.1.json
2020/07/01/18/pan:traffic/trusted/172.16.3.199/trusted/10.0.2.166/CriblOut-mSVTFX.1.json
2020/07/01/18/pan:traffic/trusted/10.0.1.222/trusted/192.168.5.35/CriblOut-u5qA4B.1.json
2020/07/01/18/pan:traffic/trusted/192.168.5.121/trusted/10.0.4.78/CriblOut-5EHiUd.1.json
2020/07/01/18/pan:traffic/trusted/192.168.1.23/trusted/10.0.3.152/CriblOut-kh7gjv.1.json
2020/07/01/18/pan:traffic/trusted/10.0.2.81/trusted/10.0.1.49/CriblOut-DgiKYh.1.json
2020/07/01/18/pan:traffic/trusted/192.168.10.53/untrusted/129.144.62.179/CriblOut-R9T0LJ.1.json
2020/07/01/18/pan:traffic/trusted/192.168.10.53/untrusted/52.88.186.130/CriblOut-zH0bsm.1.json

In the file list above, all of the files contain events that came in on July 1, 2020, in the 6pm hour. They are all of sourcetype “pan:traffic,” all originate in the “trusted” source zone, and are further organized by source IP, destination zone (trusted/untrusted), and destination IP. A partitioning scheme like this allows us to filter in a number of ways:

  • Time range between date x and date y
  • Source or destination zone
  • Source or destination IP address

Or any combination of the above. And since we can use helper functions when filtering a collection run, we can use C.Net.cidrMatch() against the IP address fields to filter on data that comes from or goes to specific network blocks, as in the sketch below.
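For example, a collection filter expression like the following (a sketch, assuming the src_ip and dest_ip path segments have been tokenized into fields of the same names) would limit a run to pan:traffic events that touch the 10.0.0.0/8 block:

sourcetype == 'pan:traffic' && (C.Net.cidrMatch('10.0.0.0/8', src_ip) || C.Net.cidrMatch('10.0.0.0/8', dest_ip))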

It is likely to take some trial and error to get a partitioning scheme in place that facilitates your intended workflow. The good news is that we have a great feature that can help you “rewrite” your existing partition scheme to a new bucket – it’s called Data Collection. Yep, you can use the same tool to reingest the data from your existing retention store and write it with any partitioning changes you want to a new bucket.
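
As a rough sketch of that reingest-and-rewrite pass (the token syntax and field names below follow the earlier example; treat them as indicative and check the Collectors documentation for the exact form), the S3 collector’s path would tokenize the old layout, and the new S3 destination would carry the revised partitioning expression:

Collector path on the old bucket:

/${_time:%Y}/${_time:%m}/${_time:%d}/${_time:%H}/${sourcetype}/${src_zone}/${src_ip}/${dest_zone}/${dest_ip}/

Partitioning expression on the new destination (for instance, dropping the per-IP directories):

${C.Time.strftime(_time, '%Y/%m/%d/%H')}/${sourcetype}/${src_zone}/${dest_zone}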

When To Archive

When archiving is the first step in the route table, it usually means that the event is archived as it came in (after any conditioning pipelines or event breaker rules are applied). This can mean that the attributes that we might want in a partitioning scheme have not been parsed out of the event. 

However, we can choose to do full processing prior to archival, to get any parsing, cleanup, and enrichment into the archived data (making all of those attributes available in the partitioning scheme). We can also end up somewhere in between those two extremes. 

Most likely, this decision will need to weigh any corporate policies against operational needs. Many companies start out with a position that logs at rest must be unmodified, but that’s not a very realistic requirement. (Logs ingested into an analytics system are modified regardless – the individual events may be unchanged, but they’re not necessarily still in their original file or path structure.) What is more realistic is a requirement that any changes to the data be auditable, so that parsed or enriched data can be shown to be legitimate. Because LogStream output includes references to the pipelines that modified the data, it meets that bar.

Data Locality

Another important decision concerns data locality, and it is largely a financial one. For example, if you store your archive data in AWS S3, but your LogStream and/or logging analytics environment is on-premises, you may incur data egress costs and may need to increase internet or Direct Connect bandwidth to accommodate the traffic, so you might choose a local storage target to mitigate these issues instead.

Come Take it For a Spin

LogStream 2.2 is packed with great new features and significant improvements to existing features. We’ve got our first 2.2-focused interactive sandbox, the Data Collection and Replay Sandbox, available for you to master. This gives you access to a full, standalone instance of LogStream for the course content, but you can also use it to explore the whole product. If you’re not quite ready for hands-on, and want to learn more about 2.2, check out the recording of our latest webinar on 2.2, presented by our CEO, Clint Sharp.

Questions about our technology? We’d love to chat with you.
