An ex-colleague at Splunk asked me in a LinkedIn post whether Cribl Stream does anything besides log reduction. This blog is for him. Stream optimizes data so that it’s consumable again. In this blog, I’ll focus on using Stream to improve Splunk search performance while lowering CPU usage.
If you’re in the David Veuve camp, you know the value of using the tstats command to achieve performant searches in Splunk. And if you’re in the Clint Sharp camp, you know the value of time-series databases, such as a Splunk metrics index. In my small lab, in a set of Docker containers, Stream improved the performance of Splunk searches by up to 103x by populating index-time fields and searching via tstats. For a different data set, where a metrics index was populated instead of a traditional event index, performance improved by 13x, and the searches themselves were simplified by leveraging the analytics workspace. The performance improvements will be even larger in a production environment where billions or trillions of events are searched.
Splunk is a great tool that makes it easy to convert raw, unstructured machine data into meaningful outcomes. Curious what data you have? Just run a Google-like search across your logs. Need to report on gigabytes or terabytes of unstructured data and populate statistical graphs or timecharts? Well, that’s where Splunk performance can suffer, because Splunk was first designed for search-time analytics: the schema is built when you run your search.
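To see why that hurts at scale, here’s a minimal sketch (the sourcetype, regex, and status field are illustrative, not from the lab): a schema-on-read report has to pull every raw event off disk and extract its fields with regexes at search time, on every run.

```
index=main sourcetype=tomcat:app
| rex field=_raw "\s(?<status>\d{3})\s"
| timechart span=1h count by status
```

Over a few gigabytes this is fine; over terabytes, that per-event extraction is exactly where the CPU time goes.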
This is where adopting a different strategy, like populating index-time fields or a time-series metrics database, produces much faster results. And since we all love Splunk, let’s love it even more with Stream! So let’s dive into how to improve Splunk performance and lower CPU usage.
Stream is a data pipeline solution that can help you transform your unstructured data into more structured data before it persists to disk. This doesn’t just benefit Splunk; it applies equally when sending to other observability solutions like Datadog, Wavefront, the Elastic Stack, or Sumo Logic, as well as when writing to an S3-compatible API, GCP Cloud Storage, or Azure Blob Storage.
In this blog post, we will focus on Splunk as the destination. Regardless of the destination, transforming the data first helps reduce infrastructure costs, helps reduce storage costs, and enables you to do more with your software license.
How can Stream improve your Splunk search performance? Methods include:

- Populating index-time fields, which can then be searched with the tstats command.
- Converting events to metrics and writing them to a Splunk metrics index.

So, how was this proven? To quantify the benefits, I ran various tests generating 2GB/day in Docker containers on my Macintosh. It is a relatively beefy box, with 8 CPU cores and 32GB of RAM. For this test, I used Tomcat application logs; these have high variability, with several event formats in one log file.
Below is a snapshot of a Stream pipeline processing the Tomcat application logs. In this case, the pipeline is doing transformations for four distinct types of identified events:
The pipeline has various functions to simplify transforming events. To see the impact of a change, you simply save the pipeline, then switch the Preview pane on the right to its Event In/Out view to compare the before and after. Here’s what some of the events look like before being processed by Stream:
And here’s the after:
Now, jumping to Splunk. Two versions of this dataset were sent to a Splunk instance:

- Unprocessed, targeted to index=main.
- Processed by Stream, targeted to index=app_events, with converted metrics targeted to index=app_metrics.

Here’s what the unprocessed events look like in Splunk:
And here are some of the Stream-processed events in Splunk:
And the metrics converted by Stream are all visible under the analytics workspace:
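If you’d rather explore those metrics from the search bar than the analytics workspace, one illustrative query (the actual metric names depend on how the pipeline was configured) lists what Stream wrote:

```
| mcatalog values(metric_name) WHERE index=app_metrics
```

Each name it returns can then be charted with mstats, as in comparison two below.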
Through a single Stream pipeline, we shrank this combined dataset by a modest 16%, making room for additional data ingest. But the bigger benefit is faster search performance, which translates to lower CPU usage on the Splunk infrastructure.
Now for the meat and potatoes (or if you’re a vegetarian, the wok-seared tofu)!
There will be a couple of search performance comparisons:

Comparison one – search-time field vs. index-time field within event indexes:

- A |stats count command on the raw events in index=main, over 24, 48, and 72 hours of data
- A |tstats count command on the processed events in index=app_events, over 24, 48, and 72 hours of data
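As a rough sketch, the two searches look something like this (illustrative queries, not the lab’s exact SPL). The search-time version reads every raw event off disk and counts at search time:

```
index=main earliest=-24h latest=now
| stats count
```

The index-time version is answered from the tsidx metadata, without ever touching _raw:

```
| tstats count where index=app_events earliest=-24h latest=now
```

Swap earliest to -48h or -72h for the longer windows; the gap between the two widens as the window grows.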
Comparison two – search-time field in event index vs. data in a metrics index:

- A stats average of a metric in one of the events in index=main, over 24, 48, and 72 hours of data
- An mstats/analytics workspace rendering of the same metric in index=app_metrics, over 24, 48, and 72 hours of data
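Again as illustrative sketches (response_time is a stand-in for whichever numeric field the Tomcat events actually carry). The event-index version extracts the field from _raw at search time:

```
index=main earliest=-24h latest=now
| stats avg(response_time)
```

The metrics version reads the measurement directly from the metrics store:

```
| mstats avg(response_time) WHERE index=app_metrics earliest=-24h latest=now
```

The analytics workspace builds the equivalent mstats query for you with a few clicks, which is where the “simplified” part comes in.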
The fastest way to get started with Cribl Stream is to sign up at Cribl.Cloud. You can process up to 1 TB of throughput per day at no cost. Sign up and start using Stream within a few minutes.
Experience a full version of Cribl Stream and Cribl Edge in the cloud with pre-made sources and destinations.