
A Data Engineer's Journey to Modernizing with Cribl

November 7, 2023
Categories: Learn

Terry Mulligan is a Splunk consultant with Discovered Intelligence, a data intelligence services and solutions provider that specializes in data observability and security platforms (and he is Notre Dame's biggest fan). He shares what Cribl has brought to the table for his organization and his clients, and how it has changed their processes and the role of the Splunk data engineer.

As a Splunk data engineer at Discovered Intelligence, he's responsible for onboarding Splunk sources for internal customers and stakeholders. This could be for new data sources or for a proof of concept (PoC) where a customer needs the logs in Splunk and wants to understand the costs involved.

Once a request is received, they define the data requirements and collection methods, execute a viability review, and begin the ingestion process.

Data Engineering Before Cribl

Before the arrival of Cribl, data onboarding began with a customer submitting a request. The customer provided information about sizing, locations, and data type; after a quick Teams meeting and a review process, the team determined the viability of the data. This was more of a value test than a technical viability test: did the value of the data exceed the initial and ongoing ingestion costs? To put it simply, was it license-worthy?

Typically, they rejected two types of requests. The first were cases where the 5% of data that was actually interesting wasn’t valuable enough to justify the cost of the other 95%. The second type was bespoke data requests for sources that couldn’t be ingested using regular tools like a universal forwarder, HEC, or TA. Development costs pushed those requests into the non-viable category.

Accepted requests were completed and all but forgotten about until they'd get calls from the customer saying their data wasn't easily searchable and needed to be cleaned up. Then they'd have to deliver the bad news: there wasn't much they could do to simplify the data or make it useful.

Data Engineering After Discovering Cribl Stream

Once they added Cribl Stream to their arsenal, everything changed. Now they have a tool with the power to drop the bloat, only ingest interesting data, process API data in hours instead of days, and decouple source from destination to offer multi-destination routing.
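
To make that concrete: every function in a Stream pipeline is gated by a JavaScript (ECMAScript) filter expression evaluated against each event. Here is a minimal sketch of "dropping the bloat" with the built-in Drop function; the sourcetype and the action field are assumptions for illustration only.

```javascript
// Filter expression on a Drop function (evaluated per event).
// Events matching the filter are dropped; everything else passes through.
// 'pan:traffic' and 'action' are hypothetical names for this sketch.
sourcetype == 'pan:traffic' && action == 'allow'
```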

Since they adopted Cribl Stream, the Splunk data engineering role has become much more challenging and interesting. Conversations with customers have changed: they're no longer quick calls but in-depth conversations that drill into customer needs, goals, and use cases. They've gone from, "Sorry, we won't be able to onboard your new data source" to "Yes, let's see what we can do."

At the start of the migration, Cribl was a novelty for their team members. It began as an afterthought but transformed into the default ingestion point for all new data sources. The ease of ingestion, reduction, and transformation — with the ability to deliver to multiple platforms — means Cribl Stream can add value to all data. For that reason, they no longer send any data directly to Splunk.

Creating and tweaking pipelines can be an intoxicating rabbit hole. The questions they deal with now are more along the lines of — how much time do we spend on Cribling data? How much reduction or transformation do we apply? When is the pipeline good enough? These problems are much nicer to have than the ones that preceded our adoption of Cribl.

Successes of the Journey to Cribl Stream

In the beginning, they focused on the features of Cribl Stream that would provide immediate results and let it shine. They targeted data sources where they knew they could achieve significant reductions, and they reduced their license volume by 3 TB. For hardware reduction, they migrated their syslog data sources from syslog-ng to Cribl and eliminated 14 servers. For data viability, they ingested some previously non-viable data sources using the REST collector.
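
The REST collector itself is configured in the UI rather than written as code, but what it does is easy to picture: a scheduled, authenticated, paginated pull against an HTTP API. A rough JavaScript sketch of that behavior follows; the endpoint, token, and paging scheme are all hypothetical, and this is not actual Cribl configuration.

```javascript
// Illustrative only: the work a scheduled REST collection performs.
// URL, auth header, and paging parameter are all hypothetical.
async function collect() {
  const events = [];
  for (let page = 1; ; page++) {
    const res = await fetch(`https://api.example.com/v1/logs?page=${page}`, {
      headers: { Authorization: `Bearer ${process.env.API_TOKEN}` },
    });
    const batch = await res.json();
    if (!Array.isArray(batch) || batch.length === 0) break; // no more pages
    events.push(...batch);
  }
  return events; // Stream would break these into events and route them
}
```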

These successes provided great talking points for senior leadership and reinforced the decision to purchase Cribl. From there, they moved on to some backlog items that were previously impossible to resolve: primarily, sources with GDPR concerns that could now be onboarded thanks to the flexibility of Cribl's masking feature. They've also added existing inputs that now get routed through Cribl instead of the Splunk gateways.
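
Cribl's Mask function works through ordered regex match/replace rules, and the replace expression can call built-in helpers such as C.Mask.md5(). Here is a sketch of the kind of rule that addresses a GDPR-sensitive field like an email address; the regex is deliberately simplified.

```javascript
// One masking rule as entered in a Mask function (illustrative):
//   Match regex : /([\w.+-]+@[\w-]+\.[\w.-]+)/g
//   Replace expr: C.Mask.md5(g1)   // g1 = first capture group
//
// Each email becomes a stable hash, so events remain correlatable
// without exposing the address itself. C.Mask.REDACTED is another
// built-in option when correlation isn't needed.
```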

One of the more interesting successes involved ingesting AppOmni data. When they first brought this source on and put it through Cribl Stream, it was a ~200 GB/day source. It remained that way for a few months until the AppOmni team expanded their footprint and the volume jumped to 1 TB overnight.

This created a huge licensing issue, but they were easily able to redirect all of the data to Google SecOps, a destination without license restrictions. From there, they sent only a very small subset of the data to Splunk, leaving Cribl to save the day yet again.
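
Routes are what make this kind of split painless: each route pairs a filter expression with a pipeline and an output, and a non-final route lets matching events continue on to later routes. A sketch of the idea, with hypothetical field names and output IDs (routes are built in the UI; this is pseudo-config):

```javascript
// Route 1: the small, high-value subset that justifies Splunk license cost
//   filter : severity == 'critical' || action == 'blocked'  // hypothetical fields
//   output : splunk_prod
//   final  : false         // matching events also continue to Route 2
//
// Route 2: everything, at full fidelity, to the unrestricted destination
//   filter : true
//   output : google_secops
```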

Cribl Stream Best Practices

Now that he's had the pleasure of working with Cribl Stream for some time, here are some of Terry's tips for finding success quickly.

  • Get leadership buy-in. Arm your leadership with dashboards and regular updates, and make sure the successes and benefits of Cribl are clear. Learn more about building an ROI plan in this blog.
  • Define measurable and attainable goals. Examples include reducing license volume by X amount, reducing your hardware footprint, or moving long-term storage to a different platform.
  • Take advantage of Cribl University. Cribl offers free training courses, reference architectures, sandboxes, and other resources that are consistently updated.
  • Plan your architecture ahead of time. Will it be on-prem, cloud, or a mixed environment? Does your company require HA or DR? Are you using AWS? Will you be using availability zones or auto-scaling?
  • Define standard naming conventions. Set them for sources, destinations, pipelines, and lookups before deployment. A consistent, organized set of pipelines is infinitely easier to work with (one possible scheme is sketched after this list).
  • Include worker groups in naming conventions. Make the names meaningful and representative of the group's purpose, e.g., staging, push, or pull.
  • Utilize a staging environment. Give everyone a place to experiment with installing packs, creating pipelines, testing new techniques, and developing their skills without fear of impacting production.
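
To make the naming-convention point concrete, here is one possible scheme. The prefixes and names are purely hypothetical, not a Cribl standard:

```javascript
// Example object naming scheme (hypothetical):
//   Sources       in_<protocol>_<vendor>     e.g. in_syslog_paloalto
//   Destinations  out_<platform>_<env>       e.g. out_splunk_prod
//   Pipelines     pipe_<vendor>_<purpose>    e.g. pipe_paloalto_reduce
//   Lookups       lkp_<subject>              e.g. lkp_host_timezones
//   Worker groups wg_<role>                  e.g. wg_pull, wg_push, wg_staging
```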

Tips & Tricks For Working With Cribl Stream

  • Use the Quick Reference Guide. It will save you hours and hours on Google and Cribl’s Community Slack looking for answers.
  • Develop a default set of data transformation rules. Apply them before the customer sees the data in Splunk. Terry's team drops null fields, converts date/time fields, flattens embedded JSON, and recreates name/time fields with aliases and calculated fields. They also remove headers from syslog sources and use Cribl's automatic syslog parser (the null-field step is sketched after this list).
  • Leverage Lookups to assist with event processing. They use them for field renaming and to fix events with nonexistent or incorrect time fields like Windows Event Logs or network events from devices that should have been set to GMT.
  • Develop guidelines for pipeline format/style. This will come in handy for new hires and larger teams.
  • Take advantage of the Cribl Pack Dispensary. Consider Packs as a starting point for any ingestion — if you’re ingesting a data source, someone else has probably done it before. Review them, learn from them, and then customize them to fit your needs.
  • Learn the ins & outs of Cribl event breakers. Execution in Cribl is different from Splunk. Timestamps use capture groups, and truncation behaves differently.
  • Cribl uses a different flavor of regex. Cribl uses ECMAScript, and Splunk uses PCRE2; they are similar, but there are differences.
  • Cribl uses a different flavor of strptime. It's very similar to Splunk's, but you'll have to learn a few differences (both are illustrated in the second sketch after this list).
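
Here is a sketch of the null-field cleanup step from the default rule set above, written as it might appear in a Cribl Code function, where the event is exposed as the __e object; treat the surrounding pipeline layout as an assumption.

```javascript
// Cribl Code function body (illustrative): drop null/empty fields.
Object.keys(__e).forEach((k) => {
  if (__e[k] === null || __e[k] === '') delete __e[k];
});
// The rest of the default set would typically be separate functions:
//   - Auto Timestamp (or Eval + C.Time.strptime) to normalize date/time fields
//   - Flatten to unpack embedded JSON into dotted field names
//   - the Syslog source's built-in parsing to strip syslog headers
```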
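
And two quick illustrations of the regex and strptime differences from the last two tips. PCRE2's \K (which discards everything matched so far) has no ECMAScript equivalent, so in Cribl you capture instead; for timestamps, Cribl exposes C.Time.strptime(), whose format-token support differs slightly from Splunk's. The format string below is an assumption for a classic syslog prefix.

```javascript
// PCRE2 (Splunk) can use \K:   user=\K\w+   <- invalid in ECMAScript
// ECMAScript (Cribl): capture the part you want instead.
const m = 'user=tmulligan action=login'.match(/user=(\w+)/);
const user = m && m[1]; // 'tmulligan'

// Cribl expression helper: parse a syslog-style timestamp into a Date.
// Test your format strings; token behavior isn't identical to Splunk's.
const ts = C.Time.strptime('Nov  7 14:02:33', '%b %d %H:%M:%S');
```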

Watch Terry’s full presentation from CriblCon 2023 for more on the journey to modernizing data engineering using Cribl.


