Announcing Cribl, the Log Preprocessor

August 27, 2018

Today we’re pleased to announce Cribl, the Log Preprocessor. Cribl is derived from the word cribble, which is a sieve or strainer. We chose the word cribble because getting value from log data is often a matter of sifting valuable log entries from a stream of significantly less valuable data.

For the first time, Cribl gives administrators full access to data in motion, letting them look up, enrich, redact, encrypt, transform, or sample data before indexing. Cribl enables new use cases for Splunk, our initial focus, like bringing in sensitive information with role-based decryption, or affordably bringing in high-volume but low-value data sources like NetFlow. Cribl puts the Splunk admin in control to make logs contextual, safe, and optimized.

We’re releasing in beta today, with our 1.0 release coming October 1st at Splunk’s Users’ Conference. Please sign up here if you’re interested, and read on to learn more about Cribl, how we came to be, and where we’re going.

Our Origin Story

When I left Splunk a year ago, my view of the world was that log management and analysis were a solved problem. Our founding team has over 25 years of combined experience at Splunk and using Splunk, and our worldview was shaped entirely by seeing some incredibly successful and valuable use cases for log analysis. In my own use case as a customer, before joining Splunk, we had real-time visibility into our business down to the retail rep and could drill from a business transaction down to the individual Java process to troubleshoot business, application, or infrastructure performance problems. Other customers have built the most advanced threat-hunting infrastructures on the planet on top of Splunk. In the open source world, Elastic has hugely successful large installations that make it easy to drill from any query down to a specific log line over terabytes of log data.

Yet, despite huge success at many customers, the enterprises we worked with over the last year were struggling. Half had no log management at all. Except for the few most valuable use cases, the value delivered by searching and analyzing log data was not exceeding the solution’s costs in hardware, people, and licenses.

The Problem

Obviously, having seen and experienced such great success elsewhere, we wanted to know why. As we started interviewing and working with these enterprises, three key problems emerged. The first, and most obvious, is that the cost of any log analysis solution is driven primarily by the volume of data being thrown at it. Every enterprise we spoke with was putting only a fraction of its potential data volume into its solution because of concerns about cost. Many of these enterprises were using the open source Elastic Stack, so license costs weren’t a factor; simply storing the data for a reasonable period of time exceeded the value delivered by the solution. And as we looked into the data sources themselves, they were filled with junk: overly verbose, poorly formatted messages, with useless debug and informational entries thrown into the mix. Their top data sources were often things like VPC Flow Logs, where the volume is massive but the value of an individual record is very low.
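To make the junk problem concrete, here is a minimal sketch, in Python, of the kind of severity filtering admins want to apply in flight. The event shape and field names are hypothetical illustrations, not Cribl’s actual configuration or API:

```python
# Minimal sketch of in-flight severity filtering. The dict-shaped
# event and its "severity" field are hypothetical, for illustration.

DROP_SEVERITIES = {"DEBUG", "INFO"}  # keep WARN and above

def should_index(event: dict) -> bool:
    """Return True if the event is worth forwarding to the indexer."""
    return event.get("severity", "INFO").upper() not in DROP_SEVERITIES

events = [
    {"severity": "DEBUG", "msg": "cache miss for key=42"},
    {"severity": "ERROR", "msg": "payment service timeout"},
]
kept = [e for e in events if should_index(e)]  # only the ERROR survives
```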

Secondly, there were security concerns about centralizing logging at all. In enterprises where access to systems is tightly governed, a regime is in place ensuring that only the right people have access to the right systems. A centralized logging system risks exposing sensitive customer or company information, all in one place.

Lastly, we watched how users were using their log analysis tools. Users were wading through external systems to get the context they needed for their log searches, like resolving a DNS hostname from an IP address or looking up the name of a Kubernetes pod in the service they were troubleshooting. Their logs contained only an IP address or a container ID, and often that information had changed between when the log was written and when the user tried to query the logging system.
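This is exactly what ingestion-time enrichment fixes: resolve the context while it is still accurate and attach it to the event. A minimal Python sketch, with a hypothetical lookup table standing in for a DNS or CMDB query:

```python
# Minimal sketch of ingestion-time enrichment. The lookup table is a
# hypothetical stand-in for a DNS or CMDB query made as events arrive.

HOSTNAME_LOOKUP = {
    "10.0.1.17": "checkout-svc-7f9b.prod.internal",
}

def enrich(event: dict) -> dict:
    """Attach the hostname now, while the IP-to-host mapping is still valid."""
    ip = event.get("src_ip")
    if ip in HOSTNAME_LOOKUP:
        event["src_host"] = HOSTNAME_LOOKUP[ip]
    return event

print(enrich({"src_ip": "10.0.1.17", "msg": "connection refused"}))
# {'src_ip': '10.0.1.17', 'msg': 'connection refused',
#  'src_host': 'checkout-svc-7f9b.prod.internal'}
```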

All of these added up to one bigger problem: administrators of log analysis solutions have very little control over the data in motion. We heard stories of bugs in production or a DDoS attack suddenly slamming and overloading a logging system, killing performance at the moment it was needed most. Once administrators turned on a data source, they had little ability to control its format, verbosity, or security. If they found something amiss, the only ways to fix it were to fix the source generating the data, which often required production code changes, or to push configuration changes to thousands of agents to turn that data source off.

Hasn’t this been solved before?

These challenges aren’t new. Early in my career, I worked at AT&T Wireless on billing and rating systems. Call detail records were a massive data source, and they were also how we generated the bills for every subscriber. Call detail records weren’t just for billing, though; there were a ton of superfluous records. For example, the system would generate a record every time you punched a digit on your phone! The volume of data was so large that if we fell behind in processing it, there was no way to catch up. In addition to junk data, these records came in a binary format that wasn’t digestible by the billing system, and some records needed to be enriched or multiple records coalesced into one billing event. To solve these problems, we had to preprocess the data before we could send it to the billing system. We called it mediation, but the use case was the same: transform, enrich, aggregate, redact, route, and sample the data to get it into a usable shape.

It’s fair to ask, given so much prior art, why this doesn’t already exist for logs. Scalable map/reduce approaches to analyzing big data have allowed us to store and process log data cheaply, and with cheap storage, the approach has been to gather and store all the logs. However, regulations like GDPR are forcing us to ask whether what we’re storing is safe, and moving to the cloud makes us question every month whether we want to keep leasing that storage capacity. It became clear as we surveyed the market that there was a real need for a solution to help people manage their data in motion.

The Solution

This is Cribl. We are a preprocessor for log data. Given the unstructured nature and scale of log data, generic approaches to stream processing will not work. For the first time, Cribl offers an easy-to-use, simple user experience backed by full, programmatic access to the data in motion. Cribl enables administrators to look up, enrich, redact, encrypt, transform, or sample data before indexing and storage. Cribl puts the log analysis admin in control to make logs contextual, safe, and optimized.
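As one example of what redacting before indexing means in practice, here is a minimal Python sketch that masks card-number-like strings in flight; the pattern and function are illustrative, not Cribl’s implementation:

```python
import re

# Illustrative pattern: 13-16 digits, optionally separated by spaces or dashes.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(raw: str) -> str:
    """Mask anything card-shaped before the event reaches the index."""
    return CARD_PATTERN.sub("XXXX-REDACTED", raw)

print(redact("charge failed for card 4111 1111 1111 1111, retrying"))
# charge failed for card XXXX-REDACTED, retrying
```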

Given our deep experience in the Splunk product, community, and ecosystem, we are focusing initially on Splunk. We absolutely love Splunk as a product and as a company. Cribl allows you to do things with Splunk you’ve never done before, like ingestion-time enrichment that lets you performantly find all logs for a particular application or user, or smart sampling, which lets you bring in all of a high-volume data source like web access logs, keeping the errors and sampling the successes (see the sketch below). Cribl also helps Splunk customers get the maximum value from their investment by putting in the right data and filtering out the junk. Cribl’s out-of-the-box and community knowledge lets customers share best practices for bringing in chatty data sources like Windows Event Logs, Cisco ASA, or Palo Alto firewall logs with just the right information. Cribl is built to fit into your existing Splunk architecture: it can be deployed on a Heavy Forwarder or an Indexer and requires no new architectural components.
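To illustrate the smart-sampling idea, here is a minimal Python sketch that keeps every error status and roughly one in ten successes. The field names and sample rate are hypothetical, not Cribl’s actual API:

```python
import random

SAMPLE_RATE = 10  # keep roughly 1 in 10 successful requests

def keep(event: dict) -> bool:
    """Always keep errors; probabilistically sample everything else."""
    if event.get("status", 200) >= 400:
        return True               # errors are always worth indexing
    return random.randrange(SAMPLE_RATE) == 0  # sample the successes

access_log = [{"status": 200}] * 1000 + [{"status": 500}] * 5
sampled = [e for e in access_log if keep(e)]
# all 5 errors survive; roughly 100 of the 1000 successes do
```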

Join Our Beta!

Our beta release is out now, and we’d love to hear from you! We will have a booth at Splunk’s Users’ Conference October 1st through 4th, so if you’ll be in Orlando, come talk to us. Our product will be generally available October 1st. We’d love to have you as a prospect and a customer, so please reach out to us at hello@cribl.io!
