Today we’re pleased to announce Cribl, the Log Preprocessor. Cribl is derived from the word cribble, which is a sieve or strainer. We chose the word cribble because getting value from log data is often a matter of sifting valuable log entries from a stream of significantly less valuable data.
For the first time, Cribl gives full access to the data in motion to look up, enrich, redact, encrypt, transform, or sample data before indexing. Cribl enables new use cases for Splunk, our initial focus, like bringing in sensitive information with role-based decryption, or affordably bringing in high-volume but low-value data sources like NetFlow. Cribl puts the Splunk admin in control to make logs contextual, safe and optimized.
We’re releasing in beta today, with our 1.0 release on October 1st at Splunk’s Users Conference. Please sign up here if you’re interested. Please read on to learn more about Cribl, how we came to be, and where we’re going.
When I left Splunk a year ago, my view on the world was that log management and analysis was a solved problem. Our founding team has over 25 years of experience at Splunk and using Splunk, and our world view was entirely through a lens of seeing some incredibly successful and valuable use cases for log analysis. In my own use case as a customer, before joining Splunk, we had real time visibility into our business down to the retail rep and could drill from a business transaction down to the individual Java process to troubleshoot business, application or infrastructure performance problems. Other customers have built the most advanced threat hunting infrastructures on the planet on top of Splunk. In the open source world, Elastic has hugely successful large installations making it easy to drill from any query down to a specific log line over terabytes of log data.
Yet, despite huge success at many customers, the enterprises we worked with over the last year were struggling. Half had no log management at all. The value delivered by searching and analyzing log data was not exceeding the costs of the solution in hardware, people and license costs except for the few, most valuable use cases.
Obviously, having seen and experienced such great success elsewhere, we wanted to know why. As we started interviewing and working with these enterprises, three key problems emerged. The first, and most obvious, is that the cost of any log analysis solution is driven primarily by the volume of data being thrown at it. Every enterprise we spoke with was putting only a fraction of the total volume it could into its solution because of concerns about cost. Many of these enterprises were using the open source Elastic stack, so license costs weren’t a factor; simply storing the data for a reasonable period of time was exceeding the value delivered by the solution. But as we looked into the data sources themselves, they were filled with junk: overly verbose, poorly formatted messages, with useless debug and informational entries thrown into the mix. Their top data sources were often things like VPC Flow Logs, where the volume is massive but the value of an individual record is very low.
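The reduction step described above can be sketched in a few lines. This is a hypothetical, generic Python filter (not Cribl's actual interface) that drops low-value debug and informational lines before they reach, and get billed by, the downstream analysis system:

```python
import re

# Hypothetical severity filter: drop DEBUG/INFO/TRACE noise before forwarding,
# so only lines worth indexing reach the log analysis system.
NOISE = re.compile(r"\b(DEBUG|INFO|TRACE)\b")

def reduce_stream(lines):
    """Yield only the log lines worth indexing."""
    for line in lines:
        if NOISE.search(line):
            continue  # junk: verbose debug/informational message
        yield line

sample = [
    "2018-05-01T10:00:00 DEBUG cache warmed",
    "2018-05-01T10:00:01 ERROR payment failed for order 1234",
    "2018-05-01T10:00:02 INFO heartbeat ok",
]
print(list(reduce_stream(sample)))
```

A real pipeline would filter on parsed severity fields rather than a regex over raw text, but the economics are the same: every line dropped here is a line you never pay to store.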
Secondly, there were security concerns about centralizing logging at all. In enterprises where access to systems is tightly governed, a regime is in place to ensure that only the right people have access to the right systems. A centralized logging system risks exposing customer or company-sensitive information, all in one place.
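One mitigation for this risk is masking sensitive values in the stream before events ever reach the central system. A minimal sketch, using a hypothetical card-number pattern rather than any Cribl-shipped rule:

```python
import re

# Hypothetical in-stream redaction: mask all but the last four digits of a
# card number before the event reaches the centralized logging system.
CARD = re.compile(r"\b\d{4}-\d{4}-\d{4}-(\d{4})\b")

def redact(line):
    """Replace the leading groups of a card number with X's."""
    return CARD.sub(r"XXXX-XXXX-XXXX-\1", line)

print(redact("charge card 4111-1111-1111-1234 approved"))
# -> charge card XXXX-XXXX-XXXX-1234 approved
```

Encrypting the match instead of masking it (with role-based decryption at search time) follows the same shape: the transformation happens once, in motion, rather than in every downstream consumer.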
Lastly, we watched how users were using their log analysis tools. Users were wading through external systems to get the context they needed for a log search, like resolving a DNS hostname from an IP or looking up the name of a Kubernetes pod in the service they were troubleshooting. Their logs contained only an IP address or a container ID, and often that information had changed between when the log was written and when the user tried to query their logging system.
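Ingestion-time enrichment addresses exactly this: attach the context while the mapping is still current. A sketch with a hypothetical in-memory lookup table (a real deployment would use DNS or a cluster API, and this is not Cribl's actual interface):

```python
# Hypothetical ingestion-time enrichment: resolve the source IP to a hostname
# while the IP-to-host mapping is still valid, instead of forcing the user to
# resolve it later, after the mapping may have changed.
LOOKUP = {
    "10.0.1.17": "web-frontend-03",
    "10.0.2.44": "k8s-pod-checkout-7f9c",
}

def enrich(event):
    """Return a copy of the event with a 'host' field added from its source IP."""
    event = dict(event)  # don't mutate the caller's copy
    event["host"] = LOOKUP.get(event.get("src_ip"), "unknown")
    return event

print(enrich({"src_ip": "10.0.2.44", "msg": "checkout timeout"}))
```

Once the hostname is stamped onto the event at ingest, every later search can filter on it directly, with no wading through external systems.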
All of these added up to one bigger problem: administrators of log analysis solutions have very little control over the data in motion. We heard stories of bugs in production or a DDoS attack suddenly slamming and overloading a logging system, killing performance at the moment it was needed most. Once administrators turned on a data source, they had little ability to control its format, verbosity, or security. If they found something amiss, the only way to fix it was to fix the source generating the data, often requiring production code changes, or to push configuration changes to thousands of agents to turn that data source off.
These challenges aren’t new. Early in my career, I worked at AT&T Wireless on billing and rating systems. Call detail records were a massive data source, and they were also how we generated the bills for every subscriber. Call detail records weren’t just for billing, though; there were a ton of superfluous records. For example, the system would generate a record every time you punched a digit on your phone! The volume of data was so large that if we fell behind in processing it, there was no way to catch up. In addition to junk data, these records came in a binary format that wasn’t digestible by the billing system, and some records needed to be enriched or multiple records coalesced into one billing event. To solve these problems, we had to preprocess the data before we could send it to the billing system. We called it mediation, but the use case was the same: transform, enrich, aggregate, redact, route, and sample the data to get it into a shape that’s usable.
It’s fair to ask, since there’s so much prior art, why doesn’t this exist for logs? Scalable map/reduce approaches to analyzing big data have allowed us to store and process log data cheaply. With cheap storage, the approach has been to gather and store all the logs. However, regulations like GDPR are requiring us to ask whether what we’re storing is safe, and moving to the cloud makes us question every month whether we want to continue leasing that storage capacity. It became clear as we surveyed the market that there was a real need for a solution which could help people manage their data in motion.
This is Cribl. We are a preprocessor for log data. Given the unstructured nature and scale of log data, generic approaches to stream processing will not work. For the first time, Cribl offers a simple, easy-to-use experience backed by full, programmatic access to the data in motion. Cribl enables administrators to look up, enrich, redact, encrypt, transform, or sample data before indexing and storage. Cribl puts the log analysis admin in control to make logs contextual, safe and optimized.
Given our deep experience with the Splunk product, community, and ecosystem, we are focusing initially on Splunk. We absolutely love Splunk as a product and as a company. Cribl allows you to do things with Splunk you’ve never done before, like performantly finding all logs for a particular application or user through ingestion-time enrichment, or smart sampling, which lets you bring in all of a high-volume data source like web access logs, keep the errors, and sample the successes. Cribl also helps Splunk customers get the maximum value from their investment by putting in the right data and filtering out all the junk. Cribl’s out-of-the-box and community knowledge lets customers share best practices for bringing in chatty data sources like Windows Event Logs, Cisco ASA, or Palo Alto Networks firewall logs with just the right information. Cribl is built to fit into your existing Splunk architecture. It can be deployed on a Heavy Forwarder or an Indexer and requires no new architectural components.
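The smart-sampling idea above (keep every error, sample the successes) can be sketched as follows; `smart_sample` is a hypothetical helper for illustration, not a function Cribl ships:

```python
import random

def smart_sample(events, rate=100, rng=random.random):
    """Keep all error responses; keep roughly 1-in-`rate` successes.

    `rng` is injectable so sampling can be made deterministic in tests.
    """
    for ev in events:
        if ev["status"] >= 500:
            yield ev  # always keep errors: they're the needles
        elif rng() < 1.0 / rate:
            yield ev  # a sampled success, representative of the rest

events = [{"status": 500}, {"status": 200}, {"status": 503}]
# rng pinned to 1.0 -> no successes pass the sample; only errors survive.
print(list(smart_sample(events, rate=10, rng=lambda: 1.0)))
```

The payoff is that aggregate success rates stay statistically estimable (scale sampled counts by `rate`) while every error remains searchable in full.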
Our beta release is out now, and we’d love to hear from you! We will have a booth at Splunk’s Users Conference, October 1st through 4th, so if you’ll be in Orlando, we’d love to talk to you there. Our product will be generally available October 1st. We’d love to have you as a customer, so please reach out to us at hello@cribl.io!