Talk to an Expert ›Normally I prefer to write a wittier headline for a release announcement, especially for a release this important. Three things are holding me back, however: a) I really want you to know we support the Universal Forwarder and Syslog, b) length of headline, c) it’s April Fucking 1st, and it’s possible this is so cool people will think we’re joking anyways.
Since we launched Cribl 6 months ago, we've known that moving beyond supporting already-structured events, like those sent from a Splunk Heavyweight Forwarder or an Elastic Beats agent, was just a matter of time. That time is six months, as it turns out. But we didn't want to simply copy the ways of the past! We wanted to bring a fresh perspective and accomplish a goal which sounds simple but is difficult to achieve: make getting data into log systems as easy as possible.
What makes supporting the Universal Forwarder and Syslog different from what we've been shipping for the last 6 months is the ability to take a stream of bytes, break it up into events, and extract a timestamp. In Cribl, these actions are paired with a filter expression, which gives you supreme flexibility for identifying data that should be matched, including matching the content of the events themselves. In fact, we ship example content for AWS, Apache, Cisco, Palo Alto, and Bro logs that will break up events properly and extract their timestamps based on finding signatures of those log lines in the raw byte streams.
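To illustrate the idea of matching signatures in raw byte streams, here is a minimal Python sketch. The rule names, regexes, and sample line are illustrative assumptions, not Cribl's actual shipped content or configuration format:

```python
import re

# Hypothetical signatures in the spirit of the shipped example content:
# each rule pairs a name with a signature matched against the raw bytes.
RULES = [
    ("apache_access", re.compile(rb'^\S+ \S+ \S+ \[\d{2}/\w{3}/\d{4}')),
    ("cisco_asa",     re.compile(rb'%ASA-\d-\d+')),
]

def pick_rule(raw: bytes) -> str:
    """Return the first rule whose signature matches the head of the stream."""
    for name, sig in RULES:
        if sig.search(raw[:4096]):  # only sniff the start of the stream
            return name
    return "fallback_newline"  # default: newline breaking + auto timestamping

sample = b'127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET / HTTP/1.0" 200 2326\n'
# pick_rule(sample) -> "apache_access"
```

Once a rule is selected, its event-breaking and timestamp settings are applied to the rest of the stream.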
Supporting the Universal Forwarder also adds a really nice side effect: we can now deploy on Splunk's indexing tier again. In Cribl 1.0, we discovered a problem when deploying on Splunk's indexing tier that was difficult to work around. By taking over the parsing workload, once again it's safe for us to deploy on indexers. For customers where hardware resources are limited, this can really speed time to deployment.
In addition to support for Splunk's Universal Forwarder and Syslog, there are also some other great new features: support for Confluent's Schema Registry and Avro schemas in our Kafka Input and Output, support for Azure Blob Storage and Event Hub, support for writing CEF format events, and a new Dynamic Sampling function which controls sample rate based on data volume.
Cribl LogStream 1.5 is available for download today!
The first step in dealing with byte streams is to take a chunk of bytes and turn them into discrete events. Splunk's Universal Forwarder does not do parsing; it simply lifts the new bytes being written to a file and throws them on to the indexing tier for parsing. In the log world, probably 80% or more of the data is simply a message per line, but as with all things software engineering, the remaining 20% is where all the hard work lies. Log systems need to support multi-line events, like Windows logs or stack traces. Cribl provides a new configuration primitive for this: Event Breaking Rules.
Rulesets allow you to configure Filter Conditions, which can include a match against raw bytes or any metadata that comes with the event. Once we've identified which rule to use, you can configure a regular expression for breaking the bytes into discrete events, plus a regex-and-strptime configuration to extract the time. By default, Cribl will break on newlines and automatically extract timestamps, so for the 80% case we'll usually work fine without any configuration at all.
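To make the mechanics concrete, here's a rough Python sketch of what an event-breaking rule does conceptually. The break regex, strptime pattern, and sample syslog data are all illustrative assumptions, not Cribl's actual implementation or config format:

```python
import re
from datetime import datetime

# Hypothetical rule: break before each line that starts with a syslog-style
# timestamp, so indented continuation lines (e.g. stack traces) stay attached.
BREAK_BEFORE = re.compile(r"(?=^\w{3} +\d+ \d{2}:\d{2}:\d{2} )", re.MULTILINE)
TIME_RE = re.compile(r"^(\w{3} +\d+ \d{2}:\d{2}:\d{2})")
TIME_FMT = "%b %d %H:%M:%S"  # strptime pattern paired with the regex above

def break_events(raw: bytes):
    """Split a chunk of bytes into discrete events and extract each timestamp."""
    text = raw.decode("utf-8", errors="replace")
    for chunk in BREAK_BEFORE.split(text):  # zero-width split (Python 3.7+)
        if not chunk.strip():
            continue
        m = TIME_RE.match(chunk)
        ts = datetime.strptime(m.group(1), TIME_FMT) if m else None
        yield {"_raw": chunk.rstrip("\n"), "_time": ts}

stream = (b"Oct  3 14:02:11 host sshd[42]: error\n"
          b"  Traceback line 1\n"
          b"  Traceback line 2\n"
          b"Oct  3 14:02:12 host sshd[42]: ok\n")
events = list(break_events(stream))
# Yields two events: the multi-line traceback stays attached to the first.
```

The key design point is that breaking and timestamping travel together: the same rule that knows where an event starts also knows where its time lives.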
Another important change with taking over the parsing workload is that we can now safely be deployed on Splunk's indexing tier. Universal Forwarders are pointed at Cribl, and Cribl sends data to its local indexer. For customers who do not have a Heavyweight Forwarder tier or who find it difficult to provision new capacity, this change allows us to move to production on their existing hardware footprint.
In addition to making it easier to just drop into your existing environment and making it easier to deploy, we’ve also added support for Confluent Schema Registry to put data easily onto Kafka in Avro. We added support for Azure Blob Store and Event Hub to bring our Azure support on par with our AWS support. Lastly, we’ve added support for Dynamic Sampling, which allows the system to choose how much data to bring in and give you a reasonable distribution of data at a fraction of the volume.
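As a rough illustration of the Dynamic Sampling idea, here's a simplified Python sketch, not Cribl's actual algorithm: keep 1-in-N events per key, where N grows as the volume seen for that key grows, so high-volume keys are aggressively thinned while rare keys pass through in full.

```python
import math
from collections import defaultdict

class DynamicSampler:
    """Keep 1-in-N events per key, where N doubles as the volume
    seen for that key doubles (logarithmic backoff). A sketch only."""
    def __init__(self):
        self.seen = defaultdict(int)

    def rate(self, key):
        # 1, 1, 2, 4, 8, ... as the count for this key crosses powers of two.
        return max(1, 2 ** int(math.log2(max(1, self.seen[key]))))

    def sample(self, key):
        keep = self.seen[key] % self.rate(key) == 0
        self.seen[key] += 1
        return keep

sampler = DynamicSampler()
kept = sum(sampler.sample("status=200") for _ in range(1024))
# Of 1024 events for this key, only those at indices 0, 1, 2, 4, 8, ..., 512
# are kept (11 events); a brand-new key's first event is always kept.
```

The distribution-preserving part comes from sampling per key: each distinct key keeps a representative share of its own traffic rather than one global rate flattening everything.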
That’s it for release 1.5.
If you like what we’re doing, come and check us out at Cribl.io. If you’d like more details on installation or configuration, see our documentation or join us in Slack #cribl, tweet at us @cribl_io, or contact us via hello@cribl.io. We’d love to help you!