August 26, 2019
People’s log analysis systems are critical to their day-to-day work. How do you observe what’s going on in your environment without data flowing into your metrics and logging tools? Keeping production log ingestion pipelines flowing is business critical, which presents a unique problem for LogStream: while we give you fundamentally new capabilities to work with log and metric data in motion, how do we prove that it works without putting it into your pipeline? Many prospects end up setting up a test environment just to try us out. LogStream lets you upload samples so you can work with data that isn’t streaming through LogStream, but sample sizes are limited, and it’s up to you to download an extract from something like Splunk and upload it to us. We wanted to make it easy for Splunk customers to try working with their data in LogStream without first putting us in their log pipeline.
We’re solving this problem with the release of Cribl LogStream 1.7. Our Cribl App for Splunk package now includes a new Splunk search command, criblstream, which sends data to Cribl (by default on the same machine). You can now take the data you already have at rest in Splunk, stream it over, and work with it in Cribl — seeing how we can transform, sample, suppress, and aggregate information without having to put us in the pipeline first. This is also an excellent way to feed data to Cribl for testing your pipelines before you configure a route to run data through them.
In addition to our new Splunk search command, we’re also releasing a highly requested feature: Persistent Queues. If a destination is unavailable, you can configure Cribl to spool data to disk until the destination comes back online. This is especially important for more ephemeral data producers, like Splunk’s HTTP Event Collector or Syslog, as those producers often do not retry if the destination is unavailable or, in the case of UDP syslog, have no way of knowing it is down. In addition to Persistent Queues, we’re now also shipping Cribl as a statically linked binary which includes the NodeJS runtime, a slew of routing enhancements, and a few other features. Let’s review the new features in more detail.
If you install the Cribl App for Splunk, you’ll now get a new search command called criblstream. criblstream allows you to take the output from any search and send it over to Cribl via our TCPJSON protocol. Now you can easily shuffle any data you have at rest in Splunk over to Cribl, and Cribl can be configured to output that data back to Splunk, or anywhere else Cribl can output data. Do you want to see how Cribl lays out data in S3? See what it’s like to output to Elasticsearch? It’s easy to take any data you have in Splunk and use Cribl to route it anywhere, although with today’s implementation you won’t be able to move more than around 200 GB of data a day.
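As a sketch, a search like the following could forward its results to a local Cribl instance. The search itself is illustrative, and criblstream’s exact options aren’t shown here — by default it sends to Cribl on the same machine, so no arguments are needed; check the Cribl App for Splunk documentation for the full syntax.

```spl
index=_internal sourcetype=splunkd earliest=-15m
| criblstream
```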
Using criblstream paired with Cribl’s Aggregation function, it’s trivial to estimate the size of datasets before and after cleaning out noisy data. We can output data back to Splunk for easy querying, so you can see whether the transformations break any of your existing apps before putting us in the pipeline. You can even try out Cribl’s encryption functionality, and the decrypt command will work out of the box.
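The kind of before/after size estimate described above can be sketched in plain JavaScript. This is illustrative only — it is not Cribl’s Aggregation function, and the sample events are made up — but it shows the idea: measure raw bytes, drop noisy events, measure again.

```javascript
// Illustrative only: estimate data volume before and after filtering noisy events.
const events = [
  { _raw: '2019-08-26 INFO  user login ok' },
  { _raw: '2019-08-26 DEBUG cache miss for key=abc' },
  { _raw: '2019-08-26 ERROR payment failed' },
  { _raw: '2019-08-26 DEBUG cache miss for key=def' },
];

// Sum the raw event sizes in bytes (characters, for this ASCII sample).
const bytes = evts => evts.reduce((n, e) => n + e._raw.length, 0);

const before = bytes(events);
// Drop DEBUG noise, as a reduction pipeline might.
const kept = events.filter(e => !e._raw.includes('DEBUG'));
const after = bytes(kept);

console.log(before, after, ((1 - after / before) * 100).toFixed(1) + '% saved');
```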
New in LogStream 1.7 is the ability to spool data to disk in the event a destination becomes unavailable. By default, Cribl supports backpressure, so agents like Splunk’s Universal Forwarder or Elastic’s Beats will back off sending until the destination is accepting data again. However, ephemeral data producers like Syslog (especially over UDP) or HTTP-based transport mechanisms often don’t support backpressure, so data loss can occur when a remote system is unavailable. In LogStream 1.7, you can now configure persistent queueing for a given destination: once we detect that a destination is unavailable, we begin spooling data to disk. This is especially helpful in systems where backpressure is a regular occurrence, and can help minimize data loss.
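As a rough sketch only — the field names below are hypothetical and not the exact LogStream 1.7 destination schema — per-destination persistent-queue settings might look something like this in a destination config:

```yaml
outputs:
  my-splunk:
    type: splunk
    host: splunk.example.com
    port: 9997
    pqEnabled: true                      # spool to disk when the destination is down
    pqPath: $CRIBL_HOME/state/queues     # where spooled events are written
    pqMaxSize: 5GB                       # cap on the on-disk queue
```

Consult the LogStream documentation for the actual setting names and defaults for your version.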
As pointed out in our recent blog post, Going Native, by our CTO Ledion Bitincka, we’ve open sourced a new project called js2bin, which we’re using to compile Cribl into a statically linked binary. Note that this changes our packages: when you go to download the newest version of Cribl, you’ll need to pick which platform you want. As of LogStream 1.7, we no longer depend on a system-installed NodeJS, so no more having to figure out the best version of Node to run! Over time, this will also allow us to move some code into C++ for better performance, and you won’t have to worry about compiling Node native modules on whatever platform you happen to be on.
This release, we’ve added a number of enhancements to routing, almost all driven directly by customers pushing the boundaries of our current UX. First, and simplest, you can now insert routes anywhere in the list via a contextual menu. Next, you can now group routes in the UI. This is purely cosmetic, but it can really help declutter a route list growing into the dozens. Lastly, we’ve added a new output type called the Output Router, which allows you to defer the routing decision to the end of the pipeline. With an Output Router, you can output to multiple destinations from a single pipeline, or choose the right destination based on a filter expression which matches against the event. One of our customers’ real-world use cases was choosing which Splunk cluster to send data to based on a tenant ID embedded in the event.
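Conceptually, an Output Router evaluates filter expressions against each event and picks the first matching destination, much like the tenant-ID example above. This standalone JavaScript sketch mimics that behavior — the field names and destination names are made up, and this is not Cribl’s implementation:

```javascript
// Illustrative sketch of Output Router-style selection; not Cribl's implementation.
const routes = [
  { filter: e => e.tenantId === 'acme',   output: 'splunk-cluster-a' },
  { filter: e => e.tenantId === 'globex', output: 'splunk-cluster-b' },
  { filter: () => true,                   output: 'splunk-default'   }, // catch-all
];

function selectOutput(event) {
  // First matching filter wins, like a routing table.
  return routes.find(r => r.filter(event)).output;
}

console.log(selectOutput({ tenantId: 'acme' }));
console.log(selectOutput({ tenantId: 'unknown' }));
```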
Also in this release is a new Serializer function. Similar to our Parser function, it allows you to take an object and serialize it out to a field in a number of formats, like JSON or CSV. Additionally, we’ve added capture icons throughout the UI, so it’s easier than ever to capture data coming from just a particular Input, Sourcetype, etc.
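Conceptually, serialization is the inverse of parsing: take a structured object and write it into a single field as text. A minimal JavaScript sketch of the idea, with made-up field values — not the Serializer function itself:

```javascript
// Illustrative sketch of serializing an object into a field, JSON- and CSV-style.
const event = { user: 'jdoe', status: 200, bytes: 512 };

// JSON serialization into a single string field:
const asJson = JSON.stringify(event);

// Simple CSV-style serialization (no quoting or escaping, unlike a real serializer):
const asCsv = Object.values(event).join(',');

console.log(asJson);
console.log(asCsv);
```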
That’s it for 1.7! Check out the Release Notes for more detail. Head to our website to download the latest.
Rick Salsa Apr 17, 2024