September 4, 2018
The need for operational & performance visibility grows at least linearly with your infrastructure sprawl; the more data your VMs, containers, APIs, apps, services, users, etc. emit, the greater the impact on the performance and user experience of the analysis system. In theory this problem is easy to solve: simply scale the analysis system at the same rate as the infrastructure. But practice is a bitch and will throw multiple wrenches at that idea – wrenches of the economic, political, compliance, storage, and staffing variety. Half of the customers we've spoken with, when faced with high volume/low value data, do one of the following, or a combination of both:
1. Limit/Omit: Don’t index it at all (or dump it in cheap storage for later analysis).
2. Summarize: Spin up additional infrastructure and bring in aggregates.
Is there a way to use existing infrastructure to derive some amount of knowledge from this data without completely breaking the analysis system?
Yes, one way to address it is with sampling. The concept is pretty straightforward: instead of using the entire set of events to make a decision, use a representative subset to get as statistically close to the exact answer as necessary. This is not unlike, for example, a pharmaceutical/healthcare company saying "cure X works 85% of the time…". They did not test the cure on the entire population and then observe that 85% of it reacted positively – even though, in theory, they certainly could :). What they did instead is test the cure on a sample that is representative enough to support their claims.
Similarly, in high volume/low value data scenarios, one can ingest a sample that is as dense or as sparse as needed to answer the right questions with certain statistical guarantees. When looking at operational trends, having an appropriately sampled subset of data is definitely better than having no data at all.
Cribl ships out of the box with a Sampling function. At the time of this writing, it implements Systematic Sampling, which can be applied to the entire data stream or a subset defined by filters or field values. In Systematic Sampling, "the sampling starts by selecting an element from the list at random and then every k-th element … is selected, where k is the sampling interval."
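To make the mechanics concrete, here's a minimal JavaScript sketch of systematic sampling. This is illustrative only, not Cribl's implementation:

```javascript
// A minimal sketch of systematic sampling: pick a random starting offset
// within the first k elements, then keep every k-th element after it.
function systematicSample(events, k) {
  const start = Math.floor(Math.random() * k); // random start in [0, k)
  return events.filter((_, i) => i >= start && (i - start) % k === 0);
}

// Example: sampling 100 events at 5:1 keeps exactly 20 of them.
const events = Array.from({ length: 100 }, (_, i) => ({ id: i }));
console.log(systematicSample(events, 5).length); // 20
```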
The Function has two filter levels. One – which every other Function in Cribl has – determines which events are passed to it at all. The other determines what level of sampling to apply to events matching further conditions, such as field values or arbitrary expressions. Examples:

- `sourcetype=access_combined` events at 3:1
- `sourcetype=access_combined` events with `status=200` at 5:1
- `sourcetype=access_combined` events with `status=200` at 5:1, but only if they're from a host that starts with `web*`
- `sourcetype=aws:cloudwatchlogs:vpcflow` events with `action=ACCEPT` at 8:1

Multiple filters allow for smart and highly selective sampling. That is, we can ingest full-fidelity, high-value events from a high volume/low value data stream, but sample the rest.
Each time an event is emitted by the function, an index-time field is added to it, `sampled::`, which can be used in statistical functions as necessary. For example, if `sourcetype=access_combined` events with `status=200` are sampled at 5:1, you can estimate the original number of 200s by multiplying the current count by 5.
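As a quick worked example (the counts here are made up):

```javascript
// Back-of-the-envelope estimation from a sampled count.
const sampledCount = 12400; // hypothetical number of 200s actually ingested
const rate = 5;             // the rate the events were sampled at
console.log(sampledCount * rate); // 62000 -- estimated original number of 200s
```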
For all the examples below, let's assume you have LogStream running with at least one pipeline that handles the traffic of interest, and that this pipeline is called `main`.
Scenario 1: Sample all `access_combined` events at 5:1.

While in the `main` pipeline, click Add Function and select Sampling. In Filter, enter `sourcetype=='access_combined'`, and in the Sampling Rules section enter `true` on the left (Filter) and `5` as the Sampling Rate.
Scenario 2: Let's make it a bit more complex. In all `access_combined` events, sample events with `status` 3xx at 2:1, those with `status` 200 at 5:1, and those with `status` 200 from local hosts at 8:1.
First, let's extract an internal Cribl field. While in the `main` pipeline, click Add Function and select Regex Extract. In Filter, enter `sourcetype=='access_combined'`, and in the Regex section, enter `/"\s(?<__status>[1-5][0-9]{2})/`. Set Source to `_raw`, which means the extraction applies to the raw event content. This function will extract a field and make it available to other downstream functions.
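To sanity-check the regex, here's what it captures from a typical `access_combined` line (the log line itself is made up):

```javascript
// The regex matches the quote closing the request string, the space after
// it, and captures the 3-digit status code that follows.
const line = '127.0.0.1 - frank [10/Oct/2018:13:55:36 -0700] ' +
  '"GET /index.html HTTP/1.1" 200 2326';
const re = /"\s(?<__status>[1-5][0-9]{2})/;
console.log(line.match(re).groups.__status); // "200"
```

Note that the captured value is a string, which is why the rules below can call `.startsWith('3')` on it, and why the comparison against 200 uses loose equality (`==`) rather than `===`.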
Now, let's click Add Function and select Sampling. In Filter, enter `sourcetype=='access_combined'`, and in the Sampling Rules section, enter the following:

| Filter | Rate |
|---|---|
| `__status == 200 && host.endsWith('.local')` | 8 |
| `__status == 200` | 5 |
| `__status.startsWith('3')` | 2 |
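Order matters here if rules are evaluated top-down with the first match winning (which the most-specific-first ordering above assumes): if the bare `__status == 200` rule came first, events from `.local` hosts would never reach the 8:1 rule. A small sketch of how these rules classify events (hypothetical events, not Cribl code):

```javascript
const rules = [
  { filter: (e) => e.__status == 200 && e.host.endsWith('.local'), rate: 8 },
  { filter: (e) => e.__status == 200, rate: 5 },
  { filter: (e) => e.__status.startsWith('3'), rate: 2 },
];
// First matching rule determines the rate; unmatched events are kept in full.
const rateFor = (e) => (rules.find((r) => r.filter(e)) || { rate: 1 }).rate;

console.log(rateFor({ __status: '200', host: 'db01.local' })); // 8
console.log(rateFor({ __status: '200', host: 'web01.corp' })); // 5
console.log(rateFor({ __status: '302', host: 'web01.corp' })); // 2
console.log(rateFor({ __status: '500', host: 'web01.corp' })); // 1 (kept in full)
```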
Scenario 3: Sample events with action `ACCEPT` in `aws:cloudwatchlogs:vpcflow` at 6:1.

Similar to Scenario 2, first extract a field called `__action`. Then, while still in the `main` pipeline, click Add Function and select Sampling. In Filter, enter `sourcetype=='aws:cloudwatchlogs:vpcflow'`, and in the Sampling Rules section enter `__action=='ACCEPT'` on the left (Filter) and `6` as the Sampling Rate.
We suggest you run a few quick searches to determine the best candidates for sampling. First, identify a source/sourcetype of interest and, optionally, `count by` another field. For example:

`sourcetype=access_combined | stats count by status`

`sourcetype=aws:cloudwatchlogs:vpcflow | stats count by action`

Ideal sources are those where `count by` results span one or more orders of magnitude. Potential candidates for sampling:
| Web Access Type Logs | Firewall Type Logs |
|---|---|
| Apache Access | Cisco ASA |
| AWS ELB/ALB | Palo Alto Firewall |
| Amazon CloudFront | Amazon VPC Flow Logs |
| Amazon S3 Server Access | Juniper Firewalls |
“Sample data beats no data 5:1” — Winston Churchill 🙂
Sampling helps you draw statistically meaningful conclusions from a subset of high volume/low value data without linearly scaling your storage and compute. Smart, conditional sampling allows you to ingest all high-value events at full fidelity, so that troubleshooting is done with high-fidelity data.
If you are excited about what we're doing, please join us in Slack #cribl, tweet at us @cribl_io, or contact us via hello@cribl.io. We'd love to hear your stories!