Talk to an Expert >Our Criblpedia glossary pages provide explanations to technical and industry-specific terms, offering valuable high-level introduction to these concepts.
Not all data is of equal value. Tiering data based on relevance is a useful way to classify logs, making it easier to locate and retrieve specific or timely data.
Some data is frequently required and should be quickly available. Other data may only be retained for compliance purposes, is seldom accessed (if ever), and can be treated as archival. Everything else falls somewhere between these two points.
There may also be a cost component to how you access and store your data. If that is the case, then tiering lets you prioritize which information receives faster (more expensive) processing and which goes into cold storage at a much lower cost.
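To make that trade-off concrete, here is a back-of-the-envelope comparison. The per-GB prices and the 20/80 split are placeholder assumptions for illustration, not vendor pricing or a recommendation:

```python
# Back-of-the-envelope cost comparison; all prices per GB-month are assumed
# placeholders, not actual vendor pricing.
volume_gb = 10_000                 # assumed monthly retained log volume
hot_price_per_gb = 0.25            # assumed "fast/searchable" tier price
cold_price_per_gb = 0.01           # assumed archival tier price

all_hot = volume_gb * hot_price_per_gb
tiered = (0.2 * volume_gb) * hot_price_per_gb + (0.8 * volume_gb) * cold_price_per_gb

print(f"All hot: ${all_hot:,.0f}/month")   # $2,500/month
print(f"Tiered:  ${tiered:,.0f}/month")    # $580/month (20% hot, 80% cold)
```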
Different organizations and vendors may use different terminology for the tiers, such as critical, real-time, priority, and, at the other end, archival, historical, or even cold storage. Regardless of the terms, the idea is to separate different types of data based on criticality, time sensitivity, frequency of access, or a combination thereof.
Here’s a breakdown of what a tiered logging strategy might look like (a short code sketch follows the list):
Tier 1 – Critical Logs
Logs that are crucial for real-time monitoring, alerting, and incident response. These logs often relate to critical system errors, security breaches, or service failures, and require immediate access.
Tier 2 – Operational Logs
Logs that provide insights into the daily operations of the system, such as user activities, system events, or API calls. These require regular access, but not usually at the priority of critical logs.
Tier 3 – Audit and Compliance Logs
Logs that track changes and access patterns, especially important for regulatory compliance, security audits, or forensic analysis.
Tier 4 – Archival Logs
Older logs that might not be immediately necessary but are kept for historical analysis, long-term trends, or backup purposes.
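As a rough illustration of how these four tiers might be applied in practice, the sketch below tags incoming events with a tier label before routing. The field names, source lists, and matching rules are assumptions made for the example, not any particular product's schema:

```python
# Hypothetical tier-tagging step: field names, source lists, and matching
# rules are illustrative assumptions, not a specific product's schema.

CRITICAL_SOURCES = {"auth", "ids", "payments"}      # assumed security/critical feeds
AUDIT_SOURCES = {"audit", "iam", "config-change"}   # assumed compliance feeds

def assign_tier(event: dict) -> str:
    """Return a tier label for a single log event."""
    severity = event.get("severity", "info").lower()
    source = event.get("source", "").lower()
    age_days = event.get("age_days", 0)

    if severity in {"critical", "error"} or source in CRITICAL_SOURCES:
        return "tier1-critical"        # real-time monitoring and alerting
    if source in AUDIT_SOURCES:
        return "tier3-audit"           # regulatory / forensic retention
    if age_days > 90:
        return "tier4-archive"         # historical analysis, cheap storage
    return "tier2-operational"         # day-to-day events, API calls, user activity

# Example: tag events so a downstream router can pick a destination per tier
events = [
    {"source": "ids", "severity": "critical"},
    {"source": "api-gateway", "severity": "info"},
    {"source": "audit", "severity": "info"},
]
for e in events:
    e["tier"] = assign_tier(e)
    print(e["tier"], e["source"])
```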
Implementing a tiered logging strategy requires understanding the operational, security, and business requirements of the organization and the data it collects. Because not all data is of equal value, classifying logs into tiers makes it easier to locate and retrieve specific, relevant, or timely information. By separating logs into tiers, you can prioritize important data for faster access and processing, while less important or infrequently accessed data can be ‘frozen’ at reduced cost in exchange for slower retrieval. User needs and data requirements vary greatly, so structuring data in tiers optimizes both access and cost. Proper tools and solutions, like log management systems or SIEMs, will help execute this strategy efficiently.
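One lightweight way to express the operational side of such a strategy is a per-tier policy table that a pipeline or lifecycle job can consult. In the sketch below, the retention windows, storage-class names, and tier labels are assumptions for illustration only, not recommendations:

```python
# Illustrative per-tier retention/storage policy. Retention windows, storage
# classes, and tier labels are assumptions for the sketch, not recommendations.
TIER_POLICY = {
    "tier1-critical":    {"retention_days": 30,   "storage": "hot",     "indexed": True},
    "tier2-operational": {"retention_days": 90,   "storage": "warm",    "indexed": True},
    "tier3-audit":       {"retention_days": 2555, "storage": "cold",    "indexed": False},  # ~7 years
    "tier4-archive":     {"retention_days": 365,  "storage": "archive", "indexed": False},
}

def destination_for(tier: str) -> str:
    """Pick a destination bucket/index name based on the tier's storage class."""
    policy = TIER_POLICY.get(tier, TIER_POLICY["tier2-operational"])
    return f"{policy['storage']}-store"  # e.g. "hot-store", "archive-store"

print(destination_for("tier1-critical"))   # hot-store
print(destination_for("tier4-archive"))    # archive-store
```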
The main goal of a tiered logging strategy is to optimize costs, manage data efficiently, and ensure that the right data is available and easily accessible as needed for various purposes, such as monitoring, debugging, security analysis, or compliance.