Our Criblpedia glossary pages explain technical and industry-specific terms, offering a valuable high-level introduction to these concepts.
Not all data is of equal value. Tiering data based on relevance is a useful way to classify logs, making it easier to locate and retrieve specific or timely data.
Some data is frequently required and should be quickly available. Other data may be retained only for compliance purposes, is seldom accessed (if ever), and can be treated as archival. Everything else falls somewhere between these two points.
There may also be a cost component to how you access and store your data. If so, tiering lets you prioritize which information receives faster (more expensive) processing and which goes into cold storage at vastly reduced cost.
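As a rough illustration of that tradeoff, the sketch below maps expected access frequency to a storage class. The thresholds and class names are hypothetical assumptions for illustration, not Cribl-specific settings.

```python
# Minimal sketch: pick a storage class from expected access frequency.
# Thresholds and class names are illustrative assumptions only.

def choose_storage_class(reads_per_month: int) -> str:
    """Return a storage class for data accessed this often."""
    if reads_per_month >= 100:
        return "hot"   # fastest, most expensive; real-time dashboards and alerting
    if reads_per_month >= 1:
        return "warm"  # slower, cheaper; occasional troubleshooting
    return "cold"      # archival; compliance data that is rarely, if ever, read


print(choose_storage_class(500))  # -> "hot"
print(choose_storage_class(0))    # -> "cold"
```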
Different organizations and vendors may use different terminology for the tiers, such as critical, real-time, or priority at one end, and archival, historical, or cold storage at the other. Regardless of the terms, the idea is to separate different types of data based on criticality, time sensitivity, frequency of access, or a combination thereof.
Here’s a breakdown of what a tiered logging strategy might look like:
Tier 1 – Critical Logs
Logs that are crucial for real-time monitoring, alerting, and incident response. These logs are often related to critical system errors, security breaches, or service failures, where immediate access is required.
Tier 2 – Operational Logs
Logs that provide insight into the daily operations of the system, such as user activities, system events, or API calls. They require regular access, but not usually at the priority of critical logs.
Tier 3 – Audit and Compliance Logs
Logs that track changes and access patterns, especially important for regulatory compliance, security audits, or forensic analysis.
Tier 4 – Archival Logs
Older logs that might not be immediately necessary but are kept for historical analysis, long-term trends, or backup purposes.
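To make the breakdown above concrete, here is a minimal sketch that assigns an incoming log event to one of the four tiers. The field names (`severity`, `category`, `age_days`) and the matching rules are assumptions for illustration; real classification would depend on your own sources and requirements.

```python
# Hypothetical classifier: map a log event to one of the four tiers above.
# Field names and rules are illustrative assumptions, not a standard schema.

def assign_tier(event: dict) -> int:
    severity = event.get("severity", "info")
    category = event.get("category", "application")

    if severity in ("critical", "error") or category == "security":
        return 1  # critical: system errors, security breaches, service failures
    if category in ("audit", "compliance"):
        return 3  # audit and compliance logs
    if event.get("age_days", 0) > 90:
        return 4  # archival: older logs kept for trends or backup
    return 2      # operational: user activity, system events, API calls


print(assign_tier({"severity": "critical", "category": "security"}))  # -> 1
print(assign_tier({"severity": "info", "category": "api"}))           # -> 2
```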
Implementing a tiered logging strategy requires understanding the operational, security, and business requirements of the organization and the data it collects. Separating logs into tiers lets you prioritize the most relevant and time-sensitive information for faster access and processing, while less important or infrequently accessed data can be 'frozen' at reduced cost, accepting longer retrieval times. Because user needs and data requirements vary greatly, structuring data in tiers optimizes both access and cost. Proper tools and solutions, such as log management systems or SIEMs, help execute this strategy efficiently.
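One way to express such a strategy is as a simple policy table that a pipeline consults when persisting events. The retention periods and destination names below are hypothetical examples chosen for illustration, not recommendations.

```python
# Sketch of a tier policy table: retention period and destination per tier.
# Retention values and destination names are illustrative assumptions.

TIER_POLICY = {
    1: {"retention_days": 30,   "destination": "siem-index"},        # critical
    2: {"retention_days": 90,   "destination": "log-analytics"},     # operational
    3: {"retention_days": 365,  "destination": "compliance-store"},  # audit/compliance
    4: {"retention_days": 2555, "destination": "object-archive"},    # archival (~7 years)
}


def route_event(event: dict, tier: int) -> dict:
    """Attach the tier's retention and destination before persisting the event."""
    policy = TIER_POLICY[tier]
    return {**event, "tier": tier, **policy}


print(route_event({"msg": "login failed"}, tier=1))
```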
The main goal of a tiered logging strategy is to optimize costs, manage data efficiently, and ensure that the right data is available and easily accessible as needed for various purposes, such as monitoring, debugging, security analysis, or compliance.