Our Criblpedia glossary pages provide explanations of technical and industry-specific terms, offering a valuable high-level introduction to these concepts.
Anomaly detection is the process of identifying exceptional events, items, or observations that deviate from typical behaviors or patterns. These anomalies are often referred to as deviations, outliers, noise, novelties, or exceptions.
In network anomaly detection and network intrusion detection, "interesting" events are not necessarily rare, but they do indicate unusual occurrences. For instance, a sudden surge in activity is not rare, yet it is still noteworthy. Traditional statistical anomaly detection methods might not flag these abrupt spikes in activity as outliers; in such cases, cluster analysis algorithms can be more effective at detecting these microclusters of data.
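To illustrate the limitation described above, here is a minimal sketch (with made-up sensor readings) of a traditional statistical detector that flags points by z-score. A single extreme spike is caught, but a sustained surge of moderately elevated values inflates the mean and standard deviation enough that no individual point crosses the threshold:

```python
from statistics import mean, stdev

def zscore_outliers(values, threshold=2.0):
    """Indices of values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# A single extreme spike is easy to flag (index 7 is returned)...
print(zscore_outliers([10, 11, 9, 10, 12, 10, 11, 95]))

# ...but a sustained surge (a "microcluster") raises the mean and standard
# deviation, so no individual point crosses the threshold and nothing is flagged.
print(zscore_outliers([10, 11, 9, 10, 30, 31, 30, 10, 11]))
```

A density- or cluster-based method, which looks at how points group together rather than at each point's distance from a global mean, would treat the run of 30s as one unusual cluster.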
Anomaly detection plays a crucial role in various domains, including cybersecurity, finance, and healthcare. It helps companies prevent fraudulent activities, detect network intrusions, and identify financial anomalies. By proactively detecting deviations from the norm, anomaly detection empowers businesses to mitigate risks, ensure data integrity, and make informed decisions.
Anomaly detection techniques can be categorized into three primary types: unsupervised, semi-supervised, and supervised. The choice of the appropriate method depends on the availability of labels in the dataset. Let’s break them down:
Supervised Anomaly Detection
This approach requires a dataset with a complete set of “normal” and “abnormal” labels for a classification algorithm to operate effectively. Training is a key aspect of anomaly detection, akin to conventional pattern recognition. However, this method must deal with a significant class imbalance. As a result, not all statistical classification algorithms are well-suited to address this inherent imbalance in the process.
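One common remedy for the class imbalance mentioned above is to reweight training examples inversely to their class frequency, so the few "abnormal" labels contribute as much to the training loss as the many "normal" ones. The following sketch (label counts are illustrative) computes such weights; this is the same scheme scikit-learn exposes as `class_weight="balanced"`:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency: weight = n / (k * count),
    where n is the number of samples and k the number of classes."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# 95 "normal" vs. 5 "abnormal" labels: the minority class gets ~19x the weight.
labels = ["normal"] * 95 + ["abnormal"] * 5
print(inverse_frequency_weights(labels))
```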
Semi-Supervised Anomaly Detection
Semi-supervised methods leverage a labeled training dataset representing normal behavior to create a model. This model is then employed to detect anomalies by assessing how likely the model is to generate any encountered instance.
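A minimal sketch of this idea, under the simplifying assumption that "normal" behavior follows a single Gaussian: fit the model to a labeled-normal training set, then flag any instance the model is very unlikely to generate (the training values and likelihood threshold are illustrative):

```python
import math
from statistics import mean, stdev

def fit_normal_model(normal_values):
    """Model 'normal' behavior as a Gaussian fit to labeled-normal training data."""
    return mean(normal_values), stdev(normal_values)

def log_likelihood(x, mu, sigma):
    """Gaussian log-density of observing x under the fitted normal-behavior model."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def is_anomaly(x, mu, sigma, min_log_lik=-5.0):
    """Flag instances the normal-behavior model is unlikely to generate."""
    return log_likelihood(x, mu, sigma) < min_log_lik

training = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 9.7, 10.0]  # all labeled normal
mu, sigma = fit_normal_model(training)
print(is_anomaly(10.1, mu, sigma))  # False: consistent with normal behavior
print(is_anomaly(14.0, mu, sigma))  # True: very unlikely under the model
```

Real systems typically use richer density models (mixtures, autoencoders), but the detection rule is the same: score each instance by how likely the normal-behavior model is to have generated it.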
Unsupervised Anomaly Detection
Unsupervised methods identify anomalies in an unlabeled test dataset solely based on the intrinsic properties of the data. The underlying assumption is that, in most cases, the majority of instances in the dataset are normal. Anomaly detection algorithms identify instances that show the least congruence with the rest of the dataset.
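As a concrete sketch of "least congruence with the rest of the dataset," one simple unsupervised scheme scores each point by its mean distance to its k nearest neighbors: points far from everything else score highest. The data here is illustrative and one-dimensional to keep the example short:

```python
def knn_anomaly_scores(points, k=2):
    """Score each point by the mean distance to its k nearest neighbors.
    Points that sit far from the bulk of the data receive the highest scores."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(abs(p - q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

data = [1.0, 1.1, 0.9, 1.2, 1.0, 8.0]  # unlabeled; one point stands apart
scores = knn_anomaly_scores(data)
print(max(range(len(data)), key=lambda i: scores[i]))  # index of the likely anomaly
```

Note the method never needs labels: it relies only on the assumption, stated above, that most instances are normal, so the anomalous point is the one with no close neighbors.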
The wide array of techniques caters to the diverse needs and challenges of anomaly detection. These techniques encompass generative and discriminative approaches and include clustering-based, density-based, and support vector machine-based methods. Selecting the most appropriate technique depends on the specific use case and characteristics of the dataset. Anomalies can be expressed in diverse forms, requiring customized approaches for detection and mitigation.
Anomaly detection plays a crucial role in observability by helping companies monitor and maintain the health and performance of their systems, applications, and infrastructure. Here are some key use cases in the context of observability: