With the goal of simplifying data processing for the most challenging formats, Cribl LogStream provides the first stream processing engine built for logs and metrics. LogStream gives users full control over their data in motion: identify wasted ingest, then look up, enrich, encrypt, transform, or sample the data before routing it to the most cost-effective destination.
LogStream 2.0 builds on the success of LogStream 1.0, released in August 2018. LogStream 2.0 lets users implement an observability pipeline that can process data at petabyte scale, adds a new user console for managing up to a thousand worker nodes, and introduces True Consumption pricing.
When it comes to monitoring, security, and observability, multiple systems are a reality for enterprises. LogStream lets administrators route their data to the most cost-effective destination while also making copies for other destinations.
Logs and metrics often contain sensitive information. LogStream provides a rich, intuitive experience for masking and encrypting sensitive fields, with role-based decryption.
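To make the idea concrete, here is a minimal Python sketch of ingestion-time masking. This is not LogStream's actual syntax or API; the card-number pattern and hash suffix are illustrative assumptions. The short hash lets masked values still be correlated across events without exposing the original.

```python
import hashlib
import re

# Assumption: events arrive as raw strings; we mask anything that
# looks like a 16-digit card number before the event leaves the pipeline.
CARD_PATTERN = re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")

def mask_event(event: str) -> str:
    """Replace card-like numbers with a masked token plus a short hash."""
    def _mask(match: re.Match) -> str:
        digest = hashlib.sha256(match.group(0).encode()).hexdigest()[:8]
        return f"####-MASKED-{digest}"
    return CARD_PATTERN.sub(_mask, event)

print(mask_event("user paid with card 4111-1111-1111-1111 today"))
```

A real deployment would pair this with reversible encryption so that authorized roles can decrypt the original value; the one-way hash here only supports correlation, not recovery.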
Enrichment lets an organization add context to its streaming data. LogStream supports ingestion-time enrichment from DNS, threat intel lists, AWS, your CMDB, and more.
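The core of ingestion-time enrichment is a lookup join against a reference table. The sketch below is a generic Python illustration, not LogStream code; the threat-intel table, the `src_ip` field name, and the IP addresses (from the documentation range) are assumptions for the example.

```python
# Assumption: a small in-memory threat-intel lookup table,
# keyed by source IP (documentation-range addresses used here).
THREAT_INTEL = {"203.0.113.9": "known-scanner"}

def enrich(event: dict, table: dict) -> dict:
    """Return a copy of the event with a threat tag added when the
    source IP appears in the lookup table."""
    enriched = dict(event)
    tag = table.get(event.get("src_ip", ""))
    if tag:
        enriched["threat"] = tag
    return enriched

print(enrich({"src_ip": "203.0.113.9", "msg": "login failed"}, THREAT_INTEL))
```

The same pattern extends to DNS reverse lookups or CMDB joins: resolve the key at ingest time so downstream systems receive events with the context already attached.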
Log data comes in many formats and structures. LogStream lets you easily parse any format, choose which fields are valuable, and send structured data back to your logging system, or to any system that expects JSON, CSV, or another structured format.
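As a rough illustration of that parse-and-reshape step (again in plain Python, not LogStream's configuration language), the snippet below parses a key=value log line, then emits the same event as both JSON and CSV. The sample log line and field names are made up for the example.

```python
import csv
import io
import json

def parse_kv(line: str) -> dict:
    """Parse a space-separated key=value log line into a dict."""
    return dict(pair.split("=", 1) for pair in line.split())

# Hypothetical access-log event in key=value form.
event = parse_kv("status=200 method=GET path=/health bytes=512")

# Same structured event, rendered for a JSON consumer...
print(json.dumps(event))

# ...and for a CSV consumer.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(event.keys()))
writer.writeheader()
writer.writerow(event)
print(buf.getvalue())
```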
In our experience, half of the machine data organizations collect is waste. LogStream lets administrators identify verbose events and easily transform them, stripping out unwanted fields and meaningless information.
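Two common ways to cut that waste are dropping low-value fields and sampling high-volume event streams. The sketch below shows both in generic Python; the "noisy" field names and the sampling rate are assumptions, not LogStream defaults.

```python
import random

# Assumption: fields an administrator has flagged as waste.
NOISY_FIELDS = {"debug_blob", "internal_trace"}

def slim(event: dict) -> dict:
    """Drop fields that add volume but no analytical value."""
    return {k: v for k, v in event.items() if k not in NOISY_FIELDS}

def sample(events: list, rate: float, seed: int = 0) -> list:
    """Keep roughly `rate` (0.0-1.0) of events; seeded for repeatability."""
    rng = random.Random(seed)
    return [e for e in events if rng.random() < rate]
```

In practice you would sample only the verbose, repetitive event classes (for example, successful health checks) while keeping errors and security-relevant events at full fidelity.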
LogStream 2.0 delivers a new user console that manages up to a thousand worker nodes.
You generate orders of magnitude more data today than you did just a few years ago. Unfortunately, much of this data (logs, metrics, and traces) is wasted ingest. LogStream 2.0 is tested at petabyte-per-day scale, preparing you for the future of observability.
Processing less than 100 GB per day is free. Once you see the value and want to process more, let's talk. The free plan is single-node and community supported.