Terry Mulligan is a Splunk consultant with Discovered Intelligence, a data intelligence services and solutions provider that specializes in data observability and security platforms (and he's Notre Dame's biggest fan). He shares what Cribl has brought to the table for his organization and his clients, and how it has changed their processes and the role of the Splunk data engineer.
As a Splunk data engineer at Discovered Intelligence, he's responsible for onboarding Splunk sources for their internal customers and stakeholders. This could be for new data sources, or for a proof of concept where a customer needs the logs in Splunk and wants to understand the costs involved.
Once a request is received, they define the data requirements and collection methods, conduct a viability review, and begin the ingestion process.
Before the arrival of Cribl, data onboarding began with a customer submitting a request. The customer provided information about sizing, locations, and data type in a quick Teams meeting; after a review process, the team determined the viability of the data. This was more of a value test than a technical viability test: did the value of the data exceed the initial and ongoing ingestion costs? To put it simply, was it license-worthy?
Typically, they rejected two types of requests. The first was cases where the 5% of data that was actually interesting wasn't valuable enough to justify the cost of ingesting the other 95%. The second was bespoke data requests for sources that couldn't be ingested using standard tools like a universal forwarder, HEC, or a TA; development costs pushed those requests into the non-viable category.
Accepted requests were completed and all but forgotten until the customer called to say their data wasn't easily searchable and needed to be cleaned up. Then the team had to deliver the bad news: there wasn't much they could do to simplify the data or make it more useful.
Once they added Cribl Stream to their arsenal, everything changed. Now they have a tool with the power to drop the bloat, ingest only the interesting data, process API data in hours instead of days, and decouple source from destination for multi-destination routing.
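To make the idea of decoupling source from destination concrete, here is a minimal sketch in plain Python (not actual Cribl Stream configuration; the field names, severity threshold, and destination names are invented for illustration): events are reduced once, then fanned out to whichever destinations should receive them.

```python
# Conceptual reduce-then-route sketch. Field names, the severity
# threshold, and destination names are hypothetical illustrations.
from typing import Iterable, Optional

def reduce_event(event: dict) -> Optional[dict]:
    """Drop noisy events entirely and shed bulky, low-value fields."""
    if event.get("level") == "DEBUG":
        return None                         # drop the bloat
    event.pop("raw_headers", None)          # field nobody searches on
    return event

def route(event: dict) -> list[str]:
    """One source, many destinations: choose targets per event."""
    targets = ["low_cost_store"]            # cheap storage gets everything
    if event.get("severity", 0) >= 7:       # only high-value events go to Splunk
        targets.append("splunk")
    return targets

def process(events: Iterable[dict]) -> dict[str, list[dict]]:
    """Reduce each event, then fan it out to its destinations."""
    routed: dict[str, list[dict]] = {"low_cost_store": [], "splunk": []}
    for raw in events:
        event = reduce_event(raw)
        if event is not None:
            for destination in route(event):
                routed[destination].append(event)
    return routed

sample = [
    {"level": "DEBUG", "severity": 2, "msg": "cache miss"},
    {"level": "INFO", "severity": 8, "msg": "failed login", "raw_headers": "..."},
]
out = process(sample)
print(len(out["low_cost_store"]), len(out["splunk"]))  # -> 1 1
```

The design point is that the expensive, license-metered destination sees only the slice of data that earns its keep, while everything else still lands somewhere cheap.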
Since they adopted Cribl Stream, the Splunk data engineering role has become much more challenging and interesting. Conversations with customers have changed: they're no longer quick calls but in-depth discussions that drill into customer needs, goals, and use cases. The team has gone from, “Sorry, we won't be able to onboard your new data source” to “Yes, let's see what we can do.”
At the start of the migration, Cribl was a novelty for the team. It began as an afterthought but became the default ingestion point for all new data sources. The ease of ingestion, reduction, and transformation, combined with the ability to deliver to multiple platforms, means Cribl Stream can add value to all data. For that reason, they no longer send any data directly to Splunk.
Creating and tweaking pipelines can be an intoxicating rabbit hole. The questions they deal with now are more along the lines of: How much time do we spend Cribling data? How much reduction or transformation do we apply? When is the pipeline good enough? These are much nicer problems to have than the ones that preceded their adoption of Cribl.
In the beginning, they focused on the Cribl Stream features that would deliver immediate results and let the product shine. They targeted data sources where they knew they could achieve significant reductions, and they reduced their license consumption by 3 TB. For hardware reduction, they migrated their syslog data sources from syslog-ng to Cribl and eliminated 14 servers. For data viability, they used the REST collector to ingest data sources that had previously been non-viable.
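For context, a REST-based collection job boils down to paging through an HTTP API and emitting the results as events. The sketch below illustrates that idea in Python; the endpoint path, bearer-token auth, and cursor-style pagination are assumptions made for the example, not the behavior of any specific API or of Cribl's REST collector.

```python
# Hypothetical REST log collection: the URL, auth scheme, and
# pagination shape are illustrative assumptions, not a vendor contract.
import requests

def collect_logs(base_url: str, token: str) -> list[dict]:
    """Page through a REST API and accumulate log events."""
    events: list[dict] = []
    url = f"{base_url}/v1/logs"
    headers = {"Authorization": f"Bearer {token}"}
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        events.extend(body.get("items", []))
        url = body.get("next")  # assumed cursor-style pagination link
    return events
```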
These successes provided great talking points for senior leadership and reinforced the decision to purchase Cribl. From there, they moved on to backlog items that had previously been impossible to resolve, primarily sources with GDPR concerns that the flexibility of Cribl's masking feature now made feasible. They've also rerouted existing inputs through Cribl instead of the Splunk gateways.
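Masking like this is, at its core, pattern substitution over the raw event, which is the same idea behind Cribl Stream's Mask function. Here is a hedged sketch of that idea in Python; the two patterns are illustrative examples only, not a complete GDPR ruleset.

```python
import re

# Pattern -> replacement pairs; a real deployment would derive these
# from a compliance review, not from this illustrative list.
MASK_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "XXX-XX-XXXX"),             # SSN-style IDs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<redacted-email>"),  # email addresses
]

def mask(text: str) -> str:
    """Apply every masking rule to a raw event string."""
    for pattern, replacement in MASK_RULES:
        text = pattern.sub(replacement, text)
    return text

print(mask("user=jane.doe@example.com ssn=123-45-6789 action=login"))
# -> user=<redacted-email> ssn=XXX-XX-XXXX action=login
```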
One of the more interesting successes involved ingesting AppOmni data. When they first onboarded this source and put it through Cribl Stream, it ran at roughly 200 GB/day. It stayed that way for a few months, until the AppOmni team expanded their footprint and the volume jumped to 1 TB overnight.
This was a huge licensing issue, but they were easily able to reroute all the data to Google SecOps, a destination without license restrictions. From there, they sent only a very small subset of the data to Splunk, leaving Cribl to save the day yet again.
After having the pleasure of working with Cribl Stream for some time, Terry has plenty of tips for finding success quickly. Watch his full presentation from CriblCon 2023 for those tips and for more on the journey to modernizing data engineering with Cribl.