Talk to an Expert ›In this conversation, Cribl’s Carley Rosato talks to Aflac’s Shawn Cannon about his role as a Threat Management Consultant, and how he manages their SIEM environment, brings in new data as needed, and works to improve the ingestion process.
Our customers are always coming up with new and exciting ways to implement Cribl tools — importing a 34 million-row CSV file into Redis and enriching events might be one of the most impressive we’ve seen so far. Adding a file of that size into a Splunk search is borderline impossible, so the team at Aflac initially tried to use a combination of Cribl and a Mongo database. The annual costs were too high to justify this single-use case, so they instead deployed Redis for the additional processing capability.
There are a few different options for enriching events as they stream through the Cribl platform. The first is local lookups: CSV files loaded onto your Stream leader and updated manually or via script. CSV lookups are ideal for smaller files or for data that doesn't change often; stable data needs less maintenance and upkeep, while larger files demand more of your infrastructure's resources since they're processed locally.
If your data feed changes regularly or is larger than a few million rows, we recommend Redis as a way to host the lookup file. Shawn and his team did exactly this: they created a Redis ElastiCache instance in AWS, populated it with the lookup file data, and then used the Redis function in Cribl Stream to add the needed fields.
To create the Redis cache, they needed to know the instance size. According to the Redis docs, one million small keys use about 85 MB of memory, so their 34 million string keys required roughly 2.8 GB. They chose the smallest node in the r6g ElastiCache family, cache.r6g.large (2 vCPUs and 13 GB of memory), leaving plenty of room for growth.
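The sizing math above works out as a quick back-of-the-envelope calculation (the 85 MB-per-million figure is from the Redis docs; the key count is from the Aflac example):

```python
# Back-of-the-envelope Redis memory sizing.
MB_PER_MILLION_KEYS = 85        # approx. figure from the Redis docs for small keys
key_count = 34_000_000          # rows in the CSV lookup

estimated_mb = key_count / 1_000_000 * MB_PER_MILLION_KEYS
estimated_gb = estimated_mb / 1024

print(f"~{estimated_gb:.1f} GB needed")   # ~2.8 GB needed
# cache.r6g.large offers 13 GB, leaving ample headroom for growth.
```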
With the CSV file available as a backup, a single shard with three nodes (one primary and two replicas) made more sense than a full Redis cluster. To get the data into Redis, a large CSV file is exported every morning from a Splunk search, and a scheduled cron job then ingests it into Redis.
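The exact command Shawn's cron job runs isn't shown here, but one common pattern for bulk-loading a CSV into Redis is to generate one SET command per row and pipe the output through `redis-cli --pipe`. A minimal sketch, with illustrative column names (`account_number`, `customer_id`) standing in for the real fields:

```python
import csv
import io

def csv_to_redis_commands(csv_text, key_col, val_col):
    """Yield one Redis SET command per CSV row, suitable for redis-cli --pipe."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        yield f"SET {row[key_col]} {row[val_col]}"

# Hypothetical two-column lookup file.
sample = "account_number,customer_id\n1001,CUST-42\n1002,CUST-77\n"
for cmd in csv_to_redis_commands(sample, "account_number", "customer_id"):
    print(cmd)
# SET 1001 CUST-42
# SET 1002 CUST-77
```

Piping the generated commands into `redis-cli --pipe` loads them in bulk rather than one round trip per row; for values containing spaces or special characters, the raw Redis protocol form described in the Redis mass-insertion docs is the safer choice.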
If you're not comfortable scripting a solution on your own, you can also write the data to Redis using the inline functions available by default in Cribl Stream. Almost all Redis commands are supported out of the box, covering most read and write operations.
Shawn uses a Redis function and GET commands to pull the data from Redis. Each event passes through a pipeline that calls the Redis function, which takes the account number field from the data and matches it against a key in Redis. Redis then returns the customer ID field they need.
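In Cribl Stream this lookup is configured in the Redis function's UI rather than written as code, but the logic amounts to one keyed GET per event. A sketch of that step, with a dict standing in for Redis and illustrative field names:

```python
# A dict stands in for the Redis instance in this sketch.
redis_lookup = {"1001": "CUST-42", "1002": "CUST-77"}

def enrich(event, lookup):
    """Match the event's account number to a key and attach the customer ID."""
    customer_id = lookup.get(event.get("account_number"))
    if customer_id is not None:
        event["customer_id"] = customer_id
    return event

event = enrich({"account_number": "1001", "msg": "login"}, redis_lookup)
print(event["customer_id"])   # CUST-42
```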
Remember that Redis is a database in memory, so it should be fast — but how fast depends on the location of your Redis and Cribl workers. If everything is in AWS, speed shouldn’t be an issue, but latency may come into play if you have this running from a Cribl worker in a different location.
The pipeline profiler in the Cribl Stream UI shows how long each function takes to process an event. When Shawn and his team tested the profiler, they didn't see any bottlenecks in the system. Since their Cribl workers run in a different data center, they did see an expected 3-4 seconds of latency.
Redis can be intimidating at first, but knowing the best ways to deploy it alongside Cribl Stream makes the process much easier. When you're getting set up, you have different deployment options depending on the complexity of your environment: you can run a standalone instance, a cluster, or Sentinel mode, which keeps a backup copy of each record on replica nodes. Start simple and add more functionality later; you can build on your standalone environment once you're ready to move some of those use cases to production.
After you choose your deployment type, decide on the best location to run your Redis instance. Note the location of the Redis deployment in relation to the worker group retrieving the data for enrichment. This will determine how long it’ll take to traverse the network to collect the information for enrichment.
A few advanced settings can be tweaked within Cribl Stream to enhance the functionality and reliability of Redis and its interactions with the platform. There’s a drop-down within the Redis function that lists a default series of commands available for Redis — but you can manually input commands into this box as well. You’re not confined to the ones on the list.
You can also tune caching and timeouts. Caching improves performance by minimizing the number of calls to Redis: once a key has been retrieved, it's stored in memory so subsequent events don't need another round trip to the Redis environment.
Timeouts help ensure that data isn't halted or missed if Redis is unavailable. If Stream doesn't receive a response from Redis within the time you set, it continues processing the events through the rest of the pipeline and sends them downstream. Be mindful of the value you choose, though: Cribl Stream will wait that entire period before moving events along, so a long timeout can add noticeable latency while it waits on Redis.
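The interaction of those two settings, a local cache that cuts round trips and a graceful fallback when Redis doesn't answer, can be sketched like this. The TTL value and the `fetch` callable are placeholders; in a real deployment the timeout is set on the Redis client connection itself:

```python
import time

cache = {}           # key -> (value, stored_at)
CACHE_TTL = 300.0    # seconds to keep a key in memory (placeholder value)

def cached_lookup(key, fetch):
    """Serve from the local cache when possible; fall through gracefully on failure."""
    hit = cache.get(key)
    if hit is not None and time.monotonic() - hit[1] < CACHE_TTL:
        return hit[0]              # cache hit: no round trip to Redis at all
    try:
        value = fetch(key)         # stand-in for the Redis GET; a real client would
    except Exception:              # enforce its timeout on the connection
        return None                # Redis unreachable: the event continues unenriched
    cache[key] = (value, time.monotonic())
    return value

fake_redis = {"1001": "CUST-42"}
print(cached_lookup("1001", fake_redis.__getitem__))   # CUST-42 (fetched, then cached)
print(cached_lookup("1001", fake_redis.__getitem__))   # CUST-42 (served from cache)
print(cached_lookup("9999", fake_redis.__getitem__))   # None (miss handled gracefully)
```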
Watch the full conversation for a more detailed look into how to set up and use a Redis cache along with Cribl Stream.
A few resources can help you get up and running on your Redis journey, and you can also test things in our Enrichment sandbox or the Lookups Three Ways sandbox!