In this conversation, Cribl’s Carley Rosato talks to Aflac’s Shawn Cannon about his role as a Threat Management Consultant, and how he manages their SIEM environment, brings in new data as needed, and works to improve the ingestion process.
Our customers are always coming up with new and exciting ways to implement Cribl tools, and importing a 34 million-row CSV file into Redis to enrich events might be one of the most impressive we’ve seen so far. Using a lookup file of that size in a Splunk search is borderline impossible, so the team at Aflac initially tried a combination of Cribl and a Mongo database. The annual costs were too high to justify a single use case, so they instead deployed Redis for the additional processing capability.
There are a few different options for enriching events as they’re streamed through the Cribl platform. The first is using local lookups or CSV files loaded onto your Stream leader, which can be updated manually or via script. CSV lookups are ideal for smaller files or for data that doesn’t change regularly: stable data requires less maintenance and upkeep, while larger files demand more from your infrastructure since they’re processed locally.
If your data feed changes regularly or is larger than a few million rows, we recommend hosting the lookup in Redis instead. Shawn and his team did exactly this: they created a Redis ElastiCache instance in AWS, populated it with the lookup file data, and then used the Redis function in Cribl Stream to add the needed fields.
To create the Redis cache, they needed to know the instance size. According to the Redis docs, one million small keys use about 85 MB of memory, so their 34 million string keys required roughly 2.8 GB. They used the smallest node size in the r6g family, cache.r6g.large, which has 2 vCPUs and 13 GB of memory, leaving plenty of room for growth.
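If you want to sanity-check that sizing yourself, the math is simple enough to script. This is a back-of-the-envelope estimate only, using the per-million-keys figure quoted above:

```python
# Rough memory estimate for the Redis cache, based on the Redis docs'
# figure of ~85 MB per one million small string keys.
keys = 34_000_000
mb_per_million = 85
estimate_mb = keys / 1_000_000 * mb_per_million
print(f"~{estimate_mb:.0f} MB, or about {estimate_mb / 1024:.1f} GB")  # ~2890 MB, about 2.8 GB
```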
With the CSV file available as a backup, one shard with three nodes (one primary and two replicas) made more sense than setting up a full Redis cluster. To get the data into Redis, a large CSV file is generated every morning from a Splunk search, and a scheduled cron job then ingests it into Redis. Here’s what that process looks like.
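The exact command Aflac runs isn’t reproduced here, but a minimal sketch of that nightly load, assuming a two-column CSV with hypothetical account_number and customer_id headers and the redis-py client, might look something like this:

```python
#!/usr/bin/env python3
"""Nightly cron job: load the Splunk-exported CSV into Redis as key/value pairs."""
import csv

import redis

# Assumed endpoint and path; substitute your own ElastiCache endpoint and export location.
r = redis.Redis(host="my-redis.example.cache.amazonaws.com", port=6379)
CSV_PATH = "/data/exports/customer_lookup.csv"
BATCH = 10_000  # write in pipelined batches to keep network round trips down

with open(CSV_PATH, newline="") as f:
    reader = csv.DictReader(f)  # expects a header row: account_number,customer_id
    pipe = r.pipeline(transaction=False)
    for i, row in enumerate(reader, start=1):
        pipe.set(row["account_number"], row["customer_id"])
        if i % BATCH == 0:
            pipe.execute()  # flush the current batch
    pipe.execute()  # flush any remaining keys
```

Pipelining the SET calls matters at this scale; 34 million individual round trips would turn a quick load into a long one.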
* If you’re not comfortable scripting solutions on your own, you can also write the data to Redis using inline functions, available by default in Cribl Stream. We support almost all Redis commands out of the box, and most read and write capabilities are covered.
Shawn uses a Redis function and GET commands to pull the data from Redis. Events pass through a pipeline that calls the Redis function, which takes the account number field in the data and matches it against the keys in Redis. Redis then returns the customer ID field they need.
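Inside Stream this is just configuration on the Redis function, but conceptually each event triggers a single GET. A rough client-side equivalent, again using the hypothetical account_number and customer_id field names, looks like this:

```python
import redis

r = redis.Redis(host="my-redis.example.cache.amazonaws.com", port=6379, decode_responses=True)

def enrich(event: dict) -> dict:
    """Match the event's account number against Redis keys and attach the customer ID."""
    customer_id = r.get(event.get("account_number", ""))
    if customer_id is not None:
        event["customer_id"] = customer_id
    return event

print(enrich({"account_number": "1234567890"}))
```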
Remember that Redis is an in-memory database, so it should be fast, but how fast depends on where your Redis instance and your Cribl workers are located. If everything is in AWS, speed shouldn’t be an issue, but latency can come into play if the lookup runs from a Cribl worker in a different location.
The pipeline profiler in the Cribl Stream UI lets you see how long each function takes to process an event. When Shawn and his team ran the profiler, they didn’t see any bottlenecks in the system. Since their Cribl workers sit in a different data center, they did see an expected 3-4 seconds of latency.
Redis can be intimidating at first, but knowing the best ways to deploy it alongside Cribl Stream makes the process that much easier. When you’re getting set up, you have different deployment options depending on the complexity of your environment: you can run a standalone instance, a cluster, or Sentinel mode, which keeps a backup copy of each record on a replica. Start simple and add more functionality later; you can build on your standalone environment once you’re ready to move some of those use cases to production.
After you choose your deployment type, decide on the best location to run your Redis instance. Note where the Redis deployment sits in relation to the worker group retrieving the data for enrichment, since that distance determines how long each lookup takes to traverse the network.
A few advanced settings can be tweaked within Cribl Stream to enhance the functionality and reliability of Redis and its interactions with the platform. There’s a drop-down within the Redis function that lists a default series of commands available for Redis — but you can manually input commands into this box as well. You’re not confined to the ones on the list.
You can also tune caching and timeouts. Caching can improve performance by minimizing the number of calls to Redis: keys are kept in the worker’s memory, so fewer lookups have to go out to the Redis environment.
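Stream handles that caching for you, but the idea is the same as memoizing any Redis lookup on the client side. A rough illustration, not Stream’s implementation:

```python
from functools import lru_cache

import redis

r = redis.Redis(host="my-redis.example.cache.amazonaws.com", port=6379, decode_responses=True)

@lru_cache(maxsize=100_000)  # recently seen account numbers stay in worker memory
def cached_lookup(account_number: str):
    return r.get(account_number)  # only cache misses reach Redis
```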
Timeouts help ensure that data isn’t halted or lost if Redis is unavailable. If Stream doesn’t receive a response from Redis, it waits for the time you set, then continues processing the events through the rest of the pipeline and sends them downstream. Be mindful of the value you choose, because Cribl Stream will wait out that entire period before moving events along, so a long timeout can add latency while it waits on Redis.
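As a client-side illustration of that behavior (again, not Stream’s internals), the equivalent is a short socket timeout with a pass-through fallback so events keep moving even when Redis doesn’t answer:

```python
import redis

# Short timeouts so a slow or unreachable Redis can't stall the pipeline indefinitely.
r = redis.Redis(
    host="my-redis.example.cache.amazonaws.com",
    port=6379,
    socket_timeout=0.5,
    socket_connect_timeout=0.5,
    decode_responses=True,
)

def enrich(event: dict) -> dict:
    try:
        customer_id = r.get(event.get("account_number", ""))
        if customer_id is not None:
            event["customer_id"] = customer_id
    except redis.exceptions.RedisError:
        pass  # lookup skipped; the event still flows downstream unenriched
    return event
```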
Watch the full conversation for a more detailed look into how to set up and use a Redis cache along with Cribl Stream.
Here are a few resources to help you get up and running on your Redis journey:
You can also test things in our Enrichment sandbox or the Lookups Three Ways sandbox!
Cribl, the Data Engine for IT and Security, empowers organizations to transform their data strategy. Customers use Cribl’s suite of products to collect, process, route, and analyze all IT and security data, delivering the flexibility, choice, and control required to adapt to their ever-changing needs.
We offer free training, certifications, and a free tier across our products. Our community Slack features Cribl engineers, partners, and customers who can answer your questions as you get started and continue to build and evolve. We also offer a variety of hands-on Sandboxes for those interested in how companies globally leverage our products for their data challenges.
Experience a full version of Cribl Stream and Cribl Edge in the cloud with pre-made sources and destinations.