April 22, 2021
In part one of this blog post, I covered the concept, basic design, and results of using Redis to enrich VPC Flow Logs with security classification data from the GreyNoise API. In this post, I'm covering the details of how to do it. These steps will get you going if you want to try it, but keep in mind you'll need your own GreyNoise API key to run it.
Working from the design diagram from the first post:
The approach is two-fold – one set of components to keep the enrichment data up to date in our Redis instance, and one that uses that data to enrich our VPC flow logs. We’ll start with the enrichment data portion.
The actual pipeline for this consists of a single function, the Redis function. The Redis function is configured to execute a Redis SET command, setting the key gn_ip_${ip} to the value of the classification field. Both of these fields are present in the GreyNoise data that goes through the pipeline. The gn_ip_ prefix is really just a poor man's approach to creating a namespace within Redis – any keys starting with gn_ip_ are GreyNoise IP addresses, and their values are the classifications that GreyNoise defined for them.
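For illustration, here's a minimal Python sketch of what that Redis write amounts to, assuming a local Redis instance and the redis-py client; the ip and classification field names come straight from the GreyNoise records, everything else is made up for the example:

```python
import redis

# Connection details are assumptions for a local test instance.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def store_classification(record: dict) -> None:
    """Mimic the pipeline's Redis SET: gn_ip_<ip> -> classification."""
    # The gn_ip_ prefix acts as a crude namespace for GreyNoise keys.
    r.set(f"gn_ip_{record['ip']}", record["classification"])

# Illustrative GreyNoise-style record (values made up).
store_classification({"ip": "203.0.113.7", "classification": "malicious"})
```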
The GreyNoise collector itself is a pretty straightforward REST Collector with parameters, headers and pagination, but we’re going to be a little tricky and use a custom event breaking rule to break the incoming data into individual events. In the LogStream UI’s Knowledge > Event Breaker Rules section, I added a “GreyNoise” rule:
The actual rule looks like this:
This will basically look at the input events, and extract GreyNoise's returned data array into individual events.
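Conceptually, the breaker rule is doing something like this sketch; the response shape is simplified and the fields are trimmed to the ones this post cares about:

```python
import json

def break_events(raw_response: str) -> list:
    """Split one GNQL API response into one event per record in its data array."""
    body = json.loads(raw_response)
    # Each element of the data array becomes its own event downstream.
    return list(body.get("data", []))

# Illustrative response body (not a real API payload).
sample = ('{"count": 2, "data": ['
          '{"ip": "198.51.100.1", "classification": "benign"}, '
          '{"ip": "203.0.113.7", "classification": "malicious"}]}')
print(break_events(sample))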
The actual collector config (in Data > Collectors) looks like this:
query – A simple GreyNoise Query Language (GNQL) query looking for every system it's seen in the last day.
scroll – This is how we pass the pagination information back to the "next" API call – it evaluates to the value of the scroll field from the "last" run of the API call.
size – This sets the number of records we want returned per call.
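As a rough Python equivalent of what the collector does on each run – note that the endpoint URL, the query string, and the response field names here are my assumptions about the GreyNoise v2 GNQL API, so check their docs before relying on them:

```python
import requests

API_URL = "https://api.greynoise.io/v2/experimental/gnql"  # assumed endpoint
API_KEY = "YOUR_GREYNOISE_API_KEY"

def fetch_all(query: str = "last_seen:1d", size: int = 1000):
    """Page through GNQL results, passing the scroll token back on each call."""
    scroll = None
    while True:
        params = {"query": query, "size": size}
        if scroll:
            params["scroll"] = scroll  # pagination token from the previous call
        resp = requests.get(API_URL, params=params, headers={"key": API_KEY})
        resp.raise_for_status()
        body = resp.json()
        yield from body.get("data", [])
        scroll = body.get("scroll")
        if not scroll:  # no token back means we've hit the last page
            break

for record in fetch_all():
    print(record["ip"], record["classification"])
```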
The only header that needs to be set is the key header, set to the GreyNoise API key. I've blurred my GreyNoise API key out, but you can go to GreyNoise.io and get your very own! 🙂
In the Event Breakers section of the collector config, we need to select the GreyNoise breaker rule created earlier:
And finally, the result routing needs to go through the pipeline configured for it. Since we do not want any of this data flowing "downstream," the output is the devnull destination.
After testing this, I added a schedule to it by clicking the Manage Collectors page's Schedule button, and I put in a cron entry to have it run at 1am UTC every day:
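For reference, "every day at 1:00 AM" in standard five-field cron syntax (this is plain cron, not anything LogStream-specific) is:

```
0 1 * * *
```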
Now that we have the enrichment data loading into our Redis instance, we need to make use of it to enrich our VPC flow logs.
The VPC Flow Logs collector is a simple S3 collector that points at the bucket where we direct all of our VPC Flow Logs to end up.
Here is the complete pipeline for the VPC Flow Logs:
This pipeline is a bit more complicated than the GreyNoise extraction pipeline, for a couple of reasons:

- It looks up the src_ip and dest_ip fields against the Redis table. Any IP address that GreyNoise hasn't seen will return a null value, so the Eval after the lookups makes the result a little more consistent and understandable by setting it to "no data" (a rough sketch of this logic follows the list).
- It clones the events, setting a sourcetype of aws:vpcflowlogs on the clones (and an index of cribl).
- It aggregates the events into metrics (sourcetype aws:vpcflowmetrics). The aggregation counts events and sums the bytes field of each, setting Group By Fields of srcclass, dstclass, aws_acct, action, dstaddr, and srcaddr.
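Here's a minimal Python sketch of the lookup-plus-Eval logic from the first bullet. The field names come from the post, and I'm assuming the lookup results land in srcclass and dstclass (the fields the aggregation groups on); this is not LogStream's API, just the equivalent logic:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def enrich(event: dict) -> dict:
    """Look up the src/dest IPs against the GreyNoise keys; default to 'no data'."""
    for ip_field, class_field in (("src_ip", "srcclass"), ("dest_ip", "dstclass")):
        classification = r.get(f"gn_ip_{event.get(ip_field, '')}")
        # An IP GreyNoise hasn't seen returns None, so normalize it
        # the same way the Eval function does.
        event[class_field] = classification if classification else "no data"
    return event

print(enrich({"src_ip": "203.0.113.7", "dest_ip": "192.0.2.10", "bytes": 532}))
```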
In this case, I'm using a Splunk destination, which I won't cover in detail. But one important note is that I'm using the new multi-metrics capability that was introduced in LogStream 2.4. In the past, Splunk required metrics to be sent one at a time, so we would take metrics events, break them into single-metric events, and send them individually. Splunk 8 introduced multi-metrics, and in LogStream 2.4 we added support for it. This reduces the overhead of sending metrics to a Splunk destination. The configuration is a simple toggle in the LogStream Splunk destination's Advanced Settings:
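To make the difference concrete, here is the general shape of single-metric vs. multi-metric Splunk HEC payloads, shown as Python dicts. The metric names and dimensions are illustrative, and this reflects the Splunk format in general rather than LogStream's exact output:

```python
# Pre-Splunk-8 style: one metric value per HEC event.
single_metric = {
    "event": "metric",
    "fields": {"metric_name": "flow.bytes", "_value": 532,
               "srcclass": "malicious", "dstclass": "no data"},
}

# Splunk 8+ multi-metric style: several values in a single event,
# which is what LogStream 2.4's toggle takes advantage of.
multi_metric = {
    "event": "metric",
    "fields": {"metric_name:flow.bytes": 532, "metric_name:flow.count": 1,
               "srcclass": "malicious", "dstclass": "no data"},
}
```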
The collector is pretty straightforward. As we’re running our LogStream environment in AWS, we run the Worker Groups with an IAM role that provides access to the S3 bucket. I will cover IAM nuances in a future blog post, so feel free to just use an IAM API key for this if you’re not running roles.
This is a pretty straightforward S3 collector configuration – the only real point of interest is the Path value:
/VPCFlow/AWSLogs/${aws_acct_id}/vpcflowlogs/${aws_region}/${_time:%Y}/${_time:%m}/${_time:%d}/
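To make the template concrete, here's a small Python sketch of how one day's prefix resolves; the account ID, Region, and date are made up for illustration:

```python
from datetime import date

def vpcflow_prefix(acct_id: str, region: str, day: date) -> str:
    """Resolve the collector's path template for a single day."""
    return (f"/VPCFlow/AWSLogs/{acct_id}/vpcflowlogs/{region}/"
            f"{day:%Y}/{day:%m}/{day:%d}/")

# The prefix a collection run scoped to April 20, 2021 would scan:
print(vpcflow_prefix("123456789012", "us-east-1", date(2021, 4, 20)))
# -> /VPCFlow/AWSLogs/123456789012/vpcflowlogs/us-east-1/2021/04/20/
```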
The first element is the key prefix we set when we set up VPC Flow Logs delivery in our environment, but everything else is standard AWS pathing for these logs. Note the templates I’m pulling out:
${aws_acct_id} – the 12 digit account ID.
${aws_region} – the Region the logs are from.
${_time*} – the three time elements here are parsing the time of the log from the path value – %Y is year, %m is month, and %d is day. This enables us to set start and end times for a given collection, and have it pull data only within that time window.

Since these are AWS VPC Flow Logs, the AWS Ruleset should be selected in the Event Breakers config:
And finally, set our routing to push through the pipeline and out to our Splunk destination:
Now, when I execute a Full Run on the collector over a period of time, I see the metrics and logs populate in my Splunk destination.
While there is clearly a bit of complexity when trying to do enrichment with large, dynamic data sets, the Redis function in LogStream allows you to keep your lookup table(s) up to date as you see fit, enabling the creation of multiple pipelines using that same lookup table with pretty minimal effort. The use case I walked through here is indicative of many threat intelligence sources, and establishes an easy-to-replicate pattern for each of those types of sources.
If you want a hands-on introduction to enrichment, try our interactive Enrichment sandbox course. If you’re ready to try this in your own environment, download LogStream and process up to 1 TB of data per day, for free. If you have any questions about this article or anything in the LogStream and Cribl universe, be sure to join our community and share what you are doing there.