
Sending Data to Elastic Security With Cribl Stream (And Making It Work With Elastic SIEM)

December 6, 2023

Cribl Stream is a real-time security and observability data processing pipeline that can collect, transform, enrich, reduce, redact, and route data from a variety of sources to a variety of destinations. One popular destination for Cribl users is Elastic SIEM. This blog post walks you through setting up Cribl Stream to normalize and forward data for use with Elastic Security for SIEM.

Step 1: Getting Data Flowing

Configure Your Elastic Destination

To configure your Elastic Cloud or Elasticsearch destination, go to Data > Destinations in the Cribl Stream user interface. Then, select Elastic Cloud or Elasticsearch (see the docs page for details) and click Add Destination.

In the New Destination window, enter a unique name for your destination and your Elastic Cloud ID. You can also optionally send to the Bulk API and even enable load balancing by specifying multiple Bulk API URLs if you are using the Elasticsearch destination.
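To make that concrete, here is a minimal sketch of the resulting settings, written as a JavaScript-style object. The key names, Cloud ID, and API key below are illustrative assumptions, not the exact keys Cribl stores under the hood; enter the real values in the UI fields described above.

    // Hypothetical Elastic Cloud destination settings (illustrative keys and values)
    {
      name: 'elastic_siem',                           // unique destination name
      cloudId: 'my-deployment:dXMtZWFzdC0xLmF3cw==',  // your Elastic Cloud ID
      index: 'logs-generic-default',                  // default index or data stream
      auth: { type: 'apiKey', apiKey: '<your-api-key>' }
    }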

Step 2: Getting Data into Elastic Security

Elastic Security is an excellent SIEM designed to help you protect, investigate, and respond to cyber threats quickly and at scale. Underneath all this functionality lies your data, but searching and correlating across multiple datasets and data schemas can be challenging, so Elastic created the Elastic Common Schema (ECS) to normalize events and metrics. ECS simplifies searching, correlating, and analyzing events across various sources. This section will guide you through the process of ensuring your data is in ECS format so you can make the most of Elastic Security’s capabilities.

There are three ways to get data into ECS format and ready for Elastic Security.

  1. Use a pack that supports converting events into ECS
  2. Map your events into ECS format manually
  3. Send the original event into Elastic pipelines

We’re going through them in this order because as long as you perform the mapping in Cribl Stream, you can still clean and transform your data as you wish; having Elastic map your events requires you to send the original, unmodified log. So, without further delay, let’s dig into each of these.

Mapping to ECS via Packs

Some packs in the Cribl Pack Dispensary support mapping data into ECS format. In most cases, enabling this feature is straightforward: you will likely just need to enable a group of functions inside the pipelines of the pack in question. The following packs support ECS conversion today:

  • Cisco ASA
  • Cisco FTD
  • Corelight Pack
  • Zscaler
  • …and several more on the way!

Mapping to ECS Manually

If you want to map data manually to ECS, there are a few things you should do first to make this effort much more straightforward.

Once you’ve done that preparation, you can map your dataset into ECS. You can take a few different approaches to perform the mapping; today, I will show you one way. My next blog post will detail several other methods and the pros and cons of each.

The simplest method is to use an eval function to build the JSON objects you need to map.
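For example, here is a hypothetical set of Eval mappings for a generic firewall event, written as field-name/value-expression pairs like those you would enter in the function’s Add Fields table. The source field names (src_ip, dst_ip, action) are assumptions for illustration; substitute whatever fields your data actually carries.

    // Hypothetical Eval "Add Fields" entries: ECS field name => JavaScript value expression
    {
      'event.category': "['network']",   // literal array value
      'event.action': "action",          // copied from the source field `action`
      'source.ip': "src_ip",             // copied from `src_ip`
      'destination.ip': "dst_ip",        // copied from `dst_ip`
      'ecs.version': "'8.11.0'"          // literal string, hence the inner quotes
    }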

This should result in a JSON event structure that might look like this.
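Continuing the hypothetical firewall example, with illustrative values:

    {
      "event":       { "category": ["network"], "action": "allow" },
      "source":      { "ip": "10.0.0.1" },
      "destination": { "ip": "203.0.113.7" },
      "ecs":         { "version": "8.11.0" }
    }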

Mapping to ECS via Elastic Integrations

If Elastic has an integration listed for the data source you want to collect, you can leverage the ingest pipelines from that integration to process your events into ECS. Then, you only have to set up Cribl Stream to send your logs in their original format. To do this, set your route to your Elastic destination in Cribl Stream with a passthru pipeline for any data you want processed this way.
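Such a route might look like the following sketch; the route name, filter expression, and destination name are assumptions, while passthru ships with Cribl Stream and leaves events untouched.

    // Hypothetical route: forward raw AWS events, unmodified, to the Elastic destination
    {
      name: 'aws-raw-to-elastic',
      filter: "sourcetype.startsWith('aws')",  // assumption: adjust to match your events
      pipeline: 'passthru',
      output: 'elastic_siem'
    }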

On the Elastic side, you must install the processing components from your data’s vendor. Below is an example of installing these assets for AWS data. First, you will need to find the integration you want to install in Kibana.

Then, navigate to the “Settings” page and click the “Install Assets” button.

Once you’ve installed the assets, grab the data stream names for any logs you want to send! For our AWS example, you can find them on the AWS integration’s docs page.
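Elastic data stream names follow the <type>-<dataset>-<namespace> convention. For CloudTrail with the default namespace, that should come out to something like the line below; verify the exact dataset name against the docs page.

    logs-aws.cloudtrail-default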

Setting Up Cribl Stream to Send Into Data Streams and Pipelines

Once you’ve installed the resources for the integrations you want, you must head back into Cribl Stream to map data to the desired data stream or pipeline.

You might be unsure of the difference between those, so let me explain: Elastic uses data streams to simplify index management by encapsulating index settings, templates, rollover, and lifecycle management policies in a single package. When you install the Elastic integration assets, Elastic creates fully configured data streams for each type of log included in the integration, including a default ingest pipeline.

Sending to just a pipeline will lock you onto that specific pipeline version and bypass all the index settings and policies of a data stream. Generally, it’s preferable to reference the data stream over the pipeline.

The easiest way to ingest data via Cribl and an Elastic integration is by sending the raw data to the appropriate data stream. If you need a custom pipeline, head to the Elastic destination you are sending data to in Cribl Stream and insert a reference to the internal __pipeline field into the “pipeline” field.
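Assuming the Pipeline setting evaluates JavaScript template expressions, as many Cribl destination fields do, that reference would look something like this:

    `${__pipeline}`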

You might be wondering why there is a double underscore. Cribl Stream treats fields prefixed with a double underscore as internal: they are never sent out with the event, which makes them handy as control fields.

After you’ve set this value, you need to assign a value to the __pipeline field it’s looking for.

This is the same as what we must do for data streams, so we will use one example for both. For sending events to a data stream, the Elastic destination looks for an “__index” field, whose value overrides the default set in the destination.

To map either field, you must create a pipeline that assigns the values to the events you want Elastic to parse. Let’s continue the AWS example from earlier; say we want to map CloudTrail data using a data stream. We would create an Eval with a filter looking for anything labeled “CloudTrail” and then assign the data stream value (the one we found above) to the __index field. It will look something like this.
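Here is a sketch of that Eval function, written out as the underlying pipeline configuration. The filter expression is an assumption, so match on whatever field identifies CloudTrail in your events, and use the exact data stream name from the docs page.

    // Hypothetical Eval function: tag CloudTrail events with the target data stream
    {
      id: 'eval',
      filter: "sourcetype == 'aws:cloudtrail'",  // assumption: adjust to your data
      conf: {
        add: [
          { name: '__index', value: "'logs-aws.cloudtrail-default'" }
        ]
      }
    }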

Now we’re done! This should be pretty simple to repeat for any other dataset you want to map this way. As I said earlier, we will have more packs that support Elastic Common Schema and sample data stream mappings coming soon, so keep an eye out for them!


 

Cribl, the Data Engine for IT and Security, empowers organizations to transform their data strategy. Customers use Cribl’s suite of products to collect, process, route, and analyze all IT and security data, delivering the flexibility, choice, and control required to adapt to their ever-changing needs.

We offer free training, certifications, and a free tier across our products. Our community Slack features Cribl engineers, partners, and customers who can answer your questions as you get started and continue to build and evolve. We also offer a variety of hands-on Sandboxes for those interested in how companies globally leverage our products for their data challenges.
