Talk to an Expert ›Cribl Stream is a real-time security and observability data processing pipeline that can be used to collect, transform, enrich, reduce, redact, and route data from a variety of sources to a variety of destinations. One of the popular destinations for Cribl users is Elastic SIEM. This blog post will walk you through the steps on how to set up Cribl Stream to normalize and forward data to use with Elastic Security for SIEM.
To configure your Elastic Cloud or Elasticsearch destination, go to Data > Destinations in the Cribl Stream user interface. Then, select Elastic Cloud or Elasticsearch and click Add Destination.
In the New Destination window, enter a unique name for your destination and your Elastic Cloud ID. You can also optionally send to the Bulk API and even enable load balancing by specifying multiple Bulk API URLs if you are using the Elasticsearch destination.
Elastic Security is an excellent SIEM designed to help you protect, investigate, and respond to cyber threats quickly and at scale. Underneath all this functionality lies your data, but because searching and correlating across multiple datasets and data schemas can be challenging, Elastic created the Elastic Common Schema (ECS) to normalize events and metrics. ECS simplifies searching, correlating, and analyzing events across various sources. This section will guide you through the process of ensuring your data is in ECS format so you can make the most of Elastic Security’s capabilities.
There are three ways to get data into ECS format and ready for Elastic Security: use a pack from the Cribl Pack Dispensary that supports ECS conversion, map the fields to ECS manually in Cribl Stream, or let the ingest pipelines from Elastic’s integrations do the mapping for you.
We’re going about it in this order because as long as you perform the mapping in Cribl Stream, you can still clean and transform your data as you wish. Having Elastic map your events requires you to send the original log. So, without any more delay, let’s dig into each of these.
Some packs in the Cribl Pack Dispensary support mapping data into ECS format. In most cases, enabling this feature is straightforward: you will likely only need to enable a group of functions inside the pack’s pipelines. The following packs support ECS conversion today:
If you want to map data manually to ECS, there are a few things you should do first to make this effort much more straightforward.
Once you’ve done all three steps, you can map your dataset into ECS. As for performing the mapping, you can take a few different approaches to accomplish this. Today, I will show you one way to approach this. But my next blog post will detail several other methods and the pros and cons of each.
The simplest method is to use an Eval function to build the JSON objects you need to map.
This should result in a JSON event structure that might look like this.
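As a hedged illustration (every field name below is hypothetical, not taken from the original post), the kind of mapping an Eval function performs can be sketched in Python: flat fields from the raw record become nested ECS objects such as `event.*`, `source.*`, and `user.*`.

```python
# Hypothetical sketch of ECS mapping, mirroring what a Cribl Stream Eval
# function would do with value expressions. All field names are illustrative.

def map_to_ecs(raw: dict) -> dict:
    """Build nested ECS-shaped objects from a flat raw event."""
    return {
        "@timestamp": raw.get("time"),
        "event": {
            "action": raw.get("eventName"),
            "provider": raw.get("eventSource"),
        },
        "source": {"ip": raw.get("sourceIPAddress")},
        "user": {"name": raw.get("userName")},
    }

raw_event = {
    "time": "2023-05-01T12:00:00Z",
    "eventName": "ConsoleLogin",
    "eventSource": "signin.amazonaws.com",
    "sourceIPAddress": "203.0.113.10",
    "userName": "alice",
}

ecs_event = map_to_ecs(raw_event)
```

In Cribl Stream itself, each of these assignments would be one row in the Eval function: the ECS field name on the left, a JavaScript value expression referencing the parsed source field on the right.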
If Elastic lists an integration for the data source you want to collect, you can leverage the ingest pipelines from that integration to process your events into ECS. Then, you only have to set up Cribl Stream to send your logs in their original format. To do this, point your route at the Elastic destination in Cribl Stream with a passthru pipeline for any data you want processed this way.
On the Elastic side, you must install the processing components from your data’s vendor. Below is an example of installing these assets for AWS data. First, you will need to find the integration you want to install.
Then, navigate to the “Settings” page and click the “Install Assets” button.
Once you’ve installed the assets, grab the data stream names for any logs you want to send! For our AWS example, you can find them on the docs page here.
Once you’ve installed the resources for the integrations you want, you must head back into Cribl Stream to map data to the desired data stream or pipeline.
You might be unsure of the difference between those, so let me explain: Elastic uses data streams to simplify index management by encapsulating index settings, templates, rollover, and lifecycle management policies in a single package. When you install the Elastic integration assets, Elastic creates fully configured data streams for each type of log included in the integration, including a default ingest pipeline.
Sending to just a pipeline locks you onto that specific pipeline version and bypasses all the index settings and policies of a data stream. Generally, it’s preferable to reference the data stream rather than the pipeline.
The easiest way to ingest data via Cribl and an Elastic integration is by sending the raw data to the appropriate data stream. If you need a custom pipeline, head to the Elastic destination you are sending data to in Cribl Stream and insert the following string into the “pipeline” field.
You might be wondering why there is a double underscore. Cribl keeps these fields internal and never sends them downstream, which makes them handy as control fields.
After you’ve set this value, you need to assign a value to the __pipeline field it’s looking for.
This is the same as what we must do for data streams, so we’ll use this example for both. When sending events to a data stream, the Elastic destination looks for an __index field and uses it to override the default value set in the destination.
To map either field, create a pipeline that assigns the values to the events you want Elastic to parse. Let’s continue the AWS example from earlier; say we want to map CloudTrail data using a data stream. We would create an Eval with a filter looking for anything labeled “CloudTrail” and then assign the data stream value (the one we found above) to the __index field. It will look something like this.
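As a sketch of that Eval logic (the filter expression and the data stream name are assumptions; confirm the exact stream name against the AWS integration docs), the function boils down to: if the event is CloudTrail, set the internal __index field to the target data stream. In Python:

```python
# Hypothetical sketch of the Eval function's logic: tag CloudTrail events
# with the target Elastic data stream via the internal __index field.
# The data stream name and the sourcetype filter are assumptions.

CLOUDTRAIL_DATASTREAM = "logs-aws.cloudtrail-default"  # assumed name

def assign_datastream(event: dict) -> dict:
    """Mimics an Eval with a filter matching CloudTrail events."""
    if event.get("sourcetype") == "aws:cloudtrail":
        # Fields prefixed with __ are internal to Cribl and never sent
        # downstream; the Elastic destination reads __index to pick the
        # data stream for the event.
        event["__index"] = CLOUDTRAIL_DATASTREAM
    return event

tagged = assign_datastream({"sourcetype": "aws:cloudtrail", "_raw": "{...}"})
untagged = assign_datastream({"sourcetype": "syslog", "_raw": "..."})
```

In Cribl Stream, the filter would be the Eval function’s Filter expression and the assignment a single row setting `__index`; non-matching events pass through untouched and fall back to the destination’s default index.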
Now we’re done! This should be pretty simple to repeat for any other dataset you want to map this way. As I said earlier, we will have more packs supporting Elastic Common Schema and sample data stream mappings coming soon, so keep an eye out for them!
Cribl, the Data Engine for IT and Security, empowers organizations to transform their data strategy. Customers use Cribl’s suite of products to collect, process, route, and analyze all IT and security data, delivering the flexibility, choice, and control required to adapt to their ever-changing needs.
We offer free training, certifications, and a free tier across our products. Our community Slack features Cribl engineers, partners, and customers who can answer your questions as you get started and continue to build and evolve. We also offer a variety of hands-on Sandboxes for those interested in how companies globally leverage our products for their data challenges.