
Cribl Self-Guided Trials

Extend Data Retention to Meet Compliance Requirements

1. Understand the Problem

Goal:
Extend data retention to meet compliance requirements

Challenge:
Requires additional investment in licensing and/or storage

Example:
You’ve been asked to retain certain information for as long as 7 years. Your current log analysis system holds data for 90 days, which is more than sufficient for IT Ops and for over 99% of security investigations. Expanding retention to 7 years means holding roughly 28x as much data (2,555 days vs. 90 days), which will significantly increase your budget once you factor in data volumes, redundancy requirements, hardware, and licensing costs.

How Can Cribl Help?
By separating your system of analysis from your system of long-term retention, you can retain data indefinitely to meet compliance requirements.

S3-compatible object stores (MinIO, cloud storage such as AWS S3, Azure Blob Storage, or Google Cloud Storage, or an on-prem S3-compatible storage appliance) and data lake solutions provide a low-cost option for storing infrequently accessed data indefinitely, typically at around 1-2% of the cost of traditional storage.

When you send data with longer retention requirements to cloud storage, you can use Cribl’s Replay feature to recall and filter that data on demand. You’ll be able to demonstrate that the required data is available for compliance audits, or when it’s needed for investigations across broad time ranges.

To do this, you will test and deploy several Cribl Stream technical use cases:

  • Routing: Route data to multiple destinations for analysis and/or storage. This gives teams confidence they can meet retention requirements while reducing tooling and infrastructure spend, or at least keeping it flat, with the added bonuses of accelerated data onboarding and in-stream normalization and enrichment. But wait…there’s more! With only relevant data going into your analysis tools, you’ll enhance performance across searches, dashboard loading, and more.
  • Replay: When you do need to pull data back from object stores, Cribl’s Replay feature makes it easy to get the right data, in the required formats, into the tools you choose. Data can be streamed in real time, collected on demand, or collected on an easily configured schedule, so it lands where you need it for breach analysis or compliance reporting.

Before You Begin:

  • Review the relevant Cribl Sandboxes.
  • You’ll use Cribl.Cloud for your QuickStart, so you might want to note the following:
    • Make sure you choose the correct region, either US West (Oregon) or US East (Virginia), to ensure the Cribl Stream workers are closest to the point of egress to lower costs. (It’s also wicked hard to change it later.)
    • Cribl.Cloud Free/Standard does not include SSO.
    • Cribl.Cloud Free/Standard does not support hybrid deployments. If you need to test on-premises workers, please request an Enterprise trial using the chatbot below.
    • Cribl Packs are out-of-the-box solutions for given technologies. You can combine Cribl.Cloud with sample data from a Pack. This combination may be enough to prove that Cribl Stream can route your data to analytics tools and low-cost storage. The following Packs might be helpful:
      • Palo Alto Networks
      • CrowdStrike Pack
      • Cisco ASA
      • Splunk UF Internal Pack
      • Microsoft Windows Events
      • Microsoft Office Activity
      • Cribl-AWS-Cloudtrail-logs
      • AWS VPC Flow for Security Teams
      • Cribl-Carbon-Black
      • Cribl-Fortinet-Fortigate-Firewall
      • Auth0

What You’ll Achieve:

  • You’ll complete 2 technical use cases to support your business case.
    • A business case is the business outcome you’re trying to achieve. The technical use cases you create will illustrate how Cribl features will work in your environment. Typically, you will need multiple technical use cases to achieve your business case.
  • You’ll connect 1-2 sources to 1-2 destinations.
  • You’ll show that in your environment, with your data sources, you can:
    • Route data to lower-cost storage for retention purposes, reducing the data volume going to analytics tools
    • Replay data from low-cost storage into analytics systems for compliance reporting, or for investigations and troubleshooting across long time horizons

2. Implementation Overview

  1. First, set up the object store with your preferred cloud vendor. Here’s guidance from the leading vendors’ documentation:
    1. AWS S3
    2. Microsoft Azure Blob Storage
    3. Google Cloud Storage
  2. If you have different retention requirements for different data sets, you’ll want one cloud storage destination for each retention period. Further segregation by type of data or source of data may be desired.
  3. Create a cloud storage destination.
    1. Configure the destination within Cribl Stream. When configuring the Stream storage destination, set the partitioning settings to allow filtering when the data is later collected for Replay (see the partitioning sketch after this list).
      1. AWS S3
      2. Microsoft Azure Blob Storage
      3. Google Cloud Storage
    2. Commit, Deploy, and test that data is delivered to the destination.
  4. Identify the data sets that go to each destination. For each:
    1. Create a route with a suitable filter to match that data set
    2. If data is to be sent as-is, set the route to use the passthru pipeline. If data modifications are needed, create a pipeline and configure functions as necessary.
    3. Set the route’s destination to the appropriate cloud storage destination
  5. Prepare for Replay for each cloud storage destination:
    1. Create a collector and name it to match the storage destination
    2. Set the path and path extractors to match the partitioning scheme used in the storage destination
    3. After saving the collector, test using the “Run / Preview” combination
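
As a concrete sketch of step 3’s partitioning settings: the Partitioning Expression on an S3 destination is a JavaScript expression evaluated per event. C.Time.strftime is part of Cribl’s standard expression library, but the host and sourcetype fields and the overall layout here are assumptions to adapt to your own data:

    // Sketch of an S3 destination Partitioning Expression (JavaScript).
    // Yields key prefixes like 2024/06/15/fw-edge-01/pan:traffic, so Replay
    // can later narrow collection by date, host, and sourcetype.
    `${C.Time.strftime(_time ? _time : Date.now() / 1000, '%Y/%m/%d')}/${host}/${sourcetype}`

More granular partitions let Replay skip more irrelevant objects, at the cost of more, smaller files in the store.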

3. Select Your Data Sources

  • For ease of setup, we recommend you choose from the list of sources supported by the Packs listed in Before You Begin. For the full list of supported sources, see: https://docs.cribl.io/stream/sources/. For the categories of vendors Cribl works with (cloud, analytics tools, SIEM, object stores, and many more), see: https://cribl.io/integrations/
  • Choose from supported formats and source types: JSON, Key-Value, CSV, Extended Log File Format, Common Log Format, and many more out-of-the-box options. See our library for the full list.

Spec out each source:

  1. What’s the volume of that data source per day? (Find in Splunk | Find in Elastic)
  2. Is it a cloud source or an on-prem source?
  3. Do you need TLS, Certificates, or Keys to connect to the sources?
  4. What protocols are supported by both the source and by Cribl Stream?
  5. During your QuickStart, do you have access to the source from production, from a test environment, or not at all?
  6. What’s the data retention period required for this data source?

For your QuickStart, we recommend no more than 3 Sources.

Source | Source Collection Method | Volume | Cribl Worker Node Host Name / IPs / Load Balancer | Configuration Notes (TLS, Certificates) | Data Retention Period

4. Select Your Destinations

Where does your data need to go? For this use case, choose from:

  1. AWS S3
  2. Microsoft Azure Blob Storage
  3. Google Cloud Storage
    1. Note: See the full list of supported destinations here.

Spec out each destination:

  1. What volume of data do you expect to send to that destination per day?
  2. Do you need TLS, Certificates, or Keys to connect to the destination?
  3. What protocols are supported by both the destination and by Cribl Stream?
  4. During your QuickStart, will you be sending data to production environments, test environments, or not at all?

For your QuickStart, we recommend no more than 3 Destinations.

Destination | Destination Sending Method | Volume | Destination Host Name / IPs / Load Balancers | Configuration Notes (TLS, Certificates)

5. Prepare Your QuickStart Environment

  1. Regardless of where your data resides, the fastest way to prove success with Cribl is with Cribl.Cloud. With Cribl.Cloud, there is no infrastructure to spin up and manage and no hardware to deal with. You can get straight to your observability pipeline planning, giving you greater choice and control over your data in our SOC 2-compliant Cribl.Cloud.
  2. Cribl.Cloud supports up to 1 TB/day, the perfect capacity for a Cribl QuickStart. Once the evaluation is done and you are ready for production, you can increase the volume and allocate more capacity.

6. Launch the QuickStart by Registering for Your Free Cribl.Cloud Instance

  1. Once you’ve registered on the portal, sign in to Cribl.Cloud.
  2. Select the Organization you want to work with.
  3. From the portal page, select Manage Stream.
  4. The Cribl Stream interface will open in a new tab or window – and you’re ready to go!
  5. Notice the Cribl.Cloud link in the upper left of the Cribl.Cloud home page, under the Welcome message. Click this link at any time to reopen the Cribl.Cloud portal page and all its resources.
    1. Follow the getting-started documentation for Cribl.Cloud-hosted instances:
      https://docs.cribl.io/stream/deploy-cloud#getting-started
    2. Examine the available out-of-the-box cloud ports
      https://docs.cribl.io/stream/deploy-cloud/#ports-certs

7. Configure Cribl Sources and Destinations

As part of the exercise to prove your use case, we recommend you limit your evaluation to no more than 3 sources and 3 destinations.
  1. Configure Destinations first, one at a time. For each specified destination:
    1. After configuring the Cribl destination, reopen its config modal, select the Test tab, and click Run Test. Look for Success in the Test Results.
    2. At the destination itself (AWS S3, Azure Blob Storage, Google Cloud Storage), validate that the sample events sent through Cribl have arrived. For example, in AWS you can log into the management console and navigate the S3 interface to find the Cribl-generated files. https://docs.aws.amazon.com/AmazonS3/latest/userguide/download-objects.html
  2. Configure Sources. Configure sources one at a time. For each specified source:
    1. Configure the sources in Cribl https://docs.cribl.io/stream/sources (For a Distributed Deployment, remember to click the Commit / Deploy button after you configure each Source, to ensure it’s ready to use in your pipeline.)
    2. Note: If you need to test a hybrid environment, you will need to request an Enterprise trial entitlement. Use the chatbot below to make your request. Also note that there is no automated way to transfer configurations between Free instances and Enterprise trials. Once you’re squared away with an Enterprise entitlement, you can test hybrid deployments.
    3. Hybrid Workers (meaning, Workers that you deploy on-premises, or in cloud instances that you manage yourself) must be assigned to a different Worker Group than the Cribl-managed default Group, which can contain its own Workers:
      1. On all Workers’ hosts, port 4200 must be open for management by the Leader.
      2. On all Workers’ hosts, firewalls must allow outbound communication on port 443 to the Cribl.Cloud Leader, and on port 443 to https://cdn.cribl.io.
      3. If this traffic must go through a proxy, see System Proxy Configuration for configuration details.
      4. Note that you are responsible for data encryption and other security measures on Worker instances that you manage.
      5. See the available Source ports under Available Ports and TLS Configurations here.
    4. For some Sources, the documentation includes example configurations, at the bottom of the page, for sending data to Cribl Worker Nodes (on-prem and/or cloud).
    5. Test that Cribl Stream is receiving data from your source.
      1. After configuring the Cribl Source and configuring the source itself (Syslog, Splunk Universal Forwarder, Elastic Beats, etc.), go to the Live Data tab and ensure your results are coming into Cribl.Cloud.
      2. In some cases, you may want to extend the capture window. Go to the Live Data tab, click Stop, change Capture Time to 600 seconds, then click Start. This gives you more time to test sending data into Cribl.

8. Configure Cribl QuickConnect or Routes

Another way you can get started quickly with Cribl is with QuickConnect or Routes.

Cribl QuickConnect lets you visually connect Cribl Stream Sources to Destinations using a simple drag-and-drop interface. If all you need are independent connections that link parallel Source/Destination pairs, Cribl Stream’s QuickConnect rapid visual configuration tool is a useful alternative to configuring Routes.

For maximum control, you can use Routes to filter, clone, and cascade incoming data across a related set of Pipelines and Destinations. If you simply need to get data flowing fast, use QuickConnect.

  1. Use QuickConnect to route your Source to your Destination.
    1. Configure QuickConnect.
    2. Initially, you may want to use the passthru pipeline, which does not manipulate any data.
    3. Test your end-to-end connectivity by verifying data flows through Source -> Cribl Source -> Cribl QuickConnect -> Cribl Destination -> Destination.
  2. Alternatively, use Routes to route your Source to a Destination.
    1. Configure Routes with a suitable filter to match that data set (filter expression sketches follow this list).
    2. If data is to be sent as-is, set the route to use the passthru pipeline, which does not manipulate any data. If data modifications are needed, create a pipeline and configure functions (like DNS Lookup, GeoIP, Eval, etc.) as necessary.
    3. Set the route’s destination to the appropriate cloud storage destination
    4. Set the route’s Final flag to No. This causes Stream to send a copy of data matching the filter to the cloud storage destination, while still sending the original event through subsequent routes for further processing.
    5. Test your end-to-end connectivity by verifying data flows through Source -> Cribl Source -> Cribl Routes -> Cribl Destination -> Destination.
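
Route filters are JavaScript expressions evaluated against each event. A few sketches follow; the specific sourcetypes, index names, and Source IDs are assumptions, so substitute your own:

    // Sketches of Route filter expressions (field values are assumptions):
    sourcetype == 'pan:traffic'                 // match a single sourcetype
    host.startsWith('fw-') && index == 'netfw'  // combine conditions
    __inputId.startsWith('syslog:')             // match everything from a syslog Source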

9. Prepare for Replay

Prepare for Replay for each cloud storage destination:

  1. Set up a new index in your analytics tool and send Replay data to that new index. Why? You don’t want to mix Replay data (from a different time period) with current data feeding into your analytics tool.*
  2. Create a collector and name it to match the storage destination
  3. Set the path and path extractors to match the partitioning scheme used in the storage destination (see the path sketch after this list)
  4. Follow the Replay instructions here
  5. After saving the collector, test using the “Run / Preview” combination
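
If your destination used the date/host/sourcetype partitioning sketched in the Implementation Overview, the collector’s Path can mirror it using Cribl’s path token syntax. A minimal sketch under that assumption:

    // Sketch of an S3 collector Path matching the earlier partitioning sketch.
    // The ${_time:...} tokens let the collector narrow the time range before
    // downloading objects; ${host} and ${sourcetype} become filterable fields.
    /${_time:%Y}/${_time:%m}/${_time:%d}/${host}/${sourcetype}/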

*Note: Once you have a production deployment of Cribl, there are other considerations for replaying data, including:

  • Keep the replay data in a separate index from your real-time data. This ensures you don’t impact reporting, alerting or dashboards with older data.
  • Architecture: If you’re Replaying huge volumes of data (multiple terabytes), you will need to rethink your architecture. Consider deploying dedicated Worker Groups so as not to overwhelm your existing ones.

10. Create the Pipeline

Pipelines are Cribl’s main way to manipulate events. Examine Cribl Tips and Tricks for additional examples and best practices. Look for the sections that have Try This at Home for Pipeline examples. https://docs.cribl.io/stream/usecase-lookups-regex/#try-this-at-home

  1. Download the Cribl Knowledge Learning Pack from the Pack Dispensary for more cool Pipeline samples.
  2. Examine best-practice links, for example: Syslog best practices.
  3. Add a new pipeline, named after the dataset it will process.
  4. Use the sample dataset you captured as you build your Pipeline.
  5. Add or edit functions to reduce, enrich, redact, aggregate, or shape your data as needed. Confirm the desired results in the Out view and the basic statistics UI (see the function sketch below).
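
As a sketch of step 5, a simple reduction pipeline might pair a Drop function with an Eval function. The sourcetype, severity value, and field names below are assumptions for illustration only:

    // Drop function, Filter expression: discard low-value events
    sourcetype == 'pan:traffic' && severity == 'informational'
    // Eval function, Remove fields: shed bulky fields you won't search on
    description, future_use*
    // Eval function, evaluated field (name = value expression): tag events for Replay
    retention = '7y'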

11. Review Your Results

  1. Repeat any of the above steps until the Source, QuickConnect, Routes, Pipelines, and Destinations are supporting your business case.
  2. Determine if you achieved your testing goals for this dataset, and note your results.
  3. Finally, summarize your findings. Common benefits our customers see include:
    1. Cost savings (infrastructure, license, cloud egress).
    2. Optimized analytics tools (prioritizing relevant data accelerates search and dashboard performance).
    3. Future-proofing (enabling choice and mitigating vendor lock-in).

Please note: If a data source is already being sent to your downstream systems, routing it through Cribl Stream may break existing dependencies on the original format of the data, even if you do nothing to the output. Be sure to consult this Best Practices blog, or the users and owners of your downstream systems, before committing any data source to a destination from within Cribl Stream.

Technical Use Cases Tested:

  • Routing: Here’s how to see your results: to confirm data flow through the whole system you’ve built, select Monitoring > Data > Routes and examine the Routes you created.

    Also select Monitoring > Data > Pipelines and examine [your Pipeline name]. For example, if you created a pipeline named slicendice, you would see slicendice’s throughput there.

  • Replay: Confirm that the collector’s Run / Preview returns the expected data, and that replayed events land in the separate index you set up in your analytics tool.


When you’re convinced that Stream is right for you, reach out to your Cribl team and we can work with you on advanced topics like architecture, sizing, pricing, and anything else you need to get started!