In this livestream, Cribl’s Ahmed Kira and I go into more detail about the Cribl Stream Reference Architecture, with a focus on scaling syslog. We share a few use cases and some guidelines for handling high-volume UDP and TCP syslog traffic, and talk through the pros and cons of the different approaches to tackling this challenge.
It’s also available on our podcast feed if you want to listen on the go. If you want to automatically get every episode of the Stream Life podcast, you can subscribe on your favorite podcast app.
Scaling syslog is one of the most frustrating issues our customers have to deal with. But syslog isn’t going away any time soon, so we’ve learned to love it. After taking advantage of Cribl Stream and following some of our syslog best practices to help process and make sense of it, maybe you will too.
In a nutshell, syslog is the main protocol network security vendors use to report what’s happening on their devices. It sends security, network, and status events to a destination to be consumed in a SIEM or log analytics tool. Syslog events tend to be very voluminous, which is one of the reasons they can cause so many issues.
It’s important to know what syslog is, but it’s equally important to know what it isn’t. Much of what people call syslog isn’t actually syslog. Syslog has a very strict and well-defined set of RFCs that, if followed, would make our lives a little easier — JSON payloads over UDP and events with inaccurate timestamps or nonexistent time zones don’t quite make the cut.
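To make the contrast concrete, here is a minimal sketch of what a well-formed RFC 5424 header looks like and how it decomposes. The sample message and the regex are illustrative only — this is not a complete RFC 5424 parser, and the hostname and app name are made up:

```python
import re

# Illustrative RFC 5424 message: <PRI>VERSION TIMESTAMP HOSTNAME APP PROCID MSGID SD MSG
# Note the full timestamp with an explicit time zone — exactly what sloppy
# "syslog" senders tend to omit.
sample = "<165>1 2024-05-14T22:14:15.003Z fw01.example.com asa 1234 - - Teardown TCP connection"

# Covers the RFC 5424 header fields only (structured data handling is simplified)
pattern = re.compile(
    r"<(?P<pri>\d{1,3})>(?P<version>\d) "
    r"(?P<timestamp>\S+) (?P<host>\S+) (?P<app>\S+) "
    r"(?P<procid>\S+) (?P<msgid>\S+) (?P<sd>-|\[.*\]) ?(?P<msg>.*)"
)

fields = pattern.match(sample).groupdict()
# Facility and severity are packed together into the priority value
facility, severity = divmod(int(fields["pri"]), 8)
print(fields["host"], fields["timestamp"], facility, severity)
```

Everything a downstream SIEM needs — who sent it, when, and how urgent it is — is recoverable from a compliant header; a bare JSON blob over UDP gives you none of those guarantees.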
Syslog was originally designed as a protocol to be leveraged over UDP, so we recommend using it as such. Protocols (like HTTP) designed for TCP will regularly open new connections to take full advantage of the TCP stack — protocols built for UDP and used over TCP won’t be able to open these new connections. Instead, they’ll be put over one long-lived connection and cause major load-balancing issues.
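The key property is that each UDP datagram is a self-contained event — there is no connection for a load balancer to pin to a single backend. A minimal sketch of a UDP syslog send/receive round trip (the local port 5514 is hypothetical; real senders typically target 514/udp, which requires elevated privileges to bind):

```python
import socket

HOST, PORT = "127.0.0.1", 5514  # hypothetical local listener port

# Receiver: one socket, no accept() loop, no per-client connection state
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind((HOST, PORT))

# Sender: fire-and-forget datagram; each event can land on a different
# backend, which is why UDP distributes across workers so evenly
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"<13>1 2024-05-14T22:14:15Z host app - - - hello", (HOST, PORT))

datagram, addr = receiver.recvfrom(65535)
print(datagram.decode())
receiver.close()
sender.close()
```

A TCP sender, by contrast, would hold that one connection open indefinitely, and every event it ever emits would follow the same path through the load balancer.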
Whatever you do, don’t try to home-run your UDP syslog data across your WAN — there are very few situations where data loss would be more likely. Keep your collection sources as close to the destination as possible, and have at least one syslog server in every data center.
It would be hard to overstate the importance of properly configured load balancers for collecting syslog data. In addition to using UDP instead of TCP, be aware of UDP pinning, which can happen if your load balancer is misconfigured. Even if you have three Cribl Stream workers behind your load balancer, data can get funneled to only one of them if you have sticky sessions or other faulty settings enabled.
You want to have as much distribution across your worker processes as humanly possible because that’s how you can engage all the resources on the server and get that scale you’re looking for. If you have 50 cores on each server, but only one of them is getting engaged, scalability will be severely limited.
If you have multiple CPUs on one of your Cribl workers, you want to spin up a separate worker process mapped to each one. When the load balancer directs traffic to that worker, it’ll be spread across all of the available worker processes, taking advantage of all of the available CPU cores.
Other best practices for correctly configuring your load balancers include using as many pools as possible. Unless you’re a small mom-and-pop shop, avoid sending all your syslog traffic over the default port 514 — allocate different ports for different device types instead. You also want to map each pool to that same port on your Cribl workers.
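The port allocation itself is just a convention you decide up front. A minimal sketch of what such a mapping might look like — the device types, port numbers, and catch-all port here are all hypothetical examples, not Cribl defaults:

```python
# Hypothetical allocation: one listener port (and one load-balancer pool)
# per device type, instead of everything piling onto 514
SYSLOG_PORTS = {
    "firewall": 5140,
    "router": 5141,
    "wireless": 5142,
    "linux_host": 5143,
}

def port_for(device_type: str) -> int:
    # Unclassified senders fall back to a catch-all pool
    return SYSLOG_PORTS.get(device_type, 5149)

print(port_for("firewall"), port_for("unknown"))
```

Keeping device types on separate ports also pays off downstream: each Stream source can get its own pipeline tuned to that vendor’s format, rather than one pipeline guessing at everything arriving on 514.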
Once you get data into Cribl Stream, it’s all TCP from then on — and it’s very flexible. All of your issues with managing syslog will likely go away at that point, but the data has to make it there first.
The all-in-one worker group is a good choice for organizations with only one or two regions and less than 1 TB of data daily. Suppose you’re collecting data from various agents like Elastic, pushing data to Stream via HTTP, using a REST collector to pull data from an API, or sending data to just one destination. In that case, there’s no need to overcomplicate things.
If any of these things change, you’ll have to start thinking about separating out worker groups.
A dedicated worker group comes in handy when you make changes to your syslog data — commit-and-deploys in Stream won’t affect any other worker groups, and syslog will be the only thing that needs to be restarted. Isolating the workflow means fewer restarts and fewer problems for you and your enterprise.
Having a dedicated worker group for your syslog gives you more breathing room for processing and increases business resiliency at the same time. If you have a DDoS attack or a massive spike of data, it’ll only impact your syslog worker group instead of shutting down everything else. Environments that deal with large volumes, heavy regulation, or high SLAs for availability benefit the most from splitting up data sources, allowing you to fail small instead of big when certain situations arise.
Watch the full livestream for more insights on how SecOps and Observability data admins can integrate Cribl Stream into any environment — including formatting and change management guidelines within data centers. Don’t miss out on this opportunity to empower your observability administration skills.
More on the Cribl Reference Architecture Series
Cribl, the Data Engine for IT and Security, empowers organizations to transform their data strategy. Customers use Cribl’s suite of products to collect, process, route, and analyze all IT and security data, delivering the flexibility, choice, and control required to adapt to their ever-changing needs.
We offer free training, certifications, and a free tier across our products. Our community Slack features Cribl engineers, partners, and customers who can answer your questions as you get started and continue to build and evolve. We also offer a variety of hands-on Sandboxes for those interested in how companies globally leverage our products for their data challenges.