
Using Cribl Stream for Syslog: Security, Volume Reduction, and More

August 24, 2021
Categories: Engineering

Cribl Stream has supported Syslog as a source almost since its inception. Now, with the introduction of the Packs feature in the July release of Stream, you can handle inbound Syslog data even better than before. This article explores using the Cribl Pack for Syslog Input to remove redundant data, handle timezone normalization, and set sourcetypes and other metadata based on lookups.

Why Syslog Matters

Syslog was developed in the earliest days of the Internet as a means by which any OS, process, or device could deliver a log event to a Syslog server using a well-defined transport protocol with a well-defined message format. (Syslog is one of the earliest examples of a remote API call!)

The importance of Syslog is in its ubiquity. Linux and Unix OSes support the Syslog protocol, as does nearly every enterprise-grade piece of networking equipment. While there are other options for shipping logs from applications, Syslog remains the de-facto protocol for networking gear, including firewalls, switches, routers, and more. For many devices, Syslog is the only supported way to send logs.

To quote from Wikipedia:

“Syslog was developed in the 1980s by Eric Allman as part of the Sendmail project. It was readily adopted by other applications and has since become the standard logging solution on Unix-like systems. A variety of implementations also exist on other operating systems and it is commonly found in network devices, such as routers.”

The Syslog protocol has continued to evolve over the years, adding support for TCP, TLS security, timezone specification, and larger message payloads.

A Few Problems to Solve

Risk of Data Loss:

Syslog was initially based on UDP, not TCP. The UDP choice made sense at the time; it reduced overhead back when servers’ memory was measured in kilobytes, not gigabytes. UDP allowed Syslog senders to keep working just fine, even if the destination was down. Even today, some network devices only support sending Syslog on UDP. UDP delivery is potentially lossy – your data will probably get there, but it might not. And if you’re sending across the Internet, the chance of data loss goes up.
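The fire-and-forget nature of UDP is easy to demonstrate: a send succeeds from the sender's point of view even when nothing is listening. A minimal Python sketch (the port number is arbitrary and chosen for illustration):

```python
import socket

# Nothing is listening on this port, yet the send "succeeds" -- the sender
# has no way to know whether the datagram was delivered or dropped.
msg = b"<134>May 12 21:20:39 web01 sshd[1042]: Accepted publickey for admin"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(msg, ("127.0.0.1", 5514))  # no server required
sock.close()

print(sent == len(msg))  # True: the OS reports success regardless
```

The same holds for a sender whose destination is across the Internet: the datagram may be silently dropped anywhere along the path.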

Lack of encryption:

Syslog over UDP doesn’t support encryption on the wire. TCP allows Syslog to send using TLS, but this is a rare feature among Syslog-sending devices. Sending unencrypted messages on a LAN might be fine for some situations, but it’s not ideal.

Timezone issues:

While modern Syslog senders support RFC 5424 for accurate timezone, year, and day representation, most devices sending Syslog have a timestamp that looks like this:

May 12 21:20:39

Which timezone is that? What year? Is the sending system in UTC? Eastern Daylight Time? It’s impossible to tell.
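Python's own parser makes the ambiguity concrete: the classic timestamp carries no year and no zone, so the receiver must assume both, and different assumptions yield different absolute times:

```python
from datetime import datetime, timezone, timedelta

# The classic RFC 3164 timestamp: no year, no timezone.
ts = "May 12 21:20:39"
parsed = datetime.strptime(ts, "%b %d %H:%M:%S")

print(parsed.year)    # 1900 -- strptime fills in a default year
print(parsed.tzinfo)  # None -- the datetime is naive

# The same wall-clock reading maps to different absolute times
# depending on which zone you assume the sender is in:
utc_guess = parsed.replace(year=2021, tzinfo=timezone.utc)
edt_guess = parsed.replace(year=2021, tzinfo=timezone(timedelta(hours=-4)))
print(edt_guess - utc_guess)  # 4:00:00 -- four hours apart
```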

Meta information & management:

When sending syslog data to Splunk, Elasticsearch, and other systems of analysis, it’s often important to set a sourcetype, index, or other fields that instruct that system on how to process and store those events for later retrieval.

Since this meta-information is not present in the original payload, it has to be added after the fact. This is typically done using a combination of configuration files for rsyslog or syslog-ng, in conjunction with Splunk forwarders or elastic beats agents. Management is tricky, as IP addresses and hostnames live within the syslog server’s config files, and these have to match up with the forwarding agent’s index and sourcetype settings.
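As a sketch of what that lookup amounts to, here is a minimal Python version of a host-to-metadata table; the hostnames, sourcetypes, and index names are made up for illustration:

```python
# Illustrative only: the kind of host -> metadata mapping that otherwise
# lives scattered across rsyslog/syslog-ng configs and forwarder settings.
HOST_LOOKUP = {
    "fw-edge-01": {"sourcetype": "cisco:asa", "index": "netfw"},
    "sw-core-02": {"sourcetype": "cisco:ios", "index": "netops"},
}
DEFAULTS = {"sourcetype": "syslog", "index": "main"}

def enrich(event: dict) -> dict:
    """Attach sourcetype/index metadata based on the sending host."""
    meta = HOST_LOOKUP.get(event.get("host", ""), DEFAULTS)
    return {**event, **meta}

print(enrich({"host": "fw-edge-01", "_raw": "..."}))
```

Keeping one table like this in one place, rather than splitting it between a syslog server config and a forwarder config, is the management win described below.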

Redundant Data:

When syslog data arrives in a destination such as Splunk or Elasticsearch, the human-readable timestamp and the hostname are superfluous: they’re already delivered in the extracted timestamp field (as epoch time) and in the host field. Carrying them in the message field adds another 25-30 bytes per event. That might sound small, but in some cases it represents more than 20% of the overall data volume.

Syslog data sent with RFC 5424’s TimeQuality feature adds up to 63 bytes per event. This is all rarely searched data, and the extracted timestamp is already in _time, in epoch format, with millisecond precision when available. These 63 bytes are just an added storage and indexing burden on the destination system.
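The per-event cost is easy to sanity-check. The snippet below measures an RFC 5424 timeQuality structured-data element (the parameter values are illustrative), then scales it to a hypothetical event rate:

```python
# Sanity-checking the per-event cost of RFC 5424 timeQuality structured data.
# This SD element (illustrative values) rides along on every event:
sd = '[timeQuality tzKnown="1" isSynced="1" syncAccuracy="495"]'
print(len(sd))  # 57 bytes here; longer values approach the 63-byte worst case

# At, say, 5,000 events/sec the overhead compounds quickly:
events_per_day = 5_000 * 86_400
print(len(sd) * events_per_day / 1e9)  # roughly 25 GB/day of rarely searched data
```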

How Cribl Stream Helps

Reducing Management Complexity:

Stream is a native Syslog receiver, just like rsyslog and syslog-ng. You may be able to retire or repurpose your existing Syslog servers completely, replacing that functionality with Stream workers, which also support receiving on other protocols. All of the issues mentioned above can be addressed within Stream, without hand-editing config files.

Minimizing possibility of data loss:

To help with UDP data loss, deploy Stream workers as close to the sender as possible. Optimally, this would be a pair of properly sized workers, behind a load balancer, on the same subnet or LAN. Configure backpressure for Stream destinations to use the Persistent Queueing function, ensuring data is delivered once an offline destination comes back.

Securing the connection:

Once the Stream worker receives the data via syslog (encrypted or not), delivery to any subsequent connection can be configured over TCP with TLS encryption. As with the UDP scenario, one would want to locate a Stream worker on the same subnet or LAN, to minimize the risk of sending unencrypted data.
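For illustration, here is a minimal Python sketch of TLS-wrapped TCP syslog delivery of the kind described above. The octet-counting framing comes from RFC 5425 (which specifies TCP port 6514); the function and hostname are hypothetical, and nothing is actually connected when the snippet runs:

```python
import socket
import ssl

# Build the TLS context a downstream sender would use. The defaults
# verify the server's certificate and check its hostname.
context = ssl.create_default_context()

def send_syslog_tls(host: str, message: bytes, port: int = 6514) -> None:
    """Open a TLS-wrapped TCP connection and send one syslog frame."""
    with socket.create_connection((host, port)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            # RFC 5425 frames each message as "<length> <msg>" (octet counting)
            tls.sendall(str(len(message)).encode() + b" " + message)

# Certificate verification and hostname checking are on by default:
print(context.verify_mode == ssl.CERT_REQUIRED, context.check_hostname)
```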

Cribl Pack for Syslog Input:

The timezone, data-cleanup, and enrichment use cases are all handled by the Cribl Pack for Syslog Input. This Pack is attached to your Syslog sources as a pre-processing pipeline, allowing the benefits below to be applied to most or all inbound Syslog data. In some cases, the Pack’s volume reduction can remove over 100 bytes per event just by dropping redundant or unnecessary values.

The Syslog pack, available in the Cribl Pack Dispensary, does the following:

  • Via a lookup on the host field, identifies timezone information for specific senders and uses it to set _time.
  • Automatically corrects _time when inbound traffic is off by exactly N hours. For example, if a Syslog sender’s timestamp appears to be exactly 3 hours in the future, it gets fixed.
  • Provides meta-information such as sourcetype and index, also via lookups.
  • Removes the human-readable timestamp from _raw, saving about 16-25 bytes.
  • Moves the host value to meta-information and removes it from _raw, saving 5-25 bytes.
  • Parses the Syslog Facility, Severity, and App fields into meta-information, optionally removing them from _raw.
  • Optionally drops debug-level events based on the Severity field.
  • Optionally removes TimeQuality data from _raw, saving up to 63 bytes.
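As a rough illustration of where those byte savings come from, the Python sketch below strips the classic RFC 3164 header and derives Facility/Severity from the PRI value. This is an approximation written for this article, not the Pack's actual implementation (the real Pack does this with Stream pipeline functions):

```python
import re

# Matches "<PRI>Mmm dd hh:mm:ss host " at the start of a classic syslog line.
HEADER = re.compile(r"^<(\d+)>(\w{3} [ \d]\d \d\d:\d\d:\d\d) (\S+) ")

def reduce_event(raw: str) -> dict:
    """Strip the redundant header from _raw, moving host/facility/severity
    into meta-information, and report how many bytes were removed."""
    m = HEADER.match(raw)
    if not m:
        return {"_raw": raw}
    pri = int(m.group(1))
    return {
        "_raw": raw[m.end():],   # timestamp + host stripped from _raw
        "host": m.group(3),      # moved to meta-information
        "facility": pri // 8,    # both derived from the PRI field
        "severity": pri % 8,
        "saved_bytes": m.end(),
    }

evt = reduce_event("<134>May 12 21:20:39 web01 sshd[1042]: Accepted publickey")
print(evt["saved_bytes"], evt["severity"])  # 27 6
```

Dropping severity-7 (debug) events, as the Pack can optionally do, would be a simple filter on the derived severity field.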

The pack includes data samples to allow testing of the various options prior to putting it into production. Full documentation for the Pack is included within the pack itself.

Summary

By deploying Stream workers as replacements for existing Syslog servers (rsyslog, syslog-ng, etc.), you can reduce management complexity while ensuring the reliable, secure delivery of Syslog data to your chosen systems of analysis and systems of retention.

By using the Cribl Pack for Syslog Input, you can easily enrich your inbound Syslog data with metadata while simultaneously removing redundant data, for a savings of 20% or more, and finally put an end to timestamp and timezone extraction issues.

For more info, check out our docs on syslog data reduction or grab the Cribl Pack for Syslog Input. Join the #packs channel in our Slack Community to get the latest updates on this and other packs for Cribl Stream.


 

