Cribl Stream has supported Syslog as a source almost since its inception. Now, with the introduction of the Packs feature in the July release of Stream, you can handle inbound Syslog data even better than before. This article explores the use of the Cribl Pack for Syslog Input to remove redundant data, handle timezone normalization, and set sourcetypes and other metadata based on lookups.
Syslog was developed in the earliest days of the Internet as a means by which any OS, process, or device could deliver a log event to a Syslog server using a well-defined transport protocol with a well-defined message format. (Syslog is one of the earliest examples of a remote API call!)
The importance of Syslog is in its ubiquity. Linux and Unix OSes support the Syslog protocol, as does nearly every enterprise-grade piece of networking equipment. While there are other options for shipping logs from applications, Syslog remains the de-facto protocol for networking gear, including firewalls, switches, routers, and more. For many devices, Syslog is the only supported way to send logs.
To quote from Wikipedia:
"Syslog was developed in the 1980s by Eric Allman as part of the Sendmail project. It was readily adopted by other applications and has since become the standard logging solution on Unix-like systems. A variety of implementations also exist on other operating systems and it is commonly found in network devices, such as routers."
The Syslog protocol has continued to evolve over the years, adding support for TCP, TLS security, timezone specification, and larger message payloads.
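The evolution from the classic format to RFC 5424 is easiest to see side by side. The sketch below compares the two message formats and decodes the `<PRI>` prefix; the hostname, process, and message contents are illustrative examples, not taken from any particular device.

```python
# Illustrative comparison of the two syslog message formats.
# The <134> PRI prefix encodes facility*8 + severity (here local0.info: 16*8 + 6).

rfc3164 = "<134>May 12 21:20:39 fw01 sshd[1042]: Accepted publickey for admin"

# RFC 5424 adds a version number, a full ISO 8601 timestamp with year and
# timezone, and optional structured data (shown here with timeQuality).
rfc5424 = (
    "<134>1 2021-05-12T21:20:39.123-04:00 fw01 sshd 1042 - "
    '[timeQuality tzKnown="1" isSynced="1"] '
    "Accepted publickey for admin"
)

def pri_to_facility_severity(msg: str) -> tuple:
    """Decode the leading <PRI> value into (facility, severity)."""
    pri = int(msg[1:msg.index(">")])
    return pri // 8, pri % 8

print(pri_to_facility_severity(rfc3164))  # (16, 6)
```

Note how everything the older format leaves ambiguous (year, timezone, sub-second precision) is explicit in the RFC 5424 message.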
Risk of Data Loss:
Syslog was initially based on UDP, not TCP. The UDP choice made sense at the time; it reduced overhead back when servers’ memory was measured in kilobytes, not gigabytes. UDP allowed Syslog senders to keep working just fine, even if the destination was down. Even today, some network devices only support sending Syslog on UDP. UDP delivery is potentially lossy – your data will probably get there, but it might not. And if you’re sending across the Internet, the chance of data loss goes up.
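The fire-and-forget nature of UDP is visible at the socket level: a send succeeds as long as the kernel accepts the datagram, whether or not anything is listening. This minimal sketch (the port and message are arbitrary examples) shows why a UDP syslog sender never learns about loss.

```python
import socket

# UDP syslog is fire-and-forget: sendto() returns the number of bytes handed
# to the kernel, with no acknowledgment that the datagram ever reached --
# or was accepted by -- a receiver.
msg = b"<134>May 12 21:20:39 fw01 app: hello from UDP"
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(msg, ("127.0.0.1", 5514))  # no listener required
print(sent == len(msg))  # True even though nothing is listening
sock.close()
```

A TCP sender, by contrast, would fail to connect in this scenario and could buffer or retry.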
Lack of encryption:
Syslog over UDP doesn’t support encryption on the wire. TCP allows Syslog to send using TLS, but this is a rare feature among Syslog-sending devices. Sending unencrypted messages on a LAN might be fine for some situations, but it’s not ideal.
Timezone issues:
While modern Syslog senders support RFC 5424 for accurate timezone, year, and day representation, most devices sending Syslog have a timestamp that looks like this:
May 12 21:20:39
Which timezone is that? What year? Is the sending system in UTC? Eastern daylight time? It’s impossible to tell.
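The ambiguity bites as soon as you try to parse such a timestamp. In this sketch, Python's `strptime` fills in a placeholder year and leaves the zone unset, so the receiver must assume both (here we pin an example year and UTC, a common pipeline default):

```python
from datetime import datetime, timezone

raw = "May 12 21:20:39"

# strptime with no year or zone in the format: Python defaults to 1900
# and a naive datetime -- exactly the ambiguity the old format forces
# on every receiver.
naive = datetime.strptime(raw, "%b %d %H:%M:%S")
print(naive)  # 1900-05-12 21:20:39 -- wrong year, unknown zone

# A receiver must *assume* a year and timezone to normalize the event.
assumed = naive.replace(year=2021, tzinfo=timezone.utc)
print(assumed.isoformat())  # 2021-05-12T21:20:39+00:00
```

If the sender was actually in Eastern Daylight Time, that assumption puts the event four hours off.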
Meta information & management:
When sending syslog data to Splunk, Elasticsearch, and other systems of analysis, it’s often important to set a sourcetype, index, or other fields that instruct that system on how to process and store those events for later retrieval.
Since this meta-information is not present in the original payload, it has to be added after the fact. This is typically done using a combination of configuration files for rsyslog or syslog-ng, in conjunction with Splunk forwarders or elastic beats agents. Management is tricky, as IP addresses and hostnames live within the syslog server’s config files, and these have to match up with the forwarding agent’s index and sourcetype settings.
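The lookup half of that arrangement can be sketched as a simple table keyed on the sender. The field names and table contents below are illustrative, not Cribl's actual lookup schema:

```python
# A hedged sketch of lookup-based enrichment: map the sending host to the
# index/sourcetype the analysis system expects. Hostnames and values here
# are made-up examples.
HOST_LOOKUP = {
    "fw01":  {"index": "netfw",  "sourcetype": "cisco:asa"},
    "sw-07": {"index": "netops", "sourcetype": "cisco:ios"},
}

DEFAULTS = {"index": "main", "sourcetype": "syslog"}

def enrich(event: dict) -> dict:
    """Attach index/sourcetype metadata based on the event's host field."""
    meta = HOST_LOOKUP.get(event.get("host", ""), DEFAULTS)
    return {**event, **meta}

print(enrich({"host": "fw01", "_raw": "..."}))
# {'host': 'fw01', '_raw': '...', 'index': 'netfw', 'sourcetype': 'cisco:asa'}
```

The pain point is that this table traditionally lives in one place (the syslog server's config) while the forwarding agent's settings live in another, and the two must be kept in sync by hand.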
Redundant Data:
When syslog data arrives in a destination such as Splunk or Elasticsearch, the human-readable timestamp – and the hostname – are superfluous. They’re already being delivered in the extracted timestamp field (as epoch time), and the host field. This data in the message field adds 25-30 bytes per event. This might sound small, but in some cases, this might represent more than 20% of the overall data volume.
Syslog data sent with RFC 5424’s TimeQuality feature adds up to 63 bytes per event. This is all rarely searched data, and the extracted timestamp is already in _time, epoch time format, with millisecond precision if available. These 63 bytes of data are just an added storage + indexing burden on the destination system.
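A quick back-of-the-envelope check makes those percentages concrete. The average event size below is an assumption for illustration; the byte counts come from the figures above:

```python
# Rough arithmetic on the savings claims, using an assumed event size.
event_len = 120      # assumed average syslog event length, in bytes
ts_and_host = 27     # ~25-30 bytes of redundant timestamp + hostname
time_quality = 63    # RFC 5424 TimeQuality structured data, worst case

# Timestamp + hostname alone exceed 20% of a short event...
print(f"{ts_and_host / event_len:.0%}")

# ...and with TimeQuality present, the removable share approaches half.
total_len = event_len + time_quality
print(f"{(ts_and_host + time_quality) / total_len:.0%}")
```

For longer events the percentage shrinks, which is why the savings are stated as "in some cases" rather than universally.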
Reducing Management Complexity:
Stream is a native Syslog receiver, just like rsyslog and syslog-ng. You may be able to retire or repurpose your existing Syslog servers entirely, replacing that functionality with Stream workers; those same workers also support receiving on other protocols. All of the issues mentioned above can be addressed within Stream, without hand-editing config files.
Minimizing possibility of data loss:
To reduce UDP data loss, deploy Stream workers as close to the sender as possible. Optimally, this would be a pair of properly sized workers behind a load balancer, on the same subnet or LAN. Configure backpressure for Stream destinations to use the Persistent Queueing function, ensuring data is delivered once an offline destination comes back.
Securing the connection:
Once the Stream worker receives the data via syslog (encrypted or not), delivery to any subsequent connection can be configured over TCP with TLS encryption. As with the UDP scenario, one would want to locate a Stream worker on the same subnet or LAN, to minimize the risk of sending unencrypted data.
Cribl Pack for Syslog Input:
The timezone, data cleanup, and enrichment use cases are all handled by the Cribl Pack for Syslog Input. This pack is attached to your Syslog sources as a PreProcessing pipeline, allowing the benefits below to be applied to most or all inbound Syslog data. In some cases, this Pack's volume reduction can remove over 100 bytes per event just by dropping redundant or unnecessary values.
The Syslog pack, available in the Cribl Pack Dispensary, does the following:

- Removes the redundant human-readable timestamp and hostname from _raw, saving about 16-25 bytes per event.
- Normalizes timestamps, addressing the timezone and year ambiguity described above.
- Sets sourcetype and other metadata based on lookups.
- Removes RFC 5424 TimeQuality structured data from _raw, saving up to 63 bytes per event.

The pack includes data samples to allow testing of the various options prior to putting it into production. Full documentation for the Pack is included within the pack itself.
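The first of those reductions can be illustrated with a simple regex pass. This is a rough sketch of the idea, not the Pack's actual implementation; the sample event is invented:

```python
import re

# Once the timestamp lives in _time and the hostname in the host field,
# the human-readable copies at the front of _raw are pure overhead.
raw = "May 12 21:20:39 fw01 sshd[1042]: Accepted publickey for admin"

# Classic RFC 3164 prefix: "Mmm dd hh:mm:ss hostname "
prefix = re.compile(r"^[A-Z][a-z]{2}\s+\d{1,2} \d{2}:\d{2}:\d{2} \S+ ")
trimmed = prefix.sub("", raw)

print(trimmed)                  # sshd[1042]: Accepted publickey for admin
print(len(raw) - len(trimmed))  # bytes saved on this event: 21
```

Multiply that per-event saving across millions of events per day and the storage and indexing impact becomes significant.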
By deploying Stream workers as replacements for existing Syslog servers (rsyslog, syslog-ng, etc.), you can reduce management complexity while ensuring the reliable, secure delivery of Syslog data to your chosen systems of analysis and retention.
By using the Cribl Pack for Syslog Input, you can easily enrich your inbound Syslog data with metadata while simultaneously removing redundant data, for a savings of 20% or more, and finally put an end to timestamp and timezone extraction issues.
For more info, check out our docs on syslog data reduction or grab the Cribl Pack for Syslog Input. Join the #packs channel in our Slack Community to get the latest updates on this and other packs for Cribl Stream.