
Backpressure Behavior in Cribl LogStream

Written by L Tang

September 2, 2020

In this quick dive into backpressure in Cribl LogStream, we will touch on persistent queueing, how Cribl LogStream sends data to destinations under each backpressure option, and how to approach troubleshooting systems with non-responsive destinations.

Sizing is an art and a science, born of expectations from back-of-napkin math, and refined through actually shoving electrons through wires. 

A few of the concepts we will discuss may seem counterintuitive, but we hope to provide our customers with the tools to understand and independently troubleshoot backpressure behavior.


“Backpressure is the state of not being able to send to the destination due to blocked status being reported up to Cribl LogStream. If the destination is able to receive data, but very slowly, Cribl LogStream will send as much as is possible – which means Cribl LogStream will also match the rate and send very slowly.”


What happens when a destination is unable to receive data from Cribl LogStream? This triggers a condition in Cribl LogStream called backpressure. More specifically, we rely on the TCP window attribute: when the destination throttles incoming connections by setting its window size to 0, we stop sending data to it. We periodically check the window size again, and when it is greater than 0, we resume transmission. When we experience this condition from our destinations, we signal it in the same way to the sources trying to send to us. We refer to this as sending a block signal, or receiving a block signal.
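If it helps to see the pattern outside of TCP, the sketch below shows the analogous block/unblock behavior in Node.js streams (Cribl LogStream runs on Node.js): a write() that cannot be accepted returns false, the reader pauses, and flow resumes on the 'drain' event. This is an illustration only, not LogStream code; the event shapes and timings are made up.

```typescript
import { Readable, Writable } from "node:stream";

// A deliberately slow destination: it accepts one event every 50 ms,
// and its small buffer means backpressure kicks in almost immediately.
const destination = new Writable({
  objectMode: true,
  highWaterMark: 4,
  write(_event: unknown, _enc: BufferEncoding, done: (err?: Error | null) => void) {
    setTimeout(done, 50); // simulate a slow downstream system
  },
});

// A fast source producing 1,000 events.
const source = Readable.from(
  (function* () {
    for (let i = 0; i < 1000; i++) yield { id: i };
  })(),
  { objectMode: true }
);

// pipe() honors backpressure: when destination.write() returns false
// (its buffer is full), the source is paused (the "block" signal),
// and it is resumed again when the destination emits 'drain'.
source.pipe(destination);
```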

Cribl LogStream offers the following options when experiencing backpressure:

Block. The default backpressure behavior is to block: we send a block signal to all sources that are attempting to send to the same Destination type.

Drop. We also offer the option to drop events (effectively sending them to /dev/null) when appropriate. This option can improve recovery times. We recommend building redundancy into the system if you plan to allow events to be dropped.

Persistent Queue (PQ). Selecting Persistent Queue allows overflow events to be written to disk on the local filesystem. When the PQ is full, we start blocking. When the destination is no longer blocked, we send the contents of the PQ, oldest events first.
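To make these options concrete, here is a rough TypeScript sketch of a dispatcher choosing among the three behaviors. The Sender and DiskQueue interfaces are invented for illustration; they are not LogStream's internal APIs, and the real logic involves far more detail (retries, timers, per-connection state).

```typescript
type BackpressureMode = "block" | "drop" | "queue";

interface Sender {
  trySend(event: object): boolean; // false means the destination is blocked
}

interface DiskQueue {
  isFull(): boolean;
  enqueue(event: object): void;
}

// Hypothetical dispatcher showing the three behaviors described above.
function dispatch(
  event: object,
  mode: BackpressureMode,
  sender: Sender,
  pq: DiskQueue
): "sent" | "blocked" | "dropped" | "queued" {
  if (sender.trySend(event)) return "sent";

  switch (mode) {
    case "block":
      // Propagate the block signal upstream; the source must wait.
      return "blocked";
    case "drop":
      // Discard the event (the /dev/null option) so the pipeline keeps moving.
      return "dropped";
    case "queue":
      // Spill to the persistent queue on disk; block only once the PQ is full.
      if (pq.isFull()) return "blocked";
      pq.enqueue(event);
      return "queued";
  }
}
```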


“When discussing backpressure, when we say “ah, it looks like you’re experiencing backpressure all the way back to the source”, we are referring to when the block signal is propagated back through Cribl LogStream and reported back to the Sources trying to send to Cribl.”


Here’s the big takeaway. When troubleshooting backpressure, we start at the destination and work our way back. This is highly counterintuitive, but should make sense when we see how data flow is constrained by capacity at the destination. 


“We can send only as quickly as the slowest connection for the same destination.”


A side note: when estimating costs and size for PQ, the following questions can be very helpful: How long an outage at the current capacity would the proposed size be able to cover? An hour? Three hours? Days? What is the probability of an outage occurring? This gives us a commonly used risk metric: cost x probability.
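As a back-of-napkin example (all of the numbers below are made up, not recommendations), the disk needed is roughly the throughput headed to that destination multiplied by the outage window you want to ride out:

```typescript
// Hypothetical figures; substitute your own throughput and outage estimates.
const throughputGBPerHour = 50;   // data normally flowing to this destination
const outageHours = 3;            // outage window the PQ should absorb

// Rough disk needed so the PQ can absorb the outage without blocking.
const pqSizeGB = throughputGBPerHour * outageHours;
console.log(`Provision roughly ${pqSizeGB} GB of PQ disk`); // 150 GB

// The risk metric mentioned above: expected cost = cost of an outage x probability.
const outageCostDollars = 20_000;       // assumed business impact of losing 3 hours of data
const outageProbabilityPerYear = 0.1;   // assumed likelihood of such an outage
console.log(`Expected annual risk: $${outageCostDollars * outageProbabilityPerYear}`); // $2000
```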

If you find that the PQ is constantly being refilled, we strongly suggest taking a closer look at capacity on the Destination. For most of our customers, backpressure issues are resolved by increasing capacity at the Destination.

Thanks for reading! We’re always interested in hearing your thoughts on our Community Slack channel.

Come on by! For an invite: https://cribl.io/community/  \o/

Questions about our technology? We’d love to chat with you.