
the evil side of debounce/throttle

Written by Ledion Bitincka

May 1, 2018

If you’re a JavaScript developer you probably already know the utility of the debounce/throttle functions. For those unfamiliar with or new to JS, debounce/throttle are utilities that generate a function whose execution frequency is limited, independent of how frequently it is called. For example, execute a function at most once every 250ms, no matter how frequently it is invoked. This is useful when the frequency of triggering events spikes and the work done for the early events would be thrown away or repeated; a typical use case is type-ahead suggestions in a search bar. You can read more about these functions here

Note: while the following is JS specific, the pattern of throttling/debouncing is not language specific.
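As a refresher, a minimal throttle can be sketched in a few lines. This is a simplified, trailing-edge-only version for illustration; a production implementation (e.g. lodash’s throttle) also supports leading-edge calls, cancel(), and flush():

```javascript
// Minimal trailing-edge throttle: fn runs at most once per `wait` ms.
// Simplified sketch; intermediate calls (and their arguments) are dropped.
const throttle = (fn, wait) => {
  let timer = null;
  return (...args) => {
    if (timer === null) {
      timer = setTimeout(() => {
        timer = null;
        fn(...args);
      }, wait);
    }
  };
};

// However many times the wrapped function is called synchronously...
let calls = 0;
const tick = throttle(() => { calls++; }, 10);
tick(); tick(); tick();
// ...it executes at most once per 10ms window.
```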

Race conditions are some of the nastiest bugs to troubleshoot, mostly because they are hard to reproduce. They generally thrive in multi-threaded systems. JavaScript, on the other hand, uses an event loop concurrency model, meaning that your code runs in a single thread. Even though it might take some getting used to, this model has real benefits, e.g. locking is simply not necessary. Single-threaded execution also means that race conditions are minimized, but not eliminated; in general, anything that introduces non-deterministic behavior into an application increases the chances of race conditions.
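To see why no locking is needed, note that a synchronous block of JS can never be preempted by a callback; the callback only runs once the current turn of the event loop completes:

```javascript
// Single-threaded event loop: a synchronous block always runs to
// completion before any queued callback, so no lock is needed here.
let shared = 0;
setTimeout(() => { shared *= 2; }, 0); // queued, but cannot interrupt the loop below
for (let i = 0; i < 1000; i++) shared++;
console.log(shared); // prints 1000; the callback runs only after this turn finishes
```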

One potential use of throttle is to buffer events and process them in batches at some given maximum frequency. E.g. the following code would process batches of events at most every 10ms.

// assumes a throttle implementation, e.g. lodash's
const throttle = require('lodash/throttle');

const events = [];
let count = 0;
const processEvents = throttle(() => {
  // do something with the events ... i'm just wasting time
  count += events.length;
  events.length = 0;
}, 10);

const gotEvent = (e) => {
  events.push(e);
  processEvents();
};

So now let’s see where the evil of debounce/throttle comes in (hint: it’s the non-determinism that introduces race conditions). Say we were to feed the above functions a batch of events every 9ms:

let intervalCalls = 10;
const interval = setInterval(() => {
  for (let i = 0; i < 10; i++) {
    gotEvent(i);
  }

  console.log(`count=${count}`);
  if (intervalCalls-- < 1) {
    clearInterval(interval);
  }
}, 9);

In the above code there is no guarantee what the output will be; it depends very much on what else is executing on the system and/or the clock of the CPU. Feel free to run the above code a number of times yourself here

In the above code the race condition is easy to spot: non-determinism (throttle) + shared state (count) = race condition (broken serializability). But using throttle/debounce deep inside library functions, typically to improve performance, can produce race conditions that are much harder to troubleshoot.

There are two ways to go about solving this issue:

a. guarantee serializability in your library
b. document the behavior of your code as eventually consistent and give callers the ability to force consistency at will. You can test out the full example here

const processEventsNow = () => {
  // do something with the events ...
  count += events.length;
  events.length = 0;
};
const processEvents = throttle(processEventsNow, 10);
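To make option b concrete, here is a self-contained sketch. The name getCountConsistent and the inline throttle are illustrative additions, not part of the original code; if you use lodash, its throttled functions expose a .flush() method that serves the same purpose:

```javascript
// Option b: the batch processor is eventually consistent, but callers
// can force consistency on demand. Minimal throttle stand-in below so
// the sketch runs on its own.
const throttle = (fn, wait) => {
  let timer = null;
  return () => {
    if (timer === null) {
      timer = setTimeout(() => { timer = null; fn(); }, wait);
    }
  };
};

const events = [];
let count = 0;

const processEventsNow = () => {
  count += events.length;
  events.length = 0;
};
const processEvents = throttle(processEventsNow, 10);

// Fast path: eventually consistent.
const gotEvent = (e) => {
  events.push(e);
  processEvents();
};

// Consistent read: drain the buffer before looking at shared state.
const getCountConsistent = () => {
  processEventsNow();
  return count;
};
```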

We ran into this issue while building out Bots with actions: the race condition was between archive files being expanded and Bots executing and adding metadata about the archive’s content files. Processing of the content files was throttled (think 10K files in an archive), resulting in files not being “present” in the state table even though Bots had seen them. We went with solution b) from above, as throttling is still beneficial for performance.

If you found the above interesting, we’re hiring and would like to talk to you – send a quick line: ledion at cribl.io to connect!
