June 10, 2021
On distributed LogStream deployments that can span hundreds of nodes, the ability to upgrade all nodes to the latest version in an automated fashion – without upgrading each node one by one, or leaning on bash scripts to automate the process – becomes critical. Here, we discuss how we leveraged our internal jobs framework to automate worker node upgrades.
At first, we tried to implement this feature as a part of the distributed communications framework we’d developed for Leader–Node communications. But the problem of monitoring the upgrade, and having some sort of control over the process, demanded a more robust solution.
We ultimately decided to run the upgrades through the jobs framework, writing a custom job executor to perform the work on the worker nodes’ side.
Job executors are JS modules that consist of a few functions – describing how tasks behave, and what action is to be taken upon the results of each task – bundled with any initialization.
The design consists of a soft control layer on the leader node, which invokes jobs on a set of nodes belonging to a worker group. This allows us to control the progress of the upgrade, to do rolling upgrades, and so on.
For background, see our earlier blog post on the Job scheduler/dispatcher.
When an upgrade is initiated, first the files are pulled to the leader node and verified for correctness. Once this is done, an “upgrade job” is kicked off over a set of nodes to be upgraded. Each node then runs a task on the job that pulls the files from the leader node, verifies them for correctness, applies the upgrade, and finally restarts the node for the upgrade to take effect.
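The per-node task can be sketched as the sequence of steps above. All helper names (`download`, `checksum`, `applyUpgrade`, `restartNode`) are hypothetical stand-ins, not LogStream internals:

```javascript
// Illustrative per-node upgrade task: pull the package from the leader,
// verify it, apply it, then restart so the upgrade takes effect.
async function runUpgradeTask(leaderUrl, pkg, helpers) {
  // Pull the upgrade package from the leader node.
  const file = await helpers.download(`${leaderUrl}/packages/${pkg.name}`);

  // Verify correctness before touching anything on disk.
  if (helpers.checksum(file) !== pkg.expectedChecksum) {
    throw new Error('package verification failed');
  }

  await helpers.applyUpgrade(file); // unpack new binary + default config
  await helpers.restartNode();      // new version takes effect on restart
}
```

Failing fast on the checksum means a corrupt download never reaches the apply step.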
To add a layer of safety each time a node is upgraded, LogStream creates a backup with the current binary and default config. In the case of a failed upgrade, an auto rollback mechanism is triggered, setting everything back to the way it was before the upgrade.
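A minimal sketch of that safety net, assuming hypothetical file paths and a generic copy helper (not LogStream's actual layout):

```javascript
// Back up the current binary and default config before upgrading;
// on failure, auto-rollback restores everything to its prior state.
async function upgradeWithRollback(fs, applyUpgrade) {
  await fs.copy('bin/cribl', 'backup/cribl');
  await fs.copy('default/config.yml', 'backup/config.yml');
  try {
    await applyUpgrade();
  } catch (err) {
    // Failed upgrade: set everything back the way it was.
    await fs.copy('backup/cribl', 'bin/cribl');
    await fs.copy('backup/config.yml', 'default/config.yml');
    throw err; // surface the failure to the job for status tracking
  }
}
```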
By leveraging job executors, we were able to customize how the upgrade process is carried out. One key requirement was the ability to do rolling upgrades: upgrading x nodes at a time, and verifying that the upgraded nodes are back up and running before continuing. To this end, we added the ability to filter which nodes a given job upgrades, enabling partial upgrades. After that, all we had to do was add a thin control layer over the jobs to divide the nodes into batches and kick them off sequentially as each finishes – and voilà, distributed rolling upgrades.
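That thin control layer boils down to a loop like the following sketch, where `runJob` stands in for invoking an upgrade job filtered to just one batch of nodes and resolving once those nodes are verified back up (the name and signature are assumptions for illustration):

```javascript
// Rolling upgrade: divide nodes into fixed-size batches and run an
// upgrade job per batch, strictly one batch at a time.
async function rollingUpgrade(nodes, batchSize, runJob) {
  for (let i = 0; i < nodes.length; i += batchSize) {
    const batch = nodes.slice(i, i + batchSize);
    // The next batch starts only after this one's job completes,
    // i.e. after its nodes are verified up and running.
    await runJob(batch);
  }
}
```

Because each batch is just another job, pausing or cancelling the overall upgrade falls out of the jobs framework for free.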
We needed to provide a layer of control and observability over the upgrade process. Luckily, we already had a framework that provided most of this functionality: the aforementioned jobs framework. The framework is essentially a set of software elements that let the system perform batch tasks in a controlled and isolated manner. It was designed mainly for data collection, but executor jobs allow us to perform any generic task on worker nodes in a batch fashion. By applying this to upgrades, we got many of the features we needed for free: isolated execution logs, status tracking, and the ability to cancel or pause the upgrade. We also inherited all of the UI built for jobs, which lets us easily troubleshoot any issues that occur with a failed upgrade.
Working with distributed applications is hard, which makes it essential to build software components that address the distributed aspects of a system in a generic way, providing basic functionality that can be leveraged when building complex features such as a distributed upgrade. Using the jobs framework for this task saved us a lot of work on observability and control, and it also gave us a framework in which we could build the feature incrementally, releasing it in chunks and making it available to our customers sooner.
See for yourself – get started by signing up for Cribl.Cloud and try LogStream today for free.