April 21, 2021
In the three months since joining Cribl, I have seen more code written, more features built and verified with customers, and more releases with truly valuable improvements than in any similar period of my career as a Product Manager. One of the reasons I am so fortunate to be part of this team is that the entire company has its eye on the ball. We’re making a daily impact on our ever-growing number of customers, users, partners, and community members.
I am lucky to be writing this blog today as the new Head of Product for Cribl LogStream, to talk about our 2.4.4 release, which went out March 30th, and the 2.4.5 release, which came out April 20th.
You didn’t click on this link to hear me gush about my career decisions. Let’s talk product. We are covering two maintenance releases in this blog; the specific release number for each item is called out in its respective section.
This feature landed in 2.4.4. In a distributed deployment, under Settings > System > Controls, you’ll see a new set of options for upgrading Worker Nodes and Groups.
Management and operational features are mission-critical for large LogStream deployments. In the next several releases, LogStream operators will be able to upgrade all LogStream nodes in bulk, rolling, or selectively (to test upgrades), all from the LogStream UI. No need to SSH into every worker or ramp up a new automation script just to pick up all the cool new stuff we keep on building.
This is still in beta, so please use it outside of production while we test all the edge cases.
There’s a never-ending list of places to gather data from and send data to. So you can expect new sources, destinations, and collectors with almost every release. 2.4.4 and 2.4.5 are no different.
We’ve added support for Google Cloud Storage as a source, a destination, and for scheduled or on-demand collection as part of a Replay. It works exactly the same way as our generic S3 and AWS S3 integrations. This gives customers with Google Cloud investments a way to better control costs by using inexpensive Cloud Storage, while still ensuring that data makes it to the right downstream location, ready for analysis.
Just like with AWS and Google Cloud, our customers with Azure environments are looking for ways to use the best storage options for their massive volumes of data. Our blob storage source, destination, and collectors give that same flexibility and cost control to those working in Azure. Whether you are storing application events in Blob Storage as an intermediary, or are looking to stop using expensive logging or analytics tools for long-term, compliance-driven storage, these new integrations have you covered.
Another feature that landed in 2.4.4: the Prometheus remote write destination gives LogStream users a way to send Prometheus-formatted metrics to any remote-write compatible backend.
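To make "remote-write compatible backend" concrete: the Prometheus remote write protocol transports snappy-compressed protobuf `WriteRequest` messages over HTTP POST, typically to a path like `/api/v1/write`. Below is a minimal sketch (not Cribl code) of a stub endpoint that only validates the protocol's transport headers; the port, path handling, and class name are illustrative assumptions, and a real backend would also decompress and decode the payload.

```python
# Stub of a Prometheus remote-write compatible endpoint. It checks the
# transport headers the protocol requires (snappy-compressed protobuf);
# decoding the actual WriteRequest payload is out of scope for this sketch.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RemoteWriteStub(BaseHTTPRequestHandler):
    def do_POST(self):
        ctype = self.headers.get("Content-Type", "")
        encoding = self.headers.get("Content-Encoding", "")
        # Drain the request body so the connection stays well-behaved.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        if ctype == "application/x-protobuf" and encoding == "snappy":
            self.send_response(200)  # looks like a remote-write request
        else:
            self.send_response(400)  # reject anything else
        self.end_headers()

def run(port=9201):
    # Blocks forever; point a remote-write sender at http://localhost:9201/api/v1/write
    HTTPServer(("localhost", port), RemoteWriteStub).serve_forever()
```

A real backend (Thanos, Cortex, VictoriaMetrics, etc.) implements this same contract, which is why one destination can target any of them.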
Ever been working on a flow of data and wish you could just send the events directly to any tool that accepts traffic over HTTP? If you have, then you are certainly not alone. One of the most-requested features that we landed in 2.4.4 is the Webhook destination. This opens up an entire world of possibilities as events collected and processed in LogStream can be easily sent to any destination that supports Webhooks.
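As a sketch of what "any destination that supports Webhooks" means in practice, here is a minimal receiver you could point a webhook sender at. This is illustrative only, not Cribl code: the `/ingest` path and the assumption of newline-delimited JSON events in the POST body are my own, made for the example.

```python
# Minimal webhook receiver sketch: accepts HTTP POSTs whose body is
# newline-delimited JSON (one event per line) and acknowledges with 200.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class IngestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8")
        # Parse one JSON event per non-empty line.
        events = [json.loads(line) for line in body.splitlines() if line.strip()]
        print(f"received {len(events)} event(s)")
        self.send_response(200)
        self.end_headers()

def run(port=8088):
    # Blocks forever; send events to http://localhost:8088/ingest
    HTTPServer(("localhost", port), IngestHandler).serve_forever()
```

Anything that speaks HTTP like this, from a ticketing system to a chat integration, can now sit at the end of a LogStream pipeline.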
There are also several performance improvements and bug fixes in both 2.4.4 and 2.4.5. To see a complete list of everything in these releases, check out the changelogs:
If you’re new to LogStream and want to learn how it works, I encourage you to try one of our interactive sandbox courses, which walk through some of the most popular use cases. If you are ready to test LogStream in your own environment, download it and process up to 1 TB of data per day, completely free. You can also join our community to talk with other users and get support.