The Cribl blog covers Observability, Big Data Analytics, Data Streams Processing... and anything else we feel like writing about!
Webhook destinations have been available in LogStream since 2020 (LogStream version 2.4.4), and Packs since July 2021. In this blog post we’ll cover using Webhooks to trigger incidents via the PagerDuty API, and the Cribl Webhook PagerDuty Pack created to demonstrate how Packs make deployment easier. Sending Notifications via Webhooks: LogStream’s core competency is […]
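As a rough sketch of what a Webhook destination would send, here is the shape of a PagerDuty Events API v2 "trigger" event (the payload format PagerDuty accepts at `https://events.pagerduty.com/v2/enqueue`). The helper function and the `ROUTING_KEY` placeholder are ours for illustration, not taken from the post or the Pack:

```python
import json

# Placeholder: substitute your own PagerDuty integration (routing) key.
ROUTING_KEY = "YOUR_INTEGRATION_KEY"

def pagerduty_trigger(summary: str, source: str, severity: str = "warning") -> str:
    """Build the JSON body for a PagerDuty Events API v2 'trigger' event."""
    event = {
        "routing_key": ROUTING_KEY,
        "event_action": "trigger",
        "payload": {
            "summary": summary,    # short human-readable incident description
            "source": source,      # host or component that raised the alert
            "severity": severity,  # one of: critical, error, warning, info
        },
    }
    return json.dumps(event)

# This is the body a Webhook destination would POST with
# Content-Type: application/json to the Events API endpoint.
body = pagerduty_trigger("Disk usage above 90%", "logstream-worker-01")
```

Sending it requires a valid routing key from a PagerDuty service integration; the sketch only builds the body.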
Many organizations are adopting containers for the flexibility they provide over traditional virtual machine infrastructure. This technology lets infrastructure teams increase agility and adapt to changing business needs by quickly deploying portable, scalable containerized applications. However, due to their complexity, container environments have introduced new challenges in monitoring the various […]
An ex-colleague at Splunk asked me in a LinkedIn post whether Cribl LogStream does anything besides log reduction. This blog is for him. LogStream optimizes data so that it’s consumable again. In this blog, I’ll focus on using LogStream to improve Splunk search performance while lowering CPU usage. If you’re in the David […]
Key Management System (KMS) support was added in LogStream 3.0. This release introduced integration with HashiCorp Vault, alongside the default local filesystem KMS option. The integration allows customers to offload management of the secrets used by Cribl LogStream to an external KMS provider. The KMS feature can be used to improve the security posture of your LogStream deployment.
Recently, I had the opportunity to work with a customer who was looking to reduce their Splunk license cost. They wanted to expand their use of Splunk, but were constrained by the growth of their data volumes and couldn’t spend beyond the 500 GB license they already had.
In part one of this blog post, I covered the concept, basic design, and results of using Redis to enrich VPC Flow Logs with security classification data from the GreyNoise API. In this post, I’m covering the details of how to do it. These steps will get you going if you want to try it, but keep in mind you’ll need your own GreyNoise API key to run it.
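The enrichment idea can be sketched in a few lines. A plain dict stands in for Redis here (a `redis.Redis().get()` lookup would serve the same role), and the field names and verdict values are illustrative, not taken from the post:

```python
# In-memory stand-in for a Redis cache keyed by source IP, where each value
# is a GreyNoise-style classification previously fetched from the API.
classification_cache = {
    "198.51.100.7": "malicious",  # hypothetical verdicts for illustration
    "203.0.113.9": "benign",
}

def enrich_flow_log(event: dict, cache) -> dict:
    """Attach a classification field to a VPC Flow Log event by source IP."""
    verdict = cache.get(event.get("srcaddr", ""))
    event["greynoise_classification"] = verdict or "unknown"
    return event

# A simplified VPC Flow Log event; real events carry many more fields.
flow = {"srcaddr": "198.51.100.7", "dstaddr": "10.0.0.5", "action": "ACCEPT"}
enriched = enrich_flow_log(flow, classification_cache)
```

The point of the cache is that the GreyNoise API is only consulted on a miss; subsequent events from the same IP are enriched from Redis without another API call.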
Data collection from Amazon S3, first introduced in Cribl LogStream 2.0, has been an overnight success with most of our AWS customers. In 2.4.4 we’ve added a similar capability to read data at scale from Azure Blob Storage, where many other customers store massive amounts of observability data: logs, metrics, events, etc. In this post, we’ll take a look at how it works and how to configure it.
Postgres, like many database applications, has a robust dynamic trace capability. Combined with its highly configurable logging facility, this makes it quite possible to track database activity. But as with most attempts at observability, it isn’t quite that simple. AppScope has the ability to track all SQL activity associated with a Postgres service.
This is a short blog post about how we used AppScope to identify and resolve a DNS-related problem reported by one of our customers … and it is a fact that it’s always a DNS problem, except when it isn’t :).
AppScope is an application-centric instrumentation and data collection mechanism. With one instrumentation approach for all runtimes, AppScope offers ubiquitous, unified instrumentation of any unmodified Linux executable. It's equally useful for single-user troubleshooting and for monitoring distributed deployments. So how does it work?