March 25, 2021
A recent survey showed that over a third of companies use more than thirty different data monitoring tools[1] for their observability data, spread across infrastructure and operations, DevOps, and SecOps teams. Each tool has deep awareness of its own narrow domain, but that source of truth doesn’t translate to neighboring areas or tools. Teams often introduce new data monitoring tools in response to new architectures or changes in operations. This uncontrolled tool sprawl drives up costs and complexity, and it leaves teams with a fragmented, confusing monitoring environment.
To deal with this tool sprawl, many enterprises chase after a “single pane of glass” strategy, where a single tool offers all the capabilities various teams need. According to 451 Research, 83% of enterprise companies prefer buying as many monitoring tools as possible from a single vendor.[2] While this sounds like a great strategy, there are several reasons why a single-vendor strategy doesn’t work:
Despite the hype around a single pane of glass, the reality is that enterprises will be living in a best-of-breed world for the foreseeable future. That may be a controversial way to approach managing observability data, but it reflects the environment most teams are coping with today and will keep coping with for years to come.
Now that we’ve accepted that reality, we have to deal with it. We’re faced with two challenges, each pulling in opposite directions. First, we have multiple data monitoring and observability tools, each used by different teams and serving different outcomes. Those teams like the tools they’ve chosen. Getting rid of their preferred tooling can disrupt operations, impact SLOs, and make for a grumpy team.
The second challenge is reconciling how these disparate tools exist in the worlds they monitor and observe. Your log analytics platform will have a different view of the world than your APM platform, for example. Both views are correct in context, but both are also incomplete. Figure 1 illustrates the current state of many monitoring and observability environments. Each destination has its own source, and that source is locked into that destination.
This is where shifting the source of truth from the platform to the data becomes essential. By providing a central abstraction between the sources of data and their destinations, you can share data across platforms, creating a universal source of truth that exists upstream from the tools your teams know and love. Figure 2 illustrates what this concept looks like.
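To make the idea concrete, here is a minimal sketch in Python of what that central abstraction looks like. This is not LogStream’s API; the Pipeline class and the destination callbacks are hypothetical stand-ins for real sources, routes, and outputs. The point is that sources push data into the pipeline once, and every downstream tool receives the same upstream events.

```python
from typing import Callable, Dict, List

Event = Dict[str, object]               # one observability event (log line, metric, trace span)
Destination = Callable[[Event], None]   # anything that can receive an event

class Pipeline:
    """A central abstraction between data sources and their destinations.

    Sources push events into the pipeline once; the pipeline fans each event
    out to every registered destination, so no single tool "owns" the data.
    """

    def __init__(self) -> None:
        self._destinations: List[Destination] = []

    def add_destination(self, destination: Destination) -> None:
        self._destinations.append(destination)

    def ingest(self, event: Event) -> None:
        # The pipeline, not any downstream platform, is the source of truth:
        # every destination sees the same upstream event.
        for destination in self._destinations:
            destination(event)

# Hypothetical destinations standing in for an APM tool and a log analytics platform.
pipeline = Pipeline()
pipeline.add_destination(lambda e: print("APM tool received:", e))
pipeline.add_destination(lambda e: print("Log analytics received:", e))

pipeline.ingest({"source": "web-01", "level": "error", "message": "timeout calling payments"})
```

Because every destination subscribes to the same stream, adding or swapping a tool becomes a one-line change rather than a re-plumbing of your sources.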
There are three advantages to using an observability pipeline as your source of truth:
An observability pipeline, like Cribl LogStream, realizes each of these benefits (Figure 3).
In most circles, a single source of truth is envisioned as a large repository of data at rest that is continuously processed, normalized, and aligned with multiple enterprise systems. That can work when you’re making offline decisions on data, the kind of analysis a data warehouse is built for. It doesn’t work for observability data.
Observability data is orders of magnitude larger than business data in every enterprise, and it spans far more data shapes and types of analysis. Observability requires fit-for-purpose data stores like time series databases (TSDBs), object stores for bulk log storage, and log indexing engines. Unlike in business intelligence circles, there are no successful companies building observability on top of data warehouses. The needs of observable systems are simply too diverse for one platform to manage. Introducing an observability pipeline, like Cribl LogStream, into your environment gives your teams the freedom to choose which tools they use and enables data sharing across those tools.
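As a rough sketch of what “fit-for-purpose” routing means in practice, the snippet below sends metrics to a time series database, bulk logs to cheap object storage, and only the logs worth searching to a log indexing engine. The store names (tsdb, object_store, log_index) are placeholders for illustration, not real endpoints or LogStream configuration.

```python
from typing import Dict, List

Event = Dict[str, object]

def route(event: Event) -> List[str]:
    """Decide which fit-for-purpose stores should receive this event.

    Metrics go to a time series database, raw logs go to low-cost object
    storage, and logs worth searching also go to a log indexing engine.
    """
    stores: List[str] = []
    if event.get("type") == "metric":
        stores.append("tsdb")                  # time series database
    else:
        stores.append("object_store")          # bulk, low-cost retention
        if event.get("level") in ("warn", "error"):
            stores.append("log_index")         # searchable, higher-cost store
    return stores

print(route({"type": "metric", "name": "cpu.util", "value": 0.92}))      # ['tsdb']
print(route({"type": "log", "level": "error", "message": "disk full"}))  # ['object_store', 'log_index']
print(route({"type": "log", "level": "info", "message": "healthy"}))     # ['object_store']
```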
The good news is you don’t have to take my word for it. I encourage you to try out LogStream in our sandbox to see what shifting the source of truth can do for you and your enterprise. If you have questions, feel free to jump into our community.