The traditional approach for searching observability data is tried-and-true: collect, ingest, index, and then query.
Once all of that search staging is accomplished, we can perform high-speed, deep-dive analysis of the data. But is this the best way, or even the only way, to search all that observability data? The answer to the first question is maybe; it depends on what you are trying to accomplish. The answer to the second question must be a resounding no. Maybe, just maybe, there is a justification for having more than one search tool supporting your system(s) of analysis. Maybe it's time to start thinking about different query tools for different use cases, and that's what this blog covers.
Imagine you need to locate some very specific data. It could be anywhere in your enterprise, on any one of hundreds or thousands of distributed hosts, or already stored in an S3 bucket, and there could be multiple instances that need to be located. What's the best approach to accomplish this query? Collect, ingest, and index it all, and then query the data? Or ask the question first, locate the instances, and then collect information from those sources for deeper analysis?
Ok, first let me state that there are several search and analysis investigation tools available from vendors with household names; these are extremely effective and aren't going anywhere in the foreseeable future. But just like on my tool bench, I have multiple tools that can fundamentally accomplish the same task, and I select the appropriate tool based on the use case. If I am building a deck, I am grabbing the plug-in power drill with the screwdriver bit because it is designed for that task; however, if I just need to tighten a few screws, I am grabbing the regular screwdriver. Neither tool is a replacement for the other; they serve different use cases and are complementary in getting the job done. In the same vein, if I just need to discover where some specific data might exist across my enterprise, I probably don't need all the horsepower, staging requirements, and costs associated with my traditional investigative tools. I very well might require those capabilities later, but not right now. So, what's the second option?
At Cribl, we've flipped the approach to searching data on its head. Instead of having to stage a search (you know: collect, ingest, index, and only then query), what if you discovered first and then collected only what was of interest, what was of value? Now you can with Cribl Search. You can dispatch queries to where the data is being generated (still on the hosts) or where it already sits, such as in an S3 bucket. This is what we call searching data-in-place, or as I say, Point & Shoot.
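To make that concrete, here is a minimal sketch of what a data-in-place query can look like. Cribl Search uses a Kusto-style pipeline syntax; the dataset names below (edge_logs, s3_archive) are hypothetical placeholders for datasets you would configure to point at your edge hosts or your S3 bucket, and the field names are illustrative only.

```
// Hypothetical dataset pointing at log files still sitting on edge hosts
dataset="edge_logs"
| where message contains "connection refused"   // filter in place; nothing is ingested first
| limit 100

// Hypothetical dataset pointing at objects already landed in an S3 bucket
dataset="s3_archive"
| where status == 500
| summarize errors=count() by host               // learn which hosts are affected
```

The point is that the query goes to the data rather than the data coming to the query, so you only move results, not raw volume.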
The tried-and-true traditional search collects a mountain of data, but only a very small percentage of it is probably going to be useful. In fact, we have now reached the point where our ability to generate, collect, and store data has exceeded our ability to effectively analyze it. As a result, there are a lot of wasted resources and extra costs involved in collecting and processing huge volumes of data that may have little value. Imagine using the point & shoot approach to locate critical data first, and then leveraging the advanced capabilities of existing analysis systems to collect just what is required for deeper analysis.
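In practice, that point & shoot workflow might start with a lightweight aggregation to learn where the interesting data lives, followed by a narrow query whose results you hand off to your existing system of investigation. Again, this is a sketch; the dataset and field names are assumptions, not a prescription for your environment.

```
// Step 1: discover which hosts are even emitting the suspicious event
dataset="edge_logs"
| where eventType == "auth_failure"
| summarize failures=count() by host
| order by failures desc

// Step 2: collect only what matters, pulling the matching events from the
// noisiest host for deeper analysis elsewhere
dataset="edge_logs"
| where eventType == "auth_failure" and host == "web-042"
| limit 1000
```

Everything that doesn't match stays where it is, which is exactly the resource and cost savings described above.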
With Cribl Search, users are able to query data in place, allowing you to identify and correlate data from different sources, determine its value, and then perform deeper analysis using complementary systems of investigation. Does this sound interesting? Ready to learn more? Great! If you're responsible for monitoring, managing, and querying the volumes of observability data being generated, this webinar is one you shouldn't miss. So, take the opportunity to join me, bring your own challenges to discuss, and experience a new way to search.
Experience a full version of Cribl Stream and Cribl Edge in the cloud with pre-made sources and destinations.