Another Splunk .conf has come to a close, with plenty of announcements and many great customer presentations. On the observability side, Splunk announced always-on application profiling, enhanced database visibility to detect slow queries, and expanded OpenTelemetry support. I like what Splunk has done with what was formerly known as SignalFx; the product has been transformed since the acquisition and is now a significant part of Splunk's portfolio.
The security-related announcements were more muted, mostly teasing additional cloud-based Splunk security products; I expect to see cloud-based Phantom and the fabled Mission Control soon. The best announcement was Splunk SURGe, which Splunk calls "an elite team of cybersecurity experts" whose goal is to provide technical guidance to customers "during high-profile, time-sensitive cyberattacks." This could be a valuable service: given how many companies run Splunk ES, Splunk can see the scale of attacks across its customer base. Being able to surge resources where they are needed across a broad range of customers could be very powerful, and I am interested to see how Splunk evolves this offering.
Splunk's most significant announcement around its core search products, Splunk Cloud and Splunk Enterprise, was expanded workload pricing options that shift to a more utilization-based model: you pay for what you use rather than for how much data you ingest. This model could save money for customers who search their data infrequently, since their CPU utilization will be relatively low and will not trigger as many costs as a customer running hundreds of concurrent searches. Of course, every customer must carefully evaluate its own particulars to determine whether workload pricing is the right model.
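To make the ingest-versus-workload trade-off concrete, here is a minimal sketch in Python. The rates and numbers are illustrative assumptions for comparison only, not Splunk's actual pricing:

```python
# Hypothetical rates -- illustrative assumptions, not Splunk's actual pricing.
def ingest_cost(gb_per_day, rate_per_gb=5.0):
    """Ingest-based model: pay for every GB indexed, searched or not."""
    return gb_per_day * rate_per_gb

def workload_cost(compute_units, rate_per_unit=2.0):
    """Workload-based model: pay for the compute capacity actually consumed."""
    return compute_units * rate_per_unit

# A team ingesting 500 GB/day but searching infrequently (low CPU utilization):
heavy_ingest = ingest_cost(gb_per_day=500)       # 2500.0 under ingest pricing
light_search = workload_cost(compute_units=100)  # 200.0 under workload pricing
print(light_search < heavy_ingest)               # True: this profile favors workload pricing
```

The point of the sketch is only that the two models price different axes (data volume versus compute consumed), so the same usage profile can cost very different amounts under each.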
Splunk also announced that federated search is now available. I am very interested in federated search because it lets you search both your on-premises and Splunk Cloud instances from one UI, a powerful feature that bridges silos and adds flexibility. A new Splunk Cloud storage option called Flex Index was announced as well; it offers competitive rates for storing high-volume, low-value data, giving teams more data management flexibility.
The best part of .conf is the customer presentations. I love hearing what customers are doing with Splunk, and their talks are where I get my best ideas.
The Accenture Federal Services (AFS) team did an outstanding job describing how they replaced a legacy big data security platform with solutions built for scale: Splunk, paired with data pipeline tools like Cribl Stream. The legacy solution was slow and could not scale to meet current and future requirements. The AFS team needed a flexible solution to ingest a wide range of data at scale and then shape the data in flight, optimizing and enriching it to work best with Splunk. The metrics the team presented after deployment were stunning: detection, response, and resolution times went from weeks and days to hours and minutes.
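To illustrate what "shaping data in flight" means in practice, here is a conceptual sketch in Python. This is not Cribl Stream's actual API; the field names and lookup table are hypothetical, and the point is only the pattern: drop what the index does not need, and enrich events with context before they land in Splunk.

```python
# Conceptual sketch of in-flight shaping -- not Cribl Stream's actual API.
import json

# Hypothetical enrichment table (in a real pipeline this might be a lookup file).
REGION_LOOKUP = {"10.0.0.5": "us-east"}

def shape_event(raw_line):
    """Optimize and enrich one log event before it reaches the index."""
    event = json.loads(raw_line)
    # Optimize: drop verbose fields that would inflate index volume.
    for noisy in ("debug_trace", "raw_headers"):
        event.pop(noisy, None)
    # Enrich: add context so downstream detections can search on it directly.
    if "src_ip" in event:
        event["src_region"] = REGION_LOOKUP.get(event["src_ip"], "unknown")
    return event
```

For example, `shape_event('{"src_ip": "10.0.0.5", "debug_trace": "...", "msg": "login"}')` would strip the debug field and tag the event with `src_region: us-east` before indexing.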
What impressed me most was that the team created processes to continuously optimize data, build and test detections, and finally deploy those detections, driving better results faster than previously possible. Combining quality processes with great tools is the secret to success; tools alone will not create a quality solution. Working with data is a development process, and operations teams must adopt a developer mindset to be effective: know your data, build tools to use the data, and constantly test your results. Rinse and repeat. The AFS team did all of the above and more, and I am looking forward to hearing what they do next.
The TransUnion Splunk team presented how it created an advanced anomaly detection framework, combined with per-service scoring, to replace expert-driven fault detection. The team built the data framework that drives the solution, and Cribl Stream was a pivotal component, feeding Splunk high-speed, optimized data for ML, including turning logs into metrics. Unfortunately, Splunk's legacy XML dashboarding could not display the breadth of that data in easy-to-consume visualizations; the only option was dozens of awkward, very slow visuals. After struggling with the existing tools, the team settled on the Splunk Dashboard Beta App. The app was a risk, but it quickly became apparent that it was the answer: it provided a rich framework for displaying data, and feedback from users was outstanding.
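"Turning logs into metrics" is worth a quick illustration. The sketch below is not Cribl Stream's actual API; it is a hedged Python example of the general idea, with hypothetical event fields: collapse many per-request log events into a handful of per-service metrics, which is far cheaper to store and much friendlier to ML and dashboards than raw logs.

```python
# Conceptual sketch of logs-to-metrics aggregation -- not Cribl Stream's actual API.
from collections import defaultdict

def logs_to_metrics(events):
    """Collapse per-request log events into per-service count/latency metrics."""
    counts = defaultdict(int)
    latency_sum = defaultdict(float)
    for event in events:
        svc = event["service"]
        counts[svc] += 1
        latency_sum[svc] += event.get("latency_ms", 0.0)
    # One metric record per service replaces many raw log events.
    return [
        {"service": svc,
         "request_count": counts[svc],
         "avg_latency_ms": latency_sum[svc] / counts[svc]}
        for svc in counts
    ]
```

Feeding per-service rollups like these into anomaly detection, instead of raw log lines, is what makes per-service scoring tractable at high volume.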
The team knew it had done something special when, during testing, the solution detected a complex edge case and alerted on an outage before it occurred. Ordinarily, that alert would only have fired if one of the 1-2 SMEs monitoring the system knew how to interpret the data and manually created an alert; otherwise, the outage would have occurred and impacted customers. Replacing specialized knowledge with Splunk-powered anomaly detection was a significant win, enabling broader, better application support that is not tied to experts manually monitoring 24/7. In addition, the team did an outstanding job marrying its ML data framework with advanced visualization to give application support teams a powerful weapon against downtime.
Full disclosure – I used to manage the TransUnion Splunk team and could not be more proud of what they have built.
Cribl CEO Clint Sharp also spoke on the future of observability, and you can watch his keynote below or on our YouTube channel.
Splunk .conf is always a great learning experience. I attended my first .conf in 2013, and it keeps getting better. From new software and features to customer presentations, I always feel like I leave .conf better equipped to face the challenges of the following year.
Try Cribl’s free, hosted Stream Sandbox. I’d love to hear your feedback; after you run through the sandbox, connect with me on LinkedIn, or join our community Slack and let’s talk about your experience!