Ed Bailey is a passionate engineering advocate with more than 20 years of experience instrumenting a wide variety of applications, operating systems, and hardware for operations and security observability. He has spent his career working to empower users with the ability to understand their technical environment and make the right data-backed decisions quickly.
The cybersecurity industry is experiencing an explosion of innovative tools designed to tackle complex security challenges. However, the hype surrounding these tools has outpaced their actual capabilities, leaving many teams struggling with complexity and unable to extract value from their investment. In this conversation with Optiv's Randy Lariar, we explore the potential and dangers of bringing advanced data analytics and artificial intelligence tools to the cybersecurity space.
The complexity that IT and security teams deal with today is mind-boggling. They’re often tasked with stitching together data from factories, office buildings, data centers, multiple clouds, and thousands of applications — just to answer the question, “Are we secure?”
The sheer quantity of new data produced in the last few years is one of the main contributors to the increase in complexity of managing data. Entire organizations are working from home and shifting to SaaS or cloud solutions, leading to an enormous increase in the number of logs produced.
With the surge in logs comes an increase in the tools needed to manage them. The average organization has about 70 different tools sending in log data. If each of these sources were configured correctly and sent in beautifully formatted, 100% useful information, the extra data wouldn’t be much of an issue — but this isn’t the case.
With the recent acceleration of AI capabilities, many organizations are also incorporating AI tools into their data management strategy. All of the data produced by these new tools has to be logged, so it doesn’t look like the boom in data quantity is slowing down any time soon.
All of this complexity calls for a return to simplicity. IT and security teams need to start from first principles and ask basic questions, like “What are we here to do?” and “Which problems are we trying to solve?” to determine where to focus their attention.
It’s easy to get distracted by fancy new tools and dashboards with a bunch of numbers and colors everywhere. But when you’re considering buying a new tool, you really have to consider whether it’ll solve your organization’s problems in the long run. Ask yourself if it’s a solution you can support — is the potential benefit worth the added complexity?
I spoke with someone recently who had three different EDR solutions deployed in their environment. Situations like this can pop up in organizations after a few acquisitions, but at some point, you have to ask if you’re getting value from each of them. Can you drop down to one instead of three? That way, your three admins can focus on one of the tools and properly tune it to get better results and value.
A lot of people equate automation with the elimination of risk when it’s really just a shift. If you start to use an LLM, AI model, or large machine learning model without understanding the different failure cases, there’s potential for disaster. You have to have plans and processes in place or put a human in the loop to avoid major issues.
When you integrate AI features or ChatGPT into your products, you have to be aware of the data produced and the information being fed in. Are you logging all of the new data streams coming into your environment? Are you ensuring private information isn’t being fed into a public source? It might be best to wait until large language models evolve into private models you can use internally.
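One lightweight guardrail, until private models are an option, is to scrub obvious private identifiers from prompts before they leave your environment. The sketch below is purely illustrative: the `redact` helper and its regex patterns are assumptions for the example, not an exhaustive or production-grade filter, and they stand in for whatever data-loss-prevention step your organization actually uses.

```python
import re

# Illustrative patterns only -- real deployments need far broader coverage
# (names, keys, tokens, internal hostnames, etc.).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace each matched identifier with a labeled placeholder
    before the prompt is sent to a public LLM API."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"<{label}-redacted>", prompt)
    return prompt

print(redact("Contact jane@example.com from host 10.0.0.5"))
# → Contact <email-redacted> from host <ipv4-redacted>
```

A filter like this sits naturally in the same pipeline that logs outbound LLM traffic, so the redacted prompt is also what lands in your audit trail.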
I recently heard about an organization that integrated LogScale with ChatGPT, where it helped walk a junior engineer through what to do in a particular threat detection scenario. It was a perfect example of a use case for an AI model in that it could help accelerate productivity. The ability of LLMs to detect, resolve, and inform you about problems isn’t quite there yet, and I’m not sure we’ll ever get there — but the real value may be in the millions of small interactions. AI can help people through complicated tasks quickly, or help with simple, repetitive tasks like generating reports and sending emails. With these items off your to-do list, you’ll have more time to think about the risks to your business and how to address them.
Check out the full livestream to learn more about how AI is affecting the cybersecurity landscape, including why the data going into LLMs needs constant supervision and quality checks, and to learn more about Randy and what he and his team of seasoned data scientists and engineers are up to at Optiv.
Cribl, the Data Engine for IT and Security, empowers organizations to transform their data strategy. Customers use Cribl’s suite of products to collect, process, route, and analyze all IT and security data, delivering the flexibility, choice, and control required to adapt to their ever-changing needs.
We offer free training, certifications, and a free tier across our products. Our community Slack features Cribl engineers, partners, and customers who can answer your questions as you get started and continue to build and evolve. We also offer a variety of hands-on Sandboxes for those interested in how companies globally leverage our products for their data challenges.