Ed Bailey is a passionate engineering advocate with more than 20 years of experience instrumenting a wide variety of applications, operating systems, and hardware for operations and security observability. He has spent his career working to empower users with the ability to understand their technical environment and make the right data-backed decisions quickly.
In this Livestream conversation, I spoke with John Alves from CyberOne Security about the struggles teams face in modernizing a SIEM, controlling costs, and extracting optimal value from their systems. We delve into the issues around single system-of-analysis solutions that attempt to solve detection and analytics use cases within the same tool.
We explored the strategic limitations of this type of security architecture, presenting alternative options for effectively mixing and matching data platforms. Be sure to watch the full conversation to get on the path toward achieving the optimal combination of data management and cost control capabilities.
If your security architecture is centered around a SIEM that houses all your security and operational data, it’s time for an upgrade. Data quantities, cyber attacks, and regulatory requirements are all on the rise, so having a single destination for your data leaves too much room for vulnerabilities.
Until recently, buying a SIEM meant deploying its agents, putting all your data into it, and going on your merry way. You were almost 100% confined to that one framework: if you wanted to use UEBA, your vendor or one of their partners provided it. Options for operating outside your SIEM or bringing in third-party vendors were very limited.
About five years ago, the concept of an observability pipeline emerged, allowing organizations to funnel their observability and security data through a consistent data plane. The idea of controlling where your data gets stored was born, and vendor-neutral considerations began gaining popularity.
Admins can now make copies of events for their SIEM, data lake, UEBA solution, or someone else’s data lake — easily turning one event into four events that power different parts of their security stack. By moving data into a data lake instead, admins can analyze data and build dashboards for operations teams without bloating their ingest. Teams have more choice and control over their data than ever before, so they can consider their specific needs when building out their infrastructure.
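In rough terms, that fan-out is just event duplication with per-copy routing. Here is a minimal Python sketch of the idea; the event fields and destination names are hypothetical illustrations, not any vendor's actual API:

```python
# Minimal sketch: turn one incoming event into one routed copy per destination.
# The event shape and destination names are illustrative assumptions.

def fan_out(event, destinations):
    """Return one copy of the event per destination, tagged with its target."""
    return [{**event, "_destination": dest} for dest in destinations]

event = {"source": "firewall", "action": "deny", "src_ip": "10.0.0.5"}
copies = fan_out(event, ["siem", "data_lake", "ueba", "partner_lake"])

# One event has become four copies, each powering a different part of the stack.
for copy in copies:
    print(copy["_destination"])
```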
During our discussion, John mentioned how this flexibility is no longer a wish-list item for his clients, but a necessity. As the industry transitions to cloud infrastructure and cloud-based computing, organizations require vendor-neutral data that supports their scalability efforts. There are a host of benefits you get from modernizing your security architecture.
Routing data that isn’t needed for security to object storage is one of the best ways to reduce SIEM license costs. Ingest costs go down, and you avoid the archive-data upsell (typically a 4–8x markup) compared with using your own object storage or your SIEM cloud platform’s archive. You can also store it in a vendor-neutral format, giving you enormous flexibility that you wouldn’t get otherwise.
We recently worked with a developer team and their debug logs, routing them to a lower-cost S3 bucket instead of their SIEM. All we had to do was create a rule in Cribl Stream to route them to the data lake, and now they’re available to be restored whenever necessary. This is just one example of many where we can set customers up to meet their simultaneous need for availability but lower cost and overhead.
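The logic of that rule is simple in spirit. This Python sketch mimics it (Cribl Stream routes are configured with filter expressions in the product itself; the field name and destination labels here are assumptions for illustration):

```python
def route(event):
    """Send debug logs to low-cost object storage; everything else to the SIEM.

    The 'log_level' field and the destination names are illustrative assumptions.
    """
    if event.get("log_level") == "DEBUG":
        return "s3_debug_bucket"  # cheap storage, restorable whenever necessary
    return "siem"

print(route({"log_level": "DEBUG", "msg": "cache miss"}))    # s3_debug_bucket
print(route({"log_level": "ERROR", "msg": "auth failure"}))  # siem
```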
When you can reduce your SIEM license costs, you no longer have to choose which data sources you can afford to collect. And when engineers are no longer constrained by missing raw data, security teams can focus on security instead of just moving data around.
No more time spent on tasks like going out to a server to manually zip up and pull in logs. The result? Better detections, analytics, and security.
Each team has a different use case for the data the organization collects — having different pipelines to transform and send data to different sources is invaluable. Putting firewall, threat, traffic, and systems logs into a single destination is a great way to bloat your ingest. And not all logs from a single data source are security relevant.
Routing some of them into a storage account or data lake not only saves on ingestion costs and creates less noise for security teams, but also lets you give your infrastructure, firewall, and other teams access to the logs relevant to them. Route your threat logs straight into the SIEM, but send traffic and other logs straight into the data lake for your infrastructure and network teams.
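That split can be expressed as a simple routing table. This Python sketch is only illustrative; the log-type values and destination names are assumptions, not a real product configuration:

```python
# Illustrative routing table: only threat logs go to the SIEM.
# Log-type values and destination names are assumptions.
ROUTES = {"threat": "siem"}  # threat logs need full SIEM detection and analytics

def destination(event):
    # Traffic, firewall, system, and any other log types default to the data
    # lake, where infrastructure and network teams can still access them.
    return ROUTES.get(event.get("log_type"), "data_lake")

print(destination({"log_type": "threat"}))   # siem
print(destination({"log_type": "traffic"}))  # data_lake
```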
Another benefit of keeping raw copies of data is complying with retention requirements. If you’re manipulating data before it goes into your SIEM, you may not be meeting standards that require unaltered records. Transform events to get what you need for your SIEM, but keep unmanipulated, raw copies in your data lake. Your IR team or legal counsel can control the forensic copies.
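One way to picture that pattern: every event is split into an untouched copy for the data lake and a trimmed copy for the SIEM. A minimal Python sketch, where the field names are purely hypothetical:

```python
def split_for_retention(raw_event):
    """Return (raw, siem) copies of an event.

    The raw copy stays unmanipulated for the data lake; the SIEM copy drops
    fields not needed for detection. Field names here are assumptions.
    """
    siem_copy = {k: v for k, v in raw_event.items() if k != "verbose_payload"}
    return dict(raw_event), siem_copy

raw, trimmed = split_for_retention(
    {"ts": "2024-01-01T00:00:00Z", "action": "deny", "verbose_payload": "..."}
)
```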
As insurance companies get more sophisticated and start hiring engineers as auditors, they’ll dive deeper into your architecture than before. They’ll ensure you have a SIEM in place but also check to see if you’re putting the right data in and using it appropriately. Government auditors will want to see all your data sources and detections. They’ll be ready to write findings if you’re not following best practices.
The prevalence of bad data or an overwhelming amount of data leads to various issues with detection, and drives costs higher and higher. It is extremely common to witness a year-over-year cost increase of up to 35%, which is clearly unsustainable.
Watch the full livestream to hear John and me talk about alternative options for your SIEM platform, so you can be empowered to re-architect your data strategy. With the right strategies, SIEM platform challenges can be overcome, and we’re here to help as you embark on this transformative journey.