The Evils of Data Debt

June 20, 2023
Written by
Ed Bailey

Ed Bailey is a passionate engineering advocate with more than 20 years of experience in instrumenting a wide variety of applications, operating systems, and hardware for operations and security observability. He has spent his career working to empower users with the ability to understand their technical environment and make the right data-backed decisions quickly.

Categories: Learn

In this livestream, Jackie McGuire and I discuss the harmful effects of data debt on observability and security teams. Data debt is a pervasive problem that increases costs and produces poor results across observability and security. Simply put: garbage in, garbage out. We delve into what data debt is and discuss some long-term solutions. You can also subscribe to Cribl's podcast to listen on the go!

In September of last year, Twitter's former head of security told lawmakers that the company didn't understand how to properly manage 80% of its data. Since then, the topic of data debt has started to get the attention it deserves.

Why Data Debt Is Such a Widespread Issue

Organizations typically have too much data in a million different formats, making it difficult for security and operations teams to consume and make sense of it. Even large organizations with all the resources they need at their disposal struggle with this.

In the analytics space, highly disciplined people with degrees in data science build very advanced schemas — things don’t change much, there are only a few formats, and they can get a lot of results out of their data. However, in the operational security space, it’s quite the opposite. Data is highly volatile, there are too many different tools, and there isn’t the same amount of discipline, so the struggle to collect and do something with that data is enormous.

In addition, not everybody who works with data has best practices in mind with regard to retention and storage. For example, it’s not a top priority for data scientists to think about where the data they pull is going to live, how long it’s going to be there, or what they’ll do with it when they’re done — these are things that most people don’t think about unless it’s a specific part of their job.

The Costs of Improperly Formatted Data

Poorly formatted data costs money to consume: parsing syslog, XML, JSON, or data from Elastic and Splunk eats up precious CPU cycles. Instead of using that capacity to analyze your data, get value from it, and drive your detections, you burn up resources just ingesting it.
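To make that cost concrete, here's a minimal Python sketch with made-up log content. It compares repeatedly extracting fields from a raw syslog-style string against lookups on the same event parsed once into a structure; the regex path re-pays the parsing cost on every single read:

```python
import json
import re
import timeit

# The same event twice: an unstructured syslog-style line, and a structured
# record parsed once at ingest. (Content is invented for illustration.)
raw = "<134>Jun 20 09:14:02 web01 sshd[4242]: Failed password for admin from 203.0.113.7 port 522"
event = json.loads('{"host": "web01", "app": "sshd", "user": "admin", "src_ip": "203.0.113.7"}')

pattern = re.compile(r"Failed password for (\S+) from (\S+) port")

def from_raw():
    # Every read re-scans the whole line with a regex.
    m = pattern.search(raw)
    return m.group(1), m.group(2)

def from_structured():
    # Already parsed: reads are cheap dictionary lookups.
    return event["user"], event["src_ip"]

for fn in (from_raw, from_structured):
    print(fn.__name__, timeit.timeit(fn, number=100_000))
```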

Then you have to normalize it. When you write a security detection and try to correlate across multiple data sources in different formats, consistency is the key to finding your fields and data points.
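As a sketch of what that normalization can look like, the snippet below maps two hypothetical feeds (loosely modeled on firewall and proxy logs) onto one shared schema so a single detection query covers both; the field names are invented:

```python
# Hypothetical source-specific field names mapped to one shared schema.
FIELD_MAP = {
    "firewall": {"srcip": "source_ip", "dstip": "dest_ip", "act": "action"},
    "proxy":    {"c-ip": "source_ip", "s-ip": "dest_ip", "s-action": "action"},
}

def normalize(source: str, event: dict) -> dict:
    """Rename a feed's fields so every source answers to the same query."""
    mapping = FIELD_MAP[source]
    return {mapping.get(key, key): value for key, value in event.items()}

fw = normalize("firewall", {"srcip": "10.0.0.5", "dstip": "203.0.113.7", "act": "deny"})
px = normalize("proxy", {"c-ip": "10.0.0.5", "s-ip": "203.0.113.7", "s-action": "blocked"})

# One correlation key now works across both sources.
assert fw["source_ip"] == px["source_ip"]
```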

Consistency is also the driving force behind getting valuable insights via machine learning. The amount you can learn from your data is directly correlated with how consistent and well-formatted it is. If your data is all over the place, it’ll be difficult to get any reliable information from your ML tools.

Don’t Forget About Compliance

From a compliance standpoint, there's a whole other set of issues. It's very easy to end up with data contamination without even knowing it. If there's PII, HIPAA-regulated data, or any other type of sensitive data that is unaccounted for, you're left open to a host of regulatory problems.

Fines for mishandling sensitive data are not insignificant. You need to know where your data is coming from and what the controlling authorities require to protect it. If you're managing telemetry logging for security and operations, you need a GRC component to help you out, because the set of data points involved is overwhelming.
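One common mitigation is to mask obvious sensitive patterns before data ever reaches long-term storage. Here's a minimal sketch assuming simple regex rules; a real deployment would use a vetted, compliance-approved ruleset rather than these two hypothetical patterns:

```python
import re

# Hypothetical patterns for illustration only.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def mask(line: str) -> str:
    """Redact obvious PII from a log line before it is stored."""
    line = SSN.sub("XXX-XX-XXXX", line)
    return EMAIL.sub("<redacted-email>", line)

print(mask("user jane.doe@example.com submitted SSN 123-45-6789"))
# -> user <redacted-email> submitted SSN XXX-XX-XXXX
```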

When it comes to security and UEBA anomaly detection, data scientists build models around features of the data that flag bad actors. If your data is messy, models can latch onto spurious features, making it look like there are additional anomalies. If this goes unchecked for a while, you end up with tons of false positives that analysts eventually become desensitized to and start to dismiss, leaving the door open for an actual threat to slip in.
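A toy example of how that happens: the same person recorded three different ways looks like three distinct entities to a naive model, inflating its anomaly counts. The usernames and the normalization rule below are hypothetical:

```python
# The same user, logged inconsistently by three different tools.
events = ["jsmith", "JSMITH", "j.smith@corp.example.com"]

# A naive model counts three distinct users and may flag phantom anomalies.
print(len(set(events)))  # -> 3

def canonical(user: str) -> str:
    # Hypothetical rule: lowercase, drop the mail domain, strip dots.
    return user.lower().split("@")[0].replace(".", "")

print(len({canonical(u) for u in events}))  # -> 1
```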

How to Approach Resolving Data Debt Issues

Start by asking what outcomes you’re looking for with the data — then you can work backward, decide what pieces of the data you actually need, and determine how to put processes in place to format your data.

For incident response, if you're looking at logs to determine whether you have a compromise, you have to know which pieces of data are critical and which only matter once there is an incident. Then you can decide whether you need to keep a full raw copy of the logs, or whether it's more efficient to pull out the few fields you need for your primary purposes and keep the full copy in a more cost-effective object store.
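A rough sketch of that split, with hypothetical field names: keep only the detection-critical fields in the expensive analytics tier, and write the full-fidelity record to cheap object storage in case an incident demands it:

```python
import json

# Hypothetical choice of which fields the analytics tier needs hot.
HOT_FIELDS = {"timestamp", "user", "src_ip", "action"}

def split_event(event: dict) -> tuple[dict, str]:
    hot = {k: v for k, v in event.items() if k in HOT_FIELDS}
    raw = json.dumps(event)  # full-fidelity copy for replay and forensics
    return hot, raw

hot, raw = split_event({
    "timestamp": "2023-06-20T12:00:00Z",
    "user": "admin",
    "src_ip": "203.0.113.7",
    "action": "login_failed",
    "useragent": "Mozilla/5.0 ...",
    "session_debug": {"retries": 3, "latency_ms": 842},
})
# `hot` goes to the analytics index; `raw` lands in an object store such as S3.
```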

Once you establish your end goals, start cataloging your processes. No leader is going to get behind fixing data debt problems without a set of standards to point to, especially if you’re a decent-sized organization. In that case, you’re probably dealing with 20 or 30 different development teams, working within 20 or 30 different development frameworks, with at least 15-20 security tools. All of your SaaS tools are generating logs, and each of them is doing it a little bit differently.

Using Cribl Stream to Address Data Debt

Once your standards are in place, a tool like Cribl Stream can really help put your plan in motion. You can use it to route all incoming data to its proper destination, reducing, enriching, or masking it as necessary along the way, and to make sure new data coming in doesn't worsen your data debt problem.
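Conceptually, the routing layer works like the sketch below. This is a vendor-neutral illustration in Python, not Cribl Stream's actual configuration syntax; the routes, fields, and destinations are invented:

```python
# Each route pairs a match predicate with a pipeline and a destination;
# unmatched events still land somewhere accountable instead of piling up debt.
ROUTES = [
    (lambda e: e.get("sourcetype") == "firewall",
     lambda e: {k: v for k, v in e.items() if k != "debug_blob"},  # reduce
     "siem"),
    (lambda e: e.get("sourcetype") == "app",
     lambda e: {**e, "env": "prod"},                               # enrich
     "object_store"),
]

def route(event: dict) -> tuple[dict, str]:
    for matches, pipeline, destination in ROUTES:
        if matches(event):
            return pipeline(event), destination
    return event, "default_archive"

print(route({"sourcetype": "firewall", "src_ip": "10.0.0.5", "debug_blob": "..."}))
# -> ({'sourcetype': 'firewall', 'src_ip': '10.0.0.5'}, 'siem')
```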

You can also use the replay feature to pull data out of places you have it stored and start dealing with the mountain of backlogged data you have. It would be expensive to pull it all at once, so you can use Stream to prioritize it and figure out how many gigs, terabytes, or even petabytes you want to reprocess each day — giving you a way to create a controlled plan for dealing with it all.
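The planning arithmetic itself is simple. With hypothetical numbers:

```python
# Back-of-the-envelope replay schedule (numbers are made up).
backlog_tb = 500        # stored data you eventually want to reprocess
daily_budget_tb = 8     # what the pipeline can absorb per day without pain

days = backlog_tb / daily_budget_tb
print(f"{backlog_tb} TB at {daily_budget_tb} TB/day takes about {days:.0f} days")
# -> 500 TB at 8 TB/day takes about 62 days
```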

Remember that you don’t have to eat the apple whole. You want to have buy-in from everyone across your organization when taking on a project like this — take things step by step and just worry about making progress. The good news is that you aren’t the only one with massive data debt, and Cribl is here to help! If you want to try Cribl Stream right now, spin up one of our free Sandboxes!

Check out the full video on YouTube to hear us explore the complex nature of observability and security data, and how to get the best results with a high-quality observability pipeline like Cribl Stream. With the right strategies, you can overcome data debt, and Jackie and I will help you get started on the road to success.

Cribl, the Data Engine for IT and Security, empowers organizations to transform their data strategy. Customers use Cribl’s suite of products to collect, process, route, and analyze all IT and security data, delivering the flexibility, choice, and control required to adapt to their ever-changing needs.

We offer free training, certifications, and a generous free usage plan across our products. Our community Slack features Cribl engineers, partners, and customers who can answer your questions as you get started. We also offer a hands-on Sandbox for those interested in how companies globally leverage our products for their data challenges.
