
Data Chaos MUST Be Curbed, but How?

March 26, 2024
Written by Jackie McGuire

Jackie McGuire is a Senior Market Strategy Manager at Cribl, focused on the security market. Prior to joining Cribl, Jackie was a Research Analyst with S&P Global, writing, speaking, and providing thought leadership on information security and Web3. Jackie has also worked as a data scientist in cybersecurity, developing behavior analysis and anomaly detection models; been co-founder, CEO, and CFO for several startups; and, before her work in technology, was a licensed securities broker and SEC Registered Investment Advisor.

Categories: Learn

My introduction to the world of data science was writing anomaly detection for a SIEM that catered to banks and credit unions. Some of these places were running on 50-year-old IBM core banking servers — meaning that someone trying to turn off a light in a server room could take down an entire bank with a literal flip of the wrong switch.

While some companies take their time updating infrastructure, others still embody the move-fast-and-break-things philosophy of the early dot-com era giants. Spoiler alert — neither one of them is good for security. Outdated technology and innovation-at-all-costs have both led to the current chaotic state of data.

We’re seeing the effects of not having found a happy medium — breaches are on the rise across healthcare, technology, finance, and pretty much every other industry. With the constant push and pull around this complex problem, a simple, one-size-fits-all solution is unlikely.

Where Does All the Data Chaos Come From?

Part of the problem is that data is the smallest unit of measurement in security. Trying to resolve an issue so big at the cellular level is bound to be overwhelming. While improvements in DLP have begun to address the management side of the equation, we still haven’t scratched the surface of the data creation side.

A recent study shows that enterprises create over 64 zettabytes (ZB) of data, an amount that’s growing at a 28% CAGR. Most organizations are overly permissive with all of that data — in part so they don’t have to monitor access requirements, but also because data has to be convenient in order for people to be productive with it.

Once they get access to it, people don’t always understand the sensitivity of the information that they’re dealing with. We’ve gotten better with obvious things like dates of birth, SSNs, and other PII — but how many people outside of security know that access tokens shouldn’t be shared, or even know what a token is?
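Even a simple pattern scan can catch the obvious cases before data gets shared. The sketch below is illustrative only — the labels and regexes are hypothetical examples, and production secret scanners use far larger and more carefully tuned rule sets:

```python
import re

# Hypothetical patterns for illustration; real scanners use
# much more extensive and precise rule sets.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9\-._~+/]{20,}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def find_sensitive(text: str) -> list[tuple[str, str]]:
    """Return (label, match) pairs for anything that looks sensitive."""
    hits = []
    for label, pattern in PATTERNS.items():
        hits.extend((label, m) for m in pattern.findall(text))
    return hits
```

A scan like this won't catch everything, but it makes the "what even is a token?" problem concrete: anything the patterns flag is something most people outside of security wouldn't think twice about pasting into a chat.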

The kind of data that’s considered valuable has also changed. Social engineering attacks are on the rise, so hackers are likely to hack into Slack channels or photos stored in the cloud to find out your pet’s name or favorite brand of coffee. People are too often willing to exchange security for convenience without full knowledge of the trade-off they’re making.

Can You Control All This Data After It’s Been Created?

Ideally, we could address the data problem at the point of creation — but this is easier said than done. It’s much more difficult and expensive to classify data up front than to add controls after the fact, but it might be worth it to give it a try.

If each byte of data represents an actual data point — a date, timestamp, or other value — then we could capture the unique value, make sure it’s created only once, and let downstream consumers reference it rather than re-create it. This way, you can track the path each data point takes and who touches it on the way to its destination.
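A minimal sketch of this "create once, track consumers" idea, assuming a content hash serves as each value's identity. The DataRegistry class and its methods are hypothetical, purely to illustrate the concept:

```python
import hashlib
from collections import defaultdict

class DataRegistry:
    """Toy registry: each value is created once (keyed by content hash),
    and every consumer that reads it is recorded, giving a lineage trail."""

    def __init__(self):
        self._values = {}                  # hash -> value
        self._lineage = defaultdict(list)  # hash -> [consumer, ...]

    def create(self, value: str) -> str:
        key = hashlib.sha256(value.encode()).hexdigest()
        # Idempotent: re-creating an identical value returns the same key
        # instead of making another copy.
        self._values.setdefault(key, value)
        return key

    def read(self, key: str, consumer: str) -> str:
        self._lineage[key].append(consumer)  # record who touched it
        return self._values[key]

    def lineage(self, key: str) -> list[str]:
        return list(self._lineage[key])
```

The design choice worth noting is that the hash makes creation idempotent: the same value created twice yields one stored copy and one key, and the lineage list answers "who touched this data point on the way to its destination."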

This would help not only with access controls but also for those times when people forget to ask before they park a bunch of data somewhere. The next time someone accidentally dumps 400 TB of data into a data lake, you’d know exactly what all of it is, who has the right to hold it, and for how long. As we pass more regulations around PII, enterprises could use this approach to avoid the fines that come from non-compliance.

This approach will save enterprises enormous amounts of money in the long run if we can figure it out. Think about how often the same piece of data is created, multiple times over, throughout an organization. Creating data once and distributing it from there saves quite a bit of storage space, computing power, and more.

So, can we do it? Absolutely. Are we willing to commit the time and resources to doing it? I guess we’ll have to wait and see.

Maybe AI Will Come to the Rescue?

As hot as AI and machine learning are right now, they’re still in many ways a solution in search of a problem. This may be the perfect use case, though: AI can likely identify files and apply naming conventions much more quickly and reliably than humans.

New advancements are giving us the ability to create internal LLMs that can learn an enterprise’s own data and apply that training to categorize it. We’re headed in the right direction, but let’s be real: most enterprises don’t even require MFA, so we’re probably not going to jump straight into intelligent data categorization and file naming.

Where Best Practices and Standards Fail, Money Works

The goal should be to establish best practices, so things gradually improve as companies set themselves up from scratch over the next couple of decades. But financial regulations are generally what cause things to change.

We would never have known about breaches like those at Clorox and Johnson Controls without the new SEC disclosure rule, and the same thing will happen with data categorization and identification. Cyber insurance is getting significantly more expensive, in part because actuaries aren’t experts in data, and the way they value a company’s data and its potential loss hasn’t always been accurate.

Once they catch up, how we value data as an asset and liability is going to change — and categorization/identification will be critical for that. A social security number is a lot more valuable than a port number, so there will be a clear distinction between insuring X zettabytes of random data versus the same amount of critical data.

So, What Should We Do?

The answer for starting to address tech debt is actually the same one I would give as a financial advisor for addressing your actual debt. First, you have to change how you do things right now and going forward. Then you have to work backwards and address the problems you’ve already created.

We need to develop significantly better policies around data creation. Enterprises can use something like Google Drive to sync and categorize data on cloud endpoints. Once you have better policies in place for how you’re creating data, you can go back through the data you’re storing and decide if you still need to be storing it. If you do, determine how many copies exist. Do they all actually need to?
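As a starting point for that backward pass, finding redundant copies can be as simple as grouping files by content hash. This is a hypothetical helper for illustration, not a feature of any particular product:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Group files under `root` by content hash; any group with more
    than one path is a set of redundant copies worth reviewing."""
    by_hash = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}
```

Run against a shared drive or data lake export, each returned group answers the question above directly: here are N identical copies, and someone has to justify why more than one needs to exist.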

As big of a problem as this is, it’s only going to get worse. Until organizations have better visibility into what data is flowing where, they’ll continue to be at increased risk of cyber attacks.

Cribl, the Data Engine for IT and Security, empowers organizations to transform their data strategy. Customers use Cribl’s suite of products to collect, process, route, and analyze all IT and security data, delivering the flexibility, choice, and control required to adapt to their ever-changing needs.

We offer free training, certifications, and a free tier across our products. Our community Slack features Cribl engineers, partners, and customers who can answer your questions as you get started and continue to build and evolve. We also offer a variety of hands-on Sandboxes for those interested in how companies globally leverage our products for their data challenges.
