Zero Trust means a lot of things to a lot of people. Too often, it’s treated like a product you can buy: sign a contract, flip on a new platform, and declare victory.
In reality, zero trust is a journey and a methodology, not a black‑box SKU.
You start from a set of hard assumptions:
Your network will always have holes, legacy corners, and misconfigurations
Some of your tools will fail or fall out of date
People will connect from places and devices you didn’t design for
The only way to navigate that reality is with a modern, unified data strategy. If you don’t have the right data, in the right shape, in the right places, you can’t:
Tell when a breach has actually happened
Decide which tools and signals can be trusted
Prove that access boundaries are respected
Zero trust lives or dies on the quality of your data. A modern strategy for managing that data—often including data tiering—is what makes the whole thing workable at scale.
Zero Trust starts with “Assume Breach” — Data makes it real
Zero trust is often framed very plainly: assume there are a lot of openings and issues in your network. Assume things are already broken in ways you don’t fully understand.
To do anything useful with that assumption, you need data from:
Endpoints – EDR events, OS logs, agent telemetry
Network and perimeter – firewalls, VPNs, DNS, proxies, load balancers
Identity and access – IdPs, SSO, MFA, directory services
Cloud and SaaS – audit logs and control plane events
If you’re not collecting the basics — endpoint logs, security tool logs, core network telemetry — you cannot reliably tell:
When something has gone wrong
Whether a “trusted” solution is behaving correctly
Where a user or workload is actually doing damage
That’s why zero trust is really about risk management — and your data is at the center of that risk.
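One way to make “assume breach” actionable is to continuously check that each of those telemetry categories is actually producing data. Here is a minimal sketch of that idea in Python; the category names, sources, and freshness threshold are illustrative assumptions, not a prescribed standard.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory: telemetry category -> last event seen per source.
# In practice this comes from your pipeline's metrics, not a hard-coded dict.
LAST_SEEN = {
    "endpoint": {"edr-agent": datetime.now(timezone.utc) - timedelta(minutes=5)},
    "network":  {"fw-core": datetime.now(timezone.utc) - timedelta(days=3)},   # silent for days
    "identity": {"idp-sso": datetime.now(timezone.utc) - timedelta(minutes=1)},
    "cloud":    {},                                                            # never configured
}

MAX_SILENCE = timedelta(hours=1)  # assumed freshness requirement per category

def coverage_gaps(last_seen, max_silence):
    """Return categories that are missing sources or have gone quiet."""
    now = datetime.now(timezone.utc)
    gaps = {}
    for category, sources in last_seen.items():
        if not sources:
            gaps[category] = "no sources configured"
            continue
        stale = [s for s, ts in sources.items() if now - ts > max_silence]
        if stale:
            gaps[category] = f"stale sources: {', '.join(stale)}"
    return gaps

if __name__ == "__main__":
    for category, problem in coverage_gaps(LAST_SEEN, MAX_SILENCE).items():
        print(f"[gap] {category}: {problem}")
```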
The role of data strategy in Zero Trust
A data strategy for zero trust is your plan to make sure security‑relevant data is:
Captured from the right sources, consistently
Unified and enriched into a usable form
Governed and shared across boundaries safely
Accessible for real‑time decisions and long‑window investigations
Treating telemetry as “exhaust” doesn’t work here. You need to treat it as a first‑class security asset.
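To make “unified and enriched into a usable form” a bit more concrete, here is a small sketch that normalizes two differently shaped raw events into one common schema and tags them for later routing and tiering. The field names, zones, and sensitivity labels are assumptions for illustration, not a mandated schema.

```python
# Minimal sketch: normalize heterogeneous raw events into one common shape
# and enrich them with tags that downstream routing and tiering can rely on.

def normalize_firewall(raw: dict) -> dict:
    """Map a hypothetical firewall event into the common schema."""
    return {
        "timestamp": raw["ts"],
        "source": "firewall",
        "user": None,
        "src_ip": raw["src"],
        "action": raw["disposition"],
    }

def normalize_idp(raw: dict) -> dict:
    """Map a hypothetical identity-provider event into the common schema."""
    return {
        "timestamp": raw["eventTime"],
        "source": "idp",
        "user": raw["actor"],
        "src_ip": raw["client_ip"],
        "action": raw["outcome"],
    }

def enrich(event: dict, zone: str, sensitivity: str) -> dict:
    """Attach the tags that later tiering and access decisions depend on."""
    return {**event, "zone": zone, "sensitivity": sensitivity}

if __name__ == "__main__":
    fw = {"ts": "2024-05-01T12:00:00Z", "src": "10.1.2.3", "disposition": "deny"}
    idp = {"eventTime": "2024-05-01T12:00:05Z", "actor": "alice",
           "client_ip": "10.1.2.3", "outcome": "mfa_success"}
    for e in (enrich(normalize_firewall(fw), zone="operational", sensitivity="medium"),
              enrich(normalize_idp(idp), zone="operational", sensitivity="high")):
        print(e)
```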
Fix the basics: Blocking and tackling
A recurring pattern in many organizations is that teams jump straight to big ideas — AI, advanced analytics, next‑gen platforms — while skipping the blocking and tackling:
Legacy Syslog servers nobody really owns anymore
One‑off collectors that no one can explain
Misconfigured exports that silently drop critical data
A pragmatic first step toward zero trust is embarrassingly simple:
Modernize your Syslog layer
Front or replace those old servers with a pipeline that’s centrally managed, observable, and scalable.
Find out what’s actually sending data — and what isn’t, because it was never configured correctly.
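As a hedged sketch of that discovery step (not a substitute for a managed pipeline), the snippet below listens for syslog traffic, records which hosts are actually sending, and compares them against the hosts you believe should be sending. The port and the expected-sender list are assumptions for illustration.

```python
import socket

# Assumed inventory of hosts that *should* be sending syslog; in reality this
# would come from your CMDB or source of truth, not a hard-coded set.
EXPECTED_SENDERS = {"10.0.0.10", "10.0.0.11", "10.0.0.12"}

LISTEN_ADDR = ("0.0.0.0", 5514)  # illustrative port; production syslog is usually 514/udp

def observe_senders(max_messages: int = 100) -> set:
    """Collect the source IPs of the next `max_messages` syslog datagrams."""
    seen = set()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(LISTEN_ADDR)
        for _ in range(max_messages):
            _, (src_ip, _) = sock.recvfrom(8192)
            seen.add(src_ip)
    return seen

if __name__ == "__main__":
    seen = observe_senders()
    print("silent (expected but never seen):", sorted(EXPECTED_SENDERS - seen))
    print("unexpected senders:", sorted(seen - EXPECTED_SENDERS))
```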
Once that’s under control, pivot to richer data sources that matter most for risk:
Endpoints – rich, behavior‑level telemetry
DNS and proxy logs – to see where users and workloads are really going
Identity logs – to stitch together who did what, from where, and when
From there, you can start deciding where that data should go: SIEM, observability tools, data warehouse, data lake. That’s the foundation of your data strategy.
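Deciding where data should go works best when the routing logic is explicit and reviewable rather than tribal knowledge. Here is a minimal sketch of a routing table in code; the destination names and match conditions are illustrative assumptions.

```python
# Minimal sketch: route normalized events to destinations based on explicit rules.
ROUTES = [
    {"name": "detections-to-siem",   "match": lambda e: e["sensitivity"] in ("high", "medium"), "dest": "siem"},
    {"name": "ops-to-observability", "match": lambda e: e["source"] in ("firewall", "proxy"),   "dest": "observability"},
    {"name": "everything-to-lake",   "match": lambda e: True,                                   "dest": "data_lake"},
]

def route(event: dict) -> list[str]:
    """Return every destination whose rule matches the event (fan-out, not first-match)."""
    return [r["dest"] for r in ROUTES if r["match"](event)]

if __name__ == "__main__":
    event = {"source": "firewall", "sensitivity": "medium", "action": "deny"}
    print(route(event))  # ['siem', 'observability', 'data_lake']
```

The design point is fan-out: one event can legitimately land in the SIEM, an observability tool, and the lake at the same time, each serving a different use case.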
Why data tiering matters to Zero Trust
As you get more coverage and more use cases, one thing becomes obvious: you can’t treat every bit of data the same way.
Some events are highly sensitive and must stay in tightly controlled environments.
Some data is needed every day by Tier 1/Tier 2 teams for operations and investigations.
Other telemetry is only touched a few times a year, for audits, forensics, or long‑tail investigations.
A data tiering approach aligns storage, access, and sharing with how each dataset is actually used:
High‑sensitivity tiers for full‑fidelity data with narrow access
Operational tiers for de‑risked versions of that data used by broader teams
Archive and exploration tiers for long‑term retention, analytics, and model training
Instead of “collect everything, keep everything, show everything to everyone,” you intentionally place each dataset in the right tier, with the right controls and expectations.
That tiered structure is what keeps zero trust practical and affordable over time.
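A tiering policy can be as simple as a small, reviewable function that maps a dataset’s sensitivity and access pattern to a tier with explicit retention and access expectations. The sketch below is illustrative; the tier names, retention windows, and access groups are assumptions, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    retention_days: int
    allowed_groups: tuple

# Illustrative tiers; names, retention windows, and groups are assumptions.
HIGH_SENSITIVITY = Tier("high-sensitivity", 365, ("pci-investigators",))
OPERATIONAL      = Tier("operational",       90, ("soc-tier1", "soc-tier2", "it-ops"))
ARCHIVE          = Tier("archive",         2555, ("compliance", "threat-hunting"))

def assign_tier(sensitivity: str, accesses_per_month: int) -> Tier:
    """Place a dataset in a tier based on how sensitive it is and how often it's used."""
    if sensitivity == "high":
        return HIGH_SENSITIVITY
    if accesses_per_month >= 1:
        return OPERATIONAL
    return ARCHIVE

if __name__ == "__main__":
    print(assign_tier("high", 30).name)  # high-sensitivity
    print(assign_tier("low", 5).name)    # operational
    print(assign_tier("low", 0).name)    # archive
```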
Using Cribl as a data control point
Cribl’s tooling is designed to be a control point for your data plane:
Cribl Stream controls data in motion — collection, shaping, routing, and initial tier placement.
Cribl Lake and Cribl Search extend that control to data at rest — operational, investigation, exploration, and archive tiers with search‑in‑place across all of them.
Together, they give you:
Visibility – see what data is traversing which boundaries and tiers.
Access – safely expose the right level of data to the right teams, especially Tier 1/Tier 2, without handing them raw PCI or other regulated data.
Control – apply masking, reduction, tokenization, and routing based on business logic, not just ad‑hoc scripts.
In other words, Cribl helps you understand your risk, manage your risk, and still share data and get value from it, instead of locking data away where nobody can use it.
Crossing sensitive boundaries safely: The PCI example
Here’s a concrete example that fits squarely into a tiered, zero trust data strategy: telemetry that has to move between a Payment Card Industry (PCI) zone and an operational zone.
With Cribl in the path, that looks like this:
Define your boundary and tiers. Inside PCI, you treat transaction logs and related telemetry as high‑sensitivity data: full fidelity, narrow access, strict controls. Outside PCI, you have lower‑risk environments where you want to share a de‑risked view of that data with broader teams.
Place Stream on both sides of the boundary. One Stream instance lives inside the PCI zone and another in the operational zone, so all telemetry that needs to cross the boundary flows through these two control points.
Standardize collection in PCI first. Repoint Syslog servers, applications, and other sources to the Stream instance inside PCI. Use live capture, reporting, and analytics there to understand what’s actually in those logs — often discovering, for example, that audit logs have the data you care about but may be missing key enrichment.
Fork and de‑risk data before it leaves PCI. In Stream, you fork the data: one copy stays in PCI as full‑fidelity data for highly privileged investigations and compliance, while the other copy is transformed into a de‑risked operational view by masking card numbers and other identifiers, reducing records to only the fields operational teams need, and tokenizing sensitive values so tools outside PCI can still correlate events without seeing raw data. (A sketch of these transformations follows this walkthrough.)
Route into the right tools and tiers in the operational zone. The de‑risked view is sent to SIEM, observability tools, and investigation datasets in Cribl Lake, while long‑term copies land in cost‑efficient storage where Search‑in‑place can query them on demand.
Empower Tier 1 and Tier 2 without over‑privileging them. Tier 1 and Tier 2 teams can now solve many problems directly using de‑risked data, and escalation to the small group with access to full‑fidelity PCI data becomes the exception, not the rule.
You’ve implemented a risk‑aware, tiered approach at the data level: high‑risk data stays protected, de‑risked views let more people do meaningful work, and every path across the boundary is explicit, observable, and governed.
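To ground the fork-and-de-risk step above, here is a hedged sketch of those transformations in plain Python. It is not Cribl Stream configuration; it only illustrates the logic: mask card numbers, reduce records to the fields operational teams need, and tokenize sensitive values with a keyed hash so tools outside PCI can still correlate events without seeing raw data. Field names and key handling are assumptions.

```python
import hashlib
import hmac
import re

# Assumption: the tokenization key is provisioned from a secrets manager in
# real deployments, never hard-coded as it is here for illustration.
TOKEN_KEY = b"example-only-key"

PAN_RE = re.compile(r"\b\d{13,16}\b")  # simplistic card-number pattern for illustration

OPERATIONAL_FIELDS = ("timestamp", "merchant_id", "amount", "result", "card_token")

def mask_pan(text: str) -> str:
    """Replace anything that looks like a card number, keeping only the last four digits."""
    return PAN_RE.sub(lambda m: "*" * (len(m.group()) - 4) + m.group()[-4:], text)

def tokenize(value: str) -> str:
    """Keyed hash so systems outside PCI can correlate events without seeing the raw value."""
    return hmac.new(TOKEN_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def derisk(event: dict) -> dict:
    """Build the operational copy: tokenize the PAN, mask free text, drop extra fields."""
    out = dict(event)
    out["card_token"] = tokenize(event["card_number"])
    out["result"] = mask_pan(event["result"])
    return {k: out[k] for k in OPERATIONAL_FIELDS}

def fork(event: dict) -> tuple[dict, dict]:
    """Full-fidelity copy stays inside PCI; the de-risked copy leaves the zone."""
    return event, derisk(event)

if __name__ == "__main__":
    raw = {
        "timestamp": "2024-05-01T12:00:00Z",
        "merchant_id": "m-1029",
        "card_number": "4111111111111111",
        "cardholder": "Alice Example",
        "amount": 42.50,
        "result": "approved card 4111111111111111",
    }
    pci_copy, operational_copy = fork(raw)
    print(operational_copy)
```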
Standardizing collection and workflows
Another important theme is standardization. Once you have a central control point, you can start to:
Standardize how new applications emit telemetry. New services are onboarded into known pipelines with predictable tagging, boundaries, and routing rules.
Build workflows around those standards. For example, when an application team stands up a new PCI‑relevant app, part of the workflow is tagging it correctly, making sure its logs go through Stream in the PCI zone, and ensuring de‑risked copies land where SOC and IT can use them (a sketch of such an onboarding spec follows this list).
Add controls as you grow. From a GRC point of view, Stream becomes a control point for your data plane: you define who can access what data, in which environment, under which conditions, and you apply that business logic consistently across current and future data sources.
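As a minimal sketch of what such an onboarding workflow can capture, here is a declarative spec plus a gate that enforces the standards described above. The field names, zone labels, and destinations are assumptions for illustration.

```python
# Minimal sketch: a declarative onboarding record plus a gate that enforces the
# standards described above. Field names, zones, and destinations are assumptions.
REQUIRED_FIELDS = ("app", "owner", "zone", "sensitivity", "pipeline", "derisked_destinations")

NEW_APP = {
    "app": "payments-refund-service",
    "owner": "payments-team@example.com",
    "zone": "pci",
    "sensitivity": "high",
    "pipeline": "stream-pci-zone",
    "derisked_destinations": ["siem", "lake-investigation"],
}

def validate_onboarding(spec: dict) -> list[str]:
    """Return a list of problems; an empty list means the app can be onboarded."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in spec]
    if spec.get("zone") == "pci" and spec.get("pipeline") != "stream-pci-zone":
        problems.append("PCI apps must send logs through the Stream instance inside the PCI zone")
    if spec.get("sensitivity") == "high" and not spec.get("derisked_destinations"):
        problems.append("high-sensitivity apps must define de-risked destinations for SOC/IT")
    return problems

if __name__ == "__main__":
    issues = validate_onboarding(NEW_APP)
    print("OK to onboard" if not issues else issues)
```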
That’s what turns zero trust and data tiering from a one‑time project into an ongoing operating model.
Bringing it all together
Zero trust initiatives often struggle because they jump straight to tools and policy without first fixing the data foundation.
This approach looks different. Start with blocking and tackling: clean up legacy Syslog, get basic telemetry flowing, and fix obvious gaps. Next, treat zero trust as risk management focused on data, not a feature you can switch on. Then use a modern data strategy, with tiering baked in, to decide what belongs where, who can see it, and how it moves across boundaries. Finally, put a control point like Cribl Stream in the path so you can see, shape, and govern data in motion — then extend that control to data at rest with Lake and Search.
Done well, this gives you a practical, sustainable way to make zero trust real:
High‑risk data stays protected.
De‑risked views empower more teams and tools.
Every boundary crossing is explicit, observable, and governed.
It all comes down to a simple principle: start with the data, understand its risk, and let everything else follow from there.
Watch our full interview below to hear our perspective on how Cribl can support your efforts to accelerate zero trust initiatives.