
Manual vs. Auto Instrumentation in OpenTelemetry: Choose What's Right

Last edited: October 9, 2025

Choosing how you instrument your code can make or break your observability strategy and what returns you get for the effort. OpenTelemetry, the open standard for collecting and exporting telemetry data, gives teams powerful tools to capture traces, metrics, and logs from their applications. But we’ve learned the path to robust observability isn’t just about what systems you collect data from — it’s about how you collect it and what value it provides your organization.

If you’ve decided to instrument your code, you have a core decision: Do I start with manual or automatic instrumentation? Each method shapes what you see, how much control you have, and ultimately, how you respond to issues in production and learn from your systems.

At Cribl, we believe teams should have control over what telemetry they collect, where they route it, and what context they enrich it with — regardless of how it’s instrumented. So whether you’re just starting with observability or fine-tuning a mature stack, understanding your instrumentation options is key. This post walks through the tradeoffs and shows how Cribl helps teams navigate the landscape, so you can make informed choices that fit your goals.

Quick Primer: What Is OpenTelemetry Instrumentation?

OpenTelemetry instrumentation is the process of adding code or configuration to your applications to collect telemetry data, i.e., logs, metrics, and traces. It’s the bridge between your running code and your observability tools and lets you see what’s happening inside your systems.

At its heart, instrumentation works through SDKs, agents, configs, and environment variables. You might configure an autoinstrumentation agent to collect data, or use an SDK in your application to manually initiate spans or increment a metric counter.

If you are instrumenting traces, a cool part of what an agent or SDK does is pass context — unique trace, span, and parent IDs — from one request to the next, a process called "context propagation." It's this context propagation that lets you take events created in disparate systems, with no shared execution context, and stitch them together into a complete story, so you can learn more about the system as you investigate.
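To make that concrete, here's a dependency-free sketch of how the W3C `traceparent` header carries those IDs between services. This is a simplification of what the real SDK propagators do, with fabricated service names, not the OpenTelemetry API itself:

```python
import secrets

def make_traceparent(trace_id=None):
    # W3C Trace Context header format: version-trace_id-span_id-flags
    trace_id = trace_id or secrets.token_hex(16)  # 32 hex chars, shared by the whole trace
    span_id = secrets.token_hex(8)                # 16 hex chars, unique per span
    return trace_id, span_id, f"00-{trace_id}-{span_id}-01"

def parse_traceparent(header):
    _version, trace_id, parent_span_id, _flags = header.split("-")
    return trace_id, parent_span_id

# Service A starts a trace and injects the header into its outgoing request
trace_id, span_a, header = make_traceparent()
outgoing = {"traceparent": header}

# Service B extracts the context: its new span keeps the same trace_id,
# and span_a becomes its parent, stitching the two services together
parent_trace_id, parent_span_id = parse_traceparent(outgoing["traceparent"])
assert parent_trace_id == trace_id and parent_span_id == span_a
```

In real code the SDK's propagators do this injection and extraction for you on every outbound and inbound request.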

Automatic Instrumentation vs. Manual Instrumentation in OpenTelemetry: Pros, Cons, and Use Cases

OpenTelemetry gives you two main ways to instrument your code: automatically or manually. Both have strengths and weaknesses, and the right choice depends on your team’s needs, your codebase, and your observability goals.

Automatic instrumentation uses agents, wrappers, and environment variables to collect telemetry without changing your code. For example, in Python, you might run:

pip install opentelemetry-instrumentation
opentelemetry-instrument python my_app.py

This approach is fast and broad. Auto instrumentation packages are available for many languages and frameworks out of the box, making it ideal for polyglot teams or those new to OpenTelemetry. You get immediate data with minimal effort: install the agent, set a few environment variables, and you're off. But there's a catch: You trade control for convenience. Automatic instrumentation emits telemetry signals from your application runtime, as well as from any installed library that ships with instrumentation. Check out the registry.
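Those environment variables are where most of the control lives. A minimal setup might look like the following; the service name and endpoint are placeholders for your own values:

```shell
# Standard OpenTelemetry environment variables read by the auto-instrumentation agent
export OTEL_SERVICE_NAME="my-app"                           # how this service appears in traces
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"  # your OTLP-capable backend
export OTEL_TRACES_EXPORTER="otlp"                          # alternatives include console, none
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=staging"

# Then run the app under the agent, with no code changes
opentelemetry-instrument python my_app.py
```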

While there are mechanisms in some languages to control which of these libraries you turn on instrumentation for (see Python's opentelemetry-bootstrap), auto-instrumentation with OpenTelemetry can quickly generate more data than you need for visibility. And because each dependency library may have its own instrumentation and versioning, it can prove tricky to balance in complex legacy environments.

Manual instrumentation means writing code to explicitly create spans and collect telemetry. You use the OpenTelemetry SDK for your language, like opentelemetry-python, to define exactly what you want to capture. This gives you fine-grained control over span granularity, letting you tailor telemetry to your business needs and organizational goals. For example, you might create a span to track a payment gateway transaction, adding custom attributes for business context.

Manual instrumentation is powerful, but it takes more effort. You have to maintain the code as your application evolves, and you need standards to keep your telemetry consistent. Honestly, if you’ve got a greenfield project and you’re even a little bit familiar with OTel, I say go ahead and start with manual instrumentation. Try your hand at observability-driven development! Manual instrumentation is great when you have clear tracing and observability goals and want to minimize noise.

OTel auto instrumentation can help you know how the code and dependencies function when you put them together. But the more custom business logic or code logic you add, the more manual instrumentation can bring observability into your unique problem spaces.

Automatic Instrumentation

Automatic instrumentation shines when you need to get started quickly. Tools like opentelemetry-auto-instrumentation for Python, or language-specific libraries for Node.js and Java, let you instrument entire applications with a single command or configuration change. Environment variables control what gets collected, and agents handle the heavy lifting.

Pros:

  • Fast time to value: You can instrument an application and emit traces in minutes.

  • Broad coverage: Works across languages and frameworks with only a base level of domain knowledge needed.

  • Minimal code changes: No need to modify your codebase. Because this is mostly done through configuration or initialization, auto instrumentation is an approachable and repeatable process to make observability gains for your organization.

Cons:

  • Less control: You can’t easily enrich spans at the source with customized attributes.

  • Risk of "too much" data: You might collect more telemetry than you need. While some languages let you quickly opt in or out of noisy dependencies (e.g., Python's opentelemetry-bootstrap), your signal-to-noise ratio may vary.

  • Complex configuration: Non-trivial environments may require advanced tuning or version management.

Automatic instrumentation is ideal for quick proofs of concept, utility applications, or teams managing telemetry strategy for multiple languages and frameworks.

It's worth noting that auto-instrumentation libraries carry a higher potential for accidental PII leaks in the contents of URIs, POST bodies, SQL params, etc. With great power comes great responsibility. Don't worry, though: there are ways to mask or redact data in both OpenTelemetry and Cribl. We've got you covered.

Manual Instrumentation

Manual instrumentation is all about precision. You use the OpenTelemetry SDK to define spans directly in your code, giving you full control over what’s captured and how. This is especially useful for business-critical paths like payment processing or order fulfillment, where you want to track specific transactions or add custom metadata. With manual instrumentation, you can even propagate baggage across service boundaries, ensuring that business or customer context follows a request end-to-end (not just within a single service).

Pros:

  • Precise control: Define exactly what gets instrumented. Get the data you need for your organization’s observability strategy and have it match your ideals for data normalization and governance.

  • Business and technical context: With manual instrumentation, you can add customer context that matters to your team and directly tie your observability goals to customer outcomes.

  • Easier to contextualize signals: Tailor spans to your use case.

Cons:

  • Higher dev effort: Requires writing and maintaining instrumentation code.

  • More fragile line by line: Code changes can break instrumentation.

  • Readability vs. visibility: You'll have more code. You may hear that manual instrumentation makes code harder to read, but the tradeoff is that, done with intention, it substantially increases your visibility into how that code runs in production.

On a PII note: because it lets you add custom attributes, manual instrumentation also introduces places where developers may unintentionally leak PII or secrets into telemetry. Keep an eye out for these and have systems in place to mask or redact them.

You’ll want to choose manual instrumentation for latency-sensitive applications, high-volume environments, teams with opinionated tracing goals, or if your organization has a bespoke observability strategy with clearly defined outcomes. Manual instrumentation is particularly beneficial in high-throughput systems or low-latency workflows because it gives you granular control over what is collected and when, minimizing both performance overhead and data volume. Instead of relying on broad, auto-instrumented traces that may capture too much irrelevant information, teams can selectively instrument only the most critical spans. This ensures observability signals don’t become bottlenecks themselves and helps to preserve performance in environments where every millisecond or every dropped event translates to potential customer impact.

How Cribl Helps Teams Navigate the Tradeoff

Cribl redefines OpenTelemetry instrumentation management by acting as an intelligent buffer between data collection and analysis. Teams can use the OTLP Source in Cribl Stream or Cribl Edge to process raw telemetry through configurable pipelines. This gives them granular controls that address the sprawl of auto-instrumentation while boosting the value of manual spans.

By intercepting spans at ingest, users can filter noise like health-check traces using regex-based rules, preserving backend capacity for mission-critical transactions. Simultaneously, Cribl can enrich hand-crafted spans with business or technical metadata (customer tiers, deployment regions, etc.) directly in the pipeline — no code changes required — so this robust context is available in your tool analysis.

With the Cribl Suite, there are also a ton of routing, redacting, masking, and enrichment use cases for your telemetry — even trace sampling options for volume reduction. For example, with Cribl Search, a curated selection of your traces can flow to a Destination for real-time analysis while a full-fidelity copy lives in Cribl Lake for compliance, further investigation, or Replay.

It’s Not Either/Or. It’s About Control.

The choice between manual and automatic instrumentation isn’t binary. Both methods have value, depending on your team’s goals, codebase maturity, and tooling. With Cribl, you can adopt both approaches and evolve over time, without rewriting everything.

Cribl shifts instrumentation strategy from rigid commitments to fluid adaptation, allowing teams to:

  • Phase manual enhancements into auto-instrumented services by injecting custom attributes through Cribl’s pipeline stages

  • Compare monitoring tools objectively by replaying identical trace datasets to competing APM vendors

  • Reduce metric cardinality by deriving RED metrics from existing spans instead of maintaining separate collectors
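The last bullet is simple to picture: RED (Rate, Errors, Duration) metrics are just aggregations over spans you already have. A toy stdlib sketch, with fabricated span records, of the kind of rollup a pipeline stage could perform instead of a separate collector:

```python
from collections import defaultdict

# Fabricated finished-span records; in practice these arrive via OTLP
spans = [
    {"service": "checkout", "duration_ms": 42, "error": False},
    {"service": "checkout", "duration_ms": 910, "error": True},
    {"service": "checkout", "duration_ms": 58, "error": False},
]

# Aggregate per-service Rate, Errors, and Duration from the span stream
red = defaultdict(lambda: {"requests": 0, "errors": 0, "total_ms": 0})
for s in spans:
    m = red[s["service"]]
    m["requests"] += 1
    m["errors"] += s["error"]
    m["total_ms"] += s["duration_ms"]

m = red["checkout"]
print(m["requests"], m["errors"], round(m["total_ms"] / m["requests"], 1))
# → 3 1 336.7  (3 requests, 1 error, ~336.7 ms average duration)
```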

This approach eliminates forced tradeoffs between coverage depth and operational cost. DevOps teams collect comprehensive telemetry during development phases, then process with Cribl to streamline production data flows without altering instrumentation code. By controlling what’s collected, processed, and analyzed at each stage, you maintain observability agility as technical and business requirements evolve.

If you’re ready to take control of your telemetry pipeline, you can learn more about Cribl Stream or Edge, or check out our Docs on OpenTelemetry support. Observability is a journey, and Cribl is here to help you every step of the way.

Cribl, the Data Engine for IT and Security, empowers organizations to transform their data strategy. Customers use Cribl’s suite of products to collect, process, route, and analyze all IT and security data, delivering the flexibility, choice, and control required to adapt to their ever-changing needs.

We offer free training, certifications, and a free tier across our products. Our community Slack features Cribl engineers, partners, and customers who can answer your questions as you get started and continue to build and evolve. We also offer a variety of hands-on Sandboxes for those interested in how companies globally leverage our products for their data challenges.
