Talk to an Expert ›Cribl Stream enables you to get multiple data formats into your analytics tools quickly. You can use Stream to ingest data from any observability source or REST API, and easily transform and route that data to whatever analytics tool you choose.
This blog will walk through how I used Stream to ingest 3D printer data from OctoPrint, using Stream’s REST and Filesystem Collectors, and how I then transformed and routed that data to OpenSearch. The basic principles are the same for Repetier-Server or Ultimaker.
OctoPrint is a print server with a streamlined web UI, designed for hobbyist 3D printer users. It gives users the ability to upload their G-code to their 3D printers remotely. It also allows you to do things like preheat your printer and get historical data on your prints.
Beyond the features a hobbyist might be most interested in, OctoPrint also has an extensive REST API. We can use the REST API to retrieve data on jobs, plugins, printer state, etc. We can also get the octoprint.log, which is OctoPrint’s main application log. This log contains information on everything happening while OctoPrint is running.
We will examine how we can ingest printer State data via the API, and the OctoPrint logs via the filesystem. Then we will create Pipelines to prepare these logs for ingestion into OpenSearch. Finally, we will route these logs to OpenSearch, and search and visualize them there.
Using the Stream REST collector, we can get data from OctoPrint’s REST API. As previously stated, we are going to get the printer State using GET /api/printer.
We are going to add one last configuration under Fields (Metadata):
collector = 'OctoPrintState'
We will use this added field in our Route filter to send this collected data to OpenSearch.
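For reference, here is a minimal sketch in Python of the request the REST Collector is making for us. This is illustrative only; the host name and API key below are placeholders for your own OctoPrint setup:

    # Sketch of the call Stream's REST Collector performs against OctoPrint.
    # OCTOPRINT_HOST and API_KEY are placeholders, not values from this setup.
    import requests

    OCTOPRINT_HOST = "http://octopi.local"     # hypothetical host
    API_KEY = "YOUR_OCTOPRINT_API_KEY"         # generated in OctoPrint's settings

    resp = requests.get(
        f"{OCTOPRINT_HOST}/api/printer",
        headers={"X-Api-Key": API_KEY},        # OctoPrint authenticates via this header
        timeout=10,
    )
    resp.raise_for_status()
    state = resp.json()                        # nested JSON: temperature, sd, state, ...
    print(state["state"]["text"])              # e.g. "Operational"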
Before we set up the Filesystem collector to ingest the octoprint.log, we will need to set up an Event Breaker to properly break the events when collecting them in Stream.
Create a new Event Breaker Ruleset named OctoPrint Ruleset and click + Add Rule. Name the rule OctoPrintLog and, under Event Breaker Type, select Timestamp. Click OK and save the Event Breaker.
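To see why a Timestamp breaker matters here, note that a single octoprint.log entry can span multiple lines (a Python stack trace, for example), so splitting on newlines alone would shred events. Here is a minimal Python sketch of the idea, assuming OctoPrint’s default YYYY-MM-DD HH:MM:SS,mmm timestamp prefix (verify against your own log):

    # Sketch of timestamp-based event breaking: a new event starts only at a
    # line that begins with a timestamp; other lines are continuations.
    # Assumes a "2023-01-15 10:42:07,123"-style prefix, as in octoprint.log.
    import re

    TS = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}")

    def break_events(lines):
        event = []
        for line in lines:
            if TS.match(line) and event:
                yield "".join(event)    # previous event is complete
                event = []
            event.append(line)
        if event:
            yield "".join(event)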
Now that we have configured the Event Breaker, let’s set up the Filesystem collector to collect the octoprint.log.
OctoPrint writes its logs to the ~/.octoprint/logs directory. Under Directory, enter the full path to these OctoPrint logs. Under Fields (Metadata), add one more field:
collector = 'octofilesystem'
We will use this field to filter these logs in the Route.
Preview the collection and examine the octoprint.log, which is now correctly broken. Go ahead and save this as a sample file to use in our Pipeline creation.
Now I will create a pipeline in Stream to process the OctoPrint state information and octoprint.log, and get it ready for ingestion into OpenSearch.
To create a new pipeline, go to Pipelines → + Pipeline. I named mine OctoPrint-API-State.
The first thing we need to do to get our data ready to send to OpenSearch is to specify the index where we want our data to land. We can do this by setting the __index field using an Eval function: set the Name to __index and the Value Expression to 'octoprintstate'.
Let’s take a look at the sample data we saved, by clicking Preview > Simple. If you click the gear icon and select Show Internal Fields, you can see the __index field we just created. You might also notice that the _raw field contains JSON. OpenSearch needs the nested fields extracted to use them. We can do this by using the Parser function.
The Parser has now extracted the fields out of _raw, and they are ready to be sent to OpenSearch.
OpenSearch doesn’t accept fields that start with an underscore, and we don’t need _raw anymore. We will get rid of it with a final Eval function: under Remove Fields, enter _raw.
Now that _raw is gone, the event is ready to send on to OpenSearch.
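Conceptually, this pipeline is a small per-event transform. Here is a Python sketch of what the Eval → Parser → Eval chain does to each event (illustrative only; Stream performs this work inside the Pipeline):

    import json

    def process_state_event(event: dict) -> dict:
        """Mirror of the OctoPrint-API-State pipeline."""
        event["__index"] = "octoprintstate"      # Eval: __index = 'octoprintstate'
        event.update(json.loads(event["_raw"]))  # Parser: extract the nested JSON fields
        del event["_raw"]                        # Eval: remove _raw
        return event

    # Example with a made-up _raw payload:
    evt = {"_raw": '{"state": {"text": "Operational"}}'}
    print(process_state_event(evt))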
Let’s create another pipeline for the octoprint.log. I named this one OctoPrint-Filesystem.
Select the octoprint.log we saved as a sample earlier, and open it on the right by clicking Preview > Simple on its row. We can see that, as with the API state data, our data is in _raw, and OpenSearch will complain about fields starting with an underscore. We can fix this by adding a Rename function, to rename the _raw field to message.
Now we need to specify the OpenSearch index we want to send to, by setting the __index field. Add an Eval function, enter __index under Name, and set the Value Expression to 'octoprint_filesystem'.
Now, this data is ready to be routed to OpenSearch.
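As with the first pipeline, this amounts to a small per-event transform; here is a Python sketch of the equivalent logic, under the same illustrative assumptions as the earlier snippet:

    def process_log_event(event: dict) -> dict:
        """Mirror of the OctoPrint-Filesystem pipeline."""
        event["message"] = event.pop("_raw")       # Rename: _raw -> message
        event["__index"] = "octoprint_filesystem"  # Eval: set the target index
        return event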
OpenSearch is a fork of Elasticsearch under the covers, and it works great with Stream’s Elasticsearch Destination.
Create a new Elasticsearch Destination. Under Bulk API URL, enter the URL of the _bulk API of our OpenSearch instance. Type should be _doc, as that is what OpenSearch prefers. For Index, I used octoprint. (In the Pipelines we created, we overwrote the __index field, so this index serves only as a default for events that reach the _bulk API without one.) Your settings should look something like this:
One last important setting when sending data to OpenSearch is the Elastic version.
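For context, the _bulk API expects newline-delimited JSON that pairs an action line with each document. Here is a hand-rolled Python sketch of a request in that shape (the host and document are placeholders, not values from this setup):

    import json
    import requests

    OPENSEARCH_URL = "http://opensearch.example.com:9200/_bulk"  # placeholder host

    action = {"index": {"_index": "octoprintstate", "_type": "_doc"}}
    doc = {"state": {"text": "Operational"}, "collector": "OctoPrintState"}

    # NDJSON body; the trailing newline is required by the bulk API.
    body = "\n".join(json.dumps(x) for x in (action, doc)) + "\n"
    resp = requests.post(
        OPENSEARCH_URL,
        data=body,
        headers={"Content-Type": "application/x-ndjson"},
    )
    print(resp.json()["errors"])  # False if every item was indexed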
Once we’ve configured the destination, we can set up a Route to send the State data we collected with the REST API Collector to OpenSearch.
Set the Route’s Filter to collector=='OctoPrintState', the field we added as metadata in our Collector configuration. Set the Pipeline to the OctoPrint-API-State pipeline we created in a previous step, and choose our new destination as the Output. Then add a second Route with the Filter collector=='octofilesystem', using the OctoPrint-Filesystem Pipeline we created in a previous step.
OpenSearch needs minimal configuration, as we have done most of the work on the Stream side. Once we have sent data to OpenSearch, we can set up an index pattern to analyze the data. To create an index pattern:
Since we have already sent data to the octoprintstate index, we should see it show up in the list. Type octoprintstate* as the index pattern name, and click Next. Select @timestamp in the timestamp dropdown and click Create index pattern.
You should now see your index pattern with all of the fields listed, because we extracted them in Stream previously.
Repeat the same steps for octoprint_filesystem*: select @timestamp in the timestamp dropdown, and click Create index pattern.
You can now go to Discover and search the data by filtering on the new index patterns octoprintstate* or octoprint_filesystem*.
You can also use this data in OpenSearch Dashboards to build visualizations!
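If you would rather verify the data outside the UI, you can also query the index directly. A minimal Python sketch, using the same placeholder host as the earlier snippet:

    import requests

    OPENSEARCH_HOST = "http://opensearch.example.com:9200"  # placeholder host

    resp = requests.get(
        f"{OPENSEARCH_HOST}/octoprintstate*/_search",
        json={"query": {"match_all": {}}, "size": 3},
    )
    for hit in resp.json()["hits"]["hits"]:
        print(hit["_source"])  # the fields we extracted in Stream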
Are you interested in routing OctoPrint logs, or any other data, to OpenSearch? It’s easy to get started using Stream to route and transform your logs: sign up for Cribl.Cloud and instantly provision an instance of Stream Cloud. Once connected to data sources, you can process up to 1TB per day for free!