Cribl’s suite of products excels at collecting and organizing your IT and security event data. Did you know it can also help with IoT data collection and analysis? If you can get the text of the data into Cribl, in most cases we can process it, transform it, and send it where you want it to go.
A few years ago, I bought a weather station. I immediately hooked up some home automation gear to show me the temperature, humidity, and air quality. But the geek in me wants more. What was the temp like a year ago today? Three years ago today? Unless I’m storing that data in a reliable and durable location and have a search tool that can use that archived data, I’m out of luck.
In this post, we’ll cover onboarding weather data from a PurpleAir weather station, adding some simple processing, and storing the data in a data lake. Once it’s in the lake, we can easily use Cribl Search to generate reports, build dashboards, trigger alerts, or collect the data to send to another analysis tool.
PurpleAir provides an “air quality monitoring network built on a new generation of ‘Internet of Things’ sensors.” You can join their research by deploying a station and also indulge your inner geek with all the data it provides. Sensors cover various forms of air quality, plus temperature and pressure. While you might not be deploying one of these for your IT needs, I wanted to use it as a test case, showing how easy IoT data is to manage with Cribl’s products.
The great thing about the PurpleAir device is that if you run an HTTP GET on /json, you get a plain ol’ JSON payload on one line. I like simple. Here’s an example GET, piped through jq to pretty-print it:
$ curl -s https://yourPA/json | jq
{
"SensorId": "84:f3:eb:xx:xx:xx",
"DateTime": "2024/01/28T01:38:29z",
"Geo": "PurpleAir-xxxx",
"Mem": 20152,
<snip>
"place": "outside",
"version": "7.02",
"uptime": 86531,
"rssi": -58,
"period": 120,
"httpsuccess": 1467,
"httpsends": 1467,
"hardwareversion": "2.0",
"hardwarediscovered": "2.0+BME280+PMSX003-B+PMSX003-A",
"current_temp_f": 80,
"current_humidity": 22,
"current_dewpoint_f": 38,
"pressure": 1013.53,
<snip>
}
The revision of my hardware has a known issue where the temperature is reported 8 degrees too high. This will be one of the things we fix in Cribl before sending the data off to my data lake. Easy fix! Now imagine how useful the same technique is for adding context to critical security data.
The tricky part of the device is that it has no memory or storage and doesn’t provide a method to push data out to a location of my choosing. I need a way to pull the data, a way to store it reliably, and a way to search it.
The best way to get data in is to run a Cribl Stream node on the same network as the PA device and schedule a REST Collector to GET from the URL https://yourPA/json (replace yourPA with your station’s address). In the Collector config, add a field named sourcetype with the value purpleair. This will help in routing and filtering.

Alternatively, run a simple cron job to grab the data and pipe it into OpenSSL to send to one of the default listening ports in my Cribl Cloud instance. For example, this will send the output of the curl command to a Cribl Cloud instance’s TCP JSON input:
curl -s https://yourPA/json | openssl s_client -connect default.your_instance_id.cribl.cloud:10070
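To run this on a schedule, here’s a minimal crontab sketch. It assumes the same curl-to-openssl pipeline and your own instance ID; the two-minute interval matches the device’s reporting period shown above:

# Hypothetical crontab entry: pull a reading every 2 minutes and forward
# it to the Cribl Cloud TCP JSON port. -quiet suppresses the TLS chatter.
*/2 * * * * curl -s https://yourPA/json | openssl s_client -quiet -connect default.your_instance_id.cribl.cloud:10070 > /dev/null 2>&1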
This concept makes it very easy to collect data from devices that are too restricted or simple to install agents on.
I recommend setting up a pipeline to pre-process the data coming into the TCP JSON input. Use this pipeline to classify the data based on fields and/or patterns, and create a sourcetype field to identify the data further down the road. With this mechanism in place, you can send different types of data to the same port and classify it as it comes in.
In your Cribl Worker Group, go to Processing > Pipelines and create a pipeline named tcp-json-preproc. Add an Eval function with the filter:

_raw.includes('current_temp_f')

and have it set sourcetype => 'purpleair'.

What we’re doing: If the event contains the string current_temp_f, we will add a field to the event called sourcetype with a value of 'purpleair'. If sourcetype already exists, it will be overwritten.
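For reference, here’s a minimal sketch of that pipeline in Cribl’s JSON export format. It’s hand-written for illustration, so treat it as approximate rather than a verbatim export:

{
  "id": "tcp-json-preproc",
  "conf": {
    "functions": [
      {
        "id": "eval",
        "filter": "_raw.includes('current_temp_f')",
        "conf": {
          "add": [
            { "name": "sourcetype", "value": "'purpleair'" }
          ]
        }
      }
    ]
  }
}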
You may need to adjust the filter statement depending on what your data looks like when it arrives in Cribl Stream. As presented here, it depends on _raw being a string.
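For example, and this is an assumption about your setup rather than a requirement: if your source has already parsed the JSON into top-level fields, a field test makes a sturdier filter than a substring match:

current_temp_f != null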
Let’s grab a capture of the incoming data, as presented to Cribl, so we can make the rules to prep it for storage and reporting, including fixing that errant temperature reading.
For the REST Collector method, navigate to the Collector’s configuration and run it in preview mode. Save the collected events as a sample file.
For the TCP JSON style, in the Worker Group, navigate to your TCP JSON Source and click Live. Change the capture point to stage 2, before the Routes, so that we capture after the pre-processing pipeline runs. Once you have data captured, save it as a sample file.
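Captured at that point, each event should look roughly like this sketch: the device’s JSON still sitting in _raw as a string, plus the sourcetype field our pre-processing pipeline added (payload abbreviated here for illustration):

{
  "_raw": "{\"SensorId\":\"84:f3:eb:xx:xx:xx\",\"current_temp_f\":80,\"current_humidity\":22, ...}",
  "_time": 1706405909,
  "sourcetype": "purpleair"
}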
Any time you’re onboarding new data, you should have a plan for what you want to do with it. After inspecting the data, I have a few goals:

- Fix current_temp_f by subtracting 8
- Remove the fields p25aqic*, lat, lon, and period (just as a demo of what removing fields can be used for)

To do this in Cribl Stream:
Create a pipeline named iot-purpleair. Add a Parser function to extract the JSON fields from _raw (so we can drop the raw string afterward), then an Eval function that subtracts 8 from current_temp_f and puts _raw, p25aqic*, lat, lon, and period into Remove fields.
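Here’s a minimal sketch of that pipeline in the same hand-written JSON export style as before (the Parser function’s internal id is serde in exports, but again, treat the exact keys as approximate):

{
  "id": "iot-purpleair",
  "conf": {
    "functions": [
      {
        "id": "serde",
        "conf": { "mode": "extract", "type": "json", "srcField": "_raw" }
      },
      {
        "id": "eval",
        "conf": {
          "add": [
            { "name": "current_temp_f", "value": "current_temp_f - 8" }
          ],
          "remove": ["_raw", "p25aqic*", "lat", "lon", "period"]
        }
      }
    ]
  }
}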
As always, with Cribl Stream, you can send your data to any destination that supports open protocols, and many proprietary ones. It’s your data; do with it what you like: Amazon S3, Azure Blob, Google Cloud Storage, or self-hosted solutions from Pure Storage, MinIO, and more. For this example, we will send it to an object store.
Remember that to use Cribl Search, the object store you choose must be accessible from AWS Lambda for optimal performance. There are some workarounds for using on-prem object stores, but that’s beyond the scope of this article. Join our Community Slack channel, or contact your favorite SE for ideas.
One key point in the setup is the partitioning scheme. This is essentially the path to your objects, but it can also give us tokens, or metadata, to help search and replay. See our docs and previous blogs for more info and tips on partitioning.
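As a hypothetical example (your scheme will vary), a partitioning expression like this on an S3 destination writes objects under date- and sourcetype-based prefixes:

`${C.Time.strftime(_time ? _time : Date.now()/1000, '%Y/%m/%d')}/${sourcetype}`

That yields object paths like 2024/01/28/purpleair/ inside your bucket, and both the date and the sourcetype become tokens Search can filter on without opening objects.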
And, of course, Cribl Stream can deliver data to several analysis tools. Often, we can connect to them with their native/defined protocols (e.g., Dataset or Exabeam). Other times, you’ll need to use an open protocol or maybe even an object store as an intermediary (e.g., Snowflake, as of Jan 2024). One way or another, we can talk to dozens of search tools. But remember, we always recommend sending a copy to an object store for safekeeping. Never rely on your analysis tier for retention duties.
Once you have your destination set up, use the built-in tester (available in the destination config screen) to test delivery. After receiving a green check mark for “success,” you should be able to find the test events in your storage and/or system of analysis.
Create a Route entry named iot-purpleair with a filter of sourcetype == 'purpleair'. If you set up the pre-processor pipeline above, this will catch your events. For the Pipeline, choose the iot-purpleair pipeline we built above. Finally, choose your output; in my case, I’m sending it to the S3 destination I named iot.
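Summed up in plain text (a conceptual view, not an exact export), the route looks like this:

Route:    iot-purpleair
Filter:   sourcetype == 'purpleair'
Pipeline: iot-purpleair
Output:   iot   (the S3 destination)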
(---------------------- CRIBL ----------------------)
PA <- curl -> (TCP JSON -> pre-proc -> pipeline) -> obj store
Save, commit, and deploy. After a few minutes to settle, and once your schedule fires, the first run of the collector should land in your destination.
Cribl Search will also need to know how to access your object store. For my data, I created an access key and secret with appropriate permissions and filled out the Dataset Provider info:
Each provider can have multiple datasets defined.
Dataset Provider: Here is the service where I store my stuff, with auth credentials. There could be all kinds of data here, organized in different ways.
Dataset: Within that Provider, here is information related to a particular data set.
Here’s a sample Dataset config:
Note the Bucket path should match the partitioning expression you defined for your Destination in Stream above.
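For instance, if you used the hypothetical partitioning expression from earlier, the Dataset’s bucket path might look like the following; the ${_time:...} tokens let Search prune objects by time, and ${sourcetype} becomes a filterable field:

s3://your-iot-bucket/${_time:%Y}/${_time:%m}/${_time:%d}/${sourcetype}/*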
My brilliant colleague, Sidd Shah, wrote a blog covering this setup. I’d also recommend this introductory post and our sandbox for Search.
Finally, let’s start building something cool in Cribl Search.

First, confirm Search can see the objects in the dataset:

.show objects('iot') | limit 10

Then pull back a few events:

dataset='iot' sourcetype=='purpleair' | limit 10

Chart the maximum temperature over time:

dataset='iot' sourcetype=='purpleair' | timestats F=max(current_temp_f)

You could just as easily use avg(), or grab the last reading with findlatest():

dataset='iot' sourcetype=='purpleair' | timestats F=max(current_temp_f), H=max(current_humidity), AQ=findlatest('pm2.5_aqi')

For single-value panels, summarize works nicely:

dataset='iot' sourcetype=='purpleair' | summarize Temp=findlatest('current_temp_f')
dataset='iot' sourcetype=='purpleair' | summarize AQ=findlatest('pm2.5_aqi')
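If you want coarser buckets for a dashboard panel, timestats also accepts a span argument; here’s a hypothetical hourly rollup:

dataset='iot' sourcetype=='purpleair' | timestats span=1h AvgTemp=avg(current_temp_f)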
My finished product is below. How did yours turn out?
There are so many possibilities with an omnivorous consumer like Cribl. Share your ideas in our Slack community!
The Cribl suite continues to evolve, adding new ways to collect, transport, manage, and use your data. The growing data output from IoT can benefit from an o11y pipeline and data lake strategy, and we can help. Cribl Search is rapidly maturing and puts one more arrow in your quiver, making it much easier to get started on your long-term data strategy.
Whether you use Cribl Search or your existing familiar search tool, Cribl’s here to help you manage all your data. Collect it, normalize it, route it, optimize it, enrich it, archive it, replay it: This ultimately helps you get value from your data when you need it and stop burning expensive resources on it when you don’t.
Cribl, the Data Engine for IT and Security, empowers organizations to transform their data strategy. Customers use Cribl’s suite of products to collect, process, route, and analyze all IT and security data, delivering the flexibility, choice, and control required to adapt to their ever-changing needs.
We offer free training, certifications, and a free tier across our products. Our community Slack features Cribl engineers, partners, and customers who can answer your questions as you get started and continue to build and evolve. We also offer a variety of hands-on Sandboxes for those interested in how companies globally leverage our products for their data challenges.
Experience a full version of Cribl Stream and Cribl Edge in the cloud with pre-made sources and destinations.