Talk to an Expert ›We were able to leverage the Node.js VM module to execute arbitrary JavaScript code with its own set of globals, separate from the running process’ globals.
Cribl Stream can be thought of as a stream-processing engine for machine data, using functions that are shipped as configuration in the form of index.js files. Stream loads the code in these files, compiles it, and sends events through it to perform all of the manipulations on the machine data. Because we ship the functions as configuration files, anyone can write new, custom functions to meet their data processing needs. You can check out this blog post on how to write custom functions.
In the 3.0 release of Stream, we introduced Packs. Boiling the problem down, we now need to run these functions with varying global scopes, depending on the Pack context you’re running data through. And we need to make this possible without updating each individual index.js file we ship that references any globals.
The functions were initially loaded using Module._load(absoluteFilePath) to make the code within the file executable from within the context of the running process. However, this means the running process is forced to share its global scope with the arbitrary function code being loaded.
We were able to decouple the global scope of the running process from the arbitrary function code by loading the content of the index.js file we want to run into memory, crafting an object that represents the global scope for the code we’re going to execute, and handing both of those things off to vm.runInNewContext().
In our case, we wanted to hand off mostly the same set of global variables, except that we wanted to override an application-specific variable (see more info here) that we expose to let developers of custom functions leverage utility functions and APIs of the underlying Stream platform. We needed to override this variable to load it with the corresponding configuration files, based on the context in which the function is running.
Here’s an example of what the above would look like in code:
```javascript
// Renamed from `module` to avoid clashing with the CommonJS `module` binding,
// which would be a SyntaxError inside a module.
const Module = require('module');
const path = require('path');
const fs = require('fs');
const vm = require('vm');

const globalsMap = new Map([
  ['mycontext', { getTheAnswer: () => 42 }]
]);

function crequire(absFile, context) {
  const mod = {};
  vm.runInNewContext(
    fs.readFileSync(absFile).toString(),
    {
      ...global,
      Promise,
      __filename: absFile,
      __dirname: path.dirname(absFile),
      exports: mod,
      // inject your custom global object here - allows code in absFile
      // to call MyCustomGlobals.getTheAnswer()
      MyCustomGlobals: globalsMap.get(context),
      require: Module.createRequire(absFile),
      process,
      Date
    },
    { filename: absFile }
  );
  return mod;
}

const mod = crequire(path.join(process.cwd(), 'index.js'), 'mycontext');
// if the file loaded has exports.process = (...) => {...}, it should be added to the mod object:
mod.process({ foo: 'bar' });
```
The example above uses:
- crequire() to load and return a given file, using ‘mycontext’ global variables
- vm.runInNewContext() to feed the content of the JavaScript file, and the desired globals, into a new V8 Virtual Machine context
- vm.runInNewContext() to compile and assign the exported values from within the JavaScript file into the mod variable returned by crequire()
Figuring out the right combination of globals to pass into the new context was a fun challenge, which ultimately required me to dig into the Node.js code itself. From the Node.js code, I learned about vm.runInNewContext() and module.createRequire(), and about how the global variables (e.g., __filename, __dirname, etc.) are not truly global and are actually contextualized per module/file.
The lesson here is that Node.js is an incredibly flexible runtime that provides a plethora of tools/modules for accomplishing most tasks at hand.
If you found this problem interesting, there are plenty more cool engineering problems where that came from! Come join us!