Integrations are the bread and butter of building vendor-agnostic software here at Cribl. The more connections we provide, the more choice and control customers have over their unique data strategy. Securing these integrations has always presented challenges, but a new class of integration is creating fresh ones and testing existing playbooks: large language models. In this blog, we'll explore why these integrations matter, walk through an example integration, and build a strategy to secure it.
The arrival of easily accessible large language models has created a new push to integrate LLMs into many aspects of business, driven by automation, speed, and efficiency. For many companies, LLM integration is limited to a chatbot on their sales website, but for software companies these integrations can provide even more value. As these implementations interface with business-critical systems, the impact of an integration failure grows beyond confused customers and can become a security incident. Let's explore the security risks facing teams working on LLM integrations.
Let's begin by laying out a hypothetical LLM integration into an existing application. Our application is a file management and storage product that wants to apply AI to classification and organization. The integration requires sharing sensitive user data from the existing application with the new LLM for classification. In a threat model, this kind of data sharing is an immediate concern: how do we secure the trust boundary between our application and a new type of data consumer?
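To make the trust boundary concrete, here is a minimal sketch of the payload our hypothetical file management app might send to its LLM classifier. All names here (`build_classification_payload`, the field layout) are illustrative assumptions, not a real Cribl API. The key design choice is that everything in the payload crosses the boundary, so it is minimized: the user's identity stays on the application's side.

```python
import json

def build_classification_payload(user_id: str, file_name: str, contents: str) -> str:
    """Build the JSON that crosses the trust boundary to the LLM classifier.

    Hypothetical sketch: the payload is deliberately minimal. The requesting
    user's identity is kept in the application's own ownership mapping and is
    never forwarded to the model, shrinking what an LLM-side leak could expose.
    """
    payload = {
        # Truncate contents so the prompt stays bounded; the model only
        # needs enough text to pick a category.
        "prompt": f"Classify this document into one category:\n{contents[:2000]}",
        "metadata": {"file_name": file_name},
        # user_id is intentionally NOT included here.
    }
    return json.dumps(payload)
```

An audit of this one function then tells you exactly what data leaves the application's security perimeter.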
To answer that "how," we security engineers must first understand what we're protecting against. Common web vulnerabilities still apply, but AI, and LLMs in particular, bring new vulnerabilities into view. These issues prompted an entirely new category from OWASP: the OWASP Top 10 for LLM Applications. Let's look at which of these issues our file management application might face.
Since the integration we're assessing handles, classifies, and stores sensitive data, we should investigate possible data exposure. For LLMs, data exposure means returning unrelated, sensitive data in responses, whether through training errors or retrieval-augmented generation. In our example scenario, this vulnerability could unintentionally expose user data across accounts. The primary mitigation is retrieval guardrails: since user data must be retrieved before it can appear in a response, these guardrails are the first line of defense. In our file management case study, we want to preserve confidentiality by isolating results to data owned by the requesting user, so a retrieval guardrail should check data ownership before forwarding anything to the LLM. The same guardrails can help prevent chatbot jailbreaks and are an important part of a defense-in-depth strategy for LLMs.
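An ownership-checking retrieval guardrail can be surprisingly small. The sketch below (hypothetical types and names, assuming retrieval returns candidate documents before prompt assembly) filters every retrieved document against the requesting user's identity, so nothing another user owns can ever enter the model's context window.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    owner_id: str
    file_name: str

def retrieval_guardrail(requesting_user: str, candidates: list[Document]) -> list[Document]:
    """Drop any retrieved document not owned by the requesting user.

    Runs BEFORE documents are placed in the LLM's context window, so
    cross-account data can never appear in a response. Dropped documents
    could also be logged here as a signal of misconfigured retrieval.
    """
    return [doc for doc in candidates if doc.owner_id == requesting_user]
```

Because the check sits between retrieval and prompt assembly, it holds even if the model itself is jailbroken: data the model never sees cannot be leaked.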
A solid application security foundation must be in place for the LLM mitigations above to be effective. That brings us to our second vulnerability, a classic web application issue: broken access control and trust in unsanitized user input. After building an authentication and authorization structure for your application, how do you extend it to your LLM integration? Does the LLM code run on the same backend as the application code? A common pitfall in this development process is trusting the front end to perform these critical functions. If you must create an authorization bridge to integrate, operate only on trusted inputs and use cryptographic best practices to verify critical ones. Skipping these principles nullifies the other mitigations by letting attackers manipulate sensitive request attributes, such as the identity of the requesting user.
Our example application integrated an LLM to classify user data, a deep integration with complex implications for privacy and security. As mentioned at the beginning of this post, simpler integrations, such as website chatbots, may sit outside user data privacy requirements but can still have business impact. One example is a major airline's support bot offering retroactive refunds outside the airline's official policy. A customer took the offer at face value and insisted on a refund, resulting in a small claims court ruling against the airline that forced it to honor the chatbot's offer. One mitigation that could apply here is reinforcement learning from human feedback (RLHF) to train the chatbot away from incorrect responses. The retrieval guardrails we discussed earlier are less relevant in this situation, but they could still be implemented to prevent other jailbreak attacks.
In this blog post, we covered why LLM integrations are growing, how they introduce risk, and some mitigation strategies for our example scenarios. As the field of LLM integration and security develops, keep these strategies in mind. Learn more about Cribl Copilot.