Source: Amazon S3
Amazon Simple Storage Service (S3) offers storage of any amount of data, at any time, from anywhere on the web. S3 is accessible via a REST API, the AWS Management Console, and SDKs for several languages and frameworks. Read our Solution Brief.
How to get data flowing to S3
This is a built-in integration between Cribl Stream and the Amazon S3 APIs. Stream pulls data from S3 buckets using event notifications delivered through Amazon SQS. Stream's S3 Destination can also be adapted to send data to services for which Stream currently has no preconfigured Destination.
S3 as Source and Stream as Destination
Configure your systems to store outbound log data in AWS S3 buckets.
Configure Stream to read data from S3 via Sources > Amazon S3.
Supply your SQS queue. Both IAM roles and manually entered access keys are supported for authentication.
Stream will start fetching data as SQS messages become available.
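The S3-to-SQS handoff in the steps above can be illustrated by parsing an S3 event notification as it arrives in an SQS message body. This is a minimal sketch: the bucket and object key values are hypothetical placeholders, but the nesting under `Records` follows the S3 event notification message structure that a consumer like Stream would read to decide which objects to fetch.

```python
import json

# A minimal S3 "ObjectCreated" event notification, as AWS delivers it to SQS.
# Bucket and key names here are hypothetical placeholders.
sqs_message_body = json.dumps({
    "Records": [
        {
            "eventSource": "aws:s3",
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "example-log-bucket"},
                "object": {"key": "logs/2024/app.log.gz", "size": 1024},
            },
        }
    ]
})

def objects_to_fetch(body: str) -> list[tuple[str, str]]:
    """Extract (bucket, key) pairs from an S3 event notification body."""
    notification = json.loads(body)
    return [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in notification.get("Records", [])
        if rec.get("eventSource") == "aws:s3"
    ]

print(objects_to_fetch(sqs_message_body))
```

Each SQS message tells the consumer which new objects exist, so Stream can fetch only those objects instead of repeatedly listing the whole bucket.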
Destination: Kafka
Apache Kafka is an open-source, distributed event streaming platform widely used for high-performance data pipelines, streaming analytics, metrics collection and monitoring, log aggregation, data integration, and mission-critical applications. As a durable message broker, Kafka enables applications to process, persist, and reprocess streamed data.
How to get data flowing to Kafka
This is a built-in integration between Cribl Stream and Kafka.
Kafka as Destination and Stream as Source
Configure Stream to send data to Kafka via Destinations > Kafka.
Specify the Kafka brokers and topic to write to, along with other settings (record data format, compression and backpressure behavior, and optional Confluent Schema Registry, TLS certificate, and SASL authentication parameters).
Stream will start sending data as it becomes available.
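As a rough sketch of what the record data format and compression settings above imply on the wire, the snippet below serializes events as JSON records and gzip-compresses the batch, then round-trips it. The event field names are illustrative assumptions, and a real Kafka producer handles batching and compression internally; this only shows the shape of the transformation.

```python
import gzip
import json

# Hypothetical events destined for a Kafka topic; field names are illustrative.
events = [
    {"host": "web-01", "message": "GET /index.html 200"},
    {"host": "web-02", "message": "POST /login 302"},
]

# Serialize each event as one JSON record (a common Kafka record data format).
records = [json.dumps(e).encode("utf-8") for e in events]

# Compress the newline-joined batch, as a producer with gzip compression would.
payload = gzip.compress(b"\n".join(records))

# Round-trip to confirm the records survive compression intact.
restored = gzip.decompress(payload).split(b"\n")
print(len(restored))  # 2 records
```

Compression trades CPU for network and broker disk usage, which is why it is exposed as a per-Destination setting rather than always being on.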