
Core features and use cases

STACKIT Intake eliminates the need to provision, operate, or maintain your own ingestion infrastructure. All components are handled for you, from message brokers to security and scaling.

Intake includes a built-in buffering mechanism that persists incoming data for up to 24 hours, keeping your data streams resilient to downstream outages or processing delays.

The platform guarantees that data is ingested exactly once, preventing data duplication even during retries or system failures.

Intake streams data directly into Apache Iceberg tables within the Dremio REST Catalog, providing a seamless and native connection to your data lakehouse architecture. Process your data streams with Dremio SQL and Apache Spark.
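For example, a Spark session can register the catalog and query the tables Intake writes to. The following is a minimal sketch only: the catalog endpoint, access token, package version, and namespace/table names are placeholders, and the exact connection properties for your Dremio REST Catalog may differ.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("intake-iceberg-read")
    # Iceberg Spark runtime (artifact version is an assumption)
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # Register the Dremio REST Catalog as an Iceberg catalog named "intake"
    .config("spark.sql.catalog.intake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.intake.type", "rest")
    .config("spark.sql.catalog.intake.uri", "https://<your-catalog-endpoint>/api/catalog")  # placeholder
    .config("spark.sql.catalog.intake.token", "<access-token>")  # placeholder credential
    .getOrCreate()
)

# Query an Iceberg table that Intake populates (namespace and table are examples)
spark.sql(
    "SELECT * FROM intake.analytics.sensor_events ORDER BY event_time DESC LIMIT 10"
).show()
```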

Ingest arbitrary JSON messages with Intake.

Column data types are inferred automatically from JSON payloads, and the service manages the integration of new data into the target Iceberg tables.
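As an illustration, a payload like the one below can be sent as-is. The field names are examples, and the column types noted in the comments are assumptions about what inference might produce, not a documented mapping.

```python
import json

# Example event; each top-level field becomes a column in the target Iceberg table.
event = {
    "device_id": "sensor-042",               # likely inferred as a string column
    "temperature": 21.7,                      # likely inferred as a numeric column
    "ok": True,                               # likely inferred as a boolean column
    "recorded_at": "2024-05-01T12:00:00Z",    # inferred type for timestamps is an assumption
}

payload = json.dumps(event).encode("utf-8")  # the raw JSON bytes are what gets ingested
```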

The service supports the widely adopted Apache Kafka protocol, allowing you to use existing Kafka client libraries and a wide range of data producers such as Debezium without any modifications.
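For instance, a standard Kafka client can publish JSON messages to Intake unchanged. The sketch below uses the kafka-python library; the endpoint, credentials, security settings, and topic name are placeholders and assumptions to be replaced with the values for your Intake instance.

```python
import json
from kafka import KafkaProducer  # standard kafka-python client, no modifications

# Connection details are placeholders; the SASL/TLS settings are assumptions.
producer = KafkaProducer(
    bootstrap_servers="<intake-endpoint>:9093",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="<username>",
    sasl_plain_password="<password>",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Send an arbitrary JSON message to a topic (topic name is an example)
producer.send("sensor-events", {"device_id": "sensor-042", "temperature": 21.7})
producer.flush()
```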

STACKIT Intake offers the performance and simplicity needed for modern, real-time data challenges. Here are key scenarios where it truly shines:

Ingest massive volumes of real-time sensor and telemetry data from devices into your data lakehouse for analysis and monitoring with minimal delay. The reliable buffering ensures no data is lost, even with intermittent connectivity.

Stream database changes in near real time for use cases like updating data lakes, powering real-time analytics dashboards, or synchronizing data across systems.

Build a robust ingestion layer to feed event streams—such as clickstream data, financial transactions, or application logs—into your data platform for instant analysis with Dremio SQL.

Microservices and event-driven architectures


Use Intake to persist the events and messages exchanged between microservices as the foundation for your analytics.

Collect logs and events from your applications and services in a single destination for centralized monitoring, auditing, and analytics.