Open Source Data Platform
The open source data platform
Combining best practices
Popular data apps, easy to use
Stackable provides you with a curated selection of the best open source data apps like Apache Kafka®, Apache Druid, Trino and Apache Spark™. Store, process and visualize your data with the latest versions. Stay with the curve, not behind it.
All data apps work together seamlessly, and can be added or removed in no time. Based on Kubernetes, it runs everywhere – on prem or in the cloud.
Use it to create unique and enterprise-class data architectures. For example, it supports modern Data Warehouses, Data Lakes, Event Streaming, Machine Learning or Data Meshes.
Cloud-native Kubernetes Operators
Stackable modules are regular Kubernetes operators. We chose Rust as the implementation language for its excellent performance, low memory footprint, and memory and thread safety.
NEW: Stackable Operator for OpenSearch!
The Stackable Operator for OpenSearch simplifies the deployment and management of OpenSearch clusters within Kubernetes environments. OpenSearch, a high-performance search and analytics platform based on Apache Lucene, is optimized for seamless operation on Kubernetes thanks to Stackable’s dedicated operator.
Stackable Operator for Apache Kafka
Stackable Operator for Apache Druid
The Stackable Operator for Apache Druid is a tool that can manage Apache Druid clusters. Apache Druid is a real-time database to power modern analytics applications.
Stackable Operator for Apache Spark
Stackable Operator for Apache Superset
Stackable Operator for Apache HBase
The Stackable Operator for Apache HBase is a tool that can manage Apache HBase clusters. HBase is a distributed, scalable big data store.
Stackable Operator for Apache Hive
The Stackable Operator for Apache Hive is a tool that can manage Apache Hive. Currently, it supports the Hive Metastore. The Apache Hive data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL.
How it works
From simple to complex environments with infrastructure-as-code
Stackable gives you the flexibility to define both simple and complex data scenarios. Either way, the setup is always as simple as this:
1. Select the Stackable operators for the data apps you need for your data platform and install them using stackablectl or directly via Helm.
2. Install your data apps in the Kubernetes cluster by passing the appropriate configurations (CRDs) to the operators using stackablectl or directly via kubectl.
All of these definitions are maintained in an infrastructure-as-code fashion, so the setup itself remains testable, repeatable and easy to standardize.
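As a sketch of step two, a minimal custom resource for a Kafka cluster might look like the following. The exact field names and versions are assumptions and should be checked against the Stackable operator documentation for your release:

```yaml
# Hypothetical, abbreviated KafkaCluster resource; field names and
# versions may differ between operator releases -- consult the docs.
apiVersion: kafka.stackable.tech/v1alpha1
kind: KafkaCluster
metadata:
  name: simple-kafka
spec:
  image:
    productVersion: "3.7.1"   # the Kafka version to deploy
  clusterConfig:
    zookeeperConfigMapName: simple-kafka-znode
  brokers:
    roleGroups:
      default:
        replicas: 3           # number of Kafka brokers
```

Passing a manifest like this to the cluster (for example with kubectl apply) hands the desired state to the operator, which then creates and manages the underlying Kubernetes resources for you.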
Subscribe to our Newsletter
With the Stackable newsletter, you’ll always stay up to date on the latest from Stackable!