Aiven for Valkey™ clustering (Limited availability)
Aiven for Valkey™ clustering provides a managed, scalable solution for distributed in-memory data storage with built-in high availability and automatic failover capabilities.
Valkey clustering distributes your data across multiple nodes (shards) to handle larger datasets and higher traffic loads than a single-node deployment can support. Each shard contains a portion of your data, and the cluster automatically routes requests to the appropriate shard.
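The routing described above is based on hash slots: the cluster divides the key space into 16384 slots, and each key is mapped to a slot with a CRC16 checksum. The sketch below illustrates that mapping (function names here are illustrative, not part of any Aiven or Valkey API):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem variant), the checksum Valkey/Redis cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    """Map a key to one of the 16384 cluster hash slots.

    If the key contains a non-empty {hash tag}, only the tag is hashed,
    which lets related keys be placed on the same shard deliberately.
    """
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:  # tag must be non-empty
            key = key[start + 1:end]
    return crc16(key) % 16384

# Keys sharing a hash tag land in the same slot, and therefore on the same shard:
print(key_slot(b"{user1000}.following") == key_slot(b"{user1000}.followers"))  # True
```

Multi-key operations in a cluster only work on keys in the same slot, which is why hash tags matter when migrating from a standalone deployment.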
Key features
High availability
- Automatic failover: If a primary node fails, a replica is automatically promoted to maintain service availability.
- Minimal downtime: Designed to handle both expected maintenance and unexpected failures with minimal service interruption.
- Read replicas: Each shard includes at least one read replica for redundancy and improved read performance.
Scalability
- Flexible sizing: Supports various instance sizes, including smaller 4 GB RAM instances for cost optimization.
Compatibility
- Cluster-enabled mode: Fully compatible with existing Valkey and Redis cluster-aware client libraries.
- Standard protocols: Cluster mode uses the standard Valkey/Redis cluster protocol. If your application currently uses a standalone-mode client, switch to a cluster-aware client to work with Aiven for Valkey clustering.
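As a minimal sketch of that switch, assuming the redis-py library (which speaks the same protocol as Valkey); the host, port, and password are placeholders for your own service's connection details:

```python
def connect_cluster(host: str, port: int, password: str):
    """Sketch: connect with a cluster-aware client instead of a standalone one.

    Requires the redis-py package and a reachable cluster, so the
    connection is not attempted here.
    """
    from redis.cluster import RedisCluster  # cluster-aware client class in redis-py

    # A standalone client (redis.Redis) cannot follow MOVED/ASK redirects
    # between shards; RedisCluster discovers the shard topology and routes
    # each command to the shard that owns the key's hash slot.
    return RedisCluster(host=host, port=port, password=password, ssl=True)
```

Cluster-aware clients exist for most languages; the key requirement is that the client understands slot-to-node mapping and redirect responses.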
Architecture overview
Multi-shard deployment
The typical cluster deployment consists of three primary nodes, each with at least one replica, providing high availability and horizontal scalability.
- Distributed data: Data is automatically partitioned across multiple shards.
- Independent replicas: Each shard has its own set of replicas for redundancy.
- Load distribution: Requests are distributed across shards based on data location.
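To illustrate the load distribution bullet above: the 16384 hash slots are divided into contiguous ranges, one per shard. The ranges below are an example for a three-shard cluster; the actual assignment is managed by the service and can change during rebalancing:

```python
# Illustrative only: example slot ranges for a three-shard cluster.
# Real assignments are managed by the cluster, not hard-coded.
SHARD_RANGES = [
    ("shard-0", 0, 5460),
    ("shard-1", 5461, 10922),
    ("shard-2", 10923, 16383),
]

def shard_for_slot(slot: int) -> str:
    """Return the shard that owns a given hash slot."""
    for name, low, high in SHARD_RANGES:
        if low <= slot <= high:
            return name
    raise ValueError(f"slot {slot} outside the valid range 0-16383")

print(shard_for_slot(12182))  # shard-2
```

A cluster-aware client maintains this slot-to-shard map internally and refreshes it when the cluster reports that a slot has moved.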
Single-shard deployment
While Aiven for Valkey supports single-node clusters, this configuration is functionally equivalent to a standalone Valkey instance and is not the primary use case for clustering.
- Initial configuration: Starts with one primary node and zero to two read replicas.
- Use case: Ideal for smaller datasets or applications with moderate traffic.
- High availability: Automatic failover to a replica if the primary fails.
Benefits
Performance
- Higher throughput: Distribute read and write operations across multiple nodes.
- Read scaling: Multiple replicas per shard increase read capacity.
Reliability
- Fault tolerance: Replicas configured for each shard at service creation keep your service available even if individual nodes fail.
- Automatic recovery: Failed nodes are automatically replaced and synchronized.
- Data protection: Multiple copies of your data across different nodes.