
Status

Current state: Under Discussion

Discussion thread: here

JIRA:

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Similar to many other distributed systems, Kafka Streams instances can be grouped into different racks. When a Kafka Streams standby task is placed in a different rack than its corresponding active task, it provides fault tolerance and faster recovery time if the rack of the active task goes down.

Various distributed systems implement rack awareness. Below we explore how some of them do it and what kind of guarantees they aim to provide.

Elasticsearch

Rack awareness in Elasticsearch works by assigning a list of tags/attributes, called awareness attributes, to each node in the cluster. When Elasticsearch knows the nodes' rack specification, it distributes the primary shard and its replica shards so as to minimize the risk of losing all shard copies in the event of a failure. Besides letting users define an arbitrary list of tags/attributes for each node, Elasticsearch provides a means of setting which tags/attributes it must consider when balancing shards across racks.

Example:

node.attr.rack_id: rack_one
node.attr.cluster_id: cluster_one
cluster.routing.allocation.awareness.attributes: rack_id,cluster_id

In addition, Elasticsearch provides a "Forced awareness" configuration, a safeguard to prevent a location from being overloaded in case of a failure. By default, if one location fails, Elasticsearch assigns all of the missing replica shards to the remaining locations. If resources are limited, a single location might be unable to host all of the shards. The cluster.routing.allocation.awareness.force.* configuration can be used to prevent Elasticsearch from allocating replicas until nodes are available in another location.

Example:

cluster.routing.allocation.awareness.attributes: zone
cluster.routing.allocation.awareness.force.zone.values: zone1,zone2

In the example above, if we start two nodes with node.attr.zone set to zone1 and create an index with five shards and one replica, Elasticsearch creates the index and allocates the five primary shards but no replicas. Replicas are only allocated once nodes with node.attr.zone set to zone2 are available.

Hadoop

In Hadoop, a rack is a physical collection of nodes in the cluster, and it serves as a means of fault tolerance as well as optimization. The idea is that read/write operations within the same rack are cheaper than operations that span multiple racks. With the rack information, the Namenode chooses the closest Datanode while performing read/write operations, which reduces network traffic.

A rack can have multiple Datanodes storing file blocks and their replicas. A Hadoop cluster with a replication factor of 3 automatically writes a particular file block to two different Datanodes in the same rack, plus one Datanode in a different rack for redundancy.

Rack awareness in the Hadoop cluster has to comply with the following policies:

  • There should not be more than one replica on the same Datanode.
  • No more than two replicas of a single block are allowed on the same rack.
  • The number of racks used inside a Hadoop cluster must be smaller than the number of replicas.

Redis

Rack "awareness" in Redis is called "Rack-zone awareness" and it's very similar to Kafka Broker's rack awareness. Rack-zone awareness only works in a clustered Redis deployment, and it's an enterprise feature.

Rack-zone awareness works by assigning a rack-zone ID to each node. This ID is used to map the node to a physical rack or logical zone (an AWS availability zone, for instance). When appropriate IDs are set, the cluster ensures that leader shards, corresponding replica shards, and associated endpoints are placed on nodes in different racks/zones.

In the event of a rack failure, the remaining racks' replicas and endpoints will be promoted. This approach ensures high availability when a rack or zone fails.

Proposed Changes

This KIP proposes to implement semantics in Kafka Streams similar to those in Elasticsearch. Elasticsearch's rack awareness semantics seem the most flexible and can cover more complex use-cases, such as multi-dimensional rack awareness. To achieve this, this KIP proposes to introduce a new config prefix in StreamsConfig that will be used to retrieve user-defined instance tags of the Kafka Streams instance:

/**
 * Prefix used to add arbitrary tags to a Kafka Streams instance as key-value pairs.
 * Example:
 * instance.tag.zone=zone1
 * instance.tag.cluster=cluster1
 */
@SuppressWarnings("WeakerAccess")
public static final String INSTANCE_TAG_PREFIX = "instance.tag.";

We will also add a new configuration option to StreamsConfig, which will be the means of setting which tags Kafka Streams must take into account when balancing standby tasks across racks.

public static final String STANDBY_TASK_ASSIGNMENT_AWARENESS_CONFIG = "standby.task.assignment.awareness";
public static final String STANDBY_TASK_ASSIGNMENT_AWARENESS_DOC = "List of instance tag keys used to distribute standby replicas across Kafka Streams instances." +
                                                                   " Tag keys must be set in an order of precedence." +
                                                                   " When configured, Kafka Streams will make a best effort to distribute" +
                                                                   " the standby tasks over each instance tag dimension.";

Example of standby task allocation

Absolute Preferred Standby Task Distribution

Suppose we have the following infrastructure setup: three Kubernetes clusters, let us call them K8s_Cluster1, K8s_Cluster2, and K8s_Cluster3. Each Kubernetes cluster spans three availability zones: eu-central-1a, eu-central-1b, eu-central-1c.

Our use-case is to distribute the standby tasks across different Kubernetes clusters and AZs so that we are tolerant to both Kubernetes cluster and AZ failures.

With the new configuration options presented in this KIP, we will have the following:

Node-1:
instance.tag.cluster: K8s_Cluster1
instance.tag.zone: eu-central-1a
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-2:
instance.tag.cluster: K8s_Cluster1
instance.tag.zone: eu-central-1b
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-3:
instance.tag.cluster: K8s_Cluster1
instance.tag.zone: eu-central-1c
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-4:
instance.tag.cluster: K8s_Cluster2
instance.tag.zone: eu-central-1a
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-5:
instance.tag.cluster: K8s_Cluster2
instance.tag.zone: eu-central-1b
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-6:
instance.tag.cluster: K8s_Cluster2
instance.tag.zone: eu-central-1c
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-7:
instance.tag.cluster: K8s_Cluster3
instance.tag.zone: eu-central-1a
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-8:
instance.tag.cluster: K8s_Cluster3
instance.tag.zone: eu-central-1b
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-9:
instance.tag.cluster: K8s_Cluster3
instance.tag.zone: eu-central-1c
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2


With the infrastructure topology and configuration presented above, we can easily achieve Absolute Preferred standby task distribution. It is achievable because we have to allocate three tasks for any given stateful task (1 active task + 2 standby tasks), and each tag has exactly three unique values. So the formula for determining whether Absolute Preferred standby task allocation is achievable looks like this:

num.standby.replicas <= (allInstanceTags.values().stream().mapToInt(Set::size).min().getAsInt() - 1) // -1 is for the active task

1. Formula for determining if Absolute Preferred distribution is possible
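
To make the check concrete, below is a hypothetical helper (not part of the proposed public API; class and method names are illustrative only) that derives the unique values per tag key from each instance's tags and applies formula [1]:

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical helper illustrating formula [1]; not part of the proposed API.
public final class AbsolutePreferredCheck {

    // clientTags: for each Kafka Streams instance, its instance.tag.* key-value pairs.
    public static boolean absolutePreferredPossible(final Map<String, Map<String, String>> clientTags,
                                                    final int numStandbyReplicas) {
        // Collect the set of unique values observed for every tag key.
        final Map<String, Set<String>> allInstanceTags = new HashMap<>();
        for (final Map<String, String> tags : clientTags.values()) {
            tags.forEach((key, value) ->
                allInstanceTags.computeIfAbsent(key, k -> new HashSet<>()).add(value));
        }

        // Every tag dimension must have enough unique values for the active task
        // plus all standby replicas.
        final int minUniqueValues = allInstanceTags.values().stream()
            .mapToInt(Set::size)
            .min()
            .orElse(0);

        return numStandbyReplicas <= minUniqueValues - 1; // -1 accounts for the active task
    }
}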

Partially Preferred Standby Task Distribution

Suppose we have the following infrastructure setup: two Kubernetes clusters, let us call them K8s_Cluster1 and K8s_Cluster2, each spanning three availability zones: eu-central-1a, eu-central-1b, eu-central-1c.

Our use-case is similar to the previous section: to distribute standby tasks across different Kubernetes clusters and AZs so that we are tolerant to both Kubernetes cluster and AZ failures.

With the new configuration options presented in this KIP, we will have the following:

Node-1:
instance.tag.cluster: K8s_Cluster1
instance.tag.zone: eu-central-1a
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-2:
instance.tag.cluster: K8s_Cluster1
instance.tag.zone: eu-central-1b
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-3:
instance.tag.cluster: K8s_Cluster1
instance.tag.zone: eu-central-1c
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-4:
instance.tag.cluster: K8s_Cluster2
instance.tag.zone: eu-central-1a
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-5:
instance.tag.cluster: K8s_Cluster2
instance.tag.zone: eu-central-1b
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-6:
instance.tag.cluster: K8s_Cluster2
instance.tag.zone: eu-central-1c
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2


With the infrastructure topology presented above, we can't achieve Absolute Preferred standby task distribution because we only have two unique values for the cluster tag in the topology. Absolute Preferred distribution could have been achieved with a third Kubernetes cluster (K8s_Cluster3) spanning the three AZs (as seen in the previous section).

Even though we can't achieve Absolute Preferred standby task distribution with the configuration presented above, we can still achieve Partially Preferred distribution.

Partially Preferred distribution can be achieved by distributing standby tasks over different zone tags. Zone has higher precedence than cluster in the standby.task.assignment.awareness configuration; therefore, Kafka Streams will prefer to distribute standby tasks over different zones rather than different clusters when the Absolute Preferred distribution check formula [1] returns false.

Kafka Streams will be eligible to perform Partially Preferred standby task distribution when the number of unique values of at least one instance tag is greater than num.standby.replicas (the extra value accounts for the active task). So the formula for determining whether Partially Preferred standby task allocation is doable looks like this:

num.standby.replicas <= (allInstanceTags.values().stream().mapToInt(Set::size).max().getAsInt() - 1) // -1 is for the active task

2. Formula for determining if Partially Preferred distribution is possible
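
Analogously, a hypothetical check for formula [2] differs only in taking the maximum over tag dimensions instead of the minimum; again, the names are illustrative only.

import java.util.Map;
import java.util.Set;

// Hypothetical helper illustrating formula [2]; not part of the proposed API.
public final class PartiallyPreferredCheck {

    // allInstanceTags: tag key -> set of unique values observed across all instances.
    public static boolean partiallyPreferredPossible(final Map<String, Set<String>> allInstanceTags,
                                                     final int numStandbyReplicas) {
        // At least one tag dimension must have enough unique values for the
        // active task plus all standby replicas.
        final int maxUniqueValues = allInstanceTags.values().stream()
            .mapToInt(Set::size)
            .max()
            .orElse(0);

        return numStandbyReplicas <= maxUniqueValues - 1; // -1 accounts for the active task
    }
}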


Assuming active stateful task 0_0 is on Node-1, the Partially Preferred standby task distribution will look like this:

  1. Node-5 (different cluster, different zone), LL([Node-3, Node-6])
  2. Node-6 (different cluster, different zone), LL([Node-2, Node-5])

Where LL is a function determining the least-loaded client based on active + standby task assignment.
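
As a rough illustration of the idea (a sketch only, not the actual Kafka Streams implementation, which may weigh active and standby tasks differently), LL could be expressed as picking the candidate client with the fewest tasks already assigned:

import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Sketch of a least-loaded selection over candidate clients.
public final class LeastLoadedSelector {

    public static String leastLoaded(final List<String> candidates,
                                     final Map<String, Integer> activeTaskCounts,
                                     final Map<String, Integer> standbyTaskCounts) {
        // Pick the candidate with the smallest total of active + standby tasks.
        return candidates.stream()
            .min(Comparator.comparingInt((String client) ->
                activeTaskCounts.getOrDefault(client, 0) + standbyTaskCounts.getOrDefault(client, 0)))
            .orElseThrow(() -> new IllegalArgumentException("no candidates"));
    }
}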

The Least Preferred Standby Task Distribution

The Least Preferred standby task distribution applies when neither the Absolute Preferred nor the Partially Preferred standby task distribution can be satisfied.

Suppose we have the following infrastructure setup: two Kubernetes clusters, let us call them K8s_Cluster1 and K8s_Cluster2, each spanning two availability zones: eu-central-1a and eu-central-1b.

With the new configuration options presented in this KIP, we will have the following:

Node-1:
instance.tag.cluster: K8s_Cluster1
instance.tag.zone: eu-central-1a
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-2:
instance.tag.cluster: K8s_Cluster1
instance.tag.zone: eu-central-1b
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-3:
instance.tag.cluster: K8s_Cluster2
instance.tag.zone: eu-central-1a
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

Node-4:
instance.tag.cluster: K8s_Cluster2
instance.tag.zone: eu-central-1b
standby.task.assignment.awareness: zone,cluster
num.standby.replicas: 2

With the setup presented above, we can't distribute the second standby task to a different zone as requested by the standby.task.assignment.awareness configuration, because there are only two distinct zones available (and one is reserved for the active task). In this case, Kafka Streams will default to using the least-loaded client to allocate the remaining standby task.

Assuming active stateful task 0_0 is on Node-1, the Least Preferred standby task distribution will look like this:

  1. Node-4 (different cluster, different zone), LL([Node-2, Node-3])

Where LL is a function determining the least-loaded client based on active + standby task assignment.


Compatibility, Deprecation, and Migration Plan

N/A

Rejected Alternatives

  • The initial idea was to introduce two configurations in StreamsConfig: rack.id, which defines the rack of the Kafka Streams instance, and standby.task.assignor, a class that implements the RackAwareStandbyTaskAssignor interface.

    The signature of RackAwareStandbyTaskAssignor was the following:

    public interface RackAwareStandbyTaskAssignor {
    
        /**
         * Computes the desired standby task distribution for different {@link StreamsConfig#RACK_ID_CONFIG}s.
         * @param sourceTasks - Source {@link TaskId}s with corresponding rack IDs that are eligible for standby task creation.
         * @param clientRackIds - Client rack IDs that were received during assignment.
         * @return - Map of the rack IDs to set of {@link TaskId}s. The return value can be used by {@link TaskAssignor}
         *           implementation to decide if the {@link TaskId} can be assigned to a client that is located in a given rack.
         */
        Map<String, Set<TaskId>> computeStandbyTaskDistribution(final Map<TaskId, String> sourceTasks,
                                                                final Set<String> clientRackIds);
    }
    
    

    By injecting a custom implementation of the RackAwareStandbyTaskAssignor interface, users could hint to Kafka Streams where to allocate certain standby tasks when more complex processing logic was required, for example, parsing rack.id, which can be a combination of multiple identifiers (as seen in the previous examples where we have cluster and zone tags).

    The above-mentioned idea was abandoned because it's easier and more user-friendly to let users control standby task allocation with configuration options alone instead of forcing them to implement a custom interface.

  • The second approach was to refactor the TaskAssignor interface to be more user-friendly and expose it as a public interface. Users could then implement custom TaskAssignor logic and set it via StreamsConfig. With this, Kafka Streams users would effectively be in control of active and standby task allocation.
    Similarly to the point above, this approach was also rejected because it's more complex.
    Even though the usefulness of a pluggable TaskAssignor interface is more or less agreed upon, it was decided to cut it out of this KIP's scope and prepare a separate KIP for that feature.