Status
Current state: Under Discussion
Discussion thread: here
JIRA:
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
Similar to many distributed systems, Kafka Streams instances can be grouped into different racks. When a Kafka Streams standby task is allocated on a different rack than its corresponding active task, it provides fault tolerance and faster recovery time if the rack of the active task goes down.
Below we will explore how other distributed systems implement rack awareness and what kind of guarantees they aim to provide.
Elasticsearch
Rack awareness in Elasticsearch works by assigning a list of tags/attributes, called awareness attributes, to each node in the cluster. When Elasticsearch knows the nodes' rack specification, it distributes the primary shard and its replica shards so as to minimize the risk of losing all shard copies in the event of a failure. Besides letting users define an arbitrary list of tags/attributes for each node, Elasticsearch provides a means of setting which tags/attributes it must consider when balancing the shards across the racks.
Example:
node.attr.rack_id: rack_one
node.attr.cluster_id: cluster_one
cluster.routing.allocation.awareness.attributes: rack_id,cluster_id
In addition, Elasticsearch provides a "forced awareness" configuration, a safeguard to prevent the remaining racks from being overloaded in case of a failure. By default, if one location fails, Elasticsearch assigns all of the missing replica shards to the remaining locations. With limited resources, a single rack might be unable to host all of the shards. The cluster.routing.allocation.awareness.force.* configuration can be used to prevent Elasticsearch from allocating replicas until nodes are available in another location.
Example:
cluster.routing.allocation.awareness.attributes: zone
cluster.routing.allocation.awareness.force.zone.values: zone1,zone2
In the example above, if we start two nodes with node.attr.zone set to zone1 and create an index with five shards and one replica, Elasticsearch creates the index and allocates the five primary shards but no replicas. Replicas are only allocated once nodes with node.attr.zone set to zone2 become available.
Hadoop
In Hadoop, a rack is a physical collection of nodes in the cluster, and it serves both fault tolerance and optimization. The idea in Hadoop is that read/write operations within the same rack are cheaper than operations that span multiple racks. With the rack information, the NameNode chooses the closest DataNode while performing read/write operations, which reduces network traffic.
A rack can have multiple DataNodes storing file blocks and replicas. A Hadoop cluster with a replication factor of 3 automatically writes a given file block to two different DataNodes in the same rack, plus a third DataNode in a different rack for redundancy.
Rack awareness in the Hadoop cluster has to comply with the following policies:
- There should not be more than one replica on the same DataNode.
- No more than two replicas of a single block are allowed on the same rack.
- The number of racks used inside a Hadoop cluster must be smaller than the number of replicas.
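In practice, Hadoop learns rack locations from an administrator-supplied topology script (configured via net.topology.script.file.name) that maps node addresses to rack paths. A minimal sketch follows; the subnets and rack names are illustrative assumptions:

```shell
#!/bin/sh
# Minimal topology-script sketch: map each DataNode address passed as an
# argument to a rack path. Subnet-to-rack mapping here is an assumption.
resolve_rack() {
  case "$1" in
    10.0.1.*) echo "/rack1" ;;
    10.0.2.*) echo "/rack2" ;;
    *)        echo "/default-rack" ;;
  esac
}

# Hadoop invokes the script with one or more addresses and reads one
# rack path per line from stdout.
for node in "$@"; do
  resolve_rack "$node"
done
```

With this mapping in place, the NameNode can apply the replica-placement policies above without any changes to the DataNodes themselves.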
Redis
Rack "awareness" in Redis is called "rack-zone awareness", and it's very similar to Kafka broker rack awareness. Rack-zone awareness only works in a clustered Redis deployment, and it's an enterprise feature.
Rack-zone awareness works by assigning a rack-zone ID to each node. This ID is used to map the node to a physical rack or logical zone (an AWS availability zone, for instance). When appropriate IDs are set, the cluster ensures that leader shards, corresponding replica shards, and associated endpoints are placed on nodes in different racks/zones.
In the event of a rack failure, the replicas and endpoints in the remaining racks will be promoted. This approach ensures high availability when a rack or zone fails.
Proposed Changes
This KIP proposes to implement semantics in Kafka Streams similar to those of Elasticsearch. The rack awareness semantics of Elasticsearch seem the most flexible and can cover more complex use cases, such as multi-dimensional rack awareness. To achieve this, this KIP proposes to introduce a new config prefix in StreamsConfig that will be used to retrieve user-defined instance tags of the Kafka Streams instance:
/**
 * Prefix used to add arbitrary tags to a Kafka Stream's instance as key-value pairs.
 * Example:
 *   client.tag.zone=zone1
 *   client.tag.cluster=cluster1
 */
@SuppressWarnings("WeakerAccess")
public static final String CLIENT_TAG_PREFIX = "client.tag.";
We will also add a new configuration option in StreamsConfig, which will be the means of setting which tags Kafka Streams must take into account when balancing the standby tasks across the racks.
public static final String TASK_ASSIGNMENT_RACK_AWARENESS_CONFIG = "task.assignment.rack.awareness";
public static final String TASK_ASSIGNMENT_RACK_AWARENESS_DOC = "List of client tag keys used to distribute standby replicas across Kafka Streams instances." +
    " When configured, Kafka Streams will make a best effort to distribute" +
    " the standby tasks over each client tag dimension.";
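Put together, a Kafka Streams application would set both the per-instance tags and the awareness config. The sketch below uses plain java.util.Properties with the KIP's proposed string keys; the tag names and values (zone, cluster, etc.) are illustrative assumptions, not mandated names:

```java
import java.util.Properties;

public class RackAwareConfigExample {

    static Properties rackAwareStreamsConfig() {
        Properties props = new Properties();
        // Tag this instance's location; "zone" and "cluster" are example tag keys.
        props.put("client.tag.zone", "zone1");
        props.put("client.tag.cluster", "cluster1");
        // Tell the assignor which tag dimensions to spread standby tasks across.
        props.put("task.assignment.rack.awareness", "zone,cluster");
        // Standby replicas must be enabled for the distribution to matter.
        props.put("num.standby.replicas", "1");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(rackAwareStreamsConfig());
    }
}
```

Each instance in the topology would carry its own client.tag.* values, while task.assignment.rack.awareness is expected to be identical across all instances.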
When client.tag.* dimensions are configured, Kafka Streams will read this information from the configuration and encode it into SubscriptionInfoData as key-value pairs.
{
  "name": "SubscriptionInfoData",
  // version bump
  "validVersions": "1-10",
  "fields": [
    ...
    { "name": "clientTags", "versions": "10+", "type": "[]ClientTag" }
  ],
  "commonStructs": [
    {
      "name": "ClientTag",
      "versions": "1+",
      "fields": [
        { "name": "key", "versions": "1+", "type": "bytes" },
        { "name": "value", "versions": "1+", "type": "bytes" }
      ]
    },
    ...
  ]
}
Kafka Streams's task assignor will decide how to distribute standby tasks over the available clients based on the clientTags encoded within the subscription info and the configured task.assignment.rack.awareness.
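The "best effort over each tag dimension" decision can be illustrated with a simplified sketch (this is not the actual assignor code): among candidate clients, prefer the one whose tags differ from the active task's client in the most configured dimensions. All client names and tag values below are assumptions for the example:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class StandbyPlacementSketch {

    static String pickStandbyClient(final Map<String, String> activeClientTags,
                                    final Map<String, Map<String, String>> candidateClientTags,
                                    final List<String> awarenessTagKeys) {
        String bestClient = null;
        int bestScore = -1;
        for (Map.Entry<String, Map<String, String>> candidate : candidateClientTags.entrySet()) {
            int score = 0;
            for (String tagKey : awarenessTagKeys) {
                // A differing tag value means better fault isolation in that dimension.
                if (!Objects.equals(activeClientTags.get(tagKey), candidate.getValue().get(tagKey))) {
                    score++;
                }
            }
            if (score > bestScore) {
                bestScore = score;
                bestClient = candidate.getKey();
            }
        }
        return bestClient;
    }

    public static void main(String[] args) {
        Map<String, String> active = Map.of("zone", "zone1", "cluster", "cluster1");
        Map<String, Map<String, String>> candidates = new LinkedHashMap<>();
        candidates.put("clientA", Map.of("zone", "zone1", "cluster", "cluster1"));
        candidates.put("clientB", Map.of("zone", "zone2", "cluster", "cluster1"));
        candidates.put("clientC", Map.of("zone", "zone2", "cluster", "cluster2"));
        // clientC differs in both dimensions, so it is the preferred standby host.
        System.out.println(pickStandbyClient(active, candidates, List.of("zone", "cluster")));
    }
}
```

The real assignor must also balance task load and capacity, so this scoring is only one of several constraints it would weigh.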
Changes in HighAvailabilityTaskAssignor
Implementation of this KIP must not affect HighAvailabilityTaskAssignor in a breaking way, meaning that all the existing behavior should stay unchanged (e.g., when new configurations are not specified). Once required configurations are set, the main change should happen within the code that deals with standby task allocation, specifically:
HighAvailabilityTaskAssignor#assignStandbyReplicaTasks and HighAvailabilityTaskAssignor#assignStandbyTaskMovements.
Compatibility, Deprecation, and Migration Plan
The changes proposed by this KIP shouldn't affect previously set up applications. Since we only introduce new configuration options, existing ones shouldn't be affected by this change.
Rejected Alternatives
The initial idea was to introduce two configurations in StreamsConfig: rack.id, which defines the rack of the Kafka Streams instance, and standby.task.assignor, a class that implements the RackAwareStandbyTaskAssignor interface. The signature of RackAwareStandbyTaskAssignor was the following:
public interface RackAwareStandbyTaskAssignor {

    /**
     * Computes the desired standby task distribution for different {@link StreamsConfig#RACK_ID_CONFIG}s.
     * @param sourceTasks   Source {@link TaskId}s with corresponding rack IDs that are eligible for standby task creation.
     * @param clientRackIds Client rack IDs that were received during assignment.
     * @return Map of rack IDs to sets of {@link TaskId}s. The return value can be used by the {@link TaskAssignor}
     *         implementation to decide if a {@link TaskId} can be assigned to a client that is located in a given rack.
     */
    Map<String, Set<TaskId>> computeStandbyTaskDistribution(final Map<TaskId, String> sourceTasks,
                                                            final Set<String> clientRackIds);
}
By injecting a custom implementation of the RackAwareStandbyTaskAssignor interface, users could hint Kafka Streams where to allocate certain standby tasks when more complex processing logic was required, for example, parsing rack.id, which can be a combination of multiple identifiers (as seen in the previous examples where we have cluster and zone tags).
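The parsing burden this would have placed on users can be sketched as follows: a custom implementation would first have to split a composite rack.id into its dimensions. The "-" separator and the dimension names below are assumptions purely for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CompositeRackIdSketch {

    // Split a composite rack ID such as "cluster1-zone1" into named dimensions.
    // The separator and the cluster/zone dimension names are hypothetical.
    static Map<String, String> parseRackId(final String rackId) {
        String[] parts = rackId.split("-");
        Map<String, String> dimensions = new LinkedHashMap<>();
        dimensions.put("cluster", parts[0]);
        dimensions.put("zone", parts[1]);
        return dimensions;
    }

    public static void main(String[] args) {
        System.out.println(parseRackId("cluster1-zone1"));
    }
}
```

With the accepted client.tag.* approach, these dimensions are instead expressed directly as separate configuration entries, so no such parsing convention is needed.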
The above-mentioned idea was abandoned because it's easier and more user-friendly to let users control standby task allocation with configuration options alone, instead of forcing them to implement a custom interface.
- The second approach was to refactor the TaskAssignor interface to be more user-friendly and expose it as a public interface. Users could then implement custom TaskAssignor logic and set it via StreamsConfig. With this, Kafka Streams users would effectively be in control of active and standby task allocation.
Similarly to the point above, this approach was also rejected because of its complexity.
Even though the usefulness of a pluggable TaskAssignor interface is more-or-less agreed upon, it was decided to cut it from this KIP's scope and prepare a separate KIP for that feature.