Status
Current state: Completed
Discussion thread: here
JIRA: here
Released: <Kafka Version>
...
Public interface
Zookeeper
1) Store data in the following JSON format in the znode /log_dir_event_notification/log_dir_event_*
{
  "version" : int,
  "broker" : int,
  "event" : int    <-- We currently use 1 to indicate a LogDirFailure event.
}
Protocol
Add an is_new_replica field to LeaderAndIsrRequestPartitionState, which is used by LeaderAndIsrRequest:
LeaderAndIsrRequest => controller_id controller_epoch partition_states live_leaders
  controller_id => int32
  controller_epoch => int32
  partition_states => [LeaderAndIsrRequestPartitionState]
  live_leaders => [LeaderAndIsrRequestLiveLeader]

LeaderAndIsrRequestPartitionState => topic partition controller_epoch leader leader_epoch isr zk_version replicas is_new_replica
  topic => str
  partition => int32
  controller_epoch => int32
  leader => int32
  leader_epoch => int32
  isr => [int32]
  zk_version => int32
  replicas => [int32]
  is_new_replica => boolean  <-- NEW
Add an offline_replicas field to UpdateMetadataRequestPartitionState, which is used by UpdateMetadataRequest:
UpdateMetadataRequest => controller_id controller_epoch partition_states live_brokers
  controller_id => int32
  controller_epoch => int32
  partition_states => [UpdateMetadataRequestPartitionState]
  live_brokers => [UpdateMetadataRequestBroker]

UpdateMetadataRequestPartitionState => topic partition controller_epoch leader leader_epoch isr zk_version replicas offline_replicas
  topic => string
  partition => int32
  controller_epoch => int32
  leader => int32
  leader_epoch => int32
  isr => [int32]
  zk_version => int32
  replicas => [int32]
  offline_replicas => [int32]  <-- NEW. This includes offline replicas due to both broker failure and disk failure.
Add an offline_replicas field to PartitionMetadata, which is used by MetadataResponse:
MetadataResponse => brokers cluster_id controller_id topic_metadata
  brokers => [MetadataBroker]
  cluster_id => nullable_str
  controller_id => int32
  topic_metadata => TopicMetadata

TopicMetadata => topic_error_code topic is_internal partition_metadata
  topic_error_code => int16
  topic => str
  is_internal => boolean
  partition_metadata => [PartitionMetadata]

PartitionMetadata => partition_error_code partition_id leader replicas isr offline_replicas
  partition_error_code => int16
  partition_id => int32
  leader => int32
  replicas => [int32]
  isr => [int32]
  offline_replicas => [int32]  <-- NEW. This includes offline replicas due to both broker failure and disk failure.
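As a hedged illustration of how the new field surfaces to clients, the sketch below assumes a Java client whose PartitionInfo exposes the offline replicas carried in MetadataResponse (an offlineReplicas() accessor, as in clients built against this KIP). The bootstrap address and topic name are placeholders.

import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class OfflineReplicaCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // Topic metadata; offlineReplicas() carries the new offline_replicas
            // field from MetadataResponse in clients that implement this KIP.
            List<PartitionInfo> partitions = consumer.partitionsFor("my-topic");
            for (PartitionInfo p : partitions) {
                System.out.printf("partition %d offline replicas: %s%n",
                        p.partition(), Arrays.toString(p.offlineReplicas()));
            }
        }
    }
}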
Scripts
1) When describing a topic, kafka-topics.sh will show the offline replicas for each partition.
Metrics
Here are the metrics we need to add as part of this proposal
1) kafka.server:name=OfflineReplicaCount,type=ReplicaManager
The number of offline replicas on a live broker. This is equivalent to the number of TopicPartition logs on the bad log directories of the broker. One gauge per broker.
2) kafka.server:name=OfflineLogDirectoryCount,type=LogManager
The number of offline log directories on a live broker. One gauge per broker.
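Below is a minimal sketch of how an administrator might poll these two gauges over JMX. The JMX service URL is a placeholder, and the object names simply follow the name/type attributes listed above; the exact JMX layout in a given Kafka release may differ.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class OfflineMetricsProbe {
    public static void main(String[] args) throws Exception {
        // JMX endpoint of the broker; host and port are placeholders for this sketch.
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Gauge values are exposed as the "Value" attribute of the MBeans named above.
            Object offlineReplicas = mbs.getAttribute(
                    new ObjectName("kafka.server:name=OfflineReplicaCount,type=ReplicaManager"),
                    "Value");
            Object offlineLogDirs = mbs.getAttribute(
                    new ObjectName("kafka.server:name=OfflineLogDirectoryCount,type=LogManager"),
                    "Value");

            System.out.println("OfflineReplicaCount: " + offlineReplicas);
            System.out.println("OfflineLogDirectoryCount: " + offlineLogDirs);
        }
    }
}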
Changes in Operational Procedures
In this section we describe the expected changes in operational procedures when running Kafka with JBOD instead of RAID. Administrators of Kafka clusters need to be aware of these changes before switching from RAID-10 to JBOD.
1) Need to adjust replication factor and min.insync.replicas
After we switch from RAID-10 to JBOD, the number of disks that can fail without impacting availability will be smaller if the replication factor is not changed. Administrators need to adjust the replication factor and min.insync.replicas to balance the cost, availability and performance of the Kafka cluster. With proper configuration of these two settings, we can either reduce disk cost or increase tolerance of broker and disk failures. Here are a few examples, followed by an illustrative configuration sketch:
- If we switch from RAID-10 to JBOD and keep the replication factor at 2, the disk usage of the Kafka cluster is reduced by 50% without reducing availability against broker failure. But tolerance of disk failure decreases.
- If we switch from RAID-10 to JBOD and increase the replication factor from 2 to 3, the disk usage of the Kafka cluster is reduced by 25%, and the number of brokers that can fail without impacting availability increases from 1 to 2. But tolerance of disk failure still decreases.
- If we switch from RAID-10 to JBOD and increase the replication factor from 2 to 4, the disk usage of Kafka stays the same, the number of brokers that can fail without impacting availability increases from 1 to 3, and the number of disks that can fail without impacting availability stays the same.
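As an illustration of adjusting these settings, here is a hedged sketch that creates a topic with replication factor 3 and min.insync.replicas=2 using the Java AdminClient. The topic name, partition count and bootstrap address are placeholders, and the exact values should follow the trade-offs above.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicWithHigherRf {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // Replication factor 3 plus min.insync.replicas=2 keeps the partition
            // writable (for acks=all producers) with one replica offline, whether the
            // replica is lost to a broker failure or to a single bad disk under JBOD.
            NewTopic topic = new NewTopic("my-topic", 12, (short) 3)
                    .configs(Collections.singletonMap("min.insync.replicas", "2"));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}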
2) Need to monitor disk failure via the OfflineLogDirectoryCount metric
Administrators will need to detect log directory failure by monitoring OfflineLogDirectoryCount. After a log directory failure is detected, the administrator needs to fix the disks and restart the broker.
3) Need to decide whether to restart a broker that had a known disk failure before fixing the disk
Although this KIP allows a broker to start with bad disks (i.e. bad log directories), Kafka administrators need to be aware that a problematic disk may simply be slow (e.g. 100X slower) without giving a fatal error (e.g. IOException), and Kafka currently does not handle this scenario. The Kafka cluster may be stuck in an unhealthy state if a disk is slow but not reporting a fatal error. Since a disk with a known failure is more likely to behave problematically, the administrator may choose not to restart the broker before fixing its disks, to be on the safe side.
In addition, the administrator needs to be aware that if a bad log directory is removed from the broker config, all existing replicas on that log directory will be re-created on the good log directories. Thus a bad log directory should only be removed from the broker config if there is enough space on the good log directories.
Compatibility, Deprecation, and Migration Plan
The KIP changes the inter-broker protocol. Therefore the migration requires two rolling bounces. In the first rolling bounce we deploy the new code, but brokers still communicate using the existing protocol. In the second rolling bounce we change the config so that brokers start to communicate with each other using the new protocol.
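For concreteness, here is a sketch of the relevant server.properties setting during the two rolling bounces, assuming the standard inter.broker.protocol.version mechanism is what gates the new inter-broker protocol; <current version> and <new version> are placeholders.

# First rolling bounce: deploy the new code but keep the existing inter-broker protocol.
inter.broker.protocol.version=<current version>

# Second rolling bounce: switch brokers to the new inter-broker protocol.
inter.broker.protocol.version=<new version>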
Test Plan
The new features will be tested through unit, integration, and system tests. In the following we describe the system tests only. In addition to the tests described in this KIP, there are also tests in KIP-113 to verify that replicas already created on good log directories are not affected by failure of other log directories.
Note that we validate the following when we say "validate client/cluster state" in the system tests.
- Brokers are all running and show the expected error messages
- Topic descriptions show the expected results for all topics
- A pair of producer and consumer can successfully produce/consume from a topic without message loss or duplication.
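A minimal sketch of the produce/consume check above, assuming a plain Java producer/consumer pair; the topic name, bootstrap address and message count are placeholders, and the real system tests perform stricter loss/duplication checks.

import java.time.Duration;
import java.util.Collections;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProduceConsumeValidation {
    public static void main(String[] args) throws Exception {
        String topic = "test-topic";        // placeholder
        String brokers = "localhost:9092";  // placeholder
        int messageCount = 1000;

        // Produce with acks=all so a write is only acknowledged once the full ISR has it.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", brokers);
        producerProps.put("acks", "all");
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            for (int i = 0; i < messageCount; i++) {
                producer.send(new ProducerRecord<>(topic, Integer.toString(i), "msg-" + i)).get();
            }
        }

        // Consume from the beginning and count distinct keys to spot loss or duplication.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", brokers);
        consumerProps.put("group.id", "validation-group");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());
        Set<String> seenKeys = new HashSet<>();
        long totalRecords = 0;
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singleton(topic));
            long deadline = System.currentTimeMillis() + 60_000;
            while (seenKeys.size() < messageCount && System.currentTimeMillis() < deadline) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    seenKeys.add(record.key());
                    totalRecords++;
                }
            }
        }
        System.out.printf("distinct messages: %d / %d, total records read: %d%n",
                seenKeys.size(), messageCount, totalRecords);
    }
}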
1) Log directory failure discovered during bootstrap
- Start 1 zookeeper and 3 brokers. Each broker has 2 log directories.
- Create a topic of 1 partition with 3 replicas
- Start a pair of producer and consumer to produce/consume from the topic
- Kill the leader of the partition
- Change the permission of the first log directory of the leader to 000
- Start the previous leader again
- Validate client/cluster state
2) Log directory failure discovered on leader during runtime
- Start 1 zookeeper and 3 brokers. Each broker has 2 log directories.
- Create a topic of 1 partition with 3 replicas
- Start a pair of producer and consumer to produce/consume from the topic
- Change the permission of the leader's log directory to 000
- Validate client/cluster state
// Now validate that the previous leader can still serve replicas on the good log directories
- Create another topic of 1 partition with 3 replicas
- Kill the other two brokers
- Start a pair of producer and consumer to produce/consume from the new topic
- Validate client/cluster state
3) Log directory failure discovered on follower during runtime
- Start 1 zookeeper and 3 brokers. Each broker has 2 log directories.
- Create a topic of 1 partition with 3 replicas
- Start a pair of producer and consumer to produce/consume from the topic
- Change the permission of the follower's log directory to 000
- Validate client/cluster state
// Now validate that the follower can still serve replicas on the good log directories
- Create another topic of 1 partition with 3 replicas
- Kill the other two brokers
- Start a pair of producer and consumer to produce/consume from the new topic
- Validate client/cluster state
Rejected Alternatives
- Let broker keep track of the replicas that it has created.
The con of this approach is that each broker, instead of the controller, would keep track of the replica placement information. This solution would split the task of determining offline replicas between the controller and the brokers, as opposed to the current Kafka design, where the controller determines the states of replicas and propagates this information to brokers. We think it is less error-prone to keep the controller as the only entity that maintains metadata (e.g. replica state) of the Kafka cluster.
- Avoid adding "create" field to LeaderAndIsrRequest.
- Add a new field "created" in the existing znode
/broker/topics/[topic]/partitions/[partitionId]/state
instead of creating a new znodeLeaderAndIsrRequset
, the leader would need to read this list of created replicas from zookeeper before updating isr in the zookeeper. This is different from the current design where all information except isr are read from LeaderAndIsrRequest from controller. And it creates opportunity for race condition. Thus we propose to add a new znode to keep those information that can only be written by controller.- Identify replica by 4-tuple (topic, partition, broker, log_directory) in zookeeper and various requests
1) If we were to tell Kafka users to deploy 50 brokers on a machine with 50 disks, the overhead of managing so many brokers' configs would increase accordingly.
Running one broker per disk adds a good bit of administrative overhead and complexity. If you perform a one by one rolling bounce of the cluster, you’re talking about a 10x increase in time. That means a cluster that restarts in 30 minutes now takes 5 hours. If you try and optimize this by shutting down all the brokers on one host at a time, you can get close to the original number, but you now have added operational complexity by having to micro-manage the bounce. The broker count increase will percolate down to the rest of the administrative domain as well - maintaining ports for all the instances, monitoring more instances, managing configs, etc.
2) Whether users deploy Kafka on a commercial cloud platform or run their own cluster, the size of the largest available disk is usually limited. There will be scenarios where users want to increase broker capacity by having multiple disks per broker. This JBOD KIP makes that feasible without hurting availability due to single-disk failure.
3) There is a performance concern when deploying 10 brokers instead of 1 broker on a machine. The metadata traffic in the cluster, including FetchRequest, ProduceResponse, MetadataRequest and so on, will be 10X higher. The packets-per-second will be 10X higher, which may limit performance if pps is the bottleneck. The number of sockets on the machine is 10X higher, and the number of replication threads will be 100X higher. The impact becomes more significant as the number of disks per machine increases, so it will limit Kafka's scalability in the long term. Our stress test results show that one-broker-per-disk has 15% lower throughput.
You also have the overhead of running the extra processes - extra heap, task switching, etc. We don’t have a problem with page cache really, since the VM subsystem is fairly efficient about how it works. But just because cache works doesn’t mean we’re not wasting other resources. And that gets pushed downstream to clients as well, because they all have to maintain more network connections and the resources that go along with it.
4) Less efficient way to manage quotas. If we deploy 10 brokers on a machine, each broker should receive 1/10 of the original quota to make sure users don't exceed a given byte-rate limit on that machine. It will be harder for a user to reach this limit on the machine if, for example, the user only sends/receives from one partition on that machine.
5) Rebalancing between disks/brokers on the same machine will be less efficient and less flexible. A broker has to read data from another broker on the same machine via a socket. It is also harder to do automatic load balancing between disks on the same machine in the future.
6) Running more brokers in a cluster also exposes you to more corner cases and race conditions within the Kafka code. Bugs in the brokers, bugs in the controllers, more complexity in balancing load in a cluster (though trying to balance load across disks in a single broker doing JBOD negates that).
Potential Future Improvement
1. Distribute segments of a given replica across multiple log directories on the same broker. It is useful but complicated. It is something that can be done later via a separate KIP.
2. Provide an intelligent solution for selecting the log directory in which to place new replicas and for re-assigning replicas across log directories to balance the load.
3. Have broker automatically rebalance replicas across its log directories. It is worth exploring separately in a future KIP as there are a few options in the design space.
4. Allow controller/user to specify quota when moving replicas between log directories on the same broker.