

Status

Current state: Under Discussion

Discussion thread: here

JIRA: here

Released: <Kafka Version>

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

The most expensive part of a Kafka cluster is probably its storage system. At LinkedIn we use RAID-10 for storage and set Kafka’s replication factor = 2. This setup requires 4X space to store data and tolerates up to 1 broker failure. We are still at risk of data loss after just 1 broker failure, which is not acceptable for e.g. financial data. On the other hand, it is prohibitively expensive to set replication factor = 3 with RAID-10 because it would increase our existing hardware cost and operational cost by 50%.

The solution is to use JBOD and set replication factor = 3 or higher. It is based on the idea that Kafka already has replication across brokers and it is unnecessary to use RAID-10 for replication. Let’s say we set the replication factor = 4 with JBOD. This setup requires 4X space to store data and tolerates up to 3 broker failures without losing any data. In comparison to our existing setup, this gives us 3X the broker failure tolerance without increasing storage hardware cost.

We have evaluated the possibility of using other RAID setups at LinkedIn, but none of them addresses our problem as well as JBOD does. RAID-0 stops working entirely with just one disk failure. RAID-5 and RAID-6 incur a sizable performance loss compared to RAID-0, and probably JBOD as well, due to their use of block-level striping with distributed parity.

Unfortunately, JBOD is not recommended for Kafka because some important features are missing. For example, Kafka lacks good tooling as well as load balancing across disks when multiple disks are used. Here is a list of problems that need to be addressed for JBOD to be useful:

1) The broker will shut down if any disk fails. This means a single disk failure can bring down the entire broker. Instead, the broker should keep serving the replicas on the good disks as long as there are good disks available.

2) Kafka doesn’t provide the necessary tools for users to manage JBOD. For example, Kafka doesn’t provide a script to re-assign replicas between disks of the same broker. These tools are needed before we can use JBOD with Kafka.

3) JBOD doesn’t by itself balance load across disks as RAID-10 does. This is a new problem we need to solve in order for the JBOD setup to work well. We need a better solution than the round-robin approach we currently use to select a disk for a new replica, and we should probably figure out how to re-assign replicas across disks of the same broker if we notice load imbalance across the disks of a broker.

For ease of discussion, we have separated the design of JBOD support into two different KIPs. This KIP addresses the second problem. See KIP - Handle disk failure for JBOD for our proposal of how to address the first problem.

Since the Kafka configuration and implementation do not expose "disk", we will use log directory and disk interchangeably in the rest of the KIP.

Goals

The goal of this KIP is to allow administrators to re-assign replicas to specific log directories of brokers, query offline replicas of topics and brokers, and replace bad disks with good disks. This addresses the second problem raised in the motivation section. See KIP - Handle disk failure for JBOD for our proposal of how to address the first problem.

Proposed change

1) How to move replica between log directories on the same broker

Problem statement:

Kafka does not allow a user to move a replica to another log directory on the same broker at runtime. This was not needed previously because we use RAID-10 and the load is already balanced across disks, but it is needed in order to use JBOD with Kafka.

Currently a replica has only two possible states, follower or leader, and it is identified by the 3-tuple (topic, partition, broker). This works for the current implementation because there can be at most one such replica on a broker. However, we will now have two such replicas on a broker while we move a replica from one log directory to another log directory on the same broker. Either a replica should also be identified by its log directory, or the broker needs to persist information under the log directory to distinguish the destination replica from the source replica that is being moved.

In addition, users need to be able to query the list of partitions and their size per log directory on any machine so that they can determine how to move replicas to balance the load. While this information may be retrieved via external tools if the user can ssh to the machine that is running the Kafka broker, developing such tools may not be easy on all the operating systems that Kafka may run on. Further, a user may not even have the authorization to access the machine. Therefore Kafka needs to provide a new request/response to expose this information to users.

Solution:

The idea is that the user can send a ChangeReplicaDirRequest which tells the broker to move topicPartition.log to a destination log directory. The broker creates a new directory with a .move suffix on the destination log directory to hold all log segments of the replica. This allows the broker, during startup, to distinguish the log segments of the replica on the destination log directory from the log segments of the replica on the source log directory. The broker then creates new log segments for the replica on the destination log directory, pushes data from the source log to the destination log, and replaces the source log with the destination log for this replica once the new log has caught up.

In the following we describe each step of the replica movement.

 

1. Initiate replica movement using ChangeReplicaDirRequest

Either the user or the controller can send a ChangeReplicaDirRequest to the broker to initiate replica movement between its log directories. The flow graph below illustrates how the broker handles a ChangeReplicaDirRequest.

 

 


The broker will put the ChangeReplicaDirRequest in a DelayedOperationPurgatory. The ChangeReplicaDirRequest can be completed when results for all partitions specified in the request are available. The result of a partition is determined using the following logic (a minimal sketch of this completion check follows the list below):

 

  • If the source or destination disk fails, the result of this partition will be KafkaStorageException
  • If the destination replica has caught up with the source replica and has replaced it, the result of this partition has no error, which means success.
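The sketch below illustrates this completion check. It is only a minimal illustration under assumptions, not the actual broker code: the class name, the MoveState enum, the tryComplete helper and the numeric error code are all hypothetical.

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class ChangeReplicaDirCompletion {

    // Hypothetical per-partition state of an intra-broker replica movement.
    enum MoveState { IN_PROGRESS, SOURCE_OR_DEST_OFFLINE, SWAPPED }

    static final short NONE = 0;                  // success: destination replica has replaced the source
    static final short KAFKA_STORAGE_ERROR = 56;  // placeholder standing in for KafkaStorageException

    // Returns per-partition error codes once every partition in the request has a result,
    // or Optional.empty() if the request must stay in the DelayedOperationPurgatory.
    static Optional<Map<String, Short>> tryComplete(Map<String, MoveState> moves) {
        Map<String, Short> results = new HashMap<>();
        for (Map.Entry<String, MoveState> entry : moves.entrySet()) {
            switch (entry.getValue()) {
                case IN_PROGRESS:
                    return Optional.empty();
                case SOURCE_OR_DEST_OFFLINE:
                    results.put(entry.getKey(), KAFKA_STORAGE_ERROR);
                    break;
                case SWAPPED:
                    results.put(entry.getKey(), NONE);
                    break;
            }
        }
        return Optional.of(results);
    }
}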

2. Copy replica data from source log directory to destination log directory

Here we describe how a broker moves data from source log directory to destination log directory.

Case 1: broker is moving a leader replica of topicPartition

- The Java class Replica will keep track of two instances of the Java class Log, one referencing topicPartition on the source log directory and the other referencing topicPartition.move on the destination log directory.
- The broker starts a ReplicaFetcherThread to move data from topicPartition on the source log directory to topicPartition.move on the destination log directory. This ReplicaFetcherThread does not fetch data from other brokers.
- The ReplicaFetcherThread repeatedly reads a ByteBufferMessageSet of up to replica.fetch.max.bytes from topicPartition on the source log directory and appends the data to topicPartition.move on the destination log directory. If the rate would exceed the user-specified replication quota introduced in KIP-73, the topicPartition is not included in the current round of copying.
- If the ReplicaFetcherThread is moving multiple replicas between log directories, it will choose partitions in alphabetical order when selecting partitions to move. This helps reduce the amount of double writes while a replica is being moved and thus improves performance. A minimal sketch of this copy loop is given after this list.
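For illustration, here is a minimal sketch of the copy loop described above. The Log and ReplicationQuota interfaces are simplified stand-ins rather than Kafka's actual classes, so treat this as a sketch of the idea, not the implementation.

public class IntraBrokerCopyLoop {

    // Simplified stand-in for the log of one replica in one log directory.
    interface Log {
        long logEndOffset();
        byte[] read(long fromOffset, int maxBytes);  // roughly one ByteBufferMessageSet worth of data
        void append(byte[] records);
    }

    // Simplified stand-in for the KIP-73 replication quota.
    interface ReplicationQuota {
        boolean isExceeded();
        void record(long bytes);
    }

    // Copy from topicPartition (source) to topicPartition.move (destination) until caught up.
    static void copyUntilCaughtUp(Log source, Log destinationMove, ReplicationQuota quota, int replicaFetchMaxBytes) {
        while (destinationMove.logEndOffset() < source.logEndOffset()) {
            if (quota.isExceeded())
                return;  // skip this partition for now; it will be retried in a later round
            byte[] batch = source.read(destinationMove.logEndOffset(), replicaFetchMaxBytes);
            destinationMove.append(batch);
            quota.record(batch.length);
        }
        // Once caught up, the swap described in "3. Replacing replica ..." below can take place.
    }
}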

Case 2: broker is moving a follower replica of topicPartition

- The Java class Replica will keep track of two instances of the Java class Log, one referencing topicPartition on the source log directory and the other referencing topicPartition.move on the destination log directory.
- The broker starts a ReplicaFetcherThread to move data from topicPartition on the source log directory to topicPartition.move on the destination log directory. This is the same ReplicaFetcherThread that is fetching data from the leader broker of topicPartition.
- The ReplicaFetcherThread builds the FetchRequest with maximum wait time = 0 ms if it needs to move any partition from topicPartition to topicPartition.move, so that the remote fetch does not delay the movement between log directories.
- The ReplicaFetcherThread sends the FetchRequest to the leader broker of the topicPartition.
- The ReplicaFetcherThread receives the FetchResponse and appends the data from the FetchResponse to the local disks.
- The ReplicaFetcherThread reads one or more ByteBufferMessageSet from topicPartition on the source log directory. Each ByteBufferMessageSet has size of up to replica.fetch.max.bytes, and the total size read in this step is limited by replica.fetch.response.max.bytes AND the user-specified replication quota introduced in KIP-73.
- If the ReplicaFetcherThread is moving multiple replicas between log directories, it will choose partitions in alphabetical order when selecting partitions to move. This helps reduce the amount of double writes while a replica is being moved and thus improves performance. A minimal sketch of the wait-time choice is given after this list.
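A small sketch of the wait-time choice mentioned above, assuming a hypothetical helper on the fetcher thread; the method and parameter names are illustrative only.

import java.util.Set;

public class FollowerFetchWaitTime {

    // If the fetcher thread also has partitions to move between local log directories,
    // it should not block waiting for the leader, so it uses maxWait = 0 ms.
    static int fetchMaxWaitMs(Set<String> partitionsBeingMovedLocally, int replicaFetchWaitMaxMs) {
        return partitionsBeingMovedLocally.isEmpty() ? replicaFetchWaitMaxMs : 0;
    }
}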

Notes:
- The replica movement will stop if either the source or destination replica becomes offline due to disk failure.
- We use the same mechanism introduced in KIP-73 to throttle the rate of replica movement between disks on the same broker. The user will need to configure leader.replication.throttled.replicas, follower.replication.throttled.replicas, leader.replication.throttled.rate and follower.replication.throttled.rate in the same way as specified in KIP-73, i.e. through kafka-reassign-partitions.sh or kafka-configs.sh (an example configuration is sketched below). For every message that is moved from the source disk to the destination disk, the size of the message will be subtracted from both the leader replication quota and the follower replication quota if its partition is included in the throttled replicas list. No data will be moved for a partition in *.replication.throttled.replicas if either the leader replication quota or the follower replication quota is exceeded.
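As a rough example of how the KIP-73 throttle could be configured (not prescribed by this KIP; the broker id, topic name, rate and partition:broker pairs below are placeholders):

# Set the throttled rates (bytes/sec) on broker 1
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type brokers --entity-name 1 \
  --add-config 'leader.replication.throttled.rate=10485760,follower.replication.throttled.rate=10485760'

# Mark replica (partition 0, broker 1) of mytopic as throttled for both leader and follower quotas
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name mytopic \
  --add-config 'leader.replication.throttled.replicas=0:1,follower.replication.throttled.replicas=0:1'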


3. Replacing replica in the source log directory with replica in the destination log directory

Case 1: broker is moving a leader replica of topicPartition

- The ReplicaFetcherThread discovers that topicPartition.move on the destination log directory has caught up with topicPartition on the source log directory after it pushes a ByteBufferMessageSet to topicPartition.move.
- The ReplicaFetcherThread acquires a lock to prevent KafkaRequestHandler threads from appending data to the topicPartition (a minimal sketch of this swap step is given after the list).
- The ReplicaFetcherThread renames the directory topicPartition to topicPartition.delete on the source log directory. topicPartition.delete will be subject to asynchronous delete.
- The ReplicaFetcherThread renames the directory topicPartition.move to topicPartition on the destination log directory.
- The ReplicaFetcherThread changes the Replica instance of this topicPartition to reference only the directory topicPartition on the destination log directory.
- The ReplicaFetcherThread releases the lock so that KafkaRequestHandler threads can continue to append data to topicPartition.
- Data from ProduceRequests will be appended to topicPartition on the destination log directory from then on.
- FetchRequests will be served from topicPartition on the destination log directory from then on.
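Below is a minimal sketch of this swap step. The types used here (a ReentrantLock for the append lock, plain File/Files calls for the renames) are illustrative assumptions rather than the actual broker code.

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.concurrent.locks.ReentrantLock;

public class ReplicaSwap {

    // Renames topicPartition -> topicPartition.delete and topicPartition.move -> topicPartition
    // while holding the lock, so KafkaRequestHandler threads cannot append to the old log.
    static File swap(ReentrantLock appendLock, File sourceDir, File destinationMoveDir) throws IOException {
        appendLock.lock();
        try {
            File deleteDir = new File(sourceDir.getParentFile(), sourceDir.getName() + ".delete");
            Files.move(sourceDir.toPath(), deleteDir.toPath());           // old log, subject to asynchronous delete
            String finalName = destinationMoveDir.getName().replace(".move", "");
            File finalDir = new File(destinationMoveDir.getParentFile(), finalName);
            Files.move(destinationMoveDir.toPath(), finalDir.toPath());   // *.move becomes the live replica directory
            return finalDir;  // the Replica instance should now reference only this directory
        } finally {
            appendLock.unlock();
        }
    }
}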

 

Case 2: broker is moving a follower replica of topicPartition

- The ReplicaFetcherThread discovers that topicPartition.move on the destination log directory has caught up with topicPartition on the source log directory after it pushes a ByteBufferMessageSet to topicPartition.move.
- The ReplicaFetcherThread renames the directory topicPartition to topicPartition.delete on the source log directory. topicPartition.delete will be subject to asynchronous delete.
- The ReplicaFetcherThread renames the directory topicPartition.move to topicPartition on the destination log directory.
- The ReplicaFetcherThread changes the Replica instance of this topicPartition to reference only the directory topicPartition on the destination log directory.
- The ReplicaFetcherThread will append data from the leader of topicPartition to the directory topicPartition on the destination log directory from then on.

Notes:
- When swapping in a leader replica after the replica on the destination disk has caught up, a proper lock is needed to prevent KafkaRequestHandler threads from appending data to topicPartition.log on the source disk while the ReplicaFetcherThread is swapping the replica.
- When swapping in a follower replica after the replica on the destination disk has caught up, no lock is needed, because the same ReplicaFetcherThread performs both the replacement and the fetching of data from the leader.


4. Handle failures that happen while the broker is moving data or swapping replicas

The broker does the following to recover from failures when it starts up (a sketch of these rules follows the list):

- If both the directory topicPartition and the directory topicPartition.move exist on good log directories, the broker will start a ReplicaFetcherThread to copy data from topicPartition to topicPartition.move. The effect is the same as if the broker had received a ChangeReplicaDirRequest to move the replica from topicPartition to topicPartition.move.
- If topicPartition.move exists but topicPartition doesn't exist on any good log directory, and there is no bad log directory, then the broker renames topicPartition.move to topicPartition.
- If topicPartition.move exists but topicPartition doesn't exist on any good log directory, and there is a bad log directory, then the broker considers topicPartition offline and does not touch topicPartition.move.
- If topicPartition.delete exists, the broker schedules topicPartition.delete for asynchronous delete.
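The recovery rules above can be summarized with the following small sketch; the enum and helper method are hypothetical and exist only to make the decision table explicit.

public class MoveRecoveryAtStartup {

    enum Action { RESUME_MOVE, RENAME_MOVE_TO_FINAL, LEAVE_OFFLINE, NOTHING }

    // finalDirExists: topicPartition exists on a good log directory
    // moveDirExists:  topicPartition.move exists on a good log directory
    // anyBadLogDir:   at least one log directory of the broker is bad
    static Action decide(boolean finalDirExists, boolean moveDirExists, boolean anyBadLogDir) {
        if (finalDirExists && moveDirExists)
            return Action.RESUME_MOVE;           // same effect as receiving a ChangeReplicaDirRequest
        if (moveDirExists && !anyBadLogDir)
            return Action.RENAME_MOVE_TO_FINAL;  // the destination copy is the only copy; finish the swap
        if (moveDirExists)
            return Action.LEAVE_OFFLINE;         // the source may live on the bad disk; do not touch *.move
        return Action.NOTHING;                   // (*.delete directories are simply scheduled for async delete)
    }
}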

2) How to reassign replica between log directories across brokers

Problem statement:

kafka-reassign-partitions.sh should provide the option for the user to specify the destination log directory of each replica on any broker, and the user should be able to verify that the replica has been moved to the specified log directory after the reassignment is completed. This is needed in order for the user to balance load across log directories of brokers in the cluster.

Solution:

The idea is that the user should be able to specify a log directory when using kafka-reassign-partitions.sh to reassign partitions. The controller should be able to read this optional log directory info when reading the assignment from zookeeper. The controller should send a ChangeReplicaDirRequest and wait for the ChangeReplicaDirResponse to confirm the movement to the specified log directory before declaring that the partition has been moved. We describe the procedure in more detail below:

- The user specifies a list of log directories, one log directory per replica, for each topic partition in the reassignment json file that is provided to kafka-reassign-partitions.sh. The log directory specified by the user must be either "any" or an absolute path that begins with '/'. See the Scripts section for the format of this json file.

- kafka-reassign-partitions.sh writes the log directories obtained from the reassignment json file to the znode /admin/reassign_partitions. If the user doesn't specify a log directory, "any" will be used as the default log directory name. See the Zookeeper section for the format of the data in the znode.

- The controller updates its state machine, sends LeaderAndIsrRequest and so on to perform the partition reassignment. In addition, it also sends a ChangeReplicaDirRequest for all replicas that are specified with log directory != "any". The ChangeReplicaDirRequest will move the replica to the specified log directory if it is not already placed there on the broker.

- In addition to the existing requirements for partition reassignment completion, the controller will also wait for the ChangeReplicaDirResponse (corresponding to the ChangeReplicaDirRequest it has sent) before it considers a movement completed and removes the partition from /admin/reassign_partitions. This allows the user to confirm that the reassignment to specific disks of brokers has completed once the partition is removed from the znode data of /admin/reassign_partitions.

3) How to retrieve information to determine the new replica assignment across log directories

Problem statement:

In order to optimize replica assignment across log directories, the user needs to figure out the list of partitions per log directory and the size of each partition. As of now Kafka doesn't expose this information via any RPC, and the user would need to use external tools to directly examine the log directories on each machine to get this information. It is better if Kafka can expose this information via RPC.

Solution:

We introduce DescribeDirsRequest and DescribeDirsResponse. When a broker receives a DescribeDirsRequest with an empty list of log directories, it will respond with a DescribeDirsResponse which shows the size of each partition and the list of partitions per log directory for all log directories. If the user has specified a list of log directories in the DescribeDirsRequest, the broker will provide the above information only for the log directories specified by the user. A non-zero error code will be specified in the DescribeDirsResponse for each log directory that is either offline or not found by the broker.


The user can run a command such as ./bin/kafka-log-dirs.sh --describe --zookeeper localhost:2181 --broker 1 to get the above information per log directory.

Public interface

Zookeeper

Change the format of the data stored in the znode /admin/reassign_partitions to allow a log directory to be specified for each replica. A hypothetical example of the new data follows the schema below.

{
  "version" : int,
  "all_log_dirs": [str] <-- NEW. This is a list of unique strings representing log directory paths. "any" will be included as the first element of this list.
  "partitions" : [
    {
      "topic" : str,
      "partition" : int,
      "replicas" : [int],
      "log_dirs" : [int]    <-- NEW. This is a list of indexes of log directory paths in the "all_log_dirs". Thus we can translate this list of indexes into the list of log directory paths. If log directory is not explicitly specified by user, "any" will be used as log directory name and broker will select log directory using its own policy. Currently the log directory is selected in a round-robin manner.
    },
    ...
  ]
}
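For illustration, a hypothetical instance of this znode data could look as follows (the topic name and paths are made up). Here the first replica of partition 0 is pinned to /data/disk1 via index 1 into "all_log_dirs", while every replica mapped to index 0 uses "any", i.e. the broker picks the log directory itself.

{
  "version" : 1,
  "all_log_dirs" : ["any", "/data/disk1", "/data/disk2"],
  "partitions" : [
    {"topic" : "mytopic", "partition" : 0, "replicas" : [1, 2], "log_dirs" : [1, 0]},
    {"topic" : "mytopic", "partition" : 1, "replicas" : [2, 3], "log_dirs" : [2, 0]}
  ]
}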

Protocol

Create ChangeReplicaDirRequest

 

ChangeReplicaDirRequest => [ReplicaState]

ReplicaState =>
  topic => str
  partition => int32
  dir => str

 

Create ChangeReplicaDirResponse

 

ChangeReplicaDirResponse => error_code partitions
  error_code => int16
  partitions => [ChangeReplicaDirResponsePartition]
 
ChangeReplicaDirResponsePartition => topic partition error_code
  topic => str
  partition => int32
  error_code => int16

Create DescribeDirsRequest

DescribeDirsRequest => log_dirs
  log_dirs => [str]

Create DescribeDirsResponse

DescribeDirsResponse => log_dirs
  log_dirs => [DescribeDirsResponseDirMetadata]
 
DescribeDirsResponseDirMetadata => error_code path topics
  error_code => int16
  path => str
  topics => [DescribeDirsResponseTopic]
 
DescribeDirsResponseTopic => topic partitions
  topic => str
  partitions => [DescribeDirsResponsePartition]
  
DescribeDirsResponsePartition => partition size
  partition => int32
  size => int64

 

Scripts

1) Add kafka-log-dirs.sh, which allows the user to get the list of replicas per log directory on a broker.


./bin/kafka-log-dirs.sh --describe --zookeeper localhost:2181 --broker 1 --log-dirs dir1,dir2,dir3 will show the list of partitions and their size per log directory for the specified log directories on the broker. If no log directory is specified by the user, then all log directories will be queried. If a log directory is offline, then its error code in the DescribeDirsResponse will indicate the error and the log directory will be marked offline in the script output.

The script output would have the following json format.

 

{
  "version" : 1,
  "log_dirs" : [
    {
      "is_live" : boolean,
      "path" : str,
      "partitions": [
        {
          "topic" : str, 
          "partition" : int32, 
          "size" : int64
        },
        ...
      ]
    },

    ...
  ]
}

 

2) Change kafka-reassign-partitions.sh to allow the user to specify the log directory that each replica should be moved to. This is provided via the reassignment json file with the following new format (a hypothetical example follows the format below):
{
  "version" : int,
  "partitions" : [
    {
      "topic" : str,
      "partition" : int,
      "replicas" : [int],
      "log_dirs" : [str]    <-- NEW. A log directory can be either "any", or a valid absolute path that begins with '/'
    },
    ...
  ]
}
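A hypothetical reassignment json file using the new field could look like this (the topic name and paths are placeholders); the first replica of the partition is placed on /data/disk1 of broker 1, while the second replica is left to broker 2's own placement policy via "any".

{
  "version" : 1,
  "partitions" : [
    {"topic" : "mytopic", "partition" : 0, "replicas" : [1, 2], "log_dirs" : ["/data/disk1", "any"]}
  ]
}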

Changes in Operational Procedures

In this section we describe the expected changes in operational procedures needed to run Kafka with JBOD. Administrators of a Kafka cluster need to be aware of these changes before switching from RAID-10 to JBOD.

- Need to load balance across log directories

When running Kafka with RAID-10, we only need to take care of load imbalance across brokers, and the administrator can balance load across brokers using the script kafka-reassign-partitions.sh. After switching from RAID-10 to JBOD, we will also start to see load imbalance across log directories. In order to address this problem, the administrator needs to get the partition assignment and partition sizes per log directory using kafka-log-dirs.sh, determine the reassignment of replicas per log directory (as opposed to per broker), and provide the partition -> log_directory mapping as input to kafka-reassign-partitions.sh to execute the new assignment (an example workflow is sketched below).

Administrators should also be prepared for rebalancing across log directories to be needed much more frequently than rebalancing across brokers, since the capacity of an individual disk is likely much smaller than the capacity of the existing RAID-10 setup.
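The day-to-day workflow could then look roughly as follows (a sketch under assumptions; the broker id, file name and zookeeper address are placeholders):

# 1. Inspect partitions and their sizes per log directory on broker 1
bin/kafka-log-dirs.sh --describe --zookeeper localhost:2181 --broker 1

# 2. Build a reassignment json file with the desired "log_dirs" (see the Scripts section) and execute it
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file reassign.json --execute

# 3. Verify until the reassignment, including the per-log-directory movement, is reported complete
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file reassign.json --verify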

Compatibility, Deprecation, and Migration Plan

This KIP is a pure addition, so there is no backward compatibility concern.

The KIP changes the inter-broker protocol. Therefore the migration requires two rolling bounces. In the first rolling bounce we will deploy the new code, but brokers will still communicate using the existing protocol. In the second rolling bounce we will change the config so that brokers start to communicate with each other using the new protocol.

Test Plan

The new features will be tested through unit and integration tests.

Rejected Alternatives

 

Potential Future Improvement

 

1. Allow the controller/user to specify a quota when moving replicas between log directories on the same broker.

 


