...

The idea is that the user can send a ChangeReplicaDirRequest which tells the broker to move topicPartition.log to a destination log directory. The broker can create a new directory with the .move postfix on the destination log directory to hold all log segments of the replica. This allows the broker, during startup, to distinguish log segments of the replica on the destination log directory from log segments of the replica on the source log directory. The broker can create new log segments for the replica on the destination log directory, push data from the source log to the destination log, and replace the source log with the destination log for this replica once the new log has caught up.
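For illustration, the on-disk layout of a partition during such a move could look like the following. The topic name, partition number and log directory paths below are hypothetical examples, not part of the proposal.

Code Block
# Before the move (source log directory: /data/disk1, destination log directory: /data/disk2)
/data/disk1/topicA-0/            <- replica's log segments and indexes

# While the move is in progress
/data/disk1/topicA-0/            <- source log, still serving produce/fetch traffic
/data/disk2/topicA-0.move/       <- destination log, catching up with the source

# After the destination log has caught up and the swap has completed
/data/disk1/topicA-0.delete/     <- old source log, scheduled for asynchronous delete
/data/disk2/topicA-0/            <- replica now lives on the destination log directory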

In the following we describe each step of the replica movement. 

1. Initiate replica movement using ChangeReplicaDirRequest

User uses kafka-reassign-partitions.sh to send ChangeReplicaDirRequest to the broker to initiate replica movement between its log directories. The flow graph below illustrates how the broker handles ChangeReplicaDirRequest.

 

(Attached flow graph: JBOD-flowgraph.pdf)

 

The broker will put the ChangeReplicaDirRequest in a DelayedOperationPurgatory. The ChangeReplicaDirRequest can be completed when results for all partitions specified in the request are available. The result of a partition is determined using the following logic (a sketch of the completion check follows the list):

 

  • If the source or destination disk fails, the result of this partition will be KafkaStorageException.
  • If the destination replica has caught up with the source replica and has replaced the source replica, the result of this partition has no error, which means success.
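As a rough sketch of this completion check, consider the following hypothetical DelayedChangeReplicaDir class. The names and structure are illustrative only and do not correspond to actual broker code.

Code Block
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: a delayed operation that completes once every partition in the
// ChangeReplicaDirRequest has a result, either an error code (e.g. for KafkaStorageException)
// or success once the destination replica has caught up and replaced the source replica.
class DelayedChangeReplicaDir {
    // partition -> error code; Optional.empty() means the result is not yet known
    private final Map<String, Optional<Short>> results = new ConcurrentHashMap<>();

    DelayedChangeReplicaDir(Iterable<String> partitions) {
        for (String tp : partitions)
            results.put(tp, Optional.empty());
    }

    // Called when a partition's movement finishes or fails.
    void recordResult(String topicPartition, short errorCode) {
        results.put(topicPartition, Optional.of(errorCode));
    }

    // The ChangeReplicaDirResponse can be sent only when all partitions have a result.
    boolean tryComplete() {
        return results.values().stream().allMatch(Optional::isPresent);
    }
}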

2. Copy replica data from source log directory to destination log directory

Here we describe how a broker moves data from source log directory to destination log directory.

Case 1: broker is moving a leader replica of topicPartition

- Java class Replica will keep track of two instances of Java class Log, one referencing the directory topicPartition on the source log directory and the other referencing the directory topicPartition.move on the destination log directory.
- Broker starts a ReplicaFetcherThread to move data from topicPartition on the source log directory to topicPartition.move on the destination log directory. This ReplicaFetcherThread does not fetch data from other brokers.
- The ReplicaFetcherThread repeatedly reads a ByteBufferMessageSet of size replica.fetch.max.bytes from topicPartition on the source log directory and appends the data to topicPartition.move on the destination log directory, as long as the rate does not exceed the user-specified replication quota introduced in KIP-73. If the quota would be exceeded, the topicPartition will not be included in the ByteBufferMessageSet.
- If the ReplicaFetcherThread is moving multiple replicas between log directories, it will select partitions to move in alphabetical order. This helps reduce the amount of double writes during the period that a replica is being moved and thus improves performance.

Case 2: broker is moving a follower replica of topicPartition

...

Notes:
- The replica movement will stop if either the source or destination replica becomes offline due to disk failure.
- We use the same mechanism introduced in KIP-73 to throttle the rate of replica movement between disks on the same broker. The user will need to configure leader.replication.throttled.replicas, follower.replication.throttled.replicas, leader.replication.throttled.rate and follower.replication.throttled.rate in the same way as specified in KIP-73, i.e. through kafka-reassign-partitions.sh or kafka-configs.sh (see the example below). For every message that is moved from the source disk to the destination disk, the size of the message will be counted against both the leader replication quota and the follower replication quota if its partition is included in the throttled replicas list. No data will be moved for a partition in *.replication.throttled.replicas if either the leader replication quota or the follower replication quota is exceeded.
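For example, the KIP-73 style throttle could be configured roughly as follows. The broker id, topic name, partition:broker pairs and rates are illustrative; see KIP-73 for the authoritative syntax.

Code Block
# Set the throttled rates (bytes/sec) on broker 1
./bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type brokers --entity-name 1 \
  --add-config leader.replication.throttled.rate=10485760,follower.replication.throttled.rate=10485760

# Mark which replicas of topic1 are subject to the throttle (partitionId:brokerId pairs)
./bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name topic1 \
  --add-config leader.replication.throttled.replicas=0:1,follower.replication.throttled.replicas=0:1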

3. Replacing replica in the source log directory with replica in the destination log directory

Case 1: broker is moving a leader replica of topicPartition

- The ReplicaFetcherThread discovers that topicPartition.move on the destination log directory has caught up with topicPartition on the source log directory after it pushes a ByteBufferMessageSet to topicPartition.move.
- The ReplicaFetcherThread acquires a lock to prevent the KafkaRequestHandler thread from appending data to topicPartition.
- The ReplicaFetcherThread renames directory topicPartition to topicPartition.delete on the source log directory. topicPartition.delete will be subject to asynchronous delete.
- The ReplicaFetcherThread renames directory topicPartition.move to topicPartition on the destination log directory.
- The ReplicaFetcherThread changes the Replica instance of this topicPartition to reference only the directory topicPartition on the destination log directory.
- The ReplicaFetcherThread releases the lock so that the KafkaRequestHandler thread can continue to append data to topicPartition.
- Data from ProduceRequest will be appended to topicPartition on the destination log directory in the future.
- FetchRequest will read data from topicPartition on the destination log directory in the future. (A sketch of this swap sequence follows.)
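The swap step can be summarized with the following illustrative Java sketch. The class, method and lock names are hypothetical and only meant to mirror the sequence above, not the actual broker code.

Code Block
import java.io.File;

// Illustrative sketch of the leader-replica swap performed by the ReplicaFetcherThread.
class LogDirSwap {
    private final Object appendLock = new Object();   // also held by the append path (KafkaRequestHandler)

    void swap(File sourceDir, File destMoveDir) {
        synchronized (appendLock) {                    // block appends to topicPartition during the swap
            File deleteDir = new File(sourceDir.getParent(), sourceDir.getName() + ".delete");
            File destDir = new File(destMoveDir.getParent(), destMoveDir.getName().replace(".move", ""));

            if (!sourceDir.renameTo(deleteDir))        // topicPartition -> topicPartition.delete (async delete later)
                throw new IllegalStateException("rename of source log failed");
            if (!destMoveDir.renameTo(destDir))        // topicPartition.move -> topicPartition
                throw new IllegalStateException("rename of destination log failed");

            // At this point the Replica instance would be updated to reference only destDir, so that
            // future ProduceRequest/FetchRequest traffic goes to the destination log directory.
        }
    }
}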

 

Case 2: broker is moving a follower replica of topicPartition
- The ReplicaFetcherThread discovers that topicPartition.move on the destination log directory has caught up with topicPartition on the source log directory after it pushes a ByteBufferMessageSet to topicPartition.move.
- The ReplicaFetcherThread renames directory topicPartition to topicPartition.delete on the source log directory. topicPartition.delete will be subject to asynchronous delete.
- The ReplicaFetcherThread renames directory topicPartition.move to topicPartition on the destination log directory.
- The ReplicaFetcherThread changes the Replica instance of this topicPartition to reference only the directory topicPartition on the destination log directory.
- The ReplicaFetcherThread will append data from the leader of topicPartition to the directory topicPartition on the destination log directory in the future.

Notes:
- When swapping a leader replica after the replica on the destination disk has caught up, a proper lock is needed to prevent the KafkaRequestHandler from appending data to topicPartition.log on the source disk while the ReplicaFetcherThread is swapping the replica.
- When swapping a follower replica after the replica on the destination disk has caught up, no lock is needed because the same ReplicaFetcherThread both performs the replacement and fetches data from the leader.

4. Handle failure that happens while the broker is moving data or swapping replica

Broker does the following to recover from failure when it starts up.

- If both the directory topicPartition and the directory topicPartition.move exist on good log directories, the broker will start a ReplicaFetcherThread to copy data from topicPartition to topicPartition.move. The effect is the same as if the broker had received a ChangeReplicaDirRequest to move the replica from topicPartition to topicPartition.move.
- If topicPartition.move exists but topicPartition doesn't exist on any good log directory, and if there is no bad log directory, then the broker renames topicPartition.move to topicPartition.
- If topicPartition.move exists but topicPartition doesn't exist on any good log directory, and if there is a bad log directory, then the broker considers topicPartition as offline and will not touch topicPartition.move.
- If topicPartition.delete exists, the broker schedules topicPartition.delete for asynchronous delete. (A sketch of this recovery decision follows.)
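A compact sketch of that per-partition startup decision might look like the following. The names are hypothetical and only illustrate the rules listed above.

Code Block
// Illustrative sketch of the per-partition recovery decision on broker startup.
enum RecoveryAction { RESUME_MOVE, PROMOTE_MOVE_DIR, LEAVE_OFFLINE, NOTHING }

class StartupRecovery {
    static RecoveryAction decide(boolean liveDirOnGoodLogDir,
                                 boolean moveDirOnGoodLogDir,
                                 boolean anyBadLogDir) {
        if (liveDirOnGoodLogDir && moveDirOnGoodLogDir)
            return RecoveryAction.RESUME_MOVE;       // restart copying topicPartition -> topicPartition.move
        if (moveDirOnGoodLogDir && !anyBadLogDir)
            return RecoveryAction.PROMOTE_MOVE_DIR;  // rename topicPartition.move to topicPartition
        if (moveDirOnGoodLogDir)
            return RecoveryAction.LEAVE_OFFLINE;     // topicPartition may live on a bad log dir; leave .move alone
        return RecoveryAction.NOTHING;               // any topicPartition.delete directory is deleted asynchronously
    }
}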

2) How to reassign replica between log directories across brokers

Problem statement:

kafka-reassign-partitions.sh should provide the option for the user to specify the destination log directory of the replica on any broker. And the user should be able to confirm that the replica has been moved to the specified log directory after the reassignment is completed. This is needed in order for the user to balance load across log directories of brokers in the cluster.

Solution:

The idea is that the user should be able to specify a log directory when using kafka-reassign-partitions.sh to reassign partitions. The controller should be able to read this optional log directory info when reading the assignment from zookeeper. The controller should be able to send ChangeReplicaDirRequest and wait for ChangeReplicaDirResponse to confirm the movement to the specified log directory before declaring that this partition has been moved. We describe the procedure in more detail below:

- User specifies a list of log directories, one log directory per replica, for each topic partition in the reassignment json file that is provided to kafka-reassign-partitions.sh. The log directory specified by the user must be either "any", or an absolute path which begins with '/'. See the Scripts section for the format of this json file (an illustrative example also follows this list).

- kafka-reassign-partitions.sh writes the log directories obtained from the reassignment json file to the znode /admin/reassign_partitions. If the user doesn't specify a log directory, "any" will be used as the default log directory name. See the Zookeeper section for the format of the data in the znode.

- Controller updates state machine, sends LeaderAndIsrRequest and so on to perform partition reassignment. In addition, it also sends ChangeReplicaDirRequest for all replicas that are specified with log directory != "any". The ChangeReplicaDirRequest will move the replica to a specific log directory if it is not already placed there on the broker.

- In addition to the existing requirement of partition reassignment completion, the controller will also wait for the ChangeReplicaDirResponse (corresponding to the ChangeReplicaDirRequest it has sent) before it considers a movement to be completed and removes a partition from /admin/reassign_partitions. This allows the user to confirm that the reassignment to specific disks of brokers is completed after the partition is removed from the znode data of /admin/reassign_partitions.
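As an illustration, a reassignment json file with per-replica log directories might look like the example below. The topic, brokers and paths are made up, and the example assumes a log_dirs list aligned one-to-one with the replicas list; "any" lets the broker pick the log directory.

Code Block
{
  "version" : 1,
  "partitions" : [
    {
      "topic" : "topic1",
      "partition" : 0,
      "replicas" : [1, 2],
      "log_dirs" : ["/data/disk2", "any"]
    }
  ]
}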

3) How to retrieve information to determine the new replica assignment across log directories

Problem statement:

In order to optimize replica assignment across log directories, the user would need to figure out the list of partitions per log directory and the size of each partition. As of now Kafka doesn't expose this information via any RPC, and the user would need to use external tools to directly examine the log directories on each machine to get this information. It is better if Kafka can expose this information via RPC.

Solution:

We introduce DescribeDirsRequest and DescribeDirsResponse. When a broker receives a DescribeDirsRequest with an empty list of log directories, it will respond with a DescribeDirsResponse which shows the size of each partition and the list of partitions per log directory for all log directories. If the user has specified a list of log directories in the DescribeDirsRequest, the broker will provide the above information only for the log directories specified by the user. A non-zero error code will be specified in the DescribeDirsResponse for each log directory that is either offline or not found by the broker.

User can use a command such as ./bin/kafka-log-dirs.sh --describe --zookeeper localhost:2181 --broker 1 to get the above information per log directory.

Public interface

Zookeeper

Change the format of data stored in znode /admin/reassign_partitions to allow log directory to be specified for each replica.

...

2. Complete replica data movement

Here we describe how a broker moves a Log from the source log directory to the destination log directory and swaps the Log. This corresponds to the "Initiate replica data movement" box in the flow graph above.

1) The Replica instance is updated to track two instances of Log, one referencing the directory topicPartition on the source log directory and the other referencing the directory topicPartition.move on the destination log directory.
2) If there is a thread available in ReplicaMoveThreadPool, one thread is allocated to move this replica. Otherwise, the movement of this replica is delayed until a thread becomes available.
3) The ReplicaMoveThread keeps copying data from the Log in the source log directory to the Log (i.e. topicPartition.move) in the destination log directory using zero-copy. The ReplicaMoveThread may need to sleep to ensure that the total byte rate used by all ReplicaMoveThread instances does not exceed the configured value of intra.broker.throttled.rate (see the sketch after this list).
4) If the Log in the destination log directory has caught up with the Log in the source log directory, the ReplicaMoveThread grabs a lock on the Replica instance.
5) The ReplicaMoveThread continues to move data as specified in step 3) until the Log in the destination log directory has caught up with the Log in the source log directory.
6) The ReplicaMoveThread renames directory topicPartition to topicPartition.delete on the source log directory. topicPartition.delete will be subject to asynchronous delete.
7) The ReplicaMoveThread renames directory topicPartition.move to topicPartition on the destination log directory.
8) The ReplicaMoveThread updates the corresponding Replica instance to track only the Log in the destination log directory.
9) The ReplicaMoveThread releases the lock on the Replica instance.
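A rough sketch of the throttled copy in step 3), with hypothetical names; this is not the actual ReplicaMoveThread implementation, just an illustration of zero-copy transfer plus rate limiting against intra.broker.throttled.rate.

Code Block
import java.io.IOException;
import java.nio.channels.FileChannel;

// Illustrative sketch: copy a log segment from the source Log to the destination Log
// using zero-copy transfers, sleeping whenever the observed byte rate would exceed
// the configured intra.broker.throttled.rate (shared by all ReplicaMoveThread instances).
class ReplicaMoveLoop {
    static void copy(FileChannel source, FileChannel dest,
                     long throttledBytesPerSec, long chunkBytes)
            throws IOException, InterruptedException {
        long position = 0;
        long windowStartMs = System.currentTimeMillis();
        long bytesInWindow = 0;

        while (position < source.size()) {
            long transferred = source.transferTo(position, chunkBytes, dest); // zero-copy
            position += transferred;
            bytesInWindow += transferred;

            long elapsedMs = Math.max(1, System.currentTimeMillis() - windowStartMs);
            if (bytesInWindow * 1000 / elapsedMs > throttledBytesPerSec)
                Thread.sleep(100);   // back off until the rate drops below the quota
        }
    }
}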
 

Notes:
- The replica movement will stop if either the source or destination replica becomes offline due to log directory failure.
- The RequestHandlerThread or ReplicaFetcherThread needs to grab the lock on the Replica instance in order to append data to the Replica. This prevents a race condition while the ReplicaMoveThread is swapping the Log in the source log directory with the Log in the destination log directory.

3. Handle failure that happens while the broker is moving data or swapping replica

Broker does the following to recover from failure when it starts up.

- If both the directory topicPartition and the directory topicPartition.move exist on good log directories, the broker will start a ReplicaFetcherThread to copy data from topicPartition to topicPartition.move. The effect is the same as if the broker had received a ChangeReplicaDirRequest to move the replica from topicPartition to topicPartition.move.
- If topicPartition.move exists but topicPartition doesn't exist on any good log directory, and if there is no bad log directory, then the broker renames topicPartition.move to topicPartition.
- If topicPartition.move exists but topicPartition doesn't exist on any good log directory, and if there is a bad log directory, then the broker considers topicPartition as offline and will not touch topicPartition.move.
- If topicPartition.delete exists, the broker schedules topicPartition.delete for asynchronous delete.

2) How to reassign replica between log directories across brokers

Problem statement:

kafka-reassign-partitions.sh should provide the option for the user to specify the destination log directory of the replica on any broker. And the user should be able to verify that the replica has been moved to the specified log directory after the reassignment is completed. This is needed for the user to balance load across log directories of brokers in the cluster.

Solution:

The idea is that the user should be able to specify a log directory when using kafka-reassign-partitions.sh to reassign partitions. If the user has specified a log directory on the destination broker, the script should send ChangeReplicaDirRequest directly to the broker so that the broker can either start the replica movement or mark the replica to be created on the destination log directory. Finally, the script should send DescribeDirsRequest to the broker to verify that the replica has been created/moved to the specified log directory when the user requests to verify the assignment.

Here are the steps to execute partition reassignment:

- User specifies a list of log directories, one log directory per replica, for each topic partition in the reassignment json file that is provided to kafka-reassign-partitions.sh. The log directory specified by the user must be either "any", or an absolute path which begins with '/'. See the Scripts section for the format of this json file.
- In addition to creating the znode at /admin/reassign_partitions with the replica assignment, the script will also send ChangeReplicaDirRequest to the leader brokers of partitions for which the log directory path in the assignment is not "any".
- Broker handles ChangeReplicaDirRequest as specified in the section "How to move replica between log directories on the same broker".

Here are the steps to verify partition assignment:

- kafka-reassign-partitions.sh will verify the partition assignment across brokers as it does now.
- For those partitions with destination log directory != "any", kafka-reassign-partitions.sh groups those partitions according to their leader brokers and sends DescribeDirsRequest to those brokers. The DescribeDirsRequest should specify the log directories and partitions from the expected assignment.
- kafka-reassign-partitions.sh determines whether the replica has been moved to the specified log directory based on the DescribeDirsResponse (an illustrative verify command follows).
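For example, verification could be triggered with the script's existing verify mode (illustrative command; the json file is the same one used to start the reassignment):

Code Block
./bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --verify --reassignment-json-file reassignment.json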

3) How to retrieve information to determine the new replica assignment across log directories

Problem statement:

In order to optimize replica assignment across log directories, the user would need to figure out the list of partitions per log directory and the size of each partition. As of now Kafka doesn't expose this information via any RPC, and the user would need to either query the JMX metrics of the broker, or use external tools to log onto each machine to get this information. It is better if Kafka can expose this information via RPC.

Solution:

We introduce DescribeDirsRequest and DescribeDirsResponse. When a broker receives a DescribeDirsRequest with an empty list of log directories, it will respond with a DescribeDirsResponse which shows the size of each partition and the list of partitions per log directory for all log directories. If the user has specified a list of log directories in the DescribeDirsRequest, the broker will provide the above information only for the log directories specified by the user. A non-zero error code will be specified in the DescribeDirsResponse for each log directory that is either offline or not found by the broker.


User can use a command such as ./bin/kafka-log-dirs.sh --describe --zookeeper localhost:2181 --broker 1 to get the above information per log directory.

Public interface

Protocol

Create ChangeReplicaDirRequest

...

 

Code Block
ChangeReplicaDirResponse => error_code partitions
  error_code => int16
  partitions => [ChangeReplicaDirResponsePartition]
 
ChangeReplicaDirResponsePartition => topic partition error_code
  topic => str
  partition => int32
  error_code => int16

Create DescribeDirsRequest

Code Block
DescribeDirsRequest => log_dirs topics
  log_dirs => [str]  // If this is empty, then all log directories will be queried
  topics => [str] // If this is empty, all topics will be queried

Create DescribeDirsResponse

Code Block
DescribeDirsResponse => log_dirs
  log_dirs => [DescribeDirsResponseDirMetadata]
 
DescribeDirsResponseDirMetadata => error_code path topics
  error_code => int16
  path => str
  topics => [DescribeDirsResponseTopic]
 
DescribeDirsResponseTopic => topic partitions
  topic => str
  partitions => [DescribeDirsResponsePartition]
  
DescribeDirsResponsePartition => partition size is_temporary
  partition => int32
  size => int64
  is_temporary => boolean  // True if replica is *.move

Broker Config

1) Add config intra.broker.throttled.rate. This config specifies the maximum rate in bytes per second that can be used to move replicas between log directories (see the example below).
2) Add config num.replica.move.threads. This config specifies the number of threads in ReplicaMoveThreadPool. The threads in this thread pool are responsible for moving replicas between log directories.
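For example, a broker's server.properties might include the following (the values are illustrative):

Code Block
# Limit intra-broker replica movement to ~50 MB/s in total
intra.broker.throttled.rate=52428800
# Use two ReplicaMoveThread instances for movement between log directories
num.replica.move.threads=2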
 

Scripts

1) Add kafka-log-dirs.sh which allows the user to get the list of replicas per log directory on a broker.

...

./bin/kafka-log-dirs.sh --describe --zookeeper localhost:2181 --broker 1 --log-dirs dir1,dir2,dir3 --topics topic1,topic2 will show the list of partitions and their sizes per log directory for the specified topics and the specified log directories on the broker. If no log directory is specified by the user, then all log directories will be queried. If no topic is specified, then all topics will be queried. If a log directory is offline, then its error code in the DescribeDirsResponse will indicate the error and the log directory will be marked as offline in the script output.

...

 

Code Block
{
  "version" : 1,
  "log_dirs" : [
    {
      "is_live" : boolean,
      "path" : str,
      "partitions": [
        {
          "topic" : str, 
          "partition" : int32, 
          "size" : int64,
          "is_temporary" : boolean
        },
        ...
      ]
    },

    ...
  ]
}

 

2) Change kafka-reassign-partitions.sh to allow the user to specify the log directory that the replica should be moved to. This is provided via the reassignment json file with the following new format:

...

The new features will be tested through unit and integration tests.

Rejected Alternatives

 

Potential Future Improvement

 

1. Allow controller/user to specify quota when moving replicas between log directories on the same broker.