
In Kafka releases through 0.8.1.1, consumers commit their offsets to ZooKeeper. ZooKeeper does not scale well for writes, especially when there is a large number of offsets to track (i.e., consumer-count * partition-count). Fortunately, Kafka now provides an ideal mechanism for storing consumer offsets: consumers can commit offsets by writing them to a durable (replicated) and highly available topic, and can fetch offsets by reading from this topic (in practice the brokers maintain an in-memory offsets cache for faster access). In other words, offset commits are regular producer requests (which are inexpensive) and offset fetches are fast in-memory lookups.

The official Kafka documentation describes how the feature works and how to migrate offsets from ZooKeeper to Kafka. This wiki provides sample code that shows how to use the new Kafka-based offset storage mechanism.
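The mechanics described above can be pictured with a small stand-alone model (plain Java, not Kafka code): the offsets topic behaves like a log-compacted key-value store keyed by group/topic/partition, so a commit is just an appended message and a fetch is an in-memory lookup. `OffsetsTopicModel` is an illustrative name, not a Kafka class:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model only: the real offsets topic is a replicated, log-compacted
// Kafka topic; compaction keeps the latest committed offset per key.
public class OffsetsTopicModel {
    private final Map<String, Long> cache = new HashMap<>(); // in-memory offsets cache

    private static String key(String group, String topic, int partition) {
        return group + "/" + topic + "/" + partition;
    }

    // A commit is an append; after compaction only the latest value per key survives.
    public void commit(String group, String topic, int partition, long offset) {
        cache.put(key(group, topic, partition), offset);
    }

    // A fetch is a fast in-memory lookup; -1 models "no offset committed yet".
    public long fetch(String group, String topic, int partition) {
        return cache.getOrDefault(key(group, topic, partition), -1L);
    }
}
```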

Step 1: Discover the offset manager for a consumer group

BlockingChannel offsetChannel;
// suppose queryChannel is a BlockingChannel that has already been connected to some broker
try {
  queryChannel.send(new ConsumerMetadataRequest("myconsumergroup", ConsumerMetadataRequest.CurrentVersion(), correlationId, clientId));
  ConsumerMetadataResponse metadataResponse = ConsumerMetadataResponse.readFrom(queryChannel.receive().buffer());
  if (metadataResponse.errorCode() == ErrorMapping.NoError() && metadataResponse.coordinator() != null) {
    Broker coordinator = metadataResponse.coordinator();
    if (coordinator.host().equals(queryChannel.host()) && coordinator.port() == queryChannel.port()) {
      offsetChannel = queryChannel; // this broker is also the offset manager
    } else {
      offsetChannel = new BlockingChannel(coordinator.host(), coordinator.port(),
                                          BlockingChannel.UseDefaultBufferSize(),
                                          BlockingChannel.UseDefaultBufferSize(),
                                          socketTimeoutMs);
      offsetChannel.connect();
      queryChannel.disconnect();
    }
  } else {
    // retry the query (after backoff)
  }
}
catch (IOException e) {
  // retry the query (after backoff)
}
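The comments above say to retry after a backoff. A minimal sketch of such a retry helper (a hypothetical `Backoff` class, not part of the Kafka API) could look like this:

```java
import java.util.concurrent.Callable;

// Hypothetical helper: retries an action with exponential backoff, as the
// "retry after backoff" comments in the samples suggest.
public class Backoff {
    public static <T> T retry(Callable<T> action, int maxAttempts, long initialBackoffMs)
            throws Exception {
        long backoffMs = initialBackoffMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(backoffMs);
                    backoffMs *= 2; // double the wait between attempts
                }
            }
        }
        throw last; // all attempts failed
    }
}
```

For example, the consumer metadata query in step 1 could be wrapped in `Backoff.retry(...)` so that transient errors do not surface to the application.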

Step 2: Issue the OffsetCommitRequest or OffsetFetchRequest to the coordinator

// How to commit offsets

Map<TopicAndPartition, OffsetAndMetadata> offsets = new LinkedHashMap<TopicAndPartition, OffsetAndMetadata>();
// populate the offsets map
OffsetCommitRequest ocRequest = new OffsetCommitRequest("myconsumergroup", offsets, correlationId, clientId);
try {
  offsetChannel.send(ocRequest);
  OffsetCommitResponse ocResponse = OffsetCommitResponse.readFrom(offsetChannel.receive().buffer());
  for (short partitionErrorCode : ocResponse.commitStatus().values()) {
    if (partitionErrorCode == ErrorMapping.NoError()) {
      // commit succeeded for this partition
    } else if (partitionErrorCode == ErrorMapping.OffsetMetadataTooLargeCode()) {
      // you must reduce the size of the metadata if you wish to retry
    } else if (partitionErrorCode == ErrorMapping.NotCoordinatorForConsumerCode()
               || partitionErrorCode == ErrorMapping.ConsumerCoordinatorNotAvailableCode()) {
      offsetChannel.disconnect();
      // go to step 1 (the offset manager has moved) and then retry the commit to the new offset manager
    } else {
      // log the error and retry the commit
    }
  }
}
catch (IOException ioe) {
  offsetChannel.disconnect();
  // go to step 1 and then retry the commit
}
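The per-partition error handling in the commit path boils down to a small decision function. The sketch below uses stand-in numeric values for the error codes (the real constants live in kafka.common.ErrorMapping), and `CommitErrorHandler`/`Action` are illustrative names, not Kafka classes:

```java
// Sketch of the commit error handling above. The numeric codes are stand-ins;
// in real code, compare against the constants in kafka.common.ErrorMapping.
public class CommitErrorHandler {
    static final short NO_ERROR = 0;                             // ErrorMapping.NoError
    static final short OFFSET_METADATA_TOO_LARGE = 12;           // OffsetMetadataTooLargeCode
    static final short CONSUMER_COORDINATOR_NOT_AVAILABLE = 15;  // ConsumerCoordinatorNotAvailableCode
    static final short NOT_COORDINATOR_FOR_CONSUMER = 16;        // NotCoordinatorForConsumerCode

    enum Action { OK, SHRINK_METADATA, REDISCOVER_COORDINATOR, RETRY }

    static Action classify(short errorCode) {
        if (errorCode == NO_ERROR) return Action.OK;
        if (errorCode == OFFSET_METADATA_TOO_LARGE) return Action.SHRINK_METADATA;
        if (errorCode == NOT_COORDINATOR_FOR_CONSUMER
                || errorCode == CONSUMER_COORDINATOR_NOT_AVAILABLE) {
            return Action.REDISCOVER_COORDINATOR; // offset manager has moved: back to step 1
        }
        return Action.RETRY; // any other error: log and retry the commit
    }
}
```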



// How to fetch offsets

List<TopicAndPartition> partitionList = new ArrayList<TopicAndPartition>();
// populate partitionList with the partitions for which we want to look up the last committed consumer offset
OffsetFetchRequest ofRequest = new OffsetFetchRequest("myconsumergroup", partitionList, correlationId, clientId);
try {
  offsetChannel.send(ofRequest);
  OffsetFetchResponse ofResponse = OffsetFetchResponse.readFrom(offsetChannel.receive().buffer());
  for (Map.Entry<TopicAndPartition, OffsetMetadataAndError> entry : ofResponse.offsets().entrySet()) {
    short errorCode = entry.getValue().error();
    if (errorCode == ErrorMapping.NoError()) {
      long lastCommittedOffset = entry.getValue().offset(); // use the offset
    } else if (errorCode == ErrorMapping.NotCoordinatorForConsumerCode()) {
      offsetChannel.disconnect();
      // go to step 1 (the offset manager has moved) and then retry the offset fetch to the new offset manager
    } else if (errorCode == ErrorMapping.OffsetsLoadInProgressCode()) {
      // retry the offset fetch (after backoff): the offset manager is still loading offsets from the offsets topic
    }
  }
}
catch (IOException ioe) {
  offsetChannel.disconnect();
  // go to step 1 and then retry the offset fetch
}
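Putting the two steps together, clients typically loop: discover the offset manager, issue the request, and rediscover on failure. A schematic sketch of that control flow follows; the `Channel` interface and `CoordinatorRetryLoop` class here are hypothetical stand-ins, not the Kafka BlockingChannel API:

```java
import java.io.IOException;
import java.util.function.Supplier;

// Schematic control flow only: discovery (step 1) is re-run whenever the
// request (step 2) fails with an IOException, up to a bounded number of attempts.
public class CoordinatorRetryLoop {
    interface Channel {
        String call(String request) throws IOException;
        void disconnect();
    }

    static String callWithRediscovery(Supplier<Channel> discover, String request, int maxAttempts)
            throws IOException {
        IOException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            Channel channel = discover.get();   // step 1: locate the offset manager
            try {
                return channel.call(request);   // step 2: issue the commit or fetch
            } catch (IOException e) {
                last = e;                       // the offset manager may have moved:
                channel.disconnect();           // drop the channel and rediscover
            }
        }
        throw last; // all attempts failed
    }
}
```

In real code the backoff from step 1 would apply between iterations as well, so a moved or unavailable coordinator does not trigger a tight retry loop.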

