...

Code Block
titleRequest Header (all single non-multi requests begin with this)
borderStylesolid
   0                   1                   2                   3
   0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  |                       REQUEST_LENGTH                          |
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  |         REQUEST_TYPE          |        TOPIC_LENGTH           |
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  /                                                               /
  /                    TOPIC (variable length)                    /
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  |                           PARTITION                           |
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  REQUEST_LENGTH = int32 // Length in bytes of entire request (excluding this field)
  REQUEST_TYPE   = int16 // See table below
  TOPIC_LENGTH   = int16 // Length in bytes of the topic name

  TOPIC = String // Topic name, ASCII, not null terminated
                 // This becomes the name of a directory on the broker, so no
                 // chars that would be illegal on the filesystem.

  PARTITION = int32 // Partition to act on. Number of available partitions is
                    // controlled by broker config. Partition numbering
                    // starts at 0.

  ============  =====  =======================================================
  REQUEST_TYPE  VALUE  DEFINITION
  ============  =====  =======================================================
  PRODUCE         0    Send a group of messages to a topic and partition.
  FETCH           1    Fetch a group of messages from a topic and partition.
  MULTIFETCH      2    Multiple FETCH requests, chained together
  MULTIPRODUCE    3    Multiple PRODUCE requests, chained together
  OFFSETS         4    Find offsets before a certain time (this can be a bit
                       misleading, please read the details of this request).
  ============  =====  =======================================================
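
For illustration, here's one way this header could be packed in Python (the language, the function name, and the big-endian byte order are assumptions of this sketch, not something the layout above spells out):

Code Block
titlePacking the Request Header (illustrative Python sketch)
borderStylesolid
import struct

# Request types from the table above.
PRODUCE, FETCH, MULTIFETCH, MULTIPRODUCE, OFFSETS = 0, 1, 2, 3, 4

def pack_request_header(request_type, topic, partition, payload=b""):
    """Build REQUEST_LENGTH + REQUEST_TYPE + TOPIC_LENGTH + TOPIC + PARTITION (+ payload)."""
    topic_bytes = topic.encode("ascii")                         # ASCII, not null terminated
    body = struct.pack(">hh", request_type, len(topic_bytes))   # int16 REQUEST_TYPE, int16 TOPIC_LENGTH
    body += topic_bytes
    body += struct.pack(">i", partition)                        # int32 PARTITION
    body += payload                                              # request-specific bytes follow the header
    return struct.pack(">i", len(body)) + body                   # REQUEST_LENGTH excludes itself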

Very similar to the Request Header is the multi-request header, used for requesting more than one topic-partition combo at a time, either for multi-produce or multi-fetch.

Code Block
titleMulti-Request Header (more than one topic-partition combo)
borderStylesolid
   0                   1                   2                   3
   0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  |                        REQUEST_LENGTH                         |
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  |         REQUEST_TYPE          |     TOPICPARTITION_COUNT      |
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  REQUEST_LENGTH       = int32 // Length in bytes of entire request (excluding this field)
  REQUEST_TYPE         = int16 // See table above
  TOPICPARTITION_COUNT = int16 // Number of unique topic-partition combos in this request
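
Under the same assumptions (Python, big-endian), a multi-request could be framed like so; the per-topic-partition bodies are taken as already-encoded byte strings, since their layout is covered elsewhere on this page:

Code Block
titlePacking the Multi-Request Header (illustrative Python sketch)
borderStylesolid
import struct

def pack_multi_request(request_type, sub_requests):
    """Frame a MULTIFETCH (2) or MULTIPRODUCE (3) request.

    sub_requests: list of already-encoded per-topic-partition bodies.
    """
    body = struct.pack(">hh", request_type, len(sub_requests))  # REQUEST_TYPE, TOPICPARTITION_COUNT
    body += b"".join(sub_requests)
    return struct.pack(">i", len(body)) + body                   # REQUEST_LENGTH excludes itself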

Code Block
titleResponse Header (all responses begin with this 6 byte header)
borderStylesolid
   0                   1                   2                   3
   0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  |                        RESPONSE_LENGTH                        |
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  |          ERROR_CODE           |
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  RESPONSE_LENGTH = int32 // Length in bytes of entire response (excluding this field)
  ERROR_CODE      = int16 // See table below

  ================  =====  ===================================================
  ERROR_CODE        VALUE  DEFINITION
  ================  =====  ===================================================
  Unknown            -1    Unknown Error
  NoError             0    Success
  OffsetOutOfRange    1    Offset requested is no longer available on the server
  InvalidMessage      2    A message you sent failed its checksum and is corrupt.
  WrongPartition      3    You tried to access a partition that doesn't exist
                           (was not between 0 and (num_partitions - 1)).
  InvalidFetchSize    4    The size you requested for fetching is smaller than
                           the message you're trying to fetch.
  ================  =====  ===================================================

FIXME: Add tests to verify all these codes.
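
As a rough sketch of how a driver might read this header and surface the error codes (the socket-reading helper and the exception type are illustrative, and big-endian byte order is assumed):

Code Block
titleReading the Response Header (illustrative Python sketch)
borderStylesolid
import struct

ERROR_CODES = {-1: "Unknown", 0: "NoError", 1: "OffsetOutOfRange",
               2: "InvalidMessage", 3: "WrongPartition", 4: "InvalidFetchSize"}

def read_exact(sock, n):
    """Loop on recv() until exactly n bytes have arrived."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError("socket closed mid-response")
        buf += chunk
    return buf

def read_response(sock):
    """Read one response: RESPONSE_LENGTH, ERROR_CODE, then the payload."""
    (response_length,) = struct.unpack(">i", read_exact(sock, 4))
    (error_code,) = struct.unpack(">h", read_exact(sock, 2))
    payload = read_exact(sock, response_length - 2)  # length excludes itself but counts the error code
    if error_code != 0:
        raise IOError("Kafka error %d (%s)" % (error_code, ERROR_CODES.get(error_code, "?")))
    return payload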

...

Code Block
titleOffsets Response
borderStylesolid
   0                   1                   2                   3
   0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  /                        RESPONSE HEADER                        /
  /                                                               /
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  |                         NUMBER_OFFSETS                        |
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  /                       OFFSETS (0 or more)                     /
  /                                                               /
  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  NUMBER_OFFSETS = int32 // How many offsets are being returned
  OFFSETS = int64[] // List of offsets
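
Continuing the same sketch, the payload that follows the response header of an OFFSETS reply could be decoded like this:

Code Block
titleDecoding the Offsets Response payload (illustrative Python sketch)
borderStylesolid
import struct

def parse_offsets_payload(payload):
    """Return the list of int64 offsets from an OFFSETS response body."""
    (number_offsets,) = struct.unpack_from(">i", payload, 0)   # NUMBER_OFFSETS
    return list(struct.unpack_from(">%dq" % number_offsets, payload, 4))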

...

Kafka relies on ZooKeeper in order to coordinate multiple brokers and consumers. If you're unfamiliar with ZooKeeper, just think of it as a server that allows you to atomically create nodes in a tree, assign values to those nodes, and sign up for notifications when a node or its children get modified. Nodes can be either permanent or ephemeral, the latter meaning that the nodes will disappear if the process that created them disconnects (after some timeout delay).

While creating the nodes we care about, you'll often need to create the intermediate nodes that they are children of. For instance, since offsets are stored at /consumers/[consumer_group]/offsets/[topic]/[broker_id]-[partition_id], something has to create /consumers, /consumers/[consumer_group], etc. All nodes have values associated with them in ZooKeeper, even if Kafka doesn't use them for anything. To make debugging easier, the value that should be stored at an intermediate node is the ID of the node's creator. In practice that means that the first Consumer you create will need to make this skeleton structure and store its ID as the value for /consumers, /consumers/[consumer_group], etc.
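
Purely as an illustration, here's how that skeleton might be built with the third-party Python kazoo client; this page doesn't prescribe a client library, and the group, topic, and consumer ID below are made up:

Code Block
titleCreating the intermediate nodes (illustrative Python/kazoo sketch)
borderStylesolid
from kazoo.client import KazooClient

zk = KazooClient(hosts="localhost:2181")   # hypothetical ZooKeeper address
zk.start()

consumer_id = b"group1-consumer-0"                      # made-up consumer ID
offset_node = "/consumers/group1/offsets/mytopic/0-0"   # [broker_id]-[partition_id]

# makepath=True creates /consumers, /consumers/group1, ... on the way down;
# afterwards, store the creator's ID on the intermediate nodes to ease debugging.
zk.create(offset_node, b"0", makepath=True)
for parent in ("/consumers", "/consumers/group1",
               "/consumers/group1/offsets", "/consumers/group1/offsets/mytopic"):
    zk.set(parent, consumer_id)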

ZooKeeper has Java and C libraries, and can be run as a cluster.

...

==============  ==============================  =========  ===========================================================
Role            ZooKeeper Path                  Type       Data Description
==============  ==============================  =========  ===========================================================
ID Registry     /brokers/ids/[0..N]             Ephemeral  String in the format of "creator:host:port" of the broker.
Topic Registry  /brokers/topics/[topic]/[0..N]  Ephemeral  Number of partitions that topic has on that Broker.
==============  ==============================  =========  ===========================================================

So let's take the example of the following hypothetical broker:

...

  • Broker IDs don't have to be sequential, but they do have to be integers. They are a config setting, and not randomly generated. If a Kafka server goes offline for some reason and comes back an hour later, it should be reconnecting with the same Broker ID.
  • The ZooKeeper hierarchy puts individual brokers under topics because Producers and Consumers will want to put a watch on a specific topic node, to get notifications when new brokers enter or leave the pool.
  • The Broker's description is formatted as creator:host:port. The host will also show up as part of the creator because of the version of UUID that Kafka's using, but don't rely on that behavior. Always split on ":" and extract the host, which will be the second element (see the sketch after this list).
  • These nodes are ephemeral, so if the Broker crashes or is disconnected from the network, it will automatically be removed. But this removal is not instantaneous, and the stale node might linger for a few seconds. This can cause errors when a broker crashes and is restarted, and subsequently tries to re-create its still-existing Broker ID registry node.
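
For example, the split described above might look like this (the sample registry value is made up):

Code Block
titleParsing a Broker registry value (illustrative Python sketch)
borderStylesolid
def parse_broker_value(value):
    """Split "creator:host:port"; the host is always the second element."""
    creator, host, port = value.split(":")
    return host, int(port)

# e.g. parse_broker_value("1315436165041-host1:host1.example.com:9092")
# => ("host1.example.com", 9092)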

Producer

Reads:

  • /brokers/topics/[topic]/[0..N], so that it knows what Broker IDs are available for this topic, and how many partitions they have.
  • /brokers/ids/[0..N], to find the address of the Brokers, so it knows how to connect to them.

Watches:

  • /brokers/topics/[topic], so that it knows when Brokers enter and exit the pool.
  • /brokers/ids, so that it can update the Broker addresses in case you bring down a Broker and bring it back up under a different IP/port.

...

  1. A Producer is created for a topic.
  2. The Producer reads the Broker-created nodes in /brokers/ids/[0..N] and sets up an internal mapping of Broker IDs => Kafka connections.
  3. The Producer reads the nodes in /brokers/topics/[topic]/[0..N] to find the number of partitions it can send to for each Broker.
  4. The Producer takes every Broker+Partition combination and puts them in an internal list.
  5. When a Producer is asked to send a message set, it picks from one of its Broker+Partition combinations, looks up the appropriate Broker address, and sends the message set to that Broker, for that topic and partition. The precise mechanism for choosing a destination is undefined, but debugging would probably be easier if you ordered them by Broker+Partition (e.g. "0-3") and used a hash function to pick the index you wanted to send to. You could also just make it randomly choose. A sketch of these steps follows this list.
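
Here is a sketch of those steps using the same hypothetical kazoo-based setup as earlier; the hash-based choice and every name in it are illustrative, since the page leaves the choice mechanism undefined:

Code Block
titleBuilding a Producer's destination pool (illustrative Python/kazoo sketch)
borderStylesolid
import hashlib

def build_destination_pool(zk, topic):
    """Return a sorted list of ("broker_id-partition", (host, port)) destinations."""
    brokers = {}
    for broker_id in zk.get_children("/brokers/ids"):                 # step 2
        value, _stat = zk.get("/brokers/ids/" + broker_id)
        _creator, host, port = value.decode("ascii").split(":")
        brokers[broker_id] = (host, int(port))

    pool = []
    for broker_id in zk.get_children("/brokers/topics/" + topic):     # step 3
        value, _stat = zk.get("/brokers/topics/%s/%s" % (topic, broker_id))
        for partition in range(int(value.decode("ascii"))):           # step 4
            pool.append(("%s-%d" % (broker_id, partition), brokers[broker_id]))
    return sorted(pool)

def choose_destination(pool, key):
    """Step 5: hash an application-supplied key onto the ordered pool."""
    index = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(pool)
    return pool[index]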

...

The latter is actually extremely common, which brings us to the only tricky part about Producers – dealing with new topics.

Creating New Topics

...

Topics are not pre-determined. You create them just by sending a new message to Kafka for that topic. So let's say you have a number of Brokers that have joined the pool and don't list themselves in /brokers/topics/[topic]/[0..N] for the topic you're interested in. They haven't done so because those topics don't exist on those Brokers yet. But our Producer knows the Brokers themselves exist, because they are in the Broker registry at /brokers/ids/[0..N]. We definitely need to send messages to them, but what partitions are safe to send to? Brokers can be configured differently from each other and topics can be configured on an individual basis, so there's no way to infer the definitive answer by looking at what's in ZooKeeper.

The solution is that for new topics where the number of available partitions on the Broker is unknown, you should just send to partition 0. Every Broker will at least have that one partition available. As soon as you write it and the topic comes into existence on the Broker, the Broker will publish all available partitions in ZooKeeper. You'll get notified by the watch you put on /brokers/topics/[topic], and you'll add the new Broker+Partitions to your destination pool.
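
Continuing the hypothetical helpers above, the partition-0 fallback might look like:

Code Block
titlePartition 0 fallback for new topics (illustrative Python/kazoo sketch)
borderStylesolid
def destinations_with_fallback(zk, topic):
    """Destination pool that always includes partition 0 of every live Broker."""
    topic_path = "/brokers/topics/" + topic
    registered = set(zk.get_children(topic_path)) if zk.exists(topic_path) else set()
    pool = dict(build_destination_pool(zk, topic)) if registered else {}

    for broker_id in zk.get_children("/brokers/ids"):
        if broker_id not in registered:
            # This Broker hasn't seen the topic yet; partition 0 is always safe.
            value, _stat = zk.get("/brokers/ids/" + broker_id)
            _creator, host, port = value.decode("ascii").split(":")
            pool["%s-0" % broker_id] = (host, int(port))
    return pool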

Consumer

FIXME: Go over all the registration stuff that needs to happen.

...

  • All Consumers in a ConsumerGroup will come to a consensus as to who is consuming what.
  • Each Broker+Topic+Partition combination is consumed by one and only one Consumer, even if it means that some Consumers don't get anything at all.
  • A Consumer should try to have as many partitions on the same Broker as possible, so sort the list by [Broker ID]-[Partition] (0-0, 0-1, 0-2, etc.), and assign them in chunks.
  • Consumers are sorted by their Consumer IDs. If there are three Consumers, two Brokers, and three partitions in each, the split might look like:
    • Consumer A: [0-0, 0-1]
    • Consumer B: [0-2, 1-0]
    • Consumer C: [1-1, 1-2]
  • If the distribution can't be even and some Consumers must have more partitions than others, the extra partitions always go to the earlier consumers on the list. So you could have a distribution like 4-4-4-4 or 5-5-4-4, but never 4-4-4-5 or 4-5-4-4. A sketch of this assignment follows this list.
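
A sketch of that assignment as a pure function (the names are illustrative, and the simple lexicographic sort is only adequate for single-digit Broker and partition numbers):

Code Block
titleConsumer partition assignment (illustrative Python sketch)
borderStylesolid
def assign_partitions(consumer_ids, broker_partitions):
    """Split sorted "broker-partition" strings evenly; extras go to the earlier Consumers."""
    consumers = sorted(consumer_ids)
    partitions = sorted(broker_partitions)
    base, extra = divmod(len(partitions), len(consumers))
    assignment, start = {}, 0
    for i, consumer in enumerate(consumers):
        count = base + (1 if i < extra else 0)   # earlier Consumers absorb the remainder
        assignment[consumer] = partitions[start:start + count]
        start += count
    return assignment

# The example above:
# assign_partitions(["A", "B", "C"], ["0-0", "0-1", "0-2", "1-0", "1-1", "1-2"])
# => {"A": ["0-0", "0-1"], "B": ["0-2", "1-0"], "C": ["1-1", "1-2"]}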