
...

Kafka relies on ZooKeeper in order to coordinate multiple brokers and consumers. If you're unfamiliar with ZooKeeper, just think of it as a server that allows you to atomically create nodes in a tree, assign values to those nodes, and sign up for notifications when a node or its children are modified. Nodes can be either permanent or ephemeral; ephemeral nodes disappear if the process that created them disconnects (after some timeout delay).

While creating the nodes we care about, you'll often need to create the intermediate nodes that they are children of. For instance, since offsets are stored at {{/consumers/[consumer_group]/offsets/[topic]/[broker_id]-[partition_id]}}, something has to create {{/consumers}}, {{/consumers/[consumer_group]}}, etc. All nodes have values associated with them in ZooKeeper, even if Kafka doesn't use them for anything. To make debugging easier, the value stored at an intermediate node should be the ID of the node's creator. In practice that means the first Consumer you create will need to make this skeleton structure and store its ID as the value for {{/consumers}}, {{/consumers/[consumer_group]}}, etc.

ZooKeeper has Java and C libraries, and can be run as a cluster.
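The skeleton-creation rule above can be sketched as a pure helper that computes which intermediate nodes must exist before a leaf node can be created. This is just an illustration; the function name is hypothetical, and a real driver would feed these paths to its ZooKeeper client (many clients offer an "ensure path" operation for exactly this).

```python
def ancestor_paths(path):
    """Return the intermediate ZooKeeper paths that must exist before
    `path` itself can be created, shallowest first."""
    parts = [p for p in path.split("/") if p]
    return ["/" + "/".join(parts[:i]) for i in range(1, len(parts))]

# The skeleton a first Consumer would have to create before writing
# an offset node; each would be a permanent node whose value is the
# creator's ID, to ease debugging.
skeleton = ancestor_paths("/consumers/group1/offsets/dogs/2-0")
```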

...

Role | ZooKeeper Path | Type | Data Description
ID Registry | {{/brokers/ids/[0..N]}} | Ephemeral | String in the format "creator:host:port" of the broker.
Topic Registry | {{/brokers/topics/[topic]/[0..N]}} | Ephemeral | Number of partitions that topic has on that Broker.

So let's take the example of the following hypothetical broker:

  • Broker ID is 2 (brokerid=2 in the Kafka config file)
  • Running on IP 10.0.0.12
  • Using port 9092
  • Topics:
    • "dogs" with 4 partitions
    • "mutts" with 5 partitions
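The hypothetical broker above would register nodes along these lines (a sketch only; the creator field shown here is a placeholder, since its exact format depends on how Kafka generates it):

```python
# Hypothetical picture of the registry nodes broker 2 would create.
# "creator" is a stand-in value; real brokers embed a generated ID.
broker_nodes = {
    "/brokers/ids/2": "creator:10.0.0.12:9092",  # ID registry (ephemeral)
    "/brokers/topics/dogs/2": "4",               # "dogs" has 4 partitions here
    "/brokers/topics/mutts/2": "5",              # "mutts" has 5 partitions here
}
```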

...

  • Broker IDs don't have to be sequential, but they do have to be integers. They are a config setting, and not randomly generated. If a Kafka server goes offline for some reason and comes back an hour later, it should be reconnecting with the same Broker ID.
  • The ZooKeeper hierarchy puts individual brokers under topics because Producers and Consumers will want to put a watch on a specific topic node, to get notifications when new brokers enter or leave the pool.
  • The Broker's description is formatted as creator:host:port. The host will also show up as part of the creator field because of the version of UUID that Kafka's using, but don't rely on that behavior. Always split on ":" and take the second element as the host.
  • These nodes are ephemeral, so if the Broker crashes or is disconnected from the network, its node will automatically be removed. But this removal is not instantaneous, and the stale node might linger for a few seconds. This can cause errors when a broker crashes, is restarted, and subsequently tries to re-create its still-existing Broker ID registry node.
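The "always split on :" advice in the list above can be captured in a tiny parser. The sample registry value here is made up for illustration; only the creator:host:port shape comes from the text.

```python
def parse_broker(value):
    """Parse a broker registry value of the form "creator:host:port".
    Split on ":" rather than inspecting the creator field, since the
    host also appears inside the creator string."""
    creator, host, port = value.split(":")
    return host, int(port)

# Hypothetical registry value for a broker on 10.0.0.12:9092.
host, port = parse_broker("creator-10.0.0.12-1286821258905:10.0.0.12:9092")
```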

Producer

Reads:

  • {{/brokers/topics/[topic]/[0..N]}}, so that it knows what Broker IDs are available for this topic, and how many partitions they have.
  • {{/brokers/ids/[0..N]}}, to find the address of the Brokers, so it knows how to connect to them.

Watches:

...

  • {{/brokers/topics/[topic]}}, so that it knows when Brokers enter and exit the pool.
  • {{/brokers/ids}}, so that it can update the Broker addresses in case you bring down a Broker and bring it back up under a different IP/port.

...

  1. A Producer is created for a topic.
  2. The Producer reads the Broker-created nodes in {{/brokers/ids/[0..N]}} and sets up an internal mapping of Broker IDs => Kafka connections.
  3. The Producer reads the nodes in {{/brokers/topics/[topic]/[0..N]}} to find the number of partitions it can send to for each Broker.
  4. The Producer takes every Broker+Partition combination and puts them in an internal list.
  5. When a Producer is asked to send a message set, it picks one of its Broker+Partition combinations, looks up the appropriate Broker address, and sends the message set to that Broker, for that topic and partition. The precise mechanism for choosing a destination is undefined, but debugging would probably be easier if you ordered the combinations by Broker+Partition (e.g. "0-3") and used a hash function to pick the index to send to. You could also just choose randomly.

...

The latter case is actually extremely common, which brings us to the only tricky part about Producers: dealing with new topics.

Creating New Topics

Topics are not pre-determined. You create them just by sending a new message to Kafka for that topic. So let's say you have a number of Brokers that have joined the pool but don't list themselves in {{/brokers/topics/[topic]/[0..N]}} for the topic you're interested in. They haven't done so because those topics don't exist on those Brokers yet. But our Producer knows the Brokers themselves exist, because they are in the Broker registry at {{/brokers/ids/[0..N]}}. We definitely need to send messages to them, but what partitions are safe to send to? Brokers can be configured differently from each other and topics can be configured on an individual basis, so there's no way to infer the definitive answer by looking at what's in ZooKeeper.

The solution is that for new topics where the number of available partitions on the Broker is unknown, you should just send to partition 0. Every Broker will have at least that one partition available. As soon as you write to it and the topic comes into existence on the Broker, the Broker will publish all available partitions in ZooKeeper. You'll get notified by the watch you put on {{/brokers/topics/[topic]}}, and you'll add the new Broker+Partitions to your destination pool.
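The partition-0 fallback can be sketched like this (a minimal illustration; the registry here is just the per-broker partition counts read from ZooKeeper, and the function names are made up):

```python
def safe_partitions(topic_registry, broker_id):
    """Partitions that are safe to send to on one broker.
    topic_registry maps broker_id -> advertised partition count for the
    topic. A broker that hasn't created the topic yet (no entry) still
    accepts partition 0, which brings the topic into existence there."""
    count = topic_registry.get(broker_id)
    return list(range(count)) if count else [0]
```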

Consumer

FIXME: Go over all the registration stuff that needs to happen.

...

  • All Consumers in a ConsumerGroup will come to a consensus as to who is consuming what.
  • Each Broker+Topic+Partition combination is consumed by one and only one Consumer, even if it means that some Consumers don't get anything at all.
  • A Consumer should try to have as many partitions on the same Broker as possible, so sort the list by [Broker ID]-[Partition] (0-0, 0-1, 0-2, etc.), and assign them in chunks.
  • Consumers are sorted by their Consumer IDs. If there are three Consumers, two Brokers, and three partitions in each, the split might look like:
    • Consumer A: [0-0, 0-1]
    • Consumer B: [0-2, 1-0]
    • Consumer C: [1-1, 1-2]
  • If the distribution can't be even and some Consumers must have more partitions than others, the extra partitions always go to the earlier consumers on the list. So you could have a distribution like 4-4-4-4 or 5-5-4-4, but never 4-4-4-5 or 4-5-4-4.
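The chunked assignment described in the list above can be sketched as a pure function. It reproduces the A/B/C example and the "extras go to the earlier consumers" rule; the sort is plain lexicographic, which matches the single-digit IDs in the example (a real driver would want a numeric sort for larger IDs).

```python
def assign_partitions(consumer_ids, partitions):
    """Chunked assignment: sort consumers by ID and partitions by
    "[broker]-[partition]", then give each consumer a contiguous slice,
    with any extra partitions going to the earliest consumers."""
    consumers = sorted(consumer_ids)
    parts = sorted(partitions)
    base, extra = divmod(len(parts), len(consumers))
    assignment, start = {}, 0
    for i, consumer in enumerate(consumers):
        size = base + (1 if i < extra else 0)  # earlier consumers get extras
        assignment[consumer] = parts[start:start + size]
        start += size
    return assignment
```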