...

  • These limits are cluster-wide. This is obviously true for max.partitions, which is meant to apply at the cluster level. However, we choose a cluster-wide value for max.broker.partitions too, instead of supporting different values for each broker. This is in line with the current recommendation to run homogeneous Kafka clusters, where all brokers have the same specifications (CPU, RAM, disk, etc.).
  • If both limits max.partitions and max.broker.partitions are specified, then the more restrictive of the two applies. A request may be rejected because it would breach the max.partitions limit without any broker hitting the max.broker.partitions limit, and vice versa (see the first sketch after this list).
  • These limits can be changed at runtime, without restarting brokers, which provides greater flexibility (the second sketch after this list shows an example). See the "Rejected alternatives" section for why we did not go with read-only configuration.
  • These limits apply to all topics, including internal topics (i.e. __consumer_offsets and __transaction_state, which are usually not configured with many partitions). This provides a consistent experience across all topics and partitions.
  • These limits also apply to topics created via auto topic creation (currently possible via the Metadata and FindCoordinator API requests). Enforcing this ensures there is no backdoor to bypass the limits.
  • These limits do not apply when creating topics or partitions, or reassigning partitions via the ZooKeeper-based admin tools. This is unfortunate, because it does create a backdoor to bypass these limits. However, we leave this out of scope here given that ZooKeeper will eventually be deprecated from Kafka.
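
To make the interaction between the two limits concrete, the first sketch below shows one way the check could be expressed. It is an illustration only: the method name withinLimits and its parameters are hypothetical and not part of the KIP; the point is that a request is accepted only when it satisfies both limits, so whichever limit is more restrictive is the one that effectively applies.

    // Hypothetical illustration only: withinLimits and its parameters are not
    // part of the KIP; they stand in for whatever bookkeeping the controller
    // keeps about partition counts.
    import java.util.Collection;

    final class PartitionLimitCheck {

        // Returns true if creating newPartitions additional partitions keeps the
        // cluster within both limits.
        static boolean withinLimits(int maxPartitions,
                                    int maxBrokerPartitions,
                                    int currentClusterPartitions,
                                    int newPartitions,
                                    Collection<Integer> perBrokerCountsAfterAssignment) {
            // Cluster-wide check against max.partitions (long arithmetic avoids overflow).
            if ((long) currentClusterPartitions + newPartitions > maxPartitions) {
                return false;
            }
            // Per-broker check against max.broker.partitions, using each broker's
            // partition count as it would look after the proposed assignment.
            for (int count : perBrokerCountsAfterAssignment) {
                if (count > maxBrokerPartitions) {
                    return false;
                }
            }
            // A request is accepted only when both checks pass, so whichever limit
            // is more restrictive is the one that effectively applies.
            return true;
        }
    }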
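
Because the limits are dynamic, operators can update them while brokers are running. The second sketch is a minimal example using the standard AdminClient, assuming the limits are exposed as cluster-default dynamic broker configs; only the config names max.partitions and max.broker.partitions come from this KIP, while the bootstrap address and values are placeholders.

    // A minimal sketch, assuming the limits are exposed as cluster-default dynamic
    // broker configs; the bootstrap address and values are placeholders.
    import java.util.Collection;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class UpdatePartitionLimits {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (Admin admin = Admin.create(props)) {
                // An empty broker name targets the cluster-wide default, so the new
                // values take effect on all brokers without a restart.
                ConfigResource clusterDefault =
                    new ConfigResource(ConfigResource.Type.BROKER, "");

                Collection<AlterConfigOp> ops = List.of(
                    new AlterConfigOp(new ConfigEntry("max.partitions", "20000"),
                                      AlterConfigOp.OpType.SET),
                    new AlterConfigOp(new ConfigEntry("max.broker.partitions", "4000"),
                                      AlterConfigOp.OpType.SET));

                admin.incrementalAlterConfigs(Map.of(clusterDefault, ops)).all().get();
            }
        }
    }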

...