Table of Contents

Status

Current state: Draft

Discussion thread: here [Change the link from the KIP proposal email archive to your own email thread]

...

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

The motivation here is similar to KIP-854: Separate configuration for producer ID expiry. Idempotent producers became the default in Kafka with KIP-679: Producer will enable the strongest delivery guarantee by default, so unless otherwise specified on the client side, every producer instance is assigned a PID. The growing number of PIDs stored in Kafka brokers exposes a broker to OOM errors when it has a high number of producers, or rogue or misconfigured client(s). When that happens the broker hits OOM and goes offline, and the only way to recover is to increase the heap.

...

  • Keep a cache of user (KafkaPrincipal) to unique active PIDs in order to track active PIDs per user. The cache will be implemented using a simple time-controlled bloom filter to avoid any unwanted growth that might itself cause OOM.
  • Add a rate metric which will be incremented whenever the caching layer does not already contain the PID. The user will be throttled once the allowed quota is reached; a sketch of this check follows the list.
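
The intended interaction between the cache and the throttling can be sketched as follows. This is a minimal illustration assuming the ProducerIdQuotaManager cache and a per-user rate sensor described later in this KIP; the object and method names here are placeholders, not proposed APIs.

Code Block
languagescala
import org.apache.kafka.common.metrics.Sensor
import org.apache.kafka.common.security.auth.KafkaPrincipal

// Illustration only: not the final broker wiring.
// ProducerIdQuotaManager is the cache class sketched later in this KIP.
object ProducerIdQuotaCheck {
  def check(cache: ProducerIdQuotaManager[KafkaPrincipal, Long],
            quotaSensor: Sensor,
            principal: KafkaPrincipal,
            pid: Long): Unit = {
    if (!cache.containsKeyValuePair(principal, pid)) {
      // First time this PID is seen for this user in the current window:
      // count it against the user's quota and remember it.
      quotaSensor.record(1.0)
      cache.add(principal, pid)
    }
    // If recording the sample violates the configured producer_ids_rate quota,
    // the broker throttles the user as with existing client quotas.
  }
}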

Cache Active PIDs per KafkaPrincipal

The cache will be represented as a map of KafkaPrincipal to a time-controlled bloom filter. The lifecycle of a user's PIDs in the bloom filters of the caching layer will be as follows:

...

A user will be removed entirely from the caching layer once it no longer has any active bloom filters attached to it.

Public Interfaces

New Broker Configurations

We propose to introduce the following new configurations to the Kafka broker:

Name | Type | Default | Description
producer.id.quota.window.num | Int | 11 | The number of samples to retain in memory for producer id quotas
producer.id.quota.window.size.seconds | Int | 1 | The time span of each sample for producer id quotas
producer.id.quota.cache.cleanup.scheduler.interval.ms | Int | 10 | The frequency in ms at which the producer id quota manager checks for disposed cached windows
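
For illustration, these settings might appear in a broker's server.properties as shown below (the values are simply the defaults from the table above, not tuning recommendations):

producer.id.quota.window.num=11
producer.id.quota.window.size.seconds=1
producer.id.quota.cache.cleanup.scheduler.interval.ms=10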

New Quota Types

We propose to introduce the following new quota types in the Kafka broker:

...

  • Extend `DynamicConfig` and `ClientQuotaControlManager.configKeysForEntityType` to handle the new quota.

New Broker Metrics

The following new metrics will be exposed by the broker:

Group | Name | Tags | Description
ProducerIds | rate | user | The current rate
ProducerIds | tokens | user | The remaining tokens in the bucket. < 0 indicates that throttling is applied.
ProducerIds | throttle-time | user | Tracking average throttle-time per user.
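
A minimal sketch of how these per-user sensors might be registered with Kafka's metrics library is shown below; the class name ProducerIdQuotaMetrics and the sensor names are illustrative assumptions, not part of this proposal.

Code Block
languagescala
import org.apache.kafka.common.metrics.{Metrics, Sensor}
import org.apache.kafka.common.metrics.stats.{Avg, Rate, TokenBucket}
import scala.jdk.CollectionConverters._

// Illustration only: registers the three metrics from the table above for one user.
class ProducerIdQuotaMetrics(metrics: Metrics) {

  def producerIdRateSensor(user: String): Sensor = {
    val tags = Map("user" -> user).asJava
    val sensor = metrics.sensor(s"ProducerIds-$user")
    sensor.add(metrics.metricName("rate", "ProducerIds", "The current rate", tags), new Rate())
    sensor.add(metrics.metricName("tokens", "ProducerIds",
      "The remaining tokens in the bucket. < 0 indicates that throttling is applied.", tags), new TokenBucket())
    sensor
  }

  def throttleTimeSensor(user: String): Sensor = {
    val tags = Map("user" -> user).asJava
    val sensor = metrics.sensor(s"ProducerIds-throttle-time-$user")
    sensor.add(metrics.metricName("throttle-time", "ProducerIds",
      "Tracking average throttle-time per user.", tags), new Avg())
    sensor
  }
}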


Client Errors

The new quota type will use QuotaViolationException, similar to the existing ClientQuotaManager.

New TimeControlledBloomFilter

Code Block
languagescala
class TimedBloomFilter[T](numberOfItems: Int, falsePositiveRate: Double, disposalSchedulerIntervalMs: Long, quotaWindowSizeSeconds: Long, scheduler: Scheduler) {
  // Map of creation time to bloom filter.
  val bloomFilters: ConcurrentHashMap[Long, SimpleBloomFilter[T]] = new ConcurrentHashMap()

  def put(value: T): Unit = {
    // Will choose the right bloom filter to use.
  }

  def mightContain(value: T): Boolean = {
    // Will check all available bloom filters.
  }

  scheduler.schedule("dispose-old_bloom-filter", () => {
    // Dispose of any bloom filter older than 1.5 x quotaWindowSizeSeconds.
  }, 0L, disposalSchedulerIntervalMs)
}

class SimpleBloomFilter[T](numberOfBits: Int, numberOfHashes: Int) {
  val bits = mutable.BitSet.empty

  def put(value: T): Unit = {
    // Will use MurmurHash3 to hash the value.
  }

  def mightContain(value: T): Boolean = {
    // Will check whether this bloom filter contains the value.
  }
}
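
A hedged usage example for the sketch above, tracking the PIDs (as Long values) seen for a single user; the constructor values are illustrative and the scheduler is assumed to be the broker's KafkaScheduler utility:

Code Block
languagescala
import kafka.utils.KafkaScheduler

// Illustration only: one TimedBloomFilter per user, holding the PIDs seen in the current window.
val scheduler = new KafkaScheduler(threads = 1)
scheduler.startup()

val pidFilter = new TimedBloomFilter[Long](
  numberOfItems = 1000,               // assumed number of PIDs expected per user per window
  falsePositiveRate = 0.01,           // see Known Limitations on false positives
  disposalSchedulerIntervalMs = 10L,
  quotaWindowSizeSeconds = 3600L,     // illustrative tracking window
  scheduler = scheduler)

val pid = 1234L
if (!pidFilter.mightContain(pid))
  pidFilter.put(pid)                  // only previously unseen PIDs need to be recorded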

New ProducerIdQuotaManagerCache

Code Block
languagescala
class ProducerIdQuotaManager[K, V](disposalSchedulerIntervalMs: Long, cleanupScheduler: Scheduler) {
  protected val concurrentMap: ConcurrentHashMap[K, TimedBloomFilter[V]] = new ConcurrentHashMap()

  protected val schedulerIntervalMs = new AtomicLong(disposalSchedulerIntervalMs)

  cleanupScheduler.schedule("cleanup-keys", () => {
    // Clean up keys whose TimedBloomFilter is empty.
  }, 0L, schedulerIntervalMs.get())

  def disposalSchedulerIntervalMs(intervalMs: Long): Unit = {
    schedulerIntervalMs.set(intervalMs)
  }

  def add(key: K, value: V): ControlledCachedMap[K, V] = {
    // Add the value to the key's bloom filter.
  }

  def containsKeyValuePair(key: K, value: V): Boolean = {
    // Check whether the (key, value) pair exists in the cache.
  }
}
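
For illustration, a hedged sketch of constructing the cache keyed by KafkaPrincipal, and of pushing a new cleanup interval into it if producer.id.quota.cache.cleanup.scheduler.interval.ms is ever reconfigured; the wiring shown is an assumption, not the final broker integration.

Code Block
languagescala
import kafka.utils.KafkaScheduler
import org.apache.kafka.common.security.auth.KafkaPrincipal

// Illustration only: the broker would own a single cache instance, keyed by user principal.
val cleanupScheduler = new KafkaScheduler(threads = 1)
cleanupScheduler.startup()

val producerIdCache = new ProducerIdQuotaManager[KafkaPrincipal, Long](
  disposalSchedulerIntervalMs = 10L,   // producer.id.quota.cache.cleanup.scheduler.interval.ms
  cleanupScheduler = cleanupScheduler)

// If the cleanup interval is changed at runtime, push the new value into the manager.
producerIdCache.disposalSchedulerIntervalMs(20L)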

Tools

kafka-configs.sh will be extended to support the new quota.  A new quota property will be added, which can be applied to <user>:

...

bin/kafka-configs  --zookeeper localhost:2181 --alter --add-config 'producer_ids_rate=200' --entity-type users --entity-default
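
To apply the quota to a specific user rather than the default, --entity-name can be used (user1 is a placeholder principal name):

bin/kafka-configs  --zookeeper localhost:2181 --alter --add-config 'producer_ids_rate=100' --entity-type users --entity-name user1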

Known Limitations

  • As we are using a bloom filter, we might get false positives.
  • Throttling based on the user will punish every client used by the same user. However, this is a similar risk to existing quotas.

Compatibility, Deprecation, and Migration Plan

Compatibility with Old Clients

  • None, since we are using the same throttling mechanism as ClientQuota, which the client already knows how to handle.

Rejected Alternatives

  1. Limit the total number of active producer IDs that can be allocated: This solution is the simplest, however, as stated in the motivation, the OOM is usually caused by a rogue or misconfigured client, and this solution would punish good clients alongside the rogue ones.
  2. Have a limit on the number of active producer IDs: The idea here is that if we have a misconfigured client, we expire the older entries. This solution risks the idempotency guarantees, and there is also a risk that we may end up expiring the PIDs of good clients, as there is no way to link a PID back to a specific client at this point.
  3. Allow clients to "close" the producer ID usage: This solution is better, however it only improves the situation for new clients, leaving the broker exposed to OOM caused by old producers. We may need to consider improving the producer client to include this, but not as part of the scope of this KIP.
  4. Throttle INIT_PRODUCER_ID requests: This solution might look simple, however throttling INIT_PRODUCER_ID does not guarantee that the OOM would be avoided, because
    1. an idempotent producer requests a PID from a random controller every time, so a client that got throttled on one controller is not guaranteed to be throttled on the next one, and can still cause OOM at the leader later;
    2. the problem happens on activation of the PID when it produces, not at initialisation, so it is more effective to throttle at produce time.
  5. Throttle PIDs based on IPs: Similar to solution #1, we would end up punishing good users, especially if the misbehaving producer is deployed on a K8S cluster that hosts other use cases.