...
A new topic-level property should be created:
Name | Description | Type | Default | Valid Values | Server default property | Importance |
---|---|---|---|---|---|---|
non.consumed.offsets.groups | Comma-separated list of consumer groups that will expose a metric with the number of messages that expired before being consumed | List | "" | | | medium |
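As a minimal sketch of how the new property's value could be handled (a hypothetical helper, not Kafka's actual `LogConfig` code), the comma-separated list would be parsed into group names, with the default `""` yielding an empty list:

```java
import java.util.ArrayList;
import java.util.List;

public class NonConsumedGroupsConfig {
    public static final String PROPERTY_NAME = "non.consumed.offsets.groups";
    public static final String DEFAULT_VALUE = "";

    // Split on commas, trim whitespace, and drop empty entries so that
    // the default value "" yields an empty group list.
    public static List<String> parse(String value) {
        List<String> groups = new ArrayList<>();
        for (String part : value.split(",")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) {
                groups.add(trimmed);
            }
        }
        return groups;
    }
}
```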
A new JMX metric should be created to be exposed by the broker:
Metric / Attribute Name | Description | MBEAN NAME |
---|---|---|
non-consumed-total | Number of messages expired without being consumed by a consumer group | |
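One way to expose such a counter is sketched below using plain JMX (a `StandardMBean`-style interface registered on the platform MBean server) rather than Kafka's internal metrics machinery; the `ObjectName` used here is illustrative, since the MBEAN NAME column above is left unspecified:

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean interface: the platform MBean server derives the exposed
// attribute "NonConsumedTotal" from this getter.
interface NonConsumedTotalMetricMBean {
    long getNonConsumedTotal();
}

public class NonConsumedTotalMetric implements NonConsumedTotalMetricMBean {
    private final AtomicLong nonConsumedTotal = new AtomicLong();

    @Override
    public long getNonConsumedTotal() {
        return nonConsumedTotal.get();
    }

    // Called each time a message is found to have expired unconsumed.
    public void increment() {
        nonConsumedTotal.incrementAndGet();
    }

    // Registers the metric on the platform MBean server under an
    // illustrative (not KIP-specified) object name.
    public void register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(this,
            new ObjectName("kafka.log:type=LogManager,name=non-consumed-total"));
    }
}
```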
Proposed Changes
Currently, the LogManager schedules the "kafka-delete-logs" task, which calls the deleteLogs() method. It is possible to add, inside that method, the logic that exposes the number of offsets not yet consumed by the configured list of consumer groups.
The pseudocode is in the comment block marked KIP-490:
```scala
private def deleteLogs(): Unit = {
  var nextDelayMs = 0L
  try {
    def nextDeleteDelayMs: Long = {
      if (!logsToBeDeleted.isEmpty) {
        val (_, scheduleTimeMs) = logsToBeDeleted.peek()
        scheduleTimeMs + currentDefaultConfig.fileDeleteDelayMs - time.milliseconds()
      } else
        currentDefaultConfig.fileDeleteDelayMs
    }

    while ({nextDelayMs = nextDeleteDelayMs; nextDelayMs <= 0}) {
      val (removedLog, _) = logsToBeDeleted.take()
      if (removedLog != null) {
        try {
          removedLog.delete()
          info(s"Deleted log for partition ${removedLog.topicPartition} in ${removedLog.dir.getAbsolutePath}.")

          //
          // KIP-490: log when consumer groups lose a message because the offset has been deleted
          //
          val consumerGroupsSubscribed: Seq[String] = getConsumerGroups(removedLog.topicPartition.topic())
          // groupsToBeNotified is the value of the topic config property 'retention.notify.groups'
          val groupsToNotify: Seq[String] = consumerGroupsSubscribed intersect groupsToBeNotified
          groupsToNotify.foreach { group =>
            val lastConsumedOffsetGroup: Long = getLastOffsetConsumed(group, removedLog.topicPartition)
            if (lastConsumedOffsetGroup < removedLog.nextOffsetMetadata.messageOffset) {
              // increment and expose the JMX metric non-consumed-total
            }
          }
        } catch {
          case e: KafkaStorageException =>
            error(s"Exception while deleting $removedLog in dir ${removedLog.dir.getParent}.", e)
        }
      }
    }
  } catch {
    case e: Throwable =>
      error(s"Exception in kafka-delete-logs thread.", e)
  } finally {
    try {
      scheduler.schedule("kafka-delete-logs", deleteLogs _, delay = nextDelayMs, unit = TimeUnit.MILLISECONDS)
    } catch {
      case e: Throwable =>
        if (scheduler.isStarted) {
          // No errors should occur unless scheduler has been shutdown
          error(s"Failed to schedule next delete in kafka-delete-logs thread", e)
        }
    }
  }
}
```
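The decision logic embedded in that pseudocode can be isolated as a pure function: given the groups subscribed to the deleted partition's topic, the configured notify list, each group's last consumed offset, and the deleted log's next offset, return the groups that still had unconsumed messages. All names below are illustrative, not Kafka APIs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ExpiredOffsetChecker {
    public static List<String> groupsWithUnconsumedMessages(
            List<String> subscribedGroups,
            List<String> groupsToBeNotified,
            Map<String, Long> lastConsumedOffsets,
            long logNextOffset) {
        List<String> result = new ArrayList<>();
        for (String group : subscribedGroups) {
            // Only groups listed in the topic config are checked
            // (the 'intersect' step in the pseudocode).
            if (!groupsToBeNotified.contains(group)) {
                continue;
            }
            // A group whose committed offset is behind the deleted log's
            // next offset has lost messages it never consumed.
            Long last = lastConsumedOffsets.get(group);
            if (last != null && last < logNextOffset) {
                result.add(group);
            }
        }
        return result;
    }
}
```

Each group returned here would trigger one increment of the non-consumed-total metric.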
...