
...

  • Auditing information still needs to be collected in a central place, so this approach would require extra configuration on the client side.
  • Repetition of the same events should also be avoided, which means we would have to implement a cache on the client side. This makes the clients heavier, which we would like to avoid. The same caching would apply to the brokers as well, so implementation-wise we would not be ahead.
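The caching burden described above can be sketched as follows. This is a hypothetical illustration only, not part of the proposal: every client would need to carry state like this bounded LRU map (the names `ClientAuditDedup` and `shouldEmit` are invented for the example) just to suppress duplicate audit events.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the extra state a client-side approach would need:
// a bounded LRU cache to avoid emitting the same audit event repeatedly.
// Every client instance would carry this weight, which the KIP wants to avoid.
class ClientAuditDedup {
    private static final int MAX_ENTRIES = 10_000;

    // Access-ordered LinkedHashMap acting as an LRU cache.
    private final Map<String, Boolean> seen =
        new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                return size() > MAX_ENTRIES;
            }
        };

    /** Returns true only the first time a given event key is observed. */
    boolean shouldEmit(String eventKey) {
        return seen.put(eventKey, Boolean.TRUE) == null;
    }
}
```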

AbstractRequest and AbstractResponse in audit()

To provide a very generic auditing-like interface, we could simply pass the AbstractRequest and AbstractResponse objects to the audit() method. The first problem is that these are not public interfaces, so we would first need to publish them as interface classes. The next problem is that they expose a number of generated classes, which would therefore have to become interfaces as well; by exposing these two classes we would need to expose many others, growing the public API footprint too much. Secondly, the resulting interface would be a generic interceptor rather than an auditor. This is not what this KIP aims for, although the "audit" functionality could be inserted as a post-action interceptor. Refactoring the KafkaApis code to allow inserting interceptors would be too big a scope for this KIP.
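The rejected shape can be sketched as below. The stub classes stand in for Kafka's internal request/response hierarchy purely to make the sketch self-contained; the `Auditor` interface name is illustrative, not the KIP's actual proposal.

```java
// Stubs standing in for Kafka's internal AbstractRequest/AbstractResponse
// (org.apache.kafka.common.requests), included only so this sketch compiles.
abstract class AbstractRequest { }
abstract class AbstractResponse { }

// The rejected alternative: passing the internal objects straight through.
// Publishing this signature would drag the whole request/response class
// hierarchy, plus the generated message classes behind it, into the public
// API, and the result reads like a generic interceptor rather than an auditor.
interface Auditor {
    void audit(AbstractRequest request, AbstractResponse response);
}
```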

Event Class per API

In Kafka there are 50+ APIs. Creating an event class for each one, such as TopicCreateEvent, TopicDeleteEvent, etc., would explode the amount of boilerplate code needed to implement the audit functionality. This is not what we want, and it would not be good programming practice either. Instead, we created event classes mostly around the resource types being manipulated (topics, ACLs, configs, etc.). These are far fewer in number and easier to use.
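The resource-scoped grouping can be illustrated like this. The class and field names here are invented for the example and are not necessarily the ones the KIP defines: one event class per resource type, with the operation carried as a field, replaces dozens of per-API classes.

```java
// Hypothetical sketch of resource-scoped audit events: instead of one class
// per API (TopicCreateEvent, TopicDeleteEvent, ...), one class per resource
// type, parameterized by the operation performed on it.
abstract class AuditEvent {
    enum Operation { CREATE, DELETE, ALTER, DESCRIBE }

    final Operation operation;

    AuditEvent(Operation operation) {
        this.operation = operation;
    }
}

// A single TopicEvent covers create, delete, alter, and describe calls.
class TopicEvent extends AuditEvent {
    final String topicName;

    TopicEvent(Operation operation, String topicName) {
        super(operation);
        this.topicName = topicName;
    }
}
```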