...

For "external" exceptions, we need to consider KafkaConsumerKafkaProducer, and StreamsKafkaClient (to be replaced by AdmintClient)KafkaAdmintClient. For internal exceptions, we have for example (de)serialization, state store, and user code exceptions as well as any other exception Kafka Streams raises itself (e.g., configuration exceptions).

Last but not least, we distinguish exceptions that should never occur. If those exceptions do occur, they indicate a bug and are thus all fatal. All regular Java exceptions (e.g., NullPointerException) fall into this category.

Coding implications:

  • We should never try to handle any fatal exceptions, but only clean up and shut down
    • We should catch all those exceptions for clean up only and rethrow them unmodified (they will eventually bubble out of the thread and trigger the uncaught exception handler if one is registered)
    • We should log those exceptions (with ERROR level) only once, at the thread level, before they bubble out of the thread, to avoid duplicate logging
  • We need to do fine-grained exception handling, i.e., catch exceptions individually instead of coarse-grained, and react accordingly
  • All methods should have complete JavaDocs about the exceptions they might throw
  • All exception classes must have strictly defined semantics that are documented in their JavaDocs
  • For retriable exceptions, we should throw a special StreamsException if retries are exhausted
  • In runtime code, we should never throw any regular Java exception (unless it's fatal) but define our own exceptions if required (this allows us to distinguish between bugs and our own exceptions)
  • We should catch, wrap, and rethrow exceptions each time we can add important information that helps users and us figure out the root cause of what went wrong (see the sketch below)
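
A minimal sketch of the catch-wrap-rethrow pattern described above. The RecordProcessor class, its taskId field, and deserializeAndForward() are hypothetical and only illustrate where context information could be added; the exception types are the existing Kafka classes:

import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.streams.errors.StreamsException;

// Hypothetical task-level processing code: catch specific exceptions,
// enrich them with context, and rethrow; never swallow unexpected errors.
public class RecordProcessor {
    private final String taskId; // hypothetical field, used only to add context

    public RecordProcessor(final String taskId) {
        this.taskId = taskId;
    }

    public void process(final byte[] key, final byte[] value) {
        try {
            deserializeAndForward(key, value);
        } catch (final SerializationException e) {
            // fine-grained handling: wrap with information that helps find the root cause
            throw new StreamsException("Failed to deserialize record in task " + taskId, e);
        } catch (final KafkaException e) {
            // unexpected client exception: rethrow unmodified so it bubbles out of the thread,
            // where it is logged once with ERROR level and handed to the uncaught exception
            // handler registered via KafkaStreams#setUncaughtExceptionHandler (if any)
            throw e;
        }
    }

    private void deserializeAndForward(final byte[] key, final byte[] value) {
        // hypothetical placeholder for the actual deserialization and processing logic
    }
}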

...

  • How to handle Throwable?
    • Should we try to catch-and-rethrow in order to clean up?
      • Throwable is fatal, so clean up might fail anyway?
      • Furthermore, should we assume that the whole JVM is dying anyway?
    • Should we be harsh and call System.exit (note, we are a library – but maybe we are "special" enough to justify this)?
      • Note, if a thread dies without clean up, but other threads are still running fine, we might end up in a deadlock as locks are not released
      • Could also be configurable
      • Could also be a hybrid: try to clean up on Throwable but call System.exit if clean up fails (as we would end up in a deadlock anyway – maybe only if running with more than one thread?)
    • Should we force users to provide uncaught exception handler via KafkaStreams constructor to make sure they get notified about dying streams?
  • Restructure exception class hierarchy:
    • Remove all sub-classes of StreamsException from the public API (we only hand out this one to the user)

    • StreamsException indicates a fatal error (we could sub-class StreamsException with more detailed fatal errors if required – but don't think this is necessary)
    • We sub-class StreamsException with (an abstract?) RecoverableStreamsException in internal package for any internal exception that should be handled by Streams and never bubble out
      • As an alternative (that I would prefer) we could introduce this as an independent and checked exception instead of inheriting from StreamsException (this forces us to declare and handle those exceptions in our code and makes it hard to miss – otherwise, one might bubble out due to a bug)
    • We sub-class individual recoverable exceptions in a fine-grained manner from RecoverableStreamsException for individual errors (see the hierarchy sketch after this list)
    • We can further group all retriable exceptions by sub-classing them from abstract RetriableStreamsException extends RecoverableStreamsException – the more details/categories the better?
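
A rough sketch of how this hierarchy could look, using the checked-exception alternative preferred above; the leaf class name is a hypothetical example, and the existing public StreamsException is assumed to stay as the single fatal exception handed to users:

// Internal package: handled by Streams, should never bubble out to the user.
// Sketched as a checked exception (the preferred alternative above), so the
// compiler forces internal code to declare and handle it.
abstract class RecoverableStreamsException extends Exception {
    RecoverableStreamsException(final String message, final Throwable cause) {
        super(message, cause);
    }
}

// Groups all retriable errors; once retries are exhausted, the error is
// wrapped into a (fatal) StreamsException as stated in the coding implications.
abstract class RetriableStreamsException extends RecoverableStreamsException {
    RetriableStreamsException(final String message, final Throwable cause) {
        super(message, cause);
    }
}

// Hypothetical fine-grained example of an individual recoverable error.
class StateStoreLockException extends RetriableStreamsException {
    StateStoreLockException(final String message, final Throwable cause) {
        super(message, cause);
    }
}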

...

-> RetriableException, CoordinatorNotAvailableException, RetriableCommitFailedException, DuplicateSequenceNumberException, NotEnoughReplicasException, NotEnoughReplicasAfterAppendException, InvalidRecordException, DisconnectException, InvalidMetadataException (NotLeaderForPartitionException, NoAvailableBrokersException, UnknownTopicOrPartitionException, KafkaStorageException, LeaderNotAvailableException), GroupCoordinatorNotAvailableException
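
For this retriable group, a hedged sketch of the "retries exhausted" rule from the coding implications above; the helper name, the bounded retry count, and the use of a plain Runnable are illustrative assumptions, not the actual Streams code:

import org.apache.kafka.common.errors.RetriableException;
import org.apache.kafka.streams.errors.StreamsException;

// Illustrative only: retry a client call a bounded number of times on retriable
// errors and throw a (fatal) StreamsException once retries are exhausted.
final class RetryUtil {
    static void withRetries(final Runnable clientCall, final int maxRetries) {
        RetriableException lastError = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                clientCall.run();
                return;
            } catch (final RetriableException e) {
                lastError = e; // retriable: remember the cause and try again
            }
        }
        throw new StreamsException("Client call failed after " + (maxRetries + 1) + " attempts", lastError);
    }
}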

Should never happen:

  • UnsupportedVersionException (we do a startup check and should not start processing for this case in the first place)

Handled by client (consumer, producer, admin) internally and should never bubble out of a client: (verify)

...