...

The reason we do not support per-topic parallelism specification with a wildcard topicFilter is that the set of matching topics is not known at construction time, so there is no way to provide per-topic specifications.


How to consume large messages?

First, you need to make sure these large messages can be accepted by the Kafka brokers:

{code}
message.max.bytes
{code}


controls the maximum size of a message the broker will accept; any single message (including the wrapper message of a compressed message set) larger than this value will be rejected at produce time.
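For example, to allow messages up to 10 MB you could set the following in the broker's server.properties (the 10 MB value here is purely illustrative; pick a limit that suits your workload). Note that replica.fetch.max.bytes should be at least as large, so that followers can still replicate the largest accepted message:

{code}
# server.properties (broker side)
# 10485760 bytes = 10 MB -- illustrative value
message.max.bytes=10485760
# replication fetches must also accommodate the largest message
replica.fetch.max.bytes=10485760
{code}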

Then you need to make sure consumers can fetch such large messages on brokers:

{code}
fetch.message.max.bytes
{code}

controls the maximum number of bytes a consumer requests in one fetch. If it is smaller than a message's size, the consumer will get stuck on that message and keep retrying without making progress.
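A matching consumer configuration might look like the following (again using the illustrative 10 MB figure); the key point is that this value must be at least as large as the broker's message.max.bytes:

{code}
# consumer configuration
# must be >= the broker's message.max.bytes, or large messages will stall the consumer
fetch.message.max.bytes=10485760
{code}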


Brokers

How does Kafka depend on Zookeeper?

...