...
Status
Current state: Adopted
Current active discussion thread: here
Previous discussion threads: here and here
Vote thread (current): here
JIRA: here
Released: AK 2.6.0
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
...
As of 0.11.0.0, Kafka Connect can automatically create its internal topics using the new AdminClient (see KIP-154), but it still relies upon the broker to auto-create new topics to which source connector records are written. This is error prone, as it's easy for the topics to be created with an inappropriate cleanup policy, replication factor, and/or number of partitions. Some Kafka clusters disable auto topic creation via auto.create.topics.enable=false
, and in these cases users creating connectors must manually pre-create the necessary topics. That can be quite challenging for source connectors that choose topics dynamically based upon the source and that end up writing to large numbers of topics.
...
This proposal adds several source connector configuration properties that specify the replication factor, number of partitions, and other topic-specific settings that Connect will use to create any topic to which the source connector writes but which does not yet exist when the source connector generates its records. None of these properties has a default value, so this feature takes effect for a connector only when the feature is enabled for the Connect cluster and when the source connector configuration specifies at least the replication factor and number of partitions for at least one group. Users may choose to use the default values specified in the Kafka broker by setting the replication factor or the number of partitions to -1.
Different classes of configuration properties can be defined through the definition of groups. Group definition follows a pattern that resembles what has been used previously to introduce property definitions for transformations in Kafka Connect. The config property groups are listed within the property topic.creation.groups. The hierarchy of groups is built on top of a single implicit group that is called default. The default group always exists and does not need to be listed explicitly in topic.creation.groups (if it does, it will be ignored with a warning message).
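To make the group hierarchy concrete, here is a minimal sketch (reusing group names that appear in the examples later in this document): two extra groups are listed, the default group is configured without being named in topic.creation.groups, and -1 defers the replication factor and partition count to the broker's defaults. The per-group inclusion rules and overrides are defined with the properties described in the table below.
Code Block
...
topic.creation.groups=inorder, compacted
topic.creation.default.replication.factor=-1
topic.creation.default.partitions=-1
...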
...
Property | Type | Default | Possible Values | Description |
---|---|---|---|---|
topic.creation.groups | List of String types | empty | The group default is always defined for topic configurations. The values of this property refer to additional groups. | A list of group aliases that are used to define per-group topic configurations for matching topics. If the topic creation feature is enabled, the group default always exists and matches all topics. |
topic.creation.$alias.include | List of String types | empty | Comma-separated list of exact topic names or regular expressions. | A list of strings that represent either exact topic names or regular expressions that may match topic names. This list is used to include topics with matching names and to apply this group's specific configuration to those topics. $alias applies to any group defined in topic.creation.groups but not to the default group. |
topic.creation.$alias.exclude | List of String types | empty | Comma-separated list of exact topic names or regular expressions. | A list of strings that represent either exact topic names or regular expressions that may match topic names. This list is used to exclude topics with matching names from having this group's specific configuration applied to them. $alias applies to any group defined in topic.creation.groups but not to the default group. Note that exclusion rules take precedence over and override any inclusion rules for topics (see the sketch after this table). |
topic.creation.$alias.replication.factor | int | n/a | >= 1 for a specific valid value, or -1 to use the broker's default value | The replication factor for new topics created for this connector. This value must not be larger than the number of brokers in the Kafka cluster; otherwise an error will be thrown when the connector attempts to create a topic. For the default group this configuration is required. For any other group defined in topic.creation.groups this config is optional and, if it's missing, it falls back to the value of the default group. |
topic.creation.$alias.partitions | int | n/a | >= 1 for a specific valid value, or -1 to use the broker's default value | The number of partitions for new topics created for this connector. For the default group this configuration is required. For any other group defined in topic.creation.groups this config is optional and, if it's missing, it falls back to the value of the default group. |
topic.creation.$alias.${kafkaTopicSpecificConfigName} | several | broker value | Any of the Kafka topic-level configurations for the version of the Kafka broker where the records will be written. | The broker's topic-level configuration value will be used if that configuration is not specified for the rule. $alias applies to the default group as well as any group defined in topic.creation.groups. |
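As a sketch of how inclusion and exclusion interact (the group name and topic patterns here are hypothetical), the group below claims topics whose names start with orders except those ending in .internal, because exclusion rules take precedence; topics that are excluded simply fall back to the default group's settings:
Code Block
...
topic.creation.groups=orders
topic.creation.default.replication.factor=3
topic.creation.default.partitions=5
topic.creation.orders.include=orders.*
topic.creation.orders.exclude=.*\\.internal
topic.creation.orders.partitions=1
...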
...
Code Block
...
topic.creation.groups=inorder
topic.creation.default.replication.factor=3
topic.creation.default.partitions=5
topic.creation.inorder.include=status, orders.*
topic.creation.inorder.partitions=1
...
...
Example 3: By default, new topics created by Connect for this connector will have a replication factor of 3 and 5 partitions, while the key_value_topic and another.compacted.topic topics, or topics that begin with the prefix configurations, will be compacted and will have a replication factor of 5 and 1 partition.
Code Block
...
topic.creation.groups=compacted
topic.creation.default.replication.factor=3
topic.creation.default.partitions=5
topic.creation.compacted.include=key_value_topic, another\\.compacted\\.topic, configurations.*
topic.creation.compacted.replication.factor=5
topic.creation.compacted.partitions=1
topic.creation.compacted.cleanup.policy=compact
...
...
Code Block
...
topic.creation.groups=compacted, highly_parallel
topic.creation.default.replication.factor=3
topic.creation.default.partitions=5
topic.creation.highly_parallel.include=hpc.*, parallel.*
topic.creation.highly_parallel.exclude=.*internal, .*metadata, .*config.*
topic.creation.highly_parallel.replication.factor=1
topic.creation.highly_parallel.partitions=1
topic.creation.compacted.include=configurations.*
topic.creation.compacted.cleanup.policy=compact
...
...
Therefore, in order to use this feature, the Kafka principal specified in the worker configuration and used for the source connectors (e.g., producer.*) must have permission to DESCRIBE and CREATE topics. If the worker's producer does not have the necessary privileges to DESCRIBE existing and CREATE missing topics but a source connector does specify the topic.creation.* configuration properties, the worker will log a warning and will default to the previous behavior of assuming the topics already exist or that the broker will auto-create them when needed. Note that when the Connect worker starts up, it already has the ability to create in the Kafka cluster the internal topics used for storing connector configurations, connector and task statuses, and source connector offsets.
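For reference, the principal used by the worker's producers is the one configured through the producer.* properties of the worker configuration, and it is this principal that needs the DESCRIBE and CREATE permissions. The sketch below assumes SASL/PLAIN and reuses the illustrative alice credentials from the override example that follows:
Code Block
...
producer.security.protocol=SASL_PLAINTEXT
producer.sasl.mechanism=PLAIN
producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="alice" \
  password="alice-secret";
...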
...
Code Block
...
producer.override.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="alice" \
password="alice-secret";
admin.override.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="bob" \
password="bob-secret";
...
If neither the worker properties nor the connector overrides allow for the creation of new topics at runtime and this feature is enabled, an error will be logged and the task will fail.
If creating topics is not desired for security purposes, this feature should be disabled by setting topic.creation.enable=false
. In this case, the previous behavior of assuming the topics already exist or that the broker will auto-create them when needed will be in effect.
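As a reminder of where that switch lives, a minimal worker configuration sketch (topic.creation.enable is assumed here to be a worker-level property, in contrast to the connector-level topic.creation.* properties above):
Code Block
...
topic.creation.enable=false
...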
Compatibility, Deprecation, and Migration Plan
...
Finally, this feature uses Kafka's Admin API methods to check for the existence of a topic and to create new topics. If the broker does not support the Admin API methods, an error will be logged and the task will fail if this feature is enabled. If ACLs are used, the Kafka principal used in the Connect worker's producer.* settings is assumed to have the privilege to create topics when needed; if not, then appropriate overrides will have to be present in the connector's configuration; and, finally, if those are not in place either, an error will be logged and the task will fail if this feature is enabled.
...