Status

Current state: Under Discussion

...

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Kafka Connect currently defines a default REST API request timeout of 90 seconds which isn't configurable. If a REST API request takes longer than this, a 500 Internal Server Error response is returned with the message "Request timed out". In exceptional scenarios, a longer timeout may be required for operations such as connector config validation or connector creation / update (both of which internally perform a config validation first) to complete successfully. Consider a database / data warehouse connector with elaborate validation logic that queries the information schema to get a list of tables / views in order to validate the user's connector configuration. If the database / data warehouse has a very large number of tables / views and is under heavy query load, such information schema queries can end up taking longer than 90 seconds, which will cause connector config validation / creation REST API calls to time out.

Public Interfaces

This KIP proposes to add a new query parameter "timeout" to the following Kafka Connect REST API endpoints:

...

A new Kafka Connect worker configuration - rest.api.max.request.timeout.ms - will be added to configure an upper bound for the timeout query parameter on the above 3 REST API endpoints. The default value for this config will be 600000 (10 minutes), and it will be marked as a low-importance config.
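
For illustration, here is a minimal sketch of how the new worker config could be registered via Kafka's ConfigDef. Only the config name, default value, and importance come from this KIP; the class name, doc string, and helper method are assumptions made for the example.

```java
import org.apache.kafka.common.config.ConfigDef;

// Illustrative sketch only - not the final implementation.
public class RestTimeoutConfigSketch {
    public static final String REST_API_MAX_REQUEST_TIMEOUT_MS_CONFIG = "rest.api.max.request.timeout.ms";
    public static final long REST_API_MAX_REQUEST_TIMEOUT_MS_DEFAULT = 600_000L; // 10 minutes
    private static final String REST_API_MAX_REQUEST_TIMEOUT_MS_DOC =
            "Upper bound (in milliseconds) for the 'timeout' query parameter on the REST API endpoints that support it.";

    // Adds the proposed config to an existing worker ConfigDef
    public static ConfigDef addTimeoutConfig(ConfigDef base) {
        return base.define(
                REST_API_MAX_REQUEST_TIMEOUT_MS_CONFIG,
                ConfigDef.Type.LONG,
                REST_API_MAX_REQUEST_TIMEOUT_MS_DEFAULT,
                ConfigDef.Importance.LOW,
                REST_API_MAX_REQUEST_TIMEOUT_MS_DOC
        );
    }
}
```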

Proposed Changes

The request timeout for the aforementioned endpoints will be updated to use the value from the timeout query parameter if specified (falling back to the current default of 90 seconds otherwise). If the value of the timeout parameter is invalid (<= 0 or > rest.api.max.request.timeout.ms), a 400 Bad Request response will be returned.
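
A minimal sketch of how the timeout query parameter could be resolved and validated is shown below, assuming the behavior described above; the class and method names are illustrative placeholders, not the actual Connect REST layer code.

```java
// Illustrative sketch only - names are placeholders, not the final implementation.
public class RequestTimeoutSketch {
    private static final long DEFAULT_REST_REQUEST_TIMEOUT_MS = 90_000L;

    // timeoutParam is the raw 'timeout' query parameter (null if absent);
    // maxTimeoutMs comes from the proposed rest.api.max.request.timeout.ms worker config.
    public static long resolveTimeoutMs(Long timeoutParam, long maxTimeoutMs) {
        if (timeoutParam == null) {
            // Fall back to the current default of 90 seconds
            return DEFAULT_REST_REQUEST_TIMEOUT_MS;
        }
        if (timeoutParam <= 0 || timeoutParam > maxTimeoutMs) {
            // The REST layer would translate this into a 400 Bad Request response
            throw new IllegalArgumentException(
                    "'timeout' must be positive and at most " + maxTimeoutMs + " ms");
        }
        return timeoutParam;
    }
}
```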

...

Another small improvement will be made to avoid double connector config validations when Connect is running in distributed mode. Currently, if a request to POST /connectors or PUT /connectors/{connector}/config is made on a worker that isn't the leader of the group, a config validation is done first, and the request is forwarded to the leader only if that validation succeeds (only the leader is allowed to write to the config topic, which is what a connector create / update entails). The forwarded request results in another config validation before the write to the config topic can finally be done on the leader.

The only benefit of this approach is that it avoids forwarding requests with invalid connector configs to the leader. However, it is arguably cheaper and more optimal overall to forward the request to the leader at the outset and let the leader do a single config validation before writing to the config topic. Since config validations are done on their own thread and are typically short-lived operations, it should not be an issue for the leader to perform all config validations arising from connector create / update requests (the only situation where this adds to the leader's load is requests with invalid configs, since the leader already has to validate forwarded requests with valid configs today). Note that the PUT /connector-plugins/{pluginName}/config/validate endpoint doesn't do any request forwarding and can be used if frequent validations are taking place (i.e. they can be made on any worker in the cluster to avoid overloading the leader).
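
The proposed single-validation flow can be sketched roughly as follows; the Forwarder and Validator interfaces and all names here are illustrative placeholders, not Connect's actual herder internals.

```java
import java.util.Map;

// Illustrative sketch of the proposed connector create/update handling in distributed mode.
public class ConnectorWriteRequestSketch {
    interface Forwarder { void forwardToLeader(String path, Map<String, String> connectorConfig); }
    interface Validator { void validate(Map<String, String> connectorConfig); }

    private final boolean isLeader;
    private final Forwarder forwarder;
    private final Validator validator;

    ConnectorWriteRequestSketch(boolean isLeader, Forwarder forwarder, Validator validator) {
        this.isLeader = isLeader;
        this.forwarder = forwarder;
        this.validator = validator;
    }

    void handleCreateOrUpdate(String path, Map<String, String> config) {
        if (!isLeader) {
            // Proposed: forward immediately instead of validating first, so the
            // config is validated exactly once (on the leader).
            forwarder.forwardToLeader(path, config);
            return;
        }
        validator.validate(config); // single validation on the leader
        // ... write to the config topic (elided)
    }
}
```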

Compatibility, Deprecation, and Migration Plan

The proposed changes are fully backward compatible since we're just introducing a new optional query parameter to 3 REST API endpoints along with a new worker configuration that has a default value.

Test Plan

A simple integration test will be added to ensure that a connector config validation REST API request that takes longer than the default REST API request timeout (90 seconds) doesn't fail when the timeout query parameter is set to a higher value. Unit tests will be added wherever applicable.
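
A rough sketch of what such an integration test might look like is below; the worker URL and the SlowValidatingConnector plugin are assumptions for illustration, not existing Connect test fixtures.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

public class RestRequestTimeoutIT {

    @Test
    public void validateWithElevatedTimeoutShouldNotTimeOut() throws Exception {
        // SlowValidatingConnector is assumed to take longer than the default
        // 90 second REST request timeout to validate its configuration.
        String url = "http://localhost:8083/connector-plugins/SlowValidatingConnector/config/validate?timeout=300000";
        String body = "{\"connector.class\": \"SlowValidatingConnector\", \"topics\": \"test\"}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // With the elevated per-request timeout the call should succeed instead of
        // returning 500 "Request timed out".
        assertEquals(200, response.statusCode());
    }
}
```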


Rejected Alternatives

Configure the timeout via a worker configuration

Summary: A Kafka Connect worker configuration could be introduced to control the request timeouts.

Rejected because: This doesn't allow for per-request timeout configuration and also requires a worker restart whenever the timeout needs to change. Configuring the timeout via a request query parameter allows much more fine-grained control.


Allow configuring timeouts for ConnectClusterStateImpl

Summary: Currently, ConnectClusterStateImpl is configured in the RestServer and passed to REST extensions via the context object (see here). ConnectClusterStateImpl takes a request timeout parameter for its operations such as listing connectors and getting connector configs (implemented as herder requests). This timeout is set to the minimum of ConnectResource.DEFAULT_REST_REQUEST_TIMEOUT_MS (90 seconds) and DistributedConfig.REBALANCE_TIMEOUT_MS_CONFIG (defaults to 60 seconds). We could allow configuring these timeouts too.

Rejected because: The overall behavior would be confusing to end users (they'd need to tweak two configs to increase the overall timeout) and there is seemingly no additional value here (the herder requests should not take longer than the currently configured timeout anyway).


Allow configuring producer zombie fencing admin request timeout

Summary: ConnectResource.DEFAULT_REST_REQUEST_TIMEOUT_MS is also used as the timeout for producer zombie fencings done in the worker for exactly-once source tasks (see here). We could allow configuring this timeout as well.

...