Status

Current state: Under Discussion

...

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Kafka Connect currently defines a default REST API request timeout of 90 seconds which isn't configurable. If a REST API request takes longer than this timeout value, a 500 Internal Server Error response is returned with the message "Request timed out". In exceptional scenarios, a longer timeout may be required for operations such as connector config validation or connector creation / update (both of which internally do a config validation first) to complete successfully.

The POST /connectors and the PUT /connectors/{connector}/config endpoints that are used to create or update connectors internally do a connector configuration validation (the details of which vary depending on the connector plugin) before proceeding to create or update the connector. If the configuration validation takes longer than 90 seconds, the connector is still eventually created after the config validation completes (even though a 500 Internal Server Error response is returned to the user), which leads to a fairly confusing user experience.

Furthermore, this situation is exacerbated by the potential for config validations occurring twice for a single request. If Kafka Connect is running in distributed mode and the initial request to create or update a connector is made to a worker which is not the leader of the group, the request is forwarded to the worker which is currently the leader. In this case, the config validation occurs both on the initial worker as well as on the leader (assuming that the first config validation is successful) - this means that if each config validation takes longer than 45 seconds to complete, the original create / update connector request will time out.

Slow config validations can occur in certain exceptional scenarios - consider a database connector which has elaborate validation logic involving querying the information schema to get a list of tables / views in order to validate the user's connector configuration. If the database / data warehouse has a very high number of tables / views and is under heavy load in terms of query volume, such information schema queries can end up taking longer than 90 seconds, which will cause connector config validation / creation REST API calls to time out.

Public Interfaces

This KIP proposes to add a new request header "Request-Timeout"  (integer value in milliseconds; for instance "Request-Timeout: 120000" for a timeout of 120000 milliseconds / 2 minutes) to the following Kafka Connect REST API endpoints:

  • PUT /connector-plugins/{pluginName}/config/validate 
  • POST /connectors 
  • PUT /connectors/{connector}/config 
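
For illustration, a config validation request that is allowed to take up to 5 minutes could look like the sketch below, using Java's built-in HTTP client. The worker URL, plugin name and connector config are made up; only the endpoint path and header name are the ones proposed above:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ValidateWithRequestTimeout {
        public static void main(String[] args) throws Exception {
            // Hypothetical connector config for a hypothetical plugin named "MySourceConnector"
            String config = "{\"connector.class\": \"MySourceConnector\", \"topics\": \"test-topic\"}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8083/connector-plugins/MySourceConnector/config/validate"))
                    .header("Content-Type", "application/json")
                    // Proposed header: timeout in milliseconds (here, 5 minutes)
                    .header("Request-Timeout", "300000")
                    .PUT(HttpRequest.BodyPublishers.ofString(config))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + ": " + response.body());
        }
    }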

The POST /connectors and PUT /connectors/{connector}/config endpoints internally do a config validation first (and only proceed to connector creation / update if the validation passes), which is why the "Request-Timeout" header is relevant for these endpoints too.

A new Kafka Connect worker configuration - rest.api.max.request.timeout.ms - will be added to configure an upper bound for the "Request-Timeout" header on the above 3 REST API endpoints. The default value for this config will be 600000 (10 minutes) and it will be marked as a low importance config.
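
For example, an operator who wants to cap client-supplied timeouts at 5 minutes could set the following in the worker properties file (a sketch assuming the config name proposed above):

    # Upper bound for the proposed "Request-Timeout" header, in milliseconds.
    # Header values that are <= 0 or exceed this cap will be rejected with a 400 Bad Request.
    rest.api.max.request.timeout.ms=300000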

Proposed Changes

The REST API request timeout will be updated to use the value from the "Request-Timeout" header if specified (else fall back to the current default of 90 seconds) for the aforementioned endpoints. If the value of the "Request-Timeout" header is invalid (<= 0 or > rest.api.max.request.timeout.ms), a 400 Bad Request response will be returned.
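
A minimal sketch of how the header value might be resolved and bounds-checked (illustrative only, not actual Connect code; the class and method names here are made up):

    // Illustrative sketch of the proposed "Request-Timeout" header handling.
    public class RequestTimeoutHeader {
        public static long resolve(String headerValue, long defaultTimeoutMs, long maxTimeoutMs) {
            if (headerValue == null) {
                return defaultTimeoutMs; // no header: keep the existing 90 second default
            }
            final long requested;
            try {
                requested = Long.parseLong(headerValue.trim());
            } catch (NumberFormatException e) {
                // would be surfaced to the caller as a 400 Bad Request
                throw new IllegalArgumentException("Invalid Request-Timeout header: " + headerValue);
            }
            if (requested <= 0 || requested > maxTimeoutMs) {
                // would be surfaced to the caller as a 400 Bad Request
                throw new IllegalArgumentException("Request-Timeout must be > 0 and <= " + maxTimeoutMs);
            }
            return requested;
        }
    }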

Note that a higher / lower configured timeout doesn't change how long requests actually run in the herder - currently, if a request exceeds the default timeout of 90 seconds, we return a 500 Internal Server Error response but the request isn't interrupted or cancelled and is allowed to continue to completion. Another thing to note is that each connector config validation is done on its own thread via a cached thread pool executor in the herder (create / update connector requests are processed asynchronously by simply writing a record to the Connect cluster's config topic, so config validations are the only relevant operation here).
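
For context, the pattern is roughly the one in the toy example below: the caller waits on the validation future with a timeout, but an expired wait does not cancel the underlying task. This is not Connect code, just an illustration of the behavior described above:

    import java.util.concurrent.*;

    public class TimeoutDoesNotCancel {
        public static void main(String[] args) throws Exception {
            ExecutorService validationExecutor = Executors.newCachedThreadPool();
            Future<String> validation = validationExecutor.submit(() -> {
                Thread.sleep(5_000); // stand-in for a slow config validation
                return "validation finished";
            });
            try {
                // The caller only waits 1 second, mirroring the REST request timeout
                System.out.println(validation.get(1, TimeUnit.SECONDS));
            } catch (TimeoutException e) {
                // The request "times out", but the validation task keeps running
                System.out.println("request timed out, validation still running");
            }
            System.out.println(validation.get()); // the task still runs to completion
            validationExecutor.shutdown();
        }
    }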

The behavior of the existing POST /connectors and PUT /connectors/{connector}/config endpoints will also be modified in cases where the configuration validation exceeds the request timeout. Currently, even if the connector config validation takes too long and causes a timeout response to be returned to the user, the connector is still created / updated if the config validation eventually completes successfully. This can be pretty confusing to users and is generally a poor user experience because a 500 Internal Server Error response should mean that the request couldn't be fulfilled. Instead, after the configuration validation completes for a request to POST /connectors or PUT /connectors/{connector}/config, a check will be made to verify that the request timeout (either the value configured via the proposed new "Request-Timeout" header or the default 90 seconds) hasn't already been exceeded. If it has, the request will be aborted and the connector won't be created / updated.
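
A rough, self-contained sketch of the check described above (purely illustrative - the names and structure are made up, and the real change would live in the herder's create / update path):

    // Toy sketch: abort the create / update if validation already exceeded the request deadline.
    public class AbortAfterSlowValidation {
        public static void main(String[] args) throws InterruptedException {
            long requestTimeoutMs = 1_000;   // e.g. from the Request-Timeout header
            long deadline = System.currentTimeMillis() + requestTimeoutMs;

            Thread.sleep(2_000);             // stand-in for a slow config validation

            if (System.currentTimeMillis() > deadline) {
                // The caller has already received a timeout response, so don't write to the config topic
                System.out.println("request timed out during validation; aborting create / update");
                return;
            }
            System.out.println("writing connector config to the config topic");
        }
    }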

Another change that will be made with this KIP is to avoid the double connector config validation issue in Connect's distributed mode - workers will directly forward requests to create or update a connector to the leader without performing any config validation first. Currently, if a request to POST /connectors or PUT /connectors/{connector}/config is made on a worker that isn't the leader of the group, a config validation is done first, and the request is forwarded to the leader only if the config validation is successful (only the leader is allowed to do writes to the config topic, which is what a connector create / update entails). The forwarded request then results in another config validation before the write to the config topic can finally be done on the leader. The only small benefit of the existing approach is that it avoids request forwarding to the leader for requests with invalid connector configs. However, it can be argued that it's cheaper and more optimal overall to forward the request to the leader at the outset, and allow the leader to do a single config validation before writing to the config topic. Since config validations are done on their own thread and are typically short lived operations, it should not be an issue even with large clusters to allow the leader to do all config validations arising from connector create / update requests (the only situation where we're adding to the leader's load is for requests with invalid configs, since the leader today already has to do a config validation for forwarded requests with valid configs). Note that the PUT /connector-plugins/{pluginName}/config/validate endpoint doesn't do any request forwarding and can be used if frequent validations are taking place (i.e. they can be made on any worker in the cluster to avoid overloading the leader).

Compatibility, Deprecation, and Migration Plan

The proposed changes are fully backward compatible since we're just introducing a new optional request header on 3 REST API endpoints along with a new worker configuration that has a default value. The only potential concern is the unrealistic scenario where users are relying on the current behavior of connector create / update requests proceeding to completion even when config validation causes the request to exceed the timeout value; note that this would still be possible by manually writing the connector's configuration to the Connect cluster's config topic.

Test Plan

...

  • Add an integration test to verify that a connector is not created if config validation exceeds the request timeout.
  • Add an integration test to verify that config validation only occurs a single time when requests to create or update a connector are made to a worker which is not the leader.
  • Add unit tests wherever applicable.


Rejected Alternatives

Introduce a new internal endpoint to persist a connector configuration without doing a config validation

Summary: Instead of forwarding all create / update requests to the leader directly, we could do a config validation on the non-leader worker first and, if the validation passes, forward the request to a new internal-only endpoint on the leader which would just do the write to the config topic without doing a config validation first.

Rejected because: Introduces additional complexity with very little benefit as opposed to simply delegating all config validations from create / update requests to the leader. Furthermore, this could have security implications where the internal endpoint could be abused to bypass config validation (although the internal endpoint could potentially be secured using the mechanism introduced in KIP-507: Securing Internal Connect REST Endpoints).

Configure the timeout via a worker configuration

Summary: A Kafka Connect worker configuration could be introduced to control the request timeouts.

Rejected because: This doesn't allow for per request timeout configuration and also requires a worker restart if changes are requested. Configuring the timeout via a request header allows for much more fine-grained control.

Allow configuring timeouts for ConnectClusterStateImpl

Summary: Currently, ConnectClusterStateImpl  is configured in the RestServer and passed to REST extensions via the context object (see here). ConnectClusterStateImpl takes a request timeout parameter for its operations such as list connectors and get connector config (implemented as herder requests). This timeout is set to the minimum of ConnectResource.DEFAULT_REST_REQUEST_TIMEOUT_MS (90 seconds) and DistributedConfig.REBALANCE_TIMEOUT_MS_CONFIG  (defaults to 60 seconds). We could allow configuring these timeouts too.

Rejected because: The overall behavior would be confusing to end users (they'll need to tweak two configs to increase the overall timeout) and there is seemingly no additional value here (as the herder requests should not take longer than the current configured timeout anyway).

Allow configuring producer zombie fencing admin request timeout

Summary: ConnectResource.DEFAULT_REST_REQUEST_TIMEOUT_MS is also used as the timeout for producer zombie fencings done in the worker for exactly once source tasks (see here). We could allow configuring this timeout as well.

Rejected because: Zombie fencing is an internal operation for Kafka Connect and users shouldn't be able to configure its timeout.