Status

Current state: Accepted

Discussion thread: here

JIRA: KAFKA-5657

Released: 1.0.0

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Currently we don't expose whether a connector is a source or a sink in its description. This information is useful when, for example, categorizing connectors in a UI. Given the naming conventions we try to encourage, you might be able to determine the type from the connector's class name, but that isn't reliable. This proposal makes the connector's type explicit in the REST responses.

Public Interfaces

We will modify the following REST API endpoints for Connect to include the type of the connector:

Code Block
GET /connectors/(string:name)
GET /connectors/(string:name)/status
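
For illustration, the sketch below queries both endpoints with Java 11's built-in HTTP client. This is only a sketch: the worker address (localhost:8083) and the connector name are assumptions for the example, not part of the proposal.

Code Block
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectRestProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Assumed worker address and connector name, for illustration only.
        String base = "http://localhost:8083/connectors/hdfs-sink-connector";
        for (String url : new String[] { base, base + "/status" }) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(url + " -> " + response.body());
        }
    }
}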

Proposed Changes

The endpoint GET /connectors/(string:name) currently returns the following structure:

Code Block
HTTP/1.1 200 OK
Content-Type: application/json

{
    "name": "hdfs-sink-connector",
    "config": {
        "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
        "tasks.max": "10",
        "topics": "test-topic",
        "hdfs.url": "hdfs://fakehost:9000",
        "hadoop.conf.dir": "/opt/hadoop/conf",
        "hadoop.home": "/opt/hadoop",
        "flush.size": "100",
        "rotate.interval.ms": "1000"
    },
    "tasks": ["my-jdbc-source", "my-hdfs-sink"]
        { "connector": "hdfs-sink-connector", "task": 1 },
        { "connector": "hdfs-sink-connector", "task": 2 },
        { "connector": "hdfs-sink-connector", "task": 3 }
    ]
}
We will add a `type` field to the document to indicate whether the given Connector is a Source or Sink, e.g.:

Code Block
HTTP/1.1 200 OK
Content-Type: application/json

{
    "name": "hdfs-sink-connector",
    "config": {
        "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
        "tasks.max": "10",
        "topics": "test-topic",
        "hdfs.url": "hdfs://fakehost:9000",
        "hadoop.conf.dir": "/opt/hadoop/conf",
        "hadoop.home": "/opt/hadoop",
        "flush.size": "100",
        "rotate.interval.ms": "1000"
    },
    "type": "sink",
    "tasks": [
        { "connector": "hdfs-sink-connector", "task": 1 },
        { "connector": "hdfs-sink-connector", "task": 2 },
        { "connector": "hdfs-sink-connector", "task": 3 }
    ]
}
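
For reference, the worker can derive this value from the connector instance itself. The sketch below is illustrative only (the helper class and method names are hypothetical, not the actual implementation); it uses the public SourceConnector and SinkConnector base classes from the Connect API:

Code Block
import org.apache.kafka.connect.connector.Connector;
import org.apache.kafka.connect.sink.SinkConnector;
import org.apache.kafka.connect.source.SourceConnector;

public final class ConnectorTypes {
    // Returns the lowercase type string used in the REST responses.
    // "unknown" covers connector classes that extend neither base class.
    public static String typeOf(Connector connector) {
        if (connector instanceof SourceConnector)
            return "source";
        if (connector instanceof SinkConnector)
            return "sink";
        return "unknown";
    }
}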

Similarly, the endpoint GET /connectors/(string:name)/status currently returns the following structure:

Code Block
HTTP/1.1 200 OK
{
    "name": "myhdfs-jdbcsink-sourceconnector",
    "connector": {
        "typestate": "sourceRUNNING",
        "worker_id": "fakehost:8083"
    },
    "tasks":
    [
        {
            "id": 0,
            "state": "RUNNING",
            "worker_id": "fakehost:8083"
        },
        {
            "id": 1,
            "state": "FAILED",
            "worker_id": "fakehost:8083",
            "trace": "org.apache.kafka.common.errors.RecordTooLargeException\n"
        }
    ]
}

We will add a `type` field to the response to indicate whether the given Connector is a Source or Sink, e.g.:
Code Block
HTTP/1.1 200 OK
{
    "name": "my-hdfs-sink-connector",
    "connector": {
        "type": "sink",
        "state": "RUNNING",
        "worker_id": "fakehost:8083"
    },
    "tasks":
    [
        {
            "id": 0,
            "state": "RUNNING",
            "worker_id": "fakehost:8083"
        },
        {
            "id": 1,
            "state": "FAILED",
            "worker_id": "fakehost:8083",
            "trace": "org.apache.kafka.common.errors.RecordTooLargeException\n"
        }
    ]
}

Compatibility, Deprecation, and Migration Plan

This proposal only adds a field to existing JSON response messages from existing REST API endpoints, and does not otherwise change the structure of the responses. Most applications will be able to handle the response documents and ignore fields they don't know about, so this change should be backwards compatible.

KIP-151 introduced the connector type enum, which serializes as lowercase; this KIP keeps that convention. Consumers of the API should nevertheless not make assumptions about the capitalization of the connector type.
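
To illustrate both points, here is a sketch of a Jackson-based client (the class and field names are hypothetical) that ignores fields it doesn't model and avoids assumptions about the capitalization of the type value:

Code Block
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StatusClient {
    // Model only the fields this client cares about; unknown fields
    // (such as the newly added "type") are ignored instead of failing.
    @JsonIgnoreProperties(ignoreUnknown = true)
    public static class ConnectorState {
        public String state;
        public String type; // may be absent on workers predating this KIP
    }

    @JsonIgnoreProperties(ignoreUnknown = true)
    public static class ConnectorStatus {
        public String name;
        public ConnectorState connector;
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"name\":\"my-hdfs-sink-connector\","
                + "\"connector\":{\"type\":\"sink\",\"state\":\"RUNNING\","
                + "\"worker_id\":\"fakehost:8083\"},\"tasks\":[]}";
        ConnectorStatus status = new ObjectMapper().readValue(json, ConnectorStatus.class);
        // Compare case-insensitively, per the note above.
        System.out.println("sink? " + "sink".equalsIgnoreCase(status.connector.type));
    }
}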

Rejected Alternatives

The current way to get this information is to use a heuristic (does the class name contain "Sink" or "Source"?) or a lookup table, neither of which works in every case or with dynamically loaded Connectors.
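
To make the fragility concrete, here is a minimal sketch of the name heuristic (all class names below are made up):

Code Block
public class HeuristicDemo {
    // The substring check the heuristic approach relies on.
    static String guessType(String className) {
        if (className.contains("Source")) return "source";
        if (className.contains("Sink")) return "sink";
        return "unknown";
    }

    public static void main(String[] args) {
        // Misses a source connector whose class name omits "Source".
        System.out.println(guessType("com.example.JdbcIngestConnector"));     // unknown
        // Misclassifies a sink whose name happens to contain "Source" too.
        System.out.println(guessType("com.example.OpenSourceSinkConnector")); // source
    }
}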