
Status

Current state: Under discussion

Discussion thread: here

JIRA: KAFKA-5657

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Currently we don't expose whether a connector is a source or a sink in its description. This information is useful when, for example, categorizing connectors in a UI. Given suggested naming conventions you might be able to infer the type from the connector's class name, but that isn't reliable. This proposal makes the connector's type explicit in the REST responses.

Public Interfaces

We will modify the following REST API endpoints for Connect:

GET /connectors/(string:name)
GET /connectors/(string:name)/config
GET /connectors/(string:name)/status

Proposed Changes

The first endpoint, GET /connectors/(string:name), currently returns the following structure:

HTTP/1.1 200 OK
Content-Type: application/json

{
    "name": "hdfs-sink-connector",
    "config": {
        "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
        "tasks.max": "10",
        "topics": "test-topic",
        "hdfs.url": "hdfs://fakehost:9000",
        "hadoop.conf.dir": "/opt/hadoop/conf",
        "hadoop.home": "/opt/hadoop",
        "flush.size": "100",
        "rotate.interval.ms": "1000"
    },
    "tasks": [
        { "connector": "hdfs-sink-connector", "task": 1 },
        { "connector": "hdfs-sink-connector", "task": 2 },
        { "connector": "hdfs-sink-connector", "task": 3 }
    ]
}

We will add a `type` field to the response that indicates whether the given connector is a source or sink, e.g.:

HTTP/1.1 200 OK
Content-Type: application/json

{
    "name": "hdfs-sink-connector",
    "config": {
        "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
        "tasks.max": "10",
        "topics": "test-topic",
        "hdfs.url": "hdfs://fakehost:9000",
        "hadoop.conf.dir": "/opt/hadoop/conf",
        "hadoop.home": "/opt/hadoop",
        "flush.size": "100",
        "rotate.interval.ms": "1000"
    },
    "type": "Sink",
    "tasks": [
        { "connector": "hdfs-sink-connector", "task": 1 },
        { "connector": "hdfs-sink-connector", "task": 2 },
        { "connector": "hdfs-sink-connector", "task": 3 }
    ]
}
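
To show how a client might consume the new field, here is a minimal sketch that buckets connectors by type, e.g. for display in a UI. It assumes a Connect worker at http://localhost:8083, and connectors_by_type is a helper name invented for illustration:

# Minimal sketch: bucket connectors by the proposed "type" field.
# Assumes a Connect worker at http://localhost:8083; connectors_by_type
# is an illustrative helper, not part of any existing client library.
import json
import urllib.request

BASE_URL = "http://localhost:8083"

def _get(path):
    # GET a Connect REST path and decode the JSON body.
    with urllib.request.urlopen(BASE_URL + path) as resp:
        return json.load(resp)

def connectors_by_type():
    # Group connector names into Source/Sink buckets for display.
    buckets = {}
    for name in _get("/connectors"):        # GET /connectors lists names
        info = _get("/connectors/" + name)  # per-connector description
        buckets.setdefault(info["type"], []).append(name)
    return buckets

if __name__ == "__main__":
    print(connectors_by_type())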

We will make the same change to the following REST API endpoint:

GET /connectors/(string:name)/config

Proposed Changes

The aforementioned endpoint currently returns the following structure:

HTTP/1.1 200 OK
Content-Type: application/json
{
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "10",
    "topics": "test-topic",
    "hdfs.url": "hdfs://fakehost:9000",
    "hadoop.conf.dir": "/opt/hadoop/conf",
    "hadoop.home": "/opt/hadoop",
    "flush.size": "100",
    "rotate.interval.ms": "1000"
}

We will add a `type` field to the response that indicates whether the given connector is a source or sink, e.g.:

HTTP/1.1 200 OK
Content-Type: application/json
{
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "type": "Sink",
    "tasks.max": "10",
    "topics": "test-topic",
    "hdfs.url": "hdfs://fakehost:9000",
    "hadoop.conf.dir": "/opt/hadoop/conf",
    "hadoop.home": "/opt/hadoop",
    "flush.size": "100",
    "rotate.interval.ms": "1000"
}
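
Note that this endpoint returns a flat map, so the new type key would sit alongside the user-supplied configuration keys. A client that round-trips a config (GET, edit, then PUT it back) may therefore want to strip the metadata key first. A minimal sketch; METADATA_KEYS and user_config are names invented for illustration:

# Sketch: separate the proposed metadata key from user-supplied config
# before resubmitting it via PUT. METADATA_KEYS and user_config are
# illustrative names, not part of any existing API.
METADATA_KEYS = {"type"}

def user_config(config_response):
    # Drop server-added metadata so only real config keys are PUT back.
    return {k: v for k, v in config_response.items()
            if k not in METADATA_KEYS}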

Finally, we will make the same change to the following REST API endpoint:

GET /connectors/(string:name)/status

Proposed Changes

The aforementioned endpoint currently returns the following structure:

HTTP/1.1 200 OK
{
    "name": "hdfs-sink-connector",
    "connector": {
        "state": "RUNNING",
        "worker_id": "fakehost:8083"
    },
    "tasks":
    [
        {
            "id": 0,
            "state": "RUNNING",
            "worker_id": "fakehost:8083"
        },
        {
            "id": 1,
            "state": "FAILED",
            "worker_id": "fakehost:8083",
            "trace": "org.apache.kafka.common.errors.RecordTooLargeException\n"
        }
    ]
}

We will add a `type` field to the response that indicates whether the given connector is a source or sink, e.g.:

HTTP/1.1 200 OK
{
    "name": "hdfs-sink-connector",
    "type": "Sink",
    "connector": {
        "state": "RUNNING",
        "worker_id": "fakehost:8083"
    },
    "tasks":
    [
        {
            "id": 0,
            "state": "RUNNING",
            "worker_id": "fakehost:8083"
        },
        {
            "id": 1,
            "state": "FAILED",
            "worker_id": "fakehost:8083",
            "trace": "org.apache.kafka.common.errors.RecordTooLargeException\n"
        }
    ]
}
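
With the field in the status payload, monitoring code can key off connector type as well as state, e.g. to route sink and source failures to different alert channels. A small sketch; failed_tasks_by_type is a name invented for illustration:

# Sketch: collect failed task ids together with the connector's type.
# failed_tasks_by_type is an illustrative name, not an existing API.
def failed_tasks_by_type(status):
    failed = [t["id"] for t in status["tasks"] if t["state"] == "FAILED"]
    return status["type"], failed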

Compatibility, Deprecation, and Migration Plan

  • This is purely an addition to the current API and is backwards compatible; existing clients can simply ignore the new field (see the sketch below).
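
As a sketch of what backwards compatible means in practice, a new client that may also talk to workers running an older version can read the field defensively rather than require it ("unknown" is an arbitrary fallback chosen for illustration):

# Sketch: treat the new field as optional so the same client also works
# against workers that predate this change.
connector_type = info.get("type", "unknown")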

Rejected Alternatives

 
