...

      1. Flink to Redshift: Flink connects to Redshift over JDBC using a username and password.
        Redshift does not support IAM role authentication for this connection.
        The connection can be secured with SSL; for more details, see the Encryption section below.
      2. Flink to S3: S3 acts as an intermediary for staging bulk data when reading from or writing to Redshift.
        Flink connects to S3 both through the Hadoop FileSystem interfaces and directly through the AWS Java SDK's S3 client.
        This connection can be authenticated using either AWS keys or IAM roles.
      3. Default Authentication: flink-connector-aws provides several authentication modes; the Redshift connector will use its CredentialProvider.
        AWS credentials are retrieved automatically through the
        DefaultAWSCredentialsProviderChain.
        Users can use IAM instance roles to authenticate to S3 (e.g. on EMR or EC2).
        A minimal sketch of these connections follows this list.
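
The sketch below illustrates the connection paths described above, assuming the standard Amazon Redshift JDBC driver and the AWS Java SDK v1. The cluster endpoint, database, credentials, and region are placeholders, and the exact property names ultimately exposed by the connector may differ.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    public class RedshiftConnectionSketch {

        public static void main(String[] args) throws Exception {
            // Connection 1 - Flink to Redshift: JDBC with username/password, secured with SSL.
            // Endpoint, database, and credentials are placeholders.
            Properties props = new Properties();
            props.setProperty("user", "awsuser");
            props.setProperty("password", "my-password");
            props.setProperty("ssl", "true");          // enable SSL on the JDBC link
            props.setProperty("sslmode", "verify-ca"); // validate the server certificate
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:redshift://my-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev",
                    props)) {
                // issue COPY / UNLOAD statements, read metadata, etc.
            }

            // Connections 2 and 3 - Flink to S3: credentials resolved through the default
            // provider chain (environment variables, system properties, profile file,
            // or an IAM instance role on EMR/EC2).
            AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                    .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
                    .withRegion("us-east-1")
                    .build();
            s3.listBuckets().forEach(bucket -> System.out.println(bucket.getName()));
        }
    }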

POC: https://github.com/Samrat002/flink-connector-aws/tree/redshift-connector

Redshift Connector (TABLE API Implementation)
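
As a rough illustration of what the Table API surface could look like, the sketch below registers a Redshift-backed table via Flink SQL DDL. The connector option names ('hostname', 'port', 'database', 'table-name', 'username', 'password') are assumptions for illustration only, not the finalized option set of flink-connector-redshift.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class RedshiftTableApiSketch {

        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());

            // Hypothetical DDL: option names are placeholders, not the finalized
            // flink-connector-redshift option set.
            tEnv.executeSql(
                    "CREATE TABLE orders_sink ("
                            + "  order_id BIGINT,"
                            + "  amount   DECIMAL(10, 2)"
                            + ") WITH ("
                            + "  'connector'  = 'redshift',"
                            + "  'hostname'   = 'my-cluster.abc123.us-east-1.redshift.amazonaws.com',"
                            + "  'port'       = '5439',"
                            + "  'database'   = 'dev',"
                            + "  'table-name' = 'orders',"
                            + "  'username'   = 'awsuser',"
                            + "  'password'   = 'my-password'"
                            + ")");

            // Append-only insert into the Redshift-backed table.
            tEnv.executeSql("INSERT INTO orders_sink VALUES (1, CAST(9.99 AS DECIMAL(10, 2)))");
        }
    }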

Limitations

  1. Parallelism in flink-connector-redshift is limited by the Redshift connection quotas configured for the job; see "Quotas and limits in Amazon Redshift" in the AWS documentation.
  2. Source and sink throughput and latency are bounded by the latency and data-transfer rate of Redshift's UNLOAD and COPY commands.
  3. The flink-connector-redshift sink will only support append-only tables (no changelog mode support); see the sketch below.
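
To make limitation 3 concrete, an append-only sink would declare an insert-only changelog mode through Flink's DynamicTableSink interface, so the planner rejects UPDATE/DELETE changelogs at validation time. The skeleton below is an illustrative sketch, not the actual connector code; the runtime provider is intentionally omitted.

    import org.apache.flink.table.connector.ChangelogMode;
    import org.apache.flink.table.connector.sink.DynamicTableSink;

    /**
     * Illustrative skeleton only: an append-only sink accepts INSERT rows and
     * nothing else, which is what "no changelog mode support" means in practice.
     */
    public class RedshiftAppendOnlySink implements DynamicTableSink {

        @Override
        public ChangelogMode getChangelogMode(ChangelogMode requestedMode) {
            // Append-only: consume INSERT records only, no retract/upsert changelog.
            return ChangelogMode.insertOnly();
        }

        @Override
        public SinkRuntimeProvider getSinkRuntimeProvider(Context context) {
            // The real connector would return a provider that stages rows to S3
            // and issues a Redshift COPY; omitted in this sketch.
            throw new UnsupportedOperationException("Sketch only");
        }

        @Override
        public DynamicTableSink copy() {
            return new RedshiftAppendOnlySink();
        }

        @Override
        public String asSummaryString() {
            return "Redshift append-only sink (sketch)";
        }
    }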

...