...

  • flink-autoscaler is the common autoscaler module
    • [Must-have] It includes the general autoscaler strategy
      • JobAutoScaler and the general JobAutoScalerImpl
    • [Must-have] It defines the general interfaces, such as AutoScalerHandler and AutoScalerStateStore
      • AutoScalerHandler: defines the event handler interface.
      • AutoScalerStateStore: defines the state store interface for accessing and persisting state.
    • [Must-have] It must not depend on any Kubernetes-related dependencies, including fabric8, flink-kubernetes-operator and flink-kubernetes.
    • [Must-have] It can depend on the apache/flink project, because it needs classes such as JobVertexID, Configuration and MetricGroup.
    • [Nice-to-have] Move flink-autoscaler to the flink repo
      • It's easy to implement flink-yarn-autoscaler after the decoupling, and adding flink-yarn-autoscaler to the flink-kubernetes-operator repo would be weird.
      • I prefer keeping this module in the flink-kubernetes-operator repo during this FLIP; we can move it in the last step of this FLIP.
  • flink-kubernetes-operator-autoscaler is the autoscaler module for Flink on Kubernetes
    • [Must-have] Implements the JobAutoScaler

...

    • the AutoScalerHandler and AutoScalerStateStore
      • AutoScalerHandler calls Kubernetes to do the actual scaling or to record events.
      • AutoScalerStateStore calls Kubernetes to query and update the ConfigMap.

Note: the independent `flink-kubernetes-operator-autoscaler` module isn't necessary; moving these classes into flink-kubernetes-operator would reduce complexity. We can discuss this on the mailing list.

Why isn't it necessary?

  • In the POC version, `flink-kubernetes-operator-autoscaler` only defines two classes: KubernetesAutoScalerHandler and KubernetesAutoScalerStateStore.
  • If `flink-kubernetes-operator-autoscaler` stays an independent module, it must depend on the `flink-kubernetes-operator` module.
  • `flink-kubernetes-operator` cannot depend on `flink-kubernetes-operator-autoscaler` in turn, so loading these classes is more difficult than simply removing the `flink-kubernetes-operator-autoscaler` module.

Public Interfaces

JobAutoScaler

Its interface is similar to the current JobAutoScaler, but the method parameters change: the Kubernetes-related classes are replaced by JobAutoScalerContext<KEY, INFO> and KEY jobKey.

It may be more reasonable to rename the generic KEY to JOB_KEY: the autoscaler considers two calls to refer to the same Flink job when their jobKey is equal.

The generic INFO will be introduced later.

Code Block
/** The general Autoscaler instance. */
public interface JobAutoScaler<KEY, INFO> {

    /** Called as part of the reconciliation loop. Returns true if this call led to scaling. */
    boolean scale(JobAutoScalerContext<KEY, INFO> context);

    /** Called when the custom resource is deleted. */
    void cleanup(KEY jobKey);
}
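
To show how a caller drives the decoupled interface, here is a minimal sketch of a generic reconcile loop. The class and method names (`ExampleReconcileLoop`, `reconcile`, `onJobDeleted`) are illustrative assumptions, not part of this FLIP.

Code Block
/** Illustrative sketch: a generic caller that drives the autoscaler. */
public class ExampleReconcileLoop<KEY, INFO> {

    private final JobAutoScaler<KEY, INFO> autoScaler;

    public ExampleReconcileLoop(JobAutoScaler<KEY, INFO> autoScaler) {
        this.autoScaler = autoScaler;
    }

    /** Called once per reconciliation; returns true if the job was rescaled. */
    public boolean reconcile(JobAutoScalerContext<KEY, INFO> context) {
        // The autoscaler only sees the generic context, never any Kubernetes classes.
        return autoScaler.scale(context);
    }

    /** Called when the job (e.g. the custom resource) is deleted. */
    public void onJobDeleted(KEY jobKey) {
        autoScaler.cleanup(jobKey);
    }
}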


JobAutoScalerContext

JobAutoScalerContext includes the Flink job information needed during scaling, such as the jobKey, jobID, stateStore and the `INFO extraJobInfo`.

For the current code, i.e. Kubernetes jobs, the jobKey is `io.javaoperatorsdk.operator.processing.event.ResourceID`. We can define the jobKey for YARN Flink jobs in the future.
The flink-autoscaler itself doesn't use the `INFO extraJobInfo`; it is only used by some implementations of AutoScalerHandler. The whole context is passed to those implementations when the autoscaler calls them back.


Code Block
/**
 * The job autoscaler context.
 *
 * @param <KEY> The job key, used to uniquely identify a Flink job.
 * @param <INFO> Extra job information that is only used by AutoScalerHandler implementations.
 */
@AllArgsConstructor
public class JobAutoScalerContext<KEY, INFO> {

    // The identifier of each flink job.
    @Getter private final KEY jobKey;

    @Getter private final JobID jobID;

    @Getter private final long jobVersion;

    // Whether the job is actually running; jobs in STARTING or CANCELING state don't count as running.
    @Getter private final boolean isRunning;

    @Getter private final Configuration conf;

    @Getter private final MetricGroup metricGroup;

    private final SupplierWithException<RestClusterClient<String>, Exception> restClientSupplier;

    @Getter private final Duration flinkClientTimeout;

    @Getter private final AutoScalerStateStore stateStore;

    /**
     * The flink-autoscaler itself doesn't use the extraJobInfo; it is only used by some
     * implementations of AutoScalerHandler. The whole context is passed to those
     * implementations when the autoscaler calls them back.
     */
    @Getter @Nullable private final INFO extraJobInfo;

    public RestClusterClient<String> getRestClusterClient() throws Exception {
        return restClientSupplier.get();
    }
}
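
As an illustration, here is a minimal sketch of how the operator side could assemble a context for a Kubernetes job. The factory class, the `createRestClusterClient` helper and the concrete values (jobVersion, timeout) are assumptions for the example, not part of this FLIP; the import of the operator's AbstractFlinkResource is omitted.

Code Block
import java.time.Duration;

import io.javaoperatorsdk.operator.processing.event.ResourceID;
import org.apache.flink.api.common.JobID;
import org.apache.flink.client.program.rest.RestClusterClient;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.MetricGroup;

/** Illustrative sketch: building a JobAutoScalerContext on the Kubernetes side. */
public class ExampleContextFactory {

    public static JobAutoScalerContext<ResourceID, AbstractFlinkResource<?, ?>> buildContext(
            AbstractFlinkResource<?, ?> resource,
            JobID jobID,
            Configuration conf,
            MetricGroup metricGroup,
            AutoScalerStateStore stateStore) {
        return new JobAutoScalerContext<>(
                ResourceID.fromResource(resource),   // jobKey: one per custom resource
                jobID,
                1L,                                  // jobVersion: placeholder value
                true,                                // isRunning: assume the job is RUNNING here
                conf,
                metricGroup,
                () -> createRestClusterClient(conf), // assumed helper returning the rest client
                Duration.ofSeconds(10),              // flinkClientTimeout: example value
                stateStore,
                resource);                           // extraJobInfo: the custom resource itself
    }

    private static RestClusterClient<String> createRestClusterClient(Configuration conf) {
        // Out of scope for this sketch; the operator already knows how to build this client.
        throw new UnsupportedOperationException("sketch only");
    }
}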


AutoScalerHandler

AutoScalerHandler is called by the autoscaler whenever an event needs to be handled, such as a scaling error, reporting the scaling result, or updating the Flink job based on the recommended parallelism.

For the current code, i.e. Kubernetes jobs, most handlers record events, and all handlers need the `AbstractFlinkResource<?, ?>`, which is stored in the `INFO extraJobInfo` of the JobAutoScalerContext.
The `KubernetesAutoScalerHandler` object is shared across all Flink jobs, so it doesn't carry per-job information. However, it needs the `AbstractFlinkResource<?, ?>` of every job, which is why the `INFO extraJobInfo` is added to the JobAutoScalerContext.


Code Block
/**
 * Handles all events during scaling.
 *
 * @param <KEY> The job key, used to uniquely identify a Flink job.
 * @param <INFO> Extra job information needed by specific handler implementations.
 */
public interface AutoScalerHandler<KEY, INFO> {

    void handlerScalingError(JobAutoScalerContext<KEY, INFO> context, String errorMessage);

    void handlerScalingReport(JobAutoScalerContext<KEY, INFO> context, String scalingReportMessage);

    void handlerIneffectiveScaling(JobAutoScalerContext<KEY, INFO> context, String message);

    void handlerRecommendedParallelism(
            JobAutoScalerContext<KEY, INFO> context,
            HashMap<String, String> recommendedParallelism);
}
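
To make the contract concrete, here is a minimal, logging-only sketch of a handler. It is not the KubernetesAutoScalerHandler from the POC; a real Kubernetes implementation would record events on the `AbstractFlinkResource<?, ?>` obtained via `context.getExtraJobInfo()` and apply the recommended parallelism to the custom resource.

Code Block
import java.util.HashMap;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Illustrative sketch: a handler that only logs the autoscaler events. */
public class LoggingAutoScalerHandler<KEY, INFO> implements AutoScalerHandler<KEY, INFO> {

    private static final Logger LOG = LoggerFactory.getLogger(LoggingAutoScalerHandler.class);

    @Override
    public void handlerScalingError(JobAutoScalerContext<KEY, INFO> context, String errorMessage) {
        LOG.error("Autoscaler error for job {}: {}", context.getJobKey(), errorMessage);
    }

    @Override
    public void handlerScalingReport(JobAutoScalerContext<KEY, INFO> context, String scalingReportMessage) {
        LOG.info("Scaling report for job {}: {}", context.getJobKey(), scalingReportMessage);
    }

    @Override
    public void handlerIneffectiveScaling(JobAutoScalerContext<KEY, INFO> context, String message) {
        LOG.warn("Ineffective scaling for job {}: {}", context.getJobKey(), message);
    }

    @Override
    public void handlerRecommendedParallelism(
            JobAutoScalerContext<KEY, INFO> context, HashMap<String, String> recommendedParallelism) {
        // A real implementation would apply the recommended vertex parallelism to the job,
        // e.g. by updating the custom resource stored in context.getExtraJobInfo().
        LOG.info("Recommended parallelism for job {}: {}", context.getJobKey(), recommendedParallelism);
    }
}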


AutoScalerStateStore

AutoScalerStateStore is responsible for persisting and accessing state during scaling.

For the current code, i.e. Kubernetes jobs, the state is persisted to a ConfigMap, so the KubernetesAutoScalerStateStore needs to fetch the ConfigMap before scaling and persist it after scaling.
For other jobs (YARN or standalone), I implemented a `HeapedAutoScalerStateStore`, which means the state is lost after an autoscaler restart (see the sketch after the interface below). Of course, we can implement a MySQLAutoScalerStateStore to persist the state in the future.


Code Block
public interface AutoScalerStateStore {

    Optional<String> get(String key);

    // Put the state into the state store; flush the store afterwards to prevent state loss.
    void put(String key, String value);

    void remove(String key);

    void flush();
}
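
The heap-based store mentioned above can be as simple as an in-memory map. The sketch below is an assumption of what `HeapedAutoScalerStateStore` could look like; the POC implementation may differ in its details.

Code Block
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative sketch: an in-memory state store whose contents are lost on restart. */
public class HeapedAutoScalerStateStore implements AutoScalerStateStore {

    private final Map<String, String> state = new ConcurrentHashMap<>();

    @Override
    public Optional<String> get(String key) {
        return Optional.ofNullable(state.get(key));
    }

    @Override
    public void put(String key, String value) {
        state.put(key, value);
    }

    @Override
    public void remove(String key) {
        state.remove(key);
    }

    @Override
    public void flush() {
        // No-op: there is no durable backend to flush to, which is exactly why
        // the state is lost when the autoscaler restarts.
    }
}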



Proposed Changes


Compatibility, Deprecation, and Migration Plan

...