Status

Discussion thread: https://lists.apache.org/thread/88kxk7lh8bq2s2c2qrf06f3pnf9fkxj2
Vote thread: https://lists.apache.org/list.html?dev@flink.apache.org
JIRA


Release: 1.16

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Statistics are one of the most important inputs to the optimizer. Accurate and complete statistics allow the optimizer to be more powerful. Currently, the statistics of Flink SQL come from the Catalog only, while many connectors have the ability to provide statistics, e.g. FileSystem. In production, we find that many tables in the Catalog do not have any statistics. As a result, the optimizer cannot generate better execution plans, especially for batch jobs.

There are two approaches to enhance statistics for the planner: one is to introduce the "ANALYZE TABLE" syntax, which writes the analysis result to the catalog; the other is to introduce a new connector interface that allows the connector itself to report statistics directly to the planner. The second approach is a supplement to the first one.

The main purpose of this FLIP is to discuss the second approach. Compared to the first approach, the second one gets statistics in real time, without the need to run an analyze job for each table. This helps improve the user experience. The disadvantage is that, in most cases, the statistics reported by the connector are not as complete as the results of an analyze job. We will introduce the "ANALYZE TABLE" syntax in a separate FLIP.

Public Interfaces


SupportsStatisticReport is an interface that allows a connector to report statistics to the planner. The statistics reported by the connector are used when the statistics from the Catalog are unknown.

/** Enables to report the estimated statistics provided by the {@link DynamicTableSource}. */
@PublicEvolving
public interface SupportsStatisticReport {

    /**
     * Returns the estimated statistics of this {@link DynamicTableSource}, else {@link
     * TableStats#UNKNOWN} if some situations are not supported or cannot be handled.
     */
    TableStats reportStatistics();
}
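
For illustration, a minimal sketch of a ScanTableSource implementing this interface could look as follows (the class name and the way the estimate is obtained are hypothetical, not part of this FLIP):

import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.connector.source.ScanTableSource;
import org.apache.flink.table.plan.stats.TableStats;

/** A hypothetical source that reports a row count obtained from external metadata. */
public class MyTableSource implements ScanTableSource, SupportsStatisticReport {

    // hypothetical: an estimate obtained from the external system, -1 if unavailable
    private final long estimatedRowCount;

    public MyTableSource(long estimatedRowCount) {
        this.estimatedRowCount = estimatedRowCount;
    }

    @Override
    public TableStats reportStatistics() {
        // fall back to UNKNOWN when no estimate can be provided
        return estimatedRowCount >= 0 ? new TableStats(estimatedRowCount) : TableStats.UNKNOWN;
    }

    @Override
    public ChangelogMode getChangelogMode() {
        return ChangelogMode.insertOnly();
    }

    @Override
    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext runtimeProviderContext) {
        throw new UnsupportedOperationException("omitted in this sketch");
    }

    @Override
    public DynamicTableSource copy() {
        return new MyTableSource(estimatedRowCount);
    }

    @Override
    public String asSummaryString() {
        return "MyTableSource";
    }
}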


We introduce a new config option, as follows, to control whether the reportStatistics method is called, because collecting statistics can be a heavy operation for some sources in some cases.

public static final ConfigOption<Boolean> TABLE_OPTIMIZER_SOURCE_REPORT_STATISTICS_ENABLED =
    key("table.optimizer.source.report-statistics-enabled")
        .booleanType()
        .defaultValue(true)
        .withDescription(
            "When it is true, the optimizer will collect and use the statistics from source connector"
            + " if the source extends from SupportsStatisticReport and the collected statistics is not UNKNOWN."
            + "Default value is true.");


The FileSystemTableSource and HiveTableSource are commonly used connectors, especially for batch jobs. They support multiple kinds of formats, such as CSV, Parquet, ORC, etc. [1] Different formats have different ways of getting statistics. Parquet [2] and ORC [3] both store metadata in the file footer, including row count, max/min values, null count, etc. For CSV, we can get the file size and an estimated row count (file_size / sampled_line_length).

Currently, the statistical dimensions used by the optimizer include row count, NDV (number of distinct values), null count, max length, min length, max value and min value. [4] The file count and file size (which can easily be obtained from the file system) are not used in the planner yet; we can improve this later.
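
As an illustration of these dimensions, a connector could assemble a TableStats instance as follows (all values are made up for the example):

import java.util.HashMap;
import java.util.Map;

import org.apache.flink.table.plan.stats.ColumnStats;
import org.apache.flink.table.plan.stats.TableStats;

public class TableStatsExample {
    public static TableStats exampleStats() {
        Map<String, ColumnStats> columnStats = new HashMap<>();
        // made-up statistics for a hypothetical "amount" column
        columnStats.put(
                "amount",
                new ColumnStats.Builder()
                        .setNdv(1000L)      // number of distinct values
                        .setNullCount(10L)  // number of nulls
                        .setMax(9999L)      // max value
                        .setMin(0L)         // min value
                        .build());
        // row count plus per-column statistics
        return new TableStats(1_000_000L, columnStats);
    }
}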

/**
 * Extension of input format which is able to report estimated statistics for FileSystem
 * connector.
 */
@PublicEvolving
public interface FileBasedStatisticsReportableInputFormat {

    /**
     * Returns the estimated statistics of this input format.
     *
     * @param files The files to be estimated.
     * @param producedDataType the final output type of the format.
     */
    TableStats reportStatistics(List<Path> files, DataType producedDataType);
}
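
As a sketch of the CSV estimation described above (the class name and the sampling strategy are illustrative, not part of this FLIP), a format could estimate the row count as the total file size divided by a sampled average line length:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.List;

import org.apache.flink.core.fs.Path;
import org.apache.flink.table.plan.stats.TableStats;
import org.apache.flink.table.types.DataType;

/** A hypothetical CSV format estimating row count from file size and sampled line length. */
public class CsvStatisticsReportingFormat implements FileBasedStatisticsReportableInputFormat {

    private static final int SAMPLE_LINES = 100; // illustrative sample size

    @Override
    public TableStats reportStatistics(List<Path> files, DataType producedDataType) {
        if (files.isEmpty()) {
            return TableStats.UNKNOWN;
        }
        try {
            // total size of all files, obtained from the file system
            long totalSize = 0L;
            for (Path file : files) {
                totalSize += file.getFileSystem().getFileStatus(file).getLen();
            }
            // sample a few lines of the first file to estimate the average line length
            Path first = files.get(0);
            long sampledBytes = 0L;
            int sampledLines = 0;
            try (BufferedReader reader =
                    new BufferedReader(
                            new InputStreamReader(
                                    first.getFileSystem().open(first), StandardCharsets.UTF_8))) {
                String line;
                while (sampledLines < SAMPLE_LINES && (line = reader.readLine()) != null) {
                    // character count as a rough proxy for bytes; +1 for the line delimiter
                    sampledBytes += line.length() + 1;
                    sampledLines++;
                }
            }
            if (sampledLines == 0) {
                return TableStats.UNKNOWN;
            }
            long avgLineLength = Math.max(1L, sampledBytes / sampledLines);
            return new TableStats(totalSize / avgLineLength);
        } catch (IOException e) {
            // fall back to unknown statistics rather than failing the optimization
            return TableStats.UNKNOWN;
        }
    }
}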


It's a heavy operation if there are thousands of files to list or footers to read, so we also introduce a config option, as follows, to allow users to choose which kind of statistics is needed. Once we introduce file size statistics, FileStatisticsType.SIZE can be added.

public static final ConfigOption<FileStatisticsType> SOURCE_STATISTICS_TYPE =
    key("source.report-statistics")
        .enumType(FileStatisticsType.class)
        .defaultValue(FileStatisticsType.ALL)
        .withDescription("The file statistics type which the source could provide. "
            + "The statistics collecting is a heavy operation in some cases,"
            + "this config allows users to choose the statistics type according to different situations.");

public enum FileStatisticsType implements DescribedEnum {
    NONE("NONE", text("Do not collect any file statistics.")),
    ALL("ALL", text("Collect all file statistics that the format can provide."));

    private final String value;
    private final InlineElement description;

    FileStatisticsType(String value, InlineElement description) {
        this.value = value;
        this.description = description;
    }

    @Override
    public String toString() {
        return value;
    }

    @Override
    public InlineElement getDescription() {
        return description;
    }
}
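
A hedged sketch of how the FileSystem source could consult this option before doing any expensive work (tableOptions, format, listAllFiles and producedDataType are illustrative placeholders, not actual fields of FileSystemTableSource):

// inside a hypothetical FileSystemTableSource#reportStatistics implementation
public TableStats reportStatistics() {
    FileStatisticsType statisticsType = tableOptions.get(SOURCE_STATISTICS_TYPE);
    if (statisticsType == FileStatisticsType.NONE) {
        // the user opted out of statistics collection entirely
        return TableStats.UNKNOWN;
    }
    if (format instanceof FileBasedStatisticsReportableInputFormat) {
        // delegate to the format, which knows how to read footers or sample files
        return ((FileBasedStatisticsReportableInputFormat) format)
                .reportStatistics(listAllFiles(), producedDataType);
    }
    return TableStats.UNKNOWN;
}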

Proposed Changes

  1. How does the planner use the statistics reported by the connector?

The statistics for a table need to be re-computed when:

    1. the statistics from the catalog are unknown
    2. the partitions are pruned
    3. the filter predicates are pushed down

In order to avoid multiple recalculations for each of the above operations, we introduce a new optimization program after the predicate pushdown program to collect the statistics in one pass.

The pseudocode is as follows:

public class FlinkRecomputeStatisticsProgram implements FlinkOptimizeProgram<BatchOptimizeContext> {

    @Override
    public RelNode optimize(RelNode root, BatchOptimizeContext context) {
        // create a visitor to find all LogicalTableScan nodes
        // call recomputeStatistics method for each LogicalTableScan
    }

    private LogicalTableScan recomputeStatistics(LogicalTableScan scan) {
        final RelOptTable scanTable = scan.getTable();
        if (!(scanTable instanceof TableSourceTable)) {
            return scan;
        }
        TableSourceTable table = (TableSourceTable) scanTable;
        boolean reportStatEnabled =
                ShortcutUtils.unwrapContext(scan)
                                .getTableConfig()
                                .get(TABLE_OPTIMIZER_SOURCE_REPORT_STATISTICS_ENABLED)
                        && table.tableSource() instanceof SupportsStatisticReport;

        SourceAbilitySpec[] specs = table.abilitySpecs();
        PartitionPushDownSpec partitionPushDownSpec = getSpec(specs, PartitionPushDownSpec.class);
        FilterPushDownSpec filterPushDownSpec = getSpec(specs, FilterPushDownSpec.class);
        TableStats newTableStat =
                recomputeStatistics(
                        table, partitionPushDownSpec, filterPushDownSpec, reportStatEnabled);
        FlinkStatistic newStatistic =
                FlinkStatistic.builder()
                        .statistic(table.getStatistic())
                        .tableStats(newTableStat)
                        .build();
        TableSourceTable newTable = table.copy(newStatistic);
        return new LogicalTableScan(
                scan.getCluster(), scan.getTraitSet(), scan.getHints(), newTable);
    }
    
    private TableStats recomputeStatistics(
            TableSourceTable table,
            PartitionPushDownSpec partitionPushDownSpec,
            FilterPushDownSpec filterPushDownSpec,
            boolean reportStatEnabled) {
        DynamicTableSource tableSource = table.tableSource();
        if (filterPushDownSpec != null) {
            // filter push down:
            // the catalog does not support getting statistics with filters,
            // so only call the reportStatistics method if needed
            return reportStatEnabled
                    ? ((SupportsStatisticReport) tableSource).reportStatistics()
                    : null;
        } else {
            if (partitionPushDownSpec != null) {
                // collect the partitions statistics from catalog
                TableStats newTableStat = getPartitionsTableStats(table, partitionPushDownSpec);
                if (reportStatEnabled
                        && (newTableStat == null || newTableStat == TableStats.UNKNOWN)) {
                    return ((SupportsStatisticReport) tableSource).reportStatistics();
                } else {
                    return newTableStat;
                }
            } else {
                if (reportStatEnabled
                        && (table.getStatistic().getTableStats() == TableStats.UNKNOWN)) {
                    return ((SupportsStatisticReport) tableSource).reportStatistics();
                } else {
                    return null;
                }
            }
        }
    }
    
    private TableStats getPartitionsTableStats(
            TableSourceTable table, PartitionPushDownSpec partitionPushDownSpec) {
        if (partitionPushDownSpec == null) {
            return null;
        }
        // collect and merge the partitions statistics from catalog
    }
}



  2. Which connectors and formats will be supported by default?

The FileSystem connector, Hive connector, CSV format, Parquet format and ORC format will be supported by default.

More connectors and formats can be supported as needed in the future.

Compatibility, Deprecation, and Migration Plan

This is a new feature, so there is no compatibility, deprecation, or migration plan.

Test Plan

  1. Unit tests will be added for each format to verify the statistics estimation logic.
  2. Plan tests will be added to verify how the planner uses the statistics reported by connectors.

Rejected Alternatives

None


POC: https://github.com/godfreyhe/flink/tree/FLIP-231


[1] https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/formats/overview

[2] https://parquet.apache.org/docs/file-format/metadata

[3] https://orc.apache.org/specification/ORCv1

[4] https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/plan/stats/TableStats.java