

Status

Current state: "Under Discussion"

Discussion thread: https://lists.apache.org/thread/88kxk7lh8bq2s2c2qrf06f3pnf9fkxj2

JIRA: here (<- link to https://issues.apache.org/jira/browse/FLINK-XXXX)

Released: <Flink Version>

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Statistics are one of the most important inputs to the optimizer. Accurate and complete statistics allow the optimizer to generate better plans. Currently, the statistics of Flink SQL come from the Catalog only, while many connectors are able to provide statistics themselves, e.g. the FileSystem connector. In production, we find that many tables in the Catalog do not have any statistics. As a result, the optimizer cannot generate good execution plans, especially for batch jobs.

There are two approaches to enhance the statistics available to the planner: one is to introduce an "ANALYZE TABLE" syntax which writes the analyzed result to the catalog; the other is to introduce a new connector interface which allows the connector itself to report statistics directly to the planner. The second approach is a supplement to the catalog statistics.

The main purpose of this FLIP is to discuss the second approach. Compared to the first approach, the second one obtains statistics in real time and does not require running an analysis job for each table, which improves the user experience. The disadvantage is that, in most cases, the statistics reported by a connector are not as complete as the results of an analysis job. The "ANALYZE TABLE" syntax will be introduced in a separate FLIP.

Public Interfaces

Currently, table statistics and column statistics are described via two separate classes, CatalogTableStatistics and CatalogColumnStatistics. We introduce CatalogStatistics to combine them, describing all statistics for a table or partition.

/**
 * All statistics for a table or partition, including {@link CatalogTableStatistics} and {@link
 * CatalogColumnStatistics}.
 */
@PublicEvolving
public class CatalogStatistics {
    public static final CatalogStatistics UNKNOWN =
            new CatalogStatistics(CatalogTableStatistics.UNKNOWN, CatalogColumnStatistics.UNKNOWN);

    private final CatalogTableStatistics tableStatistics;
    private final CatalogColumnStatistics columnStatistics;

    public CatalogStatistics(
            CatalogTableStatistics tableStatistics, CatalogColumnStatistics columnStatistics) {
        this.tableStatistics = tableStatistics;
        this.columnStatistics = columnStatistics;
    }

    public CatalogTableStatistics getTableStatistics() {
        return tableStatistics;
    }

    public CatalogColumnStatistics getColumnStatistics() {
        return columnStatistics;
    }
}


SupportStatisticReport is an interface that allows a connector to report statistics to the planner. The statistics reported by the connector have a higher priority and can override the statistics from the catalog.

/** Enables a {@link DynamicTableSource} to report its estimated statistics to the planner. */
@PublicEvolving
public interface SupportStatisticReport {

    /**
     * Returns the estimated statistics of this {@link DynamicTableSource}, else {@link
     * CatalogStatistics#UNKNOWN} if some situations are not supported or cannot be handled.
     */
    CatalogStatistics reportStatistics();
}
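To illustrate the contract, here is a minimal self-contained sketch. `Stats` and `MySource` are hypothetical stand-ins for `CatalogStatistics` and a concrete `DynamicTableSource`, kept local so the example runs on its own; a real source would return the actual Flink classes.

```java
// Simplified stand-in for CatalogStatistics: a negative row count means UNKNOWN.
final class Stats {
    static final Stats UNKNOWN = new Stats(-1);
    final long rowCount;
    Stats(long rowCount) { this.rowCount = rowCount; }
}

// Simplified stand-in for the proposed SupportStatisticReport interface.
interface SupportStatisticReport {
    Stats reportStatistics();
}

// A hypothetical source that can cheaply determine its row count,
// e.g. from file metadata.
final class MySource implements SupportStatisticReport {
    private final long knownRowCount;

    MySource(long knownRowCount) { this.knownRowCount = knownRowCount; }

    @Override
    public Stats reportStatistics() {
        // Fall back to UNKNOWN when the source cannot estimate anything.
        return knownRowCount >= 0 ? new Stats(knownRowCount) : Stats.UNKNOWN;
    }
}

public class SupportStatisticReportDemo {
    public static void main(String[] args) {
        SupportStatisticReport source = new MySource(1_000L);
        System.out.println(source.reportStatistics().rowCount); // prints 1000
    }
}
```

Returning the UNKNOWN sentinel (rather than null) is what lets the planner safely ignore sources that cannot estimate anything, as described above.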

We introduce a new config option to control whether the reportStatistics method is called, because collecting statistics can be a heavy operation for some sources in some cases.

public static final ConfigOption<Boolean> TABLE_OPTIMIZER_SOURCE_CONNECT_STATISTICS_ENABLED =
    key("table.optimizer.source.connect-statistics-enabled")
        .booleanType()
        .defaultValue(true)
        .withDescription(
            "When it is true, the optimizer will collect and use the statistics from source connectors"
            + " if the source implements SupportStatisticReport and the collected statistics are not UNKNOWN."
            + " Default value is true.");
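If collecting statistics from a particular job's sources turns out to be too expensive, the option can be turned off per job. A sketch, assuming a standard batch TableEnvironment setup (this fragment needs Flink on the classpath and is not runnable on its own):

```java
TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inBatchMode());
// Disable connector-reported statistics, e.g. when listing files is too costly.
tEnv.getConfig().getConfiguration()
        .setBoolean("table.optimizer.source.connect-statistics-enabled", false);
```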



The FileSystem connector is a commonly used connector, especially for batch jobs. FileSystem supports multiple kinds of formats, such as csv, parquet, orc, etc. [1] Different formats have different ways of getting statistics. Parquet [2] and orc [3] both store metadata in the file footer, which includes row count, max/min values, null count, etc. For csv, we can get the file size and an estimated row count (file_size / sampled_line_length).
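The csv estimate mentioned above is plain arithmetic: sample a few lines to obtain an average line length, then divide the total file size by it. The following self-contained sketch illustrates the idea; the method name and sampling strategy are illustrative, not part of the proposal.

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

public class CsvRowCountEstimator {

    /**
     * Estimates the row count of a csv file from its total size and a small
     * sample of lines: estimatedRowCount = fileSize / averageSampledLineLength.
     */
    static long estimateRowCount(long fileSizeInBytes, List<String> sampledLines) {
        if (sampledLines.isEmpty()) {
            return -1; // unknown: nothing was sampled
        }
        long sampledBytes = 0;
        for (String line : sampledLines) {
            // +1 accounts for the line delimiter '\n'.
            sampledBytes += line.getBytes(StandardCharsets.UTF_8).length + 1;
        }
        double avgLineLength = (double) sampledBytes / sampledLines.size();
        return Math.round(fileSizeInBytes / avgLineLength);
    }

    public static void main(String[] args) {
        // A 1 MiB file whose sampled lines average 11 bytes (10 chars + newline).
        long estimate = estimateRowCount(1_048_576, List.of("aaaaaaaaaa", "bbbbbbbbbb"));
        System.out.println(estimate); // prints 95325
    }
}
```

Such an estimate is cheap (one file-size lookup plus a small read) but inherently approximate, which is why csv can only provide row count while parquet/orc can provide richer footer statistics.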

Currently, the statistical dimensions used by the optimizer include row count, NDV (number of distinct values), null count, max length, min length, max value and min value. [4] The file count and file size (which can easily be obtained from the file system) are not used in the planner now; we can improve this later.

We introduce the FileBasedStatisticsReportableDecodingFormat interface to get the estimated statistics for a format in the FileSystem connector.

/**
 * Extension of {@link DecodingFormat} which is able to report estimated statistics for FileSystem
 * connector.
 */
@PublicEvolving
public interface FileBasedStatisticsReportableDecodingFormat<I> extends DecodingFormat<I> {

    /**
     * Returns the estimated statistics of this {@link DecodingFormat}.
     *
     * @param files The files to be estimated.
     * @param producedDataType the final output type of the format.
     */
    CatalogStatistics reportStatistics(List<Path> files, DataType producedDataType);
}
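For formats such as parquet and orc, reportStatistics would typically read each file's footer and merge the per-file metadata. The merge step itself is simple arithmetic; the following self-contained sketch uses a hypothetical FileFooterStats class (rather than the real parquet/orc footer readers) to show how row counts, null counts and min/max values combine across files for a single numeric column:

```java
import java.util.List;

public class FooterStatsMerger {

    /** Hypothetical per-file footer metadata for one numeric column. */
    static final class FileFooterStats {
        final long rowCount, nullCount;
        final long min, max;
        FileFooterStats(long rowCount, long nullCount, long min, long max) {
            this.rowCount = rowCount; this.nullCount = nullCount;
            this.min = min; this.max = max;
        }
    }

    /** Merges per-file footer stats: counts add up, min/max take the extremes. */
    static FileFooterStats merge(List<FileFooterStats> perFile) {
        long rows = 0, nulls = 0;
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (FileFooterStats s : perFile) {
            rows += s.rowCount;
            nulls += s.nullCount;
            min = Math.min(min, s.min);
            max = Math.max(max, s.max);
        }
        return new FileFooterStats(rows, nulls, min, max);
    }

    public static void main(String[] args) {
        FileFooterStats merged = merge(List.of(
                new FileFooterStats(100, 3, -5, 40),
                new FileFooterStats(200, 0, 7, 99)));
        System.out.println(merged.rowCount + " " + merged.min + " " + merged.max);
        // prints 300 -5 99
    }
}
```

Note that NDV, unlike row count, cannot be merged by simple addition across files, which is one reason footer-based statistics are less complete than the result of a full analysis job.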

Listing thousands of files or reading their footers is a heavy operation, so we also introduce a config option to allow users to choose which kind of statistics is needed.

public static final ConfigOption<FileStatisticsType> SOURCE_STATISTICS_TYPE =
    key("source.statistics-type")
        .enumType(FileStatisticsType.class)
        .defaultValue(FileStatisticsType.ALL)
        .withDescription("The file statistics type which the source could provide. "
            + "Statistics collecting can be a heavy operation in some cases, so "
            + "this config allows users to choose the statistics type according to different situations.");

public enum FileStatisticsType implements DescribedEnum {
    NONE("NONE", text("Do not collect any file statistics.")),
    ALL("ALL", text("Collect all file statistics that the format can provide."));

    private final String value;
    private final InlineElement description;

    FileStatisticsType(String value, InlineElement description) {
        this.value = value;
        this.description = description;
    }

    @Override
    public String toString() {
        return value;
    }

    @Override
    public InlineElement getDescription() {
        return description;
    }
}
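Since source.statistics-type is a per-table connector option, it would be set in the table definition. A sketch using the TableDescriptor API (path, table name and schema below are illustrative; the fragment needs Flink on the classpath and is not runnable on its own):

```java
tEnv.createTemporaryTable(
        "my_csv_table",
        TableDescriptor.forConnector("filesystem")
                .schema(Schema.newBuilder().column("f0", DataTypes.STRING()).build())
                .option("path", "/tmp/data")
                .format("csv")
                // Skip statistics collection for this table if it is too costly.
                .option("source.statistics-type", "NONE")
                .build());
```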

Proposed Changes

  1. How does the planner use the statistics reported by the connector?

    The statistics for a table need to be re-computed when:

      1. the statistics from the catalog are unknown
      2. the partitions are pruned
      3. the filter predicates are pushed down

    In order to avoid multiple recalculations for each of the above operations, we introduce a new optimization program after the predicate pushdown program to collect the statistics in one pass.

    The pseudocode is as follows:

    public class FlinkCollectStatisticsProgram implements FlinkOptimizeProgram<BatchOptimizeContext> {
    
        @Override
        public RelNode optimize(RelNode root, BatchOptimizeContext context) {
            // create a visitor to find all LogicalTableScan nodes
            // call collectStatistics method for each LogicalTableScan
        }
    
        private LogicalTableScan collectStatistics(LogicalTableScan scan) {
            final RelOptTable scanTable = scan.getTable();
            if (!(scanTable instanceof TableSourceTable)) {
                return scan;
            }
            boolean collectStatEnabled =
                    ShortcutUtils.unwrapContext(scan)
                            .getTableConfig()
                            .get(TABLE_OPTIMIZER_SOURCE_CONNECT_STATISTICS_ENABLED);
    
            TableSourceTable table = (TableSourceTable) scanTable;
            DynamicTableSource tableSource = table.tableSource();
            SourceAbilitySpec[] specs = table.abilitySpecs();
            PartitionPushDownSpec partitionPushDownSpec = // find the PartitionPushDownSpec
            FilterPushDownSpec filterPushDownSpec = // find the FilterPushDownSpec
            TableStats newTableStat = null;
            
            if (partitionPushDownSpec != null && filterPushDownSpec == null) {
                // do partition pruning while no filter push down 
                if (table.contextResolvedTable().isPermanent()) {
                    // collect the statistics from catalog
                }
    
                if (collectStatEnabled
                        && (newTableStat == null || newTableStat == TableStats.UNKNOWN)
                        && tableSource instanceof SupportStatisticReport) {
                    CatalogStatistics statistics =
                            ((SupportStatisticReport) tableSource).reportStatistics();
                    newTableStat = CatalogTableStatisticsConverter.convertToTableStats(statistics);
                }
           
            } else if (filterPushDownSpec != null) {
                // only filter push down
                // the catalog do not support get statistics with filters, 
                // so only call reportStatistics method if needed
                if (collectStatEnabled && tableSource instanceof SupportStatisticReport) {
                    CatalogStatistics statistics =
                            ((SupportStatisticReport) tableSource).reportStatistics();
                    newTableStat = CatalogTableStatisticsConverter.convertToTableStats(statistics);
                }
            } else if (collectStatEnabled
                    && (table.getStatistic().getTableStats() == TableStats.UNKNOWN)
                    && tableSource instanceof SupportStatisticReport) {
                // no partition pruning and no filter push down
                // call reportStatistics method if needed
                CatalogStatistics statistics =
                        ((SupportStatisticReport) tableSource).reportStatistics();
                newTableStat = CatalogTableStatisticsConverter.convertToTableStats(statistics);
            }
            FlinkStatistic newStatistic =
                    FlinkStatistic.builder()
                            .statistic(table.getStatistic())
                            .tableStats(newTableStat)
                            .build();
            return new LogicalTableScan(
                    scan.getCluster(), scan.getTraitSet(), scan.getHints(), table.copy(newStatistic));
        }
    }


  2. Which connectors and formats will be supported by default?

The FileSystem connector, and the Csv, Parquet and Orc formats, will be supported by default.

More connectors and formats can be supported as needed in the future.

Compatibility, Deprecation, and Migration Plan

This is a new feature, so there is no compatibility, deprecation, or migration plan.

Test Plan

  1. UT tests will be added for each format to verify the estimation statistics logic.
  2. Plan tests will be added to verify the logic of how planner uses the connector statistics.

Rejected Alternatives

None


POC: https://github.com/godfreyhe/flink/tree/FLIP-231


[1] https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/formats/overview

[2] https://parquet.apache.org/docs/file-format/metadata

[3] https://orc.apache.org/specification/ORCv1

[4] https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/plan/stats/TableStats.java

