Apache Kylin : Analytical Data Warehouse for Big Data
Property | Default | Description
---|---|---
kylin.engine.spark.build-class-name | org.apache.kylin.engine.spark.job.CubeBuildJob | For developers only. The class name used in spark-submit.
kylin.engine.spark.cluster-info-fetcher-class-name | org.apache.kylin.cluster.YarnInfoFetcher | For developers only. Fetches YARN information for the Spark job.
kylin.engine.spark-conf.XXX | | Spark configuration passed through to spark-submit, e.g. kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
kylin.storage.provider | org.apache.kylin.common.storage.DefaultStorageProvider | The content summary objects returned by different cloud vendors are not the same, so a targeted implementation must be provided. See org.apache.kylin.common.storage.IStorageProvider to learn more.
kylin.engine.spark.merge-class-name | org.apache.kylin.engine.spark.job.CubeMergeJob | For developers only. The class name used in spark-submit.
kylin.engine.spark.task-impact-instance-enabled | true |
kylin.engine.spark.task-core-factor | 3 |
kylin.engine.driver-memory-base | 1024 | Auto-adjusts spark.driver.memory for the build engine if kylin.engine.spark-conf.spark.driver.memory is not set.
kylin.engine.driver-memory-strategy | {"2", "20", "100"} |
kylin.engine.driver-memory-maximum | 4096 |
kylin.engine.persist-flattable-threshold | 1 | If the number of cuboids to be built from the flat table exceeds this threshold, the flat table is persisted to $HDFS_WORKING_DIR/job_tmp/flat_table to save memory.
kylin.snapshot.parallel-build-timeout-seconds | 3600 | Timeout for parallel snapshot builds, which improve snapshot build speed.
kylin.snapshot.parallel-build-enabled | true |
kylin.spark-conf.auto.prior | true | Enables adaptive adjustment of Spark parameters.
kylin.engine.submit-hadoop-conf-dir | /etc/hadoop/conf | Sets HADOOP_CONF_DIR for spark-submit.
kylin.storage.columnar.shard-size-mb | 128 | The maximum size of a pre-calculated cuboid Parquet file.
kylin.storage.columnar.shard-rowcount | 2500000 | The maximum number of rows in a pre-calculated cuboid Parquet file.
kylin.storage.columnar.shard-countdistinct-rowcount | 1000000 | The maximum number of rows in a pre-calculated cuboid Parquet file when the cuboid has a bitmap measure (bitmap measures are large).
kylin.query.spark-engine.join-memory-fraction | 0.3 | Limits the memory used by Sparder broadcast joins (broadcast joins can cause instability).
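As a usage example, these properties are set in kylin.properties. The fragment below is an illustrative sketch (the values are examples drawn from the table above, not tuning recommendations):

```properties
# Illustrative kylin.properties fragment
kylin.engine.submit-hadoop-conf-dir=/etc/hadoop/conf
kylin.snapshot.parallel-build-enabled=true
kylin.storage.columnar.shard-size-mb=128

# Any property with the kylin.engine.spark-conf. prefix is passed
# through to spark-submit as a Spark configuration entry.
kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
```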
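The three driver-memory properties (kylin.engine.driver-memory-base, kylin.engine.driver-memory-strategy, kylin.engine.driver-memory-maximum) work together: when spark.driver.memory is not set explicitly, the build engine derives it from the base value, the strategy thresholds, and the maximum cap. The sketch below illustrates one plausible tiered heuristic; the exact tier logic is an assumption for illustration, not Kylin's verified implementation.

```python
# Hedged sketch of a tiered driver-memory heuristic using the three
# kylin.engine.driver-memory-* settings. The tier rule (one "base" step per
# threshold the cuboid count exceeds) is an illustrative assumption; consult
# the Kylin build-engine source for the actual algorithm.

def driver_memory_mb(cuboid_count,
                     base=1024,                 # kylin.engine.driver-memory-base
                     maximum=4096,              # kylin.engine.driver-memory-maximum
                     thresholds=(2, 20, 100)):  # kylin.engine.driver-memory-strategy
    """Return a driver memory size in MB: start at `base`, add one `base`
    step for each threshold the cuboid count exceeds, cap at `maximum`."""
    tier = 1 + sum(1 for t in thresholds if cuboid_count > t)
    return min(base * tier, maximum)

print(driver_memory_mb(1))     # small build: base memory
print(driver_memory_mb(50))    # exceeds 2 and 20: three base steps
print(driver_memory_mb(1000))  # exceeds all thresholds, capped at maximum
```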