

Apache CarbonData community is pleased to announce the release of Version 1.5.1 under The Apache Software Foundation (ASF).

CarbonData is a high-performance data solution that supports various data analytic scenarios, including BI analysis, ad-hoc SQL query, fast filter lookups on detail records, streaming analytics, and so on. CarbonData has been deployed in many enterprise production environments; in one of the largest deployments it supports queries on a single table with 3 PB of data (more than 5 trillion records) with a response time of less than 3 seconds!

We encourage you to use the release (https://dist.apache.org/repos/dist/release/carbondata/1.5.1/) and provide feedback through the CarbonData user mailing lists!

This release note provides information on the new features, improvements, and bug fixes of this release.

What’s New in CarbonData Version 1.5.1?

CarbonData 1.5.1 moves closer to unified analytics: we want CarbonData files to be readable from more engines and libraries to support various use cases. In this regard, support has been added to read CarbonData files from C++ libraries. Additionally, CarbonData files can be read using the Java SDK, the Spark FileFormat interface, Spark, and Presto.

CarbonData added multiple optimisations to reduce the store size so that queries can take advantage of reduced IO. Several enhancements have also been made to CarbonData's streaming support.

In this version of CarbonData, more than 150 JIRA tickets related to new features, improvements, and bugs have been resolved. The following is a summary.

CarbonData Core

Optimised carbon scan performance

CarbonData scan performance is improved by avoiding multiple data copies in the vector flow. This is achieved by short-circuiting the read and vector filling: data is filled directly into the vector after being read from the file, without any intermediate copies.

Row-level filter pruning is handled in the execution engine after CarbonData has pruned blocklets and pages using the filter. This behaviour is controlled by the property carbon.push.rowfilters.for.vector, which defaults to false; a sketch of setting it follows.
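A minimal sketch of toggling this property through the CarbonProperties API (the property name is from this release; the surrounding setup is assumed):

    import org.apache.carbondata.core.util.CarbonProperties;

    // Push row-level filters down to CarbonData's scan instead of leaving
    // them to the execution engine; the 1.5.1 default is "false".
    CarbonProperties.getInstance()
        .addProperty("carbon.push.rowfilters.for.vector", "true");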

Support custom column compressor

CarbonData supports custom column compressors so that users can plug in their own compressor implementation. To use a custom compressor, specify its fully qualified class name while creating the table or set it in the carbon properties, as sketched below.
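A minimal sketch of registering a custom compressor globally, assuming the property name carbon.column.compressor and a hypothetical implementation class:

    import org.apache.carbondata.core.util.CarbonProperties;

    // "com.example.MyCompressor" is hypothetical; a custom compressor must
    // implement org.apache.carbondata.core.datastore.compression.Compressor.
    CarbonProperties.getInstance()
        .addProperty("carbon.column.compressor", "com.example.MyCompressor");

Per the DDL documentation, the same fully qualified class name can also be supplied for a single table through the 'carbon.column.compressor' entry in TBLPROPERTIES.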

Optimised compaction performance

Compaction performance is optimised by prefetching data while reading CarbonData files.


CarbonData SDK

SDK Supports C++ Interfaces for writing CarbonData files

To enable integration with non-Java execution engines, CarbonData provides a C++ writer for CarbonData files. This writer can be integrated with any execution engine to write data to CarbonData files without depending on Spark or Hadoop.

Multi-Thread Read API in SDK 

To improve read performance when using the SDK, CarbonData supports multi-thread read APIs, which enable applications to read data from multiple CarbonData files in parallel and significantly improve SDK read performance; see the sketch below.
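A minimal sketch of the parallel read flow, assuming the CarbonReader.split(n) API described in the SDK guide; the table path and projection columns are placeholders:

    import java.util.List;
    import org.apache.carbondata.sdk.file.CarbonReader;

    public class ParallelReadExample {
      public static void main(String[] args) throws Exception {
        CarbonReader reader = CarbonReader.builder("/path/to/table", "_temp")
            .projection(new String[]{"name", "age"})
            .build();

        // Break the reader into up to 4 independent readers, each iterating
        // over a disjoint set of CarbonData files, and drain each one on
        // its own thread.
        List<CarbonReader> readers = reader.split(4);
        for (CarbonReader r : readers) {
          new Thread(() -> {
            try {
              while (r.hasNext()) {
                Object[] row = (Object[]) r.readNextRow();
                // process the row ...
              }
              r.close();
            } catch (Exception e) {
              e.printStackTrace();
            }
          }).start();
        }
      }
    }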

Other Improvements

  • Enhanced the CLI with more options.
  • Supported a fallback mechanism that switches to on-heap memory when off-heap memory is insufficient.
  • Enabled local dictionary by default.
  • Disabled the inverted index by default (both defaults can be overridden per table; see the sketch after this list).
  • Supported a separate audit log.
  • Supported batch row reads in the CSDK to improve performance.
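A hedged sketch of overriding the new local-dictionary and inverted-index defaults for a single table from Spark SQL; it assumes a CarbonData-enabled SparkSession named spark, with TBLPROPERTIES names taken from the DDL documentation:

    // Assumes `spark` is a CarbonData-enabled SparkSession.
    spark.sql(
        "CREATE TABLE t1 (name STRING, city STRING) STORED BY 'carbondata' "
            + "TBLPROPERTIES ("
            + "'LOCAL_DICTIONARY_ENABLE'='false', " // turn the new default off
            + "'SORT_COLUMNS'='city', "
            + "'INVERTED_INDEX'='city')");          // turn inverted index back on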

New Configuration Parameters

Configuration name                  | Default Value | Range
carbon.push.rowfilters.for.vector   | false         | NA


Please find the detailed JIRA list: https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12320220&version=12341006

Bug

  • [CARBONDATA-1787] - Carbon 1.3.0- Global Sort: Global_Sort_Partitions parameter doesn't work, if specified in the Tblproperties, while creating the table.
  • [CARBONDATA-2418] - Presto can't query Carbon table when carbonstore is created at s3
  • [CARBONDATA-2478] - Add datamap-developer-guide.md file in readme
  • [CARBONDATA-2515] - Filter OR Expression not working properly in Presto integration
  • [CARBONDATA-2516] - Filter Greater-than for timestamp datatype not generating Expression in PrestoFilterUtil
  • [CARBONDATA-2528] - MV Datamap - When the MV is created with the order by, then when we execute the corresponding query defined in MV with order by, then the data is not accessed from the MV. 
  • [CARBONDATA-2530] - [MV] Wrong data displayed when parent table data are loaded 
  • [CARBONDATA-2531] - [MV] MV not hit when alias is in use
  • [CARBONDATA-2534] - MV Dataset - MV creation is not working with the substring() 
  • [CARBONDATA-2539] - MV Dataset - Subqueries is not accessing the data from the MV datamap.
  • [CARBONDATA-2540] - MV Dataset - Unionall queries are not fetching data from MV dataset.
  • [CARBONDATA-2542] - MV creation is failed for other than default database
  • [CARBONDATA-2550] - [MV] Limit is ignored when data fetched from MV, Query rewrite is Wrong
  • [CARBONDATA-2560] - [MV] Exception in console during MV creation but MV registered successfully
  • [CARBONDATA-2568] - [MV] MV datamap is not hit when ,column is in group by but not in projection 
  • [CARBONDATA-2576] - MV Datamap - MV is not working fine if there is more than 3 aggregate function in the same datamap.
  • [CARBONDATA-2610] - DataMap creation fails on null values 
  • [CARBONDATA-2614] - There are some exception when using FG in search mode and the prune result is none
  • [CARBONDATA-2616] - Incorrect explain and query result while using bloomfilter datamap
  • [CARBONDATA-2629] - SDK carbon reader don't support filter in HDFS and S3
  • [CARBONDATA-2644] - Validation not present for carbon.load.sortMemory.spill.percentage parameter 
  • [CARBONDATA-2658] - Fix bug in spilling in-memory pages
  • [CARBONDATA-2674] - Streaming with merge index enabled does not consider the merge index file while pruning. 
  • [CARBONDATA-2703] - Fix bugs in tests
  • [CARBONDATA-2711] - carbonFileList is not initalized when updatetablelist call
  • [CARBONDATA-2715] - Failed to run tests for Search Mode With Lucene in Windows env
  • [CARBONDATA-2729] - Schema Compatibility problem between version 1.3.0 and 1.4.0
  • [CARBONDATA-2758] - selection on local dictionary fails when column having all null values more than default batch size.
  • [CARBONDATA-2769] - Fix bug when getting shard name from data before version 1.4
  • [CARBONDATA-2802] - Creation of Bloomfilter Datamap is failing after UID,compaction,pre-aggregate datamap creation
  • [CARBONDATA-2823] - Alter table set local dictionary include after bloom creation fails throwing incorrect error
  • [CARBONDATA-2854] - Release table status file lock before delete physical files when execute 'clean files' command
  • [CARBONDATA-2862] - Fix exception message for datamap rebuild command
  • [CARBONDATA-2866] - Should block schema when creating external table
  • [CARBONDATA-2874] - Support SDK writer as thread safe api
  • [CARBONDATA-2886] - select filter with int datatype is showing incorrect result in case of table created and loaded on old version and queried in new version
  • [CARBONDATA-2888] - Support multi level sdk read support for carbon tables
  • [CARBONDATA-2901] - Problem: Jvm crash in Load scenario when unsafe memory allocation is failed.
  • [CARBONDATA-2902] - Fix showing negative pruning result for explain command
  • [CARBONDATA-2908] - the option of sort_scope don't effects while creating table by data frame
  • [CARBONDATA-2910] - Support backward compatability in fileformat and support different sort colums per load
  • [CARBONDATA-2924] - Fix parsing issue for map as a nested array child and change the error message in sort column validation for SDK
  • [CARBONDATA-2925] - Wrong data displayed for spark file format if carbon file has mtuiple blocklet
  • [CARBONDATA-2926] - ArrayIndexOutOfBoundException if varchar column is present before dictionary columns along with empty sort_columns.
  • [CARBONDATA-2927] - Multiple issue fixes for varchar column and complex columns that grows more than 2MB
  • [CARBONDATA-2932] - CarbonReaderExample throw some exception: Projection can't be empty
  • [CARBONDATA-2933] - Fix errors in spelling
  • [CARBONDATA-2940] - Fix BufferUnderFlowException for ComplexPushDown
  • [CARBONDATA-2955] - bug for legacy store and compaction with zstd compressor and adaptiveDeltaIntegralCodec
  • [CARBONDATA-2956] - CarbonReader can't support use configuration to read S3 data
  • [CARBONDATA-2967] - Select is failing on pre-aggregate datamap when thrift server is restarted.
  • [CARBONDATA-2969] - Query on local dictionary column is giving empty data
  • [CARBONDATA-2974] - Bloomfilter not working when created bloom on multiple columns and queried
  • [CARBONDATA-2975] - DefaultValue choosing and removeNullValues on range filters is incorrect
  • [CARBONDATA-2979] - select count fails when carbondata file is written through SDK and read through sparkfileformat for complex datatype map(struct->array->map)
  • [CARBONDATA-2980] - clear bloomindex cache when dropping datamap
  • [CARBONDATA-2982] - CarbonSchemaReader don't support Array<string>
  • [CARBONDATA-2984] - streaming throw NPE when there is no data in the task of a batch
  • [CARBONDATA-2986] - Table Properties are lost when multiple driver concurrently creating table 
  • [CARBONDATA-2990] - JVM crashes when rebuilding the datamap.
  • [CARBONDATA-2991] - NegativeArraySizeException during query execution 
  • [CARBONDATA-2992] - Fixed Between Query Data Mismatch issue for timestamp data type
  • [CARBONDATA-2993] - Concurrent data load throwing NPE randomly.
  • [CARBONDATA-2994] - Unify property name for badrecords path in create and load.
  • [CARBONDATA-2995] - Queries slow down after some time due to broadcast issue

