Welcome to CarbonData
Overview
- What is CarbonData?
- What problem does CarbonData solve?
- What are the key technology benefits of CarbonData?
- CarbonData vs popular Hadoop Data Stores
What is CarbonData?
CarbonData is a fully indexed, columnar, Hadoop-native data store for processing heavy analytical workloads and detailed queries on big data. In customer benchmarks, CarbonData has managed petabytes of data on extraordinarily low-cost hardware and answered queries around 10 times faster than current open source solutions (column-oriented SQL on Hadoop data stores).
What problem does CarbonData solve?
For big data interactive analysis scenarios, many customers expect sub-second responses when querying TB- to PB-scale data on general-purpose hardware clusters with just a few nodes.
The current big data ecosystem offers a few columnar storage formats, such as ORC and Parquet, that are designed for SQL on big data. Apache Hive's ORC format is a columnar storage format with basic indexing capability. However, ORC cannot meet the sub-second query response expectation on TB-scale data, because it performs only stride-level dictionary encoding and all analytical operations such as filtering and aggregation are done on the actual data. Apache Parquet is a columnar storage format that can improve performance over ORC because of its more efficient storage organization. Though Parquet can answer queries on TB-scale data in a few seconds, it is still far from the sub-second expectation of interactive analysis users. Cloudera Kudu can effectively solve some of these query performance issues, but Kudu is not Hadoop native and cannot seamlessly integrate historic HDFS data into a new Kudu system.
CarbonData, in contrast, uses specially engineered optimizations targeted at analytical queries that involve filters, aggregations, and distinct counts. Because the required data is stored in an indexed, well-organized, read-optimized format, CarbonData can achieve sub-second query response.
What are the key technology benefits of CarbonData?
The key aspects of Carbon's technology that enable such dramatic performance benefits are summarized as follows:
- Global Dictionary Encoding with Lazy Conversion: Most databases and big data SQL data stores employ columnar encoding to achieve data compression by storing small integer numbers (surrogate values) instead of full string values. However, almost all existing databases and data stores divide the data into row groups containing anywhere from a few thousand to a million rows and employ dictionary encoding only within each row group. Hence, the same column value can have different surrogate values in different row groups, so conversion from surrogate value to actual value must be done immediately after the data is read from disk. Carbon instead employs a global surrogate key, meaning a common dictionary is maintained for the full store on one machine/node. Carbon can therefore perform all query processing work such as grouping/aggregation, sorting, and so on directly on the lightweight surrogate values, and convert surrogates back to actual values only for the final result (see the sketch after this item). This improves performance in two ways:
- Conversion from surrogate values to actual values is done only for the final result rows, which are far fewer than the rows read from the store.
- All query processing and computation such as grouping/aggregation, sorting, and so on is done on lightweight surrogate values, which requires less memory and CPU time than operating on the actual values.
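To make the idea concrete, here is a minimal, self-contained Scala sketch (not CarbonData's actual internals or API; the dictionary contents and column data are made up for illustration). Aggregation runs entirely on integer surrogate keys, and the dictionary lookup back to the original string values happens only for the final result rows:

```scala
object DictionaryAggregationSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical global dictionary: actual value -> surrogate key
    val dictionary = Map("germany" -> 1, "france" -> 2, "spain" -> 3)
    val reverse    = dictionary.map(_.swap)            // surrogate -> actual value

    // A column stored as lightweight surrogate keys instead of strings
    val countryColumn = Array(1, 2, 1, 3, 1, 2)
    val salesColumn   = Array(10L, 20L, 30L, 40L, 50L, 60L)

    // Grouping/aggregation runs entirely on small integers
    val totalsBySurrogate = countryColumn.zip(salesColumn)
      .groupBy { case (country, _) => country }
      .map { case (country, rows) => country -> rows.map(_._2).sum }

    // Lazy conversion: look up the dictionary only for the final result rows
    totalsBySurrogate.foreach { case (key, total) =>
      println(s"${reverse(key)} -> $total")
    }
  }
}
```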
- Unique Data Organization: Though Carbon stores data in a columnar format, it differs from traditional columnar formats in that the columns in each row group (Data Block) are sorted independently of the other columns. This arrangement requires Carbon to store a row-number mapping against each column value, but it makes binary search possible for faster filtering, and since the values are sorted, same/similar values come together, which yields better compression and offsets the storage overhead of the row-number mapping. A small sketch of this idea follows.
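The following Scala sketch (hypothetical data, not Carbon's storage code) shows why a per-column sort plus a row-number mapping helps filtering: the filter value is located with a binary search instead of a full scan, and the matches are mapped back to their original row ids:

```scala
object SortedColumnSketch {
  // One column inside a Data Block: (sortedValue, originalRowId) pairs,
  // sorted by value independently of the other columns
  val column: Vector[(Int, Int)] =
    Vector((5, 3), (7, 0), (7, 4), (9, 1), (12, 2))

  // Binary search for the first occurrence of `target` among the sorted values
  def findRows(target: Int): Vector[Int] = {
    var lo = 0
    var hi = column.length
    while (lo < hi) {
      val mid = (lo + hi) / 2
      if (column(mid)._1 < target) lo = mid + 1 else hi = mid
    }
    // Collect every matching value and map back to the original row numbers
    column.drop(lo).takeWhile(_._1 == target).map(_._2)
  }

  def main(args: Array[String]): Unit =
    println(findRows(7))   // Vector(0, 4): rows 0 and 4 match without a full scan
}
```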
- Multi Level Indexing: Carbon uses multiple indices at various levels to enable faster search and speed up query processing.
- Global Multi Dimensional Keys (MDK) based B+Tree Index for all non-measure columns: Aids in quickly locating the row groups (Data Blocks) that contain the data matching the search/filter criteria.
- Min-Max Index for all columns: Aids in quickly skipping the row groups (Data Blocks) whose min/max value ranges cannot match the search/filter criteria (a pruning sketch follows this list).
- Data Block level Inverted Index for all columns: Aids in quickly locating the rows that match the search/filter criteria within a row group (Data Block).
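Here is a minimal Scala sketch of the min-max pruning idea only (hypothetical block statistics; not Carbon's index implementation). Per-block min/max values let the reader decide which Data Blocks can be skipped before any column data is read from disk:

```scala
object MinMaxPruningSketch {
  // Hypothetical per-block statistics for a single column
  final case class BlockStats(blockId: Int, min: Int, max: Int)

  val minMaxIndex = Seq(
    BlockStats(0, min = 1,   max = 100),
    BlockStats(1, min = 101, max = 250),
    BlockStats(2, min = 251, max = 900)
  )

  // For a predicate such as `col = value`, only blocks whose [min, max]
  // range can contain `value` need to be read at all.
  def blocksToScan(value: Int): Seq[Int] =
    minMaxIndex.filter(b => value >= b.min && value <= b.max).map(_.blockId)

  def main(args: Array[String]): Unit =
    println(blocksToScan(175))   // List(1): blocks 0 and 2 are pruned
}
```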
- Advanced Push Down Optimizations: Carbon pushes as much of query processing as possible close to the data to minimize the amount of data being read, processed, converted and transmitted/shuffled.
- Projection and Filters: Since Carbon uses a columnar format, it reads only the required columns from the store and only the rows that match the filter conditions provided in the query, as the sketch below illustrates.
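The sketch below illustrates projection and filter push-down conceptually in Scala (hypothetical columns; not Carbon's reader code): only the filter column and the projected column are touched, and only the matching rows are materialized:

```scala
object ProjectionFilterSketch {
  // A columnar "store": each column lives in its own array
  val country = Vector("de", "fr", "de", "es")
  val sales   = Vector(10L, 20L, 30L, 40L)
  val comment = Vector("a", "b", "c", "d")   // never read by the query below

  def main(args: Array[String]): Unit = {
    // Query: SELECT sales FROM t WHERE country = 'de'
    // Step 1 (filter push-down): evaluate the predicate on the filter column only
    val matchingRows = country.zipWithIndex.collect { case ("de", row) => row }
    // Step 2 (projection): read just the projected column for the surviving rows
    val result = matchingRows.map(sales)
    println(result)   // Vector(10, 30)
  }
}
```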
Besides remarkable performance on a variety of database workloads, Carbon includes several other features designed to offer performance, scalability, reliability, and ease of use. These include:
- A shared nothing, grid-based database architecture based on Spark that allows Carbon to scale effectively on clusters of commodity CPUs.
CarbonData vs popular Hadoop Data Stores
Structure Comparison
The Carbon file format has many structural similarities with the Parquet and ORC formats, yet there are some significant differences that make Carbon several times faster than Parquet or ORC for queries.
Performance Comparison
Carbon performs much better than ORC and Parquet in most query scenarios; the performance advantage is most evident in the following:
- Filter Queries: Carbon performs much better than ORC and Parquet in filter queries because its multi level indices help reduce I/O and directly locate the required data.
- Group By Queries: Carbon performs much better than ORC and Parquet in group by queries, because grouping is done based on the lightweight surrogate values.
- Distinct Count Queries: Carbon performs several orders of magnitude faster than ORC and Parquet in distinct count queries, taking advantage of its unique bit-set based approach (a conceptual sketch follows).
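As a rough illustration only (the exact CarbonData internals are not described here, and the surrogate keys below are made up), a bit set indexed by dictionary surrogate keys turns distinct counting into setting and counting bits, with no hashing of the original string values:

```scala
import scala.collection.mutable

object BitSetDistinctCountSketch {
  def main(args: Array[String]): Unit = {
    // Column values already encoded as small surrogate keys by the dictionary
    val surrogates = Array(3, 7, 3, 1, 7, 7, 9)

    // One bit per possible surrogate key; duplicates simply set the same bit again
    val seen = mutable.BitSet.empty
    surrogates.foreach(seen += _)

    println(s"distinct count = ${seen.size}")   // distinct count = 4
  }
}
```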