ID | IEP-20 |
Author | Vladimir Ozerov |
Sponsor | Vladimir Ozerov |
Created | 25 Apr 2018 |
Status | DRAFT |
Compression is used extensively by database vendors to reduce TCO and improve performance. It can be applied to different parts of the system (data pages, indexes, the write-ahead log, individual columns) to reduce disk usage and increase throughput.
Performance numbers published by other vendors suggest that we can expect a 2x-4x decrease in required disk space and a >1.5x increase in throughput (ops/sec) on typical workloads.
This section describes general compression approaches and their pros and cons. The following compression mechanisms are implemented in practice:
Efficient disk usage starts with proper data layout. Vendors strive to place data in pages in such a way that total overhead is kept as low as possible while still maintaining high read speed. Typically this is achieved with compact row formats: fixed-length types are stored with variable-length encodings, NULL values occupy no payload space, and per-row metadata is minimized [1], [2].
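A minimal Java sketch of the two most common techniques (variable-length integer encoding and NULL suppression through a null bitmap); the class name and encoding details are illustrative assumptions, not an existing format:

```java
import java.io.ByteArrayOutputStream;

/** Illustrative compact row writer: varint encoding plus a null bitmap. */
public class CompactRowWriter {
    /** Encodes a row of nullable long columns into a compact byte form. */
    public static byte[] writeRow(Long[] columns) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();

        // Null bitmap: one bit per column, NULL values consume no payload.
        byte[] bitmap = new byte[(columns.length + 7) / 8];
        for (int i = 0; i < columns.length; i++)
            if (columns[i] == null)
                bitmap[i / 8] |= 1 << (i % 8);
        out.write(bitmap, 0, bitmap.length);

        // Varint encoding: small values take 1-3 bytes instead of a fixed 8.
        for (Long col : columns) {
            if (col == null)
                continue;
            long v = col;
            while ((v & ~0x7FL) != 0) {
                out.write((int)((v & 0x7F) | 0x80));
                v >>>= 7;
            }
            out.write((int)v);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // 4 columns take 32 bytes in a fixed 8-byte layout, but only
        // 1 (bitmap) + 1 + 1 + 3 = 6 bytes in the compact one.
        System.out.println(writeRow(new Long[] {42L, null, 7L, 100000L}).length);
    }
}
```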
Examples:
[1] https://docs.microsoft.com/en-us/sql/relational-databases/data-compression/row-compression-implementation?view=sql-server-2017
[2] https://dev.mysql.com/doc/refman/5.6/en/innodb-physical-record.html
Secondary indexes tend to contain entries with common prefixes, e.g. {'IndexedValue', link1}, {'IndexedValue', link2}. The prefix compression technique extracts common prefixes from the entries on an index page and places them in a special directory within the page.
Prefix compression could be applied to various index types.
Prefix compression could be implemented by keeping the shared prefix in the page directory and storing only suffixes in the entries, as in the sketch below.
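A minimal sketch of the idea, assuming sorted string keys on a page; in a real page the prefix would live in a dedicated on-page directory and the entries would be raw bytes, and all names here are illustrative:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/** Illustrative index page that stores a shared key prefix only once. */
public class PrefixCompressedPage {
    final String prefix;         // Common prefix kept in the page "directory".
    final List<String> suffixes; // Per-entry remainders.

    PrefixCompressedPage(List<String> keys) {
        // Shrink the candidate prefix until it matches every key.
        String p = keys.isEmpty() ? "" : keys.get(0);
        for (String k : keys) {
            int i = 0;
            while (i < Math.min(p.length(), k.length()) && p.charAt(i) == k.charAt(i))
                i++;
            p = p.substring(0, i);
        }
        prefix = p;
        suffixes = new ArrayList<>();
        for (String k : keys)
            suffixes.add(k.substring(prefix.length())); // Store the suffix only.
    }

    /** Reconstructs the full key for the given entry slot. */
    String key(int slot) {
        return prefix + suffixes.get(slot);
    }

    public static void main(String[] args) {
        PrefixCompressedPage page = new PrefixCompressedPage(
            Arrays.asList("IndexedValue1", "IndexedValue2", "IndexedValue3"));
        System.out.println(page.prefix + " | " + page.key(1));
    }
}
```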
Examples:
[1] https://blogs.oracle.com/dbstorage/compressing-your-indexes:-index-key-compression-part-1
[2] https://blogs.oracle.com/dbstorage/compressing-your-indexes:-advanced-index-compression-part-2
[3] https://docs.mongodb.com/manual/core/wiredtiger/#storage-wiredtiger-compression
Whole pages could be compressed as well. This gives a 2x-4x reduction in size on average. Two different approaches are used in practice: without in-memory compression and with in-memory compression.
Data is stored in memory as is, in uncompressed form. When it is time to flush data to disk, compression is applied. If the data size is reduced significantly, the data is stored in compressed form; otherwise it is stored in plain form (compression failure). Big block sizes (e.g. 32Kb) are typically used in this case to achieve higher compression rates, while data is still written to disk in blocks of smaller sizes. E.g. one may have a 32Kb block in memory which is compressed to 7Kb and then written as two 4Kb blocks to disk. Vendors allow selecting the compression algorithm (Snappy, zlib, LZ4, etc.).
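A minimal sketch of this flush path, assuming zlib (java.util.zip) as the algorithm, a 32Kb in-memory page and 4Kb disk blocks; the names and the failure criterion are illustrative:

```java
import java.util.Arrays;
import java.util.zip.Deflater;

/** Illustrative "compress on flush": pages shrink only if it saves disk blocks. */
public class FlushCompressor {
    static final int DISK_BLOCK = 4096;

    /** Returns the bytes to write to disk for a page being flushed. */
    static byte[] onFlush(byte[] page) {
        byte[] buf = new byte[page.length];
        Deflater d = new Deflater(Deflater.BEST_SPEED);
        d.setInput(page);
        d.finish();
        int len = d.deflate(buf);
        boolean done = d.finished();
        d.end();

        // Disk writes happen in whole blocks, so round up to 4Kb.
        int blocks = (len + DISK_BLOCK - 1) / DISK_BLOCK;
        if (!done || blocks * DISK_BLOCK >= page.length)
            return page; // Compression failure: store the page in plain form.

        // E.g. a 32Kb page compressed to 7Kb is written as two 4Kb blocks.
        return Arrays.copyOf(buf, blocks * DISK_BLOCK);
    }

    public static void main(String[] args) {
        byte[] page = new byte[32 * 1024]; // Highly compressible all-zero page.
        System.out.println(onFlush(page).length + " bytes written");
    }
}
```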
Page compression is not applicable to index pages because it incurs a serious slowdown on reads.
Hole punching with fallocate [1] might be added if the underlying file system supports it. In this case the compressed block is written as is, and the empty space is then trimmed with a separate system call. E.g. if a 32Kb block is compressed to 6.5Kb, the whole 32Kb is written, and then roughly 32 - 7 = 25Kb are released (the exact amount depends on the file system block granularity).
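Standard Java has no fallocate() binding, so a real implementation would invoke fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, offset, length) [1] via JNI or similar; the sketch below only computes the range to punch, assuming 4Kb file system blocks:

```java
/** Illustrative hole punching arithmetic for the 32Kb example above. */
public class HolePunchMath {
    public static void main(String[] args) {
        int pageSize = 32 * 1024;  // The page is written to disk as is.
        int compressedSize = 6656; // Only 6.5Kb of it is real data.
        int fsBlock = 4096;        // Punching works at FS block granularity.

        // Keep the compressed head rounded up to a whole block boundary,
        // then ask the file system to release the rest of the page.
        int keep = (compressedSize + fsBlock - 1) / fsBlock * fsBlock;
        System.out.println("keep " + keep + " bytes, punch "
            + (pageSize - keep) + " bytes at offset " + keep);
    }
}
```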
Advantages:
Disadvantages:
Examples:
[1] http://man7.org/linux/man-pages/man2/fallocate.2.html
[2] https://mariadb.org/innodb-holepunch-compression-vs-the-filesystem-in-mariadb-10-1/
[3] https://dev.mysql.com/doc/refman/5.7/en/innodb-compression-background.html
[4] https://mysqlserverteam.com/innodb-transparent-page-compression/
[5] https://www.objectrocket.com/blog/company/mongodb-3-0-wiredtiger-compression-and-performance/
[6] https://habr.com/company/postgrespro/blog/337180/
[7] https://www.percona.com/blog/2017/11/20/innodb-page-compression/
Data is stored in memory in compressed form. Rows are first accumulated in a block in raw form; when a certain threshold is reached, the block is compressed, and more rows can then be added to it. The original row structure may be maintained to a certain extent, so that subsequent reads do not need to uncompress the data.
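A minimal sketch of such a page, loosely modeled on the schemes in [1]-[3]: rows land in a raw tail first and are folded into the compressed region once a threshold is crossed. The threshold, the use of zlib, and all names are illustrative assumptions:

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.Deflater;

/** Illustrative page that keeps a compressed region plus a raw tail. */
public class InMemoryCompressedPage {
    static final int RAW_THRESHOLD = 1024;

    private byte[] compressed = new byte[0]; // Already compressed rows.
    private final ByteArrayOutputStream raw = new ByteArrayOutputStream();

    void addRow(byte[] row) {
        raw.write(row, 0, row.length); // New rows are appended uncompressed.
        if (raw.size() >= RAW_THRESHOLD)
            compactRawTail();
    }

    /** Compresses the raw tail, freeing room for more rows on the page. */
    private void compactRawTail() {
        Deflater d = new Deflater();
        d.setInput(raw.toByteArray());
        d.finish();
        byte[] buf = new byte[raw.size() + 64]; // Headroom for zlib framing.
        int n = d.deflate(buf);
        d.end();

        byte[] merged = Arrays.copyOf(compressed, compressed.length + n);
        System.arraycopy(buf, 0, merged, compressed.length, n);
        compressed = merged;
        raw.reset(); // The raw tail is empty again.
    }

    public static void main(String[] args) {
        InMemoryCompressedPage page = new InMemoryCompressedPage();
        for (int i = 0; i < 100; i++)
            page.addRow("row-payload-000".getBytes());
        System.out.println("compressed=" + page.compressed.length
            + " bytes, raw=" + page.raw.size() + " bytes");
    }
}
```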
Advantages:
Disadvantages:
Examples:
[1] https://blogs.oracle.com/dbstorage/updates-in-row-compressed-tables
[2] https://blogs.oracle.com/dbstorage/advanced-row-compression-improvements-with-oracle-database-12c-release-2
[3] https://docs.microsoft.com/en-us/sql/relational-databases/data-compression/page-compression-implementation?view=sql-server-2017
The write-ahead log records all data changes in a journal, and the data is read back only during crash recovery. Data chunks written to the journal may be compressed. This reduces the number of disk writes and saves WAL space, increasing the likelihood of a delta update in case of a temporary node shutdown. Compression could be applied to specific records only: the larger the record, the greater the expected savings. Compression typically takes more time than decompression, so operation latency may increase. However, a smaller number of IO calls typically increases overall system throughput, because disk resources are usually scarcer than CPU.
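A minimal sketch of selective WAL record compression, assuming zlib and a one-byte flag so that recovery knows whether to inflate a record; the threshold and names are illustrative:

```java
import java.util.zip.Deflater;

/** Illustrative selective compression of WAL records by size. */
public class WalRecordCompressor {
    static final int MIN_SIZE = 512; // Small records are not worth compressing.
    static final byte PLAIN = 0, COMPRESSED = 1;

    static byte[] encode(byte[] record) {
        if (record.length < MIN_SIZE)
            return withFlag(PLAIN, record, record.length);

        byte[] buf = new byte[record.length];
        Deflater d = new Deflater(Deflater.BEST_SPEED);
        d.setInput(record);
        d.finish();
        int n = d.deflate(buf);
        boolean done = d.finished();
        d.end();

        // Fall back to the plain form if compression did not help.
        if (!done || n >= record.length)
            return withFlag(PLAIN, record, record.length);
        return withFlag(COMPRESSED, buf, n);
    }

    private static byte[] withFlag(byte flag, byte[] payload, int len) {
        byte[] out = new byte[len + 1];
        out[0] = flag;
        System.arraycopy(payload, 0, out, 1, len);
        return out;
    }

    public static void main(String[] args) {
        byte[] bigRecord = new byte[4096]; // Compressible page snapshot.
        System.out.println(encode(bigRecord).length + " bytes in WAL");
    }
}
```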
Examples:
[1] https://www.pgcon.org/2016/schedule/attachments/432_WAL-Reduction.pdf
[2] https://mariadb.com/kb/en/library/compressing-events-to-reduce-size-of-the-binary-log/
[3] https://docs.mongodb.com/manual/core/wiredtiger/#storage-wiredtiger-journal
It is possible to compress specific columns only. This can be done either manually, using built-in functions, or transparently, if a column compression hint is defined in CREATE TABLE or ALTER TABLE commands.
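A minimal sketch of the manual variant, analogous to the built-in COMPRESS() functions in [1] and [2]: the application compresses a large column value before the INSERT and inflates it after the SELECT. zlib and all names are illustrative, and a real scheme would also persist the original length:

```java
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

/** Illustrative manual compression of a single large column value. */
public class ColumnCodec {
    static byte[] compressColumn(String value) {
        byte[] in = value.getBytes();
        byte[] buf = new byte[in.length + 64]; // Headroom for zlib framing.
        Deflater d = new Deflater();
        d.setInput(in);
        d.finish();
        int n = d.deflate(buf);
        d.end();
        return Arrays.copyOf(buf, n); // This is what goes into the column.
    }

    static String decompressColumn(byte[] stored, int originalLen) throws Exception {
        Inflater inf = new Inflater();
        inf.setInput(stored);
        byte[] out = new byte[originalLen]; // Length must be stored alongside.
        inf.inflate(out);
        inf.end();
        return new String(out);
    }

    public static void main(String[] args) throws Exception {
        String comment = "a large text column value ".repeat(100);
        byte[] stored = compressColumn(comment);
        System.out.println(comment.length() + " chars -> " + stored.length + " bytes");
        System.out.println(decompressColumn(stored, comment.length()).equals(comment));
    }
}
```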
Advantages:
Disadvantages:
Examples:
[1] https://docs.microsoft.com/en-us/sql/t-sql/functions/compress-transact-sql?view=sql-server-2017
[2] https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_compress
[3] https://www.percona.com/doc/percona-server/LATEST/flexibility/compressed_columns.html
[4] https://mariadb.com/kb/en/library/storage-engine-independent-column-compression/
TBD
TBD