ID      | IEP-113 |
Author  |         |
Sponsor |         |
Created |         |
Status  | DRAFT   |
Currently, to recover from failures during checkpoints we use physical records in the WAL. Physical records take up most of the WAL size, are required only for crash recovery, and are useful only for a short period of time (since the previous checkpoint).
The size of physical records written during a checkpoint exceeds the size of all pages modified between checkpoints, since we need to store a page snapshot record for each modified page plus page delta records if a page is modified more than once between checkpoints.
We process the WAL several times in a stable workflow (without crashes and rebalances): physical records are first written to the WAL file, the WAL segment is then copied to the archive, and the archived segment is read once more during compaction, which drops physical records. So, in total, we write all physical records twice and read them at least twice (if historical rebalance is in progress, physical records can be read more than twice). It looks like we can get rid of these redundant reads/writes and reduce the disk workload.
A page write that is in progress during an operating system crash might be only partially completed, leading to an on-disk page that contains a mix of old and new data. The row-level change data normally stored in the WAL is not enough to completely restore such a page during post-crash recovery. To protect against such half-written (torn) pages we need a persisted copy of the written page somewhere for recovery purposes.
After a data page is modified we don't write the changes to disk immediately; instead, a dedicated checkpoint process starts periodically and flushes all modified (dirty) pages to disk. This approach is commonly used by other DBMSs. Taking this into account, we have to persist somewhere at least a copy of each page participating in the checkpoint until the checkpoint completes. Currently, WAL physical records are used to solve this problem.
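To make that requirement concrete, here is a minimal sketch in Java of the checkpointing idea (names are hypothetical; this is not Ignite's actual API): modifications only mark pages dirty in page memory, and a periodic checkpoint process flushes them in batches, so until a checkpoint completes every page it touches must be restorable from somewhere else.

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    class CheckpointerSketch {
        /** Pages modified since the previous checkpoint. */
        private final Set<Long> dirtyPages = ConcurrentHashMap.newKeySet();

        /** Called by page memory on every page modification; no disk write happens here. */
        void markDirty(long pageId) {
            dirtyPages.add(pageId);
        }

        /** Runs periodically and flushes all dirty pages to the page store. */
        void checkpoint(PageStoreSketch store) {
            for (Long pageId : dirtyPages) {
                // A crash during this write can leave a torn page on disk, so a
                // recoverable copy of the page must exist until the checkpoint ends
                // (today: physical WAL records; in this proposal: a recovery file).
                store.write(pageId);
                dirtyPages.remove(pageId);
            }
        }
    }

    interface PageStoreSketch {
        void write(long pageId); // page payload lookup omitted for brevity
    }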
How other vendors deal with the same problem: PostgreSQL uses full page writes [1], MySQL's InnoDB engine uses a doublewrite buffer [2], and Oracle provides block media recovery [3].
Our current approach is similar to PostgreSQL's, except that we strictly separate page-level WAL records from logical WAL records, while PostgreSQL uses mixed physical/logical WAL records for the second and subsequent updates of a page after a checkpoint.
To provide the same crash-recovery guarantees we can change the approach and, for example, write modified pages twice during a checkpoint: first sequentially to a checkpoint recovery file, and then to the page storage. In this case we can restore any page from the recovery file (instead of the WAL, as we do now) if we crash during a write to the page storage. This approach is similar to the doublewrite buffer used by MySQL's InnoDB engine.
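A minimal sketch of this double-write idea, assuming hypothetical file handles (illustrative only, not Ignite's implementation): each checkpoint page is appended to a sequential recovery file before being written to its final position in the page store, so a torn page-store write can always be repaired from the recovery-file copy.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    class DoubleWriteSketch {
        private final FileChannel recoveryFile; // sequential, append-only
        private final FileChannel pageStore;    // random access, final page locations

        DoubleWriteSketch(FileChannel recoveryFile, FileChannel pageStore) {
            this.recoveryFile = recoveryFile;
            this.pageStore = pageStore;
        }

        void writePage(long pageStoreOffset, ByteBuffer page) throws IOException {
            // 1. Append the full page image to the recovery file (fast sequential I/O).
            recoveryFile.write(page.duplicate());

            // 2. Write the page to its final position. If this write is torn by a
            //    crash, recovery rewrites the page from the copy made in step 1.
            //    The recovery file is fsync'ed once per checkpoint, before the
            //    checkpoint start marker is written (see the sequence below).
            pageStore.write(page.duplicate(), pageStoreOffset);
        }
    }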
On checkpoint, pages to be written are collected under the checkpoint write lock (a relatively short period of time). After the write lock is released, a checkpoint start marker is stored to disk, pages are written to the page store (this takes a relatively long period of time), and finally a checkpoint end marker is stored to disk. On recovery, if we find a checkpoint start marker without the corresponding checkpoint end marker, we know that the database crashed during a checkpoint and data in the page store can be corrupted.
What needs to be changed: after the checkpoint write lock is released, but before the checkpoint start marker is stored to disk, we should write checkpoint recovery data for each checkpoint page. This data can be written by multiple threads to different files, using the same thread pool we use for writing pages to the page store. The checkpoint start marker on disk guarantees that all recovery data is already stored, so we can proceed to writing to the page store.
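Putting the two previous paragraphs together, the modified checkpoint sequence could look like the following sketch (method names are illustrative, not Ignite's actual API). The key invariant is the ordering: the start marker reaches disk only after all recovery data is durable, so its presence proves that every page about to be written to the page store can be restored.

    import java.util.List;
    import java.util.concurrent.ExecutorService;

    abstract class CheckpointSequenceSketch {
        interface DirtyPage { /* page id + payload; details omitted */ }

        abstract List<DirtyPage> collectDirtyPagesUnderWriteLock();
        abstract void writeRecoveryFiles(List<DirtyPage> pages, ExecutorService pool);
        abstract void fsyncRecoveryFiles();
        abstract void writeMarker(String marker);
        abstract void writePagesToPageStore(List<DirtyPage> pages, ExecutorService pool);
        abstract void fsyncPageStore();

        void doCheckpoint(ExecutorService pool) {
            // Collect pages under the checkpoint write lock (relatively short).
            List<DirtyPage> pages = collectDirtyPagesUnderWriteLock();

            // New step: persist recovery data for each page, multithreaded,
            // using the same pool that later writes pages to the page store.
            writeRecoveryFiles(pages, pool);
            fsyncRecoveryFiles();

            // Once this marker is on disk, all recovery data is guaranteed durable.
            writeMarker("checkpoint-start");

            // Long phase: a crash here is safe, pages can be restored on recovery.
            writePagesToPageStore(pages, pool);
            fsyncPageStore();

            // Checkpoint complete; recovery files and markers can be discarded.
            writeMarker("checkpoint-end");
        }
    }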
We have two phases of recovery after a crash: binary memory restore and logical record apply. Binary memory restore is required only if we crashed during the last checkpoint, and only pages affected by the last checkpoint need to be restored. So, currently, to perform this phase we replay the WAL from the previous checkpoint to the last checkpoint and apply page records (page snapshot records or page delta records) to page memory. After binary page memory restore we have consistent page memory and can apply logical records starting from the last checkpoint to bring the database up to date.
What needs to be changed: in the binary memory restore phase, before trying to restore from WAL files we should try to restore from checkpoint recovery files, if such files are found. We still need to replay the WAL starting from the previous checkpoint, because some records other than page snapshots and page deltas need to be applied.
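The corresponding recovery-time decision might look like this sketch (again with hypothetical names), covering both mechanisms during the transition period:

    abstract class CrashRecoverySketch {
        abstract boolean startMarkerExists();
        abstract boolean endMarkerExists();
        abstract boolean recoveryFilesExist();
        abstract void restorePagesFromRecoveryFiles();        // new mechanism
        abstract void restorePagesFromPhysicalWalRecords();   // current mechanism
        abstract void replayWalFromPreviousCheckpoint();      // non-page records
        abstract void applyLogicalRecordsFromLastCheckpoint();

        void recover() {
            if (startMarkerExists() && !endMarkerExists()) {
                // We crashed during the last checkpoint: binary memory restore needed.
                if (recoveryFilesExist())
                    restorePagesFromRecoveryFiles();
                else
                    restorePagesFromPhysicalWalRecords(); // kept for compatibility

                // Even with recovery files, the WAL is replayed from the previous
                // checkpoint for records other than page snapshots and page deltas.
                replayWalFromPreviousCheckpoint();
            }

            // Page memory is now consistent; apply logical records to become up to date.
            applyLogicalRecordsFromLastCheckpoint();
        }
    }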
To provide compatibility we can keep implementations of both recovery mechanisms in the code (the current one and the new one) and allow the user to configure which is used. In the next release we can use physical WAL records by default; in the following release we can switch the default to checkpoint recovery files. On recovery, Ignite can decide what to do by analyzing the files of the current checkpoint.
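For illustration only, the user-facing switch could be a configuration property like the hypothetical enum below (no such property exists in Ignite today); on restart, recovery would still inspect the files of the last checkpoint and use whichever mechanism's data is actually present, regardless of the configured mode.

    /** Hypothetical configuration switch between the two recovery mechanisms. */
    enum CheckpointRecoveryMode {
        /** Current behavior: page snapshot and page delta records in the WAL. */
        PHYSICAL_WAL_RECORDS,

        /** Proposed behavior: checkpoint recovery files; the WAL stays logical-only. */
        CHECKPOINT_RECOVERY_FILES
    }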
Pros:
Cons:
Alternatively, we can write physical records (page snapshots and page delta records) the same way as we do now, but use different files for the physical and logical WALs. In this case there will be no redundant reads/writes of physical records (the physical WAL will not be archived, but will be deleted after the checkpoint). This approach reduces the disk workload and doesn't increase checkpoint duration, but extra data still has to be written as page delta records for each page modification, and physical records can't be written in the background.
TBD
A longer checkpoint can lead to write throttling or even OOM if an insufficient checkpoint buffer size is configured.
[1] https://wiki.postgresql.org/wiki/Full_page_writes
[2] https://dev.mysql.com/doc/refman/8.0/en/innodb-doublewrite-buffer.html
[3] https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/rman-block-media-recovery.html