
IDIEP-104

Author:
Sponsor:
Created: 26/05/2023
Status: DRAFT


Motivation

IEP-59 Change Data Capture defines CDC that runs in near real time. The background process ignite-cdc waits for WAL segments to be archived before capturing data. This waiting introduces a lag between the moment an event happens and the moment a consumer is notified about it, and the lag can be relatively large (from one to tens of seconds). It is proposed to provide the ability to capture data and notify consumers directly from the Ignite node process. This minimizes the lag at the cost of additional memory usage.

Description

User paths

Enable realtime CDC on a cluster:

  1. Configure CDC in Ignite (cdcEnabled=true, set up a CdcConsumer).
  2. Start the Ignite node.
  3. Start the background process ignite-cdc (it automatically switches to the PASSIVE mode).

Ignite node restart after a failure:

  1. Start the Ignite node as usual (Ignite automatically recovers the CDC state).

Run CDC with the ignite-cdc.sh process only:

    ./control.sh --cdc realtime off

The command stops the internal Ignite CDC process; CDC then relies on ignite-cdc only (ignite-cdc automatically switches to the ACTIVE state).

Try to restart realtime CDC after working with the online ignite-cdc.sh:

    ./control.sh --cdc realtime on

The command returns immediately, but it does not guarantee the success of the switch: Ignite might fall back to using ignite-cdc only again. The user should check logs and metrics to verify the result.

User interface

Ignite

  1. IgniteConfiguration#CdcConfiguration - CdcConsumer, keepBinary.
  2. DataStorageConfiguration#cdcBufSize - defaults to walSegments * walSegmentSize (640 MB with the default settings). See the configuration sketch after this list.
    1. All non-archived segments fit in memory. If realtime CDC requires more space than that, the ordinary CDC process should be used instead.
  3. Logs:
    1. initialization (number of records read during the restore)
    2. failure
    3. buffer is full
    4. switch between the modes.
  4. Metrics:
    1. ordinary CDC metrics (count of WAL segments, WAL entries)
    2. current buffer size
    3. mode of CDC
    4. last committed WALPointer
    5. lag between the buffer and the WAL archive (in segments)
    6. lag between WAL and the CDC consumer (in milliseconds).
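
A minimal configuration sketch of the proposed interface. Attaching CdcConfiguration to IgniteConfiguration and the cdcBufSize property are introduced by this IEP; the setter names below are assumptions for illustration:

IgniteConfiguration cfg = new IgniteConfiguration();

// Proposed: attach a CDC consumer directly to the node configuration.
CdcConfiguration cdcCfg = new CdcConfiguration();
cdcCfg.setConsumer(new MyCdcConsumer()); // User-provided CdcConsumer.
cdcCfg.setKeepBinary(true);
cfg.setCdcConfiguration(cdcCfg);

// Proposed: buffer for realtime CDC, by default walSegments * walSegmentSize.
DataStorageConfiguration dsCfg = new DataStorageConfiguration();
dsCfg.setCdcBufSize(640L * 1024 * 1024);
cfg.setDataStorageConfiguration(dsCfg);

Ignition.start(cfg);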

ignite-cdc:

  1. Logs: 
    1. clearing the CDC directory, switching the state.
  2. Metrics:
    1. current state.

control.sh

  1. CdcRealtime subcommand
    1. ./control.sh --cdc realtime [ on | off ] 

Segments

Note that the word “segment” is used with two different meanings:

  1. WAL segments are represented as numbered files. The size of a WAL segment is configured with DataStorageConfiguration#walSegmentSize.
  2. ReadSegment is a slice of the mmap WAL segment. It contains WAL records to sync with the actual file. The size of such a segment varies, and its maximum is configured with DataStorageConfiguration#walBuffSize.

Initialization

On Ignite start, during memory restore (in the main thread):

  1. If CdcConfiguration#cdcConsumer is not null, then create a CdcProcessor.
  2. The CdcProcessor reads the last persisted CdcConsumerState from the Metastorage:
    1. If CdcState#enabled is false, then skip the initialization.
    2. If CdcState == null, then initialize.
  3. Initialization - collect logical updates from CdcState#committedPtr until the end of WAL (see GridCacheDatabaseSharedManager#performBinaryMemoryRestore and the sketch after this list).
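
A sketch of the initialization flow; the metastorage key and the helper names are assumptions:

// Executed in the main thread during memory restore.
CdcConsumerState state = metaStorage.read(CDC_CONSUMER_STATE_KEY);

if (state != null && !state.enabled())
	return; // Realtime CDC was explicitly stopped earlier: skip initialization.

// Collect logical updates from the committed pointer to the end of WAL,
// similarly to GridCacheDatabaseSharedManager#performBinaryMemoryRestore.
WALPointer from = state == null ? null : state.committedPtr();

try (WALIterator it = wal.replay(from)) {
	while (it.hasNext()) {
		WALRecord rec = it.next().get2();

		if (rec.type().purpose() == WALRecord.RecordPurpose.LOGICAL)
			cdcWorker.enqueue(rec); // Hypothetical hand-off to the CDC buffer.
	}
}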

Realtime capturing of WALRecords

Entry point for WALRecords to be captured by CDC. The options are:

  1. During the read of SegmentedRingByteBuffer, after fsync is invoked. It is a multi-producer/single-consumer data structure, so the only place to hook in is the read operation (invoked at the moment of fsync).
    1. + Relying on the consumer workflow, we can guarantee the order of events.
    2. + The consumer is a background thread, so capturing records doesn't affect the performance of transactional threads.
    3. - No opportunity to filter physical records at the entry point (might waste the buffer space). They will be filtered before the actual sending.
    4. - The consumer is triggered by a schedule - every 500 ms by default.
    5. - The logic differs depending on the WAL settings (mmap true/false, FULL_SYNC).
  2. Capturing in FileWriteAheadLogManager#log(WALRecord).
    1. + Captures logical records only.
    2. + Common logic for all WAL settings.
    3. - Captures records into the buffer in transactional threads - might affect performance.
    4. - The CDC process must sort events by WALPointer itself - maintain a concurrent ordering data structure and implement waiting for WAL gaps to close before sending.
    5. - Events would be sent before they are actually flushed on the local Ignite node - this may lead to inconsistency between the main and stand-by clusters.

The first option is proposed.

CdcWorker

CdcWorker is a thread responsible for collecting WAL records and submitting them to a CdcConsumer. The worker accumulates records in a queue.

Capturing from the buffer (wal-sync-thread):

  1. The wal-sync-thread (the only reader of the mmap WAL) works under the lock that synchronizes preparing a ReadSegment and rolling the WAL segment, which guarantees that there are no concurrent changes in the underlying buffer.
  2. It offers a deep copy of the flushing ReadSegments to the CdcWorker.
  3. The CdcWorker checks the remaining capacity and the buffer size (see the sketch after this list):
    1. If the size fits the capacity, then it stores the offered buffer data into the Queue.
    2. Otherwise, it stops realtime CDC:

      1. Persist the actual CdcConsumerState (enabled=false, last sent WALPointer).
      2. Write a StopRealtimeCdcRecord into WAL (using the prepared CdcConsumerState).
      3. Clear the buffer, stop the CdcWorker.
  4. Optimization: the thread might filter ReadSegments by record type and store only logical records.
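
A sketch of the offer path with the overflow handling described above (field and helper names are assumptions):

// Invoked by wal-sync-thread under the segment lock.
public void offer(ReadSegment seg) {
	int size = seg.buffer().remaining();

	if (curSize + size <= maxBufSize) {
		// Deep copy: the underlying mmap buffer is reused after the lock is released.
		queue.add(copyOf(seg));
		curSize += size;
	}
	else {
		// Buffer overflow: stop realtime CDC and hand capturing over to ignite-cdc.
		persistState(/*enabled*/ false, lastSentPtr);
		wal.log(new StopRealtimeCdcRecord(lastSentPtr));

		queue.clear();
		stopWorker();
	}
}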

Body loop (cdc-worker-thread):

  1. Checks metadata (mappings, binary_meta, caches - can be checked inside Ignite, without reading files) and prepares updates, if any.
  2. Polls the Queue, transforms the ReadSegment data to Iterator<CdcEvent>, and pushes the events to the CdcConsumer (see the sketch after this list).
  3. If CdcConsumer#onEvents returns true:
    1. Persists the CdcConsumerState.
    2. Writes a RealtimeCdcRecord to WAL with the committed WALPointer.
  4. Optimization: transform segment buffers to CdcEvents in the background (to reduce the buffer usage). Should the CdcConsumer be async then?
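
A sketch of a single iteration of the worker loop. CdcConsumer#onEvents is the actual consumer callback; the other helper names are assumptions:

// Executed in cdc-worker-thread.
private void iteration() throws IgniteCheckedException {
	// Check metadata (mappings, binary_meta, caches) and push updates, if any.
	pushMetadataUpdates(consumer);

	// Poll the queue of ReadSegment copies offered by wal-sync-thread.
	ReadSegment seg = queue.poll();

	if (seg == null)
		return;

	// Transform the raw segment data into CDC events.
	Iterator<CdcEvent> evts = toCdcEvents(seg);

	// The consumer returns true when the events are committed on its side.
	if (consumer.onEvents(evts)) {
		WALPointer committed = lastPointer(seg);

		persistState(/*enabled*/ true, committed);
		wal.log(new RealtimeCdcRecord(committed));
	}
}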

Try to switch to the realtime mode:

  1. The user sends the command to switch the modes.
  2. Ignite does the initialization - CdcWorker, buffer.
  3. Ignite writes a TryStartRealtimeCdcRecord into WAL and rolls over the current segment (starting from this record, realtime CDC becomes active again).
  4. Ignite monitors the CDC directory and awaits until the segment with the record is cleaned - this means that ignite-cdc.sh has reached the record and stopped capturing the data.
  5. If the buffer has not overflowed by this moment, Ignite enables the CdcConsumer and starts sending the records.
  6. Otherwise, the ordinary stop is invoked (with writing a StopRealtimeCdcRecord). See the sketch after this list.
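
A sketch of this switch flow with hypothetical helper names; logging a record together with a segment rollover mirrors the existing IgniteWriteAheadLogManager#log(WALRecord, RolloverType):

// Triggered by ./control.sh --cdc realtime on.
void tryStartRealtimeCdc() throws IgniteCheckedException {
	initWorkerAndBuffer();

	// Realtime CDC becomes active again starting from this record.
	WALPointer ptr = wal.log(new TryStartRealtimeCdcRecord(), RolloverType.CURRENT_SEGMENT);

	// Await until ignite-cdc.sh reaches the record: the segment link
	// disappears from the CDC directory once it is processed.
	awaitCdcDirCleaned(ptr.index());

	if (!bufferOverflowed())
		enableConsumer(); // Start sending records collected in the buffer.
	else
		stopRealtimeCdc(); // Ordinary stop: writes StopRealtimeCdcRecord.
}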


WAL records
class RealtimeCdcRecord extends WALRecord {
	/** Last WALPointer committed by realtime CDC. */
	private WALPointer last;
}

class StopRealtimeCdcRecord extends WALRecord {
	/** Last WALPointer sent before realtime CDC stopped. */
	private WALPointer last;
}

class TryStartRealtimeCdcRecord extends WALRecord {
	// No payload: marks the point in WAL where realtime CDC tries to resume.
}

ignite-cdc in PASSIVE mode

  1. Parses WAL records, looking for RealtimeCdcRecord and StopRealtimeCdcRecord.
  2. On RealtimeCdcRecord - clears obsolete links from the CDC directory.
  3. On StopRealtimeCdcRecord - switches to the ACTIVE mode and starts capturing from the last WALPointer (taken from the previous RealtimeCdcRecord).

ignite-cdc in ACTIVE mode

  1. Captures WAL records.
  2. Looks for a TryStartRealtimeCdcRecord - after reaching it, persists the CdcConsumerState locally and switches to the PASSIVE mode. A sketch of handling both record types follows.
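
A sketch of how ignite-cdc might react to the control records in both modes (helper names are assumptions):

// Executed while ignite-cdc iterates over archived WAL segments.
void onRecord(WALRecord rec) {
	if (state == PASSIVE) {
		if (rec instanceof RealtimeCdcRecord)
			clearCdcDirUpTo(((RealtimeCdcRecord)rec).last()); // Obsolete links.
		else if (rec instanceof StopRealtimeCdcRecord) {
			state = ACTIVE; // Ignite stopped realtime CDC: take over capturing.

			startCaptureFrom(lastRealtimePtr);
		}
	}
	else if (rec instanceof TryStartRealtimeCdcRecord) {
		persistStateLocally();

		state = PASSIVE; // Hand capturing back to the Ignite node.
	}
}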

Meta Storage

  1. Realtime CDC - ON / OFF
  2. Committed pointer (confirmed by CdcConsumer).
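
The persisted state could look as follows (field names are assumptions derived from the items above):

class CdcConsumerState {
	/** Realtime CDC - ON / OFF. */
	private boolean enabled;

	/** Committed pointer, confirmed by the CdcConsumer. */
	private WALPointer committedPtr;
}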


CdcWorker
class CdcWorker {
	/** Consumer of CDC events. */
	private final CdcConsumer consumer;

	/** Frequency of polling the queue, in milliseconds. */
	private final long checkFreq;

	/** Queue of deep copies of ReadSegments, offered by wal-sync-thread. */
	private final Queue<ReadSegment> queue = new ConcurrentLinkedQueue<>();

	/** Invoked in wal-sync-thread. */
	public void offer(ReadSegment seg) {
		// Check the capacity; add a copy of the segment to the queue,
		// or stop realtime CDC on overflow.
	}

	/** Invoked in cdc-worker-thread. */
	public void body() {
		// Poll the queue, push events to the CdcConsumer,
		// write CdcState to the MetaStorage and RealtimeCdcRecord to WAL.
	}
}

Risks and Assumptions

// Describe project risks, such as API or binary compatibility issues, major protocol changes, etc.

Discussion Links

// Links to discussions on the devlist, if applicable.

Reference Links

// Links to various reference documents, if applicable.

Tickets

// Links or report with relevant JIRA tickets.
