...

For some time, there has been demand for a feature that allows users to determine the redundancy status of partitioned regions and to restore any missing redundancy without triggering a full rebalance of the system.[1] [2] Currently, no simple internal API call or gfsh command provides users with the redundancy status of all partitioned regions in a system, and the only way to manually trigger redundancy recovery is to perform a rebalance: a resource-intensive operation that can move large amounts of data and cause exceptions if transactions are running in the system.[3] To determine the redundancy status of all partitioned regions, a user must resort to the workaround of repeatedly calling

...

for every partitioned region in the system, the output of which contains a lot of information that is not relevant to redundancy status.[4]

Anti-Goals

These gfsh commands and the internal API are not intended to facilitate moving buckets or data from one member to another. Nor are they intended to guarantee full redundancy after being called, since there may not be enough members in the cluster for regions to meet their configured redundancy. It is also not within the scope of this RFC to describe any REST API that may be created at a future point in time to make use of the proposed internal API.

...

The proposed solution is to create a new RestoreRedundancyBuilder, which will behave and be accessed in broadly the same way as the existing RebalanceFactory. When started, the builder will schedule PartitionedRegionRebalanceOp operations for each appropriate region, but with a new RestoreRedundancyDirector in place of the CompositeDirector; the new director will perform only the steps of removing over-redundancy, restoring missing redundancy, and (optionally) reassigning primary buckets.
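To make the intended division of steps concrete, the following is a minimal self-contained sketch of the director concept. RebalanceStep and RestoreRedundancyDirectorSketch are illustrative stand-ins, not the actual RebalanceDirector internals:

import java.util.ArrayList;
import java.util.List;

// Models the idea that the new director runs only the redundancy-related steps
// of a rebalance, skipping the bucket-moving step entirely.
interface RebalanceStep {
  boolean run(); // returns true while the step has more work to do
}

class RestoreRedundancyDirectorSketch {
  private final List<RebalanceStep> steps = new ArrayList<>();

  RestoreRedundancyDirectorSketch(RebalanceStep removeOverRedundancy,
      RebalanceStep restoreMissingRedundancy,
      RebalanceStep reassignPrimaries,
      boolean shouldReassignPrimaries) {
    steps.add(removeOverRedundancy);
    steps.add(restoreMissingRedundancy);
    if (shouldReassignPrimaries) {
      steps.add(reassignPrimaries); // optional, as described above
    }
  }

  void execute() {
    for (RebalanceStep step : steps) {
      while (step.run()) {
        // repeat each step until it reports no remaining work
      }
    }
  }
}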

Since the existing RebalanceResults object does not capture the relevant information regarding actual redundancy, the RestoreRedundancyBuilder.start() method will return a new CompletableFuture that will provide a RestoreRedundancyResults object containing the redundancy status for each region involved in the operation, as well as information about primary bucket reassignments.

...

RestoreRedundancyBuilder createRestoreRedundancyBuilder()
Set<CompletableFuture<RestoreRedundancyResults>> getRestoreRedundancyOperations()
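
As a sketch of how the second method might be used, a caller could poll for completion of any in-flight operations (the cache variable is assumed to be an existing Cache instance):

ResourceManager manager = cache.getResourceManager();

// True once every restore redundancy operation started through this member has finished.
boolean allFinished = manager.getRestoreRedundancyOperations().stream()
    .allMatch(CompletableFuture::isDone);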

...

The RestoreRedundancyBuilder will be responsible for setting the regions to be explicitly included in or excluded from the restore redundancy operation (the default behaviour will be to include all regions), setting whether primary buckets should be reassigned among members (the default behaviour will be to reassign primaries), and starting the operation. A method will also be included for getting the current redundancy status:

public interface RestoreRedundancyBuilder {
  RestoreRedundancyBuilder includeRegions(Set<String> regions);

  RestoreRedundancyBuilder excludeRegions(Set<String> regions);

  RestoreRedundancyBuilder doNotReassignPrimaries(boolean shouldNotReassign);

  CompletableFuture<RestoreRedundancyResults> start();

  RestoreRedundancyResults redundancyStatus();
}

Since start() will return a standard CompletableFuture, callers will have access to the current status of the operation, the ability to cancel it, and the ability to retrieve the results, with or without a timeout, through the future's isCancelled(), isDone(), cancel(), get() and get(long timeout, TimeUnit unit) methods, making a separate operation interface unnecessary.
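
To illustrate the intended flow, a minimal sketch, assuming the builder is obtained from the ResourceManager as described above; the cache variable, region name and timeout are arbitrary choices:

ResourceManager manager = cache.getResourceManager();

CompletableFuture<RestoreRedundancyResults> future = manager.createRestoreRedundancyBuilder()
    .includeRegions(Collections.singleton("exampleRegion")) // hypothetical region name
    .doNotReassignPrimaries(false)                          // default: reassign primaries
    .start();

// Wait up to five minutes for the operation to complete, as with any CompletableFuture.
RestoreRedundancyResults results = future.get(5, TimeUnit.MINUTES);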

RestoreRedundancyResults

The RestoreRedundancyResults object will be a collection of individual results for each region and will contain methods for determining the overall success or failure of the operation, for generating a detailed description of the state of the regions, and for getting information about the work done to reassign primaries as part of the operation. The Status returned by the RestoreRedundancyResults will be FAILURE if at least one bucket in one region has zero redundant copies (and that region is configured to have redundancy), ERROR if the restore redundancy operation failed to start or encountered an exception, and SUCCESS otherwise:

public interface RestoreRedundancyResults {
  enum Status {
    SUCCESS,
    FAILURE,
    ERROR
  }

  void addRegionResults(RestoreRedundancyResults results);

  void addPrimaryReassignmentDetails(PartitionRebalanceInfo details);

  void addRegionResult(RestoreRedundancyRegionResult regionResult);

  Status getStatus();

  String getMessage();

  RestoreRedundancyRegionResult getRegionResult(String regionName);

  Map<String, RestoreRedundancyRegionResult> getZeroRedundancyRegionResults();

  Map<String, RestoreRedundancyRegionResult> getUnderRedundancyRegionResults();

  Map<String, RestoreRedundancyRegionResult> getSatisfiedRedundancyRegionResults();

  Map<String, RestoreRedundancyRegionResult> getRegionResults();

  int getTotalPrimaryTransfersCompleted();

  long getTotalPrimaryTransferTime();

}
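
As an example of how a caller might consume these results, a minimal sketch (assuming the future returned by start() has completed, and assuming getTotalPrimaryTransferTime() reports milliseconds):

RestoreRedundancyResults results = future.get();

if (results.getStatus() != RestoreRedundancyResults.Status.SUCCESS) {
  // Report regions that still have buckets with zero redundant copies.
  results.getZeroRedundancyRegionResults().keySet()
      .forEach(region -> System.out.println("No redundancy for region: " + region));
}

System.out.println(results.getMessage());
System.out.println("Primaries reassigned: " + results.getTotalPrimaryTransfersCompleted()
    + " in " + results.getTotalPrimaryTransferTime() + " ms");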

...

The first command will execute a function on members hosting the specified partitioned regions and trigger the restore redundancy operation for those regions, then report the final redundancy status of those regions.

The command will return success status if:

  • At least one redundant copy exists for every bucket in regions with redundancy configured that were included, either explicitly or implicitly.
  • At least one of the explicitly included partitioned regions was found and had redundancy successfully restored.
  • No partitioned regions were found and none were explicitly included.

The command will return error status if:

  • At least one bucket in a region has zero redundant copies, and that region has redundancy configured.
  • None of the explicitly included partitioned regions were found.
  • There is a member in the system with a version of Geode older than 1.13.0 (assuming that is the version in which this feature is implemented).
  • The restore redundancy function encounters an exception.

The second command will determine the current redundancy status for the specified regions and report it to the user.

Both commands will take optional --include-region and --exclude-region arguments, similar to the existing rebalance command. If neither argument is specified, all regions will be included. Included regions will take precedence over excluded regions when both are specified. The restore redundancy command will also take an optional --dont-reassign-primaries argument to prevent primaries from being reassigned during the operation. The default behaviour will be to reassign primaries.
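
For illustration only, invocations might look like the following; the command names restore redundancy and status redundancy are assumptions here (the naming is described in sections elided above), and the region names are hypothetical:

gfsh> restore redundancy --include-region=/regionA,/regionB --dont-reassign-primaries
gfsh> status redundancy --exclude-region=/regionC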

Both commands will output a list of regions with zero redundant copies first (unless they are configured to have zero redundancy), then regions with less than their configured redundancy, then regions with full redundancy. The restore redundancy command will also output information about how many primaries were reassigned and how long that process took, similar to the existing rebalance command.

...

Since the proposed changes do not modify any existing behaviour, no performance impact is anticipated. Moreover, since restoring redundancy without performing a full rebalance is significantly less resource-intensive, the addition of this API and these gfsh commands provides a more performant option for cases in which only restoration of redundancy is wanted.

...