...

A pool has a unique name and a type (e.g. "cache"), which is defined by the pool implementation. A pool defines total limits for all components of its type that it manages - e.g. a "searcherFieldValueCache" pool knows how to handle components of the SolrCache type, manages all instances of SolrCache responsible for field value caching in all SolrCore-s, and defines total limits for all searcher field value caches across all SolrCore-s. There can be multiple pools of the same type (e.g. "cache") under different names and with different parameters (total limits, schedule, etc.), each managing a different set of components. Pool configuration specifies the initial limits as well as the interval between management runs - the resource manager is responsible for executing each pool's management at the specified intervals.

Limits are expressed as arbitrary name / value pairs that make sense for the specific pool implementation - e.g. for the "cache" pool type the supported limits are "maxRamMB" and "maxSize". By convention, limits use the same names as the corresponding component limits (controlled parameters - see below).

...

Story 1: controlling global cache RAM usage in a Solr node

SolrIndexSearcher caches are currently configured statically, using either item count limits or maxRamMB limits. We can only specify the limit per cache, and then cap the number of cores in a node to arrive at a hard total upper limit.

However, this is not enough, because it keeps the heap at the upper limit even when the actual consumption by caches might be far lower. It would be nice for a more active core to be able to use more heap for caches than another core with less traffic, while ensuring that total heap usage never exceeds a given threshold (the optimization aspect). It is also required that the total heap usage of caches doesn't exceed the maximum threshold, to ensure proper behavior of a Solr node (the control aspect).

In order to do this we need a control mechanism that is able to adjust individual cache sizes per core, based on the total hard limit and the actual current "need" of a core, defined as a combination of hit ratio, QPS, and other arbitrary quality factors / SLA. This control mechanism also needs to be able to forcibly reduce excessive usage (evenly? prioritized by collection's SLA?) when the aggregated heap usage exceeds the threshold.

In terms of the proposed API this scenario would work as follows:

  • global resource pools "searcher*Pool" are created with a hard limit on e.g. total maxRamMB.
  • these pools know how to manage components of a "cache" type - what parameters to monitor and what parameters to use in order to control their resource usage. This logic is encapsulated in the CacheManagerPool implementation.
  • all searcher caches from all cores register themselves in these pools for the purpose of managing their "cache" aspect.
  • the pools are executed periodically to check the current resource usage of all registered caches (monitored values), using e.g. the aggregated value of ramBytesUsed.
  • if this aggregated monitored value exceeds the total maxRamMB limit configured for the pool, the plugin adjusts the maxRamMB setting of each cache in order to reduce the total RAM consumption - currently this uses a simple proportional formula without any history (the P part of PID), with a dead-band in order to avoid thrashing (see the sketch after this list).
  • as a result of this action some of the cache content will be evicted sooner and more aggressively than initially configured, thus freeing more RAM.
  • when the memory pressure decreases, the CacheManagerPool may expand the maxRamMB settings of each cache to a multiple of the initially configured values. This is the optimization part.
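
A minimal sketch of that proportional hard-limit step, in Java. This is illustrative only, not the actual CacheManagerPool code: the ManagedCache interface and method names are hypothetical stand-ins; only maxRamMB, ramBytesUsed, the proportional formula, and the dead-band behavior are taken from the description above.

import java.util.List;

// Hypothetical stand-in for the monitored/controlled parameters of a SolrCache.
interface ManagedCache {
  long ramBytesUsed();            // monitored value
  double getMaxRamMB();           // controlled parameter
  void setMaxRamMB(double limit); // controlled parameter
}

class HardLimitControl {
  static void apply(List<ManagedCache> caches, double poolMaxRamMB, double deadBand) {
    double totalUsedMB = 0;
    for (ManagedCache c : caches) {
      totalUsedMB += c.ramBytesUsed() / (1024.0 * 1024.0);
    }
    // Dead-band: tolerate small overshoots to avoid thrashing.
    if (totalUsedMB <= poolMaxRamMB * (1.0 + deadBand)) {
      return;
    }
    // Proportional step: scale every cache's limit by the ratio of the pool
    // limit to the actual total usage, so expected usage falls back under the limit.
    double ratio = poolMaxRamMB / totalUsedMB;
    for (ManagedCache c : caches) {
      c.setMaxRamMB(c.getMaxRamMB() * ratio);
    }
  }
}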

Story 2: controlling global IO usage in a Solr node

Similarly to the scenario above, merge throttling (RateLimiter) can currently only be configured statically, per core; we can't monitor and control the total IO rates across all cores, which may easily lead to QoS degradation of other cores due to excessive merge rates of a particular core.

Although RateLimiter parameters can be dynamically adjusted, this functionality is not exposed, and there's no global control mechanism to ensure "fairness" of allocation of available IO (which is limited) between competing cores.

In terms of the proposed API this scenario would work as follows:

  • a global resource pool "mergeIOPool" is created with a single hard limit maxMBPerSec, picked as a fraction of the available hardware IO capability that still provides acceptable performance.
  • this pool knows how to manage components of a "mergeIO" type. It monitors their current resource usage (using SolrIndexWriter metrics) and knows how to adjust each core's ioThrottle. This logic is encapsulated in MergeIOManagerPool (doesn't exist yet).
  • all SolrIndexWriter-s in all cores register themselves in this pool for the purpose of managing their "mergeIO" aspect.

The rest of the scenario is similar to Story 1. As a result of the pool's adjustments, the merge IO rate of some of the cores may be decreased or increased according to the available pool of total IO.

Public Interfaces

  • ResourceManager - base class for resource management. Only one instance of resource manager is created per Solr node (CoreContainer).
    • DefaultResourceManager - default implementation.
  • ResourceManagerPoolFactory - base class for creating type-specific pool instances.
    • DefaultResourceManagerPoolFactory - default implementation, containing the default registry of pool implementations (currently just cache → CacheManagerPool).
  • ResourceManagerPool - base class for managing components in a pool.
    • CacheManagerPool - pool implementation specific to cache resource management.
  • ChangeListener - listener interface for component limit changes. Pools report any changes to their managed components' limits via this interface.
  • ManagedComponent - interface for components to be managed.
  • ManagedComponentId - hierarchical unique component ID.
  • SolrResourceContext - component's context that helps to register and unregister the component from its pool(s).
  • ResourceManagerAPI - public v2 API for pool operations (CRUD) and component operations (RUD).
  • ResourceManagerHandler




CacheManagerPool implementation

This pool implementation manages SolrCache components, and it supports "maxSize" and "maxRamMB" limits.

The control algorithm consists of two phases:

  • hard limit control - applied only when total monitored resource usage exceeds the pool limit. In this case the controlled parameters are evenly and proportionally reduced by the ratio of actual usage to the total limit.
  • optimization - performed only when the total limit is not exceeded: because optimization may not only shrink but also expand cache sizes, running it while over the limit could make a bad situation worse. Optimization uses hit ratio to determine whether to shrink or to expand each cache individually, while still staying within the total resource limits.

Some background on hit ratio vs. cache size: the relationship between cache size and hit ratio is positive and monotonic, i.e. a larger cache size leads to a higher hit ratio (an extreme example would be an unlimited cache that has perfect recall because it keeps all items). On the other hand, there is a point where increasing the size yields diminishing returns in terms of a higher hit ratio, if we also consider the cost of the resources it consumes. So there is a sweet spot in the cache size where the hit ratio is still "good enough" but the resource consumption is minimized. In the proposed PR this hit ratio threshold is 0.6, which is probably too high for realistic loads (should we use something like 0.4?).

Hit ratio, by definition, is an average outcome of several trials of a stochastic process. For this average to have a desired confidence there is a minimum number of trials (samples) needed. The PR uses the formula 0.5 / sqrt(lookups) as the accuracy of the hit ratio estimate after a given number of lookups - the default minimum of 100 lookups thus corresponds to 0.5 / sqrt(100) = 0.05, i.e. 5% accuracy. If there are fewer lookups between adjustments, the current hit ratio cannot be determined with enough confidence and the optimization is skipped.

Maximum possible adjustments are bounded by a maxAdjustRatio (by default 2.0). This means that the pool can grow or shrink each managed cache at most by this factor as compared to the initially configured limit. This functionality prevents the algorithm from ballooning or shrinking the cache indefinitely for very busy or very idle caches.
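
A sketch of the optimization phase under the parameters above (hit-ratio threshold, minimum lookup count, maxAdjustRatio). Again this is illustrative, not the actual CacheManagerPool code: it reuses the hypothetical ManagedCache interface from the Story 1 sketch, and the grow/shrink step factor is made up for illustration.

class OptimizationStep {
  static final double TARGET_HIT_RATIO = 0.6; // threshold discussed above
  static final long MIN_LOOKUPS = 100;        // ~5% accuracy per the formula above
  static final double MAX_ADJUST_RATIO = 2.0; // bound relative to the initial limit
  static final double STEP = 1.5;             // illustrative grow/shrink factor

  static void optimize(ManagedCache cache, double initialMaxRamMB,
                       long lookups, double hitRatio) {
    if (lookups < MIN_LOOKUPS) {
      return; // too few samples - the hit ratio estimate is too noisy
    }
    // Below the target hit ratio: expand, hoping to improve the hit ratio.
    // At or above it: shrink, since a larger size yields diminishing returns.
    double newLimit = hitRatio < TARGET_HIT_RATIO
        ? cache.getMaxRamMB() * STEP
        : cache.getMaxRamMB() / STEP;
    // Never drift more than MAX_ADJUST_RATIO away from the configured limit.
    newLimit = Math.min(newLimit, initialMaxRamMB * MAX_ADJUST_RATIO);
    newLimit = Math.max(newLimit, initialMaxRamMB / MAX_ADJUST_RATIO);
    cache.setMaxRamMB(newLimit);
  }
}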

(This algorithm is in fact a very simple P controller, without the I and D factors (yet).)

Resource management and component life-cycle

Components are created outside the scope of this framework, and then their creators may register the components with the framework (using the ManagedComponent.initializeManagedComponent(...) method). From then on the component is managed by at least one pool. When a component's close() method is called, its SolrResourceContext is responsible for unregistering the component from all pools - for this reason it's important to always call super.close() (or ManagedComponent.super.close()) in component implementations - failure to do so may result in object leaks.
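
A minimal sketch of this life-cycle from a component implementer's point of view. ManagedComponent and initializeManagedComponent(...) are the names used in this SIP; the cache class, the registration arguments, and the method signatures are hypothetical.

// Illustrative sketch - argument shapes are assumptions, not the actual API.
class MyManagedCache implements ManagedComponent {
  MyManagedCache(ResourceManager resourceManager, String poolName) {
    // Register this instance with its pool(s); this sets up the
    // component's SolrResourceContext.
    initializeManagedComponent(resourceManager, poolName);
  }

  @Override
  public void close() {
    // ... component-specific cleanup ...
    // Always call the default close() so that SolrResourceContext
    // unregisters this component from all pools - otherwise it leaks.
    ManagedComponent.super.close();
  }
}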

Components are always identified by a unique component ID, specific to this instance of a component, because there may be multiple instances of the same component under the same logical path. This model is similar to the one that already works well with complex Solr metrics (such as gauges), where an overlap in the life-cycle of logically identical metrics often occurs. E.g. when re-opening a searcher a new instance of SolrIndexSearcher is created, but the old one still remains open for some time. The new instance proceeds to register its caches as managed components (the respective pools then correctly reflect the fact that suddenly there's a spike in resource usage because the old searcher is not closed yet). After a while the old searcher is closed, at which point it unregisters its old caches from the framework, which again correctly reflects the fact that some resources have been released.

Proposed Changes

Internal changes

Framework and pool bootstraps

CoreContainer creates and initializes a single instance of ResourceManager in its load() method. This instance is configured using a new file, /resourceMgr/managerConfig.json. Several default pools are always created (at the moment they are all related to SolrIndexSearcher caches) but their parameters can be customized using /resourceMgr/poolConfigs.json.

SolrIndexSearcher.register() now also registers all its caches in their respective pools and unregisters them on close().

Other changes

  • SolrMetricsContext is now, as a rule, created for each child component, and it also includes the component's metric names and scope. This simplifies managing metrics and obtaining metric snapshots - and it was needed in order to construct fully-qualified component IDs for the resource API.
  • SolrCache.warm(...) also re-sets the limits (such as maxSize and maxRamMB) using the old cache's limits - this preserves custom limits from the old instance when a new instance replaces the old one.

User-level APIs

Config files

The instance of ResourceManager that is created and initialized uses the configuration in /resourceMgr/managerConfig.json. This contains the typical Solr plugin info, i.e. implementation class and its initArgs.
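
A hypothetical example of such a manager config. DefaultResourceManager is the default implementation listed under Public Interfaces, but the package name and the exact field names here are assumptions based on the typical Solr plugin info described above:

{
  "class": "org.apache.solr.managed.DefaultResourceManager",
  "initArgs": {}
}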

Pool configurations are kept in ZK in /resourceMgr/poolConfigs.json. Changes made to this file via API are watched by all live nodes, and upon change each node refreshes its internal copy of the config and re-configures local pools to match the config.

The content of the pool configurations file is a serialization of ResourceManagerAPI.ResourcePoolConfigs, which is basically a map of pool names to their configurations. Each pool configuration consists of the following:

  • name (required) - unique pool name
  • type (required) - one of the supported pool types (currently only "cache" is supported)
  • poolParams (optional) - a map of arbitrary key-value pairs containing runtime parameters of the pool. Currently supported parameters:
    • scheduleDelaysSeconds - how often the resource manager will invoke the pool's manage() method, which checks and controls the resource usage of its components.
  • poolLimits (optional) - a map of arbitrary key-value pairs containing total resource limits for the pool. Eg. for "cache" type pools these are currently:
    • maxSize - defines the total maximum number of elements in all cache instances in the pool
    • maxRamMB - defines the total maximum memory use of all cache instances in the pool

There are several pre-defined pools, which can be listed using the /cluster/resources API.

Example configuration in /resourceMgr/poolConfigs.json:

{
  "configs": {
    "searcherUserCache": {
      "name": "searcherUserCache",
      "type": "cache",
      "poolParams": {},
      "poolLimits": {
        "maxSize": 1000,
        "maxRamMB": -1
      }
    },
    ...
  }
}

Currently the PR doesn't use other configuration files or system properties.

Remote API

There's a new v2 ResourceManagerAPI accessible at /cluster/resources for managing cluster-level aspects of the framework (such as pool configurations, their limits and parameters) and at /node/resource for managing node-level parameters (such as directly modifying an individual component's limits).

Changes to pool configurations are persisted in ZK. Also, each node watches changes to this file, and upon change it reloads the config and re-configures local pools to match - this may include removing or adding pools and changing their limits and parameters.

Per-node (component) operations that select named items all treat the name as a prefix, i.e. the selected items are those that match the prefix provided as the name parameter. This is required because of the quickly changing identifiers of the components.

Update operations that use maps of key-value pairs as payload all use the same "partial update" semantics: new or existing values with the same keys are created/updated, null values cause existing keys to be deleted, and all other existing KV pairs are unchanged.
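
For example, a hypothetical setlimits payload under these semantics (the key names are illustrative):

{
  "maxSize": 2000,
  "maxRamMB": null
}

Here "maxSize" is created or updated, the null value deletes the existing "maxRamMB" entry, and all other existing limits remain unchanged.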

The following operations are supported:

  • Pool operations
    • Read API (GET):
      • (no payload): lists selected pools and their limits and parameters. Additional boolean request parameters are supported:
        • components - also list all components registered in the pool
        • limits - show limit values for each pool
        • params - show pool parameters
        • values - show current aggregated total values (resource usage) of all components in the pool
    • Write API (POST):
      • create - create a new pool, using the provided ResourcePoolConfig configuration, containing the pool name, pool type, and its initial parameters and resource limits (see the example after this list).
      • delete - delete an existing pool (and unregister its components). The name of the pool to delete can be obtained from the string payload or from the path (e.g. /cluster/resources/myPool).
      • setlimits - set, modify or delete existing pool(s) limits. The payload is a map of arbitrary key / value pairs.
      • setparams - set, modify or delete existing pool(s) parameters. The payload is a map of arbitrary key / value pairs.
  • Component operations
    • Read API (GET):
      • (no payload): list components in specified pool(s) and their current resource limits
    • Write API (POST):
      • setlimits - set the current limits of specified component(s). Payload is a map of key / value pairs defining the updated limits.
      • delete - unregister specified components from the pool(s) 
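
As an illustration, creating a pool via this API might look like the following request. The exact payload shape is hypothetical; the operation name and the configuration fields follow the descriptions above:

POST /cluster/resources
{
  "create": {
    "name": "myCachePool",
    "type": "cache",
    "poolParams": {},
    "poolLimits": {
      "maxSize": 5000,
      "maxRamMB": 512
    }
  }
}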

Compatibility, Deprecation, and Migration Plan

This is a new feature, so there's no deprecation involved. Some of the internal Java APIs are modified, but this functionality is scheduled for inclusion in Solr 9.0, so back-compat can be broken if needed.

Users can migrate to this framework gradually by specifying concrete resource limits in place of the defaults - the default settings create unlimited pools for searcher caches so the back-compat behavior remains the same.

Test Plan

An integration test TestCacheDynamics has been created to show the behavior of cache resource management under changing resource constraints. Obviously more tests are needed on a real cluster.

An integration test TestResourceManagerIntegration exercises the REST API. 


Ref Guide content

Most of the material in this SIP, plus example configurations, will become a new section in the Ref Guide. 

