...

Story 1: controlling global cache RAM usage in a Solr node

SolrIndexSearcher caches are currently configured statically, using either item-count limits or maxRamMB limits. Limits can only be specified per cache, so the only way to arrive at a hard total upper limit is to also cap the number of cores in a node.

However, this is not enough because it leads to keeping the heap at the upper limit when the actual consumption by caches may be far lower. It'd be nice for a more active core to be able to use more heap for caches than another core with less traffic, while ensuring that total heap usage never exceeds a given threshold (the optimization aspect). It is also required that the total heap usage of caches never exceeds the maximum threshold, to ensure proper behavior of the Solr node (the control aspect).

In order to do this we need a control mechanism that can adjust individual cache sizes per core, based on the total hard limit and each core's actual current "need", defined as a combination of hit ratio, QPS, and other arbitrary quality factors / SLA. This control mechanism also needs to be able to forcibly reduce excessive usage (evenly? prioritized by collection's SLA?) when the aggregated heap usage exceeds the threshold.

In terms of the proposed API this scenario would work as follows:

  • global resource pools "searcher*Pool" are created with a hard limit, e.g. on total maxRamMB.
  • these pools know how to manage components of a "cache" type - i.e. what parameters to monitor and what parameters to use in order to control their resource usage. This logic is encapsulated in the CacheManagerPool implementation.
  • all searcher caches from all cores register themselves in these pools for the purpose of managing their "cache" aspect.
  • the pools are executed periodically to check the current resource usage of all registered caches (monitored values), e.g. the aggregated value of ramBytesUsed.
  • if this aggregated monitored value exceeds the pool's total maxRamMB limit, the plugin adjusts the maxRamMB setting of each cache in order to reduce the total RAM consumption - currently using a simple proportional formula without any history (the P part of PID), with a dead-band in order to avoid thrashing (see the sketch after this list).
  • as a result of this action some of the cache content will be evicted sooner and more aggressively than initially configured, thus freeing more RAM.
  • when the memory pressure decreases, CacheManagerPool may expand the maxRamMB setting of each cache up to a multiple of the initially configured value. This is the optimization part.
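To make the control step concrete, below is a minimal sketch of the proportional reduction with a dead-band described above. CacheInfo and its fields are hypothetical stand-ins for illustration only, not the actual CacheManagerPool API:

    import java.util.List;

    // Illustrative only: CacheInfo and its fields are hypothetical stand-ins.
    class CacheInfo {
      long ramBytesUsed; // current monitored usage
      int maxRamMB;      // controlled limit
    }

    class ProportionalController {
      // Shrink each cache's maxRamMB when the pool's total usage exceeds the limit.
      static void adjust(List<CacheInfo> caches, long poolMaxRamBytes, double deadBand) {
        long totalUsed = caches.stream().mapToLong(c -> c.ramBytesUsed).sum();
        double ratio = (double) totalUsed / poolMaxRamBytes;
        if (Math.abs(1.0 - ratio) < deadBand) {
          return; // inside the dead-band: do nothing, to avoid thrashing
        }
        if (ratio > 1.0) {
          for (CacheInfo c : caches) {
            // simple proportional ("P") correction, no history kept
            c.maxRamMB = Math.max(1, (int) (c.maxRamMB / ratio));
          }
        }
      }
    }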

Story 2: controlling global IO usage in a Solr node

As in the scenario above, currently we can only statically configure merge throttling (RateLimiter) per core; we can't monitor and control the total IO rates across all cores, which may easily lead to QoS degradation of other cores due to excessive merge rates of a particular core.

Although RateLimiter parameters can be dynamically adjusted, this functionality is not exposed, and there's no global control mechanism to ensure "fairness" of allocation of available IO (which is limited) between competing cores.
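For illustration, Lucene's RateLimiter.SimpleRateLimiter already allows the rate to be changed on the fly; what is missing is a node-level controller that decides when to call it:

    import org.apache.lucene.store.RateLimiter;

    public class MergeRateDemo {
      public static void main(String[] args) {
        // today merge throttling is configured statically, e.g. 20 MB/sec per core
        RateLimiter.SimpleRateLimiter limiter = new RateLimiter.SimpleRateLimiter(20.0);
        // ...but the rate can be adjusted dynamically; a global controller could
        // lower it when the node-wide IO budget is exceeded:
        limiter.setMBPerSec(10.0);
      }
    }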

In terms of the proposed API this scenario would work as follows:

  • a global resource pool "mergeIOPool" is created with a single hard limit maxMBPerSec, picked as a fraction of the available hardware capabilities that still provides acceptable performance.
  • this pool knows how to manage components of a "mergeIO" type. It monitors their current resource usage (using SolrIndexWriter metrics) and knows how to adjust each core's ioThrottle. This logic is encapsulated in MergeIOManagerPool (doesn't exist yet).
  • all SolrIndexWriter-s in all cores register themselves in this pool for the purpose of managing their "mergeIO" aspect.

The rest of the scenario is similar to Story 1. As a result of the pool's adjustments, the merge IO rate of some cores may be decreased or increased according to the available pool of total IO.

Public Interfaces

  • ResourceManager - base class for resource management.
    • DefaultResourceManager - default implementation.
  • ResourceManagerPoolFactory - base class for creating type-specific pool instances.
    • DefaultResourceManagerPoolFactory - default implementation, containing default registry of pool implementations (currently just "cache" → CacheManagerPool).
  • ResourceManagerPool - base class for managing components in a pool.
    • CacheManagerPool - pool implementation specific to cache resource management.
  • ChangeListener - listener interface for component limit changes. Pools report any changes to their managed components' limits via this interface.
  • ManagedComponent - interface for components to be managed.
  • ManagedComponentId - hierarchical unique component ID.
  • SolrResourceContext - component's context that helps to register and unregister the component from its pool(s).
  • ResourceManagerHandler - public API for pool operations (CRUD) and resource operations (RUD).


CacheManagerPool implementation

This pool implementation manages SolrCache components, and it supports "maxSize" and "maxRamMB" limits.

...

  • hard limit control - applied only when total monitored resource usage exceeds the pool limit. In this case the controlled parameters are evenly and proportionally reduced by the ratio of actual usage to the total limit.
  • optimization - performed only when the total limit is not exceeded, because the optimization may not only shrink but also expand cache sizes, which would make a bad situation worse if the pool were already over its limit.


Some background on hitRatio vs. cache size: the relationship between cache size and hit ratio is positive and monotonic, i.e. a larger cache size leads to a higher hit ratio (an extreme example would be an unlimited cache, which has a perfect hit ratio because it keeps all items). On the other hand, there's a point where increasing the size yields diminishing returns in terms of a higher hit ratio, once we also consider the cost of the resources it consumes. So there's a sweet spot in the cache size where the hit ratio is still "good enough" but the resource consumption is minimized. In the proposed PR this hit ratio threshold is 0.6, which is probably too high for realistic loads (should we use something like 0.4?).

Hit ratio, by its definition, is an average outcome of several trials of a stochastic process. For this average to reach a desired confidence, a minimum number of trials (samples) is needed. The PR uses the formula 0.5 / sqrt(lookups) to estimate the error margin of the measured hit ratio - the default of 100 lookups corresponds to 5% accuracy. If there are fewer lookups between adjustments, the current hit ratio cannot be determined with enough confidence and the optimization is skipped.

The maximum possible adjustment is bounded by maxAdjustRatio (by default 2.0). This means that the pool can grow or shrink each managed cache at most by this factor, relative to the initially configured limit. This prevents the algorithm from ballooning or shrinking a cache indefinitely for very busy or very idle caches.
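Putting these pieces together (the hit ratio threshold, the confidence check, and the maxAdjustRatio bound), the optimization step could look roughly like the sketch below. The names, growth step, and structure are illustrative, not the actual PR code:

    // Illustrative sketch of the optimization step, not the actual PR code.
    class CacheOptimizer {
      static final double TARGET_HIT_RATIO = 0.6; // threshold proposed in the PR
      static final double MAX_ERROR = 0.05;       // 5% accuracy
      static final double MAX_ADJUST_RATIO = 2.0; // cumulative growth/shrink bound

      // Returns a new maxRamMB for one cache, or the current value if no change is warranted.
      static int optimize(long lookups, double hitRatio, int currentMaxRamMB, int initialMaxRamMB) {
        // confidence check: the error estimate 0.5 / sqrt(lookups) must not exceed 5%,
        // which requires at least 100 lookups between adjustments
        if (0.5 / Math.sqrt(lookups) > MAX_ERROR) {
          return currentMaxRamMB; // not enough samples, skip optimization
        }
        // grow when the hit ratio is below the target, shrink when above it
        double step = hitRatio < TARGET_HIT_RATIO ? 1.1 : 0.9; // illustrative step size
        int proposed = (int) (currentMaxRamMB * step);
        // bound the cumulative adjustment relative to the initially configured limit
        int upper = (int) (initialMaxRamMB * MAX_ADJUST_RATIO);
        int lower = (int) (initialMaxRamMB / MAX_ADJUST_RATIO);
        return Math.max(lower, Math.min(upper, proposed));
      }
    }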

(This algorithm is in fact a very simple PID controller, but without the I and D factors (yet).)

Resource management and component life-cycle

Components are created outside the scope of this framework, and their creators may then register the components with the framework (using the ManagedComponent.initializeManagedComponent(...) method). From then on the component is managed by at least one pool. When a component's close() method is called, its SolrResourceContext is responsible for unregistering the component from all pools - for this reason it's important to always call super.close() (or ManagedComponent.super.close()) in component implementations; failure to do so may result in object leaks.
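A sketch of the expected implementation pattern is shown below; the interface here is a simplified, hypothetical version of the one in the PR:

    import java.io.Closeable;
    import java.io.IOException;

    // Hypothetical, simplified shape of the framework interface described above;
    // the actual signatures live in the PR and may differ.
    interface ManagedComponent extends Closeable {
      default void initializeManagedComponent(Object resourceManager, String poolType) {
        // real version: registers this component (via its SolrResourceContext)
        // with the pool(s) of the given type
      }
      @Override
      default void close() throws IOException {
        // real version: SolrResourceContext unregisters this component from all pools
      }
    }

    // The important pattern for implementors: always chain to the interface's
    // close(), otherwise the component is never unregistered and leaks.
    class MyCache implements ManagedComponent {
      MyCache(Object resourceManager) {
        initializeManagedComponent(resourceManager, "cache");
      }
      @Override
      public void close() throws IOException {
        // component-specific cleanup first ...
        ManagedComponent.super.close(); // ... then unregister from all pools
      }
    }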

Components are always identified by a unique component ID, specific to the given instance of a component, because there may be multiple instances of the same component under the same logical path. This is similar to the model that already works well for complex Solr metrics (such as gauges), where the life-cycles of logically identical metrics often overlap. E.g. when re-opening a searcher a new instance of SolrIndexSearcher is created while the old one still remains open for some time. The new instance registers its caches as managed components (and the respective pools correctly reflect the fact that there's suddenly a spike in resource usage, because the old searcher is not closed yet). After a while the old searcher is closed, at which point it unregisters its caches from the framework, which again correctly reflects the fact that some resources have been released.

Proposed Changes

Internal changes

Framework and pool bootstraps

CoreContainer creates and initializes a single instance of ResourceManager in its load() method. This instance is configured using a new section in /clusterprops.json/poolConfigs. Several default pools are always created (at the moment they are all related to SolrIndexSearcher caches) but their parameters can be customized using clusterprops.
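The exact schema of that section is defined by the implementation; purely as an illustration, and with hypothetical pool names and structure, it might look like:

    {
      "poolConfigs": {
        "searcherFilterCachePool": {
          "type": "cache",
          "poolLimits": { "maxRamMB": 256 }
        },
        "searcherDocumentCachePool": {
          "type": "cache",
          "poolLimits": { "maxRamMB": 512 }
        }
      }
    }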

SolrIndexSearcher.register() now also registers all its caches in their respective pools and unregisters them on close().

Other changes

  • SolrMetricsContext is now, as a rule, created for each child component, and it also includes the component's metric names and scope. This simplifies managing metrics and obtaining metric snapshots, and it was needed in order to construct fully-qualified component IDs for the resource API.
  • SolrCache.warm(...) now also re-sets the limits (such as maxSize and maxRamMB) using the old cache's limits - this preserves custom limits from the old instance when a new instance replaces it (see the sketch below).
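A rough sketch of that behavior, assuming illustrative accessor names on SolrCache (the real method names may differ):

    // Inside a SolrCache implementation; accessor names are illustrative.
    public void warm(SolrIndexSearcher searcher, SolrCache<K, V> old) {
      // re-apply the old instance's (possibly pool-adjusted) limits so that
      // custom values survive the replacement of the cache on searcher re-open
      setMaxSize(old.getMaxSize());
      setMaxRamMB(old.getMaxRamMB());
      // ... then proceed with the usual autowarming of entries
    }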

User-level APIs

Config files

Remote API


Compatibility, Deprecation, and Migration Plan

...

This is a new feature, so no deprecation is involved. Some internal Java APIs are modified, but this functionality is scheduled to be included in Solr 9.0, so back-compat can be broken if needed.

Users can migrate to this framework gradually by specifying concrete resource limits in place of the defaults - the default settings create unlimited pools for searcher caches, so the back-compat behavior remains the same.

  • When will we remove the existing behavior?

...