
...

  • Why is Solr performance so bad?
  • Why does Solr take so long to start up?
  • Why is SolrCloud acting like my servers are failing when they are fine?

This is an attempt to give basic information only. For a better understanding of the issues involved, read the included links, look for other resources, and ask well-thought-out questions via Solr support resources.

This guide does not contain info specific to Solr installs that are using HDFS for index storage. If anyone has any info about how to achieve effective index data caching with HDFS, please share that information with the solr-user mailing list or the #solr IRC channel on freenode so it can be incorporated here.

Table of Contents

General information

There is a performance bug that makes *everything* slow in versions 6.4.0 and 6.4.1. The problem, described by SOLR-10130, is fixed in 6.4.2. This is highly version specific, so if you are not running one of the affected versions, don't worry about it. The rest of this document, outside of this one paragraph, is not specific to any version of Solr.

...

  • A large index.
  • Frequent updates.
  • Super large documents.
  • Extensive use of faceting.
  • Using a lot of different sort parameters.
  • Very large Solr caches.
  • A large RAMBufferSizeMB.
  • Use of Lucene's RAMDirectoryFactory.

How much heap space do I need?

...

Here is an incomplete list, in no particular order, of ways to reduce heap requirements, based on the list above of things that require a lot of heap:

  • Take a large index and make it distributed - break your index into multiple shards.
    • One very easy way to do this is to switch to SolrCloud. You may need to reindex, but SolrCloud will handle all the sharding for you. This doesn't actually reduce the overall memory requirement for a large index (it may actually increase it slightly), but a sharded index can be spread across multiple servers, with each server having lower memory requirements. For redundancy, there should be multiple replicas on different servers.
    • If the query rate is very low, putting multiple shards on a single server will perform well. As the query rate increases, it becomes important to only have one shard replica per server.
  • Don't store all your fields, especially the really big ones.
    • Instead, have your application retrieve detail data from the original data source, not Solr.
    • Note that doing this will mean that you cannot use Atomic Updates.
  • You can also enable docValues on fields used for sorting/facets and reindex.
  • Reduce the number of different sort parameters. Just like for facets, docValues can have a positive impact on both performance and memory usage for sorting.
  • Reduce the size of your Solr caches.
  • Reduce RAMBufferSizeMB. The default in recent Solr versions is 100.
    • This value can be particularly important if you have a lot of cores, because a buffer will be used for each core.
  • Don't use RAMDirectoryFactory - instead, use the default and install enough system RAM so the OS can cache your entire index as discussed above.
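For the docValues suggestion above, enabling them is a schema change followed by a full reindex. A minimal sketch of what the field definitions might look like in schema.xml (field names and types here are illustrative, not from the original text):

```xml
<!-- schema.xml: illustrative fields used only for sorting/faceting.
     stored="false" keeps the stored data out of the index;
     docValues="true" moves sort/facet data to an on-disk, OS-cached structure. -->
<field name="price"    type="long"   indexed="true" stored="false" docValues="true"/>
<field name="category" type="string" indexed="true" stored="false" docValues="true"/>
```

Because docValues are read from disk through the OS page cache rather than built on the Java heap, this trades heap pressure for disk cache usage, which is exactly the trade this document recommends.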

GC pause problems

When you have a large heap (larger than 2GB), garbage collection pauses can be a major problem. This is usually caused by occasionally required full garbage collections that must "stop the world" – pause all program execution to clean up memory. There are two main solutions: one is to use a commercial low-pause JVM like Zing, which does come with a price tag; the other is to tune the free JVM you've already got. GC tuning is an art form, and what works for one person may not work for you.
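As a starting point, GC options for the JVM that runs Solr can be set through the GC_TUNE variable in solr.in.sh, which the bin/solr startup script picks up. The specific flags below are an illustrative G1 sketch, not a recommendation; test any settings against your own query load:

```shell
# solr.in.sh (e.g. /etc/default/solr.in.sh on a service install)
# Illustrative G1 settings -- measure pause times under real load before adopting.
GC_TUNE=" \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=250 \
  -XX:+ParallelRefProcEnabled"
```

Note that MaxGCPauseMillis is a target the collector tries to honor, not a guarantee.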

...

Manually tuning the sizes of the various heap generations is very important with CMS. The G1 collector automatically tunes the sizes of the generations as it runs, and forcing the sizes will generally result in lower performance.

...

Asking for millions of rows, e.g. rows=9999999, in combination with a high query rate is a known combination that can cause lots of full GC problems on moderate-size indexes (5-10 million documents). Even if the number of actual hits is very low, the fact that the client requests a huge number of rows will cause the allocation of tons of Java objects (one ScoreDoc per row requested) and also reserve valuable RAM (28 bytes per row). So asking for "all" docs using a high rows param does not come for free. You will see lots of garbage collection going on, and memory consumption rising until a full GC is triggered. Increasing the heap may help sometimes, but eventually you'll end up with long pauses, so the root problem needs to be fixed. Read Toke Eskildsen's blog post about the details of the problem and his suggestions for improving Solr's code.
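The per-request reservation described above can be estimated directly. The 28-bytes-per-ScoreDoc figure comes from the text; the arithmetic is just illustration:

```python
rows = 9_999_999            # rows= value sent by the client
bytes_per_scoredoc = 28     # approximate reservation per requested row (see above)

reserved = rows * bytes_per_scoredoc
print(f"~{reserved / 2**20:.0f} MB reserved per in-flight request")
```

That is roughly 267 MB per in-flight request, regardless of how many documents actually match, which is why a handful of concurrent clients using a huge rows value can exhaust the heap.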

The simple solution is to ask for fewer rows, or if you need to get a huge number of docs, switch to /export, cursorMark, or streaming.

If you have no control over the client, you can instead set rows in an invariants section of solrconfig.xml, or if it needs to be dynamic, cap the maximum value allowed through a custom SearchComponent, such as the RequestSanitizerComponent.
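A sketch of the invariants approach in solrconfig.xml (the handler name and the value 100 are illustrative; parameters listed under invariants override whatever the client sends):

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="invariants">
    <!-- clients cannot override parameters listed in this section -->
    <int name="rows">100</int>
  </lst>
</requestHandler>
```

The trade-off is that this cap applies to every request through that handler, including legitimate ones, which is why a dynamic limit needs the SearchComponent approach instead.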

...

SSD

Solid State Disks are amazing. They have high transfer rates and pretty much eliminate the latency problems associated with randomly accessing data.

...

One potential problem with SSD is that operating system TRIM support is required for good long-term performance. For single disks, TRIM is usually well supported, but if you want to add any kind of hardware RAID (and most software RAID as well), TRIM support disappears. At the time of this writing, it seems that only Intel supports a solution, and that is limited to Windows 7 or later and RAID 0. One way to make this less of a problem with Solr is to put your OS and Solr itself on a RAID of regular disks, and put your index data on a lone SSD. On a proper Solr setup, if the SSD fails, your redundant server(s) will still be there to handle requests.

...

Turning on autoCommit in your solrconfig.xml update handler definition is the solution:


<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>25000</maxDocs>
    <maxTime>300000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <updateLog />
</updateHandler>

...

  • Large autowarmCount values on Solr caches.
  • Heap size issues. Problems from the heap being too big will tend to be infrequent, while problems from the heap being too small will tend to happen consistently.
  • Extremely frequent commits.
  • Not enough OS memory for disk caching, discussed above.

If you have large autowarmCount values on your Solr caches, it can take a very long time to do that cache warming. The filterCache is particularly slow to warm. The solution is to reduce the autowarmCount, reduce the complexity of your queries, or both.
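For illustration, a filterCache definition in solrconfig.xml with a modest autowarmCount (the class name and sizes are illustrative and vary by Solr version):

```xml
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="16"/>
```

Each autowarmed filterCache entry re-executes a filter query against the new searcher, so warming time grows with both autowarmCount and the cost of your filter queries; a small value keeps new searchers available quickly after a commit.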

...