
Table of Contents

...

Requirements


  1. Allow users to create Lucene indexes on data stored in Geode
  2. Update the indexes asynchronously to avoid impacting write latency
  3. Allow users to perform text (Lucene) searches on Geode data using the Lucene index. Results from text searches may be stale due to asynchronous index updates.
  4. Provide high availability of indexes using Geode's HA capabilities
  5. Provide high-throughput indexing and querying by partitioning index data to match the data partitioning in Geode
  6. Scalability
  7. Performance comparable to RAMDirectory

Out of Scope
  1. Building the next/better Solr/Elasticsearch.
  2. Enhancing the current Geode OQL to use the Lucene index.

Related Documents

A previous integration of Lucene and GemFire:

   

Similar efforts done by other data products:

  • Hibernate Search
  • Solandra: embeds Solr in Cassandra.

Terminology

  • Documents: In Lucene, a Document is the unit of search and index. An index consists of one or more Documents.
  • Fields: A Document consists of one or more Fields. A Field is simply a name-value pair.
  • Indexing involves adding Documents to an IndexWriter; searching involves retrieving Documents from an index via an IndexSearcher (see the sketch below).
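For concreteness, a minimal standalone Lucene example of these terms (assuming Lucene 5.x APIs; this is plain Lucene, not Geode code):

Code Block
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.*;
import org.apache.lucene.search.*;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class LuceneTermsDemo {
  public static void main(String[] args) throws Exception {
    Directory dir = new RAMDirectory(); // in-memory index

    // Indexing: add Documents (each made of Fields) to an IndexWriter
    IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
    Document doc = new Document();
    doc.add(new TextField("sideEffects", "nausea headache", Field.Store.NO)); // analyzed
    doc.add(new StringField("manufacturer", "Acme", Field.Store.YES));        // single token
    writer.addDocument(doc);
    writer.close();

    // Searching: retrieve Documents from the index via an IndexSearcher
    IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
    TopDocs hits = searcher.search(new TermQuery(new Term("sideEffects", "nausea")), 10);
    System.out.println("hits: " + hits.totalHits);
  }
}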


API

User Input

  1. A region and a list of fields to be indexed
  2. [Optional] An Analyzer (the standard analyzer by default, or a custom implementation) to be used with all the fields in an index
  3. [Optional] Field types. A string can be indexed as Text or String in Lucene; the two behave differently, as the sketch below illustrates.
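A minimal sketch of that Text vs. String difference (again assuming Lucene 5.x field classes): a TextField is analyzed into tokens, while a StringField is indexed as one exact token.

Code Block
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.*;
import org.apache.lucene.search.*;
import org.apache.lucene.store.RAMDirectory;

public class FieldTypeDemo {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
    Document doc = new Document();
    // TextField is analyzed: "mild nausea" -> tokens [mild][nausea]
    doc.add(new TextField("asText", "mild nausea", Field.Store.NO));
    // StringField is NOT analyzed: indexed as the single token "mild nausea"
    doc.add(new StringField("asString", "mild nausea", Field.Store.NO));
    writer.addDocument(doc);
    writer.close();

    IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
    // Matches: the analyzed field contains the token "nausea"
    System.out.println(searcher.search(new TermQuery(new Term("asText", "nausea")), 1).totalHits);   // 1
    // No match: the un-analyzed field only contains the exact token "mild nausea"
    System.out.println(searcher.search(new TermQuery(new Term("asString", "nausea")), 1).totalHits); // 0
  }
}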

...

Users will interact with a new LuceneService interface, which provides methods for creating and querying indexes. Users can also create indexes through gfsh or cache.xml.

Java API 

...

LuceneService

 

Code Block
/**
 * Create a lucene index using the default analyzer.
 */
public LuceneIndex createIndex(String indexName, String regionName, String... fields);

/**
 * Create a lucene index using a specified analyzer per field.
 */
public LuceneIndex createIndex(String indexName, String regionName,
    Map<String, Analyzer> analyzerPerField);

public void destroyIndex(LuceneIndex index);

public LuceneIndex getIndex(String indexName, String regionName);

public Collection<LuceneIndex> getAllIndexes();

/**
 * Get a factory for building queries.
 */
public LuceneQueryFactory createLuceneQueryFactory();

LuceneQueryFactory

Code Block
public enum ResultType {
    /**
     *  Query results only contain value, which is the default setting.
     *  If field projection is specified, use projected fields' values instead of whole domain object
     */
    VALUE,
    
    /**
     * Query results contain score
     */
    SCORE,
    
    /**
     * Query results contain key
     */
    KEY
  };
 /**
   * Set the page size for query results. The default page size is 0, which means no pagination.
   * If a negative value is specified, an IllegalArgumentException is thrown.
   * @param pageSize
   * @return itself
   */
  LuceneQueryFactory setPageSize(int pageSize);
  
  /**
   * Set the maximum number of results for a query.
   * If the specified limit is less than or equal to zero, an IllegalArgumentException is thrown.
   * @param limit
   * @return itself
   */
  LuceneQueryFactory setResultLimit(int limit);
  
  /**
   * Set whether to include SCORE and/or KEY in the result.
   * 
   * @param resultTypes
   * @return itself
   */
  LuceneQueryFactory setResultTypes(ResultType... resultTypes);
  
  /**
   * Set a list of fields for result projection.
   * 
   * @param fieldNames
   * @return itself
   */
  LuceneQueryFactory setProjectionFields(String... fieldNames);
  
  /**
   * Create a wrapper object for Lucene's QueryParser. The queryString uses
   * Lucene QueryParser syntax, which provides an easy-to-use, human-readable
   * way to express queries.
   *  
   * @param indexName index name
   * @param regionName region name
   * @param queryString query string in lucene QueryParser's syntax
   * @param analyzer lucene Analyzer to parse the queryString
   * @return LuceneQuery object
   * @throws ParseException
   */
  public LuceneQuery create(String indexName, String regionName, String queryString, 
      Analyzer analyzer) throws ParseException;
  
  /**
   * Create a wrapper object for Lucene's QueryParser, using the default standard analyzer.
   * The queryString uses Lucene QueryParser syntax, which provides an easy-to-use,
   * human-readable way to express queries.
   *  
   * @param indexName index name
   * @param regionName region name
   * @param queryString query string in lucene QueryParser's syntax
   * @return LuceneQuery object
   * @throws ParseException
   */
  public LuceneQuery create(String indexName, String regionName, String queryString) 
      throws ParseException;
  
  /**
   * Create wrapper object for lucene's Query object.
   * Advanced Lucene users can construct their own Query object and use it directly with this API.  
   * 
   * @param indexName index name
   * @param regionName region name
   * @param query lucene Query object
   * @return LuceneQuery object
   */
  public LuceneQuery create(String indexName, String regionName, Query query);

 

LuceneQuery

Code Block
/**
 * Provides wrapper object of Lucene's Query object and execute the search. 
 * <p>Instances of this interface are created using
 * {@link LuceneQueryFactory#create}.
 * 
 */
public interface LuceneQuery {
  /**
   * Execute the search and get results. 
   */
  public LuceneQueryResults<?> search();
  
  /**
   * Get page size setting of current query. 
   */
  public int getPageSize();
  
  /**
   * Get limit size setting of current query. 
   */
  public int getLimit();
  
  /**
   * Get result types setting of current query. 
   */
  public ResultType[] getResultTypes();
  
  /**
   * Get projected fields setting of current query. 
   */
  public String[] getProjectedFieldNames();
}

LuceneResultStruct

Code Block
/**
 * Abstract data structure for one item in the query result.
 * 
 * @author Xiaojian Zhou
 * @since 8.5
 */
public interface LuceneResultStruct {
  /**
   * Return the value associated with the given field name
   *
   * @param fieldName the String name of the field
   * @return the value associated with the specified field
   * @throws IllegalArgumentException If this struct does not have a field named fieldName
   */
  public Object getProjectedField(String fieldName);
  
  /**
   * Return key of the entry
   *
   * @return key
   * @throws IllegalArgumentException If this struct does not contain key
   */
  public Object getKey();
  
  /**
   * Return value of the entry
   *
   * @return value the whole domain object
   * @throws IllegalArgumentException If this struct does not contain value
   */
  public Object getValue();
  
  /**
   * Return score of the query 
   *
   * @return score
   * @throws IllegalArgumentException If this struct does not contain score
   */
  public Double getScore();
  
  /**
   * Get the ordered list of names in the result.
   * An item in the list can be either a ResultType or a field name.
   * @return the array of names
   */
  public Object[] getNames();
  
  /**
   * Get the values, in the same order as the names list
   * @return the array of values
   */
  public Object[] getResultValues();
}

Examples

How to use the APIs:

 
Code Block
// Get LuceneService
LuceneService luceneService = LuceneService.get(cache);

// Create an index on the given fields
LuceneIndex index = luceneService.createIndex(indexName, regionName, "field1", "field2", "field3");

// Create an index on fields with a specified analyzer per field
LuceneIndex index = luceneService.createIndex(indexName, regionName, analyzerPerField);

// Create a query
LuceneQuery query = luceneService.createLuceneQueryFactory().setResultLimit(200).setPageSize(20)
  .setResultTypes(SCORE, VALUE, KEY).setProjectionFields("field1", "field2")
  .create(indexName, regionName, queryString, analyzer);

// Search using the query
LuceneQueryResults results = query.search();

List values = results.getNextPage(); // if pagination is not used, returns all results in one page

// Pagination
while (results.hasNextPage()) {
  List page = results.getNextPage(); // return results page by page

  for (LuceneResultStruct r : page) {
    System.out.println(r.getValue());
  }
}

 

Gfsh API

 

Code Block
// Create Index
gfsh> create lucene-index --name=indexName --region=/drugs --fields=sideEffects,manufacturer

// Destroy Index
gfsh> destroy lucene-index --name=indexName --region=/drugs

// Execute Lucene query
gfsh> luceneQuery --regionName=/drugs --queryStrings="" --limit=100 --page-size=10

 

XML Configuration 

 

Code Block
<region name="drugsregion">  
 <lucene-index indexName="luceneIndex">
             <FieldDefinition name="fieldName" analyzer="KeywordAnalyzer"/> 
 </lucene-index>
</region>

 

REST API

TBD

Spring Data GemFire Support

TBD - But the Searchable annotation described in this blog might be a good place to start.

Implementation

Index Storage

The Lucene indexes will be stored in memory instead of on disk. This will be done by implementing a Lucene FSDirectory called GeodeFSDirectory, which uses Geode as a flat file system. This way we get all the benefits offered by Geode, and we can achieve replication and sharding of the indexes. The Lucene indexes will be co-located with the region they are defined on.
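To make the flat-file-system idea concrete, here is a sketch that stores a "file" as fixed-size chunks across two regions; the region layout, key shape, and chunk size are illustrative assumptions, not the settled design.

Code Block
import java.util.Arrays;
import org.apache.geode.cache.Region;

public class GeodeFileStore {
  private static final int CHUNK_SIZE = 1024 * 1024; // 1 MB chunks, illustrative

  private final Region<String, Integer> fileRegion; // file name -> chunk count (metadata)
  private final Region<String, byte[]> chunkRegion; // "fileName#chunkId" -> raw bytes

  GeodeFileStore(Region<String, Integer> fileRegion, Region<String, byte[]> chunkRegion) {
    this.fileRegion = fileRegion;
    this.chunkRegion = chunkRegion;
  }

  // Write a "file" by splitting its contents into chunk entries.
  void write(String fileName, byte[] contents) {
    int chunks = (contents.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
    for (int i = 0; i < chunks; i++) {
      int from = i * CHUNK_SIZE;
      int to = Math.min(from + CHUNK_SIZE, contents.length);
      chunkRegion.put(fileName + "#" + i, Arrays.copyOfRange(contents, from, to));
    }
    fileRegion.put(fileName, chunks);
  }

  // Read the file back by concatenating its chunks in order.
  byte[] read(String fileName) {
    int chunks = fileRegion.get(fileName);
    java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
    for (int i = 0; i < chunks; i++) {
      byte[] chunk = chunkRegion.get(fileName + "#" + i);
      out.write(chunk, 0, chunk.length);
    }
    return out.toByteArray();
  }
}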

 

PlantUML
[Lucene Indexer] --> [GeodeFSDirectory]
() "User"
node "Colocated and Replicated" {
  () User --> [User Region] : Puts
  [User Region] --> [Async Queue]
  [Async Queue] --> [Lucene Indexer] : Batch Writes
  [GeodeFSDirectory] --> [Lucene Regions]
}

  

Partitioned region data flow

PlantUML
() User -down-> [Cache] : PUTs
node cluster {
 database {
 () "indexBucket1Primary"
 }

 database {
 () "indexBucket1Secondary"
 }

[Cache] ..> [Bucket 1]
 [Bucket 1] -down-> [Async Queue Bucket 1]
[Async Queue Bucket 1] -down-> [FSDirectoryBucket1] : Batch Write
[FSDirectoryBucket1] -> indexBucket1Primary
indexBucket1Primary -right-> indexBucket1Secondary

 database {
 () "indexBucket2Primary"
 }

 database {
 () "indexBucket2Secondary"
 }

[Cache] ..> [Bucket 2]
 [Bucket 2] -down-> [Async Queue Bucket 2]
 [Async Queue Bucket 2] -down-> [FSDirectoryBucket2] : Batch Write
 [FSDirectoryBucket2] -> indexBucket2Primary
 indexBucket2Primary -right-> indexBucket2Secondary 
}
 


In a partitioned region, every bucket in the region will have its own GeodeDirectory to store the Lucene indexes. The GeodeDirectory implements a file system using two regions:
  • FileRegion: holds the metadata about index files
  • ChunkRegion: holds the actual data chunks for a given index file

The FileRegion and ChunkRegion will be co-located with the data region that is to be indexed. The GeodeFSDirectory will use a key that contains the bucket id for file metadata and chunks. The FileRegion and ChunkRegion will have a partition resolver that looks only at the bucket id part of the key, as in the sketch below.
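A minimal sketch of such a resolver, assuming current org.apache.geode package names; the ChunkKey class and its layout are an illustrative stand-in for the actual key design.

Code Block
import org.apache.geode.cache.EntryOperation;
import org.apache.geode.cache.PartitionResolver;

// Hypothetical key for file/chunk entries: carries the id of the data bucket
// whose index data it belongs to, plus a file/chunk identifier.
class ChunkKey implements java.io.Serializable {
  final int bucketId;
  final String fileName;
  final int chunkId;

  ChunkKey(int bucketId, String fileName, int chunkId) {
    this.bucketId = bucketId;
    this.fileName = fileName;
    this.chunkId = chunkId;
  }
}

// Routes every file/chunk entry to the same bucket as the data it indexes,
// by using only the bucketId part of the key as the routing object.
class BucketIdPartitionResolver implements PartitionResolver<ChunkKey, Object> {
  @Override
  public Object getRoutingObject(EntryOperation<ChunkKey, Object> op) {
    return op.getKey().bucketId; // ignore fileName/chunkId
  }

  @Override
  public String getName() {
    return "BucketIdPartitionResolver";
  }

  @Override
  public void close() {}
}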
In the AsyncEventListener, when a data entry is processed:
  1. Determine the bucket id of the entry.
  2. Get the directory for that bucket and do the indexing operation on that instance.

A minimal sketch of such a listener follows.
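This sketch assumes current org.apache.geode package names; bucketIdOf, writerForBucket, and toDocument are hypothetical stubs standing in for Geode internals, not actual APIs.

Code Block
import java.util.List;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.asyncqueue.AsyncEvent;
import org.apache.geode.cache.asyncqueue.AsyncEventListener;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

public class LuceneIndexListener implements AsyncEventListener {

  @Override
  public boolean processEvents(List<AsyncEvent> events) {
    try {
      for (AsyncEvent event : events) {
        // 1. Determine the bucket id of the entry.
        int bucketId = bucketIdOf(event.getRegion(), event.getKey());
        // 2. Get the IndexWriter over that bucket's GeodeFSDirectory
        //    and apply the index operation to that instance.
        IndexWriter writer = writerForBucket(bucketId);
        writer.updateDocument(new Term("key", event.getKey().toString()),
            toDocument(event.getKey(), event.getDeserializedValue()));
      }
      return true;  // batch processed; the AEQ can discard it
    } catch (Exception e) {
      return false; // the AEQ will redeliver the batch
    }
  }

  @Override
  public void close() {}

  // Illustrative stubs; the real design resolves these via Geode internals.
  private int bucketIdOf(Region<?, ?> region, Object key) {
    throw new UnsupportedOperationException("illustrative stub");
  }

  private IndexWriter writerForBucket(int bucketId) {
    throw new UnsupportedOperationException("illustrative stub");
  }

  private Document toDocument(Object key, Object value) {
    throw new UnsupportedOperationException("illustrative stub");
  }
}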

Storage with different region types

  • PersistentRegions: The Lucene index will be persisted.
  • OverflowRegions: The Lucene index will not be overflowed. The rationale is that the Lucene index will be much smaller than the data size, so it is not necessary to overflow the index.
  • EmptyRegions: The Lucene index is not supported.
  • OffHeapRegions: The Lucene index will be stored off-heap.

Index Maintenance

An AsyncEventQueue (AEQ) will be used to update the LuceneIndex. This allows updates to be applied in batches, as supported by the AEQ.
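A sketch of wiring such a queue with Geode's public AEQ API; the queue id, batch settings, and region name are illustrative, and the listener is the one sketched later in this section.

Code Block
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;
import org.apache.geode.cache.asyncqueue.AsyncEventQueue;

public class IndexQueueSetup {
  static Region<Object, Object> setup(Cache cache) {
    // Parallel AEQ: each member queues and indexes events for its own buckets,
    // and updates are delivered to the listener in batches.
    AsyncEventQueue indexQueue = cache.createAsyncEventQueueFactory()
        .setParallel(true)        // one queue per bucket's primary
        .setBatchSize(100)        // index in batches of up to 100 events
        .setBatchTimeInterval(10) // or every 10 ms, whichever comes first
        .create("luceneIndexQueue", new LuceneIndexListener()); // listener sketched below

    // Attach the queue to the data region so every put is also enqueued.
    return cache.<Object, Object>createRegionFactory(RegionShortcut.PARTITION)
        .addAsyncEventQueueId(indexQueue.getId())
        .create("drugs");
  }
}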

Indexed field values are obtained from the AsyncEvent through reflection (in the case of a domain object) or via the PdxInstance interface (in the case of PDX or JSON); a Lucene Document is then constructed and added to the LuceneIndex associated with that region.
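A sketch of the PdxInstance path, which reads fields by name without deserializing the value into a domain class; the field handling shown (everything as TextField, unstored) is an illustrative choice.

Code Block
import org.apache.geode.pdx.PdxInstance;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;

public class DocumentBuilder {
  // Build a Lucene Document from a PDX/JSON value by reading each
  // indexed field by name from the PdxInstance.
  static Document toDocument(PdxInstance pdx, String... indexedFields) {
    Document doc = new Document();
    for (String fieldName : indexedFields) {
      Object value = pdx.getField(fieldName);
      if (value != null) {
        doc.add(new TextField(fieldName, value.toString(), Field.Store.NO));
      }
    }
    return doc;
  }
}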

 

Handling failures, restarts, and rebalance 

...


Processing Queries
 

Partitioned regions

In the case of partitioned regions, the query must be sent out to all of the primaries. The results will then need to be aggregated back together. We are still investigating options for how to aggregate the data; see Text / Lucene Search Aggregation Options.
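One plausible mechanic for the scatter phase is Geode's function execution service, sketched below under that assumption; the "LuceneSearchFunction" id and the result shape are illustrative, not the settled design.

Code Block
import java.util.List;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.FunctionService;
import org.apache.geode.cache.execute.ResultCollector;

public class QueryScatter {
  // Send the query text to every member hosting primary buckets of the region;
  // each member searches its local per-bucket indexes and returns its hits.
  @SuppressWarnings("unchecked")
  static List<Object> search(Region<?, ?> region, String queryString) {
    ResultCollector<?, ?> rc = FunctionService.onRegion(region)
        .setArguments(queryString)
        .execute("LuceneSearchFunction"); // hypothetical registered function id
    // Gather phase: per-member hit lists, still to be merged/re-sorted by score.
    return (List<Object>) rc.getResult();
  }
}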

Replicated regions

TBD

 



Result collection and paging

The ResultSet will support a pagination mechanism for retrieving the results. All the keys are aggregated at the query executor node (client or peer), and getAll is used to fetch the values according to the page size.
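A sketch of that paging strategy, assuming the executor already holds the score-ordered list of matching keys; the class and field names are illustrative.

Code Block
import java.util.List;
import java.util.Map;
import org.apache.geode.cache.Region;

public class PagedResults<K, V> {
  private final Region<K, V> region;
  private final List<K> orderedKeys; // all matching keys, aggregated and sorted by score
  private final int pageSize;
  private int cursor = 0;

  PagedResults(Region<K, V> region, List<K> orderedKeys, int pageSize) {
    this.region = region;
    this.orderedKeys = orderedKeys;
    this.pageSize = pageSize;
  }

  public boolean hasNextPage() {
    return cursor < orderedKeys.size();
  }

  // Fetch only the current page's values, using one batched getAll per page.
  public Map<K, V> getNextPage() {
    int end = Math.min(cursor + pageSize, orderedKeys.size());
    List<K> pageKeys = orderedKeys.subList(cursor, end);
    cursor = end;
    return region.getAll(pageKeys);
  }
}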