...
- Allow users to create Lucene indexes on data stored in Geode
- Update the indexes asynchronously to avoid impacting write latency
- Allow users to perform text (Lucene) searches on Geode data using the Lucene index. Results from text searches may be stale due to asynchronous index updates.
- Provide highly available indexes using Geode's HA capabilities
- Scalability
- Performance comparable to Lucene's RAMDirectory
Out of Scope
- Building the next/better Solr/Elasticsearch.
- Enhancing the current Geode OQL to use the Lucene index.
...
- A region and a list of to-be-indexed fields
- [Optional] A specified Analyzer per field; if none is specified for a field, the StandardAnalyzer (or a supplied implementation) is used for all fields in the index (see the sketch after this list)
- [Optional] Field types. A string can be Text or String in Lucene; the two have different behavior.
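The per-field Analyzer choice could be expressed as a map from field name to Analyzer, as in this hedged sketch (the field names and the Map-based signature are illustrative assumptions, not a settled API):

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

// Hypothetical per-field analyzer map, passed to createIndex in the examples below.
Map<String, Analyzer> analyzerPerField = new HashMap<>();
analyzerPerField.put("sku", new KeywordAnalyzer());          // exact-match (String-like) field
analyzerPerField.put("description", new StandardAnalyzer()); // tokenized (Text-like) field
```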
Key points
- A single index will not support multiple regions. Join queries between regions are not supported.
- Heterogeneous objects in a single region will be supported
- Only top-level fields and nested objects can be indexed, not nested collections
- The index needs to be created before adding the data (for phase 1)
- Pagination of results will be supported
Users will interact with a new LuceneService interface, which provides methods for creating and querying indexes. Users can also create indexes through gfsh or cache.xml.
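For illustration, here is a minimal sketch of what the LuceneService surface could look like, inferred from the factory interface and the examples below; apart from createIndex and createLuceneQueryFactory, the method names are assumptions, not a settled API.

```java
import java.util.Map;
import org.apache.lucene.analysis.Analyzer;

// Hypothetical sketch of the proposed service entry point. Only createIndex and
// createLuceneQueryFactory appear in the examples in this document; getIndex and
// destroyIndex are assumed for symmetry with the gfsh commands below.
public interface LuceneService {

  // Create an index on the given region, indexing the listed fields with the default analyzer.
  LuceneIndex createIndex(String indexName, String regionName, String... fields);

  // Create an index with a specific Analyzer per field.
  LuceneIndex createIndex(String indexName, String regionName, Map<String, Analyzer> analyzerPerField);

  // Look up a previously created index.
  LuceneIndex getIndex(String indexName, String regionName);

  // Destroy an index, analogous to the gfsh destroy lucene-index command.
  void destroyIndex(String indexName, String regionName);

  // Entry point for building queries against an index.
  LuceneQueryFactory createLuceneQueryFactory();
}
```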
...
LuceneQueryFactory
```java
public interface LuceneQueryFactory {

  public enum ResultType {
    /**
     * Query results only contain the value, which is the default setting.
     * If field projection is specified, use the projected fields' values instead of the whole domain object.
     */
    VALUE,
    /**
     * Query results contain the score.
     */
    SCORE,
    /**
     * Query results contain the key.
     */
    KEY
  };

  /**
   * Set the page size for a query result. The default page size is 0, which means no pagination.
   * If a negative value is specified, throw IllegalArgumentException.
   * @param pageSize
   * @return itself
   */
  LuceneQueryFactory setPageSize(int pageSize);

  /**
   * Set the maximum number of results for a query.
   * If the specified limit is less than or equal to zero, throw IllegalArgumentException.
   * @param limit
   * @return itself
   */
  LuceneQueryFactory setResultLimit(int limit);

  /**
   * Set whether to include SCORE and KEY in the result.
   * @param resultTypes
   * @return itself
   */
  LuceneQueryFactory setResultTypes(ResultType... resultTypes);

  /**
   * Set a list of fields for result projection.
   * @param fieldNames
   * @return itself
   */
  LuceneQueryFactory setProjectionFields(String... fieldNames);

  /**
   * Create a wrapper object for Lucene's QueryParser object.
   * The queryString uses Lucene QueryParser's syntax. QueryParser is easy to use,
   * with human-understandable syntax.
   * @param indexName index name
   * @param regionName region name
   * @param queryString query string in Lucene QueryParser's syntax
   * @param analyzer Lucene Analyzer used to parse the queryString
   * @return LuceneQuery object
   * @throws ParseException
   */
  public LuceneQuery create(String indexName, String regionName, String queryString,
      Analyzer analyzer) throws ParseException;

  /**
   * Create a wrapper object for Lucene's QueryParser object using the default standard analyzer.
   * The queryString uses Lucene QueryParser's syntax.
   * @param indexName index name
   * @param regionName region name
   * @param queryString query string in Lucene QueryParser's syntax
   * @return LuceneQuery object
   * @throws ParseException
   */
  public LuceneQuery create(String indexName, String regionName, String queryString)
      throws ParseException;

  /**
   * Create a wrapper object for Lucene's Query object.
   * Advanced Lucene users can customize their own Query object and use it directly in this API.
   * @param indexName index name
   * @param regionName region name
   * @param query Lucene Query object
   * @return LuceneQuery object
   */
  public LuceneQuery create(String indexName, String regionName, Query query);
}
```
LuceneQuery
```java
/**
 * Provides a wrapper around Lucene's Query object and executes the search.
 * <p>Instances of this interface are created using {@link LuceneQueryFactory#create}.
 */
public interface LuceneQuery {
  /**
   * Execute the search and get results.
   */
  public LuceneQueryResults<?> search();

  /**
   * Get the page size setting of the current query.
   */
  public int getPageSize();

  /**
   * Get the limit setting of the current query.
   */
  public int getLimit();

  /**
   * Get the result types setting of the current query.
   */
  public ResultType[] getResultTypes();

  /**
   * Get the projected fields setting of the current query.
   */
  public String[] getProjectedFieldNames();
}
```
LuceneResultStruct
```java
public interface LuceneResultStruct {
  /**
   * Return the value associated with the given field name.
   * @param fieldName the String name of the field
   * @return the value associated with the specified field
   * @throws IllegalArgumentException if this struct does not have a field named fieldName
   */
  public Object getProjectedField(String fieldName);

  /**
   * Return the key of the entry.
   * @return key
   * @throws IllegalArgumentException if this struct does not contain the key
   */
  public Object getKey();

  /**
   * Return the value of the entry.
   * @return value the whole domain object
   * @throws IllegalArgumentException if this struct does not contain the value
   */
  public Object getValue();

  /**
   * Return the score of the query.
   * @return score
   * @throws IllegalArgumentException if this struct does not contain the score
   */
  public Double getScore();

  /**
   * Get the ordered list of value types.
   * An item in the list can be either a ResultType or a field name.
   * @return the array of result types
   */
  public Object[] getNames();

  /**
   * Get the values in the same order as the result types.
   * @return the array of values
   */
  public Object[] getResultValues();
}
```
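As a usage illustration, here is a hedged sketch of consuming a LuceneResultStruct from a page of results, assuming the query requested KEY, VALUE, and SCORE and specified no projection:

```java
// page is a List<LuceneResultStruct> returned by LuceneQueryResults.getNextPage()
for (LuceneResultStruct r : page) {
  Object key = r.getKey();     // present because KEY was requested
  Object value = r.getValue(); // the whole domain object (no projection specified)
  Double score = r.getScore(); // Lucene relevance score
  System.out.println(key + " -> " + value + " (score " + score + ")");
}
```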
Examples
```java
// Get the LuceneService
LuceneService luceneService = LuceneServiceProvider.get(cache);

// Create an index on fields with the default analyzer:
LuceneIndex index = luceneService.createIndex(indexName, regionName, "field1", "field2", "field3");

// Create an index on fields with a specified analyzer:
LuceneIndex index = luceneService.createIndex(indexName, regionName, analyzerPerField);

// Create a query
LuceneQuery query = luceneService.createLuceneQueryFactory().setResultLimit(200).setPageSize(20)
    .setResultTypes(ResultType.SCORE, ResultType.VALUE, ResultType.KEY)
    .setProjectionFields("field1", "field2")
    .create(indexName, regionName, queryString, analyzer);

// Search using the query
LuceneQueryResults results = query.search();

// With page size 0 (no pagination), a single page contains all results
List values = results.getNextPage();

// Pagination
while (results.hasNextPage()) {
  List page = results.getNextPage(); // return results page by page
  for (LuceneResultStruct r : page) {
    System.out.println(r.getValue());
  }
}
```
Gfsh API
```
// Create index
gfsh> create lucene-index --name=indexName --region=/orders --fields=customer,tags

// Destroy index
gfsh> destroy lucene-index --name=indexName --region=/orders

// Execute a Lucene query
gfsh> luceneQuery --regionName=/orders --queryStrings="" --limit=100 --page-size=10
```
XML Configuration
```xml
<region name="region">
  <lucene-index indexName="luceneIndex">
    <FieldDefinition name="fieldName" analyzer="KeywordAnalyzer"/>
  </lucene-index>
</region>
```
REST API
TBD - but using Solr to provide a REST API might make a lot of sense.
Spring Data GemFire Support
TBD - but the Searchable annotation described in this blog might be a good place to start.
Implementation Flowchart
Index Storage
The Lucene indexes will be stored in memory instead of on disk. This will be done by implementing a Lucene Directory called GeodeFSDirectory, which uses Geode as a flat file system. This way we get all the benefits offered by Geode, and we can achieve replication and sharding of the indexes. The Lucene indexes will be co-located with the data region they are defined on, so Geode's HA applies to them as well.
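A hedged sketch of the flat-file-system idea: each Lucene "file" becomes an entry in a Geode region keyed by file name. The FlatFileSystem class and its method names are illustrative assumptions; the real GeodeFSDirectory would extend org.apache.lucene.store.Directory and translate createOutput/openInput calls into region operations (likely chunking large files across multiple entries).

```java
import java.util.concurrent.ConcurrentMap; // a Geode Region implements ConcurrentMap

// Hypothetical storage mapping behind GeodeFSDirectory: one region entry per Lucene file.
public class FlatFileSystem {
  private final ConcurrentMap<String, byte[]> fileRegion; // file name -> contents

  public FlatFileSystem(ConcurrentMap<String, byte[]> fileRegion) {
    this.fileRegion = fileRegion;
  }

  public void writeFile(String name, byte[] contents) {
    fileRegion.put(name, contents); // replicated or partitioned by Geode
  }

  public byte[] readFile(String name) {
    byte[] contents = fileRegion.get(name);
    if (contents == null) {
      throw new IllegalArgumentException("No such file: " + name);
    }
    return contents;
  }

  public void deleteFile(String name) {
    fileRegion.remove(name);
  }

  public String[] listFiles() {
    return fileRegion.keySet().toArray(new String[0]);
  }
}
```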
```plantuml
[Lucene Indexer] --> [GeodeFSDirectory]
() "User"
node "Colocated and Replicated" {
  () User --> [User Region] : Puts
  [User Region] --> [Async Queue]
  [Async Queue] --> [Lucene Indexer] : Batch Writes
  [GeodeFSDirectory] --> [Lucene Regions]
}
```
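The "Batch Writes" step in the diagram could be realized with a Geode AsyncEventListener that drains queued region events into a Lucene IndexWriter opened over the GeodeFSDirectory. Here is a hedged sketch under those assumptions; the listener class, the "key" field name, and the document-mapping helper are all hypothetical, and package names vary by Geode version.

```java
import java.util.List;
import org.apache.geode.cache.asyncqueue.AsyncEvent;
import org.apache.geode.cache.asyncqueue.AsyncEventListener;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

// Hypothetical "Lucene Indexer": consumes a batch from the async queue and applies
// it to an IndexWriter that writes through the GeodeFSDirectory.
public class LuceneIndexerListener implements AsyncEventListener {
  private final IndexWriter writer; // opened over the GeodeFSDirectory

  public LuceneIndexerListener(IndexWriter writer) {
    this.writer = writer;
  }

  @Override
  public boolean processEvents(List<AsyncEvent> events) {
    try {
      for (AsyncEvent event : events) {
        Document doc = toDocument(event.getKey(), event.getDeserializedValue());
        // updateDocument removes any prior document for this key, then adds the new one
        writer.updateDocument(new Term("key", event.getKey().toString()), doc);
      }
      writer.commit(); // flush the whole batch at once
      return true;     // batch processed; Geode removes it from the queue
    } catch (Exception e) {
      return false;    // signal failure so Geode retries the batch
    }
  }

  private Document toDocument(Object key, Object value) {
    Document doc = new Document();
    doc.add(new StringField("key", key.toString(), Field.Store.YES));
    // ... map the to-be-indexed fields of the domain object to Lucene fields ...
    return doc;
  }

  @Override
  public void close() {}
}
```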
Partitioned region data flow
```plantuml
() User -down-> [Cache] : PUTs
node cluster {
  database {
    () "indexBucket1Primary"
  }
  database {
    () "indexBucket1Secondary"
  }
  [Cache] ..> [Bucket 1]
  [Bucket 1] -down-> [Async Queue Bucket 1]
  [Async Queue Bucket 1] -down-> [FSDirectoryBucket1] : Batch Write
  [FSDirectoryBucket1] -> indexBucket1Primary
  indexBucket1Primary -right-> indexBucket1Secondary
  database {
    () "indexBucket2Primary"
  }
  database {
    () "indexBucket2Secondary"
  }
  [Cache] ..> [Bucket 2]
  [Bucket 2] -down-> [Async Queue Bucket 2]
  [Async Queue Bucket 2] -down-> [FSDirectoryBucket2] : Batch Write
  [FSDirectoryBucket2] -> indexBucket2Primary
  indexBucket2Primary -right-> indexBucket2Secondary
}
```
...