...

Authors: ringles@vmware.com, sabbey@vmware.com

Status: Draft | Discussion | Active | Dropped | Superseded

Superseded by: N/A

...

The current implementation is not thread-safe and exhibits many concurrency issues when multiple clients operate on the same keys or data structures. (Since the Redis API and data structures do not map directly to Geode APIs and data structures, the API implementation must perform a certain amount of translation and accounting, which has not been implemented in a thread-safe manner.) Additionally, the data is not stored in a manner that can provide High Availability - data is not distributed among multiple servers, so a single failure can lose data.

...

Initially, we propose implementing and fully testing the subset of Redis commands that are required for Spring Session Data Redis. Why this subset of commands? Session state caching is one of the most popular use cases for Redis. It is well defined and requires a limited set of Redis commands, which makes it a manageable scope of work. The following commands will be fully implemented according to the Redis specification:

Connection: AUTH, PING, QUIT

Hashes: HSET, HMSET, HGETALL

Keys: DEL, EXISTS, EXPIRE, EXPIREAT, PERSIST, PEXPIRE, PEXPIREAT, PTTL, RENAME, TTL, TYPE

Publish/Subscribe: PSUBSCRIBE, PUBLISH, PUNSUBSCRIBE, SUBSCRIBE, UNSUBSCRIBE

Sets: SADD, SMEMBERS, SREM

Strings: APPEND, GET, SET
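
For illustration only, a client exercising this subset could look like the following sketch. It assumes a Geode server with the Redis API enabled and listening on localhost:6379 (host, port, and key names are placeholders, not part of the proposal) and uses the Jedis client:

import java.util.Map;
import java.util.Set;
import redis.clients.jedis.Jedis;

public class SessionCacheSmokeTest {
  public static void main(String[] args) {
    // Host/port are assumptions; point these at a Geode server running the Redis API.
    try (Jedis jedis = new Jedis("localhost", 6379)) {
      // Strings
      jedis.set("greeting", "hello");
      String value = jedis.get("greeting");

      // Hashes (Spring Session stores session attributes in a hash)
      jedis.hset("session:abc123", "maxInactiveInterval", "1800");
      Map<String, String> session = jedis.hgetAll("session:abc123");

      // Sets
      jedis.sadd("session:expirations", "abc123");
      Set<String> expiring = jedis.smembers("session:expirations");

      // Keys / expiration
      jedis.expire("session:abc123", 1800);
      long ttl = jedis.ttl("session:abc123");

      System.out.println(value + " " + session + " " + expiring + " " + ttl);
    }
  }
}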

Redis API Region Management

The proposed update is to store all Redis data structures - Sets, Hashes, Lists, etc. - in a single region, and to use Geode functions to interact with them. Delta propagation will be used to keep data in sync between Geode servers. This avoids the overhead of region creation and destruction, and limits network traffic, while allowing data to be shared across Geode servers to promote High Availability.


Note that currently, the regions used to implement the Redis API are not “internal” regions, and are therefore visible to the Geode API (gfsh, etc.). It is proposed that the new Redis-specific region be marked as “internal” going forward.

In terms of general implementation, the region will use the String/Set/Hash/List name as the key, and the value will contain all the members of the collection, implemented as an object that implements the Geode Delta interface. This limits the network traffic sent to redundant copies (only deltas are shipped), and also keeps the value deserialized when stored in the region; both traits should benefit performance.
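
As a rough sketch of what such a value could look like (the class and its fields are hypothetical, not the proposed implementation), a Set value might implement org.apache.geode.Delta by shipping only the members added since the last update:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.io.Serializable;
import java.util.HashSet;
import java.util.Set;
import org.apache.geode.Delta;
import org.apache.geode.InvalidDeltaException;

// Hypothetical value type: one region entry holds an entire Redis Set.
public class RedisSetValue implements Delta, Serializable {
  private final Set<String> members = new HashSet<>();
  // Members added since the last toDelta(); only these are shipped to other servers.
  private transient Set<String> pendingAdds = new HashSet<>();

  public void sadd(String member) {
    if (pendingAdds == null) {
      pendingAdds = new HashSet<>();
    }
    if (members.add(member)) {
      pendingAdds.add(member);
    }
  }

  public Set<String> smembers() {
    return new HashSet<>(members);
  }

  @Override
  public boolean hasDelta() {
    return pendingAdds != null && !pendingAdds.isEmpty();
  }

  @Override
  public void toDelta(DataOutput out) throws IOException {
    // Write only the pending additions, then clear them.
    out.writeInt(pendingAdds.size());
    for (String member : pendingAdds) {
      out.writeUTF(member);
    }
    pendingAdds.clear();
  }

  @Override
  public void fromDelta(DataInput in) throws IOException, InvalidDeltaException {
    // Apply the additions received from the primary to this redundant copy.
    int count = in.readInt();
    for (int i = 0; i < count; i++) {
      members.add(in.readUTF());
    }
  }
}

A real implementation would also need to track removals and would likely use Geode serialization (DataSerializable or PDX) rather than java.io.Serializable; the sketch only shows the delta mechanics.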

...

By using Geode functions to execute operations on the member hosting the primary copy of the key, and then disseminating the resulting changes to the other servers via delta propagation, the Redis data can be efficiently distributed to multiple Geode servers, allowing for transparent redundancy and failover. If an individual server goes down, the client can connect to a live server and still access all of its data.
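
A sketch of that execution path, again with hypothetical names (and reusing the RedisSetValue sketch above): the command layer invokes a function with the Redis key as the routing filter, so the body runs on the server holding the primary bucket for that key, and the put distributes only the delta to the redundant copies.

import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.RegionFunctionContext;
import org.apache.geode.cache.partition.PartitionRegionHelper;

// Hypothetical function implementing SADD on the server that owns the key.
public class SAddFunction implements Function<String> {
  public static final String ID = "SADD_FUNCTION";

  @Override
  public void execute(FunctionContext<String> context) {
    RegionFunctionContext rfc = (RegionFunctionContext) context;
    String key = (String) rfc.getFilter().iterator().next();
    String member = (String) rfc.getArguments();

    // Local primary data for this member; the filter routed us to the primary bucket.
    Region<String, RedisSetValue> localData = PartitionRegionHelper.getLocalDataForContext(rfc);
    RedisSetValue set = localData.get(key);
    if (set == null) {
      set = new RedisSetValue();
    }
    set.sadd(member);
    // The put ships only the delta to servers holding redundant copies.
    localData.put(key, set);

    context.getResultSender().lastResult(1L);
  }

  @Override
  public String getId() {
    return ID;
  }

  @Override
  public boolean optimizeForWrite() {
    return true; // run on primary buckets
  }
}

// Invocation from the command layer (sketch):
// FunctionService.onRegion(redisDataRegion)
//     .withFilter(Collections.singleton("myset"))
//     .setArguments("member1")
//     .execute(new SAddFunction())
//     .getResult();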

To further ensure HA, the partition type of the internal Redis region will no longer default to PARTITION and will no longer be customizable. It will be fixed to the PARTITION_REDUNDANT type, with a default redundancy of 1.
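
Server-side, this corresponds to creating the region with one redundant copy rather than the plain PARTITION shortcut; a minimal sketch (the region name here is a placeholder, not the actual internal name):

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class RedisRegionSetup {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();

    // PARTITION_REDUNDANT keeps one redundant copy of every bucket (redundant-copies=1),
    // so losing a single server does not lose Redis data.
    Region<String, Object> redisData = cache
        .<String, Object>createRegionFactory(RegionShortcut.PARTITION_REDUNDANT)
        .create("RedisData"); // placeholder name; the real region would be internal

    System.out.println("Created region: " + redisData.getFullPath());
  }
}

The gfsh equivalent would be along the lines of: create region --name=RedisData --type=PARTITION_REDUNDANT.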

Note that the complexity of reconnecting to a new server can be minimized with a load balancer or DNS aliases. If a list of healthy servers is kept, clients can be directed to individual servers via the normal DNS lookup process. If the server a client is connected to fails, the client will try to reconnect to the same host, and the DNS alias will automatically direct that client to a healthy server. From the client’s perspective, this will be indistinguishable from a momentary network failure, and no special failover logic is necessary.

...