To be Reviewed By: 26-03-2020

Authors: Alberto Bustamante Reyes (alberto.bustamante.reyes@est.tech)

Status: Development

Superseded by: N/A

Related: N/A

...

There is a problem with Geode WAN replication when GW receivers are configured with the same hostname-for-senders and port on all servers. The reason for such a setup is deploying a Geode cluster on a Kubernetes cluster where all GW receivers are reachable from the outside world on the same VIP and port. Other kinds of configuration (a different hostname and/or a different port for each GW receiver) are expensive from an operation & maintenance (OAM) and resources perspective in cloud-native environments, and also limit some important use cases (like scaling).

...

Cluster-2 gfsh>list gateways
GatewayReceiver Section

Member                            | Port  | Sender Count | Senders Connected
--------------------------------- | ----- | ------------ | -------------------------------------------------------------------------------------------------------
172.17.0.5(server-0:65)<v1>:41000 | 32000 | 0            |
172.17.0.9(server-1:51)<v1>:41000 | 32000 | 3            | 172.17.0.8(server-1:46)<v1>:41000, 172.17.0.8(server-1:46)<v1>:41000, 172.17.0.8(server-1:46)<v1>:41000

Anti-Goals

N/A

Solution

The current state of the solution can be found in this PR: https://github.com/apache/geode/pull/4824

Help wanted! There is one failing test (testExecuteOp from ConnectionPoolImplJUnitTest) that causes the integration test and stress test tasks to fail. We are working to fix it, but it would be great if a more experienced Geode developer took a look. More info in the annex at the end of this page.

Gw sender failover

The solution consists of refactoring some maps in the LocatorLoadSnapshot class. They use ServerLocation objects as keys; this has to change, because a ServerLocation will no longer be unique for each server. We changed the maps to use InternalDistributedMember objects as the keys for the map entries. The ServerLocation information is not lost, as it is contained in the entry value in all the maps.
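The effect of this key change can be illustrated with a small, self-contained sketch. The Location and MemberId records below are simplified stand-ins invented for this example, not the real Geode ServerLocation and InternalDistributedMember classes; they only reproduce the relevant equality behavior.

```java
import java.util.HashMap;
import java.util.Map;

public class SnapshotKeyDemo {

    // Stand-in for ServerLocation: equality is host + port only.
    record Location(String host, int port) {}

    // Stand-in for InternalDistributedMember: unique per server process,
    // regardless of the hostname-for-senders/port it advertises.
    record MemberId(String id, Location advertised) {}

    public static void main(String[] args) {
        // All receivers advertise the same VIP and port (the Kubernetes setup).
        Location vip = new Location("geode-receivers.example.com", 32000);
        MemberId server0 = new MemberId("server-0", vip);
        MemberId server1 = new MemberId("server-1", vip);

        // Keyed by location: the second put overwrites the first,
        // so the snapshot loses track of one server.
        Map<Location, String> byLocation = new HashMap<>();
        byLocation.put(server0.advertised(), "load of server-0");
        byLocation.put(server1.advertised(), "load of server-1");
        System.out.println("entries keyed by location: " + byLocation.size()); // 1

        // Keyed by member id: both servers keep their own entry, and the
        // location is still available through the entry value.
        Map<MemberId, Location> byMember = new HashMap<>();
        byMember.put(server0, server0.advertised());
        byMember.put(server1, server1.advertised());
        System.out.println("entries keyed by member: " + byMember.size()); // 2
    }
}
```

With location keys the two receivers collapse into a single map entry; with member-id keys each server stays individually addressable.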

The same refactoring is done in EndpointManager, as it holds a map of endpoints that also uses ServerLocation objects as keys.

Check this commit for a draft of the proposed solution: https://github.com/apache/geode/pull/4824/commits/b180869c73095e7a810ba2e1c92e243a0220e888

Gw sender pings not reaching gw receivers

When PingTasks are run by LiveServerPinger, they call PingOp.execute(ExecutablePool pool, ServerLocation server). PingOp only uses hostname and port (ServerLocation) to get the connection used to send the ping message. As all receivers share the same host and port, it is not guaranteed that the connection really points to the server we want to reach.

One alternative solution is to modify the ping messages to include information about the server they are intended for; if a ping is received by another server, it can then be forwarded to the proper server.

We decided instead to add a retry mechanism to PingOp, so it can discard a connection if the endpoint of that connection is not the server we want to reach. We have added a new method, PingOp.execute(ExecutablePool pool, Endpoint endpoint), for this purpose. This way, if the connection obtained does not point to the required Endpoint, it can be discarded and a new one requested.
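The retry idea can be sketched as follows. This is only an illustration of the mechanism under made-up names (Endpoint, Connection, Pool, borrowConnection, discard); it is not Geode's actual client-pool API.

```java
public class PingRetrySketch {

    record Endpoint(String memberId) {}

    record Connection(Endpoint endpoint) {
        void sendPing() { /* would write a PING message on the wire */ }
    }

    interface Pool {
        Connection borrowConnection();        // may connect to any server behind the shared VIP
        void discard(Connection connection);  // drop a connection we cannot use
    }

    /** Ping a specific member, discarding connections that landed on another server. */
    static boolean pingEndpoint(Pool pool, Endpoint target, int maxRetries) {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            Connection connection = pool.borrowConnection();
            if (connection.endpoint().equals(target)) {
                connection.sendPing();
                return true;          // reached the intended server
            }
            pool.discard(connection); // wrong server behind the shared host:port, retry
        }
        return false;                 // give up after maxRetries wrong servers
    }
}
```

The key point is that the loop compares the endpoint of the obtained connection against the intended member, rather than trusting the host:port alone.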

Other alternatives to the retry mechanism that we have not explored could be:

  • Add the option for deactivating the ping mechanism for gw sender/gw receivers communication
  • Send the ping using just existing connections, not creating new ones.

Changes and Additions to Public Interfaces

N/A

Performance Impact

When getting the connection used to execute the ping, some retries may happen until the right connection is obtained, so this operation will take longer, but we do not expect it to impact performance.

Backwards Compatibility and Upgrade Path

N/A

Prior Art

After checking with the dev mailing list, we received the suggestion to configure serverAffinity in Kubernetes to solve the issue with the pings, but that option broke the failover of gw senders when a gw receiver is down.

FAQ

TBD

Errata

N/A

Annex: testExecuteOp failing

After our changes we have been stuck trying to fix testExecuteOp from ConnectionPoolImplJUnitTest. The test hangs when executing an operation that is implemented to throw an exception. Instead of trying to execute the operation on both servers, we have seen that it continuously tries to execute it on the same server.

The problem is in the handshakeWithServer function of the ClientSideHandshakeImpl class. After the operation fails on the first server and is about to be executed on the second server, we have seen that, at this line:

...

the variable contains the member id of the second server, but readServerMember returns the id of the first server, so the operation is finally executed on that server again.
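A rough, self-contained illustration of this failure mode (all names are made up for the example; this is not Geode's actual server-selection code): if the member id reported for the connection is stale, excluding "the failed member" excludes the wrong server, and the same server keeps being picked.

```java
import java.util.List;

public class StaleMemberIdDemo {

    /** Pick the next candidate, skipping the member id we believe just failed. */
    static String nextServer(List<String> members, String reportedFailedId) {
        for (String member : members) {
            if (!member.equals(reportedFailedId)) {
                return member;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> members = List.of("server-0", "server-1");

        // Expected behavior: the handshake reports the id of the server
        // that actually failed, so failover moves on to the other one.
        System.out.println(nextServer(members, "server-0")); // server-1

        // The observed bug: a stale id for the other server is reported,
        // so the failed server is selected again and the test hangs retrying it.
        System.out.println(nextServer(members, "server-1")); // server-0 again
    }
}
```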