
...

  • servlets-examples-cluster-node1.war - web application for Cluster Member 1
  • servlets-examples-cluster-node2.war - web application for Cluster Member 2
  • servlets-examples-tomcat-cluster-plan-5.5.9.xml - Geronimo deployment plan for Tomcat 5.5.9 (this is the plan to use for Geronimo v1.0)
  • servlets-examples-tomcat-cluster-plan-5.5.12.xml - Geronimo deployment plan for Tomcat 5.5.12 (this is the plan to use when Geronimo moves to Tomcat 5.5.12 in the near future)

Each Geronimo cluster member must have a unique jvmRoute designation. The jvmRoute attribute allows the mod_jk load balancer to provide "sticky sessions" (sending all requests for the same HttpSession to the same cluster member). This is possible because the cluster member appends its jvmRoute value to the session ID in the cookie (or encoded URL) that is returned to the web browser.
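For example, a response from the first member might carry a cookie like the one below (the session ID and node name are purely illustrative); mod_jk uses the suffix after the final dot to route subsequent requests back to that member:

Set-Cookie: JSESSIONID=0A1B2C3D4E5F6A7B.node1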

The jvmRoute attribute is needed for both load balancing and session replication. It can be set by updating the var/config/config.xml file. The jvmRoute should be configured with a unique value for each cluster member as shown below:

<configuration name="geronimo/tomcat/1.0/car">
  <gbean name="TomcatResources">
  </gbean>
  <gbean name="TomcatEngine">
    <attribute name="initParams">
      name=Geronimo
      jvmRoute=nodeXYZ
    </attribute>
  </gbean>
  <gbean name="TomcatWebConnector">
    <attribute name="host">0.0.0.0</attribute>
    <attribute name="port">8080</attribute>
    <attribute name="redirectPort">8443</attribute>
  </gbean>
</configuration>

Repeat the procedure for each additional cluster member, assuring that a different jvmRoute value is used on each one.


Remember that the jvmRoute value must be unique for each cluster member and the server must be stopped when updating config.xml.
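For instance, on the second cluster member the same gbean in var/config/config.xml would carry a different value (the node names here are just illustrative):

<gbean name="TomcatEngine">
  <attribute name="initParams">
    name=Geronimo
    jvmRoute=node2
  </attribute>
</gbean>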

Now that the jvmRoute is set correctly for each cluster member, you must deploy the example on each of the cluster members. For this example, the applications are slightly different for each cluster member. The difference is merely to indicate the current server number (e.g. Server 1, Server 2) in the output of the application. This is useful when trying to determine which cluster member is servicing the HTTP request from the browser.

Start the geronimo server on each cluster member (e.g. bin/geronimo.bat|sh run) and then install the attached applications to the appropriate cluster member, assuring that you use the correct deployment plan for each member. Note that the deployment plan must be updated with the hostname (or IP address) for each machine. The appropriate spots are identified in the plans with xx.yy.zz.aa. Tip: memory-to-memory replication currently requires that all cluster members reside on the same physical subnet, since multicast broadcast is used.
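As a sketch of the deployment step (assuming the default system/manager administrator account and Geronimo's standard deployer tool; adjust file names and paths to your installation), the command on the first member might look like:

java -jar bin/deployer.jar --user system --password manager deploy servlets-examples-cluster-node1.war servlets-examples-tomcat-cluster-plan-5.5.9.xml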

Once you have the applications installed on each cluster member, you can test HttpSession replication by hitting the application with your favorite browser, probably something like http://localhost:8080/servlets-examples-cluster/servlet/SessionExample . Note that the output page contains the ID of the server that is servicing the request. In your browser window, fill in the appropriate input fields and hit the submit button. The console output (the prompt where you started geronimo) should show that the HttpSession data is being transmitted and received between the cluster members. Note that the transmit/receive confirmation messages in the log are only present for Tomcat 5.5.12.

Load Balancing and Failover
Now you are ready to set up the load balancer. We recommend using Apache HTTP Server and mod_jk for this example.

Install Apache HTTP Server - instructions and downloads available at http://httpd.apache.org/
Install Apache mod_jk - see http://tomcat.apache.org/tomcat-5.5-doc/balancer-howto.html
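A minimal httpd.conf fragment for wiring in mod_jk might look like the following (the module path and mount point are illustrative; the loadbalancer worker is defined in workers.properties below):

# load the mod_jk module and point it at the worker definitions
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
# route all requests for the example application to the load balancer
JkMount /servlets-examples-cluster/* loadbalancer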

Configuration tips for mod_jk (workers.properties):
worker.list=loadbalancer,status
worker.node1.port=8009
worker.node1.host=your.first.cluster.member.host.name
worker.node1.type=ajp13
worker.node1.lbfactor=1

...
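The elided portion above would normally also define the second node and the balancer itself. A sketch under common mod_jk conventions (worker names are illustrative, and the balance_workers directive is spelled balanced_workers in older mod_jk releases):

# second cluster member, mirroring node1
worker.node2.port=8009
worker.node2.host=your.second.cluster.member.host.name
worker.node2.type=ajp13
worker.node2.lbfactor=1
# the load balancer that fronts both nodes, plus the status worker
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.status.type=status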

Once you have Apache HTTP Server and mod_jk set up correctly, you can test load balancing and failover by requesting the following URLs on port 80 (the Apache HTTP Server default port).

http://Yourhost/servlets-examples-cluster - HttpSession is not used here, hence no sticky session
http://Yourhost/servlets-examples-cluster/servlet/SessionExample - HttpSession is used here, hence sticky session should be in effect

BTW, sticky session refers to the load balancer sending all requests for the same HttpSession (indicated by a cookie or encoded URL) to the same cluster member. This is valuable for applications that save state in an HttpSession (e.g. a shopping cart).

You can test failover by stopping the geronimo server that owns the sticky session and observing that the next HTTP request fails over to the remaining cluster member. The HttpSession data from the previous request should be recovered and displayed in the refreshed browser window.

Tips

  • When testing using a web browser, make sure that you erase cookies and cached pages between test cases. Browser caching can cause confusion when testing.
  • Make sure your application has the distributable element defined in web.xml (see the snippet after this list).
  • Memory-to-memory replication currently requires that all cluster members reside on the same physical subnet, since multicast broadcast is used; make sure multicast is supported on that subnet.
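A minimal web.xml showing the distributable element (the servlet 2.4 schema is shown here; match the version your application already declares):

<web-app xmlns="http://java.sun.com/xml/ns/j2ee" version="2.4">
  <!-- marks the application as safe for session replication across the cluster -->
  <distributable/>
</web-app>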

Also, see http://tomcat.apache.org/tomcat-5.0-doc/cluster-howto.html for more information on Tomcat clustering.

...