Version 15

    High Availability JMS with JBossMQ

     

    JBossMQ high availability features:

    1. Automatic server fail-over

    2. Remote client connectivity to HAJMS

    3. Lossless recovery after fail-over for messages targeted to durable subscribers

    4. Client notification via connection ExceptionListener on fail-over

     

    Note: Throughout the rest of this text, and in other documents on this site, we will often refer to the highly available JBossMQ implementation as HAJMS for brevity.

     

    Deploying HAJMS

     

    HAJMS is deployed as part of the "all" configuration bundled with the JBoss distribution.

    Initially the JMS server is configured to persist its data in a Hypersonic database.

     

    Normally you would be running a clustered application server against a more sophisticated database shared amongst all application server nodes.

     

    There are two JMS MBean services that need to be taken into consideration when running in a clustered environment with a shared database.

    • jboss.mq:service=PersistenceManager, which in the Hypersonic case is declared in hsqldb-jdbc2-service.xml located under server/all/deploy-hasingleton/jms.

    • jboss.mq:service=StateManager, which is declared in hsqldb-jdbc-state-service.xml under the same directory.

     

    To ensure that HAJMS works properly, all nodes in the cluster must be configured identically in regard to the JMS services and their persistent DataSource.

     

    To run HAJMS with a different shared database, you will need to replace the hsqldb-jdbc2-service.xml file with one tuned to the specific database.

    For example, if you use MySQL, the file is mysql-jdbc2-service.xml. Configuration files for a number of RDBMSs are bundled with the JBoss distribution; they can be found under docs/examples/jms.
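    As a rough sketch of what the swap involves, the persistence manager MBean in these files ties JMS persistence to a DataSource binding, and the database-specific file points that dependency at your shared DataSource. The MBean and attribute names below are recalled from typical JBoss 3.2/4.0 configurations, so verify them against the bundled examples:

```xml
<mbean code="org.jboss.mq.pm.jdbc2.PersistenceManager"
       name="jboss.mq:service=PersistenceManager">
  <!-- Point this dependency at the DataSource shared by all cluster nodes -->
  <depends optional-attribute-name="ConnectionManager">
    jboss.jca:service=DataSourceBinding,name=DefaultDS
  </depends>
</mbean>
```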

     

     

    Note: Work is under way to implement the JMS persistence layer via Hibernate, which will eliminate the requirement for maintaining different JMS configuration files per database. If you want to help, take a look at the TODO and JMS dev forums.

     

    Deploying Queues and Topics in an HAJMS Environment

     

    Deploy your queues and topics in the server/all/deploy-hasingleton directory. In an HA configuration, the core JBossMQ server is deployed as an HASingleton, with core services like the DestinationManager deployed only on the cluster node that is serving as the singleton master. This means that if you try to deploy your queues and topics in the deploy directory, the deployment will fail on the nodes that are not the master due to the missing DestinationManager:

     

    ObjectName: jboss.mq.destination:name=HAQueue,service=Queue
      State: CONFIGURED
      I Depend On:
        jboss.mq:service=DestinationManager
    
    --- MBEANS THAT ARE THE ROOT CAUSE OF THE PROBLEM ---
    ObjectName: jboss.mq:service=DestinationManager
      State: NOTYETINSTALLED
      Depends On Me:
        jboss.mq.destination:name=HAQueue,service=Queue
    

     

    Deploying your queues and topics in server/all/deploy-hasingleton ensures they will only be deployed on the node where the DestinationManager is available.
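    For reference, a queue such as the HAQueue from the error listing above would be declared in a *-service.xml file placed under server/all/deploy-hasingleton, roughly as sketched below; the explicit dependency on the DestinationManager is what ties the deployment to the master node (MBean names recalled from typical JBoss 3.2/4.0 configurations):

```xml
<mbean code="org.jboss.mq.server.jmx.Queue"
       name="jboss.mq.destination:service=Queue,name=HAQueue">
  <depends optional-attribute-name="DestinationManager">
    jboss.mq:service=DestinationManager
  </depends>
</mbean>
```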

     

    Clients that need to connect to the queue/topic then use HAJNDI to find the deployed destination on whichever cluster node is the current master.  See below for more details on this.

     

    Example client code for HAJMS

     

    The following example shows how to connect to HAJMS from a remote client and verify that HAJMS is functioning properly. A pre-built archive with the examples (hajms-examples.sar.zip) is attached on the LoadBalancedFaultTolerantMDBs Wiki page. It includes the binaries as well as the source code.

     

    Provided that you deployed the example SAR, here is a scenario you can use to verify that HAJMS is working for you.

     

    • Install and configure two identical instances of JBoss 3.2.4 or later on two computers that reside on the same subnet

    • Configure the DefaultDS on both JBoss instances to point to a single database, preferably PostgreSQL, MySQL or another production grade database

    • Start JBoss in configuration "all" on both servers

    • Deploy the provided example client code on a server (it does not have to be a node in the cluster)

    • Point your browser to the JMX Console where the client is hosted

      • Locate the MBean jboss.mq.examples:HAJMSClient

      • Run operation connectPublisher

      • Run operation registerDurableSubscriberAndDisconnect

      • Run operation publishMessageToTopic passing a test message (e.g. "TestHAJMS1")

      • See the value of attribute LastMessage; it should be null

    • In the JMX console for both server nodes

      • Locate MBean jboss.ha:HASingletonDeployer

      • See the value of attribute MasterNode; it should be true on only one of the server nodes

      • Run operation stop on the node that is currently the master

      • After a few moments the attribute MasterNode will become false on this server node and it will become true on the other server node

    • Observe the client log console

      • Wait a few minutes to see the following message: "Notification received by ExceptionListener..."

      • This is an indication that the client was notified of the server fail-over

    • Now go back to the client JMX console

      • Run operation connectPublisher

      • Run operation registerDurableSubscriberAndReceiveMessages

      • See the value of attribute LastMessage. It should now show the value of the test message
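    The fail-over notification that the scenario above relies on comes from a JMS ExceptionListener registered on the client connection. Below is a minimal sketch of how such a listener might be wired up; the class name is hypothetical, the factory JNDI name assumes an HA-JNDI context like the one shown later in the manual lookup example, and the reconnection step is left as a comment:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;
import javax.naming.InitialContext;

// Hypothetical sketch of the fail-over notification wiring in the example client
public class HAJMSClientSketch {

    public Connection connect(InitialContext haJndiCtx) throws Exception {
        // Look up the clustered connection factory through an HA-JNDI context
        ConnectionFactory factory =
                (ConnectionFactory) haJndiCtx.lookup("ConnectionFactory");
        Connection connection = factory.createConnection();

        // Fires when the singleton JMS server fails over to another node
        connection.setExceptionListener(new ExceptionListener() {
            public void onException(JMSException e) {
                System.out.println("Notification received by ExceptionListener: "
                        + e.getMessage());
                // Here the client would re-run its connection logic
                // (e.g. the connectPublisher operation) against HA-JNDI.
            }
        });
        connection.start();
        return connection;
    }
}
```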

     

    If all the steps above passed as described, you have a working HA-JMS deployment. If not, examine the log files and check whether you missed a step or something else went wrong that you can fix. If you cannot solve the problem on your own, consider posting to the JMS users forum:

    http://jboss.org/index.html?module=bb&op=viewforum&f=48 .

     

    Backward compatibility

     

    It is important to note that applications that used to run in JBoss configuration "all" might have to be refactored. Since JBoss 3.2.4 the JMS server is deployed and running on exactly one node in the cluster, whereas in the past each node ran an independent JMS server instance.

     

    Environment Naming Context

     

    The least intrusive way to configure the lookup of the JMS resources is to bind the resources to the environment naming context of the bean performing the lookup.  The binding can then be configured to use HA-JNDI instead of a local mapping.

     

    Within the bean definition in the ejb-jar.xml you will need to define two resource-ref mappings, one for the connection factory and one for the destination.

      <resource-ref>
        <res-ref-name>jms/ConnectionFactory</res-ref-name>
        <res-type>javax.jms.QueueConnectionFactory</res-type>
        <res-auth>Container</res-auth>
      </resource-ref>
    
      <resource-ref>
        <res-ref-name>jms/Queue</res-ref-name>
        <res-type>javax.jms.Queue</res-type>
        <res-auth>Container</res-auth>
      </resource-ref>
    

     

     

    Using these examples the bean can obtain the connection factory bound by the JMS Resource Adapter (cf ConfigJMSRAConnectionFactory) by looking up 'java:comp/env/jms/ConnectionFactory' and can obtain a queue object (local or remote) by looking up 'java:comp/env/jms/Queue'.

     

    Within the descriptor jboss.xml these references need to be mapped to a URL that makes use of HA-JNDI.

     

      <resource-ref>
        <res-ref-name>jms/ConnectionFactory</res-ref-name>
        <jndi-name>java:/JmsXA</jndi-name>
      </resource-ref>
    
      <resource-ref>
        <res-ref-name>jms/Queue</res-ref-name>
        <jndi-name>jnp://localhost:1100/queue/A</jndi-name>
      </resource-ref>
    

     

    The URL should contain the port on which the HA-JNDI service is listening in the cluster. If this service does not run on the same server instance as the EJB, the lookup operation will automatically query all of the nodes in the cluster to identify which instance has the JMS resources available.

     

    Note:  When using JMS resources that have been secured against guest access, use the HA-JNDI URL for the ConnectionFactory as well, e.g. jnp://${jboss.bind.address:localhost}:1100/XAConnectionFactory.
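    Concretely, the jboss.xml mapping for a secured factory would then use the same resource-ref structure shown earlier, with the HA-JNDI URL substituted:

```xml
<resource-ref>
  <res-ref-name>jms/ConnectionFactory</res-ref-name>
  <jndi-name>jnp://${jboss.bind.address:localhost}:1100/XAConnectionFactory</jndi-name>
</resource-ref>
```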

     

    Manual Lookup

     

    If, instead of using the environment naming context, your code looks up the JMS resources from the global namespace, the alternative approach is to obtain an InitialContext configured to use HA-JNDI.

     

    For example, the following code can be used to look up a plain ConnectionFactory and Topic:

     

        Properties p = new Properties(); 
        p.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory"); 
        p.put(Context.URL_PKG_PREFIXES, "jboss.naming:org.jnp.interfaces"); 
        p.put(Context.PROVIDER_URL, "localhost:1100"); // HA-JNDI port. 
        InitialContext iniCtx = new InitialContext(p);
    
        TopicConnectionFactory connFactory = (TopicConnectionFactory)iniCtx.lookup("ConnectionFactory");
        topic_ = (Topic)iniCtx.lookup("topic/testTopic");
    

     

     

    Using java:/JmsXA and remote destination

     

     

    Question: Using the same context that I used to look up the JmsXA connection factory, I am getting "This destination does not exist!" when trying to retrieve a remote destination.

     

     

    Correct. Let's look at the source code:

       InitialContext ic = new InitialContext();
       Object o = ic.lookup("java:/JmsXA");
       ConnectionFactory qcf = (ConnectionFactory)o;
       Connection c = qcf.createConnection();
       Session s = c.createSession(false, Session.AUTO_ACKNOWLEDGE);
   Queue q = (Queue) ic.lookup("queue/A");  // <-- "This destination does not exist!" thrown here
    

     

    If you execute this code on the node that runs HA-JMS, everything works: since you are using the default InitialContext, you have access to both naming spaces. After HA-JMS fails over, however, the destination object is bound on a remote server, and the default context no longer works.

     

    The only portable way to address this issue is to use a resource-ref as explained above. If you are not executing this code from an EJB, you can either use two different context objects (one default and one HA-JNDI) or a JBoss-specific approach such as:

     

     

     

       InitialContext ic = new InitialContext();
       Object o = ic.lookup("java:/JmsXA");
       ConnectionFactory qcf = (ConnectionFactory)o;
       .....
   Queue q = new SpyQueue("A");  // org.jboss.mq.SpyQueue, a JBoss-specific Queue implementation
    

     

    At this point a queue is "just" a name, not the actual destination holding messages on the JMS provider side. Basically you just need this Queue object to tell the provider that you want the message to go into a queue with that given name. The same applies to Topic objects.

     

     

     

    This has already been done for MDBs and the JMS Resource Adapter using deploy-hasingleton/hajndi-jms-ds.xml.
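    For reference, hajndi-jms-ds.xml makes the MDB containers and the JMS Resource Adapter resolve their connection factories through HA-JNDI by means of a JMSProviderLoader MBean, roughly along these lines (attribute names and values recalled from typical JBoss configurations; consult the shipped file for the authoritative version):

```xml
<mbean code="org.jboss.jms.jndi.JMSProviderLoader"
       name="jboss.mq:service=JMSProviderLoader,name=HAJNDIJMSProvider">
  <attribute name="ProviderName">DefaultJMSProvider</attribute>
  <attribute name="ProviderAdapterClass">org.jboss.jms.jndi.JNDIProviderAdapter</attribute>
  <attribute name="QueueFactoryRef">XAConnectionFactory</attribute>
  <attribute name="TopicFactoryRef">XAConnectionFactory</attribute>
  <!-- Resolve the factories through HA-JNDI so lookups follow the singleton master -->
  <attribute name="Properties">
    java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
    java.naming.factory.url.pkgs=org.jnp.interfaces
    java.naming.provider.url=${jboss.bind.address:localhost}:1100
  </attribute>
</mbean>
```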

     

    • The overhead of sending and receiving messages is increased by the remote connectivity to the JMS server node. Applications should use JMS as a reliable means of exchanging messages in a distributed environment, rather than as a general-purpose low-latency event mechanism. For example, when the problem at hand can be solved by implementing a strongly typed, in-VM Observer/Observable pattern, the application should draw upon the best practices established by Java Swing, rather than using JMS.

     

     

    Related notes