
Chapter 10. JGroups Services

10.1. Configuring a JGroups Channel's Protocol Stack
10.1.1. Common Configuration Properties
10.1.2. Transport Protocols
10.1.3. Discovery Protocols
10.1.4. Failure Detection Protocols
10.1.5. Reliable Delivery Protocols
10.1.6. Group Membership (GMS)
10.1.7. Flow Control (FC)
10.1.8. Fragmentation (FRAG2)
10.1.9. State Transfer
10.1.10. Distributed Garbage Collection (STABLE)
10.1.11. Merging (MERGE2)
10.2. Key JGroups Configuration Tips
10.2.1. Binding JGroups Channels to a Particular Interface
10.2.2. Isolating JGroups Channels
10.2.3. Improving UDP Performance by Configuring OS UDP Buffer Limits
10.3. JGroups Troubleshooting
10.3.1. Nodes do not form a cluster
10.3.2. Causes of missing heartbeats in FD

JGroups provides the underlying group communication support for JBoss AS clusters. The way the AS's clustered services interact with JGroups was covered previously in Section 3.1, “Group Communication with JGroups”. The focus of this chapter is on the details, particularly configuration details and troubleshooting tips. This chapter is not intended to be a complete set of JGroups documentation; we recommend that users interested in in-depth JGroups knowledge also consult the JGroups project documentation.

The first section of this chapter covers the many JGroups configuration options in considerable detail. Readers should understand that JBoss AS ships with a reasonable set of default JGroups configurations. Most applications just work out of the box with the default configurations. You only need to tweak them when you are deploying an application that has special network or performance requirements.

The JGroups framework provides services to enable peer-to-peer communications between nodes in a cluster. Communication occurs over a communication channel. The channel is built up from a stack of network communication "protocols", each of which is responsible for adding a particular capability to the overall behavior of the channel. Key capabilities provided by various protocols include, among others, transport, cluster discovery, message ordering, lossless message delivery, detection of failed peers, and cluster membership management services.

Figure 10.1, “Protocol stack in JGroups” shows a conceptual cluster with each member's channel composed of a stack of JGroups protocols.


In this section of the chapter, we look into some of the most commonly used protocols, with the protocols organized by the type of behavior they add to the overall channel. For each protocol, we discuss a few key configuration attributes exposed by the protocol, but generally speaking changing configuration attributes is a matter for experts only. More important for most readers will be to get a general understanding of the purpose of the various protocols.

The JGroups configurations used in the AS appear as nested elements in the $JBOSS_HOME/server/all/cluster/jgroups-channelfactory.sar/META-INF/jgroups-channelfactory-stacks.xml file. This file is parsed by the ChannelFactory service, which uses the contents to provide appropriately configured channels to the AS clustered services that need them. See Section 3.1.1, “The Channel Factory Service” for more on the ChannelFactory service.

Following is an example protocol stack configuration from jgroups-channelfactory-stacks.xml:

<stack name="udp-async"
           description="UDP-based stack, optimized for high performance for
                        asynchronous RPCs (enable_bundling=true)">
        <config>
          <UDP
             singleton_name="udp-async"
             mcast_port="${jboss.jgroups.udp_async.mcast_port:45689}"
             mcast_addr="${jboss.partition.udpGroup:228.11.11.11}"
             tos="8"
             ucast_recv_buf_size="20000000"
             ucast_send_buf_size="640000"
             mcast_recv_buf_size="25000000"
             mcast_send_buf_size="640000"
             loopback="true"
             discard_incompatible_packets="true"
             enable_bundling="true"
             max_bundle_size="64000"
             max_bundle_timeout="30"
             ip_ttl="${jgroups.udp.ip_ttl:2}"
             thread_naming_pattern="cl"
             timer.num_threads="12"
             enable_diagnostics="${jboss.jgroups.enable_diagnostics:true}"
             diagnostics_addr="${jboss.jgroups.diagnostics_addr:224.0.0.75}"
             diagnostics_port="${jboss.jgroups.diagnostics_port:7500}"

             thread_pool.enabled="true"
             thread_pool.min_threads="8"
             thread_pool.max_threads="200"
             thread_pool.keep_alive_time="5000"
             thread_pool.queue_enabled="true"
             thread_pool.queue_max_size="1000"
             thread_pool.rejection_policy="discard"
      
             oob_thread_pool.enabled="true"
             oob_thread_pool.min_threads="8"
             oob_thread_pool.max_threads="200"
             oob_thread_pool.keep_alive_time="1000"
             oob_thread_pool.queue_enabled="false"
             oob_thread_pool.rejection_policy="discard"/>
          <PING timeout="2000" num_initial_members="3"/>
          <MERGE2 max_interval="100000" min_interval="20000"/>
          <FD_SOCK/>
          <FD timeout="6000" max_tries="5" shun="true"/>
          <VERIFY_SUSPECT timeout="1500"/>
          <BARRIER/>
          <pbcast.NAKACK use_mcast_xmit="true" gc_lag="0"
                   retransmit_timeout="300,600,1200,2400,4800"
                   discard_delivered_msgs="true"/>
          <UNICAST timeout="300,600,1200,2400,3600"/>
          <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                   max_bytes="400000"/>          
          <VIEW_SYNC avg_send_interval="10000"/>
          <pbcast.GMS print_local_addr="true" join_timeout="3000"
                   shun="true"
                   view_bundling="true"
                   view_ack_collection_timeout="5000"
                   resume_task_timeout="7500"/>
          <FC max_credits="2000000" min_threshold="0.10" 
              ignore_synchronous_response="true"/>
          <FRAG2 frag_size="60000"/>
          <!-- pbcast.STREAMING_STATE_TRANSFER/ -->
          <pbcast.STATE_TRANSFER/>
          <pbcast.FLUSH timeout="0" start_flush_timeout="10000"/>
        </config>
    </stack> 

All the JGroups configuration data is contained in the <config> element. This information is used to configure a JGroups Channel; the Channel is conceptually similar to a socket, and manages communication between peers in a cluster. Each element inside the <config> element defines a particular JGroups Protocol; each Protocol performs one function, and the combination of those functions is what defines the characteristics of the overall Channel. In the next several sections, we will dig into the commonly used protocols and their options and explain exactly what they mean.
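To make the Channel concept concrete, the following standalone sketch creates a channel from an XML protocol stack definition, connects it to a group and sends a message to all members. It uses the general JGroups 2.x API on which this AS release is based; the stack file name and group name are illustrative, and within the AS itself channels are normally obtained from the ChannelFactory service rather than constructed directly.

import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

public class ChannelDemo {
    public static void main(String[] args) throws Exception {
        // Build a channel from an XML stack definition (illustrative file name;
        // any stack such as the "udp-async" example above would work).
        JChannel channel = new JChannel("my-stack.xml");

        // Deliver received messages via a callback instead of pulling them.
        channel.setReceiver(new ReceiverAdapter() {
            public void receive(Message msg) {
                System.out.println("received: " + msg.getObject());
            }
        });

        // Connecting with a group name joins (or forms) the group with that name.
        channel.connect("demo-group");

        // A null destination address means "send to the whole group".
        channel.send(new Message(null, null, "hello group"));

        channel.close();
    }
}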

The transport protocols are responsible for actually sending messages on the network and receiving them from the network. They also manage the pools of threads that are used to deliver incoming messages up the protocol stack. JGroups supports UDP, TCP, and TUNNEL as transport protocols.

UDP is the preferred protocol for JGroups. UDP uses multicast (or, in an unusual configuration, multiple unicasts) to send and receive messages. If you choose UDP as the transport protocol for your cluster service, you need to configure it in the UDP sub-element in the JGroups config element. Here is an example.

          <UDP
             singleton_name="udp-async"
             mcast_port="${jboss.jgroups.udp_async.mcast_port:45689}"
             mcast_addr="${jboss.partition.udpGroup:228.11.11.11}"
             tos="8"
             ucast_recv_buf_size="20000000"
             ucast_send_buf_size="640000"
             mcast_recv_buf_size="25000000"
             mcast_send_buf_size="640000"
             loopback="true"
             discard_incompatible_packets="true"
             enable_bundling="true"
             max_bundle_size="64000"
             max_bundle_timeout="30"
             ip_ttl="${jgroups.udp.ip_ttl:2}"
             thread_naming_pattern="cl"
             timer.num_threads="12"
             enable_diagnostics="${jboss.jgroups.enable_diagnostics:true}"
             diagnostics_addr="${jboss.jgroups.diagnostics_addr:224.0.0.75}"
             diagnostics_port="${jboss.jgroups.diagnostics_port:7500}"

             thread_pool.enabled="true"
             thread_pool.min_threads="8"
             thread_pool.max_threads="200"
             thread_pool.keep_alive_time="5000"
             thread_pool.queue_enabled="true"
             thread_pool.queue_max_size="1000"
             thread_pool.rejection_policy="discard"
      
             oob_thread_pool.enabled="true"
             oob_thread_pool.min_threads="8"
             oob_thread_pool.max_threads="200"
             oob_thread_pool.keep_alive_time="1000"
             oob_thread_pool.queue_enabled="false"
             oob_thread_pool.rejection_policy="discard"/>

The available attributes in the above JGroups configuration are listed and discussed below. First, we discuss the attributes that are particular to the UDP transport protocol. Then we will cover those attributes shown above that are also used by the TCP and TUNNEL transport protocols.

The attributes particular to UDP are:

The attributes that are common to all transport protocols, and thus have the same meanings when used with TCP or TUNNEL, are:

  • singleton_name provides a unique name for this transport protocol configuration. Used by the AS ChannelFactory to support sharing of a transport protocol instance by different channels that use the same transport protocol configuration. See Section 3.1.2, “The JGroups Shared Transport”.

  • bind_addr specifies the interface on which to receive and send messages. By default JGroups uses the value of system property jgroups.bind_addr, which in turn can be easily set via the -b command line switch. See Section 10.2, “Key JGroups Configuration Tips” for more on binding JGroups sockets.

  • receive_on_all_interfaces specifies whether this node should listen on all interfaces for multicasts. The default is false. It overrides the bind_addr property for receiving multicasts. However, bind_addr (if set) is still used to send multicasts.

  • send_on_all_interfaces specifies whether this node should send UDP packets via all NICs on a multi-NIC machine. This means that the same multicast message is sent N times, so use with care.

  • receive_interfaces specifies a list of interfaces on which to receive multicasts. The multicast receive socket will listen on all of these interfaces. This is a comma-separated list of IP addresses or interface names, e.g. "192.168.5.1,eth1,127.0.0.1".

  • send_interfaces specifies a list of interfaces via which to send multicasts. The multicast sender socket will send on all of these interfaces. This is a comma-separated list of IP addresses or interface names, e.g. "192.168.5.1,eth1,127.0.0.1". This means that the same multicast message is sent N times, so use with care.

  • enable_bundling specifies whether to enable message bundling. If true, the transport protocol queues outgoing messages until max_bundle_size bytes have accumulated or max_bundle_timeout milliseconds have elapsed, whichever occurs first. The transport protocol then bundles the queued messages into one large message and sends it; the messages are unbundled at the receiver. The default is false. Message bundling can have significant performance benefits for channels that are used for high volume sending of messages where the sender does not block waiting for a response from recipients (e.g. a JBoss Cache instance configured for REPL_ASYNC). It can add considerable latency to applications where senders need to block waiting for responses, so it is not recommended for usages like JBoss Cache REPL_SYNC.

  • loopback specifies whether the thread sending a message to the group should itself carry the message back up the stack for delivery. (Messages sent to the group are always delivered to the sending node as well.) If false, the sending thread does not carry the message; rather, the transport protocol waits to read the message off the network and uses one of the message delivery pool threads to deliver it. The default is false; however, the current recommendation is to always set this to true in order to ensure the channel receives its own messages even if the network interface goes down.

  • discard_incompatible_packets specifies whether to discard packets sent by peers using a different JGroups version. Each message in the cluster is tagged with a JGroups version. When a message from a different version of JGroups is received, it will be silently discarded if this is set to true; otherwise a warning will be logged. In no case will the message be delivered. The default is false.

  • enable_diagnostics specifies that the transport should open a multicast socket on address diagnostics_addr and port diagnostics_port to listen for diagnostic requests sent by JGroups' Probe utility.

  • The various thread_pool attributes configure the behavior of the pool of threads JGroups uses to carry ordinary incoming messages up the stack. The various attributes end up providing the constructor arguments for an instance of java.util.concurrent.ThreadPoolExecutor (see the sketch after this list). In the example above, the pool will have a core (i.e. minimum) size of 8 threads and a maximum size of 200 threads. If more than 8 pool threads have been created, a thread returning from carrying a message will wait for up to 5000 ms to be assigned a new message to carry, after which it will terminate. If no threads are available to carry a message, the (separate) thread reading messages off the socket will place messages in a queue; the queue will hold up to 1000 messages. If the queue is full, the thread reading messages off the socket will discard the message.

  • The various oob_thread_pool attributes are similar to the thread_pool attributes in that they configure a java.util.concurrent.ThreadPoolExecutor used to carry incoming messages up the protocol stack. In this case, the pool is used to carry a special type of message known as an "Out-Of-Band" message, OOB for short. OOB messages are exempt from the ordered-delivery requirements of protocols like NAKACK and UNICAST and thus can be delivered up the stack even if NAKACK or UNICAST are queueing up messages from a particular sender. OOB messages are often used internally by JGroups protocols and can be used by applications as well. JBoss Cache in REPL_SYNC mode, for example, uses OOB messages for the second phase of its two-phase commit protocol.
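For readers who want to relate the thread_pool.* attributes to standard Java concurrency terms, the sketch below shows roughly how such a pool could be assembled with java.util.concurrent.ThreadPoolExecutor. It is only an illustration of the semantics described above, not JGroups' actual internal code.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class IncomingThreadPoolSketch {
    public static ThreadPoolExecutor build() {
        // thread_pool.queue_enabled="true", thread_pool.queue_max_size="1000"
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(1000);

        return new ThreadPoolExecutor(
                8,                            // thread_pool.min_threads (core size)
                200,                          // thread_pool.max_threads
                5000, TimeUnit.MILLISECONDS,  // thread_pool.keep_alive_time
                queue,
                // thread_pool.rejection_policy="discard": silently drop the task
                new ThreadPoolExecutor.DiscardPolicy());
    }
}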

Alternatively, a JGroups-based cluster can also work over TCP connections. Compared with UDP, TCP generates more network traffic when the cluster size increases. TCP is fundamentally a unicast protocol. To send multicast messages, JGroups uses multiple TCP unicasts. To use TCP as a transport protocol, you should define a TCP element in the JGroups config element. Here is an example of the TCP element.

<TCP singleton_name="tcp" 
        start_port="7800" end_port="7800"/>
                

Below are the attributes that are specific to the TCP protocol.

When a channel on one node connects it needs to discover what other nodes have compatible channels running and which of those nodes is currently serving as the "coordinator" responsible for allowing new nodes to join the group. Discovery protocols are used to discover active nodes in the cluster and determine which is the coordinator. This information is then provided to the group membership protocol (GMS, see Section 10.1.6, “Group Membership (GMS)”) which communicates with the coordinator node's GMS to bring the newly connecting node into the group.

Discovery protocols also help merge protocols (see Section 10.1.11, “Merging (MERGE2)”) to detect cluster-split situations.

Since the discovery protocols sit on top of the transport protocol, you can choose to use different discovery protocols based on your transport protocol. These are also configured as sub-elements in the JGroups config element.

PING is a discovery protocol that works by either multicasting PING requests to an IP multicast address or connecting to a gossip router. As such, PING normally sits on top of the UDP or TUNNEL transport protocols. Each node responds with a packet {C, A}, where C=coordinator's address and A=own address. After timeout milliseconds or num_initial_members replies, the joiner determines the coordinator from the responses and sends a JOIN request to it (a request handled by GMS). If nobody responds, we assume we are the first member of the group.

Here is an example PING configuration for IP multicast.

<PING timeout="2000"
    num_initial_members="3"/>
                

Here is another example PING configuration for contacting a Gossip Router.

<PING gossip_host="localhost"
      gossip_port="1234"
      timeout="2000" 
      num_initial_members="3"/>

The available attributes in the PING element are listed below.

If both gossip_host and gossip_port are defined, the cluster uses the GossipRouter for the initial discovery. If initial_hosts is specified, the cluster pings that static list of addresses for discovery. Otherwise, the cluster uses IP multicasting for discovery.
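The precedence just described can be restated as a small piece of code; this is purely an illustration of the selection rule, not the actual PING implementation.

public class DiscoverySelectionSketch {
    enum Mode { GOSSIP_ROUTER, STATIC_HOST_LIST, IP_MULTICAST }

    // Restates the precedence rule described above.
    static Mode select(String gossipHost, Integer gossipPort, String initialHosts) {
        if (gossipHost != null && gossipPort != null) {
            return Mode.GOSSIP_ROUTER;     // use the GossipRouter for initial discovery
        }
        if (initialHosts != null) {
            return Mode.STATIC_HOST_LIST;  // ping the static list of addresses
        }
        return Mode.IP_MULTICAST;          // fall back to IP multicast discovery
    }
}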

The TCPPING protocol takes a set of known members and pings them for discovery. This is essentially a static configuration. It works on top of TCP. Here is an example of the TCPPING configuration element in the JGroups config element.

<TCPPING timeout="2000"
     num_initial_members="3"
     initial_hosts="hosta[2300],hostb[3400],hostc[4500]"
     port_range="3"/>

The available attributes in the TCPPING element are listed below.

MPING uses IP multicast to discover the initial membership. Unlike the other discovery protocols, which delegate the sending and receiving of discovery messages on the network to the transport protocol, MPING opens its own sockets to send and receive multicast discovery messages. As a result it can be used with any transport, but it is usually used in combination with TCP. TCP usually requires TCPPING, which has to list all possible group members explicitly, while MPING doesn't have this requirement. The typical use case for MPING is when we want TCP for regular message transport, but UDP multicasting is allowed for discovery.

<MPING timeout="2000"
    num_initial_members="3"
    bind_to_all_interfaces="true"
    mcast_addr="228.8.8.8"
    mcast_port="7500"
    ip_ttl="8"/>

The available attributes in the MPING element are listed below.

The failure detection protocols are used to detect failed nodes. Once a failed node is detected, a suspect verification phase can occur after which, if the node is still considered dead, the cluster updates its membership view so that further messages are not sent to the failed node and the service using JGroups is aware the node is no longer part of the cluster. The failure detection protocols are configured as sub-elements in the JGroups config element.

FD and FD_SOCK, each taken individually, do not provide a solid failure detection layer. Let's look at the differences between these failure detection protocols to understand how they complement each other:

The aim of a failure detection layer is to report real failures promptly and yet avoid false suspicions. The approach used in the AS protocol stacks is to combine FD_SOCK and FD, backed by suspect verification:

<FD_SOCK/>
<FD timeout="6000" max_tries="5" shun="true"/>
<VERIFY_SUSPECT timeout="1500"/>

This configuration suspects a member when the socket to the neighbor has been closed abnormally (e.g. due to a process crash, because the OS closes all of its sockets). However, if a host or switch crashes, the sockets won't be closed, so as a second line of defense FD will suspect the neighbor after roughly 30 seconds (timeout × max_tries = 6000 ms × 5 in the example above). Note that with this example, if you have your system stopped at a breakpoint in the debugger, the node you're debugging will be suspected after roughly 30 seconds.

A combination of FD and FD_SOCK provides a solid failure detection layer, and for this reason this technique is used across the JGroups configurations included with JBoss Application Server.
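To make the FD timing concrete, the following sketch mimics the are-you-alive/acknowledgement cycle with the timeout and max_tries semantics used in the example above. It is a simplified illustration, not the actual FD implementation; the Neighbor interface is invented for the sketch.

public class HeartbeatSketch {
    // Values from the example configuration above: timeout="6000", max_tries="5"
    static final long TIMEOUT_MS = 6000;
    static final int MAX_TRIES = 5;

    /** Invented abstraction: sends an are-you-alive message and waits for the ack. */
    interface Neighbor {
        boolean pingAndAwaitAck(long timeoutMs) throws InterruptedException;
    }

    /** Returns true if the neighbor should be suspected: no ack after MAX_TRIES attempts,
        i.e. roughly TIMEOUT_MS * MAX_TRIES = 30 seconds of silence. */
    static boolean shouldSuspect(Neighbor neighbor) throws InterruptedException {
        for (int attempt = 1; attempt <= MAX_TRIES; attempt++) {
            if (neighbor.pingAndAwaitAck(TIMEOUT_MS)) {
                return false;   // got an ack; the neighbor is considered alive
            }
        }
        return true;            // hand the suspicion on to VERIFY_SUSPECT / GMS
    }
}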

Reliable delivery protocols within the JGroups stack ensure that messages are actually delivered and delivered in the right order (FIFO) to the destination node. The basis for reliable message delivery is positive and negative delivery acknowledgments (ACK and NAK). In the ACK mode, the sender resends the message until the acknowledgment is received from the receiver. In the NAK mode, the receiver requests retransmission when it discovers a gap.

The NAKACK protocol is used for multicast messages. It uses negative acknowledgments (NAK). Under this protocol, each message is tagged with a sequence number. The receiver keeps track of the received sequence numbers and delivers the messages in order. When a gap in the series of received sequence numbers is detected, the receiver schedules a task to periodically ask the sender to retransmit the missing message. The task is cancelled if the missing message is received. The NAKACK protocol is configured as the pbcast.NAKACK sub-element under the JGroups config element. Here is an example configuration.

<pbcast.NAKACK max_xmit_size="60000" use_mcast_xmit="false"   
   retransmit_timeout="300,600,1200,2400,4800" gc_lag="0"
   discard_delivered_msgs="true"/>
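The receiver-side bookkeeping described above (sequence numbers, in-order delivery, gap detection) can be sketched as follows. This is a simplification for illustration only, not the NAKACK implementation, and it tracks just a single sender.

import java.util.SortedMap;
import java.util.TreeMap;

public class NakReceiverSketch {
    private long nextToDeliver = 1;   // next expected sequence number from this sender
    private final SortedMap<Long, byte[]> pending = new TreeMap<Long, byte[]>();

    /** Called for every message received from one particular sender. */
    public void onMessage(long seqno, byte[] payload) {
        pending.put(seqno, payload);

        // Deliver in order for as long as the sequence is unbroken.
        while (pending.containsKey(nextToDeliver)) {
            deliver(pending.remove(nextToDeliver));
            nextToDeliver++;
        }

        // Anything still pending means there is a gap; real NAKACK schedules a task
        // that periodically asks the sender (or, with use_mcast_xmit, the group)
        // to retransmit the missing sequence numbers.
        if (!pending.isEmpty()) {
            requestRetransmission(nextToDeliver, pending.firstKey() - 1);
        }
    }

    private void deliver(byte[] payload) { /* pass the message up the stack */ }

    private void requestRetransmission(long fromSeqno, long toSeqno) { /* send a NAK */ }
}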

The configurable attributes in the pbcast.NAKACK element are as follows.

The group membership service (GMS) protocol in the JGroups stack maintains a list of active nodes. It handles the requests to join and leave the cluster. It also handles the SUSPECT messages sent by failure detection protocols. All nodes in the cluster, as well as any interested services like JBoss Cache or HAPartition, are notified if the group membership changes. The group membership service is configured in the pbcast.GMS sub-element under the JGroups config element. Here is an example configuration.

<pbcast.GMS print_local_addr="true"
    join_timeout="3000"
    join_retry_timeout="2000"
    shun="true"
    view_bundling="true"/>
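Applications that use a channel directly can observe the membership changes GMS produces by registering a receiver. The sketch below uses the standard JGroups 2.x viewAccepted() callback; the class name and group name are illustrative.

import org.jgroups.JChannel;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;

public class MembershipWatcher extends ReceiverAdapter {
    // GMS installs a new View whenever members join, leave or are removed as failed.
    public void viewAccepted(View newView) {
        System.out.println("current members: " + newView.getMembers());
    }

    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();   // default stack; the AS uses its own XML stacks
        channel.setReceiver(new MembershipWatcher());
        channel.connect("membership-demo");  // illustrative group name
        Thread.sleep(60000);                 // watch view changes for a minute
        channel.close();
    }
}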

The configurable attributes in the pbcast.GMS element are as follows.

The flow control (FC) protocol tries to adapt the data sending rate to the data receipt rate among nodes. If a sender node is too fast, it might overwhelm the receiver node and result in out-of-memory conditions or dropped packets that have to be retransmitted. In JGroups, flow control is implemented via a credit-based system. The sender and receiver nodes have the same number of credits (bytes) to start with. The sender subtracts credits by the number of bytes in messages it sends. The receiver accumulates credits for the bytes in the messages it receives. When the sender's credit drops to a threshold, the receivers send some credit to the sender. If the sender's credit is used up, the sender blocks until it receives credits from the receiver. The flow control protocol is configured in the FC sub-element under the JGroups config element. Here is an example configuration.

<FC max_credits="2000000"
    min_threshold="0.10" 
    ignore_synchronous_response="true"/>
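The credit mechanism can be sketched from the sender's point of view as follows. This is a simplified illustration of the behavior described above, not the actual FC protocol code.

public class SenderCreditsSketch {
    private long credits;   // bytes this sender may still send to the group

    public SenderCreditsSketch(long maxCredits) {   // corresponds to max_credits
        this.credits = maxCredits;
    }

    /** Called before sending a message of the given size; blocks when credits run out. */
    public synchronized void beforeSend(int messageBytes) throws InterruptedException {
        while (credits < messageBytes) {
            wait();   // sender blocks until receivers replenish its credits
        }
        credits -= messageBytes;
    }

    /** Called when a receiver sends replenishment credits back to this sender
        (in FC, receivers do so once the sender's remaining credit falls below
        min_threshold * max_credits). */
    public synchronized void onCreditsReceived(long grantedBytes) {
        credits += grantedBytes;
        notifyAll();   // wake any sender thread blocked in beforeSend()
    }
}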

The configurable attributes in the FC element are as follows.

Why is FC needed on top of TCP? TCP has its own flow control!

The reason is group communication, where we essentially have to send group messages at the highest speed the slowest receiver can keep up with. Let's say we have a cluster {A,B,C,D}. D is slow (maybe overloaded), the rest are fast. When A sends a group message, it uses TCP connections A-A (conceptually), A-B, A-C and A-D. So let's say A sends 100 million messages to the cluster. Because TCP's flow control only applies to A-B, A-C and A-D, but not to A-{B,C,D}, where {B,C,D} is the group, it is possible that A, B and C receive the 100M, but D only received 1M messages. (By the way, this is also the reason why we need NAKACK, even though TCP does its own retransmission).

Now JGroups has to buffer all messages in memory for the case when the original sender S dies and a node asks for retransmission of a message sent by S. Because all members buffer all messages they receive, they need to purge stable messages (i.e. messages seen by everyone) every now and then. (This purging process is managed by the STABLE protocol; see Section 10.1.10, “Distributed Garbage Collection (STABLE)”.) In the above case, the slow node D will prevent the group from purging messages above 1M, so every member will buffer 99M messages! This in most cases leads to OOM exceptions. Note that, although the sliding window protocol in TCP will cause writes to block if the window is full, we assume in the above case that this is still much faster for A-B and A-C than for A-D.

So, in summary, even with TCP we need FC to ensure we send messages at a rate the slowest receiver (D) can handle.

So do I always need FC?

This depends on how the application uses the JGroups channel. Referring to the example above, if there was something about the application that would naturally cause A to slow down its rate of sending because D wasn't keeping up, then FC would not be needed.

A good example of such an application is one that uses JGroups to make synchronous group RPC calls. By synchronous, we mean the thread that makes the call blocks waiting for responses from all the members of the group. In that kind of application, the threads on A that are making calls would block waiting for responses from D, thus naturally slowing the overall rate of calls.
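For instance, a synchronous group RPC with JGroups' RpcDispatcher looks roughly like the sketch below: the GET_ALL mode makes the calling thread wait for every member's response, so a slow member automatically throttles the caller. Exact signatures vary between JGroups releases, and the class, method and group names here are illustrative.

import org.jgroups.JChannel;
import org.jgroups.blocks.GroupRequest;
import org.jgroups.blocks.RpcDispatcher;
import org.jgroups.util.RspList;

public class SyncRpcDemo {
    // Method invoked on every group member by the call below.
    public String status() {
        return "ok";
    }

    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();   // default stack, for illustration only
        RpcDispatcher dispatcher = new RpcDispatcher(channel, null, null, new SyncRpcDemo());
        channel.connect("rpc-demo");

        // GET_ALL + timeout: the caller blocks until all members respond (or time out),
        // which is the "natural" throttling described above.
        RspList responses = dispatcher.callRemoteMethods(
                null, "status", new Object[0], new Class[0],
                GroupRequest.GET_ALL, 10000);
        System.out.println(responses);

        channel.close();
    }
}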

A JBoss Cache cluster configured for REPL_SYNC is a good example of an application that makes synchronous group RPC calls. If a channel is only used for a cache configured for REPL_SYNC, we recommend you remove FC from its protocol stack.

And, of course, if your cluster only consists of two nodes, including FC in a TCP-based protocol stack is unnecessary. There is no group beyond the single peer-to-peer relationship, and TCP's internal flow control will handle that just fine.

Another case where FC may not be needed is for a channel used by a JBoss Cache configured for buddy replication and a single buddy. Such a channel will in many respects act like a two node cluster, where messages are only exchanged with one other node, the buddy. (There may be other messages related to data gravitation that go to all members, but in a properly engineered buddy replication use case these should be infrequent. But if you remove FC be sure to load test your application.)

In the Transport Protocols section above, we briefly touched on how the interface to which JGroups will bind sockets is configured. Let's get into this topic in more depth:

First, it's important to understand that the value set in any bind_addr element in an XML configuration file will be ignored by JGroups if it finds that system property jgroups.bind_addr (or a deprecated earlier name for the same thing, bind.address) has been set. The system property trumps XML. If JBoss AS is started with the -b (a.k.a. --host) switch, the AS will set jgroups.bind_addr to the specified value.

Beginning with AS 4.2.0, for security reasons the AS will bind most services to localhost if -b is not set. The effect of this is that in most cases users are going to be setting -b and thus jgroups.bind_addr is going to be set and any XML setting will be ignored.

So, what are best practices for managing how JGroups binds to interfaces?

This setting tells JGroups to ignore the jgroups.bind_addr system property, and instead use whatever is specified in XML. You would need to edit the various XML configuration files to set the various bind_addr attributes to the desired interfaces.

Within JBoss AS, there are a number of services that independently create JGroups channels -- possibly multiple different JBoss Cache services (used for HttpSession replication, EJB3 SFSB replication and EJB3 entity replication), two JBoss Messaging channels, and the general purpose clustering service called HAPartition that underlies most other JBossHA services.

It is critical that these channels only communicate with their intended peers; not with the channels used by other services and not with channels for the same service opened on machines not meant to be part of the group. Nodes improperly communicating with each other is one of the most common issues users have with JBoss AS clustering.

Which peers a JGroups channel will communicate with is defined by its group name and, for UDP-based channels, its multicast address and port. So isolating JGroups channels comes down to ensuring different channels use different values for the group name, the multicast address and, in some cases, the multicast port.

The issue being addressed here is the case where, in the same environment, you have multiple independent clusters running. For example, a production cluster, a staging cluster and a QA cluster. Or multiple clusters in a QA test lab or in a dev team environment. Or a large set of production machines divided into multiple clusters.

To isolate JGroups clusters from other clusters on the network, you need to:

  • Make sure the channels in the different clusters use different group names (in JBoss AS these are derived from the partition name, set with the -g command line switch).

  • Make sure the channels in the different clusters use different multicast addresses (set with the -u command line switch).

  • If you are running on an operating system where that is not sufficient (see below), make sure the channels in the different clusters use different multicast ports.

The issue being addressed here is the normal case where we have a cluster of 3 machines, each of which has, for example, an HAPartition deployed along with a JBoss Cache used for web session clustering. The HAPartition channels should not communicate with the JBoss Cache channels. Ensuring proper isolation of these channels is straightforward, and generally speaking the AS handles it for you without any special effort on your part. So most readers can skip this section.

To isolate JGroups channels for different services on the same set of AS instances from each other, each channel must have its own group name. The configurations that ship with JBoss AS of course ensure that this is the case. If you create a custom service that directly uses JGroups, just make sure you use a unique group name. If you create a custom JBoss Cache configuration, make sure you provide a unique value in the clusterName configuration property.
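For a custom service that creates its own channel, a simple way to keep the group name unique is to build it from the service name plus the partition name, as in the sketch below. The naming scheme and class name are illustrative only.

import org.jgroups.JChannel;

public class CustomServiceChannel {
    public static void main(String[] args) throws Exception {
        // jboss.partition.name is set by the -g switch; combine it with the service
        // name so two services (or two partitions) never end up in the same group.
        String partition = System.getProperty("jboss.partition.name", "DefaultPartition");
        String groupName = "MyCustomService-" + partition;

        JChannel channel = new JChannel();   // or a channel built from a custom XML stack
        channel.connect(groupName);
        // ... use the channel ...
        channel.close();
    }
}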

In releases prior to AS 5, different channels running in the same AS also had to use unique multicast ports. With the JGroups shared transport introduced in AS 5 (see Section 3.1.2, “The JGroups Shared Transport”), it is now common for multiple channels to use the same transport protocol and its sockets. This makes configuration easier, which is one of the main benefits of the shared transport. However, if you decide to create your own custom JGroups protocol stack configuration, be sure to configure its transport protocols with a multicast port that is different from the ports used in other protocol stacks.

On some operating systems (Mac OS X for example), using different -g and -u values isn't sufficient to isolate clusters; the channels running in the different clusters need to use different multicast ports. Unfortunately, setting the multicast ports is not quite as simple as -g and -u. By default, a JBoss AS instance running the all configuration will use up to two different instances of the JGroups UDP transport protocol, and will thus open two multicast sockets. You can control the ports those sockets use by using system properties on the command line. For example,

./run.sh -u 230.1.2.3 -g QAPartition -b 192.168.1.100 -c all \
        -Djboss.jgroups.udp.mcast_port=12345 -Djboss.messaging.datachanneludpport=23456

The jboss.messaging.datachanneludpport property controls the multicast port used by the MPING protocol in JBoss Messaging's DATA channel. The jboss.jgroups.udp.mcast_port property controls the multicast port used by the UDP transport protocol shared by all other clustered services.

The set of JGroups protocol stack configurations included in the $JBOSS_HOME/server/all/cluster/jgroups-channelfactory.sar/META-INF/jgroups-channelfactory-stacks.xml file includes a number of other example protocol stack configurations that the standard AS distribution doesn't actually use. Those configurations also use system properties to set any multicast ports. So, if you reconfigure some AS service to use one of those protocol stack configurations, just use the appropriate system property to control the port from the command line.

By default, the JGroups channels in JBoss AS use the UDP transport protocol in order to take advantage of IP multicast. However, one disadvantage of UDP is it does not come with the reliable delivery guarantees provided by TCP. The protocols discussed in Section 10.1.5, “Reliable Delivery Protocols” allow JGroups to guarantee delivery of UDP messages, but those protocols are implemented in Java, not at the OS network layer. To get peak performance from a UDP-based JGroups channel it is important to limit the need for JGroups to retransmit messages by limiting UDP datagram loss.

One of the most common causes of lost UDP datagrams is an undersized receive buffer on the socket. The UDP protocol's mcast_recv_buf_size and ucast_recv_buf_size configuration attributes are used to specify the amount of receive buffer JGroups requests from the OS, but the actual size of the buffer the OS will provide is limited by OS-level maximums. These maximums are often very low.
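You can observe this capping behavior directly with a few lines of standard Java; the requested size below mirrors the mcast_recv_buf_size value in the example stack.

import java.net.DatagramSocket;

public class RecvBufferCheck {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        socket.setReceiveBufferSize(25000000);   // what JGroups asks for (mcast_recv_buf_size)

        // The OS silently caps the request at its configured maximum, so on a machine
        // with low limits this prints a much smaller number than 25000000.
        System.out.println("granted receive buffer: " + socket.getReceiveBufferSize());
        socket.close();
    }
}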


The command used to increase the above limits is OS-specific. The table below shows the command required to increase the maximum buffer to 25MB. In all cases root privileges are required: