
Chapter 2. Core Concepts

2.1. Types of Cached Data
2.1.1. Entities
2.1.2. Collections
2.1.3. Queries
2.1.4. Timestamps
2.2. Key JBoss Cache Behaviors
2.2.1. Replication vs. Invalidation vs. Local Mode
2.2.2. Synchronous vs. Asynchronous
2.2.3. Locking Scheme
2.2.4. Isolation Level
2.2.5. Initial State Transfer
2.2.6. Cache Eviction
2.2.7. Buddy Replication and Cache Loading
2.3. Matching JBC Behavior to Types of Data
2.3.1. The RegionFactory Interface
2.3.2. The CacheManager API
2.3.3. Sharable JGroups Resources
2.3.4. Bringing It All Together

This chapter focuses on some of the core concepts underlying how the JBoss Cache-based implementation of the Hibernate Second Level Cache works. There's a fair amount of detail, which certainly doesn't all need to be mastered to use JBoss Cache with Hibernate. But, an understanding of some of the basic concepts here will help a user understand what some of the typical configurations discussed in the next chapter are all about.

If you want to skip the details for now, feel free to jump ahead to Section 2.3.4, “Bringing It All Together”.

The Second Level Cache can cache four different types of data: entities, collections, query results and timestamps. Proper handling of each of the types requires slightly different caching semantics. A major improvement in Hibernate 3.3 was the addition of the org.hibernate.cache.RegionFactory SPI, which allows Hibernate to tell the caching integration layer what type of data is being cached. Based on that knowledge, the cache integration layer can apply the semantics appropriate to that type.
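
For orientation, here is a minimal sketch, assuming Hibernate 3.3 or later with the JBoss Cache integration on the classpath, of how a SessionFactory is pointed at a RegionFactory implementation. The factory class named below is only an illustrative assumption; Chapter 3 covers the actual choices.

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class SecondLevelCacheSetup {
        public static SessionFactory build() {
            Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml (mappings, connection, etc.)
            cfg.setProperty("hibernate.cache.use_second_level_cache", "true");
            cfg.setProperty("hibernate.cache.use_query_cache", "true"); // only if you want query caching
            // The region factory class below is an assumption; use the JBoss Cache-backed
            // factory appropriate to your setup (see Chapter 3).
            cfg.setProperty("hibernate.cache.region.factory_class",
                    "org.hibernate.cache.jbc2.MultiplexedJBossCacheRegionFactory");
            return cfg.buildSessionFactory();
        }
    }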

Entities are the most common type of data cached in the second level cache. Entity caching in a clustered cache requires semantics that keep the cached entity state consistent across the cluster as entities are read, updated and removed.
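
As a brief illustration, an entity opts in to second level caching through its mapping. The sketch below uses annotations; the TRANSACTIONAL concurrency strategy shown is the one typically used with a transactional cache such as JBoss Cache (READ_ONLY is another option for immutable data), and the Customer class and its fields are hypothetical.

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    @Entity
    @Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL) // cache instances of this entity in the 2LC
    public class Customer {

        @Id
        private Long id;

        private String name;

        // getters and setters omitted for brevity
    }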

Hibernate supports caching of query results in the second level cache. The query itself (the HQL statement plus any parameter values) serves as the cache key, and the cached result holds the primary keys of all entities that make up the result set.

The semantics of query caching are significantly different from those of entity caching. A database row that reflects an entity's state can be locked, with cache updates applied while that lock is in place. The semantics of entity caching take advantage of this fact to help ensure cache consistency across the cluster. There is no clear database analogue to a query result set that can be efficiently locked to ensure consistency in the cache. As a result, the fail-fast semantics used with the entity caching put operation are not available; instead query caching has semantics akin to an entity insert, including costly synchronous cluster updates and the JBoss Cache two-phase commit protocol. Furthermore, Hibernate must aggressively invalidate query results from the cache any time any instance of one of the entity classes involved in the query's WHERE clause changes. All such query results are invalidated, even if the change made to the entity instance would not have affected the query result. It is too costly for Hibernate to determine whether the entity change would actually have affected the query result, so the safe choice is to invalidate the query. See Section 2.1.4, “Timestamps” for more on query invalidation.

The effect of all this is that query caching is less likely to provide a performance boost than entity/collection caching. Use it with care, and benchmark your application with it enabled and disabled. Be careful about replicating query results; caching them locally only on the node that executed the query will be more performant unless the query is quite expensive, is very likely to be repeated on other nodes, and is unlikely to be invalidated out of the cache.[2]
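
Query caching is also opt-in on a per-query basis; results are cached only for queries explicitly marked cacheable. A minimal sketch, in which the entity, parameter and region names are hypothetical:

    import java.util.List;
    import org.hibernate.Query;
    import org.hibernate.Session;

    public class CachedQueryExample {
        @SuppressWarnings("unchecked")
        public static List<Customer> findByName(Session session, String name) {
            Query query = session.createQuery("from Customer c where c.name = :name");
            query.setParameter("name", name);
            query.setCacheable(true);                             // required for the result to be cached
            query.setCacheRegion("com.example.customerQueries");  // optional named cache region
            return query.list();
        }
    }

Remember that this has no effect unless query caching is also enabled for the SessionFactory via hibernate.cache.use_query_cache.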

The JBoss Cache-based implementation of query caching adds a couple of interesting semantics, both designed to ensure that query cache operations don't block transactions from proceeding:

  • The insertion of a query result into the cache is very much like the insertion of a new entity. The difference is that it is possible for two transactions, possibly on different nodes, to try to insert the same query result at the same time. (If this happened with entities, the database would throw a primary key violation before any caching work could start.) This could lead to long delays as the transactions compete for cache locks. To prevent such delays, the cache integration layer sets a very short (a few milliseconds) lock timeout before attempting to cache a query result. If there is any sort of locking conflict, it will be detected quickly, and the attempt to cache the result will be quietly abandoned.

  • A read of a query result does not result in any long-lasting read lock in the cache. Thus, the fact that an uncommitted transaction has read a query result does not prevent concurrent transactions from subsequently invalidating that result and caching a new result set. However, an insertion of a query result into the cache will result in an exclusive write lock that lasts until the transaction that did the insert commits; this lock would prevent other transactions from reading the result. Since the point of query caching is to improve performance, blocking on a cache read for an extended period is counterproductive. So, the cache integration code sets a very low lock acquisition timeout before attempting the read; if there is a lock conflict, the read silently fails, resulting in a cache miss and a re-execution of the query against the database. This short-timeout, fail-quietly pattern is sketched after this list.
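
As a rough illustration of that pattern, the sketch below uses the JBoss Cache Option API to lower the lock acquisition timeout for a single put and to swallow a locking conflict. The real integration code differs; the Fqn, timeout value and class name here are made up.

    import org.jboss.cache.Cache;
    import org.jboss.cache.Fqn;
    import org.jboss.cache.config.Option;
    import org.jboss.cache.lock.TimeoutException;

    public class QuietQueryPutSketch {
        /** Attempts to cache a query result, giving up silently on any locking conflict. */
        public static void putQuietly(Cache<Object, Object> cache, Object queryKey, Object resultSet) {
            Option option = new Option();
            option.setLockAcquisitionTimeout(100); // a very short timeout, in milliseconds
            cache.getInvocationContext().setOptionOverrides(option);
            try {
                cache.put(Fqn.fromString("/query/results"), queryKey, resultSet); // hypothetical region Fqn
            } catch (TimeoutException ignored) {
                // another transaction holds the lock; quietly abandon the attempt to cache the result
            }
        }
    }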

Timestamp caching is an internal detail of query caching. As part of each query result, Hibernate stores the timestamp of when the query was executed. There is also a special area in the cache (the timestamps cache) where, for each entity class, the timestamp of the last update to any instance of that class is stored. When a query result is read from the cache, its timestamp is compared to the timestamps of all entities involved in the query. If any entity has a later timestamp, the cached result is discarded and a new query against the database is executed.
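
The check described above can be pictured roughly as follows. This is only a simplified illustration of the logic; Hibernate's actual implementation (its StandardQueryCache and UpdateTimestampsCache classes) is more involved.

    import java.util.Map;

    public class QueryFreshnessSketch {
        /**
         * @param resultTimestamp   timestamp stored with the cached query result
         * @param entitySpaces      the entity classes/tables the query involves
         * @param lastUpdateBySpace last-update timestamps from the timestamps cache
         * @return true if the cached result may be used; false if it must be discarded
         */
        public static boolean isUpToDate(long resultTimestamp,
                                         Iterable<String> entitySpaces,
                                         Map<String, Long> lastUpdateBySpace) {
            for (String space : entitySpaces) {
                Long lastUpdate = lastUpdateBySpace.get(space);
                if (lastUpdate != null && lastUpdate > resultTimestamp) {
                    // some instance of an involved entity changed after the result was
                    // cached, so the result is discarded and the query is re-executed
                    return false;
                }
            }
            return true;
        }
    }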

The semantics of the timestamp cache are quite different from those of the entity, collection and query caches; most notably, the timestamps must be replicated to every node in the cluster (see Section 2.3.4, “Bringing It All Together”).

JBoss Cache is a very flexible tool with a great number of configuration options; see the JBoss Cache User Guide for an in-depth discussion of them. Here we focus on the concepts most important to the Second Level Cache use case; see Section 3.2, “Configuring JBoss Cache” for details on the actual configurations involved.

JBoss Cache provides three different choices for how a node in the cluster should interact with the rest of the cluster when its local state is updated:

  • Replication: the updated state is copied to the other nodes in the cluster, so each cache holds its own copy of the data.

  • Invalidation: a message is sent telling the other nodes to remove the now-stale data from their local cache; a node that needs the data again re-reads it from the database.

  • Local mode: no cluster-wide messages are sent; the update is visible only on the node where it was made.

If the MVCC or PESSIMISTIC node locking scheme is used, JBoss Cache supports two isolation level configurations that specify how different transactions coordinate the locking of nodes in the cache: READ_COMMITTED and REPEATABLE_READ. These are somewhat analogous to database isolation levels; see the JBoss Cache User Guide for an in-depth discussion of these options. In both cases, cache reads do not block for other reads. In both cases, a transaction that writes to a node in the cache tree will hold an exclusive lock on that node until the transaction commits, causing other transactions that wish to read the node to block. In the REPEATABLE_READ case, the read lock held by an uncommitted transaction that has read a node will cause another transaction wishing to write to that node to block until the reading transaction commits. This ensures the reader can read the node again and get the same result, i.e. have a repeatable read.

READ_COMMITTED allows the greatest concurrency, since reads don't block each other and also don't block a write.

If the deprecated OPTIMISTIC node locking scheme is used, any isolation level configuration is ignored by the cache. Optimistic locking provides a repeatable read semantic but does not cause writes to block for reads.
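
To make the options above concrete, here is a programmatic sketch, assuming the JBoss Cache 3.x Configuration API. In a real deployment these settings normally come from the XML cache configurations discussed in Chapter 3, and the particular values shown are only examples.

    import org.jboss.cache.Cache;
    import org.jboss.cache.DefaultCacheFactory;
    import org.jboss.cache.config.Configuration;
    import org.jboss.cache.lock.IsolationLevel;

    public class EntityCacheConfigSketch {
        public static Cache<Object, Object> build() {
            Configuration cfg = new Configuration();
            cfg.setCacheMode(Configuration.CacheMode.INVALIDATION_SYNC);    // replication vs. invalidation vs. local
            cfg.setNodeLockingScheme(Configuration.NodeLockingScheme.MVCC); // MVCC, PESSIMISTIC or (deprecated) OPTIMISTIC
            cfg.setIsolationLevel(IsolationLevel.READ_COMMITTED);           // READ_COMMITTED or REPEATABLE_READ
            cfg.setFetchInMemoryState(false);                               // initial state transfer (see Section 2.2.5)
            return new DefaultCacheFactory<Object, Object>().createCache(cfg, true);
        }
    }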

In most cases, a REPEATABLE_READ setting on the cache is not needed, even if the application wants repeatable read semantics. This is because the Second Level Cache is just that -- a secondary cache. The primary cache for an entity or collection is the Hibernate Session object itself. Once an entity or collection is read from the second level cache, it is cached in the Session for the life of the transaction. Subsequent reads of that entity/collection will be resolved from the Session cache itself -- there will be no repeated read of the Second Level Cache by that transaction. So, there is no benefit to a REPEATABLE_READ configuration in the Second Level Cache.

The only exception to this is if the application uses Session's evict() or clear() methods to remove data from the Session cache and during the course of the same transaction wants to read that same data again with a repeatable read semantic.
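
A sketch of this behavior, in which Customer and the identifier value are hypothetical:

    import org.hibernate.Session;
    import org.hibernate.Transaction;

    public class SessionCacheExample {
        /** Placeholder entity class for the sketch. */
        public static class Customer { }

        public static void demo(Session session) {
            Transaction tx = session.beginTransaction();

            Customer first = (Customer) session.get(Customer.class, 42L);  // may hit the Second Level Cache or the database
            Customer again = (Customer) session.get(Customer.class, 42L);  // served from the Session cache; no 2LC read

            session.evict(first);                                          // remove it from the Session cache...
            Customer reread = (Customer) session.get(Customer.class, 42L); // ...so this read goes back to the 2LC/database

            tx.commit();
        }
    }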

Note that for query and timestamp caches, the behavior of the Hibernate/JBC integration will not allow repeatable read semantics even if JBC is configured for REPEATABLE_READ. A cache read will not result in a read lock in the cache being held for the life of the transaction. So, for these caches there is no benefit to a REPEATABLE_READ configuration.

The preceding discussion has gone into a lot of detail about what Hibernate wants to accomplish as it caches data, and what JBoss Cache configuration options are available. What should be clear is that the configurations that are best for caching one type of data are not the best (and are sometimes completely incorrect) for other types. Entities likely work best with synchronous invalidation; timestamps require replication; query caching might do best in local mode.

Prior to Hibernate 3.3 and JBoss Cache 2.1, the conflicting requirements of the different cache types led to a real dilemma, particularly if query caching was enabled. The conflict arose because all four cache types had to share a single underlying cache, with a single configuration. If query caching was enabled, the requirements of the timestamps cache essentially forced the use of synchronous replication, which is the worst performing choice for the more critical entity cache and is often inappropriate for the query cache.

With Hibernate 3.3 and JBoss Cache 2.1 it has become possible, even easy, to use separate underlying JBoss Cache instances for the different cache types. As a result, the entity cache can be optimally configured for entities while the necessary configuration for the timestamps cache is maintained.

Three key changes make this improvement possible:

  • the new org.hibernate.cache.RegionFactory SPI introduced in Hibernate 3.3 (Section 2.3.1, “The RegionFactory Interface”);

  • the JBoss Cache CacheManager API (Section 2.3.2, “The CacheManager API”);

  • sharable JGroups resources (Section 2.3.3, “Sharable JGroups Resources”).

JGroups is the group communication library JBoss Cache uses to send messages around the cluster. Each cache has a JGroups Channel; channels around the cluster that have the same name and compatible configurations detect each other and form a group for message transmission.

A Channel is a fairly heavy object, typically using a good number of threads, several sockets and some good-sized network I/O buffers. Creating multiple channels in the same VM was therefore costly, and it was an administrative burden as well, since each channel needed separate configuration to use different network addresses or ports. Architecturally, this argued against having multiple JBoss Cache instances in an application, since each would need its own Channel.

Added in JGroups 2.5 and much improved in the JGroups 2.6 series is the concept of sharable JGroups resources. Basically, the heavyweight JGroups elements can be shared. An application (e.g. the Hibernate/JBoss Cache integration layer) uses a JGroups ChannelFactory. The ChannelFactory is provided with a set of named channel configurations. When a Channel is needed (e.g. by a JBoss Cache instance), the application asks the ChannelFactory for the channel by name. If different callers ask for a channel with the same name, the ChannelFactory ensures that they get channels that share resources.
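
As a sketch of the pattern, assuming the JGroups 2.x multiplexer API (JChannelFactory): the stack file name, stack name and channel ids below are made up, and in practice the Hibernate/JBoss Cache integration drives the ChannelFactory for you via its configuration properties.

    import org.jgroups.Channel;
    import org.jgroups.JChannelFactory;

    public class SharedChannelSketch {
        public static void main(String[] args) throws Exception {
            JChannelFactory factory = new JChannelFactory();
            factory.setMultiplexerConfig("stacks.xml"); // file holding the named protocol stack configurations

            // Both callers ask for the "udp" stack; the factory ensures the resulting
            // channels share the underlying sockets, threads and buffers.
            Channel entityChannel = factory.createMultiplexerChannel("udp", "entity-cache");
            Channel queryChannel  = factory.createMultiplexerChannel("udp", "query-cache");
        }
    }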

The effect of all this is that if a user wants to use four separate JBoss Cache instances, one for entity caching, one for collection caching, one for query caching and one for timestamp caching, those four caches can all share the same underlying JGroups resources.

The task of a Hibernate Second Level Cache user is to provide the ChannelFactory with the set of named channel configurations it should use, and to tell the Hibernate/JBoss Cache integration which named configuration its cache or caches should request.

See Section 3.3, “JGroups Configuration” for more on JGroups.

So, we've seen that Hibernate caches up to four different types of data (entities, collections, queries and timestamps) and that Hibernate + JBoss Cache gives you the flexibility to use a separate underlying JBoss Cache, with different behavior, for each type. You can actually deploy four separate caches, one for each type.

In practice, four separate caches are unnecessary. For example, entity and collection caching have similar enough semantics that there is no reason not to share a JBoss Cache instance between them; queries can usually use that same cache as well. Similarly, queries and timestamps can share a JBoss Cache instance configured for replication, with the hibernate.cache.jbc.query.localonly=true property letting you turn off replication of the query results if you want to.

Here's a decision tree you can follow:

  1. Decide if you want to enable query caching.

  2. Decide if you want to use invalidation or replication for your entities and collections. Invalidation is generally recommended for entities and collections.

  3. If you are using query caching, your timestamps are either sharing a cache with other data types or in a cache by themselves. Either way, the cache being used for timestamps must have initial state transfer enabled. Now, if the timestamps are sharing a cache with entities, collections or queries, decide whether you want initial state transfer for that other data; see Section 2.2.5, “Initial State Transfer” for the implications. If you don't want initial state transfer for the other data, you'll need a separate cache for the timestamps.

  4. Finally, if your queries are sharing a cache configured for replication, decide if you want the cached query results to replicate. (The timestamps cache must replicate.) If not, you'll want to set the hibernate.cache.jbc.query.localonly=true option when you configure your SessionFactory, as sketched after this list.
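
For reference, a minimal sketch of setting that option when building the SessionFactory, using the property name discussed in Section 3.1:

    import org.hibernate.cfg.Configuration;

    public class QueryLocalOnlySetup {
        public static Configuration configure() {
            Configuration cfg = new Configuration().configure();
            cfg.setProperty("hibernate.cache.jbc.query.localonly", "true"); // cache query results only on the node that ran the query
            return cfg;
        }
    }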

Once you've made these decisions, you know whether you need just one underlying JBoss Cache instance, or more than one. Next we'll see how to actually configure the setup you've selected.



[2] See the discussion of the hibernate.cache.jbc.query.localonly property in Section 3.1, “Configuring the Hibernate Session Factory” for more on how to only cache query results locally.