Tuesday, May 31, 2011

Maven Releases - To Run or Not to Run (tests)

Firstly, I think Maven is great (yeah, I said it). Yes, it can be a bit of a pain in the backside at times, but who/what isn't. Now that I've got that off my chest, a quick look at Maven releases. Specifically, running tests during Maven releases.

Maven has a release plugin that can be used to produce and publish release artifacts. When you have a Maven based project, simply running:

mvn release:prepare release:perform

will present a series of questions and end up producing and publishing release artifacts for your project(s). Leaving the details of the 'prepare' and 'perform' phases out of this write-up (as they are sufficiently documented), this post will dwell on running tests during these releases.

When one runs the above, Maven runs all tests by default. This is because both phases run goals that execute tests: 'prepare' by default executes the 'clean verify' goals, while 'perform' by default executes 'deploy'. This means if you have 100s of tests, they will run twice.

While this seems completely normal, it can be argued that once the preparation has been done, i.e. all source code compiled, tests passed, scm tags created, it really isn't necessary to run the same tests again. While there are compelling reasons why you might still want to run them, this was one of the things that was bugging us when we ran Maven releases. Especially when you run prepare and perform together, it seems reasonable to be able to avoid running the tests a second time. When a project has 100s of integration/functional tests that get executed as part of a release, this means a lot of time spent running tests that we already know have passed.

And as it happens, the release plugin does provide the capability to avoid running tests during the perform phase if one wants to do so. It's all down to the plugin configuration.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-release-plugin</artifactId>
    <version>2.1</version>
    <configuration>
        <goals>deploy -Dmaven.test.skip=true</goals>
    </configuration>
</plugin>

The above configuration makes sure that tests don't run as part of the 'perform' phase of your Maven release, thereby saving you as many minutes as it takes to run the tests. While this might not be the preferred choice (not running tests in the perform phase), the plugin is configurable enough for us to make it play how we want it to play in case the second test run is seen as avoidable.
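If you'd rather not bake this into the pom, the release plugin also accepts an 'arguments' parameter that is passed on to the forked Maven build, so something along these lines (a sketch, for a one-off release) should have the same effect from the command line:

mvn release:prepare release:perform -Darguments="-Dmaven.test.skip=true"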

Thursday, February 24, 2011

Multiple Web Applications - One Spring Context

It's normal to have multiple web applications deployed as a complete solution that serves a product or enterprise. And if these web applications use spring, it's normal to see a separate spring context associated with each of these web applications. This is because although spring beans are in fact singletons, they are not "single"tons per VM when multiple class-loaders are involved. When multiple web applications are deployed (typically in a J2EE container or servlet container), each web application has its own class-loader. And when the spring beans under each of these web applications get loaded, they are singletons only within that class-loader.
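As a minimal sketch of this (assuming a hypothetical Foo class compiled to a hypothetical /tmp/shared-classes/ directory), two sibling class-loaders loading the same class produce two distinct Class objects, and hence two sets of "singleton" state:

import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        // hypothetical location containing Foo.class
        URL[] path = { new URL("file:/tmp/shared-classes/") };

        // two sibling class-loaders, like two web applications in a container
        ClassLoader webAppA = new URLClassLoader(path, null);
        ClassLoader webAppB = new URLClassLoader(path, null);

        Class<?> fooInA = webAppA.loadClass("Foo");
        Class<?> fooInB = webAppB.loadClass("Foo");

        // same bytes on disk, but two distinct classes - any static or
        // singleton state exists once per class-loader
        System.out.println(fooInA == fooInB); // prints false
    }
}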

While this is perfectly acceptable, it would be nice (and beneficial) to have these spring beans be singletons across all web applications. This would mean that the web applications would be sharing the same spring context. Spring does provide this capability out of the box and it is easily configurable.

One of the pre-requisites to enable this capability, however, is to have all web applications packaged and deployed as an EAR. This means the deployment container would need to be a J2EE container (like WebLogic, JBoss, Glassfish etc.) and not simply a servlet container like Tomcat.

The key to this deployment model is the class-loader hierarchy that we get from an EAR deployment. An EAR, which would typically contain multiple web applications (WAR files) and a shared application library (with multiple JAR files), works on a class-loader hierarchy. Below the standard system/bootstrap class-loaders of the application server, an EAR has a top-level class-loader (call it the Application class-loader) and a bunch of class-loaders as its children. These child class-loaders are associated with the web applications. It's standard in an EAR deployment to package all jar files shareable by all web applications under a directory referenced through the Class-Path entry of the META-INF manifest. All classes loaded by the Application class-loader are visible to the web application class-loaders. But if a web application contains a jar file under its own lib directory, it won't be accessible to the Application class-loader and certainly not to the other web applications within the EAR.
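As a rough sketch (all names illustrative), such an EAR might be laid out like this:

my-solution.ear
    META-INF/application.xml
    lib/                    <- shared application library, loaded by the Application class-loader
        spring.jar
        common-beans.jar
    web-a.war               <- each WAR gets its own child class-loader
    web-b.war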

So with regard to sharing spring beans, this means that we can place the jar files for all shared spring beans in the shared application library. These then get loaded by the Application class-loader and each web application has access to them, hence resulting in a shared spring context - well, not really. For spring, this alone is not enough to achieve a shared context.
A web application is spring-configured through its web.xml, using either a ContextLoaderListener or a ContextLoaderServlet (depending on the servlet API implemented by your container). Typically we'd use the 'contextConfigLocation' parameter, where we specify the location of our spring bean configurations.


<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/my-application-servlet.xml</param-value>
</context-param>

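(For completeness, this parameter is picked up by spring's context loader, which would be registered in the same web.xml; a minimal sketch for a listener-based container:)

<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
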
Assuming the above spring configuration consists of beans specific to the web application's concerns (validators, controllers), to have a bunch of beans share a single spring context we use the 'locatorFactorySelector' and 'parentContextKey' parameters. We simply add the following into our web.xml(s):


<context-param>
    <param-name>locatorFactorySelector</param-name>
    <param-value>classpath:common-beans.xml</param-value>
</context-param>

<context-param>
    <param-name>parentContextKey</param-name>
    <param-value>commonContext</param-value>
</context-param>
The above would mean that you have a file called common-beans.xml on the classpath of the web application, which has the following bean configured:



<bean id="commonContext"
      class="org.springframework.context.support.ClassPathXmlApplicationContext">
    <constructor-arg>
        <list>
            <value>classpath:service-beans.xml</value>
        </list>
    </constructor-arg>
</bean>

The above configuration defines the 'commonContext' bean. This bean is a 'ClassPathXmlApplicationContext', which is a convenience class used to build application contexts. The list passed into its constructor is a list of configuration file locations, from which the bean definitions will be loaded.
With a configuration like the one above in each web application of an EAR, all web applications will share the same beans configured through the 'commonContext' bean.
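Under the covers this goes through spring's ContextSingletonBeanFactoryLocator; the programmatic equivalent of what the two parameters do is roughly the following (a sketch, not something you'd normally write yourself):

import org.springframework.beans.factory.access.BeanFactoryLocator;
import org.springframework.beans.factory.access.BeanFactoryReference;
import org.springframework.context.ApplicationContext;
import org.springframework.context.access.ContextSingletonBeanFactoryLocator;

public class SharedContextLookup {
    public static void main(String[] args) {
        // 'locatorFactorySelector' - the definition file for the shared context
        BeanFactoryLocator locator =
                ContextSingletonBeanFactoryLocator.getInstance("classpath:common-beans.xml");

        // 'parentContextKey' - the bean name of the shared context
        BeanFactoryReference ref = locator.useBeanFactory("commonContext");
        ApplicationContext shared = (ApplicationContext) ref.getFactory();

        // the same instance is handed back to every caller within this class-loader
        System.out.println(shared.getBeanDefinitionCount());
    }
}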

This approach can bring a few advantages to your deployment architecture, maintenance and development;

From a deployment architecture point of view,
  • If using hibernate in the mix of things, we can benefit from a single SessionFactory. If caching is configured, the cache would be applicable to all web applications and it would be one cache, saving on your heap usage.
  • Reduces the number of classes to load, hence saving on your PermGen space.

From a maintenance and development point of view,
  • Common beans can be configured in one place, in one configuration file, that is used by the others. There's no need to duplicate the bean declarations in multiple spring configuration files.

As mentioned previously, this only works if the deployment model is EAR based. If all web applications are deployed as their own WAR files, there's no class-loader hierarchy with which to achieve the above.

While the decision on whether to EAR or not is a separate set of notes, if an EAR packaging method is decided on for an application, spring does provide the capability to reap benefits from the decision.

Useful links on the subject;
http://download.oracle.com/javaee/1.4/tutorial/doc/Overview5.html
http://java.sun.com/developer/technicalArticles/Programming/singletons/

Saturday, January 29, 2011

Spring transactions readOnly - What's it all about

When using spring transactions, it's stated that the 'readOnly' attribute allows the underlying data layers to perform optimisations.
When using spring transactions in conjunction with hibernate via the HibernateTransactionManager, this translates to an optimisation applied on the hibernate Session. When persisting data, a hibernate session works based on the FlushMode set on the session. A FlushMode defines the flushing strategy, i.e. how database state is synchronized with session state.
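Roughly, the effect of a read-only transaction on the session boils down to something like this (a sketch of the effect against the Hibernate 3 API, not spring's actual code; FlushMode.NEVER was later deprecated in favour of FlushMode.MANUAL):

import org.hibernate.FlushMode;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class ReadOnlyFlushMode {
    // what the transaction manager effectively does on transaction start
    static void applyFlushMode(SessionFactory sessionFactory, boolean readOnly) {
        Session session = sessionFactory.getCurrentSession();
        if (readOnly) {
            session.setFlushMode(FlushMode.NEVER); // no automatic synchronisation of session state
        } else {
            session.setFlushMode(FlushMode.AUTO);  // flush as and when hibernate sees fit
        }
    }
}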
Let's look at a class with transactions demarcated as follows:
@Transactional(propagation = Propagation.REQUIRED)
public class FooService {
    
    public void doFoo(){
        //doing foo
    }

    public void doBar() {
        // doing bar
    }
}
The class FooService is demarcated with a transaction boundary, and all operations in FooService will have the transactional attributes specified by the @Transactional annotation at the class level. An operation within a transaction (or starting a transaction) will have the session FlushMode set to AUTO and is identified as a read-write transaction. This means that the session state is flushed when necessary, to make sure the transaction doesn't suffer from stale state.
However, if in the above example doBar() simply performs some read operations through hibernate, we wouldn't want hibernate trying to flush the session state. The way to tell hibernate not to do this is through the FlushMode. In this instance the above example turns out as follows:
@Transactional(propagation = Propagation.REQUIRED)
public class FooService {
    public void doFoo() {
        // do foo
    }

    @Transactional (readOnly = true)
    public void doBar() {
        // do bar 
    }
}
The above change to doBar() forces the session FlushMode to be set to NEVER. This means that we won't have hibernate trying to synchronise the session state within the scope of the session used in this method. After all, it would be a waste to perform session synchronisation for a read operation. One thing to note in this configuration is that we are indeed still spawning a new transaction; that happens by applying the @Transactional annotation.
However, this is only true if doBar() is called from a client who has not initiated or participated in a transaction. In other words, if doBar() is called within doFoo() (which has started a transaction), then the readOnly attribute wouldn't have any effect on the FlushMode. This is due to the fact that @Transactional uses Propagation.REQUIRED as the default propagation strategy, and in this instance doBar() would participate in the same transaction started by doFoo(), thereby not overriding any of its transaction attributes.
If for some reason doBar() still needs readOnly applied within an existing read-write transaction, then the propagation strategy for doBar() would need to be set to Propagation.REQUIRES_NEW. This forces the existing transaction to be suspended and a new transaction to be created, which also sets the FlushMode to NEVER. Once it exits the new transaction, it continues the first transaction with the FlushMode set back to AUTO (following the transaction propagation model). I can't think of a scenario that would need such a configuration though.
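For illustration, that configuration would look like this (a sketch; note this only applies when doBar() is invoked through the spring proxy, not via a plain internal call):

@Transactional(propagation = Propagation.REQUIRES_NEW, readOnly = true)
public void doBar() {
    // suspends any transaction in progress and runs in a fresh
    // read-only transaction (FlushMode NEVER)
}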

While the readOnly attribute can also provide hints to the underlying jdbc driver where supported, the implications of this attribute can vary based on the underlying persistence framework in use (Hibernate, Spring JPA, Toplink or even raw JDBC). The above details aren't necessarily true for all approaches; they are only valid in spring+hibernate land.

Another way of looking at all of this is probably to ask 'do we really need to spawn a transaction and then mark it as read-only for a pure read operation at all?'. The answer is probably 'no', and yes, it's best to avoid transactions for pure read operations. However, readOnly can come in handy depending on the approach one picks to demarcate transactions. For example, take a service implementation that has its transactional properties defined at the class level, configured for read-write transactions. If the majority of the operations of this service implementation share these transactional properties, it makes sense to have the properties applied at the class level. Now if there are a couple of operations that are actually read operations, the readOnly attribute comes in handy for configuring only those operations as @Transactional(readOnly = true). This is still not perfect, given the premise of not creating a transaction for a read operation. So for this example a better configuration might be @Transactional(propagation = Propagation.SUPPORTS, readOnly = true), which means a new transaction will not be created if one does not exist (relatively more efficient), while still having readOnly applied for other optimisations (possibly in the drivers).
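Putting that together, such a service might look like this (a sketch):

@Transactional(propagation = Propagation.REQUIRED)
public class FooService {

    public void updateFoo() {
        // read-write work, the class-level transaction attributes apply
    }

    @Transactional(propagation = Propagation.SUPPORTS, readOnly = true)
    public void findFoo() {
        // joins a transaction if the caller has one, otherwise runs without one;
        // readOnly still hints the session/driver when a transaction is present
    }
}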

It all boils down to the fact that spring offers the readOnly attribute as an option to use. But the when, where and why of using it is entirely up to the developer, depending on the design to which an application is built. So it is quite useful to know what the 'readOnly' attribute is all about, to put it to proper use.

Monday, January 24, 2011

Too many open files

I've had my share of dreaded moments with the 'Too many open files' exception. When you see 'java.io.IOException[java.net.SocketException]: Too many open files' it can be somewhat tricky to find out what exactly is causing it. But not so tricky if you know what needs to be done to get to the root cause.

First up though what's this exception really telling us?
When a file/socket is accessed in linux, a file descriptor is created for the process that deals with the operation. This information is available under /proc/process_id/fd. The number of file descriptors allowed is, however, restricted. Now if a file/socket is accessed, and the stream used to access it is not closed properly, then we run the danger of exhausting the limit of open files. This is when we see 'Too many open files' cropping up in our logs.
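A couple of quick checks here (exact limits and paths will vary per system):
shell>ulimit -n
(the open-files limit for processes started from the current shell)
shell>ls /proc/[process_id]/fd | wc -l
(how many descriptors the process is holding right now)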
However, the fix will vary depending on what the root cause turns out to be. It's easy if it's an error in your own code base (simply because you can fix your own code easily - at least in theory) and harder if it's a third-party library, or worse the jdk (not so bad if it's documented though).

So what do we do when this is upon us?
As with any other thing, find the root cause and cure it.
In order to find the root cause of this exception, the first thing to find out is what files are open, and how many of them are open to cause the exception. The 'lsof' command is your friend here.
shell>lsof -p [process_id]
(To state the obvious the process id is the pid of your java process)
The output of the above can be 'grep'd to find out which files are repeated and whether the count is increasing as the application runs.
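For example (with file.txt as a hypothetical suspect):
shell>lsof -p [process_id] | grep 'file.txt' | wc -l
Run it a few times while the application works; a steadily climbing count points at the leak.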

Once we know the file(s), and if that file/socket is accessed within our own code, it's a no-brainer: go fix the code. This could be something as simple as not closing a stream properly, like so;
public void doFoo() {
    Properties props = new Properties();
    // the stream returned by getResourceAsStream() is never closed
    props.load(Thread.currentThread().getContextClassLoader().getResourceAsStream("file.txt"));
}
The stream above is not closed. This results in an open file handle against file.txt each time doFoo() is called.
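A minimal fix is to hold a reference to the stream and close it in a finally block (a sketch, assuming the usual java.io and java.util.Properties imports; on Java 7+ a try-with-resources block does the same more neatly):
public void doFoo() throws IOException {
    InputStream in = Thread.currentThread().getContextClassLoader().getResourceAsStream("file.txt");
    try {
        Properties props = new Properties();
        props.load(in);
    } finally {
        if (in != null) {
            in.close(); // releases the file descriptor
        }
    }
}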
If it's third-party code, and you have access to the source, you'll have to put yourself through the somewhat painful process of finding the buggy code in order to work out what can be done about it.
In some situations third-party software would legitimately require increasing the hard and soft limits applicable to the number of file descriptors. This can be done in /etc/security/limits.conf by changing the figures for the hard and soft values like so;
*   hard   nofile   1024
*   soft   nofile   1024
Usage of '*' (which applies to all users) is best replaced by the specific user or group the change is applicable to.
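For example, scoped to a hypothetical 'appuser' account:
appuser   hard   nofile   4096
appuser   soft   nofile   4096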