
Sunday, February 10, 2013

Spring Data JDBC generic DAO implementation


Spring Data JDBC generic DAO implementation – most lightweight ORM ever


Source: http://www.javacodegeeks.com/2013/01/spring-data-jdbc-generic-dao-implementation-most-lightweight-orm-ever.html

I am thrilled to announce the first version of my Spring Data JDBC repository project. The purpose of this open source library is to provide a generic, lightweight and easy-to-use DAO implementation for relational databases, based on JdbcTemplate from the Spring framework and compatible with the Spring Data umbrella of projects.

Design objectives

  • Lightweight, fast and low-overhead. Only a handful of classes, no XML, annotations, reflection
  • This is not a full-blown ORM. No relationship handling, lazy loading, dirty checking or caching
  • CRUD implemented in seconds
  • For small applications where JPA is an overkill
  • Use when simplicity is needed or when future migration e.g. to JPA is considered
  • Minimalistic support for database dialect differences (e.g. transparent paging of results)

Features

Each DAO provides built-in support for:
  • Mapping to/from domain objects through RowMapper abstraction
  • Generated and user-defined primary keys
  • Extracting generated key
  • Compound (multi-column) primary keys
  • Immutable domain objects
  • Paging (requesting subset of results)
  • Sorting over several columns (database agnostic)
  • Optional support for many-to-one relationships
  • Supported databases (continuously tested):
    • MySQL
    • PostgreSQL
    • H2
    • HSQLDB
    • Derby
    • …and most likely most others
  • Easily extendable to other database dialects via the SqlGenerator class
  • Easy retrieval of records by ID

API

The library is compatible with the Spring Data PagingAndSortingRepository abstraction, so all of these methods are implemented for you:
public interface PagingAndSortingRepository<T, ID extends Serializable> extends CrudRepository<T, ID> {
             T  save(T entity);
    Iterable<T> save(Iterable<? extends T> entities);
             T  findOne(ID id);
        boolean exists(ID id);
    Iterable<T> findAll();
           long count();
           void delete(ID id);
           void delete(T entity);
           void delete(Iterable<? extends T> entities);
           void deleteAll();
    Iterable<T> findAll(Sort sort);
        Page<T> findAll(Pageable pageable);
}
Pageable and Sort parameters are also fully supported, which means you get paging and sorting by arbitrary properties for free. For example, say you have a userRepository extending the PagingAndSortingRepository<User, String> interface (implemented for you by the library) and you request the page with index 5 of the USERS table, 10 records per page, after applying some sorting:
Page<User> page = userRepository.findAll(
    new PageRequest(
        5, 10,
        new Sort(
            new Order(DESC, "reputation"),
            new Order(ASC, "user_name")
        )
    )
);
The Spring Data JDBC repository library will translate this call into (PostgreSQL syntax):
SELECT *
FROM USERS
ORDER BY reputation DESC, user_name ASC
LIMIT 10 OFFSET 50
…or even (Derby syntax):
SELECT * FROM (
    SELECT ROW_NUMBER() OVER () AS ROW_NUM, t.*
    FROM (
        SELECT *
        FROM USERS
        ORDER BY reputation DESC, user_name ASC
        ) AS t
    ) AS a
WHERE ROW_NUM BETWEEN 51 AND 60
No matter which database you use, you’ll get a Page<User> object in return (you still have to provide a RowMapper<User> yourself to translate from ResultSet to the domain object). If you don’t know the Spring Data project yet, Page<T> is a wonderful abstraction, not only encapsulating List<User>, but also providing metadata such as the total number of records, which page we are currently on, etc.
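For illustration, here is a hedged sketch of how the returned Page<User> might be consumed; UserRepository and User are the types assumed elsewhere in this article, and the accessors are from the Spring Data Commons Page API:
import java.util.List;

import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;
import org.springframework.data.domain.Sort.Order;

import static org.springframework.data.domain.Sort.Direction.ASC;
import static org.springframework.data.domain.Sort.Direction.DESC;

public class UserPagingExample {

    // userRepository is the DAO described later in this article
    public void printPage(UserRepository userRepository) {
        Page<User> page = userRepository.findAll(
                new PageRequest(5, 10, new Sort(
                        new Order(DESC, "reputation"),
                        new Order(ASC, "user_name"))));

        List<User> content = page.getContent();       // at most 10 User objects
        long totalRows     = page.getTotalElements();  // total row count in USERS
        int totalPages     = page.getTotalPages();
        int currentPage    = page.getNumber();         // 5 in this example

        System.out.printf("page %d of %d, %d rows in total, %d on this page%n",
                currentPage, totalPages, totalRows, content.size());
    }
}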

Reasons to use

  • You consider migration to JPA or even some NoSQL database in the future. Since your code will rely only on methods defined in PagingAndSortingRepository and CrudRepository from the Spring Data Commons umbrella project, you are free to switch from the JdbcRepository implementation (from this project) to JpaRepository, MongoRepository, GemfireRepository or GraphRepository. They all implement the same common API. Of course don’t expect that switching from JDBC to JPA or MongoDB will be as simple as switching imported JAR dependencies – but at least you minimize the impact by using the same DAO API.
  • You need a fast, simple JDBC wrapper library, and JPA or even MyBatis is overkill
  • You want to have full control over the generated SQL if needed
  • You want to work with objects, but don’t need lazy loading, relationship handling, multi-level caching, dirty checking… You need CRUD and not much more
  • You want to be DRY
  • You are already using Spring or maybe even JdbcTemplate, but still feel like there is too much manual work
  • You have very few database tables

Getting started

For more examples and working code don’t forget to examine project tests.

Prerequisites

Maven coordinates:
<dependency>
    <groupId>com.blogspot.nurkiewicz</groupId>
    <artifactId>jdbcrepository</artifactId>
    <version>0.1</version>
</dependency>
Unfortunately the project is not yet in the Maven central repository. For the time being you can install the library in your local repository by cloning it:
$ git clone git://github.com/nurkiewicz/spring-data-jdbc-repository.git
$ git checkout 0.1
$ mvn javadoc:jar source:jar install
To get started, your project must have a DataSource bean present and transaction management enabled. Here is a minimal MySQL configuration:
@EnableTransactionManagement
@Configuration
public class MinimalConfig {
 
    @Bean
    public PlatformTransactionManager transactionManager() {
        return new DataSourceTransactionManager(dataSource());
    }
 
    @Bean
    public DataSource dataSource() {
        MysqlConnectionPoolDataSource ds = new MysqlConnectionPoolDataSource();
        ds.setUser("user");
        ds.setPassword("secret");
        ds.setDatabaseName("db_name");
        return ds;
    }
 
}

Entity with auto-generated key

Say you have a following database table with auto-generated key (MySQL syntax):
CREATE TABLE COMMENTS (
    id INT AUTO_INCREMENT,
    user_name varchar(256),
    contents varchar(1000),
    created_time TIMESTAMP NOT NULL,
    PRIMARY KEY (id)
);
First you need to create a Comment domain object mapping to that table (just like in any other ORM):
public class Comment implements Persistable<Integer> {
 
    private Integer id;
    private String userName;
    private String contents;
    private Date createdTime;
 
    @Override
    public Integer getId() {
        return id;
    }
 
    @Override
    public boolean isNew() {
        return id == null;
    }
 
    //getters/setters/constructors/...
}
Apart from standard Java boilerplate, you should notice that the class implements Persistable<Integer>, where Integer is the type of the primary key. Persistable<T> is an interface coming from the Spring Data project and it’s the only requirement we place on your domain object.
Finally we are ready to create our CommentRepository DAO:
@Repository
public class CommentRepository extends JdbcRepository<Comment, Integer> {
 
    public CommentRepository() {
        super(ROW_MAPPER, ROW_UNMAPPER, "COMMENTS");
    }
 
    public static final RowMapper<Comment> ROW_MAPPER = //see below
 
    private static final RowUnmapper<Comment> ROW_UNMAPPER = //see below
 
    @Override
    protected Comment postCreate(Comment entity, Number generatedId) {
        entity.setId(generatedId.intValue());
        return entity;
    }
}
First of all we use the @Repository annotation to mark the DAO bean. It enables persistence exception translation. Such annotated beans are also discovered by classpath scanning.
As you can see we extend JdbcRepository<Comment, Integer>, which is the central class of this library, providing implementations of all PagingAndSortingRepository methods. Its constructor has three required dependencies: RowMapper, RowUnmapper and the table name. You may also provide an ID column name; otherwise the default "id" is used.
If you have ever used JdbcTemplate from Spring, you should be familiar with the RowMapper interface. We need to somehow extract columns from a ResultSet into an object. After all we don’t want to work with raw JDBC results. It’s quite straightforward:
public static final RowMapper<Comment> ROW_MAPPER = new RowMapper<Comment>() {
 
    @Override
    public Comment mapRow(ResultSet rs, int rowNum) throws SQLException {
        return new Comment(
                rs.getInt("id"),
                rs.getString("user_name"),
                rs.getString("contents"),
                rs.getTimestamp("created_time")
        );
    }
};
RowUnmapper comes from this library and it’s essentially the opposite of RowMapper: it takes an object and turns it into a Map. This map is later used by the library to construct SQL INSERT/UPDATE statements:
private static final RowUnmapper<Comment> ROW_UNMAPPER = new RowUnmapper<Comment>() {
    @Override
    public Map<String, Object> mapColumns(Comment comment) {
        Map<String, Object> mapping = new LinkedHashMap<String, Object>();
        mapping.put("id", comment.getId());
        mapping.put("user_name", comment.getUserName());
        mapping.put("contents", comment.getContents());
        mapping.put("created_time", new java.sql.Timestamp(comment.getCreatedTime().getTime()));
        return mapping;
    }
};
If you never update your database table (you only read some reference data inserted elsewhere) you may skip the RowUnmapper parameter or use MissingRowUnmapper.
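For illustration, a read-only variant of the repository above might look like the following sketch; it reuses CommentRepository.ROW_MAPPER and assumes MissingRowUnmapper exposes a public no-arg constructor:
@Repository
public class ReadOnlyCommentRepository extends JdbcRepository<Comment, Integer> {

    public ReadOnlyCommentRepository() {
        // MissingRowUnmapper simply fails if the library ever tries to build an
        // INSERT/UPDATE, which is exactly what we want for read-only access
        super(CommentRepository.ROW_MAPPER, new MissingRowUnmapper<Comment>(), "COMMENTS");
    }
}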
The last piece of the puzzle is the postCreate() callback method, which is called after an object has been inserted. You can use it to retrieve the generated primary key and update your domain object (or return a new one if your domain objects are immutable). If you don’t need it, just don’t override postCreate(). Check out JdbcRepositoryGeneratedKeyTest for working code based on this example.
By now you might have the feeling that, compared to JPA or Hibernate, there is quite a lot of manual work. However, various JPA implementations and other ORM frameworks are notorious for introducing significant overhead and a learning curve of their own. This tiny library intentionally leaves some responsibilities to the user in order to avoid complex mappings, reflection, annotations… all the implicitness that is not always desired. This project does not intend to replace mature and stable ORM frameworks. Instead it tries to fill a niche between raw JDBC and ORM where simplicity and low overhead are key features.

Entity with manually assigned key

In this example we’ll see how entities with user-defined primary keys are handled. Let’s start from database model:
CREATE TABLE USERS (
    user_name varchar(255),
    date_of_birth TIMESTAMP NOT NULL,
    enabled BIT(1) NOT NULL,
    PRIMARY KEY (user_name)
);
…and User domain model:
public class User implements Persistable<String> {
 
    private transient boolean persisted;
 
    private String userName;
    private Date dateOfBirth;
    private boolean enabled;
 
    @Override
    public String getId() {
        return userName;
    }
 
    @Override
    public boolean isNew() {
        return !persisted;
    }
 
    public User withPersisted(boolean persisted) {
        this.persisted = persisted;
        return this;
    }
 
    //getters/setters/constructors/...
 
}
Notice that a special persisted transient flag was added. The contract of CrudRepository.save() from the Spring Data project requires that an entity knows whether it was already saved or not (via the isNew() method) – there are no separate create() and update() methods. Implementing isNew() is simple for auto-generated keys (see Comment above) but in this case we need an extra transient field. If you hate this workaround and you only insert data and never update, you can get away with returning true all the time from isNew().
And finally our DAO, UserRepository bean:
@Repository
public class UserRepository extends JdbcRepository<User, String> {
 
    public UserRepository() {
        super(ROW_MAPPER, ROW_UNMAPPER, "USERS", "user_name");
    }
 
    public static final RowMapper<User> ROW_MAPPER = //...
 
    public static final RowUnmapper<User> ROW_UNMAPPER = //...
 
    @Override
    protected User postUpdate(User entity) {
        return entity.withPersisted(true);
    }
 
    @Override
    protected User postCreate(User entity, Number generatedId) {
        return entity.withPersisted(true);
    }
}
"USERS" and "user_name" parameters designate table name and primary key column name. I’ll leave the details of mapper and unmapper (see source code). But please notice postUpdate() and postCreate() methods. They ensure that once object was persisted, persisted flag is set so that subsequent calls to save() will update existing entity rather than trying to reinsert it.
Check out JdbcRepositoryManualKeyTest for working code based on this example.

Compound primary key

We also support compound primary keys (primary keys consisting of several columns). Take this table as an example:
CREATE TABLE BOARDING_PASS (
    flight_no VARCHAR(8) NOT NULL,
    seq_no INT NOT NULL,
    passenger VARCHAR(1000),
    seat CHAR(3),
    PRIMARY KEY (flight_no, seq_no)
);
I would like you to notice the type of the primary key in Persistable<T>:
public class BoardingPass implements Persistable<Object[]> {
 
    private transient boolean persisted;
 
    private String flightNo;
    private int seqNo;
    private String passenger;
    private String seat;
 
    @Override
    public Object[] getId() {
        return pk(flightNo, seqNo);
    }
 
    @Override
    public boolean isNew() {
        return !persisted;
    }
 
    //getters/setters/constructors/...
 
}
Unfortunately we don’t support small value classes encapsulating all ID values in one object (like JPA does with @IdClass), so you have to live with an Object[] array. Defining the DAO class is similar to what we’ve already seen:
public class BoardingPassRepository extends JdbcRepository<BoardingPass, Object[]> {
    public BoardingPassRepository() {
        this("BOARDING_PASS");
    }
 
    public BoardingPassRepository(String tableName) {
        super(MAPPER, UNMAPPER, new TableDescription(tableName, null, "flight_no", "seq_no")
        );
    }
 
    public static final RowMapper<BoardingPass> MAPPER = //...
 
    public static final RowUnmapper<BoardingPass> UNMAPPER = //...
 
}
Two things to notice: we extend JdbcRepository<BoardingPass, Object[]> and we provide the two ID column names just as expected: "flight_no", "seq_no". We query such a DAO by providing both flight_no and seq_no values (necessarily in that order) wrapped in an Object[]:
BoardingPass pass = repository.findOne(new Object[] {"FOO-1022", 42});
No doubt this is cumbersome in practice, so we provide a tiny helper method which you can statically import:
import static com.blogspot.nurkiewicz.jdbcrepository.JdbcRepository.pk;
//...
 
BoardingPass foundFlight = repository.findOne(pk("FOO-1022", 42));
Check out JdbcRepositoryCompoundPkTest for working code based on this example.

Transactions

This library is completely orthogonal to transaction management. Every method of each repository requires a running transaction and it’s up to you to set it up. Typically you would place @Transactional on the service layer (calling the DAO beans). I don’t recommend placing @Transactional on every DAO bean.
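As a hedged illustration, a hypothetical CommentService could look like this, with the transaction demarcated at the service layer; the Comment no-arg constructor and setters are assumed from the getters/setters placeholder above:
@Service
public class CommentService {

    @Autowired
    private CommentRepository commentRepository;

    @Transactional
    public Comment addComment(String userName, String contents) {
        // the whole method runs in one transaction; the repository call simply participates in it
        Comment comment = new Comment();   // assumed no-arg constructor + setters
        comment.setUserName(userName);
        comment.setContents(contents);
        comment.setCreatedTime(new Date());
        return commentRepository.save(comment);
    }

    @Transactional(readOnly = true)
    public Page<Comment> listComments(Pageable pageable) {
        return commentRepository.findAll(pageable);
    }
}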

Caching

The Spring Data JDBC repository library does not provide any caching abstraction or support. However, adding a @Cacheable layer on top of your DAOs or services using the caching abstraction in Spring is quite straightforward. See also: @Cacheable overhead in Spring.
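For example, a hedged sketch using Spring’s caching abstraction on top of the DAO; the cache name "comments" is arbitrary, and a CacheManager plus @EnableCaching (or the XML equivalent) is assumed to be configured elsewhere:
@Service
public class CachedCommentService {

    @Autowired
    private CommentRepository commentRepository;

    @Cacheable("comments")
    public Comment findComment(Integer id) {
        // the first call hits the database; subsequent calls with the same id
        // are served from the cache until the entry is evicted
        return commentRepository.findOne(id);
    }

    @CacheEvict(value = "comments", key = "#comment.id")
    public Comment updateComment(Comment comment) {
        return commentRepository.save(comment);
    }
}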

Contributions

…are always welcome. Don’t hesitate to submit bug reports and pull requests. The biggest missing feature right now is support for MSSQL and Oracle databases. It would be terrific if someone could have a look at it.

Testing

This library is continuously tested using Travis (Build Status). The test suite consists of 265 tests (53 distinct tests, each run against 5 different databases: MySQL, PostgreSQL, H2, HSQLDB and Derby).
When filing bug reports or submitting new features, please try to include supporting test cases. Each pull request is automatically tested on a separate branch.

Building

After forking the official repository, building is as simple as running:
$ mvn install
You’ll notice plenty of exceptions during JUnit test execution. This is normal. Some of the tests run against MySQL and PostgreSQL, which are available only on the Travis CI server. When these database servers are unavailable, the whole test is simply skipped:
Results :
Tests run: 265, Failures: 0, Errors: 0, Skipped: 106
Exception stack traces come from root AbstractIntegrationTest.

Design

The library consists of only a handful of classes, highlighted in the diagram below:
UML diagram
JdbcRepository is the most important class; it implements all PagingAndSortingRepository methods. Each user repository has to extend this class and provide at least a RowMapper and a RowUnmapper (the latter only if you want to modify table data).
SQL generation is delegated to SqlGenerator. PostgreSqlGenerator and DerbySqlGenerator are provided for databases that don’t work with the standard generator.

Effective Logging in Java/JEE


What is MDC?
MDC stands for Mapped Diagnostic Context. It helps you distinguish interleaved logs from multiple sources. Let me explain in detail. When multiple user requests come in for a given servlet, each request is serviced by a thread. This leaves multiple users logging to the same log file, and the log statements get intermixed. Now, to filter out the logs of a particular user, we need to append the user-id to the log statements so that we can grep (search) for them in the log file and make some sense of them.
An obvious way of logging is to append the user-id to the log statements, i.e. log.info(userId + " logged something ");
A non-invasive way of logging is to use MDC. With MDC, you put the user-id in a context map which is attached to the thread (of each user request) by the logger.
MDC is thread-safe and uses a Map internally to store the context information. [Courtesy: Kalyan Dabburi]
How to use MDC?
a. Configure the information which needs to be logged (the user-id in this case) as part of the ConversionPattern in your log4j configuration (properties syntax shown here):
log4j.appender.consoleAppender.layout.ConversionPattern = %d %i - %m - %X{user-id}%n
b. In your respective class, before you start processing the user request, place the actual user-id in the context (MDC):
MDC.put("user-id", "SKRS786");
c. Remove the context information from the MDC at the end of the processing:
MDC.remove("user-id");
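Putting steps b and c together, here is a hedged sketch of a servlet filter that sets and clears the MDC around each request; how the user-id is resolved (getRemoteUser() below) is application specific:
import java.io.IOException;

import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

import org.apache.log4j.MDC;

public class UserIdMdcFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // application-specific way of resolving the current user; adapt to your security setup
        String userId = ((HttpServletRequest) request).getRemoteUser();
        MDC.put("user-id", userId != null ? userId : "anonymous");
        try {
            chain.doFilter(request, response);
        } finally {
            // always clean up, otherwise the value leaks to the next request served by this pooled thread
            MDC.remove("user-id");
        }
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}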
 
What is NDC? Which one to use, MDC or NDC?
NDC stands for Nested Diagnostic Context. It is a stack-based implementation of attaching context information. For all practical purposes, use MDC over NDC, as MDC is more memory efficient. For a detailed comparison, click here.

NDC vs MDC - Which one should I use?


The NDC and MDC log4j classes are used to store program/application contextual information that can then be used when logging messages. The NDC class name is org.apache.log4j.NDC. "NDC" stands for "Nested Diagnostic Context". The MDC class name is org.apache.log4j.MDC. "MDC" stands for "Mapped Diagnostic Context". NDC has been part of the log4j framework longer than MDC. If you haven't already, you may want to review the javadoc information for each class.

NDC


The "Nested Diagonostic Context" implements a "stack" onto which context information can be pushed and popped (ie "nested"). The context is stored per thread, so different threads can have different context information. When a program entered section "A" of its code, it could use NDC.push() to put the string "A" into the context. When it exited section "A", it would then NDC.pop() to remove "A" from the context. As you can see, you can continue to push/pop contexts. It is up to the application to make sure that the proper NDC.pop() call is made for each NDC.push().
When a message is logged, the current contents of the NDC are attached to it, and can be displayed in the log messages by using the '%x' option in PatternLayout. In this way, information specific to the context of a particular thread can be displayed in the log.
The beauty of this is that the logger sending the message does not have any clue about the context or contents of the NDC, and it doesn’t need to. But appenders and filters can use the NDC information in the log message to affect the routing and display of log messages. Besides the '%x' option in PatternLayout, a new log4j filter for v1.3 (see org.apache.log4j.filters.NDCMatchFilter in the current cvs) will accept or deny a log message based on the contents of the NDC information.
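A hedged sketch of the push/pop discipline described above (class and context strings are made up for illustration):
import org.apache.log4j.Logger;
import org.apache.log4j.NDC;

public class NdcExample {

    private static final Logger log = Logger.getLogger(NdcExample.class);

    public void processOrder(String orderId) {
        NDC.push("order " + orderId);        // entering "section A"
        try {
            log.info("processing started");  // with '%x' in the layout, the NDC contents are appended
            // nested sections may push and pop further context here
        } finally {
            NDC.pop();                       // the matching pop for the push above
        }
    }
}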

MDC


The "Mapped Diagnostic Context" implements a "map" into which key/value pair information can be stored. Just like NDC, the context is stored per thread. Values are stored by key name. Each thread could use the same key name but have different stored values. Values are stored/retreived/removed by using the familiar pattern of MDC.put(), MDC.get(), and MDC.remove() methods.
When a message is logged, the current contents of the MDC are attached to it, and can be displayed in the log messages by using the '%X' option in PatternLayout. More than one MDC value can be displayed in a single log message.
Just as with NDC, appenders and filters can use the MDC information attached to a log message for display and routing. Log4j v1.3 will contain a filter based on the contents of the MDC (see org.apache.log4j.filters.MDCMatchFilter  in the current cvs).

Which one to use?


Now that you have some idea of how the NDC and MDC store context information, it should be straightforward to choose which one to use. If nested/stack-like information is important when logging, use NDC. If key/value pair information is more appropriate, use MDC.

Known Gotchas


  • MDC requires JDK 1.2 or later. It is not compatible with JDK 1.1, unlike NDC which is.
  • NDC use can lead to memory leaks if you do not periodically call the NDC.remove() method. The current NDC implementation maintains a static hard link to the thread for which it is storing context. So, when the thread is released by its creator, the NDC maintains the link and the thread (and its related memory) is not released and garbage collected like one might expect. NDC.remove() fixes this by periodically checking the threads referenced by NDC and releasing the references of "dead" threads. But, you have to write your code to call NDC.remove().
So, give both NDC and MDC a try. Write some test code to set various values and log messages to see how the output changes. NDC and MDC are powerful tools for logging that no log4j user should be ignorant of.
Which logging framework to use? Log4J or SLF4J or logback?
For all new application development, use logback. Logback is a run-time implementation of SLF4J. If you have an existing application using Log4J, it is still worthwhile to switch to logback. For a detailed explanation, click here.
Reasons to prefer logback over log4j
Logback brings a very large number of improvements over log4j, big and small. They are too many to enumerate exhaustively. Nevertheless, here is a non-exhaustive list of reasons for switching to logback from log4j. Keep in mind that logback is conceptually very similar to log4j as both projects were founded by the same developer. If you are already familiar with log4j, you will quickly feel at home using logback. If you like log4j, you will probably love logback.

Faster implementation

Based on our previous work on log4j, logback internals have been re-written to perform about ten times faster on certain critical execution paths. Not only are logback components faster, they have a smaller memory footprint as well.

Extensive battery of tests

Logback comes with a very extensive battery of tests developed over the course of several years and untold hours of work. While log4j is also tested, logback takes testing to a completely different level. In our opinion, this is the single most important reason to prefer logback over log4j. You want your logging framework to be rock solid and dependable even under adverse conditions.

logback-classic speaks SLF4J natively

Since the Logger class in logback-classic implements the SLF4J API natively, you incur zero overhead when invoking an SLF4J logger with logback-classic as the underlying implementation. Moreover, since logback-classic strongly encourages the use of SLF4J as its client API, if you need to switch to log4j or to j.u.l., you can do so by replacing one jar file with another. You will not need to touch your code logging via the SLF4J API. This can drastically reduce the work involved in switching logging frameworks.
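For illustration, a minimal sketch of code written against the SLF4J API; it stays untouched whichever backend (logback-classic, log4j via an adapter, or j.u.l.) sits behind it. The OrderService class is hypothetical:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // parameterized messages avoid string concatenation when the level is disabled
        log.debug("Placing order {}", orderId);
        try {
            // ... business logic ...
        } catch (RuntimeException e) {
            log.error("Order {} failed", orderId, e);
        }
    }
}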

Extensive documentation

Logback ships with detailed and constantly updated documentation.

Configuration files in XML or Groovy

The traditional way of configuring logback is via an XML file. Most of the examples in the documentation use this XML syntax. However, as of logback version 0.9.22, configuration files written in Groovy are also supported. Compared to XML, Groovy-style configuration is more intuitive, consistent and has a shorter syntax.

Automatic reloading of configuration files

Logback-classic can automatically reload its configuration file upon modification. The scanning process is fast, contention-free, and dynamically scales to millions of invocations per second spread over hundreds of threads. It also plays well within application servers and more generally within the JEE environment as it does not involve the creation of a separate thread for scanning.

Graceful recovery from I/O failures

Logback's FileAppender and all its sub-classes, including RollingFileAppender, can gracefully recover from I/O failures. Thus, if a file server fails temporarily, you no longer need to restart your application just to get logging working again. As soon as the file server comes back up, the relevant logback appender will transparently and quickly recover from the previous error condition.

Automatic removal of old log archives

By setting the maxHistory property of TimeBasedRollingPolicy or SizeAndTimeBasedFNATP, you can control the maximum number of archived files. If your rolling policy calls for monthly rollover and you wish to keep one year's worth of logs, simply set the maxHistory property to 12. Archived log files older than 12 months will be automatically removed.

Automatic compression of archived log files

RollingFileAppender can automatically compress archived log files during rollover. Compression always occurs asynchronously so that even for large log files, your application is not blocked for the duration of the compression.

Prudent mode

In prudent mode, multiple FileAppender instances running on multiple JVMs can safely write to the same log file. With certain limitations, prudent mode extends to RollingFileAppender.

Lilith

Lilith is a logging and access event viewer for logback. It is comparable to log4j's chainsaw, except that Lilith is designed to handle large amounts of logging data without flinching.

Conditional processing of configuration files

Developers often need to juggle between several logback configuration files targeting different environments such as development, testing and production. These configuration files have substantial parts in common, differing only in a few places. To avoid duplication, logback supports conditional processing of configuration files with the help of <if>, <then> and <else> elements so that a single configuration file can adequately target several environments.

Filters

Logback comes with a wide array of filtering capabilities going much further than what log4j has to offer. For example, let's assume that you have a business-critical application deployed on a production server. Given the large volume of transactions processed, logging level is set to WARN so that only warnings and errors are logged. Now imagine that you are confronted with a bug that can be reproduced on the production system but remains elusive on the test platform due to unspecified differences between those two environments (production/testing).
With log4j, your only choice is to lower the logging level to DEBUG on the production system in an attempt to identify the problem. Unfortunately, this will generate a large volume of logging data, making analysis difficult. More importantly, extensive logging can impact the performance of your application on the production system.
With logback, you have the option of keeping logging at the WARN level for all users except for the one user, say Alice, who is responsible for identifying the problem. When Alice is logged on, she will be logging at level DEBUG while other users can continue to log at the WARN level. This feat can be accomplished by adding 4 lines of XML to your configuration file. Search for MDCFilter in the relevant section of the manual.

SiftingAppender

SiftingAppender is an amazingly versatile appender. It can be used to separate (or sift) logging according to any given runtime attribute. For example, SiftingAppender can separate logging events according to user sessions, so that the logs generated by each user go into distinct log files, one log file per user.

Stack traces with packaging data

When logback prints an exception, the stack trace will include packaging data. Here is a sample stack trace generated by the logback-demo web-application.
14:28:48.835 [btpool0-7] INFO  c.q.l.demo.prime.PrimeAction - 99 is not a valid value
java.lang.Exception: 99 is invalid
  at ch.qos.logback.demo.prime.PrimeAction.execute(PrimeAction.java:28) [classes/:na]
  at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431) [struts-1.2.9.jar:1.2.9]
  at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236) [struts-1.2.9.jar:1.2.9]
  at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:432) [struts-1.2.9.jar:1.2.9]
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) [servlet-api-2.5-6.1.12.jar:6.1.12]
  at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502) [jetty-6.1.12.jar:6.1.12]
  at ch.qos.logback.demo.UserServletFilter.doFilter(UserServletFilter.java:44) [classes/:na]
  at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1115) [jetty-6.1.12.jar:6.1.12]
  at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:361) [jetty-6.1.12.jar:6.1.12]
  at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417) [jetty-6.1.12.jar:6.1.12]
  at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) [jetty-6.1.12.jar:6.1.12]
From the above, you can recognize that the application is using Struts version 1.2.9 and was deployed under Jetty version 6.1.12. Thus, stack traces quickly inform the reader not only about the classes involved in the exception, but also about the packages and package versions they belong to. When your customers send you a stack trace, as a developer you will no longer need to ask them for the versions of the packages they are using. The information will be part of the stack trace. See the "%xThrowable" conversion word for details.
This feature can be quite helpful to the point that some users mistakenly consider it a feature of their IDE.

Logback-access, i.e. HTTP-access logging with brains, is an integral part of logback

Last but not least, the logback-access module, part of the logback distribution, integrates with Servlet containers such as Jetty or Tomcat to provide rich and powerful HTTP-access log functionality. Since logback-access was part of the initial design, all the logback-classic features you love are available in logback-access as well.

In summary

We have listed a number of reasons for preferring logback over log4j. Given that logback builds upon our previous work on log4j, simply put, logback is just a better log4j.
To understand the evolution of logging in Java and JEE world, refer to this article by Micheal Andrews.

Some Interview Questions to Hire a Java EE Developer


Source: http://www.hildeberto.com/2011/09/some-interview-questions-to-hire-java.html



The Internet is full of interview questions for Java developers. The main problem with those questions is that they only prove that the candidate has a good memory, remembering all that syntax, structures, constants, etc. There is no real evaluation of his/her logical reasoning.

I'm listing below some examples of interview questions that check the knowledge of the candidate based on his/her experience. The questions were formulated to verify whether the candidate is capable of fulfilling the role of a Java enterprise application developer. I'm also including the answers in case anybody wants to discuss the questions.

1. Can you give some examples of improvements in the Java EE5/6 specification in comparison to the J2EE specification?

The new specification favours convention over configuration and introduces annotations to replace the use of XML for configuration. Inheritance is not used to define components anymore. They are defined, instead, as POJOs. To empower those POJOs with enterprise features, dependency injection was put in place, simplifying the use of EJBs. The persistence layer was fully replaced by the Java Persistence API (JPA).

2. Considering two enterprise systems developed in different platforms, which good options do you propose to exchange data between them?

Nowadays the main candidates are web services and message queues, depending on the scenario. For example: when a system needs to send data to another system as soon as the data are available, or to make data available to several systems, then a message queuing system is recommended. When a system has data to be processed by another system and needs the result of that processing back synchronously, then a web service is the most suitable option.

3. What do you suggest to implement asynchronous code in Java EE?

There are several options: one can post messages to a queue to be consumed by a Message-Driven Bean (MDB); or use the EJB timer service with a @Timeout callback method to trigger the code programmatically; or annotate a method with @Schedule to define declaratively when the code should run.
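As a hedged illustration of the declarative option, a sketch of an EJB 3.1 calendar-based timer (bean and method names are made up):
import javax.ejb.Schedule;
import javax.ejb.Singleton;

@Singleton
public class NightlyCleanupJob {

    // the container invokes this method asynchronously every day at 02:00
    @Schedule(hour = "2", minute = "0", second = "0", persistent = false)
    public void purgeExpiredRecords() {
        // ... the scheduled work goes here ...
    }
}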

4. Can you illustrate the use of Stateless Session Beans, Stateful Session Beans and Singleton Session Beans?

Stateless Session Beans are used when there is no need to preserve the state of objects between several business transactions. Every transaction has its own instances, and instances of components can be retrieved from object pools. They are recommended for most cases in which several operations are performed within a transaction to keep the database consistent.

Stateful Session Beans are used when there is a need to preserve the state of objects between business transactions. Every instance of the component has its own objects. These objects are modified by different transactions and they are discarded after reaching a predefined time of inactivity. They can be used to cache intensively used data, such as reference data and long record sets for pagination, in order to reduce the volume of IO operations with the database.

A singleton session bean is instantiated once per application and exists for the lifecycle of the application. Singleton session beans are designed for circumstances in which a single enterprise bean instance is shared across and concurrently accessed by clients. They maintain their state between client invocations, which requires a careful implementation to avoid conflicts when accessed concurrently. This kind of component can be used, for example, to initialize the application at its start-up and share a specific object across the application.
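A hedged sketch of that last use case, a singleton initialized at application start-up (class name and data are made up):
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class ReferenceDataCache {

    private final Map<String, String> countryNames = new ConcurrentHashMap<String, String>();

    @PostConstruct
    void loadAtStartup() {
        // executed once, when the application starts
        countryNames.put("BE", "Belgium");
        countryNames.put("NL", "Netherlands");
    }

    public String countryName(String isoCode) {
        // shared state, concurrently accessed by clients (container-managed locking by default)
        return countryNames.get(isoCode);
    }
}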

5. What is the difference between queue and topic in a message queuing system?

In a queue, each message is delivered to and consumed by exactly one consumer (point-to-point, 1 – 1). In a topic, each published message is delivered to all of the topic's subscribers (publish-subscribe, 1 – N).

6. Which strategies do you consider to import and export XML content?

If the XML document is formally defined in a schema, we can use JAXB to serialize and deserialize objects into/from XML according to the schema. If the XML document does not have a schema, then there are two situations: 1) when the whole XML content should be considered, serial access to the whole document is recommended using SAX, or random access using DOM; 2) when only parts of the XML content should be considered, XPath can be used, or StAX in case operations should be executed immediately after each desired part is found in the document.
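As a small, hedged illustration of the schema-based option, a JAXB round trip; the Invoice class is hypothetical and would normally be generated from the schema:
import java.io.StringReader;
import java.io.StringWriter;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

public class JaxbExample {

    @XmlRootElement
    public static class Invoice {   // hypothetical domain class
        public String number;
        public double amount;
    }

    public static void main(String[] args) throws JAXBException {
        Invoice invoice = new Invoice();
        invoice.number = "2013-001";
        invoice.amount = 99.90;

        JAXBContext context = JAXBContext.newInstance(Invoice.class);

        // object -> XML (export)
        StringWriter xml = new StringWriter();
        Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        marshaller.marshal(invoice, xml);

        // XML -> object (import)
        Invoice parsed = (Invoice) context.createUnmarshaller()
                .unmarshal(new StringReader(xml.toString()));
        System.out.println(parsed.number + " / " + parsed.amount);
    }
}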

7. Can you list some differences between a relational model and an object model?

An object model can be mapped to a relational model, but there are some differences that should be taken into consideration. In the relational model a foreign key has the same type as the target's primary key, but in the object model an attribute points to the entire related object. In the object model it is possible to have N-N relationships, while in the relational model an intermediary entity is needed. There is no support for inheritance, interfaces, or polymorphism in the relational model.

8. What is the difference between XML Schema, XSLT, WSDL and SOAP?

An XML Schema describes the structure of an XML document and is used to validate these documents. WSDL (Web Services Description Language) describes the interface of SOAP-based web services. It can refer to XML schemas to define complex types passed as parameters or returned to the caller. SOAP (Simple Object Access Protocol) is the format of the message used to exchange data in a web service call. XSLT (eXtensible Stylesheet Language Transformations) is used to transform XML documents into other document formats.

9. How would you configure an environment to maximize productivity of a development team?

Every developer should have a personal environment capable of executing the whole application in his/her local workstation. The project should be synchronized between developers using a version control system. Integration routines must be executed periodically in order to verify the compatibility and communication between all components of the system. Unit and integration tests must be executed frequently.
---

You can extend this set of questions to cover other subjects like unit testing, dependency injection, version control and so on. Try to formulate the questions in a way that you don't get a single answer, but a short analysis from the candidate. People can easily find answers on the Internet, but good analysis can be provided only with accumulated experience.