Expression evaluation in Java (1)

There are times when you need to evaluate some expression in your program. Sometimes you can create something that is not text/string based – that is, you construct the expression in a different way, typically so it is easier to validate, evaluate, etc. Sometimes nothing beats the textual representation. It’s easy and quick to write, and I prefer it any day to various click-based solutions (typically based on some tree representation).

But the expression is a tree after all and we need to parse it somehow. Or do we? In my first sentence I said I needed to evaluate the expression. Can we forgo the parsing altogether?

This will be just the first of (probably) two posts. After some general thoughts I will focus on direct string expression evaluation; parsing and what comes after it will be covered in another post.

Parsing the grammar or what?

If you know tools like ANTLR then you may be asking what I am going to parse here. Is it a grammar written in some meta-language? No – I’m not interested in the grammar itself at all right now. The grammar is given and I want to parse expressions written in that grammar. If we used ANTLR, we’d be working with classes generated from the grammar.

Also, just to be sure: anytime I say parse/parsing, I mean it in a broader sense – the lexer and parser working together (except for cases where I explicitly mention tokenization).

Evaluate or parse as well?

Unless you internally represent the expression as a tree already, you will need to parse it from the string/stream – or something has to do it for you. The next question is whether you want to work with the expression tree and interpret it yourself, or whether you are really interested only in the result of the expression evaluation.

Evaluation in any case means that I can feed the same expression (whatever the representation is) with various inputs – which is valuable when the expression contains variables. Without variables it doesn’t make much sense anyway. For instance, in the case of Java’s ScriptEngine, this is what Bindings do.

So what are our options?

  • Expression evaluation from a string to a result in one go. I’m not interested in a tree or any internals, I just want results. The string input may be compiled for efficiency if it is to be evaluated many times.
  • I have a tree representation of the particular expression based on well-known types (custom or from some API, it doesn’t matter) and I interpret it. This implies that the tree construction is supported by a UI, the tree is stored in a meaningful way, etc.
    I don’t see any easy way to compile it here, unless we produce a string representation first and then follow the previous bullet. I’d go this path only if compilation provided a big performance win (after thorough benchmarks).
  • I have string input, but I want to parse it first and then follow the previous bullet. There are many parsers out there, many blog posts about the topic, etc. However, I’d just take ANTLR and use that. You generate the classes from a grammar and you’re done; then it’s just pure Java with a bit of antlr-runtime. It’s state-of-the-art technology; reinventing the wheel here is pointless unless you have some extraordinary requirements.

Obviously, the first option hides both parsing and interpreting (or compiling) the parsed result. What it doesn’t provide is any introspection or any way to transform one representation of the expression into another (except for the most simple string replacements).

Another thing to consider is how you represent the expression:

  • String in a specific grammar – this is most straightforward. When I write “1+2” you can imagine the semantics of such a grammar right away. You can also guess the type of the returned value (some number), just as you can for an expression like “a>30” (boolean). It’s easy to store in a DB, a file, anywhere. But it’s furthest from anything a computer can directly work with, although many frameworks offer a shortcut such as an eval(…) method.
  • Tree representing the expression – but hey, what does it look like? In JVM memory it’s obvious – there are objects of various AST nodes (or a ParseTree in ANTLR4). But how do you persist this? Serialize? Good – you may not need to read it, and although Java serialization is not very good, it may suffice for this. Or will it be a tree-like structure in a database? Then you need to process it somehow to form the tree in memory, but it may indeed be much easier than parsing a raw string. Or you may store it as a string after all – like XML, JSON, or anything else that supports tree serialization in some form. Then you are parsing, but not with your own custom parser – you’re using well-known tools for ubiquitous formats and just transforming the input into a tree. You’re not using any custom grammar – maybe just a custom XML/JSON schema or serialization mechanism. In any case, the core representation is the tree – and the tree only.

Just evaluate my string, please!

Here you want your expression to be just “executed”. If you want to do it many times for the same expression, then compilation (if provided) typically makes it faster. Let’s see how you can do this easily with Java’s ScriptEngine, aka JSR 223 (use your favourite IDE(A) to help you with imports ;-)):

String expr = "1+1";
ScriptEngine scriptEngine = new ScriptEngineManager().getEngineByName("ecmascript");
CompiledScript compiledScript = ((Compilable) scriptEngine).compile(expr);
Object result = compiledScript.eval();
System.out.println("result = " + result + " (" + result.getClass().getName() + ")");

Easy, isn’t it? And it is compiled! We used the built-in JavaScript engine that has been available in the JDK since 1.6 (Rhino in Java 6/7, then the famous Nashorn since Java 8).

But hey, something is missing – we don’t want the result 2 every time. Let’s try a different expression with some variables:

String expr = "a < b";
ScriptEngine scriptEngine = new ScriptEngineManager().getEngineByName("ecmascript");
CompiledScript compiledScript = ((Compilable) scriptEngine).compile(expr);
Bindings bindings = scriptEngine.createBindings();
bindings.put("a", 30);
bindings.put("b", 25);
Object result = compiledScript.eval(bindings);
System.out.println("result = " + result + " (" + result.getClass().getName() + ")");

Here we have an expression with variables; we create Bindings (which implements Map) and provide values for a and b. The Bindings are passed as a parameter to eval – and the result is naturally a Boolean (false here, because 30 < 25 does not hold).
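Reuse is where compilation pays off: the same CompiledScript can be evaluated repeatedly with fresh bindings. A minimal sketch along the lines of the snippets above (the class name is mine; the engine lookup may return null on JDKs that bundle no JavaScript engine):

```java
import javax.script.Bindings;
import javax.script.Compilable;
import javax.script.CompiledScript;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class ReuseCompiledScript {
    public static void main(String[] args) throws ScriptException {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("ecmascript");
        // compile once...
        CompiledScript compiled = ((Compilable) engine).compile("a < b");
        // ...evaluate many times with different inputs
        for (int a : new int[] {10, 20, 30}) {
            Bindings bindings = engine.createBindings();
            bindings.put("a", a);
            bindings.put("b", 25);
            System.out.println(a + " < 25 is " + compiled.eval(bindings));
        }
    }
}
```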

Other options for evaluation?

When I needed expression evaluation for configuring Java Simon’s callback mechanism, it was in the times of Java 5 – and no scripting engine. First I used JEval (it was hosted elsewhere back then, and I’m not sure whether today’s project is the same one), but when I moved Java Simon to a Maven build, JEval was lagging behind and I looked for another solution.

So I moved to MVEL, which was way too powerful for my needs – and the JAR was around 1 MB, too. But MVEL (MVFLEX Expression Language) is more than just an expression evaluator – it really is an expression language, like you know from JSP or Spring (SpEL). It traverses beans, can call methods, etc. If this is what you need, then look for this kind of beast (EL).

Consider also the extensibility of the solution. Will you need to add custom functions? These are all things already covered by existing libraries – all you have to do is choose. Personally, I didn’t need this kind of power and was happy with the script engine. It is also more powerful than I need, but at least it is already built in. (Although technically it’s just an API and your non-Oracle JDK may not provide any concrete engine.)
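Because JSR 223 is only an API, a defensive lookup costs little. This is a sketch of my own, not code from Java Simon: it falls back gracefully when no engine is available or the engine does not implement Compilable:

```java
import javax.script.Compilable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class EngineCheck {
    public static void main(String[] args) throws ScriptException {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("ecmascript");
        if (engine == null) {
            System.out.println("No JavaScript engine available in this JDK");
            return;
        }
        String expr = "1 + 1";
        Object result;
        if (engine instanceof Compilable) {
            // compile once if the engine supports it
            result = ((Compilable) engine).compile(expr).eval();
        } else {
            // otherwise fall back to direct interpretation
            result = engine.eval(expr);
        }
        System.out.println("result = " + result);
    }
}
```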

Expression string preprocessing

Whatever evaluator you use, it may happen that the supported grammar is not exactly what you want. Sometimes it’s about details. Consider writing expressions into an XML configuration – all those <, > and && are not very XML friendly. You may use a CDATA section (better) or character entities (ugly).

Or you may introduce some simple replacement preprocessing and support LT, GT and AND (and others) instead!

// code in main(...)
String expr = "a > 10 AND NOT (b == 20)";
expr = preprocessExpression(expr);
// ...the rest of the code follows as before

private static String preprocessExpression(String expr) {
    return expr
        .replaceAll(" AND ", " && ")
        .replaceAll(" OR ", " || ")
        .replaceAll(" NOT ", " !");
}

If you expect a lot of replacing for a lot of expressions, you may use precompiled Patterns; but if you preprocess each expression just once and then store it as a CompiledScript, the gain is probably not that big.
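For the case where the replacements do run often, the regexes can be compiled once up front. A sketch (class, field and method names are my own, not from Java Simon):

```java
import java.util.regex.Pattern;

public class ExpressionPreprocessor {
    // compiled once, reused for every expression
    private static final Pattern AND = Pattern.compile(" AND ");
    private static final Pattern OR = Pattern.compile(" OR ");
    private static final Pattern NOT = Pattern.compile(" NOT ");

    static String preprocess(String expr) {
        expr = AND.matcher(expr).replaceAll(" && ");
        expr = OR.matcher(expr).replaceAll(" || ");
        expr = NOT.matcher(expr).replaceAll(" !");
        return expr;
    }

    public static void main(String[] args) {
        // prints: a > 10 && !(b == 20)
        System.out.println(preprocess("a > 10 AND NOT (b == 20)"));
    }
}
```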

I used this strategy in Java Simon, where I compare ns measurements, and to support that I even provide shortcuts for seconds, millis and micros (us) – check the code here. Very simple replacements, but they can help a lot. Users might not even guess that there is JavaScript evaluation behind it.

How to validate used variables?

Imagine you want to allow only some variables in your expressions – which is reasonable. Using ScriptEngine, you know what you put into the Bindings; any other variable would fail anyway. How to check the expression in this case? Compilation alone is not enough, of course.

The easiest way to check the expression is to prepare bindings with arbitrary values (just the types must be right, of course) – and then let the expression evaluate with these bindings. This checks that everything clicks. There is a possible problem though: for expressions where not all parts must be evaluated (like a || b when a is true), the question is whether the names in the not-evaluated part are checked or not. With lazy evaluation some things may slip through – this definitely happens with the Nashorn engine, for instance.
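The described check can be sketched like this (a hypothetical helper of my own; the method and parameter names are not from any library):

```java
import java.util.Map;
import javax.script.Bindings;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class ExpressionValidator {

    // Tries to evaluate the expression with dummy values of the right types.
    // Returns false when evaluation fails, e.g. because of an unknown variable.
    // Mind lazy evaluation: names in short-circuited parts may still slip through.
    static boolean isValid(ScriptEngine engine, String expr, Map<String, ?> dummyValues) {
        try {
            Bindings bindings = engine.createBindings();
            bindings.putAll(dummyValues);
            engine.eval(expr, bindings);
            return true;
        } catch (ScriptException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("ecmascript");
        if (engine == null) {
            return; // no JavaScript engine bundled with this JDK
        }
        System.out.println(isValid(engine, "a < b", Map.of("a", 1, "b", 2)));
        System.out.println(isValid(engine, "a < undeclared", Map.of("a", 1)));
    }
}
```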

What you can do is somehow check all identifier-like “words” in your string – you can use a regex to extract them one by one. But it is cumbersome, of course. This is much more natural with parsed expressions stored in a syntax tree. But that one is for another post.
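For completeness, such a regex extraction could look like this sketch (the class and pattern are mine; reserved words of the grammar would still need filtering out):

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IdentifierExtractor {
    // matches identifier-like words, skips numeric literals
    private static final Pattern IDENTIFIER =
        Pattern.compile("\\b[A-Za-z_][A-Za-z0-9_]*\\b");

    static Set<String> identifiers(String expr) {
        Set<String> result = new LinkedHashSet<>();
        Matcher matcher = IDENTIFIER.matcher(expr);
        while (matcher.find()) {
            result.add(matcher.group());
        }
        return result;
    }

    public static void main(String[] args) {
        // prints: [a, b]
        System.out.println(identifiers("a > 10 && !(b == 20)"));
    }
}
```

Each extracted word can then be checked against the allowed variable list.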


If we don’t need to work with the parsed syntax tree, the options mentioned here are preferable to using some parser and then interpreting the expression represented as a tree. Especially when compilation is available, the result may be quite fast – depending, of course, on the cost of entering the runtime of the expression evaluator. It’s also mentally easy to understand – you want to evaluate the expression, and that’s what you see in the code.

In the next post we will look at the cases when you want a little bit more than just the evaluation.

Why Gradle doesn’t provide “provided”?

Honestly, I don’t know. I’ve been watching Gradle for around 3 years already, but except for primitive demos I didn’t have the courage to switch to it. And – believe it or not – the provided scope was the biggest practical obstacle in my case.

What is provided, anyway?

Ouch, now I got myself too. I know when to use it – or better said, I know with what kind of dependencies I use it. Use it with any dependency (mostly an API) that is provided (hence the name, I guess :-)) by the runtime platform where your artifact will run. A typical case is javax:javaee-api:7.0. You want to compile classes that use various Java EE APIs. This one is kind of a “javaee-all”, and you can find separate dependencies for particular JSRs. But why not make your life easier when you don’t pack this into your final artifact (WAR/EAR) anyway?

So it seems to be like compile (Maven’s default scope for dependencies), except that it should not end up in the WAR’s lib directory, right? I guess so, except that provided is not transitive, so you have to declare it again and again, while compile dependencies are taken from upstream projects.

BTW: This is why I like writing blog posts – I have to make it clear to myself (sometimes not for the first time, of course). Maven’s dependency scopes are nicely described here.

But Gradle has provided!

Without being strict about what Gradle is and what its plugins are: when you download Gradle, you can use this kind of scope – if you use the ‘war’ plugin, just like in this simple example. If you want to run it (and the other examples from this post), just try the following commands in git-bash (or adjust as necessary):

$ svn export <project URL>
$ cd gradle-provided
$ gradle -b build-war.gradle build

Works like a charm – but it’s a WAR! The good thing is you can now really check that the provided dependency is not in the built WAR file; only Guava sits there in the WEB-INF/lib directory. But often we just need JARs. Actually, when you modularize your project, you mostly work with JARs that are put together in a couple of final artifacts (WAR/EAR). That doesn’t mean you don’t need Java EE imports in these JARs – on the contrary.

So this providedCompile is dearly missed in Gradle’s java plugin. And we have to work around it.

Just Google it!

I tried. Too many results. Various results. Different solutions, different snippets. And nothing worked for me.

The main reason for my failures must have been the fact that I tried to apply various StackOverflow answers or blog advice to an existing project. I should have tried to create something super-simple first.

Recently I created my little “litterbin” project on GitHub. It contains any tests, demos or issue reproductions I need to share (mostly with my later self, or when I’m on a different computer). And today, finally, I tried to prove my latest “research” into the provided scope – you can check the various files using the vanilla approach or the propdeps plugins (read further). You can also “svn export” (download) the project as shown above and play with it.

My final result without using any fancy plugin is this:

apply plugin: 'maven'
apply plugin: 'java'
apply plugin: 'idea'

repositories {
    mavenCentral()
}

configurations {
    provided
}

sourceSets {
    main.compileClasspath += configurations.provided
    test.compileClasspath += configurations.provided
    test.runtimeClasspath += configurations.provided
}

// if you use 'idea' plugin, otherwise fails with: Could not find method idea() for arguments...
idea {
    module {
        /*
         * If you omit [ ] around the right side, it fails with:
         * Cannot change configuration ':provided' after it has been resolved.
         * This is due to Gradle 2.x using Groovy 2.3, which does not allow += for single element addition.
         */
        scopes.PROVIDED.plus += [configurations.provided]
        downloadJavadoc = true
        downloadSources = true
    }
}

dependencies {
    compile ''
    provided 'javax:javaee-api:7.0'
}

In the comments you can see the potential problems.

With a strictly contained proof-of-concept “project” I can finally be sure what works and what doesn’t. If it works here and doesn’t work when combined with something else, the problem is elsewhere (or in the interaction of various parts of the build). Before, I always tried to migrate some multi-module build from Maven, and although I tried to do it incrementally, it simply got over my head when I wanted to tackle provided dependencies.

Just use something pre-cooked!

If you want the provided scope you can also use something that simply gives it to you. The Spring Boot plugin does, for instance, but it may also add things you don’t want. In this StackOverflow answer it was suggested to use the propdeps plugin maintained by Spring. It adds just the scopes you may want – and nothing else. Let’s try it! I went to the page and copied the snippets – the build looked like this:

apply plugin: 'maven'
apply plugin: 'java'
apply plugin: 'idea'

repositories {
    mavenCentral()
}

buildscript {
    repositories {
        maven { url '' }
    }
    dependencies {
        classpath ''
    }
}

configure(allprojects) {
    apply plugin: 'propdeps'
    apply plugin: 'propdeps-maven'
    // following line causes Cannot change configuration ':provided' with Gradle 2.x (uses += without [ ] internally)
    apply plugin: 'propdeps-idea'
    apply plugin: 'propdeps-eclipse'
}

dependencies {
    compile ''
    provided 'javax:javaee-api:7.0'
}

As the added comment suggests, it wasn’t a complete success. Without the IDEA plugin parts, it worked. But the error with the IDEA parts was this:

Cannot change configuration ':provided' after it has been resolved.

You google and eventually find this discussion, where the key message by Peter Niederwieser (core Gradle developer) is:

Gradle 2 updated to Groovy 2.3, which no longer supports the use of ‘+=’ for adding a single element to a collection. So instead of ‘scopes.PROVIDED.plus += configurations.provided’ it’s now ‘scopes.PROVIDED.plus += [configurations.provided]’.

The funny part is that it is actually fixed in spring-projects/gradle-plugins version 0.0.7 – they just forgot to update the examples in the README. :-) So yeah, with 0.0.7 instead of 0.0.6 in the example, it works fine.

How can this stop you?

Maybe the provided scope is not that trivial. “Scope” is actually not the right word in the Gradle world, but my mentality and vocabulary are rooted in the Maven world after all these years. If provided were obvious and easy, they’d probably have resolved this never-ending story already. Now the issue is polluted with advocates for the scope (yeah, I didn’t resist either) and it’s difficult to understand what the problem is on the Gradle team’s side, except that it seems they’ve been ignoring it for a couple of years.

The original reporter claimed it doesn’t make sense to stay with Maven just for this – and he is right. He is also right that many developers don’t understand how Configuration works (true for me as well) and how it relates to ClassLoader (true again). I’ve read a Gradle book and many parts of the manual; the trouble is that my problems were always about migrating existing Maven builds. Not big ones, but definitely multi-module with provided dependencies. And it really is not easy from this position.

I successfully used Gradle for one-off projects, demos, etc. Every time I try to learn something new about it. I acknowledge that building is a hard domain. Gradle has good documentation, but that doesn’t mean it’s always easy to find the right recipe. I never worked in a team where someone was dedicated to this task, and I was mostly (and sadly) the most build-savvy member, with tons of other stuff on my hands. (Sorry for the rant. It springs from the fact that builds are considered a secondary matter, or worse. And there are too many primary concerns anyway.)

When one doesn’t know how to get the “provided” scope – which was available “for free” in Maven – any obstacle seems much bigger than it really is. There is simply too much we don’t know when we tackle Gradle for the first time. Nobody tells you “don’t use propdeps-plugin:0.0.6, try 0.0.7 instead”.

Or you get a Gradle-like message “Cannot change configuration ‘:provided’ after it has been resolved”, which is probably perfectly OK from Gradle’s point of view – it nicely covers the underlying technology. But it also hides the root cause: Groovy 2.3 simply doesn’t support += without wrapping the right side into […] – and even that only in some cases:

// correct line, but fails without [ ]
idea { module { scopes.PROVIDED.plus += [configurations.provided] }}

Even --stacktrace --debug will not help you find the root cause. Maybe if you debugged the build in an IDE, but I’m definitely not there yet (not with Gradle; I do debug Maven builds sometimes).

I hope you can now appreciate how subtle the whole problem is and how much difficulty it may cause.

provided or providedCompile?

And that is another trick – people call it differently. “providedCompile” is probably more Gradle-like (and available with the war plugin); “provided” is what we are used to from Maven. Now imagine you experiment with various solutions for introducing this kind of scope – that is, you test different plugins. And they all call it differently. Every time, you have to go to your dependency list and fix it there, or wonder why it doesn’t work when you forget. It just adds to the chaos when you are already navigating unknown territory.

And it also nicely underlines how sorely this is missed in the java plugin out of the box. Because “it is supported in the ‘war’ plugin” is not a satisfactory answer. I want to use Java EE imports in my JAR that may later be put into a WAR. Or I may run it in an embedded container declared with different dependencies. “This mostly affects only library developers” is not true either. Sure, it affects my Java Simon (which is a library), but I used the provided scope for JAR modules on every single project in my past.

Now imagine this is your first battle with Gradle (which it more or less was in my case). How should I be confident about releasing to Maven Central? It reportedly works, but then again, for experienced Gradle users everything is easy…


During my research I also found the article Provided Scope in Gradle. I don’t know how accurate it is for Gradle 2.x, or whether the Android guys haven’t solved it already somehow. The author added nice pictures and also started with the “What is provided anyway?” question (I swear it was a natural choice for my first subheader too :-)). And again it just shows how much more complicated Gradle builds are when this is not available out of the box.

It doesn’t mean I don’t want to get to a Gradle build. I don’t like Maven’s rigidity – although I appreciate the conventions and I’ll follow them in my Gradle builds too. But sometimes you just want to switch something from false to true – and it takes 10 lines of XML. You may say, meh! But it means you see less on the screen, builds are not readable, etc. And we already agreed, I hope, that building is a (potentially) complex domain. Readability is a must.

Sure, there is something like polyglot Maven, but the issue with the lack of flexibility remains. I’m absolutely convinced that Gradle is the way to go. I tried it for simple things and I liked it, and I have no doubt I’ll learn it well enough to master bigger builds too.

Hopefully, provided will not be a problem anymore. :-)

JPA – modularity denied

Here we are for another story about JPA (Java Persistence API) and its problems. But before we start splitting our persistence unit into multiple JARs, let’s see – or rather hear – how much it is denied. I can’t even think about this word normally anymore after I’ve heard the sound. :-)

Second thing to know before we go on – we will use Spring. If your application is pure Java EE, this post will not work “out of the box” for you. It can still help you by summarizing the problem and showing some options.

Everybody knows layers

I decided to modularize our application because, since reading the Java Application Architecture book, I couldn’t sleep that well with the typical mega-JARs containing all the classes at a particular architecture level. Yeah, we modularize, but only by layers. That is, we typically have one JAR with all the entity classes; some call it “domain”, some just “jpa-entities”, whatever… (not that I promote ad-lib names).

In our case we have @Service classes in a different JAR (but yeah, all of them in one) – typically using DAOs for groups of entities. The complete stack actually goes from a REST resource class (again in a separate JAR, using Jersey or Spring MVC), which uses a @Service, which in turn talks to DAOs (marked as @Repository, although they are not repositories in the pure sense). Complex logic is pushed to specific @Components somewhere under the service layer. It’s not DDD, but at least the dependencies flow nicely from top down.

Components based on features

But how about dependencies between parts of the system at the same level? Our system has a lot of entity classes: some are pure business (Clients, their Transactions, financial Instruments), some are rather infrastructural (meta model, localization, audit trail). Why can’t these be separated? Most of the stuff depends on the meta model, localization is quite independent in our case, the audit trail needs the meta model and the permission module (containing Users)… It all clicks when one thinks about it – and it is more or less in line with modularity based on features, not on technology or layers. Sure, we can still use layer separation and have permission-persistence and permission-service as well.

Actually, this is quite a recurring question: Should we base packages on features or on layer/technology/pattern? From the sources I’ve read (though I might have read what I wanted to :-)) it seems a consensus was reached – start by feature – which can be part of your business domain. If stuff gets big, you can split it into layers too.

(If you read the linked StackOverflow page, you might have noticed, that it links another topic – I found JPA, or alike, don’t encourage DAO pattern – I don’t think it’s just an accident.)

Multiple JARs with JPA entities

So I carefully try to put different entity classes into different JAR modules. Sure, I could just repackage them in the same JAR and check how tangled they are with Sonar, but it is recommended to enforce the separation and to make dependencies explicit (not only in the Java Application Architecture book). My experience is not as rich as that of the experts writing books, but it is much richer compared to people not reading any books at all – and, gosh, how many of those there are (both books and people not reading them)! And this experience confirms it quite clearly – things must be enforced.

And here comes the problem when your persistence is based on JPA, because JPA clearly wasn’t designed for a single persistence unit spread across multiple persistence.xml files. So what are these problems, actually?

  1. How to distribute persistence.xml across these JARs? Do we even have to?
  2. What classes need to be mentioned where? E.g., we need to mention classes from upstream JARs in persistence.xml if they are used in relations (breaking the DRY principle).
  3. When we have multiple persistence.xml files, how do we merge them in our persistence unit configuration?
  4. What about the configuration in the persistence XML? Which properties are used from which file? (Little spoiler: you just don’t know reliably!) Where to put them so you don’t have to repeat yourself again?
  5. We use EclipseLink – how do we use static weaving for all these modules? How about a module with only an abstract mapped superclass (some dao-common module)?

That’s quite a lot of problems for an age when modularity is so often mentioned. And for a technology from the “enterprise” stack. And they are mostly phrased as questions – because the answers are not readily available.

Distributing persistence.xml – do I need it?

This one is difficult and may depend on the provider you use and the way you use it. We use EclipseLink and its static weaving, which requires persistence.xml. Sure, we could try to keep it together in some neutral module (or on any path, as it can be configured for the weaving plugin), but that kind of goes against the modularity quest. What options do we have?

  • I can create a union persistence.xml in a module that depends on all the needed JARs. This would be OK if I had just one such module – typically some downstream module like a WAR or runnable JAR. But we have many. If I made a persistence.xml for each, they would contain a lot of repetition. And I’d be referencing a downstream resource, which is ugly!
  • We can have a dummy upstream module, or a path outside any module, with the union persistence.xml. This keeps things simple, but makes it more difficult to develop modules independently, maybe even with different teams.
  • Keep persistence.xml in the JAR with the related classes. This seems best from the modularity point of view, but it means we need to merge multiple persistence.xml files when the persistence unit starts.
  • Or can we have a different persistence unit for each persistence.xml? This is OK if they truly are different databases (different JDBC URLs), otherwise it doesn’t make sense. In our case we have a rich DB and any module can see the part it is interested in – that is, entities from the JARs it has on the classpath. If you have data in different databases already, you’re probably sporting microservices anyway. :-)

I went for the third option – especially because EclipseLink’s weaving plugin likes it and I didn’t want to redirect to a non-standard path for persistence.xml – but it also seems to be the right logical way. However, there is nothing like a dependency between persistence.xml files. So if you have b.jar that uses a.jar, and there is an entity class B in b.jar that has a @ManyToOne to entity A from a.jar, you have to mention the A class in the persistence.xml in b.jar. Yes, the class is already mentioned in a.jar, of course. Here, clearly, the engineers of JPA didn’t even think about the possibility of using multiple JARs in a truly modular way.
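A sketch of what the persistence.xml in b.jar then has to look like (the unit name and class names are illustrative, not from our project):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="myapp">
        <class>com.example.b.B</class>
        <!-- A lives in a.jar and is listed in a.jar's persistence.xml,
             but must be repeated here because B references it -->
        <class>com.example.a.A</class>
    </persistence-unit>
</persistence>
```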

In any case – this works, compiles, weaves your classes during the build – and more or less answers questions 1 and 2 from our problem list. And now…

It doesn’t start anyway

When you have a single persistence.xml, it will be found as a unique resource, typically in META-INF/persistence.xml – in any JAR, actually. But when you have more of them, they don’t all get picked up and merged magically – and the application fails during startup. We need to merge all those persistence.xml files during the initialization of our persistence unit. Now we’re tackling questions 3 and 4 at once, for they are linked.

To merge all the configuration XMLs into one unit, you can use this configuration for the PersistenceUnitManager in Spring (clearly, using MergingPersistenceUnitManager is the key):

@Bean
public PersistenceUnitManager persistenceUnitManager(DataSource dataSource) {
    MergingPersistenceUnitManager persistenceUnitManager =
        new MergingPersistenceUnitManager();
    persistenceUnitManager.setDefaultDataSource(dataSource);
    // default persistence.xml location is OK, goes through all classpath*
    return persistenceUnitManager;
}

But before I unveil the whole configuration, we should talk about the configuration that was in the original single persistence.xml – which looked something like this:

    <property name="eclipselink.weaving" value="static"/>
    <property name="eclipselink.allow-zero-id" value="true"/>
    <!-- Without this there were other corner cases when a field change was ignored.
         This can be worked around by calling the setter, but that sucks. -->
    <property name="eclipselink.weaving.changetracking" value="false"/>

The biggest question here is: what is used during the build (e.g. by static weaving) and what can be put into runtime configuration somewhere else? Why somewhere else? Because we don’t want to repeat these properties in all the XMLs.

But before finishing the programmatic configuration, we should take a little detour to shared-cache-mode, which demonstrated the problem with merging persistence.xml files in the most bizarre way.

Shared cache mode

Firstly, if you are serious about JPA, I cannot recommend enough one excellent and comprehensive book that answered tons of my questions – often before I even asked them. I’m talking about Pro JPA 2, of course. Like, seriously, go and read it unless you are super-solid in JPA already.

We wanted to enable cached entities selectively (to ensure that @Cacheable annotations have any effect). But I made a big mistake when I created another persistence.xml file – I forgot to mention shared-cache-mode there. My persistence unit picked up both XMLs (using MergingPersistenceUnitManager), but my caching went completely nuts. It cached more than expected and I was totally confused. The trouble here is that persistence.xml files don’t really get merged. The lists of classes in them do, but the configurations do not. Somehow my second persistence XML became dominant (one always does!), and because there was no shared-cache-mode specified, it used the default – whatever EclipseLink thinks is best. No blame there, just another manifestation that the JPA people didn’t even think about these setup scenarios.

It’s actually the other way around – you can have multiple persistence units in one XML; that’s a piece of cake.

If you really want hard evidence of how things are in your setup, put a breakpoint somewhere where you can reach your EntityManagerFactory, and when it stops there, dig deeper to find out what your cache mode is. Or anything else – you can check the list of known entity classes, JPA properties… anything, really. And it’s much faster than messing around and guessing.

In the picture above you can see that now I can be sure what shared cache mode I use. You can also see which XML file was used – in our case it was the one from the meta-model module (JAR), so this one would dominate. Luckily, I don’t rely on this anymore, not for runtime configuration at least…

Putting the Spring configuration together

Now we’re ready to wrap up our configuration and move some stuff from persistence.xml into Spring configuration – in my case it’s Java-based configuration (XML works too, of course).

Most of our properties were related to EclipseLink. I read their weaving manual, but I still didn’t understand what works when and how. I had to debug some of the stuff to be really sure.

It seems that eclipselink.weaving is the crucial property namespace that should stay in your persistence.xml, because it gets used by the plugin performing the static weaving. I debugged the Maven build and the plugin definitely uses the eclipselink.weaving.changetracking property value (we set it to false, which is not the default). Funny enough, it doesn’t need eclipselink.weaving itself, because running the plugin implies you wish for static weaving. During startup it does get picked up though, so EclipseLink knows it can treat classes as statically woven – which means it can be pushed into the programmatic configuration too.
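Based on the above, the minimal persistence.xml kept around for the build could carry only the weaving-related properties – a sketch (unit name illustrative):

```xml
<persistence-unit name="demo-unit">
  <properties>
    <!-- used by the static weaving Maven plugin during the build -->
    <property name="eclipselink.weaving.changetracking" value="false"/>
    <!-- picked up at startup so EclipseLink treats classes as statically woven;
         can alternatively move to the programmatic Spring configuration -->
    <property name="eclipselink.weaving" value="static"/>
  </properties>
</persistence-unit>
```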

The rest of the properties (and shared cache mode) are clearly used at the startup time. Spring configuration may then look like this:

@Bean public DataSource dataSource(...) { /* as usual */ }

@Bean public JpaVendorAdapter jpaVendorAdapter() {
    EclipseLinkJpaVendorAdapter jpaVendorAdapter = new EclipseLinkJpaVendorAdapter();
    jpaVendorAdapter.setDatabase(
        Database.valueOf(env.getProperty("jpa.dbPlatform", "SQL_SERVER")));
    return jpaVendorAdapter;
}

@Bean public PersistenceUnitManager persistenceUnitManager(DataSource dataSource) {
    MergingPersistenceUnitManager persistenceUnitManager =
        new MergingPersistenceUnitManager();
    // default persistence.xml location is OK, goes through all classpath*
    persistenceUnitManager.setDefaultDataSource(dataSource);
    persistenceUnitManager.setSharedCacheMode(SharedCacheMode.ENABLE_SELECTIVE);
    return persistenceUnitManager;
}

@Bean public FactoryBean<EntityManagerFactory> entityManagerFactory(
    PersistenceUnitManager persistenceUnitManager, JpaVendorAdapter jpaVendorAdapter)
{
    LocalContainerEntityManagerFactoryBean emfFactoryBean =
        new LocalContainerEntityManagerFactoryBean();
    emfFactoryBean.setPersistenceUnitManager(persistenceUnitManager);
    emfFactoryBean.setJpaVendorAdapter(jpaVendorAdapter);

    Properties jpaProperties = new Properties();
    jpaProperties.setProperty("eclipselink.weaving", "static");
    jpaProperties.setProperty("eclipselink.allow-zero-id",
        env.getProperty("eclipselink.allow-zero-id", "true"));
    jpaProperties.setProperty("eclipselink.logging.parameters",
        env.getProperty("eclipselink.logging.parameters", "true"));
    emfFactoryBean.setJpaProperties(jpaProperties);
    return emfFactoryBean;
}
Clearly, we can set the database platform, shared cache mode and all runtime-relevant properties programmatically – and we can do it just once. This is not a problem for a single persistence.xml, but in any case it offers better control. You can now use Spring’s @Autowired private Environment env; and override whatever you want with property files or even -D JVM arguments – and still fall back to default values – just as we do for the database property of the JpaVendorAdapter. Or you can use SpEL. This is flexibility persistence.xml simply cannot provide.
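The fallback behaviour itself is easy to try with plain system properties – a minimal, hypothetical sketch (Spring’s env.getProperty(key, default) works analogously for -D arguments and property files):

```java
// Hypothetical demo class - not from the project. Shows the override-with-fallback
// idea behind env.getProperty("jpa.dbPlatform", "SQL_SERVER").
public class PropertyFallbackDemo {

    // returns the -Djpa.dbPlatform value when given, otherwise the default
    static String dbPlatform() {
        return System.getProperty("jpa.dbPlatform", "SQL_SERVER");
    }

    public static void main(String[] args) {
        System.out.println(dbPlatform()); // default unless -Djpa.dbPlatform was passed
        System.setProperty("jpa.dbPlatform", "POSTGRESQL");
        System.out.println(dbPlatform()); // now the override wins
    }
}
```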

And of course, all the things mentioned in the configuration can now be removed from all your persistence.xml files.

I’d love to get rid of eclipselink.weaving.changetracking in the XML too, but I don’t see any way to provide this as a Maven plugin configuration option, which we have neatly unified in our parent POM. That would also eliminate some repetition.

Common DAO classes in separate JAR

This one (question 5 from our list) is no problem after all the previous work, just a nuisance. EclipseLink refuses to weave your base class regardless of @MappedSuperclass usage. But as mentioned in one SO question/answer, just add a dummy concrete @Entity class and you’re done. You never use it, so it is no problem at all. And you can vote for this bug.
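A sketch of that workaround – the class and attribute names here are made up for illustration, not from the project:

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.MappedSuperclass;

// the shared base class packed in the common DAO JAR
@MappedSuperclass
public abstract class BaseEntity {
    @Id @GeneratedValue
    protected Long id;
}

// never used by the application - it exists only so the static weaver
// processes the JAR containing the mapped superclass
@Entity
class WeavingDummyEntity extends BaseEntity {
}
```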

This is probably not a problem for load-time weaving (I haven’t tried it), or for Hibernate. I never had to solve any weaving problem with Hibernate, but on the other hand the current project pushed my JPA limits further, so maybe I would learn something about Hibernate too (if it was willing to work for us in the first place).

Any Querydsl problems?

Ah, I forgot to mention my favourite over-JPA solution! Were there any Querydsl related problems? Well, not really. The only hiccup I got was a NullPointerException when I moved some entity base classes and my subclasses were not compilable. Before javac could print a reasonable error, Querydsl went in and gave up without good diagnostics on this most unexpected case. :-) I filed an issue for this, but after I fixed my import statements for the superclasses, everything was OK again.


Let’s do it in bullets, shall we?

  • JPA clearly wasn’t designed with modularity in mind – especially not when modules form a single persistence unit, which is perfectly legitimate usage.
  • It is possible to distribute persistence classes into multiple JARs, and then:
    • You can go either with a single union persistence.xml, which can be downstream or upstream – this depends on whether you need it only at runtime or during build too.
    • I believe it is more proper to pack a partial persistence.xml in each JAR, especially if you need it during build. Unfortunately, there is no escape from repeating some upstream classes in the module again, just because they are referenced in relations (a typical culprit of “I don’t understand how this is not an entity, when it clearly is!”).
  • If you have multiple persistence.xml files, it is possible to merge them using Spring’s MergingPersistenceUnitManager. I don’t know if you can use it for non-Spring applications, but I saw this idea reimplemented and it wasn’t that hard. (If I had to reimplement it, I’d try to merge the configuration part too!)
  • When you’re merging persistence.xml files, it is recommended to minimize configuration in them, so it doesn’t have to be repeated. E.g., for EclipseLink we leave only the stuff necessary for build-time static weaving; the rest is set programmatically in our Spring @Configuration class.

There are still some open questions, but I think they lead nowhere. Can I use multiple persistence units with a single data source? That way each persistence.xml could be a separate unit. But I doubt relationships would work across these, and the same goes for transactions (without XA, that is). If you think multiple units are a relevant solution, let me know, please.

I hope this helps if you’re struggling with the noble quest of modularity. Don’t be shy to share your experiences in the comments too! No registration required. ;-)

Siberia Elite Prism – my last buy from SteelSeries

Note: See the comments for a solution to my problem. Silly, yet simple and effective.

And it’s actually also my first one. I wanted to replace my older headset and I chose the SteelSeries Siberia Elite Prism – the white one, but that doesn’t really matter. It cost me 160 EUR and at first I felt excited. OK, so you actually buy a headset and also pay for fancy LED lights on it. Reportedly up to 16M colours, which obviously is b/s, because the fact that you can control something with 24 bits doesn’t mean there really are 16M colours. But you don’t see them when you play, do you?

Then you also pay for a funny LED light on the mic that tells you when it’s muted. Some say it should be the other way around, but as I expect the mic to be on normally, I think it makes more sense to indicate the muted state. But both these LED thingies work only when you drive the headset through their USB audio card – which, conveniently, is provided. And here we come to my problems.

Their software sucks

It’s called SteelSeries Engine 3 and it caused crashes related to audio playback on my Lenovo notebook. Actually, the whole system went completely dead – except that I could send it to sleep with the power button, after which it awoke to a locked screen and ran for 10 more minutes or so if you listened to the audio through their card. On the desktop there was no problem – luckily again, because there I had disabled the power button and I’d just have to hard-reset it. It seems that it worked better without Engine started. Or maybe it had to be started as administrator (why doesn’t software that requires it say so? How am I supposed to know?)… or maybe it was the driver.

After some time I managed to listen to music without interruptions, but I don’t know how or why. And because I didn’t use it that much on the notebook, I can’t elaborate on the problem. Let’s go to my desktop then.

I want to plug both headset and speakers

And you can! Their audio card offers a special connector for the headset. This – obviously – makes the headset kind of useless if the cable breaks after the warranty. I can solder a new jack onto an audio cable, but here you’re out of luck. At this price level a removable, replaceable cable might be expected.

But their audio card also contains audio jacks for a microphone and speakers. Great! So I thought. My setup is plain silly simple – I use my headphones and my speakers! Yes, sometimes even both. When I have a Skype call I leave the speakers on a bit for the kids to listen to what grandma is saying, and I use the headset. But to my shock, grandma couldn’t hear me! What the… we tried this particular headset already! The microphone is detected, it just doesn’t pick up any sound.

After some messing around I disconnected the speakers and voila – the microphone works…

They really designed it this way?!

If it was a common jack for speakers and mic (TRRS), I’d be inclined to understand. But how hard can it be to split the sound? Not at all – and they do it! Except they turn off the microphone, which is completely unrelated to the output path.

I thought it silly, so I contacted their support. They responded within two days: they could reproduce the issue (half the solution, right? :-)) and they would ask the technical team. I expressed my patience and waited.

We solve tickets, not problems

After seven days my ticket got closed. I’m just a customer and no expert on their Zendesk setup, so I was rather surprised. “Hey, c’mon, you didn’t resolve my problem! I want to know what is happening!” I opened a follow-up expressing my dissatisfaction at being treated this way. These support people, or anyone who sets up the workflow, absolutely ignore how things really work. They want to get rid of the problem, get rid of your ticket (and maybe even you…). For them, the ticket is the problem, not the real cause of the problem.

I know this mentality because I got familiar with it in one of my previous employments. KPIs are set around how long you have the problem (read “ticket”) open, you get rid of the problem (read “ticket”), and the customer (even one from the same company) has a couple of days to express their satisfaction. But how does this solve problems that occur when I do something I do once a month? My previous ticket is closed, I didn’t express my satisfaction, I open a new one, they can’t reproduce it, the ticket gets closed and the cycle repeats.

Actually it’s not only about the things I alone discovered and felt I needed to report whatever the cost (although one of my mottos is “if you don’t tell, nobody can care”). There were notorious problems everybody knew about, and the company was searching for ways – heck, processes even! – to deal with them. “We’re good at resolving incidents, but we should get better at solving the problems.”

Like, really? Isn’t starting to do something about the problems good enough? Why are we creating a meta-problem (that is, a new process for it) instead of trying some involved hard work? Maybe the problem would be resolved within hours. But I know, I know… first we have to think through all the KPIs. How to measure the productivity of this problem solving. Oh, c’mon…

It really is designed that way

SteelSeries support came back to me within two days after I opened the follow-up ticket. And I learned that they are sorry for the automatic ticket closing (maybe other people don’t care, I guess) and that the hardware of their USB audio card is designed this way. Yes, even though it doesn’t make any sense.

So the solution is simple – if I’m receiving a Skype call, I have to crawl under the desk quickly, disconnect the speaker jack, get back up – and I can chat. Easy. What are the alternatives? Let’s say I put the USB sound card on the desk, so I can easily pull out the jack within seconds. Actually – that’s how I have it now, so I can share the experience. You may expect some inconvenience – “yeah, the guy needs to lead the cables the way he didn’t plan to” – but there’s more to it than that.

Putting USB audio card on the extension

You need a USB extension cable. And that may be a problem. I don’t have any ultra-high-quality shielded USB extension cable, and I don’t know how much it would help anyway. The cable I used brought me back to my student times, when any mouse movement created a buzz in the speakers whose cable ran alongside the mouse’s USB cable. Here I use a much better cable for the speakers, but the buzz somehow gets stronger as I add more USB length between the USB audio card and the computer. Funny enough, it does not translate to the headset.

Another problem is with the jack in the USB card. It doesn’t feel very robust, really. You touch the cable and it crackles and creates noise. When you don’t move the card and just bend the cable, it’s alright (it’s a new cable and I will not bend it any more, at least not on purpose :-)), so the problem really is inside the USB card. Jacks on cards are susceptible to this, and if I could, I wouldn’t touch it. But here I have to plug/unplug it regularly. We’ll see how long the audio card lasts.

But in any case, the noise in the speakers is my biggest trouble now. I have to turn the volume all the way down and live with funny noises between songs. And yes, it really is that annoying and that audible.

Or just use jacks and soundcard in the computer!

And that is actually my last backup option. I can use the provided jack converter, split the sound into headset and speakers with a hardware Y splitter, and I’m done. That renders the USB card useless – it will be another fancy thing in the box I didn’t need. With this solution you may miss: 1) colours on the headphones (probably not), 2) the muted LED indication on the mic (that one is handy but not a show stopper), 3) noise cancelling for the mic that works in tandem with the card. This last thing sounds useful, although I don’t know how well it actually works.

Reportedly the mic muting and volume control rings on the headset both work even without the USB sound card. And both are actually not just cool, but also practical.

Anything positive?

It’s really hard to say something positive now. I don’t know how much the headset and its sound quality are worth to me. If Windows could route the sound through two sound cards, my speaker setup would be saved. But Windows sound routing is incredibly inflexible – as I found out when I wanted something as primitive as stereo reversing, because my cable setup forces me to swap my active monitors. Now I switch left and right using a cable with a jack on one side and RCA connectors on the other – those I can swap easily. But back to the headset.

The sound is definitely better than any headphones I had at home until now – but then, none of them cost more than 40 EUR or so.

It’s not like the whole headset is completely useless, but I’m frustrated when someone creates something fancy with such an incredibly stupid design flaw. Am I the only one who wants to drive both speakers and headphones from one sound card? The additional jack absolutely begs me to do it this way – and it works! Just… it disables the microphone.

I’d say 1/5 stars would be a harsh and unfair rating (although I feel like giving it), but 2/5 is just spot on when I consider the software quality, the useless fancy things and this design bug. If you check other people’s problems with SteelSeries, the pattern is very similar. Sometimes it just doesn’t work for you (I mean altogether, not like in my case) and support can’t help you. I know it just happens, PCs vary, etc. But the more they complicate their products (and 16M colours on your ears really are a waste of effort), the more problems one can expect.

I don’t know yet what my next headset will be, but it will not be SteelSeries for sure. Though I hope this one will last reasonably long. And I hope I’ll find no more design problems.

IntelliJ IDEA – subscription renewed

It has been over 16 months now since JetBrains presented their IntelliJ IDEA Personal Licensing Changes. The discussion under the post speaks for itself – there was hardly anybody who really liked it. Before, you paid 100% when you bought it the first time (not to mention the sales they offered ;-)) and 50% anytime you bought an upgrade. That means you could skip a major version or two (roughly a year each) and then buy the one you liked/needed for the 50% upgrade price.

Now you buy it the first time for 100% and every next year you pay 50% without worrying anymore. Or you don’t renew your subscription and buy next year for 75% as a returning customer. That is like 50% plus another 50% of that. Long story short – version skipping is now more expensive, whatever your reasons for skipping are.

New model is not all bad…

One good point is that nobody can now complain that they bought version X and the next day X+1 came out – with an active subscription you get X and the upcoming major version X+1 for the same price. What can still happen is that the next major version is released the day after your subscription ran out. Which is more or less the same problem actually, except that now you can make the decision that costs you money with more information (before, you had to be an oracle).

Again – long story short – if all customers stayed and all subscribed, JetBrains would have more money for their (and our favourite!) IDE. I guess there are ever more features, a broader scope of problems, and whatnot. The price difference for every skipped version would be 50 € here in Slovakia. Nothing terrible, actually.

JetBrains’ defence of this step was also based on the possibility to release more often, even major versions. With IDEA 13 coming on Dec 3rd, 2013 and IDEA 14 on Nov 5th, 2014, we can’t say they lied, but it’s far from radical. And the question is whether it shows in reality, not just on the splash screen.

…but I can’t see any big change

So that’s how I feel. I more or less (rather less, actually) agreed to a continuing partnership with the producer of my favourite IDE. It costs a little bit more, obviously there are no sales you can speculate on, etc. Business. But then, it’s not really anything that would ruin me, and it’s worth the productivity you get, especially when you’re used to it. There is still the Community Edition, if that’s all you need. And man, if you need just Java, version control and build tool support, it’s just incredible.

I wasn’t sure what to imagine regarding the potentially faster release cycle, and we’ve got just that – nothing radical, no harm done. Some versions are better, some worse; the fresh 14.1 definitely needs some refinement, as it sometimes stops checking files and finding usages altogether, but it restarts quickly and I hope it will be fixed soon.

What I miss

If I could channel IDEA developers’ energy into any particular area, it would be fixing already existing YouTrack issues in general. I was rather a vigorous reporter – sometimes even a successful one. (Did you know you can copy the error message from the status bar by clicking on it with the right mouse button? How handy is that when you want to paste it into a bug report!) But there are open issues that are probably obsolete already; some cleanup would be great. “Forgotten” issues are no good.

I remember how long it took to bring reasonable javadoc formatting into IDEA – and it still lacks here and there, although it was postponed for one or two major versions. These are the things where I’d like to see more effort. But I understand there are customers waiting for support of their favourite framework as well.

Final words

So that’s it. Not a big change in the price, especially if IDEA is your primary axe, or whatever you like as a metaphor for a sharp tool (of course we can talk about tens of percent, but really…). Perceived quality is good, although varying – like anytime before. No tens-of-percent change there. :-) But anytime I see my colleagues struggling with something in NetBeans (“you can’t run a normal main method from test sources?!”) or Eclipse (are SVN and Maven working normally already?), I know that IDEA is still worth it. Although some people should learn their IDEs in the first place, whatever they use. Sometimes what I see is like a woodcutter beating the tree with the axe instead of cutting it – since we used that metaphor before. But that’s beyond the scope of this post.

Jetty hardening

Hardening may be a bit of a stretch here, so before you spend your time with the post, I’ll tell you what we’re going to do:

  • Disable directory browsing (definitely a security hole) in the web application context.
  • Disable default Jetty error messages – both in the application context and out of it (if you’re not running the application in the root context).
  • And add a little shutdown hook that helps your application lifecycle in case of unexpected process termination (covers break/ctrl+C, not kill -9, obviously :-)).

This all applies to Jetty 9.2.x. I’m pretty sure these things change slightly between major Jetty versions, so you may need to adjust it for versions 8 or 7.

This is a compilation of some research on the internet and actual Jetty debugging, applied to our Spring-based web application run under Jetty (and wrapped with Launch4J). If you know how to do things better, I’ll gladly hear about it in the comments, of course. It is also a loose continuation of our previous parts:

So, let’s see some code!

The result

OK, maybe not the best story-telling, but let’s see the actual result here first. I also left in the lines with logging – we can argue about log levels, but that’s not the point here. It’s a longer listing with all the features from the previous post, with today’s additions mixed in:

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.ProtectionDomain;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ErrorPageErrorHandler;
import org.eclipse.jetty.webapp.WebAppContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Port is customizable using -Dport=8181 - default is 8080.
 * Application context is customizable using -Dcontext=xxx - default is "". Initial slash is
 * always prepended internally after any leading or trailing slashes (/) are discarded from
 * the provided value, so both -Dcontext= and -Dcontext=/ define root context.
 */
public class RestMain {

    private static final Logger log = LoggerFactory.getLogger(RestMain.class);

    public static void main(String[] args) throws Exception {
        int port = Integer.parseInt(System.getProperty("port", "8080"));
        String contextPath = System.getProperty("context", "");

        log.debug("Going to start web server on port {} with context path {}", port, contextPath);
        Server server = new Server(port);

        WebAppContext context = new WebAppContext();
        context.setContextPath('/' + contextPath);
        context.setInitParameter("org.eclipse.jetty.servlet.Default.dirAllowed", "false");
        context.setErrorHandler(new ErrorHandler());

        ProtectionDomain protectionDomain = RestMain.class.getProtectionDomain();
        String warPath = protectionDomain.getCodeSource().getLocation().toExternalForm();
        if (warPath.toLowerCase().endsWith(".exe")) {
            warPath = prepareWarPathFromExe(warPath, "WEB-INF");
        } // else we assume dir or jar/war
        context.setWar(warPath);
        log.debug("WebAppContext set for server with location {}", warPath);

        // default error handler for resources out of "context" scope
        server.addBean(new ErrorHandler());

        server.setHandler(context);
        server.start();
        addJettyShutdownHook(server);

        if (!context.isAvailable()) {
            //noinspection ThrowableResultOfMethodCallIgnored
            log.error("Application did NOT start properly: {}",
                context.getUnavailableException().toString());
        } else if (context.getWebInf() == null) {
            log.error("Application was NOT FOUND");
        } else {
            log.debug("Application READY");
        }
        server.join();
        log.debug("Exiting application");
    }

    private static void addJettyShutdownHook(final Server server) {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                try {
                    server.stop();
                    log.debug("Exiting application (shutdown hook)");
                } catch (Exception e) {
                    log.warn("Exception during server stop in shutdown hook", e);
                }
            }
        });
    }

    private static String prepareWarPathFromExe(String pathToExe, String... prefixes) throws IOException {
        final Path tmpWarDir = Files.createTempDirectory("restmod");
        final String warPath = tmpWarDir.toString();
        log.debug("Extracting WAR from EXE into {}, prefixes {}", warPath, prefixes);

        // WarExploder (from the previous post) extracts the WAR content into warPath
        WarExploder warExploder = new WarExploder(pathToExe, warPath);

        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                try {
                    deleteRecursive(tmpWarDir);
                    log.debug("Temporary WAR directory deleted");
                } catch (IOException e) {
                    log.warn("Problems with deleting temporary directory", e);
                }
            }
        });
        return warPath;
    }

    private static void deleteRecursive(Path dir) throws IOException {
        if (Files.isDirectory(dir)) {
            try (DirectoryStream<Path> directoryStream = Files.newDirectoryStream(dir)) {
                for (Path path : directoryStream) {
                    deleteRecursive(path);
                }
            }
        }
        Files.delete(dir);
    }

    /**
     * Dummy error handler that disables any error pages or jetty related messages and returns our
     * ERROR status JSON with plain HTTP status instead. All original error messages (from our code)
     * are preserved as they are not handled by this code.
     */
    static class ErrorHandler extends ErrorPageErrorHandler {
        @Override
        public void handle(String target, Request baseRequest,
            HttpServletRequest request, HttpServletResponse response) throws IOException
        {
            response.getWriter()
                .append("{\"status\":\"ERROR\",\"message\":\"HTTP ")
                .append(String.valueOf(response.getStatus()))
                .append("\"}");
        }
    }
}

Now we can walk through these parts.

Directory browsing

This was the easy part, after we accidentally found out it is allowed. Just add this line to your WebAppContext:

context.setInitParameter("org.eclipse.jetty.servlet.Default.dirAllowed", "false");

We are running a REST-like API under some context path, and if we comment out this line, Jetty will nicely allow us to browse the directory (WEB-INF included). This line ensures that HTTP 403 Forbidden will be returned instead. How this status is treated…

Error handling in your application

There are errors you can treat in your application – these are not a problem. And then there are cases that somehow slip out, with no reasonable way to intercept them, and the server displays some ugly error, introducing itself to the user completely. If I run the application on http://localhost:8080/xxx and I hit that URL, I get the following output:

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 404 Not Found</title>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /finrisk/easd. Reason:
<pre>    Not Found</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>


That is not bad, but I’d like the short JSON we use for error messages. This can be customized in web.xml:

<!-- somewhere at the end of web.xml -->
<error-page>
    <!-- points at our error resource; the path here is illustrative -->
    <location>/error</location>
</error-page>

This will point the application to our error resource and this is returned:

{"status":"ERROR","message":"HTTP 404"}

Alternatively we can achieve the same with a custom Jetty ErrorHandler, as we defined it at the end of our RestMain class:

context.setErrorHandler(new ErrorHandler());

static class ErrorHandler extends ErrorPageErrorHandler {
    @Override
    public void handle(String target, Request baseRequest,
        HttpServletRequest request, HttpServletResponse response) throws IOException
    {
        response.getWriter()
            .append("{\"status\":\"ERROR\",\"message\":\"HTTP ")
            .append(String.valueOf(response.getStatus()))
            .append("\"}");
    }
}

This effectively replaces your web.xml definition (if present). If you debug your application, you can confirm that with a breakpoint that would be reached without this Jetty line, but will not be reached when it’s present.

How about URLs out of webapp context?

Let’s now access some URL outside of our application – like http://localhost:8080/yy – this results in a default Jetty error page very similar to the one we’ve seen already:

<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 404 </title>
<h2>HTTP ERROR: 404</h2>
<p>Problem accessing /. Reason:
<pre>    Not Found</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>

This is also easy to fix, because we can reuse our ErrorHandler – just add this line for your Jetty Server instance:

server.addBean(new ErrorHandler());

But are we really done?

Not so fast…

The trouble is that the error handler (or the error resource configured in web.xml) is only used when Jetty (or any servlet container) thinks there was an error. If you handled the exception some other way, then this ErrorHandler is not used – whatever HTTP status you send.

We use Jersey as our REST API provider, where you can register an ExceptionMapper for a particular exception type (and all its subtypes). When this is triggered and you populate the response with your output and set the HTTP status, it will not trigger the error page. We handled the error once, there is no reason to do it twice.

But in the case of Jersey there are built-in exception mappers for JSON parse errors, and these may leak some information. Try any URL that expects JSON in a POST with an unpaired curly brace and you’ll get this error (going around your custom ErrorHandler or error page):

Unexpected end-of-input: expected close marker for OBJECT (from [Source: org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream@1986598; line: 1, column: 0])
 at [Source: org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream@1986598; line: 1, column: 3]

This is not acceptable, but in this case you have to fight the framework (Jersey) to unregister the internal mappers and register yours to get back on track. Obviously, that is out of the scope of this blog post. Just remember that we’re covering only unhandled errors here, and any other error/exception handlers have to be checked too.

Shutdown hook

Finally something straightforward and not Java EE at all. When your Jetty server starts and the application (Java process) gets terminated somehow, it will not stop the Jetty server in a graceful manner. With it goes down our Spring application – and if you have components with @PreDestroy, these methods will likely not get run.

To do better than this we utilize Java’s shutdown hooks. The code goes like this:

// after Jetty starts OK with our app running
private static void addJettyShutdownHook(final Server server) {
    Runtime.getRuntime().addShutdownHook(new Thread() {
        @Override
        public void run() {
            try {
                server.stop();
                log.debug("Exiting application (shutdown hook)");
            } catch (Exception e) {
                log.warn("Exception during server stop in shutdown hook", e);
            }
        }
    });
}
All we have to do is call server.stop(). Would a finally block in the main method do the same? Probably – I’m too tired to try… but mainly – a shutdown hook communicates “I’ll be run when the process goes down” much better, I think.
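The mechanism itself can be tried in isolation with plain JDK classes – a minimal sketch (class name made up), where the hook fires on normal exit and on ctrl+C, but not on kill -9:

```java
// Hypothetical standalone demo of Runtime.addShutdownHook - the same JDK
// mechanism used above, with System.out standing in for server.stop().
public class ShutdownHookDemo {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                // this is where server.stop() goes in the real application
                System.out.println("shutdown hook ran");
            }
        });
        System.out.println("main finished");
        // the JVM exits after main; the hook runs during that exit
    }
}
```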


There’s a lot more to do with Jetty – and I don’t know most of it. I’m not doing any SSL here or Jetty-based authentication, but the aforementioned little patches should help a lot. In the case of error handler vs error page within the webapp context – it’s probably your choice. If you don’t have an error page yet, the error handler is the easy one. But unless you run the application on the root context (/), you definitely want to shut up Jetty’s default messages in production. Finally – the shutdown hook helps the webapp go down nicely. This is twice as important if you have another container inside (like Spring). In any case, it makes our applications graceful, right? Who doesn’t want to have graceful applications? :-)

JPA joins without mapped attributes

OK, whoever knows it is possible, just stop reading right away (before my JPA rant starts). And congratulations – you’ve learnt it, hopefully in time.

JPA is hard

Java Persistence API is not easy – let’s face it. Please. Because it is easy only for two kinds of programmers – those who know everything (again, congratulations) and those who think they know everything. Or something similar. While I’m all for minimizing complexity to make things easy, it is a known fact that making things simple (that is, non-complex) is hard. Every time I hear “no problem, that’s easy” I know what I’ll get: a complex ball of mud that does not correspond to “easy” at all.

I’ve read parts of the Hibernate reference, I’ve read Pro JPA 2/2.1 – and I’d love to read it all finally, but there is always something else to read too. And so while I was aching for a solution to get rid of some of my @ManyToOne/@OneToOne mappings from entities (because these are not lazy and can trigger too many selects), I thought I couldn’t, because I’d be unable to join in the direction of that association. I was wrong, but first…

Can’t you avoid triggered selects for *ToOne somehow?

Yes, you can. You can eagerly fetch that association. Let’s talk about these entities:

[image: jpa-security-delete – entity diagram with Security, Client and Domain]

I took them from my previous JPA post (and you can ignore the Domain completely now) where I compared some of the worse features of our beloved leading ORMs (and JPA implementations). I tackled *ToOne there as well, but only superficially, so I don’t need to correct anything essential. Not that I have a problem correcting my previous work and admitting I was wrong and mistaken. Back to the topic…

A Security points to a Client (acting as a security issuer in our case). How can I populate a table for a user with selected attributes from the security and from its issuer as well?

  • The worst case (and still traditionally used!) – select Securities and don’t care. This will likely result in the N+1 select problem (officially the name does not reflect the cause and effect – it is rather 1+N). This plainly sucks. I really don’t know why mainstream ORM solutions still execute this as N+1 even when the fetch is eager (*ToOne is by default), but that just underlines… well, the state of ORM, obviously.
  • Fetch the relationship! Of course. It is bound to happen anyway, so let’s deal with it in one select. Does it mean we’re done in single select? If you have more *ToOne relations on your Security – or Client – then you’re not.
  • Fetch and select columns explicitly. This works, but you have to deal with Lists of arrays of Objects, or – in better case – Querydsl’s handy Tuple. Or you may use projection to DTO. Do this with 60 columns (maybe not for table row, but later for detail) and you’ll know the pain.
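The 1+N arithmetic from the first point can be made concrete with a plain-Java simulation – no JPA involved, the issuer names and counts are made up – that counts the selects a naive per-row fetch would issue:

```java
import java.util.List;
import java.util.Map;

public class NPlusOneDemo {
    // toy stand-in for the database: issuer names by primary key
    static final Map<Integer, String> ISSUERS =
            Map.of(10, "ACME", 11, "Globex", 12, "Initech", 13, "Umbrella");

    static int queryCount;

    static List<Integer> selectSecurityIssuerIds() { // 1 select for all securities
        queryCount++;
        return List.of(10, 11, 12, 13);              // issuer_id of each security row
    }

    static String selectIssuer(int id) {             // one extra select per security
        queryCount++;
        return ISSUERS.get(id);
    }

    // naive eager *ToOne loading: 1 select for N securities + N issuer selects
    static int loadAll() {
        queryCount = 0;
        for (int issuerId : selectSecurityIssuerIds()) {
            selectIssuer(issuerId);
        }
        return queryCount;
    }

    public static void main(String[] args) {
        System.out.println("selects executed: " + loadAll()); // 1 + 4 = 5
    }
}
```

A single join would have returned the same data in one round trip – which is what the next two bullets are about.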

But yes, it is possible. The question is whether it’s worth using JPA if 80% of the time you work around these problems. Depending on relationship optionality you need to use LEFT JOIN, of course. “Use LEFT JOIN, Luke!” is my answer when another developer asks me questions like “when I want to display the client’s fields, some rows just disappear from my results!” or – a bit more fun – “the count select returns a different count than the size of the final result!” – because you don’t need to join for the count unless the WHERE part works with joined columns.
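The “disappearing rows” effect is easy to demonstrate without any SQL engine. In this toy Java model (made-up securities and issuer ids), an inner join drops every security whose optional FK is null, while a left join keeps all rows:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;

public class JoinRowCounts {

    // INNER JOIN semantics: securities without an issuer "disappear"
    static long innerJoinCount(Map<String, Integer> securityToIssuerId) {
        return securityToIssuerId.values().stream()
                .filter(Objects::nonNull)
                .count();
    }

    // LEFT JOIN semantics: every security row is kept, issuer columns just go NULL
    static long leftJoinCount(Map<String, Integer> securityToIssuerId) {
        return securityToIssuerId.size();
    }

    public static void main(String[] args) {
        // security -> issuer_id; null models an optional (nullable) FK
        Map<String, Integer> rows = new LinkedHashMap<>();
        rows.put("SEC1", 10);
        rows.put("SEC2", null); // no issuer
        rows.put("SEC3", 10);

        System.out.println("inner=" + innerJoinCount(rows)
                + ", left=" + leftJoinCount(rows)); // inner=2, left=3
    }
}
```

The same asymmetry explains the “count vs result size” confusion: a count over the base table and a result built with an inner join simply count different row sets.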

Summed up – it’s hard. Gone is the premise that you’ll enjoy working with some mapped objects. No, no – deal with Lists of Object[] or DTOs! It’s hard to say which is better – DTO in theory, but if the list of columns changes a lot then it’s just another place you’ll have to change. If you still use vanilla JPA, consider Querydsl, seriously. Tuple may be a lightweight DTO for you, and on the other side you can get the stuff out using expression paths that are compile-time safe – without the need to mess with accessors. (Old enough to remember this article? :-))

Dropping @ManyToOne annotation

(The same applies for @OneToOne where relevant.)

To write a left join from Security to Client you can find virtually nothing but examples like this (Querydsl, but the same goes for JPQL):

QSecurity security = QSecurity.security;
QClient issuer = new QClient("issuer"); // alias
List<Tuple> result = new JPAQuery().from(security)
  .leftJoin(security.issuer, issuer)
  .list(security, issuer);

That is, in a left join you first state the path that leads to the issuer from the security and then you assign an alias to it (issuer). Sometimes you can live without aliases, but in general you need them for any deeper joins. Notice also how the leftJoin implies the ON clause. This is logical and expected – that’s why we have the mapping there.

But the thing I never realized (and decided to try for the first time only today) is that you can just leftJoin an alias and add your ON explicitly – just like you would in SQL!

QSecurity security = QSecurity.security;
QClient issuer = new QClient("issuer"); // alias
List<Tuple> result = new JPAQuery().from(security)
  .leftJoin(issuer).on(security.issuerId.eq(issuer.id))
  .list(security, issuer);

Obviously you have to have the issuerId attribute mapped on Security – but that is probably some plain type like Integer. This will not trigger any additional select. BTW – if you really want, you can have a dual mapping for the same column, like this:

@Column(name = "issuer_id", insertable = false, updatable = false)
private Integer issuerId;

@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "issuer_id")
private Client issuer;

Notice the insertable/updatable = false – this has to be present on one of those mappings. The question is on which one. If you save/update the entity a lot and the issuer’s ID is good enough, then move it to the @JoinColumn mapping instead. But if you don’t need the Client in most cases at all, remove that mapping completely. Also notice how the code is lying to you about the LAZY fetch. It is not lazy. In theory it can be, with some byte-code-modified Client, but that is not portable anyway. If you absolutely need that attribute in the Security class, you can still make it @Transient, fetch it and set it explicitly (here shown with singleResult for brevity):

QClient issuer = new QClient("issuer"); // alias
QSecurity qSecurity = QSecurity.security;
Tuple result = new JPAQuery().from(qSecurity)
  .leftJoin(issuer).on(qSecurity.issuerId.eq(issuer.id))
  .singleResult(qSecurity, issuer);
Security security = result.get(qSecurity);
security.setIssuer(result.get(issuer)); // sets the transient field

No problem at all. Full control. A couple of lines longer. Probably way more performant at scale.

What if JPA allowed us to ignore relations?

If you use only an id value that exactly mirrors the FK in your entities then you’ll lose some of the ORM benefits – but as said, you’ll also dump a lot of its burden, which in many cases goes unconsidered. Imagine you could specify mappings like this:

@Column(name = "issuer_id")
@ManyToOne(targetEntity = Client.class) // exists, but invalid when types don’t match
private Integer issuerId;

This way you’d say what the target entity is, generated metamodels could offer you joins, etc. I’m not going to evolve this idea further because there are some obvious disadvantages. Your JOIN path would be security.issuerId, not security.issuer, so if you followed it to Client’s attributes it would read confusingly (assuming that when used as a path it would act as a Client, of course). Then maybe this:

@ManyToOne(fetch = FetchType.JOIN_ONLY) // does not exist
@JoinColumn(name = "issuer_id")
private Client issuer;

This would mean that unless you require the JOIN explicitly in your select, this field will NOT be populated. Actually, this would be best combined with the dual mapping shown earlier. The ID is easy to get without a JOIN – and the referenced entity would be insertable/updatable = false. Otherwise the semantics of save would be questionable. Does null mean that the related entity is removed and the FK should be nulled? Or does it mean to ignore it? If JPA should ignore it, you need that explicit issuerId field. If it’s not ignored, even a trivial find / change a single attribute / save (or letting it happen automatically at transaction commit) would delete your relation unexpectedly. You can add some dirty-checking magic of course (“if the attribute changes then do this, otherwise don’t do anything”), but that would just add to the existing mess. So this definitely breaks the principle of least surprise. But with the dual mapping, which is already possible, I’d love to have this really lazy fetch type.

In case you wonder why LAZY currently does not work (without byte-code modification magic)… The answer is that the issuer attribute should be null only if the FK is NULL in the database. So if the FK is there, you need to instantiate some Client, and if that is supposed to be lazy, it has to be some kind of proxy to the real one that is loaded lazily when you actually touch (get something from) the issuer object. This is very easy with collections (*ToMany) and ORMs can do that (although Eclipselink’s Vector is a spectacularly wrong way to do it) – that’s why I’m so obsessed with the *ToOne problem. :-)

Is it still ORM?

If dual mapping with the possibility of ignoring the relationship (unless explicitly joined) worked, it would still be ORM as we know it. You’d have bigger control, a single em.find would not trigger 47 selects (real story, seriously – all of them traversing *ToOne relationships) and there would be minimal implications for JPA (although there are people more competent to say so or otherwise).

Currently I plan to drop quite a lot of *ToOne mappings because:

  • There is no portable way to avoid the fetch when I know I don’t want it.
  • When I want it, I can join it myself anyway.
  • If I want to pick a specific relation I can just find it by the ID that is available on the entity (and then store it in a @Transient field, for example).

Am I still using it as an ORM then? I’m dropping a lot of “mappings” of my “relations(hips)” from my “objects”. But I can still utilize a lot of JPA (and Querydsl). There still is a persistence context, you can use the L2 cache (if you want), and maybe you don’t have an uber-cool mapping to your domain… but let’s face it. How often do we really map it? How often do we just mirror our databases?

Do you map @ManyToMany or do you map the association table with two IDs instead? Looking back at our picture up there – can you get all Domain IDs for a known Security ID (or a list of IDs)? Yes, you can. In SQL, by querying a single table. Eclipselink joins Security, SecurityDomain and Domain to get information that is clearly available in the SecurityDomain table itself (just not mapped explicitly). Is it so difficult to treat IDs properly? It probably is. How is that JPA lowering complexity?

Execution postponed

JPA adds tons of accidental complexity every time. Not at first, but later. And virtually no team working with it has a real JPA expert. I study JPA more than most people I know and it still keeps surprising me – rarely in a good way. The left join using an unrelated alias (at least from the mapping point of view) and an explicit ON is one of the good surprises. I don’t even know why I never tried it when I officially longed for it! I ranted: “Why don’t we have free joins on any columns (PK/FK) just like SQL does?” And nobody ever contradicted me. Most examples always start with a path going from the FROM entity, using the alias only as the second argument (Querydsl, but JPQL examples are the same). We are slaves of our *ToOne relations and can’t get rid of undesired selects – and yet there has been a simple way to do it all along (although I don’t know whether it worked before JPA 2.1, but it probably did).

If you don’t care about the *ToOne problem – no problem. If you do, though, the treatment is:

  • Replace your *ToOne mapping with simple ID as a value attribute.
  • Change anyJoin(x.y, alias) to anyJoin(alias).on(x.yId.eq(alias.id)) – this is a Querydsl example but it transforms to JPQL directly. List both entities in the projection if you need them (maybe you just use them in the WHERE part).
  • If you really want the +1 select, do it with em.find yourself. That may actually use the L2 cache (if the entity is not in the persistence context already).

With this tiny speck of knowledge I’m backing off from my plans to get rid of JPA in our project (you can imagine what a pain that would be). Now it gives me just enough control to contain the select explosions to acceptable levels.

