Last three years with software

A long time ago I decided to blog about my technology struggles – mostly with software but also with consumer devices. I don’t know why it happened on Christmas Eve though. Two years later I repeated the format. And here we are three years after that. So the next post can be expected in four years, I guess. Actually, I split this one into two – one for software, mostly based on professional experience, and the other one for consumer technology.

Without further ado, let’s dive into this… well… the dive will obviously be pretty shallow. Let’s skim the stuff I worked with, stuff I like and some I don’t.

Java case – Java 8 (verdict: 5/5)

This time I’m adding my personal rating right into the header – a little change from the previous post, where it was at the end.

I love Java 8. Sure, it’s not Scala or anything even more progressive, but in the context of Java’s philosophy it was a huge leap, and lambdas especially really changed my life. BTW: Check out this interesting Erik Meijer talk about category theory and (among other things) how it relates to Java 8 and its method references. Quite fun.

Having worked with Java 8 for 17 months now, I can’t imagine going back. Not only because of lambdas and streams and related details like Map.computeIfAbsent, but also because of the date and time API, default methods on interfaces… and the list could probably go on.
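To recall why computeIfAbsent and streams feel so liberating, here is a tiny self-contained sketch (the class and data are mine, not from any project):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class Java8Demo {
    // Group words by their length. Pre-Java 8 this needed the explicit
    // "get, check for null, put a new list" dance; computeIfAbsent
    // collapses it into a single line.
    static Map<Integer, List<String>> byLength(List<String> words) {
        Map<Integer, List<String>> map = new HashMap<>();
        for (String word : words) {
            map.computeIfAbsent(word.length(), k -> new ArrayList<>()).add(word);
        }
        return map;
    }

    // The same thing as a stream one-liner with a collector.
    static Map<Integer, List<String>> byLengthStream(List<String> words) {
        return words.stream().collect(Collectors.groupingBy(String::length));
    }
}
```

Both methods produce the same grouping; which one reads better is a matter of taste, but neither was possible in this form before Java 8.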

JPA 2.1 (no verdict)

ORM is an interesting idea and I can claim around 10 years of experience with it, although the term itself is not that important. But I read books about it in my quest to understand it (many programmers don’t bother). The idea is kinda simple, but it has many tweaks – mainly when it comes to relationships. JPA 2.1 as an upgrade is good, I like where things are going, but I like the concept less and less over time.

My biggest gripe is the little control over “to-one” loading, which is difficult to make lazy (more like impossible without some nasty tricks) and can result in chain loading even if you are not interested in the related entity at all. I think there is a reason why things like jOOQ cropped up (although I personally don’t use it). There are some tricks to get rid of these problems, but they come at a cost. Typically – don’t map these to-one relationships, keep them as foreign key values. You can always fetch the related stuff with a query.
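As a sketch of that advice – a hypothetical entity (names are mine) keeping just the foreign key value instead of a mapped to-one relation:

```java
@Entity
public class Transaction {
    @Id
    private Integer id;

    // Instead of:
    //   @ManyToOne private Client client;
    // keep the plain FK value – no accidental chain loading can happen,
    // and the Client can always be fetched explicitly with a query.
    @Column(name = "client_id")
    private Integer clientId;
}
```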

That leads to the bottom line – be explicit, it pays off. Sure, it doesn’t work universally, but anytime I leaned to the explicit solutions I felt a lot of relief from struggles I went through before.

I don’t rank JPA, because I try to rely on less and less ORM features. JPA is not a bad effort, but it is so Java EE-ish, it does not support modularity and the providers are not easy to change anyway.

Querydsl (5/5)

And when you work with JPA queries a lot, get some help – I can only recommend Querydsl. I’ve been recommending this library for three years now – it never failed me, it never let me down, and it often amazed me. This is how the criteria API should have looked.

It has a strong metamodel that allows you to do crazy things with it. We based a kinda universal filtering layer on it, whatever the query is. We even filter queries with joins, even on joined fields. But again – we can do that because our queries and their joins are not ad-hoc, they are explicit. 🙂 Because you should know your queries, right?

Sure, Querydsl is not perfect, but it is as powerful as JPQL (or as limited, for that matter) and more expressive than the JPA criteria API. Bugs are fixed quickly (personal experience), the developers care… what more could you ask for?

Docker (5/5)

Docker stormed into our lives – for some practically, for others at least through the media. We don’t use it that much, because lately I’m bound to Microsoft Windows and SQL Server. But I experimented with it a couple of times for development support – we ran Jenkins in a container, for instance. And I’m watching it closely because it rocks and will rock. Not sure what I’m talking about? Just watch the DockerCon 2015 keynote by Solomon Hykes and friends!

Sure – their new Docker Toolbox accidentally screwed up my Git installation, so I’d rather install Linux in VirtualBox and test Docker inside it without polluting my Windows even further. But these are just minor problems in this (r)evolutionary tidal wave. And one just must love the idea of immutable infrastructure – especially when demonstrated by someone like Jérôme Petazzoni (for the merit itself, not that he’s my idol beyond professional scope :-)).

Spring 4 and on (4/5)

I have been aware of Spring since the dawn of microcontainers – and Spring emerged victorious (sort of). A friend of mine once mentioned how impressed he was by Rod Johnson’s presentation about Spring many years ago. How structured his talk and speech was – the story about how he disliked all those logs pouring out of your EE application server… and that’s how Spring was born (sort of).

However, my real exposure to Spring started only in 2011 – but it was very intense. And again, I read more about it than most of my colleagues. And just like with JPA – the more I read, the less I know, or so it seems. Spring is big. Start some typical application and read the logs – and you can see the EE of the 2010s (sort of).

It’s not that I don’t like Spring, but I guess its authors (and how many of them there are now!) simply can’t see anymore what a beast they have created over the years. Sure, there is Spring Boot, which reflects all the current trends – like don’t deploy into a container, start the container from within; or all of its automagic features, monitoring, clever defaults and so on. But that’s it. You don’t do more, but you’d better know about it. Or not? Recently I got to one of Uncle Bob’s newer articles – called Make the Magic go away. And there is undeniably much to it.

Spring developers do their best, but the truth is that many developers adopt Spring just because “it just works” – while they don’t know how, and very often it doesn’t (sort of). You actually should know more about it – or at least some basics, for that matter – to be really effective. Of course, this magic problem is not only about Spring (or JPA), but these are the leaders of the “it simply works” movement.

But however you look at it, it’s still “enterprise” – and that means complexity. Sometimes essential, but mostly accidental. Well, that’s also part of the Java landscape.

Google Talk (RIP)

And this is this post’s biggest letdown. Google stopped supporting their beautifully simple chat client without any reasonable replacement. The Chrome application just doesn’t seem right to me – and it actually genuinely annoys me with its chat icon that hangs on the desktop, sometimes over my focused application, and I can’t relocate it easily… simply put, it does not behave like a normal application. That means it behaves badly.

I switched to Pidgin, but there are issues. Pidgin sometimes misses a message in the middle of a conversation – that was the biggest surprise. I double-checked: when someone repeated a question I had reportedly been asked already, I went to my Gmail account and really saw the message in the Chat archive, but not in my client. And if I get messages while offline, nothing notifies me.

I activated the chat in my Gmail after all (against my wishes though), merely to be able to see any missed messages. But sadly, the situation with Google Talk/chat (or Hangouts, I don’t care) is dire when you expect a normal desktop client. 😦

My Windows toolset

Well – now away from Java, let’s hop onto my typical developer’s Windows desktop. I have mentioned some of my favourite tools before, some of them a couple of times, I guess. So let’s do it quickly – bullet style:

  • Just after some “real browser” (my first download on the fresh Windows) I actually download Rapid Environment Editor. Setting Windows environment variables suddenly feels normal again.
  • Git for Windows – even if I didn’t use git itself, just for its bash – it’s worth it…
  • …but I still complement the bash with GnuWin32 packages for whatever is missing…
  • …and run it in better console emulator, recently it’s ConEmu.
  • Notepad2 binary.
  • And the rest like putty, WinSCP, …
  • Also, on Windows 8 and 10 I can’t imagine living without Classic Shell. Windows 10 is a bit better, but its Start menu is simply unusable for me – the classic Start menu was so much faster with the keyboard!

As a developer I also sport some other languages and tools, mostly JVM based:

  • Ant, Maven, Gradle… obviously.
  • Groovy, of course, probably the most popular alternative JVM language. Not to mention that groovysh is a good REPL until Java 9 arrives (recently delayed beyond 2016).
  • VirtualBox, recently joined by Vagrant and hopefully also something like Chef/Puppet/Ansible. And this leads us to my plans.

Things I want to try

I was always a friend of automation. I’ve been using Windows for many years now, but my preference for UNIX tools is obvious. Try to download and spin up a virtual machine for Windows and for Linux and you’ll see the difference. Linux just works, and tools like Vagrant know where to download images, etc.

With Windows, people are not even sure how or whether they can publish prepared images (talking about development only, of course), because nobody can really understand the licenses. Microsoft started to offer prepared Windows virtual machines – primarily for web development though, no server-class OS (not that I appreciate Windows Server anyway). They even offer Vagrant boxes, but try to download one and run it as is. For me, Vagrant refused to connect to the started VirtualBox machine, any reasonable instructions are missing (nothing specific to Vagrant is in the linked instructions), no Vagrantfile is provided… honestly, quite a lame job of making my life easier. I still appreciate the virtual machines, though.

But then there are those expiration periods… I just can’t imagine preferring any Microsoft product/platform for development (let alone for production, obviously). The whole culture of automation on Windows is just completely different – anything from “nonexistent for many” through “very difficult” to “artificially restricted”. No wonder many Linux people can script and so few Windows guys can. Licensing terms are to blame as well. And virtual machine sizes for Windows are also ridiculous – although Microsoft is reportedly trying to do something in this field and offer a reasonably small base image for containerization.

Anyway, back to the topic. Automation is what I want to try to improve. I’m still doing it anyway, but recently the progress has not been as good as I wished it to be. I fell behind with Gradle, I didn’t use Docker as much as I’d like to, etc. Well – but life is not only work, is it? 😉


Good thing is there are many tools available for Windows that make developer’s (and former Linux user’s) life so much easier. And if you look at Java and its whole ecosystem, it seems to be alive and kicking – so everything seems good on this front as well.

Maybe you ask: “What does 5/5 mean anyway? Is it perfect?” Well, probably not, but at least it means I’m satisfied – happy even! Without happiness it’s not a 5, right?

JPA – modularity denied

Here we are for another story about JPA (Java Persistence API) and its problems. But before we start splitting our persistence unit into multiple JARs, let’s see – or rather hear – how much it is denied. I can’t even think about this word normally anymore after I’ve heard the sound. 🙂

The second thing to know before we go on – we will use Spring. If your application is pure Java EE, this post will not work “out of the box” for you, but it can still help you summarize the problem and show some options.

Everybody knows layers

I decided to modularize our application because, after reading the Java Application Architecture book, I couldn’t sleep that well with the typical mega-JARs containing all the classes of a particular architecture level. Yeah, we modularize – but only by levels. That is, we typically have one JAR with all the entity classes; some call it “domain”, some just “jpa-entities”, whatever… (not that I promote ad-lib names).

In our case we have the @Service classes in a different JAR (but yeah, all of them in one) – typically using a DAO for a group of entities. The complete stack actually goes from a REST resource class (again in a separate JAR, using Jersey or Spring MVC), which uses a @Service, which in turn talks to DAOs (marked as @Repository, although they are not repositories in the pure sense). Complex logic is pushed to specific @Components somewhere under the service layer. It’s not DDD, but at least the dependencies flow nicely from the top down.

Components based on features

But how about dependencies between parts of the system at the same level? Our system has a lot of entity classes, some are pure business (Clients, their Transactions, financial Instruments), some are rather infrastructural (meta model, localization, audit trail). Why can’t these be separated? Most of the stuff depends on meta model, localization is quite independent in our case, audit trail needs meta model and permission module (containing Users)… It all clicks when one thinks about it – and it is more or less in line with modularity based on features, not on technology or layers. Sure we can still use layer separation and have permission-persistence and permission-service as well.

Actually, this is quite a recurring question: Should we base packages on features or on layer/technology/pattern? From the sources I’ve read (though I might have read what I wanted to :-)) it seems that a consensus was reached – start by feature – which can be part of your business domain. If stuff gets big, you can split it into layers too.
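In our case such a feature-first split might look roughly like this (module names are only indicative, based on the parts mentioned above):

```
meta-model/              <- most other modules depend on this
localization/            <- quite independent
permission/              <- contains Users
  permission-persistence
  permission-service
audit-trail/             <- needs meta-model and permission
```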

Multiple JARs with JPA entities

So I carefully try to put different entity classes into different JAR modules. Sure, I could just keep them in the same JAR and check how tangled they are with Sonar, but it is recommended to enforce the separation and make the dependencies explicit (not only in the Java Application Architecture book). My experience is not as rich as that of the experts writing books, but it is much richer than that of people not reading any books at all – and, gosh, how many of them there are (both books and people not reading them)! And this experience confirms it quite clearly – things must be enforced.

And here comes the problem when your persistence is based on JPA: JPA clearly wasn’t designed to have a single persistence unit span multiple persistence.xml files. So what are these problems actually?

  1. How to distribute persistence.xml across these JARs? Do we even have to?
  2. What classes need to be mentioned where? E.g., we need to mention classes from upstream JARs in persistence.xml if they are used in relations (breaks DRY principle).
  3. When we have multiple persistence.xml files, how to merge them in our persistence unit configuration?
  4. What about configuration in persistence XML? What properties are used from what file? (Little spoiler, you just don’t know reliably!) Where to put them so you don’t have to repeat yourself again?
  5. We use EclipseLink – how to use static weaving for all these modules? How about a module with only abstract mapped superclass (some dao-common module)?

That’s quite a lot of problems for an age when modularity is mentioned so often. And for a technology from the “enterprise” stack, no less. And they are mostly phrased as questions – because the answers are not readily available.

Distributing persistence.xml – do I need it?

This one is difficult and may depend on the provider you use and the way you use it. We use EclipseLink and its static weaving. This requires persistence.xml. Sure, we may try to keep it together in some neutral module (or on any path, as it can be configured for the weaving plugin), but that kinda goes against the modularity quest. What options do we have?

  • I can create a union persistence.xml in a module that depends on all the needed JARs. This would be OK if I had just one such module – typically some downstream module like a WAR or a runnable JAR. But we have many. If I made a persistence.xml for each, they would contain a lot of repetition. And I’d be referencing a downstream resource, which is ugly!
  • We can have a dummy upstream module, or an out-of-module path, with the union persistence.xml. This would keep things simple, but it would make it more difficult to develop the modules independently, maybe even with different teams.
  • Keep a persistence.xml in each JAR with the related classes. This seems best from the modularity point of view, but it means we need to merge multiple persistence.xml files when the persistence unit starts.
  • Or can we have a different persistence unit for each persistence.xml? This is OK if they truly are different databases (different JDBC URLs), otherwise it doesn’t make sense. In our case we have one rich DB, and any module can see the part it is interested in – that is, the entities from the JARs it has on the classpath. If you have data in different databases already, you’re probably sporting microservices anyway. 🙂

I went for the third option – especially because EclipseLink’s weaving plugin likes it and I didn’t want to redirect it to a non-standard path for persistence.xml – but it also seems to be the right logical way. However, there is nothing like a dependency between persistence.xml files. So if you have b.jar that uses a.jar, and there is an entity class B in b.jar that contains a @ManyToOne to entity A from a.jar, you have to mention class A in the persistence.xml in b.jar. Yes, the class is already mentioned in a.jar, of course. Here, clearly, the engineers of JPA didn’t even think about the possibility of using multiple JARs in a really modular way.
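For illustration, the persistence.xml in b.jar then has to look something like this (unit and package names are just placeholders of mine):

```xml
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
  <persistence-unit name="demo-unit">
    <!-- B lives in this JAR... -->
    <class>com.example.b.B</class>
    <!-- ...but A from a.jar must be repeated here too,
         because B references it in a @ManyToOne -->
    <class>com.example.a.A</class>
  </persistence-unit>
</persistence>
```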

In any case – this works, compiles, weaves your classes during build – and more or less answers questions 1 and 2 from our problem list. And now…

It doesn’t start anyway

When you have a single persistence.xml, it gets found as a unique resource, typically in META-INF/persistence.xml – in any JAR, actually. But when you have more of them, they don’t all get picked up and merged magically – the application fails during startup. We need to merge all those persistence.xml files during the initialization of our persistence unit. Now we’re tackling questions 3 and 4 at once, for they are linked.

To merge all the configuration XMLs into one unit, you can use this configuration for the PersistenceUnitManager in Spring (clearly, using MergingPersistenceUnitManager is the key):

@Bean public PersistenceUnitManager persistenceUnitManager(DataSource dataSource) {
    MergingPersistenceUnitManager persistenceUnitManager =
        new MergingPersistenceUnitManager();
    // default persistence.xml location is OK, goes through all classpath*
    persistenceUnitManager.setDefaultDataSource(dataSource);
    return persistenceUnitManager;
}

But before I unveil the whole configuration we should talk about the configuration that was in the original singleton persistence.xml – which looked something like this:

    <property name="eclipselink.weaving" value="static"/>
    <property name="eclipselink.allow-zero-id" value="true"/>
    <!-- Without this there were other corner cases when a field change was
         ignored. This can be worked around by calling the setter, but that sucks. -->
    <property name="eclipselink.weaving.changetracking" value="false"/>

The biggest question here is: What is used during build (e.g. by static weaving) and what can be put into runtime configuration somewhere else? Why somewhere else? Because we don’t want to repeat these properties in all XMLs.

But before finishing the programmatic configuration we should take a little detour to shared-cache-mode that showed the problem with merging persistence.xml files in the most bizarre way.

Shared cache mode

Firstly, if you mean it seriously with JPA, I cannot recommend enough one excellent and comprehensive book that answered tons of my questions – often before I even asked them. I’m talking about Pro JPA 2, of course. Like, seriously, go and read it unless you are super-solid in JPA already.

We wanted to enable cached entities selectively (to ensure that @Cacheable annotations have any effect). But I made a big mistake when I created another persistence.xml file – I forgot to mention shared-cache-mode there. My persistence unit picked up both XMLs (using MergingPersistenceUnitManager), but my caching went completely nuts. It cached more than expected and I was totally confused. The trouble here is – persistence.xml files don’t really get merged. The lists of classes in them do, but the configurations do not. Somehow my second persistence XML became dominant (one always does!) and because there was no shared-cache-mode specified, it used the default – which is whatever EclipseLink thinks is best. No blame there, just another manifestation that the JPA people didn’t even think about these setup scenarios.
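For the record, this is the element I forgot to repeat – ENABLE_SELECTIVE is the JPA mode where only entities annotated with @Cacheable are cached (the unit name is just a placeholder):

```xml
<persistence-unit name="demo-unit">
    <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
    ...
</persistence-unit>
```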

It’s actually the other way around – you can have multiple persistence units in one XML, that’s a piece of cake.

If you really want some hard evidence of how things are in your setup, put a breakpoint somewhere where you can reach your EntityManagerFactory, and when it stops there, dig deeper to find out what your cache mode is. Or anything else – you can check the list of known entity classes, JPA properties… anything really. And it’s much faster than messing around just guessing.

In the picture above (a debugger screenshot of the EntityManagerFactory internals) you can see that now I can be sure what shared cache mode I use. You can also see which XML file was used – in our case it was the one from the meta-model module (JAR), so this one would dominate. Luckily, I don’t rely on this anymore, not for runtime configuration at least…

Putting the Spring configuration together

Now we’re ready to wrap up our configuration and move some stuff from persistence.xml into Spring configuration – in my case it’s Java-based configuration (XML works too, of course).

Most of our properties were related to EclipseLink. I read their weaving manual, but I still didn’t understand what works when and how. I had to debug some of the stuff to be really sure.

It seems that eclipselink.weaving is the crucial property namespace that should stay in your persistence.xml, because it gets used by the plugin performing the static weaving. I debugged the Maven build and the plugin definitely uses the eclipselink.weaving.changetracking property value (we set it to false, which is not the default). Funny enough, it doesn’t need eclipselink.weaving itself, because running the plugin implies you wish for static weaving. During startup it does get picked up though, so EclipseLink knows it can treat the classes as statically woven – which means it can be pushed into the programmatic configuration too.

The rest of the properties (and shared cache mode) are clearly used at the startup time. Spring configuration may then look like this:

@Bean public DataSource dataSource(...) { /* as usual */ }

@Bean public JpaVendorAdapter jpaVendorAdapter() {
    EclipseLinkJpaVendorAdapter jpaVendorAdapter = new EclipseLinkJpaVendorAdapter();
    // env is the @Autowired Spring Environment mentioned below
    jpaVendorAdapter.setDatabase(
        Database.valueOf(env.getProperty("jpa.dbPlatform", "SQL_SERVER")));
    return jpaVendorAdapter;
}

@Bean public PersistenceUnitManager persistenceUnitManager(DataSource dataSource) {
    MergingPersistenceUnitManager persistenceUnitManager =
        new MergingPersistenceUnitManager();
    // default persistence.xml location is OK, goes through all classpath*
    persistenceUnitManager.setDefaultDataSource(dataSource);
    return persistenceUnitManager;
}

@Bean public FactoryBean<EntityManagerFactory> entityManagerFactory(
    PersistenceUnitManager persistenceUnitManager, JpaVendorAdapter jpaVendorAdapter)
{
    LocalContainerEntityManagerFactoryBean emfFactoryBean =
        new LocalContainerEntityManagerFactoryBean();
    emfFactoryBean.setPersistenceUnitManager(persistenceUnitManager);
    emfFactoryBean.setJpaVendorAdapter(jpaVendorAdapter);
    emfFactoryBean.setSharedCacheMode(SharedCacheMode.ENABLE_SELECTIVE);

    Properties jpaProperties = new Properties();
    jpaProperties.setProperty("eclipselink.weaving", "static");
    jpaProperties.setProperty("eclipselink.allow-zero-id",
        env.getProperty("eclipselink.allow-zero-id", "true"));
    jpaProperties.setProperty("eclipselink.logging.parameters",
        env.getProperty("eclipselink.logging.parameters", "true"));
    emfFactoryBean.setJpaProperties(jpaProperties);
    return emfFactoryBean;
}

Clearly, we can set the database platform, the shared cache mode and all runtime-relevant properties programmatically – and we can do it just once. This is not a problem for a single persistence.xml, but in any case it offers better control. You can now use Spring’s @Autowired private Environment env; and override whatever you want with property files or even -D JVM arguments – and still fall back to default values – just as we do for the database property of the JpaVendorAdapter. Or you can use SpEL. This is flexibility persistence.xml simply cannot provide.

And of course, all the things mentioned in the configuration can now be removed from all your persistence.xml files.

I’d love to get rid of eclipselink.weaving.changetracking in the XML too, but I don’t see any way how to provide this as the Maven plugin configuration option, which we have neatly unified in our parent POM. That would also eliminate some repeating.

Common DAO classes in separate JAR

This one (question 5 from our list) is no problem after all the previous ones, just a nuisance. EclipseLink refuses to weave your base class regardless of @MappedSuperclass usage. But as mentioned in one SO question/answer, just add a dummy concrete @Entity class and you’re done. You never use it, so it is no problem at all. And you can vote for this bug.
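A minimal sketch of that workaround (class names are mine):

```java
// dao-common module: only an abstract mapped superclass...
@MappedSuperclass
public abstract class BaseEntity {
    @Id
    @GeneratedValue
    protected Integer id;
}

// ...plus this dummy concrete entity, which exists solely so that
// EclipseLink's static weaver processes the module. It is never used.
@Entity
public class WeavingDummyEntity extends BaseEntity {
}
```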

This is probably not a problem for load-time weaving (haven’t tried it), or for Hibernate. I never had to solve any weaving problem with Hibernate, but on the other hand the current project pushed my JPA limits further, so maybe I would have learned something about Hibernate too (if it had been willing to work for us in the first place).

Any Querydsl problems?

Ah, I forgot to mention my favourite over-JPA solution! Were there any Querydsl-related problems? Well, not really. The only hiccup I got was a NullPointerException when I moved some entity base classes and my subclasses were not compilable. Before javac could print a reasonable error, Querydsl went in and gave up without good diagnostics on this most unexpected case. 🙂 I filed an issue for this, but after I fixed my import statements for the superclasses, everything was OK again.


Let’s do it in bullets, shall we?

  • JPA clearly wasn’t designed with modularity in mind – especially not when the modules form a single persistence unit, which is a perfectly legitimate use case.
  • It is possible to distribute persistence classes into multiple JARs, and then:
    • You can go with a single union persistence.xml, which can be downstream or upstream – this depends on whether you need it only at runtime or during the build too.
    • I believe it is more proper to pack a partial persistence.xml in each JAR, especially if you need it during the build. Unfortunately, there is no escape from repeating some upstream classes in the module again, just because they are referenced in relations (a typical culprit of “I don’t understand how this is not an entity, when it clearly is!”).
  • If you have multiple persistence.xml files, it is possible to merge them using Spring’s MergingPersistenceUnitManager. I don’t know if you can use it in non-Spring applications, but I saw this idea reimplemented and it wasn’t that hard. (If I had to reimplement it, I’d try to merge the configuration part too!)
  • When you’re merging persistence.xml files, it is recommended to minimize the configuration in them, so it doesn’t have to be repeated. E.g., for EclipseLink we leave in only the stuff necessary for build-time static weaving; the rest is set programmatically in our Spring @Configuration class.

There are still some open questions, but I think they lead nowhere. Can I use multiple persistence units with a single data source? That way I could have each persistence.xml as a separate unit. But I doubt relationships would work across them, and the same goes for transactions (without XA, that is). If you think multiple units are a relevant solution, please let me know.

I hope this helps if you’re struggling with the noble quest of modularity. Don’t be shy to share your experiences in the comments too! No registration required. 😉

JPA Pagination of entities with @OneToMany/@ManyToMany

Displaying paginated results of entities with their *-to-many sub-entities is a tricky thing, and many people actually don’t even know about the problems. Let’s check what JPA does and does not do for you. Besides standard JPA I’ll use Querydsl for the queries – this effectively results in JPQL. Querydsl is type-safe and much more readable than the Criteria API.

Source of N+1 select problem

Imagine having an entity called User that can have many Roles. Mapping looks like this:

@JoinTable(name = "user_roles",
  joinColumns = @JoinColumn(name = "user_id", referencedColumnName = "id"),
  inverseJoinColumns = @JoinColumn(name = "role_id", referencedColumnName = "id"))
@ManyToMany(fetch = FetchType.EAGER)
private Set<Role> roles;

You want to display a paginated list of users that contains list of their roles – for instance as tiny icons with tooltips containing role names.

The naive approach is to query the Users and then wait for whatever may come:

QUser u = QUser.user; // we will use this alias further
List<User> result = new JPAQuery(em).from(u)
    .offset((page - 1) * pageSize)
    .limit(pageSize)
    .list(u);

Now something may or may not happen. A simple select from USER is definitely executed, and if you don’t touch user.roles, that will be all. But you want those roles and you want to display them – for instance when you iterate the results. Each such access executes another select from ROLE where user_id = the iterated user’s id. This is the infamous N+1 select problem (although the order is rather 1+N :-)). It is extremely bad for selects with large result sets; for pages, N at least has reasonable limits. But still – no reason to do 25 additional selects instead of… how many actually?

Left join for the help?

If the mapping were @…ToOne, you could just left join this entity (e.g. a user’s Address, assuming a single address is needed for a user). A left join in this case does not change the number of rows in the result – even for optional addresses – whereas an inner join would omit users without addresses (also a surprisingly common error among programmers who try to avoid SQL for no good reason).

Left join for *ToOne relations is good solution for both paginated and non-paginated results.

But for *ToMany, pagination and left join don’t go well together. The problem is that SQL can only paginate the rows of the resulting join; it cannot know the proper offset and limit to select, for instance, only the second user. Here we want page 2 with page size 2:

List<User> result = new JPAQuery(em).from(u)
    .leftJoin(u.roles).fetch() // fetch join (Querydsl 3 style)
    .offset(2).limit(2) // page 2, page size 2
    .list(u);

The result is delivered in a single select – but what a result…

We definitely don’t want User1 with a single RoleC and User2 with RoleA. Not only is that not the page we want (we should get User3 on the final page 2), it’s just plain disinformation (aka a lie).

(Don’t) Use Eager!

For most cases, EAGER mapping for *ToMany is not a good default. But in case you’re asking what will happen… the JPA provider at least knows at the time of the select from USER that we want the ROLEs too. EclipseLink performs N additional selects to get the users’ roles, although it utilizes any cached data possible. I actually tested it with the cache turned off using these lines in persistence.xml:

<property name="eclipselink.query-results-cache" value="false"/>
<property name="eclipselink.cache.shared.default" value="false"/>
<property name="eclipselink.cache.size.default" value="0"/>
<property name="eclipselink.cache.type.default" value="None"/>

Cache can bring some relief, but it actually just obscures the problem and may mislead you into thinking the problem is not there. Especially if you create the data using JPA just before the test. 🙂

In the case of LAZY, the JPA provider has to wait until we touch a role list – and it always loads only the one that is needed. But for our page we need to check all of them, so we will end up with N selects again. The provider has no way to optimize for this, not to mention that you may end up with a LazyInitializationException (your fault) unless you use OSIV (open session in view), which I consider bad design after my previous experiences with it.

If the provider knows upfront that you want the roles (EAGER), it may optimize – but I would not count on that. EclipseLink does not, and you simply cannot know. So let’s just do it ourselves, shall we? Switch that EAGER back to LAZY – which is the default, so you can just as well remove the fetch parameter altogether.

1+1 select, no problem

We need to get the proper page of users and then load their roles. There are two ways to do this:

  1. if you have the reverse mapping (role.users) then you can get List<User> first and then select the roles whose users are among them;
  2. if there is no reverse mapping, you list the users first and then query them again with a left join to roles – again restricted to the users from the first list.

In both cases a list of user IDs may be enough. Because our role does not have the reverse mapping (I don’t like it that much for @ManyToMany relations) we end up with code like this:

List<Integer> uids = find.queryFrom(u)
    .orderBy(u.name.asc()) // essential for paging
    .offset(2).limit(2)
    .list(u.id);

List<GuiUser> users = find.queryFrom(u)
    .leftJoin(u.roles).fetch()
    .where(u.id.in(uids))
    .orderBy(u.name.asc()) // same order as above
    .distinct()
    .list(u);

What you display is the result of the second query, obviously. There are two important points here though:

  • You have to repeat the order. While in the first select it is essential for paging (paging without order is meaningless, right?), in the second case you could order in memory instead – but there is no reason to, as you let the DB order just a single page and you save lines of Java code. Not to mention that Java quite probably uses a different collator… it really is not worth it.
  • You have to use distinct() before the list() in the second query. This not only adds DISTINCT to the select, but also instructs Querydsl or JPA (really don’t know who does the job, sorry :-)) to give you a list of distinct users, not a list the size of the joined result. Such distinct is required in most leftJoins on *ToMany entities (unless you create projection results for them).
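
The plain-collections equivalent of that distinct() shows why it matters – one entry per user, original order preserved. The values below are made up for illustration:

```java
import java.util.*;

public class DistinctDemo {

    // the user column of the joined result: repeated once per joined role
    static List<String> userColumn() {
        return Arrays.asList("User1", "User1", "User1", "User2");
    }

    // what distinct() achieves: one entry per user, original order kept
    static List<String> distinctUsers() {
        return new ArrayList<>(new LinkedHashSet<>(userColumn()));
    }

    public static void main(String[] args) {
        System.out.println(distinctUsers()); // [User1, User2]
    }
}
```

Without it you would get four “users” back for a page of two – the joined rows, not the entities.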

1+1 for OneToMany

Now let’s pretend we have a different situation – some @OneToMany case, like an Invoice and its Items. Item will have an invoice attribute and Invoice will have @OneToMany items mapped by Item.invoice – like this:

@OneToMany(mappedBy = "invoice")
private Set<Item> items;

And now the select part:

QInvoice i = QInvoice.invoice;
Map<Integer, Invoice> invoices = find.queryFrom(i)
    .orderBy(i.id.asc())
    .offset(2).limit(2)
    .map(i.id, i);
for (Invoice invoice : invoices.values()) {
    // this performs items = new HashSet<>() for each invoice
    invoice.initItems();
}

QItem ii = QItem.item;
List<Item> items = find.queryFrom(ii)
    .where(ii.invoice.id.in(invoices.keySet()))
    .list(ii);
for (Item item : items) {
    // addItem is always nicer than getItems().add(item)
    invoices.get(item.getInvoice().getId()).addItem(item);
}
Always check what selects are generated in the end. In my case the second query performs no additional join and puts the ids into the IN list – and that’s how it should be.

Also note the usage of map(i.id, i) – this returns a LinkedHashMap, so the order is preserved (cool!) and getting the invoice for an item is much easier. The difference in the select is just one redundant id column, which is no problem at all.

Conclusion

We demonstrated the essential problem with pagination of one/many-to-many joins and checked two solutions, both based on one additional select. These solutions are not necessary if pagination is not required, obviously. There may be other ways to do it, and there are even extensions for specific JPA providers (like @BatchFetch for EclipseLink), but I’d prefer this approach for a couple of reasons:

  • Full control over the selects – well, at least as full as you can get with Querydsl over JPQL. You can select into DTOs, you can enumerate required columns – there is much more you can do around the basic idea. (Funny. Just minutes after writing this sentence I travelled home from work, reading the neat little book SQL Performance Explained – and coincidentally I was going through the part about joins and nested loops. Markus Winand also provided a short analysis of how ORMs in various languages perform joins and what you can do about it. The final tip was pretty clear: get to know your ORM and take control of your joins.)
  • It scales! There is no N. As long as your pagination is reasonably fast (that is a separate problem and you have to do what you have to do), the second select goes after IDs, which should be pretty fast anyway.
  • It is unrelated to any specific features of any specific JPA provider.
  • And after all – you have full control over your contract. Don’t let the presentation layer trigger your selects. Even better – close that persistence context after leaving the service layer.

The only problem you should be aware of is the limit on the number of expressions in the IN clause. Check it out, read the documentation (both for your JPA provider and your RDBMS), experiment, find out what the maximum page size will be – and (obviously) cover it in your tests (at least when it crashes the first time ;-)).
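
A simple way around that limit is to chunk the ID list and run the second select once per chunk, merging the results. This is just a sketch – the 1000 below happens to be Oracle’s IN-list limit, check your own RDBMS:

```java
import java.util.*;

public class InClauseChunks {

    // split the ID list into chunks no bigger than the DB's IN-list limit
    static <T> List<List<T>> chunks(List<T> ids, int maxSize) {
        List<List<T>> result = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += maxSize) {
            result.add(ids.subList(i, Math.min(i + maxSize, ids.size())));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            ids.add(i);
        }
        // with a limit of 1000 expressions we need 3 selects
        System.out.println(chunks(ids, 1000).size()); // 3
    }
}
```

Of course, if your page size never comes close to the limit, you don’t need this at all.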

The most important takeaways from this post are: 1) EAGER is not a solution, 2) the result of a left join with “to-many” cardinality cannot be reasonably paginated in SQL. We’re not talking about specific solutions here – hell, clever DB guys can even provide you with procedures delivering the right page plus the total count of all records in a single call to the DB. But that’s beyond basic SQL/JPA. 🙂

Happy New Year 2013!

Happy New Year, of course! My last year was a bit poorer blog-wise. For some reason I was lazier about writing things up. Heck, sometimes I think I was less lucky with new technology overall. I achieved some nice results with testing in our company during the previous year. This year I wanted to push Continuous Integration and testing a bit further, maybe Gradle too – but the results in the CI area are mixed and the rest brought no real results at all.

On the brighter side, I managed to finish my quest for a system time shifter on the JVM usable for testing purposes – all documented in my post. Blogging is not everything, of course, and I am quite happy how topics around Clean Code got some attention around me. We pushed the Java Simon project a bit further too, I learned a few interesting things around Spring, MVC and jQuery… Add that beautiful Scala class on Coursera and the year was more than fun after all.

Still, I’d like to make some resolutions. I discovered Querydsl (thanks to a colleague of mine) and it seems to be the answer to readable and compile-time safe Criteria – because those shipped with JPA2 are simply horrible to read. It works well with IDEA’s annotation processing and Maven, and it should be no problem with Gradle either. Ah, Gradle! I’ve been watching this guy for around two years, but for whatever reason I was not able to use it for anything more than a few tests – and that is not Gradle’s fault. I like it, I like the idea, I like the language – and I think this year it’s time to switch Java Simon from Maven to Gradle. After that I’ll go on with projects in our company, although the battle there will be more difficult, I guess.

Out of technology: I managed to put together a few songs with my colleagues and it was fun – the first time I played in something close to a band. We played only at our company party, but that doesn’t change anything… it was real fun. We didn’t have a drummer, so I used my Native Instruments Maschine Mikro and pre-programmed our songs – and I was really happy with the results. I’ll probably dedicate a post to Maschine Mikro, because it is one really interesting controller (and software too!).

Maschine Mikro controller

Talking about music, I managed to upload two full-blown tracks to my Soundcloud and later added two simple guitar+voice tracks. While mixing/mastering is still my weakness, I’m happy that I was able to pull through recording-wise. And just as I imagined – my songs, composed with paper, pen and acoustic guitar many years ago, can really work as rock recordings too.

So what about this year and those resolutions? Gradle – sure. More testing methodology on our projects – maybe I’ll even manage to document it here on the blog. Pushing continuous delivery just a bit further again. Scala or another JVM language? I don’t know. Maybe – maybe for tests. And a bit of my music – I need to practice more with keyboard, guitar and bass guitar (yeah, I bought a lovely Yamaha bass too).

Bass guitar Yamaha RBX375

The last resolution is no resolution at all – we have to somehow survive the “socialistic” experiments of our government here in Slovakia (although there is nothing social about them). Europe has its own share of problems – and the USA? Well, they saved themselves from falling down that fiscal cliff (or whatever) just a few hours ago. And it probably means making the cliff a bit higher for the next time. So we might have escaped one doomsday at the end of 2012, but who knows how our civilization will fare in the future.

Then I remember the really poor and I know we have nothing truly horrible to complain about. So once again – Happy New Year – and a whole great year of 2013!