Falling in love with Spring Java Configuration

The Spring guys spent significant effort to give us alternatives to the original XML configuration. Hats off – they thought it all out, starting with the fact that the runtime representation is not directly tied to any particular format of configuration file. Or configuration class, for that matter. We have had annotations for… forever now. And then there is the possibility of pure Java config. This post is not a tutorial, but rather a short discussion of why it is cool – with a little bonus annotation at the end.

Going Java Config? Why?

I can't exactly remember the pros and cons of one or the other (XML) right now – you can mix them anyway if needed. But recently I decided to give it a go, as our current project has a not-that-complicated application context – mostly typical JPA stuff, transaction manager, property placeholder – and that's it. The reason? Better control. And who loves XML anyway? :-) We could also discuss compile-time safety, but dependencies are resolved later anyway; then there is component scan that finds stuff not mentioned directly, and Spring's FactoryBean adds some fog too… so compile-time safety is not the main win here, especially if you had a good tool for Spring XML before (IntelliJ IDEA is one). So better control is the main reason.

Sure, we have a lot of control in Spring already. There are profiles, which I have used for a couple of years for configuration adjustments. Let's say I have a standalone app configured with Spring. During development I run my main class from the test scope, and my test master configuration contains many profile sections where my datasource is pointed to various testing databases. Because it is test scope, it doesn't go into the production JAR, so I can lower my guard and commit the DB user/password into this test configuration. (Not that people don't sometimes commit IPs and users/passwords into production code/configuration too – but that's another story altogether. :-))

The rest of the configuration is imported from main resources, so I don't repeat myself. All I have to do is add various run configurations with various -Dspring.profiles.active=XXX VM parameters (or one, and change it on the fly – your choice). Sure, you can do this with a property placeholder, but a profile is easier – one switch and all "properties" can have different names. Actually, I never tried putting property placeholder configuration into a profile, but that would also be an interesting way to externalize this configuration.

Now this works, but with Java configuration you don't have to use profiles. You use some kind of if/switch in your @Bean annotated method. You may control anything – what is set, what is instantiated (as long as the return type fits, e.g. any implementation of DataSource), whether you go for URL/name/password configuration or pull your resource from JNDI… It's all up to you, and you can do it in a language that can actually execute – which XML is not.
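A minimal sketch of what such a @Bean method can look like – the class name, the property names and the DBCP BasicDataSource are my own illustrative choices, not from the original project:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

import org.apache.commons.dbcp.BasicDataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;

// Sketch only: any DataSource implementation satisfies the @Bean return type.
@Configuration
public class DataSourceConfig {

    @Autowired
    private Environment env;

    @Bean
    public DataSource dataSource() throws NamingException {
        String jndiName = env.getProperty("jndi.datasource");
        if (jndiName != null) {
            // pull the resource from JNDI...
            return (DataSource) new InitialContext().lookup(jndiName);
        }
        // ...or go for plain URL/name/password configuration
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl(env.getProperty("jdbc.url"));
        ds.setUsername(env.getProperty("jdbc.username"));
        ds.setPassword(env.getProperty("jdbc.password"));
        return ds;
    }
}
```

One branch, one switch, no profile needed – and since it is plain Java, you can put a breakpoint right on the if.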

First impression? Awesome!

Well, I'm not a complete greenhorn with Spring and I know annotation-based configuration pretty well. It also wasn't the first time I wrote @Configuration over a class. But it was the first time I did it without XML altogether – the first time I called new AnnotationConfigApplicationContext(MyConfig.class). And the result was good. I got stuck for some time because the entityManagerFactory method that used the class LocalContainerEntityManagerFactoryBean (which is a FactoryBean, as I know) didn't work when I naively returned factoryBean.getObject(). But when I changed it to return the factoryBean itself (with return type FactoryBean<EntityManagerFactory>), everything was fine.
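For illustration, the gotcha looks roughly like this – a sketch only, with the persistence unit name and the commented-out naive line as placeholders:

```java
// Sketch of the FactoryBean gotcha described above.
@Bean
public FactoryBean<EntityManagerFactory> entityManagerFactory() {
    LocalContainerEntityManagerFactoryBean factoryBean =
        new LocalContainerEntityManagerFactoryBean();
    factoryBean.setPersistenceUnitName("my-unit"); // placeholder name
    // return factoryBean.getObject(); // naive - Spring has not initialized the bean yet
    return factoryBean; // Spring detects the FactoryBean and exposes its product instead
}
```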

The second problem we encountered was reusing a @Configuration class that needed Spring properties in a project with an XML-based master configuration. The XML configuration contained a context:property-placeholder element, but the injected @Autowired private Environment env; didn't contain those properties. (They seem to be somewhere deep in the environment, but they were not returned by env.getProperty.) Now, these are things I don't understand fully (Spring is big!), but using @PropertySource on the @Configuration class that autowired the environment fixed it.

Those were hardly any problems at all when you consider the big change in the way how the configuration is expressed – and think about possibilities.

Composing configurations and Master configurations

Any serious configuration gets bigger after some time – and I prefer to split it into numerous files, mostly related to specific technical aspects. With Java configuration you can @Import another configuration class (or @ImportResource an XML configuration) too. Or you can go on autopilot and, using @ComponentScan (the equivalent of XML's context:component-scan), let Spring find all the other @Configurations.

But imagine having something I call “master configuration” – which is the only class you mention to your bootstrap code. You most likely have one for production code (src/main) – it contains default configuration that works OK for production. All other partial configurations are auto-discovered and applied. Mine looks simple:

@Configuration
@ComponentScan(basePackages = "com.acme.myproject")
public class MasterConfig {
    // something may be here
}

Now you want to run an alternative master configuration – as mentioned, I put these into test code (src/test). What I don't want is to include my production master configuration. While the one above looks harmless, I may have my reasons – why would I want to apply @EnableScheduling, for instance? For automated tests I actually don't want any @Scheduled methods firing at all. The reasons can be many, let's not argue here. My test configuration may look like this:

@Configuration
@ComponentScan(basePackages = "com.acme.myproject",
    excludeFilters = @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, value = MasterConfig.class))
public class MasterTestConfig {
    // something else
}

The code speaks for itself – we have the same component scan, but excluding the production config. And omitting scheduling, and possibly doing something else in the body of the config. We may read different properties (using annotations), etc.

This works, but can we somehow streamline it? Yes we can…

ConfigurationMaster annotation

I hope I'm not reinventing the wheel here, but my solution was ultra-simple, and it not only made me happy – it plainly underlines the beauty of Java configuration. It is the ConfigurationMaster annotation and it looks like this:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@ComponentScan(basePackages = "com.acme.myproject",
    excludeFilters = {@ComponentScan.Filter(value = ConfigurationMaster.class)})
public @interface ConfigurationMaster {
}

This time we exclude classes annotated by… whoa, isn't that a bit self-centered? :-) Yup, probably the most egocentric annotation I've ever created. You place this annotation over any "entry-point" configuration you want to load in your bootstrap code. Other configurations for your other project components are auto-discovered (it's up to you to keep only the necessary stuff on your classpath). And should you want alternative configurations, you just mark them as @ConfigurationMaster as well.

The annotation is part of the project – you can see the base package for the component scan right there. Of course it is not a universal solution to everything, but it works for many cases. Our previous configurations would now look like this – first the production one:

@ConfigurationMaster
public class MasterConfig {
    // something may be here
}

And the test one:

@ConfigurationMaster
public class MasterTestConfig {
    // something else
}

It may be unlikely that you want many master configurations in main code, but it is quite expected in test code. At least I have one for automated tests and then some to run my application in development mode. These can use different property files (they can be loaded from the test scope and have different names explicitly stating they are test properties), different bean implementations – or (don't tell anyone) hardcoded JDBC URL/username/password. The point is – only the one you bootstrap applies; the other master configs are excluded.

I don't know about you, but I love this Java configuration stuff. I may encounter new problems, of course, but at least it's really fun. You can even actually debug how it loads your configuration!

JPA – is it worth it? Horror stories with EclipseLink and Hibernate

One friend of mine brought Hibernate to our dev-team back in 2004 or so. But now he uses something much simpler and avoids ORM/JPA whenever possible. He can, because he is mostly master of his projects.

I had to get more familiar with JPA on my path. There is always more to learn about it. When you discover something like orphanRemoval = true on @OneToMany, it may bring you to the brink of crying. Out of happiness, of course. (Stockholm syndrome, probably.) But then there are other days when you just suffer. Days when you find bugs in JPA implementations – or something close to them. And you can't really choose from many providers, can you? There are just two mainstream players.

Right now we are using EclipseLink and there are two bugs (or missing features or what) that kinda provoked our big effort to switch to Hibernate. What were our problems and what was the result of our switch? Read on…

EclipseLink and Java 8

You can avoid this problem, although you have to be extra careful. EclipseLink's lazy list – called IndirectList – cannot be used with streams. Actually, if it didn't compile, it would be fine. What's worse is that it works – but very badly.

Guys creating Java do their best to make streams work effortlessly, there are those default methods in interfaces, etc. But no, no luck here. IndirectList returns empty stream. Why? Because of this bug. (It works fine for IndirectSet.)

What is the cause of the problem? Well… someone decided to extend Vector, but instead of sticking with it, they also decided that the extended Vector would not be the actual backing collection – they added their own Vector delegate too. This works fine for interfaces like List, but not so much for extended classes. Now add the "ingenious" design of Vector – full of protected fields you have to take care of – and IndirectList clearly collides with Vector, badly.

Why do you need to take care of the protected fields? Because creating a stream on a Vector uses its special java.util.Vector.VectorSpliterator, which reads those protected values. So if you delegate to another Vector, you either delegate java.util.Vector#spliterator too (but then it won't compile with earlier Java versions) or – preferably – don't delegate to another Vector and use the extended one as your backing collection. Of course, the Java guys might have used size() and get(int) instead of accessing Vector's protected elementCount, elementData, etc. – but that would not be as efficient.
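The collision can be simulated in a few lines of plain Java. The class below is my toy model of the flaw, not EclipseLink code: it extends Vector but keeps the elements in a separate delegate, so the inherited protected fields stay empty – and so does the spliterator-based stream. The defensive copy at the end is the workaround we used:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.Vector;

// Toy model of the flaw (NOT EclipseLink code): extend Vector but keep the
// elements in a separate delegate, the way IndirectList does.
class DelegatingList<E> extends Vector<E> {
    private final Vector<E> delegate = new Vector<>();

    @Override public boolean add(E e) { return delegate.add(e); }
    @Override public E get(int i) { return delegate.get(i); }
    @Override public int size() { return delegate.size(); }
    @Override public Iterator<E> iterator() { return delegate.iterator(); }
    @Override public Object[] toArray() { return delegate.toArray(); }
    // spliterator() is NOT overridden - the inherited one reads the protected
    // elementData/elementCount of the extended (empty) Vector.
}

public class IndirectListDemo {
    public static void main(String[] args) {
        DelegatingList<String> list = new DelegatingList<>();
        list.add("a");
        list.add("b");

        System.out.println(list.size());           // 2 - the List API works
        System.out.println(list.stream().count()); // 0 - the stream sees the empty Vector
        // Workaround: defensive copy before streaming
        System.out.println(new ArrayList<>(list).stream().count()); // 2
    }
}
```

The `new ArrayList<>(list)` copy is ugly, but it is the safe way to stream an IndirectList until the bug is fixed.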

BTW: Anybody here likes Vector? I personally hate Java’s wannabe compatibility that really hurts its progress as we need to drag more and more burden with us.

JPA and counts on joins with distinct

The other trouble was related to a specific select where we used a join with distinct instead of a subquery with exists. We have an entity called Client that can be bound to any number of Domains; a @ManyToMany mapping is used on the Client side. Domain does not know about Client – the relation is unidirectional.


We often need to obtain Clients where any of their domains is in a provided list (actual input parameter contains IDs of those domains). Using Querydsl, it looks like this:

QDomain cd = new QDomain("cd");
JPAQuery query = new JPAQuery(em).from(QClient.client)
    .join(QClient.client.domains, cd)
    .where(cd.id.in(domainIds)) // domainIds is the provided list of domain IDs
    .distinct();

I'm sure you can easily transform it mentally to the Criteria API or JPQL – Querydsl produces JPQL, BTW. Distinct here has the special meaning given to it by the JPA specification. Now you can call query.list(QClient.client) – it produces a nice join and returns only distinct Clients. You try query.count(), and it works as expected as well.

But imagine that Client has a composite primary key. We actually have exactly this case: an entity bound again to domains, everything looks the same – except the entity has a composite PK. list works all right, but if you try query.count() you'll get a completely different, illogical query using exists, and the join to Domains is lost completely. The results are wrong – definitely not the size of the list result.

There are some bugs filed for this – for instance this one. Because there was a discussion about this behavior we decided to find out how Hibernate treats this.


The trouble is that it is not a bug after all – at least not according to the JPA 2.1 specification, which reads in section 4.8.5: "The use of DISTINCT with COUNT is not supported for arguments of embeddable types or map entry types."

I found this today when I was gathering proofs for this post. Our story however would unfold in different direction. Are we using JPA? Yes. Are there other options? Yes!

Spoiler 2:

We used subselect with exists in the end, but that’s not the point. ;-)

So we switch to Hibernate, right?

We got kind of carried away by some bug reports resolved as FIXED – like this one, or the ones you can see in its links. But switching JPA provider isn't as easy as promised. In the end we found out that count distinct for entities with composite PKs doesn't work with SQL Server anyway. But we learned a lot of interesting things about how far apart both implementations are when it comes to translating JPQL to SQL. Mind you, I'm not actually sure whether everything we wrote in Querydsl (which generates the JPQL) is 100% correct, but even that should say something: when it works, I expect it to keep working after changing the JPA implementation.

Hibernate screwing up deletes with "joins"

We have a lot of delete clauses that worked just fine in EclipseLink:

new JPADeleteClause(em, QGuiRolePermission.guiRolePermission)
    .where(QGuiRolePermission.guiRolePermission.guiRole.description.eq(description))
    .execute();

This essentially means "delete all role permission assignments for the role with the specified description". Piece of cake, right? We're using an implicit join there again, but it's the same story all over – and it worked just fine in EclipseLink. EclipseLink creates proper SQL with exists:

DELETE FROM GUI_RolePermissions WHERE EXISTS (
  SELECT t1.gui_role_id FROM GUI_Roles t0, GUI_RolePermissions t1
    WHERE ((t0.description = ?) AND (t0.id = t1.gui_role_id)) AND t1.gui_role_id = GUI_RolePermissions.gui_role_id
      AND t1.domain_id = GUI_RolePermissions.domain_id AND t1.permission_id = GUI_RolePermissions.permission_id)

It is not perfect – and we get back to it in section Random Query Generator – but it works. Let’s just compare it to Hibernate now. This is JPQL (actually this is the same for both providers as it’s produced by Querydsl):

delete from GuiRolePermission guiRolePermission
  where guiRolePermission.guiRole.description = ?1

This does not seem alarming – but the SQL is completely off:

delete from GUI_RolePermissions cross join GUI_Roles guirole1_ where description=?

This does not work on our currently used SQL Server, whatever dialect we choose. Why not go with exists? We have a helper method that takes an entity (Querydsl base path) and its where condition and performs the delete. Now instead of:

new JPADeleteClause(em, QGuiRolePermission.guiRolePermission)
    .where(QGuiRolePermission.guiRolePermission.guiRole.description.eq(description));

We have to write this:

    new JPADeleteClause(em, QGuiRolePermission.guiRolePermission)
        .where(new JPASubQuery().from(QGuiRole.guiRole)
            .where(QGuiRole.guiRole.eq(QGuiRolePermission.guiRolePermission.guiRole),
                QGuiRole.guiRole.description.eq(description))
            .exists());
Not terrible… but why?

Another Hibernate twist

Things may get complicated when more relationships are involved. Take this simple JPQL for instance:

delete from Security security
  where security.issuer.priority = ?1

We want to remove all Securities whose issuer (a Client) has a specific priority value:


There is another implicit join there, but one subquery with exists should cover it. The Security class contains a @ManyToMany relationship to the Domain class through the intermediate table Securities_Domains. That's why we need two deletes here – and this is what EclipseLink generates (issuer is of class Client):

DELETE FROM Securities_Domains WHERE EXISTS (
  SELECT t1.id FROM "Clients" t0, Securities t1
    WHERE ((t0."priority" = ?) AND (t0."id" = t1."issuer_id"))
      AND t1.id = Securities_Domains.security_id);
DELETE FROM Securities WHERE EXISTS (
  SELECT t1.id FROM "Clients" t0, Securities t1
    WHERE ((t0."priority" = ?) AND (t0."id" = t1."issuer_id"))
      AND t1.id = Securities.id);

It works just fine. But Hibernate really flexes its muscles here!

delete from Securities_Domains where (security_id) in (
   select id from Securities where priority=?)

The first delete is obviously missing the join to the Client entity in that subselect – and it fails spectacularly. And we're actually lucky we don't have another column called priority on Securities as well. :-) That could have hidden the error for ages.

With or without id?

Love that song… I mean the U2 one, not Hibernate's. When you see a JPQL equality test on two entities, you assume it is performed on their IDs. So if you spell out those IDs explicitly, the result should be the same, right? Maybe that's just a JPA myth after all. Consider this JPQL:

delete from MetaObjectSetup metaObjectSetup
  where not exists (select 1 from Permission permission
    where permission.metaObjectSetup = metaObjectSetup)

This produces a query that does not work properly – the unqualified id probably resolves against permission1_ instead of the outer table:

delete from META_Object_Setups where  not (exists (
  select 1 from Permissions permission1_
    where permission1_.meta_object_setup_id=id))

The version working with Hibernate must perform eq on the id fields explicitly (EclipseLink doesn't mind either way):

new JPADeleteClause(em, QMetaObjectSetup.metaObjectSetup)
    .where(new JPASubQuery().from(QPermission.permission)
        .where(QPermission.permission.metaObjectSetup.id.eq(
            QMetaObjectSetup.metaObjectSetup.id))
        .notExists());

SQL result:

delete from META_Object_Setups where  not (exists (
  select 1 from Permissions permission1_
    where permission1_.meta_object_setup_id=META_Object_Setups.id))

Experiences like these were the last drop, and we reverted all our efforts to switch to Hibernate.

Happy ending?

We switched back to EclipseLink after this. I don't remember that much resistance from Hibernate, like… ever. Maybe our JPQL was too loose for it – joins not explicit, aliases missing, etc. But in the end it did not solve our problem.

It is a real shame that count and distinct cannot be combined in a way that keeps the query.list and query.count operations consistent in libraries like Querydsl. It is also a shame that when it is not supported (as officially stated in the specification), the provider does not throw an exception and instead silently does something fishy.

You can do the same in SQL by wrapping the select into another one with count – but JPQL does not support queries in the FROM clause. Pity. However, this is one of those cases when you can't go wrong with a correlated subquery. You just have to remember that a subquery does not imply any of the equality implied by JPA joins (it can't, actually; there are cases when this would be an obstacle) and you have to spell it out yourself – see the last examples from the previous part (With or without id?).

This is all really crazy. Remotely, it reminds me horror stories about JSF and bugs of its various implementations (always different). Sure, things are really complicated, but then maybe the trouble is they are. Maybe it can be simpler. Maybe it’s wrong that I have to define @ManyToOne to be able to express joins. Any @XToOne has consequences often unseen even by experienced JPA users. Maybe some jOOQ or Querydsl over plain SQL is better than this stuff. I just don’t know…

Random Query Generator in EclipseLink

Let's look at something else while we stick with JPA. Here we are back to EclipseLink, and there is one very easy thing I want to do in SQL:

  DELETE FROM Contacts WHERE EXISTS (
    SELECT t0.id FROM Clients t0 WHERE ((t0.priority = ?) AND t0.id = Contacts.client_id))

In plain wording – I want to remove all of a Client's Contacts when this Client has some specific priority. As for the mapping… I hope there is nothing screwed up here:

@Entity
@Table(name = "Clients")
public class Client {
    @Id
    private Integer id;

    @OneToMany(mappedBy = "client", cascade = CascadeType.ALL)
    private List<ClientContact> contacts;
}

@Entity
@Table(name = "Contacts")
public class ClientContact {
    @Id
    private Integer id;

    @Column(name = "client_id", insertable = false, updatable = false)
    private Integer clientId;

    @ManyToOne
    @JoinColumn(name = "client_id")
    private Client client;
}
Back reference to client is mapped both ways, but one mapping is read-only. Both primary keys are simple Integers. No miracles here. And now three ways we tried, always with Querydsl, JPQL and SQL:

new JPADeleteClause(em, QClientContact.clientContact).where(
    new JPASubQuery().from(QClient.client)
        .where(QClient.client.priority.eq(priority),
            QClient.client.eq(QClientContact.clientContact.client))
        .exists());

// JPQL - logically exactly what we want
delete from ClientContact clientContact
  where exists (select client from Client client
    where client.priority = ?1 and client = clientContact.client)

// SQL contains double EXISTS and two unnecessary Clients more
  DELETE FROM Contacts WHERE EXISTS (
    SELECT t0.id FROM Contacts t0 WHERE EXISTS (
      SELECT ? FROM Clients t2, Clients t1
        WHERE ((t2.priority = ?) AND (t2.id = t0.client_id)))  AND t0.id = Contacts.id)
bind => [1, 47]

Ok, it works, right? Can we help it when we mention id equality explicitly (client.id.eq(…client.id))?

new JPADeleteClause(em, QClientContact.clientContact).where(
    new JPASubQuery().from(QClient.client)
        .where(QClient.client.priority.eq(priority),
            QClient.client.id.eq(QClientContact.clientContact.client.id))
        .exists());

// JPQL looks promising again, I bet this must generate proper query - or at least the same
delete from ClientContact clientContact
  where exists (select client from Client client
    where client.priority = ?1 and client.id = clientContact.client.id)

// Three clients?!
  DELETE FROM Contacts WHERE EXISTS (
    SELECT t0.id FROM Contacts t0 WHERE EXISTS (
      SELECT ? FROM Clients t3, Clients t2, Clients t1
        WHERE (((t2.priority = ?) AND (t2.id = t3.id)) AND (t3.id = t0.client_id)))
          AND t0.id = Contacts.id)
bind => [1, 47]

You've got to be kidding me, right? What for?! I always believed that if id is the entity's @Id, then the provider can do virtually the same thing with or without it. I even dreamt about using the foreign key directly – but that seems to be way beyond the capabilities of current JPA providers.

Ok, let’s try something lame. Something I’d write only in JPA, not in real SQL, of course. Something with a lot of implicit identity equality hidden between the lines:

new JPADeleteClause(em, QClientContact.clientContact)
    .where(QClientContact.clientContact.client.priority.eq(priority));

// not like SQL at all, but pretty logical if you know the mapping
delete from ClientContact clientContact
  where clientContact.client.priority = ?1

/* Surprise! Still one unnecessary Contacts in inner join that could go through FK,
 * but definitely the best result so far.
 * Mind though - this refused to work with Hibernate, at least with SQL Server dialect (any). */
  DELETE FROM Contacts WHERE EXISTS (
    SELECT t1.id FROM Clients t0, Contacts t1
      WHERE ((t0.priority = ?) AND (t0.id = t1.client_id)) AND t1.id = Contacts.id)
bind => [47]

I hope this demonstrates the impotence of this field of human activity after more than a decade of effort. I’m not saying it all sucks – but after seeing this I’m not far from that.

With JPA doing too many things you don't want or expect (unless you're an expert) and having many limitations compared to SQL, I'd expect at least semi-good SQL. This is not even close.

Ramble On

So that's another day with JPA. With a proper query (or the best approximation when it's not critical) there is just one last thing we have to pay attention to… those IndirectLists that can't be streamed. Just one last thing, I said? Ouch… of course, we still have to check our SQL and watch for any bugs or gray zones.

Yeah, we’re staying with JPA for a while on this project. But the next one I’ll start with metamodel directly over DB. I hope I can’t suffer more that way. :-) And maybe I’ll be able to do ad-hoc joins without @XToY annotations that always bring in stuff you don’t want. That’s actually one of my biggest gripes with JPA.

Many people avoid as many relation mappings as possible while they can still express their joins. Many people avoid @ManyToMany and map the association table explicitly, so they can reach all entities A for B's id (or list of ids) – with a single join. Otherwise EclipseLink stubbornly joins all three tables, as its understanding of PKs and FKs is obviously lacking.
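A sketch of that explicit mapping – the entity and column names are illustrative, borrowed from the Securities/Domains example above, and the surrogate key is my own simplification:

```java
// Hypothetical explicit mapping of the association table - this replaces a
// @ManyToMany on Security, so domain IDs are reachable with a single join
// (or with no join at all when only the IDs are needed).
@Entity
@Table(name = "Securities_Domains")
public class SecurityDomain {

    @Id
    @GeneratedValue
    private Integer id; // surrogate key; a composite @EmbeddedId works too

    @Column(name = "security_id")
    private Integer securityId;

    @Column(name = "domain_id")
    private Integer domainId;
}
```

With this in place, "all domain IDs for a security" is just select sd.domainId from SecurityDomain sd where sd.securityId = ?1 – no extra tables dragged in.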

Lessons learned? Not that much actually. But we’re definitely not switching provider for another couple of months at least!

JPA Pagination of entities with @OneToMany/@ManyToMany

Displaying paginated results of entities with their *-to-many sub-entities is a tricky thing, and many people don't even know about the problems. Let's check what JPA does and does not do for you. Besides standard JPA, I'll use Querydsl for queries – this effectively results in JPQL. Querydsl is type-safe and much more readable compared to the Criteria API.

Source of N+1 select problem

Imagine having an entity called User that can have many Roles. Mapping looks like this:

@JoinTable(name = "user_roles",
  joinColumns = @JoinColumn(name = "user_id", referencedColumnName = "id"),
  inverseJoinColumns = @JoinColumn(name = "role_id", referencedColumnName = "id"))
@ManyToMany(fetch = FetchType.EAGER)
private Set<Role> roles;

You want to display a paginated list of users that contains list of their roles – for instance as tiny icons with tooltips containing role names.

Naive approach is to query the Users and then wait for whatever may come:

QUser u = QUser.user; // we will use this alias further
List<User> result = new JPAQuery(em).from(u)
    .offset((page - 1) * pageSize)
    .limit(pageSize)
    .orderBy(u.login.asc()) // the order attribute is illustrative
    .list(u);

Now something may or may not happen. A simple select from USER is definitely executed, and if you don't touch user.roles, that will be all. But you want those roles and you want to display them – for instance when you iterate the results. Each such access executes another select from ROLE where user_id = the iterated user's id. This is the infamous N+1 select problem (although the order is rather 1+N :-)). It is extremely bad for selects with large result sets; for pages, N at least has reasonable limits. But still – no reason to execute 25 additional selects instead of… how many actually?

Left join to the rescue?

Were the mapping @…ToOne, you could just left join the entity (e.g. the user's Address, assuming a single address per user). A left join in this case does not change the number of rows in the result – even for optional addresses – whereas an inner join would omit users without addresses (a surprisingly common error for programmers who try to avoid SQL for no good reason).

Left join for *ToOne relations is good solution for both paginated and non-paginated results.

But for *ToMany, pagination and left join don't go well together. The problem is that SQL can only paginate the rows of the resulting join; it cannot know the proper offset and limit to select, for instance, the second user only. Here we want page 2 with page size 2:

List<User> result = new JPAQuery(em).from(u)
    .leftJoin(u.roles).fetch()
    .orderBy(u.login.asc())
    .offset(2)
    .limit(2)
    .list(u);

The result is delivered in a single select – but what a result…

We definitely don't want User1 with a single RoleC and User2 with RoleA. Not only is that not the page we want (page 2 should contain User3), it's just plain disinformation (aka a lie).
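What the database does here can be simulated in a few lines of plain Java – below are the join rows for three assumed users (User1 with roles A, B, C; User2 with role A; User3 with role B), with SQL-style offset/limit applied to the rows rather than the users:

```java
import java.util.List;

public class JoinPagingDemo {
    // The rows as the database sees them after USER left join ROLE
    static final List<String[]> JOIN_ROWS = List.of(
        new String[]{"User1", "RoleA"},
        new String[]{"User1", "RoleB"},
        new String[]{"User1", "RoleC"},
        new String[]{"User2", "RoleA"},
        new String[]{"User3", "RoleB"});

    // SQL-style paging: offset/limit applied to join rows, not to users
    static List<String[]> page(int page, int pageSize) {
        int from = (page - 1) * pageSize;
        return JOIN_ROWS.subList(from, Math.min(from + pageSize, JOIN_ROWS.size()));
    }

    public static void main(String[] args) {
        for (String[] row : page(2, 2)) {
            System.out.println(row[0] + " " + row[1]);
        }
        // prints "User1 RoleC" and "User2 RoleA" - a chopped User1 and an
        // incomplete User2 instead of the expected page containing User3
    }
}
```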

(Don’t) Use Eager!

For most cases, EAGER is not a good default for *ToMany mappings. But in case you're asking what will happen… the JPA provider at least knows at the time of the select from USER that we want the ROLEs too. EclipseLink performs N additional selects to get the users' roles, although it utilizes any cached data possible. I actually tested it with the cache turned off using these lines in persistence.xml:

<property name="eclipselink.query-results-cache" value="false"/>
<property name="eclipselink.cache.shared.default" value="false"/>
<property name="eclipselink.cache.size.default" value="0"/>
<property name="eclipselink.cache.type.default" value="None"/>

A cache can bring some relief, but it actually just obscures the problem and may mislead you into believing the problem is not there. Especially if you create the data using JPA just before the test. :-)

In the case of LAZY, the JPA provider has to wait until we touch the role list – and it always loads only the one that is needed. But for our page we need to touch all of them, so we end up with N selects again. The provider has no way to optimize for this, not to mention that you may end up with a LazyInitializationException (your fault) unless you use OSIV (open session in view), which I consider bad design after my previous experiences with it.

If provider knows that you want the roles upfront (EAGER) it may optimize, but I would not count on that. EclipseLink does not, and you simply cannot know. So let’s just do it ourselves, shall we? And switch that EAGER back to LAZY, which is default, so you can just as well remove the fetch parameter altogether.

1+1 select, no problem

We need to get the proper page of users first and then load their roles. There are two ways to do this:

  1. if you have a reverse mapping (role.users), you can get the List<User> and then select the roles whose users are among these users;
  2. if there is no reverse mapping, you list the users first and then query again with a left join to roles – again restricted to the users from the first list.

In both cases list of user IDs may be enough. Because our role does not have reverse mapping (I don’t like it that much for @ManyToMany relations) we end up with code like this:

List<Integer> uids = find.queryFrom(u)
    .orderBy(u.login.asc())
    .offset((page - 1) * pageSize)
    .limit(pageSize)
    .list(u.id);
List<User> users = find.queryFrom(u)
    .leftJoin(u.roles).fetch()
    .where(u.id.in(uids))
    .orderBy(u.login.asc())
    .distinct()
    .list(u);

What you display is the result of the second query, obviously. There are two important points here though:

  • You have to repeat the order. While in the first select it is essential for paging (paging without order is meaningless, right?), in the second case you could do it in memory – but there is no reason to, as you let the DB order just a single page and you save lines of Java code. Not to mention that Java quite probably uses a different collator… it really is not worth it.
  • You have to use distinct() before the list() of the second query. This not only adds distinct to the select, but also instructs Querydsl or JPA (I really don't know who does the job, sorry :-)) to give you the list of distinct users, not a list the size of the joined result. Such a distinct is required in most leftJoins on *ToMany entities (unless you create projection results for them).

1+1 for OneToMany

Now let's pretend we have a different situation – a @OneToMany case, like an Invoice and its Items. Item will have an invoice attribute and Invoice will have @OneToMany items mapped by Item.invoice – like this:

@OneToMany(mappedBy = "invoice")
private Set<Item> items;

And now the select part:

QInvoice i = QInvoice.invoice;
Map<Integer, Invoice> invoices = find.queryFrom(i)
    .orderBy(i.id.asc())
    .offset((page - 1) * pageSize)
    .limit(pageSize)
    .map(i.id, i);
// this performs items = new HashSet<>() for each invoice
invoices.values().forEach(invoice -> invoice.initItems());

QItem ii = QItem.item;
List<Item> items = find.queryFrom(ii)
    .where(ii.invoice.in(invoices.values()))
    .list(ii);
for (Item item : items) {
    // addItem is always nicer than getItems().add(item)
    invoices.get(item.getInvoice().getId()).addItem(item);
}

Always check what selects are generated in the end. In my case ii.invoice.in performs no additional join and puts ids into the IN list – and that’s how it should be.

Also note the usage of map(i.id, i) – this returns LinkedHashMap, so the order is preserved (cool!) and makes getting the invoice for item much easier. Difference in select is just one redundant id which is no problem at all.


We demonstrated the essential problem with pagination of one/many-to-many joins and checked two solutions that are both based on one additional select. These solutions are not necessary if pagination is not required, obviously. There may be other ways to do it, there are even some extensions for specific JPA providers (like @BatchFetch for EclipseLink), but I’d prefer this for couple of reasons:

  • Full control over the selects – well, at least as full as you can get with Querydsl over JPQL. You can select into DTOs, you can enumerate the required columns – there is much more you can do around the basic idea. (Funny. Just minutes after writing this sentence I travelled home from work, reading the neat little book SQL Performance Explained – and coincidentally I was going through the part about Joins and Nested Loops. Markus Winand also provided a short analysis of ORMs in various languages, how they perform joins and what you can do about it. The final tip was pretty clear: get to know your ORM and take control of joins.)
  • It scales! There is no N. As long as your pagination is reasonably fast (but this is separate problem and you have to do what you have to do), second select goes after IDs, which should be pretty fast anyway.
  • It is unrelated to any specific features of any specific JPA provider.
  • And after all – you have full control over your contract. Don’t let the presentation layer trigger your selects. Even better – close that persistence context after leaving the service layer.

The only problem you should be aware of is the limit on the number of expressions in the IN clause. Check it out, read the documentation (both for your JPA provider and RDBMS), experiment, find out what the maximum page size will be – and (obviously) cover it in your tests (at least when it crashes the first time ;-)).
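If a page could ever exceed that limit, one defensive option is to split the ID list into chunks and fire one IN select per chunk, merging the results. A generic sketch (the chunk size 3 is just for the demo – use whatever your database tolerates):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InListChunker {
    /** Splits a list into consecutive chunks of at most maxChunkSize elements. */
    static <T> List<List<T>> chunks(List<T> list, int maxChunkSize) {
        List<List<T>> result = new ArrayList<>();
        for (int from = 0; from < list.size(); from += maxChunkSize) {
            int to = Math.min(from + maxChunkSize, list.size());
            result.add(list.subList(from, to));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> ids = Arrays.asList(1, 2, 3, 4, 5, 6, 7);
        // each chunk would become one ...where(ii.invoice.id.in(chunk))... select
        System.out.println(chunks(ids, 3));
    }
}
```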

The most important takeaways from this post should be: 1) EAGER is not a solution, 2) the result of a left join with “to-many” cardinality cannot be reasonably paginated in SQL. We’re not talking about specific solutions here – hell, clever DB guys can even provide you procedures delivering the right page plus the total count of all records – all in a single call to the DB. But that’s beyond basic SQL/JPA. :-)

Converting Java enums to values and back – with Java 8!

Yeah, I know, I know – it seems just like yesterday when I talked about this topic. I focused mostly on conversion to values and back in the context of JPA 2.1 converters. This time we’ll focus on the part that helped us with “reverse resolution” – that is, you have a value (for instance an int – but not the ordinal number, of course!) that represents a particular enum instance – and you want that enum instance. Mapping between these values and enum instances is a bijection, of course.

Our solution (here on GitHub) works fine, but there are two limitations:

  • ConvertedEnumResolver depends on the common interface ConvertedEnum our enums must implement, hence it is implicitly tied to the conversion framework.
  • If we need another representation (mapping) to a different set of values, we have to develop a new resolver class.

Ok, the first one may not be a big deal. The second one is a real thing though. I realized this limitation before – and a few weeks later, here we are, with an enum that has not only a DB representation (int) but also some String representation for different purposes. A new resolver class was developed… But was it really necessary?

Everything in the resolver class is the same – except the instance method that obtains the value from the enum. I’m not going for any reflection, forget it. But hey, we recently switched to Java 8 and I’ve heard it has these method references! If we can pass one to the resolver constructor… is it possible?

Those who know Java 8 know that the answer is indeed positive. Our new resolver will look like this (I renamed it since the last time):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/**
 * Helps reverse resolving of enums from any value back to enum instance.
 * Resolver uses provided function that obtains value from enum instance.
 *
 * @param <T> type of an enum
 * @param <Y> type of a value
 */
public final class ReverseEnumResolver<T extends Enum<T>, Y> {
    private final String classCanonicalName;
    private final Map<Y, T> valueMap = new HashMap<>();

    public ReverseEnumResolver(Class<T> enumClass, Function<T, Y> toValueFunction) {
        classCanonicalName = enumClass.getCanonicalName();
        for (T t : enumClass.getEnumConstants()) {
            valueMap.put(toValueFunction.apply(t), t);
        }
    }

    public T get(Y value) {
        T enumVal = valueMap.get(value);
        if (enumVal == null) {
            throw new IllegalArgumentException("No enum constant for '" + value + "' in " + classCanonicalName);
        }
        return enumVal;
    }
}

There is no conversion mentioned anymore. Just reverse resolving. So, our new enum does not have to be “Converted” anymore – unless you still need this interface for JPA conversion (which you may). Notice those method references:

/** Now we don't need to implement any interface for the sake of conversion/reverse resolving itself. */
public enum SomeEntityType {
    NORMAL(0, "norm"),
    SPECIAL(1, "spec");

    private final Integer dbValue;
    private final String strValue;

    private SomeEntityType(Integer dbValue, String strValue) {
        this.dbValue = dbValue;
        this.strValue = strValue;
    }

    public Integer getDbValue() {
        return dbValue;
    }

    public String getStrValue() {
        return strValue;
    }

    // static resolving for DB values
    public static final ReverseEnumResolver<SomeEntityType, Integer> dbValueResolver =
        new ReverseEnumResolver<>(SomeEntityType.class, SomeEntityType::getDbValue);

    public static SomeEntityType fromDbValue(Integer dbValue) {
        return dbValueResolver.get(dbValue);
    }

    // static resolving for String values
    public static final ReverseEnumResolver<SomeEntityType, String> strResolver =
        new ReverseEnumResolver<>(SomeEntityType.class, SomeEntityType::getStrValue);

    public static SomeEntityType fromStrValue(String value) {
        return strResolver.get(value);
    }
}

Plus we demonstrated that you can use two different values (dbValue, strValue). Of course it works:

System.out.println("SomeEntityType.fromStrValue(\"norm\") = " + SomeEntityType.fromStrValue("norm"));
System.out.println("SomeEntityType.fromDbValue(1) = " + SomeEntityType.fromDbValue(1));

This prints NORMAL and SPECIAL – as expected. Code for the classes can be found here, the whole demo is here and I’ll point you also directly to the README addition for this solution.

While Java libs get more and more bloated, these Java 8 syntax additions are definitely more than welcome. :-)

Making Windows 7/8 work again

OK, kinda questionable title, as Windows somehow works after install (and reinstall). But I have some additional professional needs. I don’t want to edit system properties in a single-line box, I want a proper command line – preferably bash – and while at it, I’d welcome many Unix text/file utils too. I want a proper console that resizes and allows more than silly block-mode copy. I want Alt+Tab working properly and not disappearing after a while. A better basic editor.

I wrote about some of this stuff before, but maybe someone will find it helpful concentrated in a single post.

I’m actually no tools freak – I don’t scan the landscape for new fancy tools every now and then, so there may be better alternatives (“better” can have many meanings though :-)). So let’s just go through the stuff I need to do with a fresh Windows installation. Some of it is related to 7 or 8, but most applies to both.

Many of my needs come from my profession. I’ll omit Java, Gradle/Maven, TortoiseSVN and others, but I’ll name those that help me be productive on the command line, for instance.

Alt Tab working again (Win 7)

Depends whether you like it or not. I want Alt+Tab to display the application switcher and keep it displayed. Because that’s how I switch faster. Aero Peek is a function that displays the application – which is cool when you’re in doubt which of those many same icons is the one you want – but when you keep hitting Alt+Tab it just does not display the switcher anymore. That sucks. Hit the Start button (or WinKey) and type “performance”. One of the filtered options should be “Adjust the appearance and performance of Windows”. Go for that. In the Visual Effects tab, there is a long list of checkboxes. You want to uncheck Enable Aero Peek. Never missed it actually.

Windows 8 adjustments

I’ll be swift here, as I have adjusted just a single installation so far. My biggest concern I remember – among those that are easy to fix – was the sticky corner (corner clip) when I used dual-head. When I move a window from one monitor to the other, I slide it along the top edge – and I stumble upon the edge of the screen. The fix is easy, although you have to edit the registry.

And then there was Narrator on Win+Enter, which I accidentally pressed here and there. It can’t be disabled in any normal way, so just delete the executable or get rid of it some other way.

Browsers, more browsers

So this one is obvious. What do you need Internet Explorer for? To download Firefox (Chrome, whatever), right? Then you may need some Web Developer extension, Firebug – and you’re ready to go.

BTW: If for nothing else, then for the funny bug that ruins my Google Drive document writing in Firefox every time… copy/paste works for a while and then suddenly it stops. Restart does not help, nothing seems to help. Chrome does not have this problem (or it didn’t manifest so far). Copy/paste on Google Drive is quite a popular problem, so it seems; I don’t care whose “fault” it is. Just one of those many funny things about current innovations. ;-)

Notepad2 – what Notepad was supposed to be

No tabs, no complications, just Notepad. But with a toolbar (should you want it) and some basic settings options. A better status line. That’s Notepad2. My biggest gripes with the default Notepad are:

  • it doesn’t handle Unix newlines properly;
  • no reasonable support for various file encodings or re-coding.

Notepad2 does have these, plus some basic syntax highlighting, newline conversion, regex search and more. Check the FAQs to find out Notepad2’s position on some additional features. If it had hex editing, I’d probably never need anything else. For anything serious I use IntelliJ IDEA anyway.

Do you need a bigger gun? Go for Notepad++, UltraEdit, or whatever else – the list is way too long. What I like about Notepad2 is that it is exactly what I’d expect from Notepad – a simple single-document editor that doesn’t lack the features you need way too often today, especially work with various encodings. Instead of reconfiguring apps to use it as a default editor, or associating file extensions, I simply replace both c:\windows\notepad.exe and c:\windows\system32\notepad.exe with this one. You need to claim ownership and go through all that hassle, but it is well worth it.

Or you may try this BAT to do it (not sure how it works with domains, but you can’t screw up anything except those notepad.exe binaries :-)). It uses notepad2.exe already copied into the Windows directory, and you have to run it as Administrator:

REM run as admin
cd \windows

takeown /f notepad.exe
icacls notepad.exe /grant %USERNAME%:F
copy notepad.exe notepad.exe.orig
copy notepad2.exe notepad.exe

takeown /f system32\notepad.exe
icacls system32\notepad.exe /grant %USERNAME%:F
copy system32\notepad.exe system32\notepad.exe.orig
copy notepad2.exe system32\notepad.exe

Total Commander

Or any other beast you like. I personally can’t get rid of these two-panel commanders. Bookmarks, tabs, many handy features (synchronize, multi-rename tool, built-in ZIP). I strip my Total Commander down – hide the F-buttons, toolbar and command line (right arrow displays it), switch Quick Search to Letter only – and I’m ready to go.


Git for Windows

Ok, so I excluded SVN but here I come with Git – what is that? Well, with Git for Windows you’ll get git bash – that is bash! And tons of useful Unix commands, though not all of course. So for the rest I go for…


GnuWin32

GnuWin32 contains tons of GNU tools ported to Windows. I don’t know the status of GnuWin64, but GnuWin32 works just fine on a 64-bit system too. All packages neatly put their binaries into a common bin directory, so a single PATH entry is enough.

Give some thought to your PATH order. I personally have git’s binaries before GnuWin32 because they are in some cases better integrated with its bash. But if you also use GnuPG, put it even before git.

Rapid Environment Editor

Talking about variables… I mentioned this one already some time ago. This is my hero for setting environment variables. Download it, use it, love it. It is handy to put all the *_HOME variables (JAVA_HOME, GRADLE_HOME, …) into System Variables and just reference them in PATH. And for bash usage it is handy to define HOME (for cd without arguments, if for nothing else) in User Variables and set it to %USERPROFILE% – use a variable of type Expandable String for that. Don’t forget to Save (Ctrl+S, as expected) and restart any program you expect to use the stuff.

Real console, please!

And here I’m a bit split. I used Console2 for a long time – works fine, except it doesn’t maximize in the Windows fashion. I also used ConEmu on another computer. Works fine, maximizes… but I didn’t work with it as hard as with Console2. So I don’t know. ConEmu seems to have more features and is more actively developed, so I’d check that one now. Both are far better than the default cmd, and in either you can – of course – switch to bash when you have it on the path. For ConEmu, download something that extracts 7z – like 7-Zip – unless you go for the MSI (whatever works better for you) or you install a packer plugin (WCX) into your Total Commander.

As for ConEmu settings (Win+Alt+P), you may want to make some changes:

  • First, in the Keys & Macro – Controls section, disable Change prompt text cursor position, so that copy/paste works both from left-mouse-button selection and from other programs too.
  • In the Keys & Macro – Mark/Copy section, change the value of Text selection to Always.
  • In Keys & Macro you may want to change Scroll buffer one page up/down from Ctrl+PgUp/Dn to Shift. It may be easier to find the functions if you sort the list by Description.

With these two I’m mostly ready.

Console with bash from Total Commander

BTW: I talked about Total Commander and Git Bash Here integration already – using Console2. For ConEmu, let’s do it by editing your INI files – they will probably be in %USERPROFILE%\AppData\Roaming\GHISLER\. Just define your custom command in usercmd.ini (adjust the paths of course – bash is on the PATH already in my case, and the section name and ConEmu location below are just examples):

[em_conemu_bash]
cmd="C:\Program Files\ConEmu\ConEmu64.exe"
param=-cmd bash --login -i

And in wincmd.ini just map a shortcut to this command in the [Shortcuts] section – I chose Ctrl+B:

C+B=em_conemu_bash
Happy bashing.

Bashing who?

Obviously I’m no big Windows lover. I even hate it as a server platform. I love ssh-ing to my boxes and spitting commands at them. This is probably possible with Windows too, but not many people actually use it that way. I’m still waiting to meet a good Windows administrator and enthusiast who will show me some meaning of Windows as a server. Combined with Java. Screw performance comparisons now, I’m just talking about my life as an administrator or developer.

But I’m using Windows in my daily job as my desktop and I like a lot of the stuff. I don’t turn it into Linux (ok, bash, GnuWin…, but still), I don’t use virtual desktops (I used to love them), or any advanced keyboard shortcut solutions. All these tools and adjustments are really rather elementary to me.

I like some of the Windows stuff (once I remove some other). So I don’t want to bash Windows. Installing and integrating all the things together is no doubt more demanding than the couple of apt-get commands Linux users are used to. But then, they often need to go through many configurations as well.

Cool stuff that works

Learn the WinKey shortcuts. It really pays off. You may stun your boss when you tell them: “just press Win+Shift+Right to get it on the projector” (switching an app between screens).

Pin your applications on the taskbar and start or switch to them using Win+number.

If you use a localized keyboard a lot, learn the AltGr shortcuts, at least the basic ones. When you need a single curly brace, it is much faster to press AltGr+B than Alt+Shift, {, and Alt+Shift again. Often these AltGr symbols are printed on the keyboard.

Windows 8?

Windows 8 is generally a step in the wrong direction for (not only) me, but it has some interesting additions. While you can’t Alt+F4 after closing the last application to get the Shutdown dialog anymore, you can use the new Win+X (Power User Menu) and then just tap U twice to do the same. And many, many more things through this menu. There is more for sure, but generally Windows 8 rather does its utmost to get in our way if we use the PC the old way (no touch display, etc.). Kinda shame.

I hope you found some of the tips and tools helpful. If you have your tricks and favourite tools, feel free to share them in comments! :-)

Releasing to Maven Central with Git on Windows

Originally I wanted to write about two problems. I think both were somehow related to interactions between Maven, Git and GPG on my Windows. But because I didn’t document them in time, things got lost. So this will be only half of the story.

Short version? Don’t mix various gpg binaries on your PATH. If you use Gpg4Win, prefer that one.

Switching to Git

When you start with Git, you’ll probably get lost often, even when you get the basic ideas and read those intro manuals. There was no big deal about SVN coming from CVS, but Git is a different beast altogether and definitely worth studying. For example, it may happen that your local master branch is ahead of remote master by hundreds of commits (the whole history really), while push is pushing just the new commits… And you have no idea what you did in the first place – and I bet there will be more surprises like this. (Little note: for me fetch didn’t work, but the second answer with rebase -p did. Don’t ask, I still have to study it. :-))

With just a few glitches like this, I was reasonably afraid of my first release to Maven Central with Java Simon in a Git repository. But it was much smoother in the end, especially from Maven’s side. Credit to Maven this time – I remember its release plugin throwing bugs at me before, and now it did its job with a VCS I don’t understand yet.

GPG vs Git’s GPG

The problem I managed to write down is related to the fact that I had Gpg4Win installed previously (because Maven Central requires signed artifacts) and Git brought another GPG with it too. So this is – obviously – kinda Windows specific.

First I needed to create new GPG keys for signing artifacts going to Maven Central, because for unknown reasons I had elegantly deleted the .gnupg directory some time before. So you create the key with:

gpg --gen-key

And then you go to upload it:

gpg --keyserver hkp://pool.sks-keyservers.net --send-keys 08XXXXXX

And there is this error:

gpg: sending key 081A513E to hkp server pool.sks-keyservers.net
gpg: system error while calling external program: No error
gpg: WARNING: unable to remove tempfile (out) `...\AppData\Local\Temp\gpg-515B3D\tempout.txt': No such file or directory
gpg: no handler for keyserver scheme `hkp'
gpg: keyserver send failed: keyserver error

The last line sounds like anything but the actual problem – your gpg did not find the gpgkeys_* executables in the same directory. You can eventually Google it and there is some StackOverflow question too, but the answers vary. This is the first strike of git-bash and my PATH settings. There is /bin/gpg, but you’d better use gpg from where it is installed:

/c/Program\ Files/GNU/GnuPG/gpg --keyserver hkp://pool.sks-keyservers.net \
--send-keys 08XXXXXX

Fixing the PATH

The cleanest solution is to prefer Gpg4Win, so just put C:\Program Files\GNU\GnuPG at the first spot of your PATH – or at least anywhere before Git’s bin directory (which is also high on my list, to prefer its commands over binaries from GnuWin32). If you have C:\Program Files\GNU\GnuPG\pub on the list too, bump it up as well. Or just check what Gpg4Win put onto your PATH and move that. Sorry for being a bit vague here – this happened to me with gpg.exe in the GnuPG directory first, but now I have only gpg2.exe there and gpg.exe in the pub sub-directory, which was added to the PATH. This seems to be a change since the 2.x versions of gpg or Gpg4Win.

You could – of course – just delete or rename /bin/gpg.exe. But after an unplanned Git reinstall it would be there again. Proper order in PATH is important anyway, although I too feel much better when there are no colliding binaries.

GPG speaking different languages?

Is your GPG talking to you in your native language and you want it in English? Simply set LC_MESSAGES=C (Rapid Environment Editor recommended, use User Variables). Alternatively, just delete the GnuPG/share/locale directory :-) – it seems not to hurt anything.


I believe my other problem was also related to gpg – it was another twist on the PATH problem when running Maven, which uses gpg to sign those artifacts with the keys you prepared. So the main advice still applies: organize your PATH properly, preferably in Windows System Variables, as Git’s bash will translate it into UNIX format just fine. Also, with a proper system-wide PATH things will run properly in both cmd and bash, unless they are shell specific. For instance, mvn clean release should work in both.

Git Bash strikes back? (edit 2014-10-05)

Fixing the path is nice, but not working – unless you run bash without git’s initialization. Try running just bash and check your path with echo $PATH. Then do the same with git bash (the difference is bash --login -i instead of just the plain bash command). This sources not only .bashrc from your home, but also your .profile and (here comes the blow) also c:\Program Files (x86)\Git\etc\profile (or similar, depending on your installation). This is the file that makes git bash out of just plain bash, sets the prompt, etc. But it also adds a couple of paths at the head of your original path – namely $HOME/bin, . (current dir), /usr/local/bin, /mingw/bin and /bin (three out of these 5 items don’t even exist in my case).

So yes, this actually renders my previous advice useless – but only for git bash! If you’re using cmd or plain bash, you don’t need to change anything and you still have all your needs on your path in the proper order.

And what can you do about it in git bash? Fixing that global profile seems kinda crude, so you may prefix the PATH in your .profile with the GnuPG path (again). Or delete that gpg.exe in the git installation (talking about crude measures :-)).

Converting Java enums to values and back

You know the stuff – you have some enum that logically wraps, for instance, integer constants. A typical case is a numeric DB column mapped to an enum. We all know that using EnumType.ORDINAL is one of the worse solutions. And EnumType.STRING is not really loved by our DB admins (if you have them) – even though it’s easy to read, for sure. :-) Normalization suffers, and renaming an enum constant is a bit of a problem too. So we want to map it to a number (for instance), which would be an Integer in the JPA entity.
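Why ORDINAL is one of those worse solutions deserves a two-line illustration – the stored number silently changes meaning when someone inserts or reorders a constant (the enum below is made up for the demo):

```java
public class OrdinalTrap {
    // imagine version 1 was: NORMAL, SPECIAL - the DB already holds 0 and 1
    enum SomeEntityType { ARCHIVED, NORMAL, SPECIAL } // version 2 inserted ARCHIVED first

    public static void main(String[] args) {
        // a row stored as ordinal 0 under version 1 meant NORMAL...
        System.out.println(SomeEntityType.values()[0]); // ...now it resolves to ARCHIVED
    }
}
```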

This mapping is not necessarily related only to JPA; there are many cases when you want to convert an enum to something and back. For instance, with generated web services I often map a generated enum to my “business” one, and if it suits me (as it is not always clean) I embed this mapping into my enum (not the generated one, of course :-)). So I map an enum to another related enum and back. But we will work with a JPA example – JPA 2.1 with AttributeConverter<X,Y>, to be precise. You can do something similar with Hibernate converters too, I’m sure.

This blog post is accompanied by this GitHub project containing all the variations. Copy, paste and use whatever however you like.

Naive approach

Before going on, know that you can check all the examples in this GitHub repository. The first case can be found in this location. We have our simple enum:

public enum SomeEntityType {
	NORMAL,
	SPECIAL
}

And we have our entity mapping a numeric column to our enum:

@Entity
public class SomeEntity {
	@Id private Integer id;

	@Convert(converter = SomeEntityTypeConverter.class)
	@Column(name = "type")
	private SomeEntityType type;
}

This entity will be the same throughout all of our solutions, so we will not repeat it. The important line is the one with the @Convert annotation that finally allows us to do the conversion (JPA 2.1, part of Java EE 7). All we have to do now is to implement that converter:

import javax.persistence.AttributeConverter;

/**
 * This is coupled too much to enum and you always have to change both classes in tandem.
 * That's a big STOP (and think) sign in any case.
 */
public class SomeEntityTypeConverter implements AttributeConverter<SomeEntityType, Integer> {
	@Override
	public Integer convertToDatabaseColumn(SomeEntityType someEntityType) {
		switch (someEntityType) {
			case NORMAL: return 0;
			case SPECIAL: return 1;
			default:
				// do we need this? if for nothing else it catches forgotten case when enum is modified
				throw new IllegalArgumentException("Invalid value " + someEntityType);
				// actually the value is valid, just this externalized switch sucks of course
		}
	}

	@Override
	public SomeEntityType convertToEntityAttribute(Integer dbValue) {
		switch (dbValue) {
			case 0: return SomeEntityType.NORMAL;
			case 1: return SomeEntityType.SPECIAL;
		}
		// now what? probably exception would be better just to warn programmer
		// but if it happens in production, it doesn't really matter if it's here or NPE later
		return null;
	}
}

Ok, so I revealed the problems in the comments. There may be just one reason to do it this way: if you need the enum independent from that dbValue mapping – but I really don’t know why. On the contrary, you should go for a cohesive solution. And in this case you will be just fine with encapsulation. Put the stuff that changes into one place – the enum.

Encapsulated conversion

You need to implement AttributeConverter, because it is the glue between JPA and your class. But there is no reason to leave the actual mapping in this infrastructure class. So let’s enhance the enum to keep the converter simple. The converter I want to see looks like this:

public class SomeEntityTypeConverter implements AttributeConverter<SomeEntityType, Integer> {
	@Override
	public Integer convertToDatabaseColumn(SomeEntityType someEntityType) {
		return someEntityType.getDbValue();
	}

	@Override
	public SomeEntityType convertToEntityAttribute(Integer dbValue) {
		// this can still return null unless it throws IllegalArgumentException
		// which would be in line with enums static valueOf method
		return SomeEntityType.fromDbValue(dbValue);
	}
}

Much better – we don’t have to think about this class at all when we add values to the enum. The complexity is now here, but it’s all tight in a single class:

import java.util.HashMap;
import java.util.Map;

public enum SomeEntityType {
	NORMAL(0),
	SPECIAL(1);

	// first part is easy and most programmers get here (Effective Java 101 after all)
	// fields are not final implicitly, but we better make them
	private final Integer dbValue;

	private SomeEntityType(Integer dbValue) {
		this.dbValue = dbValue;
	}

	public Integer getDbValue() {
		return dbValue;
	}

	// static reverse resolving:
	public static final Map<Integer, SomeEntityType> dbValues = new HashMap<>();

	static {
		for (SomeEntityType value : values()) {
			dbValues.put(value.dbValue, value);
		}
	}

	public static SomeEntityType fromDbValue(Integer dbValue) {
		// this returns null for invalid value, check for null and throw exception if you need it
		return dbValues.get(dbValue);
	}
}

I also saw some half-solutions without the static reverse resolving, but I hope we all agree it belongs in the enum. If it’s a two-value enum, you may start with a switch in fromDbValue, but that’s just another thing to think about – and one static map will not kill anyone. You can find this solution here on GitHub.
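The switch variant mentioned above would look like this – fine for two values, but every new constant means touching fromDbValue again (self-contained demo version):

```java
public class SwitchResolveDemo {
    enum SomeEntityType {
        NORMAL(0), SPECIAL(1);

        private final Integer dbValue;
        SomeEntityType(Integer dbValue) { this.dbValue = dbValue; }

        static SomeEntityType fromDbValue(Integer dbValue) {
            switch (dbValue) { // unboxes; NPE for null input, just like the map-less naive converter
                case 0: return NORMAL;
                case 1: return SPECIAL;
                default: return null; // or throw IllegalArgumentException
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(SomeEntityType.fromDbValue(0)); // NORMAL
    }
}
```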

Now this works, so let’s imagine we need this for many enums. Can we find some common ground here? I think we can.

Conversion microframework

Let’s say we want to have order in these things. We will require a method named toDbValue, etc. So our enums will implement the interface ConvertedEnum:

/**
 * Declares this enum as converted into database, column value of type Y.
 * In addition to implementing {@link #toDbValue()} converted enum should also
 * provide static method for reverse conversion, for instance {@code X fromDbValue(Y)}.
 * This one should throw {@link IllegalArgumentException} just as {@link Enum#valueOf(Class, String)} does.
 * Check {@link EnumAttributeConverter} for helper methods that can be used during reverse conversion.
 */
public interface ConvertedEnum<Y> {
	Y toDbValue();
}

It’s parametrized, hence flexible. The Javadoc says it all – we can’t enforce the static stuff, because that’s how Java works. While I suggest that the reverse fromDbValue should throw IllegalArgumentException, I’ll leave it returning null for now – just know that I’m aware of this. :-) I’d personally go strictly for IAE, but I’ll show how we can use null in the conversion.

What are the changes in the enum? Minimal, really – just add implements ConvertedEnum<Integer> and you can add @Override over the toDbValue method. Not worth listing, as it’s all here on GitHub anyway.

Now, to utilize all this, we need a base class for AttributeConverter – here it goes:

/**
 * Base implementation for converting enums stored in DB.
 * Enums must implement {@link ConvertedEnum}.
 */
public abstract class EnumAttributeConverter<X extends ConvertedEnum<Y>, Y>
	implements AttributeConverter<X, Y>
{
	@Override
	public final Y convertToDatabaseColumn(X x) {
		return x.toDbValue();
	}

	// you can end here, or you can add handy stuff if necessary
	public X notNull(X x) {
		return notNull(x, null);
	}

	public X notNull(X x, Y dbValue) {
		if (x == null) {
			throw new IllegalArgumentException("No enum constant" + (dbValue != null ? (" for DB value " + dbValue) : ""));
		}
		return x;
	}

	public X withDefault(X x, X defaultValue) {
		return x != null ? x : defaultValue;
	}
}

As you can see, only convertToDatabaseColumn is mandatory here. Here is our concrete converter:

public class SomeEntityTypeConverter extends EnumAttributeConverter<SomeEntityType, Integer> {
	@Override
	public SomeEntityType convertToEntityAttribute(Integer integer) {
		return notNull(SomeEntityType.fromDbValue(integer));
	}
}

What a beauty suddenly! I also used notNull to enforce IllegalArgumentException (unless you do it at the enum level) – or I could use withDefault to fall back to some default value, if it makes sense of course (better when it doesn’t :-)).
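To see the two policies side by side, here is a standalone sketch of what notNull and withDefault do, without the JPA plumbing (the enum is the demo one from above, inlined so the snippet compiles on its own):

```java
public class FallbackDemo {
    enum SomeEntityType { NORMAL, SPECIAL }

    // strict policy: unresolved value is a programming/data error
    static SomeEntityType notNull(SomeEntityType x, Integer dbValue) {
        if (x == null) {
            throw new IllegalArgumentException("No enum constant" + (dbValue != null ? " for DB value " + dbValue : ""));
        }
        return x;
    }

    // lenient policy: unresolved value falls back to a chosen default
    static SomeEntityType withDefault(SomeEntityType x, SomeEntityType defaultValue) {
        return x != null ? x : defaultValue;
    }

    public static void main(String[] args) {
        System.out.println(withDefault(null, SomeEntityType.NORMAL)); // falls back to NORMAL
        try {
            notNull(null, 42);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // No enum constant for DB value 42
        }
    }
}
```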


And now the last push. What is repeating? And what can we do about it?

  • It would be cool to have just a single converter class – but this is impossible, because there is no way to instruct the converter about its types. Especially the method convertToEntityAttribute is immune to any approach, because there is nothing at runtime that can tell you what the expected enum type would be. No reflection or anything helps here, it seems.
  • So we have to have separate AttributeConverter classes – but can we pull convertToEntityAttribute into our EnumAttributeConverter? Not easily, really, but we’ll try something.
  • How about the static resolving? Can we get rid of that static initialization block? It is static, so it seems problematic – but indeed, we can do something about it.

Let’s try to hack our converters first. We need to get the type information into the instance of the superclass. It can be a protected field like this:

public abstract class EnumAttributeConverter<X extends ConvertedEnum<Y>, Y>
	implements AttributeConverter<X, Y>
{
	protected Class<X> enumClass;
	// ...
}

And a subclass would initialize it this way:

public class SomeEntityTypeConverter extends EnumAttributeConverter<SomeEntityType, Integer> {
	{
		enumClass = SomeEntityType.class;
	}
}

But this is not enforced in any way! Rather, use an abstract method that must be implemented. In the abstract converter:

	protected abstract Class<X> enumClass();

And in concrete class:

public class SomeEntityTypeConverter extends EnumAttributeConverter<SomeEntityType, Integer> {
	@Override
	protected Class<SomeEntityType> enumClass() {
		return SomeEntityType.class;
	}
}

But we’re back to 3 lines of code (excluding @Override) and we didn’t get to the ugly unified convertToEntityAttribute:

	@SuppressWarnings("unchecked")
	public X convertToEntityAttribute(Y dbValue) {
		try {
			Method method = enumClass().getMethod("fromDbValue", dbValue.getClass());
			return (X) method.invoke(null, dbValue);
		} catch (IllegalAccessException | InvocationTargetException | NoSuchMethodException e) {
			throw new IllegalArgumentException("...this really doesn't make sense", e);
		}
	}

Maybe I missed something along the path, but this doesn’t sound like a good solution. It would be if it led to a unified converter class, but it does not. There may be one more problem with the hunt for a unified solution. While the concrete implementation contains methods that have concrete parameter and return types, the unified abstract implementation doesn’t. They can use the right types during runtime, but the method wouldn’t tell you if you used reflection. Imagine JPA checking this. Right now I know that the unified public final Y convertToDatabaseColumn(X x) {…} works with EclipseLink, but maybe we’re asking for problems. Let’s check it for real:

// throws NoSuchMethodException
// Method method = converter.getClass().getMethod("convertToDatabaseColumn", Integer.class);
Method method = converter.getClass().getMethod("convertToDatabaseColumn", Object.class);
System.out.println("method = " + method);

// prints:
method = public java.lang.Object com.github.virgo47.enumconv._4evenmore.EnumAttributeConverter.convertToDatabaseColumn(java.lang.Object)

Great… so if something strictly matches the method types with the column/enum types, we may have a misunderstanding. Sometimes too smart may be just that – too smart. Check this overboard solution here on GitHub.

Simplified static resolving

Anyway, let’s look into that enum’s static resolving. This is actually really useful. Without further ado, this is how the enum part may look:

	// static resolving:
	public static final ConvertedEnumResolver<SomeEntityType, Integer> resolver =
		new ConvertedEnumResolver<>(SomeEntityType.class);

	public static SomeEntityType fromDbValue(Integer dbValue) {
		return resolver.get(dbValue);
	}

So we got rid of the static initializer. But the code must live somewhere:

import java.util.HashMap;
import java.util.Map;

/**
 * Helps reverse resolving of {@link ConvertedEnum} from a DB value back to enum instance.
 * Enums that can be resolved this way must have unified interface in order to obtain
 * {@link ConvertedEnum#toDbValue()}.
 *
 * @param <T> type of an enum
 * @param <Y> type of DB value
 */
public class ConvertedEnumResolver<T extends ConvertedEnum<Y>, Y> {

	private final String classCanonicalName;
	private final Map<Y, T> dbValues = new HashMap<>();

	public ConvertedEnumResolver(Class<T> enumClass) {
		classCanonicalName = enumClass.getCanonicalName();
		for (T t : enumClass.getEnumConstants()) {
			dbValues.put(t.toDbValue(), t);
		}
	}

	public T get(Y dbValue) {
		T enumValue = dbValues.get(dbValue);
		if (enumValue == null) {
			throw new IllegalArgumentException("No enum constant for dbValue " + dbValue + " in " + classCanonicalName);
		}
		return enumValue;
	}
}

And this I actually really like. Here I went for strict checking, throwing an exception. Without it, or without needing/wanting the type name, it could be even shorter. But you write this once and it saves lines in each converted enum. Check the complete sources here on GitHub.

So there are three players in this “framework”:

  • ConvertedEnum – interface for your enums that are converted in this way.
  • ConvertedEnumResolver – wraps the reverse mapping and saves you most of the static lines in each converted enum.
  • EnumAttributeConverter – maybe the most questionable part here. It takes care of one direction of the conversion; just be aware of potential problems if something introspects the method types.
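Put together, the three players can be exercised end to end. The following is a self-contained sketch – OrderState and its values are made up for the demo, the resolver is condensed from the class shown above, and the JPA converter is omitted so it runs without javax.persistence:

```java
import java.util.HashMap;
import java.util.Map;

// Player 1: the interface for enums converted this way.
interface ConvertedEnum<Y> {
	Y toDbValue();
}

// Player 2: the resolver, condensed from the full class above.
class ConvertedEnumResolver<T extends ConvertedEnum<Y>, Y> {

	private final String classCanonicalName;
	private final Map<Y, T> dbValues = new HashMap<>();

	ConvertedEnumResolver(Class<T> enumClass) {
		classCanonicalName = enumClass.getCanonicalName();
		for (T t : enumClass.getEnumConstants()) {
			dbValues.put(t.toDbValue(), t);
		}
	}

	T get(Y dbValue) {
		T enumValue = dbValues.get(dbValue);
		if (enumValue == null) {
			throw new IllegalArgumentException(
				"No enum constant for dbValue " + dbValue + " in " + classCanonicalName);
		}
		return enumValue;
	}
}

// A hypothetical converted enum using both pieces.
enum OrderState implements ConvertedEnum<Integer> {
	NEW(1), SHIPPED(2);

	// static resolving - the only "boilerplate" left in each converted enum
	public static final ConvertedEnumResolver<OrderState, Integer> resolver =
		new ConvertedEnumResolver<>(OrderState.class);

	private final Integer dbValue;

	OrderState(Integer dbValue) {
		this.dbValue = dbValue;
	}

	@Override
	public Integer toDbValue() {
		return dbValue;
	}

	public static OrderState fromDbValue(Integer dbValue) {
		return resolver.get(dbValue);
	}
}
```

Note the initialization order works out: enum constants are created before the static resolver field, so getEnumConstants() inside the resolver constructor sees them all.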

Alternative conversions with entities

While not exactly on the topic of enum-to-value mapping (and back), we demonstrated all this in the context of JPA 2.1. So I’d like to at least mention alternative solutions for older JPA versions.

  • You can always use vendor specific extensions. Hibernate has its Custom types and I’m sure EclipseLink doesn’t fall behind.
  • But there is also the possibility to use mapping annotations on properties. That is, put your @Id on the getter, or specify AccessType.PROPERTY. This allows you to convert anything to anything in the getter/setter. I even used it to back Dates by long (or Long if nullable). This way various tools (like Sonar) didn’t complain about the silly mutable Date breaking encapsulation when used directly in get/set methods, because it was not stored directly anymore. Hibernate used get/set for the Date, and I had a @Transient get/set for the millis long available. I actually liked comparing millis more than the before/after methods on Date, but that’s another story. The same can be used for mapping enums – just have a JPA-compatible type mapped for JPA and a @Transient get/set with the enum type. Most of the stuff about enum encapsulation and static resolving still applies.
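The property-access trick from the last bullet can be sketched like this. Mood, Person and all member names are made up, and the JPA annotations are kept in comments so the sketch compiles without javax.persistence on the classpath:

```java
// Hypothetical enum with its own DB-value mapping, pre-JPA-2.1 style.
enum Mood {
	HAPPY(10), GRUMPY(20);

	private final int dbValue;

	Mood(int dbValue) {
		this.dbValue = dbValue;
	}

	int toDbValue() {
		return dbValue;
	}

	static Mood fromDbValue(int dbValue) {
		for (Mood mood : values()) {
			if (mood.dbValue == dbValue) {
				return mood;
			}
		}
		throw new IllegalArgumentException("No Mood for dbValue " + dbValue);
	}
}

class Person {

	private Integer mood;

	// With AccessType.PROPERTY this pair is what the provider maps,
	// e.g. annotated with @Column(name = "mood").
	Integer getMoodDbValue() {
		return mood;
	}

	void setMoodDbValue(Integer mood) {
		this.mood = mood;
	}

	// @Transient - the enum-typed pair is for application code only.
	Mood getMood() {
		return mood != null ? Mood.fromDbValue(mood) : null;
	}

	void setMood(Mood value) {
		this.mood = value != null ? value.toDbValue() : null;
	}
}
```

Application code only ever touches the enum-typed pair; the provider only ever touches the Integer-typed pair, so no converter is needed at all.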

I hope you liked our tour around enums. If you knew or used this before, good for you. I’m surprised how many programmers don’t try to design and refactor these little things where you don’t even need to think in UML.

And if you have better solutions, share them in comments, please. :-)

