Live architecture with Java, Spring, JPA and OSIV

This post is about an architecture where live (attached) JPA objects are used in the presentation layer. You can expect the OSIV (Open Session In View) pattern to be mentioned, though I’ll focus more on how we made it work well enough for us – safely and without LIEs (LazyInitializationException). It is just my story and my experiences, no big discovery here. 🙂

I can’t tell if there is any official name for it, but we call it “Live architecture” because live JPA entities are available in the presentation layer. While we use it mostly with Spring/Wicket, it is the same with any other presentation framework – and it probably applies to JavaEE without Spring too (if you use OSIV).

DTO vs Live architecture

In our company there are “DTO guys” and “live architecture guys”. We all know DTOs (Data Transfer Objects) and how to work with them, more or less. Their rise to fame came with the need for coarse-grained calls to remote EJBs, and they became a prominent “pattern” back then. Even with local calls people use them to strictly divide layers. I used them on some projects, not on others, and then again with GWT/Seam applications (I never liked the idea of JPA entities being preprocessed for me and dragged all the way to the GWT application).

Every time I start talking about a “live architecture” that drags entity objects into the view, there are architects who just say “that is no architecture at all”. And I say “whatever…” I remember projects where we “broke” a clean architecture (e.g. “everything must go through this facade!”) and the result was less and cleaner code, easier to understand, even better performance. Was it universal? Hell no, it wouldn’t scale in most cases, but in that particular case scaling was not (and after all these years still is not) necessary.

My recent story with the live architecture is based on a project where it was settled that it would be used instead of DTOs. You have to translate DTOs somehow from business objects and back. You can generate the translation, you can automate it, use reflection – or do it manually. Any of these ways adds something that is not necessary in all cases. Our views were mostly based on JPA entities and it was just a shame to translate them to DTOs for the sake of the transformation itself. I’m not saying DTOs are bad – we do use them for more complicated views, mostly for lists showing joined tables. You can of course build a view and design an entity over it – and we do that too…

There is no fundamentalism in this – we use entities as much as we can. I strongly believe that in normal-scope projects people often overdo it with “clean architecture” and don’t care about “clean code” as much. And I strongly believe that cleaner code itself matters much more than that cloud castle of an architecture (without underestimating architecture itself!). After all, our projects are quite simple multi-tier applications with a bit of clustering. No grid, no hi-perf, no America. So we use entities, because they are placed under the presentation layer (good dependency direction) and they only carry data. And when this is not enough, we use DTOs too. Simple.

Business logic objects and dumb entities

You may have different rules for your live architecture (projects using OSIV) – and that is fine. Ours start with: don’t use entities for anything else – no business logic; maybe some simple computed properties, that is alright. You may call this an Anemic Domain Model – but I don’t care. Logic lives in separate objects that use one or more entities. It is not exactly DCI, but it is not very far from it. For many other reasons (unrelated to the live architecture) I prefer having business logic objects that perform a specific scenario – in the best case with a 1-to-1 mapping to a use case from the analysis document.

Let’s talk about this picture for a while (presentation layer on top, service layer below it, the “cloud” of business logic objects, then DAOs/EntityManager and the database underneath):

The presentation layer can be anything – component (Wicket) or controller (Web MVC) driven. It calls the service layer (typically a Spring bean or an EJB), which in turn uses that “cloud” of various business logic objects. Very often I prefer the create/use/throw-away pattern. In the constructor the object gets its context and then it does something – preferably in one method call, but it may be a sequence too, although that is a more fragile approach. The important thing is that the business object can store its state during the business logic execution – it is thread safe if it is created locally for one service call (that’s why I don’t use singletons here). Sometimes state is not necessary, but in more complex cases it is. And I like fields much more than dragging a list of parameters between private methods.
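To make the idea a bit more tangible, here is a minimal sketch of the create/use/throw-away pattern as I described it – the names (InvoiceService, InvoiceDao, ApproveInvoice, Invoice) are purely illustrative, not classes from our project:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Service bean (singleton) called by the presentation layer.
@Service
public class InvoiceService {

    @Autowired
    private InvoiceDao invoiceDao; // hypothetical DAO

    @Transactional
    public void approveInvoice(Long invoiceId) {
        // the business object is created per call, used and thrown away
        new ApproveInvoice(invoiceDao, invoiceId).execute();
    }
}

// Business logic object - roughly 1-to-1 with a use case from the analysis.
class ApproveInvoice {

    private final InvoiceDao invoiceDao;
    private final Long invoiceId;
    private Invoice invoice; // state kept in fields instead of parameter lists

    ApproveInvoice(InvoiceDao invoiceDao, Long invoiceId) {
        // the constructor gives the object its context
        this.invoiceDao = invoiceDao;
        this.invoiceId = invoiceId;
    }

    void execute() {
        // preferably one method call does the whole scenario
        invoice = invoiceDao.find(invoiceId);
        invoice.setApproved(true);
        // ...further steps can share state via the fields above
    }
}

Because the object is created locally for a single service call, it is thread safe without being stateless.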

This business logic uses DAOs (or the EntityManager directly) to work with the DB – and of course works with entities in the process. Because entities are dumb (a DCI idea, but not only theirs), they are perfect DTOs (which are also dumb). Of course there are some concerns about entities used as DTOs and you can find many questions about this issue (and not only in the Java world). Entities are POJOs – in theory – but you may drag some proxy object up there into the presentation layer. There is a lot of magic in entities, and you sometimes don’t know what they are (my class or some modified class already?) – but under most circumstances you don’t really have to care that much.
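For completeness, this is roughly what “dumb entity” means here – a sketch only, with Invoice and Customer as illustrative names: mappings and data, at most a trivial computed property, no business logic.

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
public class Invoice implements Serializable {

    @Id
    @GeneratedValue
    private Long id;

    private String number;

    private boolean approved;

    // lazy relation - the usual source of LIEs once the session is gone
    @ManyToOne(fetch = FetchType.LAZY)
    private Customer customer; // Customer is another assumed entity

    public Long getId() { return id; }
    public String getNumber() { return number; }
    public void setNumber(String number) { this.number = number; }
    public boolean isApproved() { return approved; }
    public void setApproved(boolean approved) { this.approved = approved; }
    public Customer getCustomer() { return customer; }
    public void setCustomer(Customer customer) { this.customer = customer; }

    // at most a simple computed property, no use-case logic here
    public String getDisplayName() {
        return number + " (" + id + ")";
    }
}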

Best practices

Now let’s talk about our best practices. Presentation layer code knows entities, but doesn’t know the ORM! This is probably the most important thing. Of course the dependency on JPA is implied somehow. Of course the client programmer has to know the data model and how to traverse the objects he wants to display. But he absolutely can’t use the EntityManager. Our first “live architecture” project didn’t have a clear separation of these roles and some LIEs were fixed like “you know, here in this page, before you call the service… put evict on this object there”. I wasn’t there when this project started, so I just went like “what?!?!” And I forbade this for the next project I could affect.

The next rule is more about communication than technology – the presentation programmer always has to know what he gets from the service call. Otherwise he risks that LIE again. But LIEs in the presentation are easy. They are easy to fix in the model, in the service/business code, or in the presentation code (which is most of the cases). You always have to share some model between business logic and presentation (and developers!) – and we share the data model itself. If you don’t plan to change your layers, this is perfectly acceptable. I’ve actually never seen a change of technology that would justify a different model introduced on the facade level. So why do it if you ain’t gonna need it? (Of course, you may need it – and as an architect you are there to say so.)

Getting data is easy (talking about live architecture problems only :-)). You may need separate methods for every view – especially if the selects are not generic enough. We have “filter beans” with a single superclass and we use these beans with a few service methods (getSingleResult, getList, etc.) that are rather generic in nature. DAO-like even. It works for us; filter beans are the common ground for the client and server programmers to communicate, and they are part of the service layer API. We can have a common FilterBean interface because we use our custom filter framework behind it. But you can use filter beans without a common ancestor and have many service methods to obtain data. This is probably even cleaner.
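I can’t show our filter framework here, but a stripped-down sketch of the shape of such an API could look like this – FilterBean, InvoiceFilter and QueryService are names I made up for the illustration:

import java.util.List;

// Common ancestor of all filter beans - part of the service layer API.
interface FilterBean<T> {
    Class<T> getEntityClass();
}

// One filter bean per view/listing; a shared vocabulary between the
// client programmer and the server programmer.
class InvoiceFilter implements FilterBean<Invoice> {

    private String numberLike;  // "number contains" criterion
    private Boolean approved;   // null means "do not filter on this"

    public Class<Invoice> getEntityClass() { return Invoice.class; }

    public String getNumberLike() { return numberLike; }
    public void setNumberLike(String numberLike) { this.numberLike = numberLike; }
    public Boolean getApproved() { return approved; }
    public void setApproved(Boolean approved) { this.approved = approved; }
}

// A few generic, DAO-like read methods on the service layer.
interface QueryService {
    <T> T getSingleResult(FilterBean<T> filter);
    <T> List<T> getList(FilterBean<T> filter);
}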

Transactions, saves, updates

Originally we used a DAO-like save on the service layer too. We also didn’t have clear strategies for when objects were alive and when not at the moment the presentation layer called the service layer. If you had a read and a write call in one HTTP request, the entities were alive if the write used the result of the read. If you had just an update, they were not. “Objects may come in alive or not, let’s not assume they are alive,” was our first strategy, though I didn’t feel very good about the “or” used in that sentence. Never use contradictions in your assumptions. With a big help from our tests we managed to clean this mess up.

Our tests were TestNG based; they were not unit tests – mostly we tested the service layer, playing the role of the presentation layer. It was funny how often the test passed and the user test (using a browser) failed, but also vice versa! Sometimes the test didn’t prepare the same environment – and we started to realize that the service layer must assume less and be more strict. The biggest problem was that the presentation layer could change an entity A that was read in the request (hence alive) and then call a service saving an entity B. The service layer had no chance to know about A being saved in the same transaction. This led to one very simple idea – we always clear the session before calling transactional service methods. I forgot to say that we use transactions on the service layer, so you can have multiple transactions in one HTTP request/persistence session.
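To make the hazard concrete, here is an illustrative sketch – Invoice, InvoiceService and its getById/save methods are assumptions for the example, not our real classes:

// With OSIV both service calls below share one persistence session.
public class EditInvoicePanel {

    private final InvoiceService invoiceService; // assumed service facade

    EditInvoicePanel(InvoiceService invoiceService) {
        this.invoiceService = invoiceService;
    }

    void onSubmit(Invoice b, Long otherInvoiceId) {
        Invoice a = invoiceService.getById(otherInvoiceId); // A is read - and stays attached
        a.setApproved(true); // changed in the presentation layer, never passed to any save

        invoiceService.save(b); // transactional; on flush/commit the dirty, still-attached A
                                // gets written as well - unless the session is cleared first
    }
}

Clearing the session before the transactional call detaches A, so only what the service was explicitly asked to save can reach the database.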

Stepping back for a bit – the client programmer knows that when he calls a service, his objects are alive. He can call multiple reads – and he knows that everything is still alive and he can base the next read on an attribute that is loaded lazily. In our case there is only one write/transaction per HTTP request – and it’s mostly the last call as well. If I wanted to make our policies even more precise I could say “always clear the session – for every service call”. This would mean less comfort for the client programmer. Or you can go for “dead” entities instead of live ones (see Other possibilities further down).

Now the business programmer knows that any object that enters a transactional service is detached and he can choose what to do with it. Do you just need to save the changes? Merge it (or call a JPQL update, or whatever). Do you need to compare it to its original state? Read the object by its id and do what you need. Do you want to traverse its attributes? Well, better reload it first to make it attached again. We enforce this with a custom aspect hooked on the existing Spring @Transactional annotation.
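I can’t show our real aspect, but a minimal sketch of the idea could look roughly like this – the class name and the @Order value are assumptions, only the @Transactional hook comes from the description above:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;

@Aspect
@Component
@Order(0) // run before the transaction interceptor (which defaults to lowest precedence)
public class ClearSessionBeforeTransactionAspect {

    @PersistenceContext
    private EntityManager entityManager; // shared proxy delegating to the request-bound EntityManager

    @Before("@annotation(org.springframework.transaction.annotation.Transactional)")
    public void clearPersistenceContext() {
        // With OSIV the persistence session spans the whole HTTP request;
        // clearing it here detaches everything read so far, so every entity
        // entering a transactional service method is guaranteed to be detached.
        entityManager.clear();
    }
}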

This assumption would be very useful for read/list methods too. As it is now, the developer never knows if he has to reload or not. But read methods are not that complex, and reloading the parameter entity should never hurt either. Also – read/list methods are not transactional, so whatever he does, he can’t mess with the persisted data. So this is our compromise between the client programmer using live objects and the service layer being secured enough. There are far fewer LIEs in our back-end code (which are harder to catch than those on the presentation layer) – actually I haven’t seen one for a long time – and there is no chance to tamper with the data accidentally.

As a side note: many of our problems were also caused by our presentation architecture – we load the data, display it, then forget the content to keep the page/session small and remember just the IDs of the objects. When an edit action comes, we reload the object from the service by its ID, modify it and then call the transactional write service method. To make this more convenient we have a custom ReloadableModel class for our Wicket pages, so before the model (entity object) is updated, it is always reloaded from the service too (this is not a big performance hit, it often comes from the 2nd level cache anyway). This may not be the luckiest solution, but it was one of those we had to stick with for the time being. You may or may not run into these kinds of problems. In any case, making your contracts and policies more strict and clean is always a good thing.
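Our ReloadableModel is project-specific, but a sketch of the same idea on top of Wicket’s LoadableDetachableModel might look like this – EntityService and its loadById method are assumptions:

import org.apache.wicket.model.LoadableDetachableModel;

// Only the id stays in the page; the entity is reloaded whenever the model
// is attached again, e.g. right before an update.
public class ReloadableEntityModel<T> extends LoadableDetachableModel<T> {

    private final Class<T> entityClass;
    private final Long id;
    private final EntityService service; // in real Wicket code this reference must be
                                         // serializable, e.g. a Spring proxy injected
                                         // via @SpringBean

    public ReloadableEntityModel(Class<T> entityClass, Long id, EntityService service) {
        this.entityClass = entityClass;
        this.id = id;
        this.service = service;
    }

    @Override
    protected T load() {
        // called on first access after a detach, so the page never keeps
        // a stale or oversized object graph around
        return service.loadById(entityClass, id);
    }
}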

Other possibilities

Live vs DTO is not the only choice. You can also use entities, yet always close the session when the service call ends. This gives you the same model and less convenient presentation changes, but it definitely is cleaner from the service layer point of view. You can make stricter contracts, performance is all down there and not ruined by lazy loads in the presentation layer, etc. I know this; we use it for other projects too. But I also know that people use OSIV a lot and that is why I wanted to wrap up our experiences with it. You can come up with other policies too – for instance one read or write per request and nothing more. Do it all in one proper service call, don’t call many selects for every single combo-box model, for instance. I actually agree with these approaches. But sometimes we don’t have the luxury of choice. 🙂

In any case, do your best to clean up the contracts as much as possible, avoid contradictory ORs in your assumptions and – I didn’t focus on this point much in this post – test your service/business layer. A contract and a policy are one thing, but you have to enforce them, otherwise they are not contracts, just promises. That is your safety net not only from the architectural standpoint, but also from the functional one. But that is a completely different story.
