Last three years with software

A long time ago I decided to blog about my technology struggles – mostly with software but also with consumer devices. I don’t know why it happened on Christmas Eve, though. Two years later I repeated the format. And here we are, three years after that. So the next post can be expected in four years, I guess. Actually, I split this one into two – one for software, mostly based on professional experience, and the other for consumer technology.

Without further ado, let’s dive into this… well, calling it a dive is generous – it will obviously be pretty shallow. Let’s skim the stuff I worked with: stuff I like and some I don’t.

Java case – Java 8 (verdict: 5/5)

This time I’m adding my personal rating right into the header – a little change from the previous posts, where it was at the end.

I love Java 8. Sure, it’s not Scala or anything even more progressive, but in the context of Java’s philosophy it was a huge leap, and lambdas especially really changed my life. BTW: Check out this interesting talk by Erik Meijer about category theory and (among other things) how it relates to Java 8 and its method references. Quite fun.

Having worked with Java 8 for 17 months now, I can’t imagine going back. Not only because of lambdas and streams and related details like Map.computeIfAbsent, but also because of the date and time API, default methods on interfaces – and the list could probably go on.
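A quick illustration of why going back feels impossible – a toy example (names and values made up) showing computeIfAbsent, a lambda and the new date API, which would take a page of boilerplate in Java 7:

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Java8Demo {

    // Group words by their length – lambdas and computeIfAbsent at work.
    public static Map<Integer, List<String>> byLength(List<String> words) {
        Map<Integer, List<String>> result = new HashMap<>();
        for (String word : words) {
            // replaces the old "get, check for null, put" dance
            result.computeIfAbsent(word.length(), k -> new ArrayList<>()).add(word);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(byLength(Arrays.asList("to", "be", "or", "not")));
        // {2=[to, be, or], 3=[not]}

        // The new date and time API – immutable and sane.
        LocalDate nextPost = LocalDate.of(2015, 12, 24).plusYears(4);
        System.out.println(nextPost); // 2019-12-24
    }
}
```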

JPA 2.1 (no verdict)

ORM is an interesting idea and I can claim around 10 years of experience with it, although the exact term is not that important. But I read books about it in my quest to understand it (many programmers don’t bother). The idea is kinda simple, but it has many tweaks – mainly when it comes to relationships. JPA 2.1 as an upgrade is good, and I like where things are going, but I like the concept itself less and less over time.

My biggest gripe is the little control over “to-one” loading, which is difficult to make lazy (more like impossible without some nasty tricks) and can result in chain loading even if you are not interested in the related entity at all. I think there is a reason why things like jOOQ cropped up (although I personally don’t use it). There are some tricks to get rid of these problems, but they come at a cost. Typically: don’t map these to-one relationships, keep them as foreign key values. You can always fetch the related stuff with a query.
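A sketch of what I mean – the entity and column names are made up, not from any real project. Instead of mapping the to-one relationship, keep the foreign key as a plain value and fetch the related entity explicitly only when you need it:

```java
@Entity
public class Invoice {

    @Id
    private Long id;

    // Instead of this – often not really lazy in practice:
    // @ManyToOne(fetch = FetchType.LAZY)
    // private Client client;

    // ...keep just the foreign key value:
    @Column(name = "client_id")
    private Long clientId;
}

// When the related entity is really needed, fetch it explicitly:
// Client client = em.find(Client.class, invoice.getClientId());
```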

That leads to the bottom line – be explicit, it pays off. Sure, it doesn’t work universally, but any time I leaned towards the explicit solution I felt a lot of relief from the struggles I had gone through before.

I don’t rank JPA, because I try to rely on fewer and fewer ORM features. JPA is not a bad effort, but it is so Java EE-ish – it does not support modularity, and the providers are not easy to swap anyway.

Querydsl (5/5)

And when you work with JPA queries a lot, get some help – I can only recommend Querydsl. I’ve been recommending this library for three years now – it has never failed me, it has never let me down, and it has often amazed me. This is how the Criteria API should have looked.

It has a strong metamodel that allows you to do crazy things. We based a kind of universal filtering layer on it, whatever the query is. We even filter queries with joins, even on joined fields. But again – we can do that because our queries and their joins are not ad hoc, they are explicit. 🙂 Because you should know your queries, right?

Sure, Querydsl is not perfect, but it is as powerful as JPQL (or as limited, for that matter) and more expressive than the JPA Criteria API. Bugs are fixed quickly (personal experience), the developers care… what more to ask?
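To give a taste of it – a sketch in Querydsl 4 style, where Person is a hypothetical entity and QPerson its generated metamodel class (not from our project):

```java
// QPerson is generated by Querydsl's annotation processor from Person
QPerson person = QPerson.person;
JPAQueryFactory queryFactory = new JPAQueryFactory(entityManager);

List<Person> results = queryFactory
    .selectFrom(person)
    .where(person.age.goe(18)                 // type-safe, no string conditions
        .and(person.lastName.startsWith("No")))
    .orderBy(person.lastName.asc())
    .fetch();
```

Compare this with the equivalent JPA Criteria API code and the difference in expressiveness is obvious.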

Docker (5/5)

Docker stormed into our lives – for some in practice, for others at least through the media. We don’t use it that much, because lately I’m bound to Microsoft Windows and SQL Server. But I experimented with it a couple of times for development support – we ran Jenkins in a container, for instance. And I’m watching it closely, because it rocks and will rock. Not sure what I’m talking about? Just watch the DockerCon 2015 keynote by Solomon Hykes and friends!

Sure – their new Docker Toolbox accidentally screwed up my Git installation, so I’d rather install Linux in VirtualBox and test Docker inside it without polluting my Windows even further. But these are just minor problems in this (r)evolutionary tidal wave. And one simply must love the idea of immutable infrastructure – especially when demonstrated by someone like Jérôme Petazzoni (for the merit itself, not that he’s my idol beyond the professional scope :-)).

Spring 4 and on (4/5)

I have been aware of Spring since the dawn of microcontainers – and Spring emerged victorious (sort of). A friend of mine once mentioned how much he was impressed by Rod Johnson’s presentation about Spring many years ago. How structured his talk and speech were – the story about how he disliked all those logs pouring out of your EE application server… and that’s how Spring was born (sort of).

However, my real exposure to Spring only started in 2011 – but it was very intense. And again, I read more about it than most of my colleagues. And just like with JPA – the more I read, the less I know, or so it seems. Spring is big. Start some typical application and read the logs – you can see the EE of the 2010s (sort of).

It’s not that I don’t like Spring, but I guess its authors (and there are many of them now) simply can’t see anymore what a beast they have created over the years. Sure, there is Spring Boot, which reflects all the current trends – like don’t deploy into a container but start the container from within, plus all its automagic features, monitoring, clever defaults and so on. But that’s it. You do less, but you’d better know about it. Or not? Recently I got to one of Uncle Bob’s newer articles – called Make the Magic Go Away. And there is undeniably much to it.

Spring developers do their best, but the truth is that many developers adopt Spring just because “it just works” – while they don’t know how, and very often it does not (sort of). You actually should know more about it – or at least some basics, for that matter – to be really effective. Of course, this magic problem is not only about Spring (or JPA), but these are the leaders of the “it simply works” movement.

But however you look at it, it’s still “enterprise” – and that means complexity. Sometimes essential, but mostly accidental. Well, that’s also part of the Java landscape.

Google Talk (RIP)

And now for this post’s biggest letdown. Google stopped supporting their beautifully simple chat client without any reasonable replacement. The Chrome application just doesn’t seem right to me – and it actually genuinely annoys me with its chat icon that hangs on the desktop, sometimes over my focused application, and I can’t relocate it easily… simply put, it does not behave like a normal application. That means it behaves badly.

I switched to Pidgin, but there are issues. Pidgin sometimes misses a message in the middle of a conversation – that was the biggest surprise. I double-checked: when someone asked me something again, saying they had already asked, I went to my Gmail account and really saw the message in the Chat archive, but not in my client. And if I get messages while offline, nothing notifies me.

I activated the chat in my Gmail after all (against my wishes, though), merely to be able to see any missing messages. But sadly, the situation with Google Talk/chat (or Hangouts, I don’t care) is dire when you expect a normal desktop client. 😦

My Windows toolset

Well – now, away from Java, we will hop onto my typical developer’s Windows desktop. I have mentioned some of my favourite tools before, some of them a couple of times, I guess. So let’s do it quickly – bullet style:

  • Right after some “real browser” (my first download on a fresh Windows) I actually download Rapid Environment Editor. Setting Windows environment variables suddenly feels normal again.
  • Git for Windows – even if I didn’t use git itself, just for its bash – it’s worth it…
  • …but I still complement the bash with GnuWin32 packages for whatever is missing…
  • …and run it in better console emulator, recently it’s ConEmu.
  • Notepad2 binary.
  • And the rest like putty, WinSCP, …
  • Also, on Windows 8 and 10 I can’t imagine living without Classic Shell. Windows 10 is a bit better, but its Start menu is simply unusable for me – the classic Start menu was so much faster with the keyboard!

As a developer I also use some other languages and tools, mostly JVM based:

  • Ant, Maven, Gradle… obviously.
  • Groovy, of course, probably the most popular alternative JVM language. Not to mention that groovysh is a good REPL until Java 9 arrives (recently delayed beyond 2016).
  • VirtualBox, recently joined by Vagrant and hopefully also something like Chef/Puppet/Ansible. And this leads us to my plans.

Things I want to try

I have always been a friend of automation. I’ve been using Windows for many years now, but my preference for UNIX tools is obvious. Try to download and spin up a virtual machine for Windows and for Linux and you’ll see the difference. Linux just works, and tools like Vagrant know where to download images, etc.

With Windows, people are not even sure how – or whether – they can publish prepared images (talking about development only, of course), because nobody can really understand the licenses. Microsoft started to offer prepared Windows virtual machines – primarily for web development though, no server-class OS (not that I appreciate Windows Server anyway). They even offer Vagrant boxes, but try to download one and run it as is. For me, Vagrant refused to connect to the started VirtualBox machine, any reasonable instructions are missing (nothing Vagrant-specific is in the linked instructions), no Vagrantfile is provided… honestly, quite a lame attempt at making my life easier. I still appreciate the virtual machines, though.

But then there are those expiration periods… I just can’t imagine preferring any Microsoft product/platform for development (let alone for production, obviously). The whole culture of automation on Windows is just completely different – anything from “nonexistent for many” through “very difficult” to “artificially restricted”. No wonder many Linux people can script and so few Windows guys can. Licensing terms are to blame as well. And virtual machine sizes for Windows are also ridiculous – although Microsoft is reportedly trying to do something in this field and offer a reasonably small base image for containerization.

Anyway, back to the topic. Automation is what I want to improve. I’m still doing it anyway, but recently the progress has not been as good as I wished. I fell behind with Gradle, I didn’t use Docker as much as I’d like to, etc. Well – but life is not only work, is it? 😉

Conclusion

The good thing is there are many tools available for Windows that make a developer’s (and former Linux user’s) life so much easier. And if you look at Java and its whole ecosystem, it seems to be alive and kicking – so everything seems good on this front as well.

Maybe you ask: “What does 5/5 mean anyway? Is it perfect?” Well, probably not, but at least it means I’m satisfied – happy even! Without happiness it’s not a 5, right?


Falling in love with Spring Java Configuration

The Spring guys spent significant effort to give us alternatives to the original XML configuration. Hats off – they have it all thought out, starting with the fact that the runtime representation is not directly tied to any particular format of configuration file. Or configuration class, for that matter. We have had many annotations for, like… forever now. And then there is the possibility of pure Java config. This post is not a tutorial, but rather a short discussion of why it is cool, with a little bonus annotation at the end.

Going Java Config? Why?

I can’t exactly remember the pros/cons of one or the other (XML) right now – you can mix them anyway if needed. But recently I decided to give it a go, as our current project has a not-that-complicated application context – mostly typical JPA stuff, a transaction manager, a property placeholder – and that’s it. The reason? Better control. And who loves XML anyway. 🙂 We could also discuss compile-time safety, but dependencies are resolved later anyway; then there is component scan, which finds stuff not mentioned directly; Spring’s FactoryBean also adds some fog… so compile-time safety is not the main win here, especially if you had a good tool for Spring XML before (IntelliJ IDEA is one). So better control is the main reason.

Sure, we have a lot of control in Spring already. There are profiles, which I have been using for a couple of years for configuration adjustments. Let’s say I have a standalone app configured with Spring. During development I run my main class from the test scope, and my test master configuration contains many profile sections where my datasource is pointed to various testing databases. Because it is the test scope, it doesn’t go into the production JAR, so I can lower my guard and commit the DB user/password into this test configuration. (Not that people don’t sometimes commit IPs and users/passwords into production code/configuration too – but that’s another story altogether. :-))

The rest of the configuration is imported from the main resources, so I don’t repeat myself. All I have to do is add various run configurations with various -Dspring.profiles.active=XXX VM parameters (or one and change it on the fly, your choice). Sure, you can do this with a property placeholder, but a profile is easier – one switch and all the “properties” can have different names. Actually, I have never tried putting the property placeholder configuration into a profile, but that would also be an interesting way to externalize this configuration.

Now, this works, but with Java configuration you don’t have to use profiles. You use some kind of if/switch in your @Bean-annotated method. You may control anything – what is set, what is instantiated (as long as the return type fits, e.g. any implementation of DataSource), whether you go for URL/name/password configuration or pull your resource from JNDI… It’s all up to you, and you can do it in a language that you can actually execute – which XML is not.
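For illustration, a hypothetical dataSource bean deciding between JNDI and a plain URL – the property names are made up, and DriverManagerDataSource stands in for any DataSource implementation:

```java
@Configuration
public class DataSourceConfig {

    @Autowired
    private Environment env;

    @Bean
    public DataSource dataSource() throws NamingException {
        // Plain Java instead of XML – any logic you like
        if (env.getProperty("jndi.datasource") != null) {
            return (DataSource) new InitialContext()
                .lookup(env.getProperty("jndi.datasource"));
        }
        // Fallback – any DataSource implementation fits the return type
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setUrl(env.getProperty("jdbc.url"));
        dataSource.setUsername(env.getProperty("jdbc.username"));
        dataSource.setPassword(env.getProperty("jdbc.password"));
        return dataSource;
    }
}
```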

First impression? Awesome!

Well, I’m not a complete greenhorn with Spring and I know annotation-based configuration pretty well. Also, it wasn’t the first time I wrote @Configuration over a class. But it was the first time I did it without XML altogether. The first time I called new AnnotationConfigApplicationContext(MyConfig.class). And the result was good. I got stuck for some time because the entityManagerFactory method using LocalContainerEntityManagerFactoryBean (which is a FactoryBean, which I knew) didn’t work when I naively returned factoryBean.getObject(). But when I changed it to return the factoryBean itself (with return type FactoryBean<EntityManagerFactory>), everything was fine.
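The working version looked roughly like this – the package name and vendor adapter are just examples. The trick is to return the FactoryBean itself and let Spring call getObject() at the right time:

```java
@Bean
public FactoryBean<EntityManagerFactory> entityManagerFactory(DataSource dataSource) {
    LocalContainerEntityManagerFactoryBean factoryBean =
        new LocalContainerEntityManagerFactoryBean();
    factoryBean.setDataSource(dataSource);
    factoryBean.setPackagesToScan("com.acme.myproject");
    factoryBean.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
    // Don't call factoryBean.getObject() here – Spring does that itself,
    // after invoking the initialization callbacks (afterPropertiesSet).
    return factoryBean;
}
```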

The second problem we encountered was reusing a @Configuration class that needed Spring properties in a project with an XML-based master configuration. The XML configuration contained the context:property-placeholder element, but the injected @Autowired private Environment env; didn’t contain those properties. (They seem to be somewhere deep in the environment, but were not returned by env.getProperty.) Now, these are things I don’t understand fully (Spring is big!), but using @PropertySource on our @Configuration class that autowires the environment fixed it.
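The fix in code – the property file name and bean are made up for the sketch:

```java
@Configuration
@PropertySource("classpath:application.properties")
public class ReusableConfig {

    @Autowired
    private Environment env;

    @Bean
    public SomeBean someBean() {
        // properties registered via @PropertySource are visible here,
        // even when the master configuration is XML based
        return new SomeBean(env.getProperty("some.property"));
    }
}
```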

Those were hardly any problems at all when you consider the big change in the way the configuration is expressed – and think about the possibilities.

Composing configurations and Master configurations

Any serious configuration gets bigger after some time – and I prefer to split it into numerous files, mostly related to specific technical aspects. With Java configuration you can @Import another configuration class (or @ImportResource an XML configuration) too. Or you can go on autopilot and, using @ComponentScan (the equivalent of XML’s context:component-scan), let Spring find all other @Configuration classes.

But imagine having something I call a “master configuration” – the only class you mention in your bootstrap code. You most likely have one for production code (src/main) – it contains the default configuration that works for production. All other partial configurations are auto-discovered and applied. Mine looks simple:

@Configuration
@ComponentScan(basePackages = "com.acme.myproject")
@EnableScheduling
public class MasterConfig {
    // something may be here
}

Now you want to run an alternative master configuration – as mentioned, I put these into the test code (src/test). What I don’t want is to include my production master configuration. While the one above looks harmless, I may have my reasons why I don’t want @EnableScheduling applied, for instance. For automated tests I actually don’t want any @Scheduled methods firing at all. The reasons can be many, let’s not argue here. My test configuration may look like this:

@Configuration
@ComponentScan(basePackages = "com.acme.myproject",
    excludeFilters = @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, value = MasterConfig.class))
public class MasterTestConfig {
    // something else
}

The code speaks for itself – we have the same component scan, but excluding the production config. We omit the scheduling, and possibly do something else in the body of the config. We may read different properties (using annotations), etc.

This works, but can we somehow streamline it? Yes, we can…

ConfigurationMaster annotation

I hope I’m not reinventing the wheel here, but my solution was ultra-simple, and it not only made me happy, it also plainly underlines the beauty of Java configuration. It is the ConfigurationMaster annotation, which looks like this:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Configuration
@ComponentScan(basePackages = "com.acme.myproject",
    excludeFilters = @ComponentScan.Filter(value = ConfigurationMaster.class))
public @interface ConfigurationMaster {
}

This time we excluded classes annotated by… whoa, isn’t it a bit self-centered? 🙂 Yup, probably the most egocentric annotation I have ever created. You place this annotation on any “entry-point” configuration you want to load in your bootstrap code. Other configurations for your other project components are auto-discovered (it’s up to you to keep only the necessary stuff on your classpath). And should it happen that you want alternative configurations, you just mark them with @ConfigurationMaster as well.

The annotation is part of the project – you can see the base package for the component scan right there. Of course it is not a universal solution to everything, but it works for many cases. Our previous configurations would now look like this – first the production one:

@ConfigurationMaster
@EnableScheduling
public class MasterConfig {
    // something may be here
}

And the test one:

@ConfigurationMaster
public class MasterTestConfig {
    // something else
}

It may be unlikely that you want many master configurations in the main code, but it is quite expected in the test code. At least I have one for automated tests and then some to run my application in development mode. These can use different property files (they can be loaded from the test scope and have different names explicitly stating they are test properties), different bean implementations – or (don’t tell anyone) hardcoded JDBC URLs/usernames/passwords. The point is – only the one you bootstrap applies; the other master configs are excluded.

I don’t know about you, but I love this Java configuration stuff. I may encounter new problems, of course, but at least it’s really fun. You can even actually debug how it loads your configuration!

My last two years with technology

I’ve been quite busy lately – hence the pause in my blogging. My last post was a very specific Java-related article; today we’re going to do something lighter – a little whine about various devices, gizmos and maybe even software/services I’ve encountered during the last two years. While I generally love our age for its current level of technology, sometimes I despair at unnecessary flaws, often just software ones – and these can seriously affect the final experience. However, today I decided to add a lot of good examples too, and every case should be short (though I’m bad at keeping stuff short ;-)).

BTW: Now I see that this is actually sort of continuation of this post.

Phone case: HTC One V

I like Android phones in general. And I like both my Wildfire and my One V – however, they both have quite funny flaws. The Wildfire’s display is unresponsive when I pull it out of my pocket while it’s ringing (that’s why I call my friends back right after I fail to pick up), and the One V – for a change – is very quiet in-call. In both cases those are quite crucial phone-related issues – and in both cases many people observe the same (but not all). Other than that – on the Wildfire (Android 2.x) I liked that the default HTC clock application showed the time of the next alarm, and this view was removed in the newer OS. Of course the One V is faster and better overall, but still… 3 out of 5 stars. Both.

Partition case: EASEUS Partition Master Home Edition

I wanted some free, legal replacement for Partition Magic – and I found this. It may not be an all-powerful tool for everything around partitions, but it did everything I wanted and so far has never failed. Making partitions smaller or bigger? Merging partitions? (Here it actually needs enough space on the partition you are merging into for all the files from the other partition – but that’s rather a non-issue.) System copied to my new SSD? No problem. The Windows repair process took much longer after that. 😉 5/5! For free? Yes!

Disc case: SSD drives (Crucial M4 in my case)

Talking about SSDs – well, technically it is indeed a non-disk, but you know how it goes. An SSD is fast, of course, but also cheap enough nowadays – so putting at least your system volume on one is a really good idea. I did so, and my computer runs and starts programs much faster. This is currently probably the best boost you can get for your money – CPU or memory or GPU? Phew… The SSD made my computer fleet-footed. I can’t say more, really; I somehow decided one day, checked the prices, cross-referenced all the names new to me (like Crucial – never heard of them before!), the reviews were good, so I bought this one. And I don’t even have SATA3 on my mobo. 5/5

Java case: Spring JdbcTemplate

When I can, I program against standard APIs – like JPA 2 instead of Hibernate. When I can. Sometimes you need to go through a select cursor-style, and while I could have used the underlying Hibernate, I decided to go straight for JDBC. And I wrote all the code. With ifs and wheres and parameters. After a couple of hours I was done, the piece was tested – and then it hit me: “Man, there is supposed to be that Spring class making this much easier!” JdbcTemplate did the job: I didn’t have to write my ifs twice (first for the query, then again for the parameters), all the exceptions were handled for me, and it covered every way you could think of to process the result set (in my case a callback for every row of it). This is how I like stuff made. Documentation clear… actually, I mostly just let IDEA offer me the choices, and thanks to the proper names I made them right there the right way. Love that. 5/5
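Roughly what the rewritten piece looked like – the table, columns and filter are made up for the sketch. The query is built once, each parameter sits right next to its if, and a RowCallbackHandler processes the cursor row by row:

```java
StringBuilder sql = new StringBuilder("SELECT id, name FROM customer WHERE 1=1");
List<Object> params = new ArrayList<>();
if (nameFilter != null) {
    sql.append(" AND name LIKE ?");   // the condition and its parameter together
    params.add(nameFilter + "%");
}

jdbcTemplate.query(sql.toString(), params.toArray(),
    new RowCallbackHandler() {
        @Override
        public void processRow(ResultSet rs) throws SQLException {
            // called for every row of the cursor, no manual while (rs.next())
            process(rs.getLong("id"), rs.getString("name"));
        }
    });
```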

JavaScript case: File download plugin for jQuery

Check out what this plugin is about – and also its demo. Users sometimes want silly things like “can you disable the button after I press download…” (sure I can!) “…and then after the download enable it again?” (are you crazy?!) But you can do it with this plugin, based on a hidden iframe and a cookie. I had to adjust it a bit because I had a corner case (though quite a common one) where there were no data and no download – a third case from the user’s perspective, in addition to success and (server) failure. I’m no JavaScript expert, quite the contrary – but I fell in love with jQuery in the process. And however silly HTTP is for delivering applications, things like jQuery and this plugin make it more bearable (though HTML5 and things like WebSocket mend a lot of my 10-year-old concerns). For this plugin and the whole idea – 5/5.

Command-line case: GNU tools for Windows – GnuWin

I never liked all that heavyweight Cygwin stuff, but the GnuWin packages made my day. I just installed them, added c:\Program Files (x86)\GnuWin\bin to my PATH (they all go to the same dir, luckily) and started cmd just to enjoy grep (package grep), awk (gawk), ls (and many more in coreutils), zip/unzip/gzip/tar/bzip2 and of course sed! Speaking of typical Unix/Linux tools – you may also like vim (not related to GnuWin), though I’m happy with Notepad2 for most cases (read the second half of this post). But you never know when you’ll need vim’s macros. But yeah, those are not really command-line tools on Windows. The GnuWin packages definitely are, and they deserve 5/5 for making my life easier.

Windows environment case: Rapid Environment Editor

After heading so many times into System, Advanced, bla-bla, setting the PATH in that super-short line, I realized “there must be a better way, and someone must have already fixed this”. Yes, they have – with Rapid Environment Editor. Adding new paths with this tool is just so much better; it checks whether the path is valid – even with other variables you are referencing (if those are paths, like JAVA_HOME for instance). No more needs to be added: 5/5

Corporate tool case: Planview

For me, Planview is just a tool to report my hours. I don’t use its powerful project management features. And every time I need some report out of it, I don’t understand the language it speaks to me. This tool is one of those that forget they are not the only tool I use. Honestly, I openly hate it. Terms out of another world, a lot of misuse of the application (not only mine, actually), tons of discussion about how we should use it – and still we’re not using it the right way. Personally – I blame the tool. I can use Jira, Confluence and many other tools without any problem, but Planview is simply killing me. 1/5 (and yup, it’s IE only)

More-than-a-mail case: Lotus Notes

Lotus – I think this is a love-it-or-hate-it thing, but however much it is defended by people who like it, it is still viewed as a pain by the vast majority of users. My Lotus, for instance, doesn’t display the mouse cursor in the mail editor when its window is not focused, wrongly shows which tab is selected when two are opened at once, pastes Excel tables as images by default, and there are many other silly defaults. The date you see in the trash is the trash date, not the date of the message? You can’t reply to a mail in your sent mail?! My contacts often get screwed up by some caching I don’t understand and don’t care about at all. Not to mention it doesn’t look like a normal Windows application (not that I’m a big fan of Windows, but still). Once a colleague closed Notes by accident and I just thought it funny to remark “see, stupid Lotus Notes” – because whatever bad happens there is kinda Notes’ fault. I have read people testifying how Notes rocks, etc. But these people live in the closed world of Lotus. Linux guys can hate Outlook, but it is really usable. Lotus? As mail and calendar? Not a chance here… 2/5

Blu-ray case: Samsung BD-E6100

Recently I got myself a blu-ray player (finally). I wanted a Samsung, because my TV is a Samsung; the price was alright, I chose a model with wi-fi, brought it home, and after some initial scare (it didn’t play any disc at first – I had to unplug it, and after this kind of restart it was fine) I was happy with its performance, speed and everything, especially compared to our older DVD player (newer Philips models luckily have remotes for common people too, not only for snipers). I managed to play content from computers (with Serviio installed, though SRT subtitles unexpectedly don’t work) and the remote control provides four crucial buttons for the TV (on/off, Source, volume up/down) – actually many of the buttons from the Samsung TV remote work as well (as expected). After some time I decided to plug the ethernet cable in though, because the wi-fi often lost the connection to the router (our notebooks never have that problem, even from the same place). Even with ethernet, most of the Smart HUB stuff is quite slow. Overall it was a big upgrade compared to the DVD player, and I was actually surprised how well it works and plays. And the Smart stuff? They still might upgrade it somehow, and I didn’t buy it for that anyway. 4/5
Edit 2014:
I’d lower the rating to 3/5 after more thorough experience. Compared to the previous DVD player, it does not continue where it left off when turned on again. Actually it does for files, but not for discs. Actually, sometimes yes… but nobody knows when and why, as it is not even consistent with the same disc. And sometimes it makes funny noises and is very slow with blu-ray discs. With too many of them for it to be the disc’s fault.

Conclusion

More than once I have thought of myself as a “toiletologist”, because I just think too much about every single flaw of toilets as well. I never could understand why we – mankind – are unable to develop the total toilet that always flushes everything, why we again and again put urinals so close together that you can use only 2 out of 3 in the end, why we put toilet cubicles with legs on shiny reflective floors – not to mention various silly ways to screw up the automation of flushing, washing, drying or whatever.

Sometimes I want to scream “how could you make such a silly mistake?” But then I realize: “Man, it’s just software, it’s meant to be buggy (not that I agree that much :-)); the whole of computer science is so much younger than the building industry – and look what they manage to do in a silly way every now and then. Not only with toilets – but when those are still not ‘debugged’ after all those millennia, what should we expect from software, hm?”

All the more I am happy for technology that really helps and doesn’t “think” it will be the only thing I need to pay attention to. I have my own real life beyond technology too, after all.

Live architecture with Java, Spring, JPA and OSIV

This post is about an architecture where live (attached) JPA objects are used in the presentation layer. You can expect the OSIV (Open Session In View) pattern to be mentioned, though I’ll focus more on the ways we made it work well enough for us – safely and without LIEs (LazyInitializationException). It is just my story and my experiences, no big discovery here. 🙂

I can’t tell whether it has any official name, but we call it “live architecture” because live JPA entities are available in the presentation layer. While we use it mostly with Spring/Wicket, it is the same with any other presentation framework – and it probably applies to Java EE without Spring too (if you use OSIV).

DTO vs Live architecture

In our company there are “DTO guys” and “live architecture guys”. We all know DTOs (Data Transfer Objects) and how to work with them, more or less. Their rise to fame came with the need for coarse-grained calls to remote EJBs, and they became a prominent “pattern” then. Even with local calls, people use them to strictly divide layers. I used them on some projects, then not on others, and then again with GWT/Seam applications (I never liked the idea of JPA entities being preprocessed for me and dragged all the way to the GWT application).

Every time I start talking about a “live architecture” that drags entity objects into the view, there are architects who just say “that is no architecture at all”. And I say “whatever…” I remember projects where we “broke” a clean architecture (e.g. “everything must go through this facade!”) and the result was less, cleaner code – easier to understand, with even better performance. Was it universal? Hell no, it wouldn’t scale in most cases, but in that particular case scaling was not (and after all those years still is not) necessary.

My recent story with the live architecture is based on a project where it was settled that it would be used instead of DTOs. You have to translate DTOs somehow from business objects and back. You can generate the mapping, you can automate it, use reflection – or do it manually. Any way adds something that is not necessary in all cases. Our views were mostly based on JPA entities, and it was just a shame to translate them to DTOs for the sake of the transformation itself. I’m not saying DTOs are bad – we do use them for more complicated views, mostly for lists showing joined tables. You can of course build a database view and design an entity over it – and we do that too…

There is no fundamentalism in this – we use entities as much as we can. I strongly believe that on normal-scope projects people often overdo “clean architecture” and don’t care about “clean code” nearly as much. And I strongly believe that clean code itself matters much more than that cloud castle of architecture (without underestimating architecture itself!). After all, our projects are quite simple multi-tier applications with a bit of clustering. No grid, no high performance, nothing fancy. So we use entities, because they are placed under the presentation layer (a good dependency direction) and they only carry data. And when that is not enough, we use DTOs too. Simple.

Business logic objects and dumb entities

You may have different rules for your live architecture (projects using OSIV) – and that is fine. Ours start with: don’t use entities for anything else – no business logic; some simple computed properties are alright. You may call this an Anemic Domain Model – I don’t care. Logic lives in separate objects that use one or more entities. It is not exactly DCI, but it is not far from it either. For many other reasons (unrelated to live architecture) I prefer business logic objects that perform one specific scenario – in the best case mapping 1-to-1 to a use case from the analysis document.

Let’s talk about this picture for a while:

The presentation layer can be anything – component-driven (Wicket) or controller-driven (Web MVC). It calls the service layer (typically a Spring bean or EJB), which in turn uses that “cloud” of various business logic objects. Very often I prefer a create/use/throw-away pattern. In the constructor the object gets its context, and then it does its work – preferably in one method call, though it may be a sequence too, which is a more fragile approach. The important thing is that a business object can keep its state during the business logic execution – it is thread-safe as long as it is created locally for one service call (that’s why I don’t use singletons here). Sometimes state is not necessary, but in more complex cases it is. And I like fields much more than dragging a list of parameters between private methods.
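The create/use/throw-away pattern can be sketched in a few lines. This is a minimal, dependency-free illustration – all names (ConfirmOrder, Order, Item) are hypothetical, not from any real project – showing the three ingredients: context in the constructor, intermediate state in fields instead of parameter lists, and one public method that runs the whole use case before the object is discarded.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical entity-like data holders, kept dumb on purpose.
class Item {
    final String name;
    final int stock;
    Item(String name, int stock) { this.name = name; this.stock = stock; }
}

class Order {
    final List<Item> items = new ArrayList<>();
}

// The business logic object: created for one service call, used, thrown away.
class ConfirmOrder {
    private final Order order;   // context passed in the constructor

    // intermediate state kept in a field instead of being dragged
    // through parameter lists of private methods
    private List<Item> missing;

    ConfirmOrder(Order order) { this.order = order; }

    // ideally one public method runs the whole scenario; thread safety
    // comes from the object being local to a single service call
    boolean execute() {
        findMissingItems();
        return missing.isEmpty();
    }

    private void findMissingItems() {
        missing = new ArrayList<>();
        for (Item i : order.items) {
            if (i.stock <= 0) missing.add(i);
        }
    }

    List<Item> missingItems() { return missing; }
}
```

A real implementation would take DAOs or an EntityManager as further constructor arguments; the point here is only the shape of the object, not its persistence plumbing.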

This business logic uses DAOs (or the EntityManager directly) to work with the DB – and of course works with entities in the process. Because entities are dumb (a DCI idea, but not only theirs), they are perfect DTOs (which are also dumb). Of course there are concerns about entities used as DTOs, and you can find many questions about this issue (not only in the Java world). Entities are POJOs – in theory – but you may drag some proxy object up into the presentation layer. There is a lot of magic in entities; you sometimes don’t know what they are (my class, or some modified class already?) – but under most circumstances you don’t really have to care that much.
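What “dumb entity” means in practice is very little code. A sketch – with the JPA annotations shown only as comments so the snippet stays dependency-free, and with illustrative names – could look like this:

```java
// A "dumb" entity in the spirit described above: data plus at most a
// trivial computed property, no business logic. In real code this would
// carry @Entity, @Id etc.; they are commented out to keep this standalone.
class Person {
    // @Id @GeneratedValue
    Long id;
    String firstName;
    String lastName;

    // a simple computed property is still acceptable under our rules
    String fullName() {
        return firstName + " " + lastName;
    }
}
```

Anything beyond this – validation scenarios, state transitions, calculations over several entities – goes into a business logic object instead.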

Best practices

Now let’s talk about our best practices. The presentation layer code knows entities, but it doesn’t know the ORM! This is probably the most important thing. Of course, the dependency on JPA is implied somehow. Of course the client programmer has to know the data model and how to traverse the objects he wants to display. But he absolutely can’t use the EntityManager. Our first “live architecture” project didn’t have a clear separation of these roles, and some LIEs (LazyInitializationExceptions) were fixed like: “you know, here in this page, before you call the service… put evict on this object there”. I wasn’t there when the project started, so I just went “what?!” And I forbade this on the next project I could affect.

The next rule is about communication rather than technology – the presentation programmer always has to know what he gets from the service call. Otherwise he risks that LIE again. But LIEs in the presentation layer are easy. They are easy to fix in the model, in the service/business code, or in the presentation code (which covers most of the cases). You always have to share some model between business logic and presentation (and developers!) – and we share the data model itself. If you don’t plan to change your layers, this is perfectly acceptable. I’ve actually never seen a change of technology that would justify introducing a different model on the facade level. So why do it if you ain’t gonna need it? (Of course, you may need it – and you are there to say so as an architect.)

Getting data is easy (talking about live architecture problems only :-)). You may need separate methods for every view – especially if the selects are not generic enough. We have “filter beans” with a single superclass, and we use these beans with a few service methods (getSingleResult, getList, etc.) that are rather generic in nature – DAO-like, even. It works for us; filter beans are the common ground over which the client and server programmers communicate, and they are part of the service layer API. We can have a common FilterBean interface because we use our custom filter framework behind it. But you can use filter beans without a common ancestor and have many service methods to obtain data. That is probably even cleaner.
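The filter-bean idea can be shown without our custom framework. In this sketch all names are made up, and the “database” is a plain list – real code would translate the filter into a JPA query instead of a predicate – but the contract between client and server programmer is the same: the client fills a bean, the server runs a generic method over it.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Common ancestor for all filter beans; in our real code a custom filter
// framework interprets these, here a simple match method stands in for it.
abstract class FilterBean {
    abstract boolean matches(String row);
}

// One concrete filter bean per view/listing; the client programmer only
// fills its fields and passes it to a generic service method.
class PersonFilter extends FilterBean {
    String nameContains;

    @Override
    boolean matches(String row) {
        return nameContains == null || row.contains(nameContains);
    }
}

// A few generic, DAO-like service methods shared by many views.
class QueryService {
    // stand-in for the database; real code would use the EntityManager
    private final List<String> people = Arrays.asList("Alice", "Bob", "Alena");

    List<String> getList(FilterBean filter) {
        List<String> result = new ArrayList<>();
        for (String p : people) {
            if (filter.matches(p)) result.add(p);
        }
        return result;
    }

    String getSingleResult(FilterBean filter) {
        List<String> list = getList(filter);
        return list.size() == 1 ? list.get(0) : null;
    }
}
```

The alternative mentioned above – no common ancestor, one explicit service method per listing – trades these two generic methods for many specific ones, which is more typing but a more obvious API.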

Transactions, saves, updates

Originally we used DAO-like save methods on the service layer too. We also didn’t have clear rules for when objects were alive and when not as the presentation layer called the service layer. If one HTTP request contained a read call and a write call, the entities were alive if the write used the result of the read. If there was just an update, they were not. “Objects may come alive or not, let’s not assume they are alive,” was our first strategy, though I never felt good about the “or” in that sentence. Never use contradictions in your assumptions. With a big help from our tests we managed to clean this mess up.

Our tests were TestNG based; they were not unit tests – mostly we tested the service layer while playing the role of the presentation layer. It was funny how often a test passed while the user test (using a browser) failed – and vice versa! Sometimes the test didn’t prepare the same environment – and we started to realize that the service layer must assume less and be more strict. The biggest problem was that the presentation layer could change an entity A that was read in the request (hence alive) and then call a service saving an entity B. The service layer had no chance to know that A would be saved in the same transaction. This led to one very simple idea – we always clear the session before calling transactional service methods. I forgot to say that we demarcate transactions on the service layer, so you can have multiple transactions in one HTTP request/persistence session.

Stepping back for a bit – the client programmer knows that when he calls a service, his objects are alive. He can call multiple reads – and he knows that everything is still alive and that he can base the next read on an attribute that is loaded lazily. In our case there is only one write/transaction per HTTP request – and it is mostly the last call as well. If I wanted to make our policies even more precise, I could say “always clear the session – for every service call”. That would mean less comfort for the client programmer. Or you can go for “dead” entities instead of live ones (see Other possibilities below).

Now the business programmer knows that any object entering a transactional service is detached, and he can choose what to do with it. Do you just need to save the changes? Merge it (or call a JPQL update, or whatever). Do you need to compare it to its original state? Read the object by its ID and do what you need. Do you want to traverse its attributes? Better reload it first to make it attached again. We enforce this with a custom aspect hooked on the existing Spring @Transactional annotation.
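The real aspect depends on Spring AOP and the EntityManager, so here is only a framework-free model of the rule it enforces – all class names are invented for illustration. The one important line is the clear() before the call proceeds: whatever the presentation layer left alive in the session, the transactional method always starts from detached state.

```java
import java.util.HashSet;
import java.util.Set;

// Toy stand-in for a persistence session: tracks which entities are attached.
class Session {
    private final Set<Object> attached = new HashSet<>();
    void attach(Object entity)      { attached.add(entity); }
    void clear()                    { attached.clear(); }
    boolean isAttached(Object e)    { return attached.contains(e); }
}

// Stand-in for the intercepted service method.
interface ServiceCall<R> {
    R run();
}

// Models the aspect hooked on @Transactional: clear the session first,
// then proceed with the actual service method.
class TransactionalInterceptor {
    private final Session session;

    TransactionalInterceptor(Session session) { this.session = session; }

    <R> R invokeTransactional(ServiceCall<R> call) {
        session.clear();   // every entity entering the service is now detached
        return call.run(); // the business code reloads/merges as it needs
    }
}
```

In the real thing the interceptor is an advice around methods carrying Spring’s @Transactional annotation, and clear() is called on the current persistence context – the shape of the rule is the same.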

This assumption would be very useful for read/list methods too. As it stands, the developer never knows whether he has to reload or not. But read methods are not so complex, and reloading the parameter entity should never hurt either. Also, read/list methods are not transactional, so whatever the developer does, he can’t mess up the persisted data. So this is our compromise between the client programmer using live objects and the service layer being secure enough. There are far fewer LIEs in our back-end code (which are harder to catch than those in the presentation layer) – actually, I haven’t seen one for a long time – and there is no chance of tampering with the data accidentally.

As a side note: many of our problems were also caused by our presentation architecture – we load data, display them, then forget the content to keep the page/session small and remember only the IDs of the objects. When an edit action comes, we reload the object from the service by its ID, modify it and then call the transactional write service method. To make this more convenient we have a custom ReloadableModel class for our Wicket pages, so before the model (entity object) is updated, it is always reloaded from the service (this is not a big performance hit; it often comes from the 2nd-level cache anyway). This may not be the most fortunate solution, but it was one of those we had to stick with for the time being. You may or may not run into these kinds of problems. In any case, making your contracts and policies stricter and cleaner is always a good thing.
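Stripped of the Wicket specifics, the ReloadableModel idea is just this: the page keeps only the ID, and every access goes back through the service. In the real code this builds on Wicket’s model classes; the sketch below is framework-free and all names (PersonService, ReloadableModel) are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the read service; real code would go through the service
// layer (and typically hit the 2nd-level cache rather than the DB).
class PersonService {
    private final Map<Long, String> db;
    PersonService(Map<Long, String> db) { this.db = db; }
    String load(Long id) { return db.get(id); }
}

// The model the page holds on to: only the ID survives in the page/session,
// and the entity is reloaded freshly whenever it is needed.
class ReloadableModel {
    private final Long id;
    private final PersonService service;

    ReloadableModel(Long id, PersonService service) {
        this.id = id;
        this.service = service;
    }

    String getObject() {
        return service.load(id); // always a fresh copy, never a stale one
    }
}
```

The payoff is exactly the behavior described above: an edit action never operates on a detached copy remembered from a previous request, because the model reloads the entity before the write service is called.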

Other possibilities

Live vs DTO is not the only option. You can also use entities but always close the session when the service call ends. This gives you the same model and less convenient presentation changes, but it definitely is cleaner from the service layer point of view. You can make stricter contracts; performance is decided down in the services and not ruined by lazy loads in the presentation layer, etc. I know this – we use it on other projects too. But I also know that people use OSIV a lot, and that is why I wanted to wrap up our experience with it. You can come up with other policies too – for instance, one read or write per request and nothing more: do it all in one proper service call, don’t fire a select for every single combo-box model. I actually agree with these approaches. But sometimes we don’t have the luxury of choice. 🙂

In any case, do your best to clean up the contracts as much as possible, avoid contradictory ORs in your assumptions and – I didn’t focus on this point much in this post – test your service/business layer. A contract or policy is one thing, but you have to enforce it, otherwise it is not a contract, just a promise. That is your safety net, not only from the architectural standpoint but also from the functional one. But that is a completely different story.

The worst way to explain dependency injection

I don’t know, but sometimes I feel like I’m being talked to like a child. When I read a book about a technology, I expect to learn how to use that technology too. I remember reading one book about UML with such nonsensical real-life examples that they debased the whole explanation and made it ridiculous. Add a few cases where the picture said one thing and the description another, plus a few flaws obvious even to a new learner (yet still confusing), and you have one of those books I can’t recommend at all (I think it was some older edition of Teach Yourself UML in 24 Hours – and the Czech translation probably just added to it).

Now I’m reading (among too many other things) Spring in Action – and there is that first chapter with the Knight and the Quest, showing how you can send a Knight on a Quest using dependency injection. But who would do that? It got me thinking: “Is it possible to manage runtime relations with Spring? Can I control entity creation and inject something into entities?” Stupid questions when you know what Spring really is, right? But then why are they using a Knight and a Quest? Who would ever create a system with a knight (or a few of them) and *configure* them and their quests with Spring? Obviously – nobody.

And just this evening I was checking presentations from a local Oracle conference where a professional Java Developer Advocate was explaining CDI and ManagedBeans with a SpaceCapsule and a Cosmonaut whose ManagedBean was named “yuri”. How realistic is such an example? It’s not! It’s misleading altogether, and that’s what is shown to CDI/DI/IoC newbies nowadays! I just can’t understand why…

Of course, we all play little games with our code here and there, but from an introduction to a new technology I also expect the right way to use it: what types of situations it fits, when yes and when no. I don’t want authors to suggest (or accidentally imply) that I can manage various cosmonauts in my system with CDI – because that’s just wrong. I can name roles for managed beans and describe how they should act in this or that scope… but “yuri”? Come on. Stop it. Please.

Or do you, readers, think this is a good way? Maybe it is, I don’t know. Sometimes I’m just off the crowd, really – and not always at the right time. 🙂 If you see it some other way, drop me a line in the comments.