Bratislava World Usability Day 2016 and government IT

I wrote about sustainability and design takeaways from Bratislava World Usability Day 2016 in my previous post. World Usability Day (WUD) 2016 was held on November 10th, 2016 in many places around the world. The theme this year was Sustainability, but for those of us working with and for the public sector it was even more attractive thanks to the guests from UK and Estonian government agencies that implement or oversee government services – services for real people, citizens. Services that should serve – just like the state itself should. And that is a very touchy topic here in Slovakia.

Videos from the Bratislava event can be found here; the page is in Slovak, but the videos are easy to find and are in English.

Estonia: pathfinder or e-Narnia?

Risto Hinno came to us from Estonia, the state renowned for its level of e-government. But if you imagined their world as a perfect place with flawless services, you’d be wrong. Risto came to talk about their approach to services and the problems they had to overcome and are still overcoming.

Estonia and Slovakia are both countries from the former Eastern Bloc; Slovakia is a successor of Czechoslovakia, while Estonia is one of the post-Soviet states. Both are in NATO and the EU and both use the euro, but there are also some important differences. I may not be historically accurate, but in Slovakia we still have plenty of “old guard” people in their posts (like judges) and plenty of old-thinking politicians, many of them previously members of the Communist party, now often wearing a sticker saying “social democrat”. In Estonia most of these were Russians, and they were simply gone after Estonia became independent. That allowed for a deeper change – a change that is much needed here in Slovakia but hasn’t happened. Some ask: “Will it ever?”

But back to the services. As Risto put it, what we (citizens) want is value, but what we typically get is bureaucracy. The answer to this problem is to make everything smaller and simpler and really focus on the value.

Problems small and big

But just as with the value-vs-bureaucracy problem, there are opposing forces at play here. Even when the stakeholders agree on delivering maximum value for the money, they often don’t agree on how to do it.

Very often the expectations are big and the budget follows them. Very often we don’t respect the systems our users already work with. And very often we deliver little value for a lot of money. Or worse, we make the lives of our users harder and they simply can’t understand what advantages of the new system we are talking about.

It is very important to understand that we need to deliver value in small chunks. Many times in my career I’ve heard: “…but we can’t deliver this useful system in small!” Really? Then how do you know you can deliver it on a bigger scale? History shows us time after time that megalomaniac plans crumble. And, to make matters worse, they often crumble over many, many years.

Managers often expect that developers can plan their work, while the developers have trouble accounting for all the complexity in advance – often the accidental (that is, “not essential”) complexity. And accidental complexity always grows with a bigger system; there is simply no remedy for that. Analyse as much as you want, you’ll find out something unexpected the minute you start coding. Or when you meet with a customer. These truths have been known for decades now, but they still seemingly make no sense to many managers and other key decision makers.

And so far we’ve only talked about a mismatch in beliefs about how to build complex systems. What does it matter whether you want to “build it” or “let it grow”, whether you are forced into a “fixed time, fixed price” contract or can do it truly incrementally using whatever agile is currently chic – none of this is important at all when the true reason to spend the money is… well, to spend the money!

Yes, public money, aka nobody’s money – who cares? People care, of course, people who are somewhere in the chain. People who decide who should participate and get a piece of that big cake – competent or not, it doesn’t matter. There are always subcontractors who will do it in the end. Money talks. And value just stands aside. Just as users and their needs do.

It can be scaled down

Of course it can; the question is whether we dare to be accountable and flexible enough to deliver clear value for the money – value that is easy to see and to evaluate whether it’s worth it or not. In Estonia they are also far from perfect, but they try hard to keep it small and simple (the KISS principle). They limit their evaluation/analysis projects to 50k Euro and implementation projects to 500k.

I saw people laugh at this, but 500k in these countries is a reasonable cost for an 8-10 person team for a year. Yes, you have to mix juniors and seniors, which is pretty normal – and no, you can’t pay for 3 more levels of management above the team. Get out of their way and they will likely deliver more value than a similar team in a typical corporate environment that has to spend 20% of its time on reporting and other overhead (and that’s a low estimate).
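Back-of-the-envelope, the claim is easy to check – the team size and project length below are just illustrative figures within the 8-10 person, one-year range:

```shell
# Rough cost per person-month for a 500k budget (figures are illustrative).
awk 'BEGIN {
  budget = 500000; team = 9; months = 12
  printf "%.0f EUR per person-month\n", budget / (team * months)
}'
```

Roughly 4.6k Euro per person-month – tight once you count overhead, but far from impossible in this region.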

If the cost calculation doesn’t work for you, take fewer people and make the project last half a year, not a full one. I’m yet to be convinced that there is no way to deliver visible value within 500k Euro.

Risto Hinno also mentioned another very interesting thing. They decide how many services – or how much work, if you will – they want implemented at a time. This way they prevent the IT market in Estonia from heating up too much, because that leads to very low quality. Companies start hiring everyone and anyone, a lot of code gets written by external workers who often don’t care, and everything is done at way too high a pace. These are all recipes for disaster. Things they seem to know in Estonia, but not here in Slovakia.

Problems with services

Risto also talked about the typical problems they faced. They learned the hard way that services must have an owner. He also presented a maturity model for the services. Based on my notes, and not necessarily in exactly his words, the levels are:

  1. ad hoc services,
  2. simple list of services is managed,
  3. services have their owners,
  4. services are measured (including their value),
  5. service management is a daily routine.

He talked about building measurement into the services. This part of the talk rang a lot of devops/continuous delivery bells. And he also talked about the future visions they have:

  • Base future services on life events. This makes them more obvious to their consumers, that is citizens.
  • Aggregated services – many simple services can collaborate to achieve more complex scenarios. Risto actually mentioned some crazy number of services they have, but also noted that many of them are really simple. Still – it’s easier to put together simple blocks than to cut and slice big blocks.
  • Link between public and IT services.

So Estonia seems to have started well and they keep improving. I hope they stay on track, because I loved the ideas presented – and many of them were familiar to me. I just needed to hear that it actually works somewhere. And now it’s time to get to the next level.

Designing the next generation of government services around user needs

That was the title of the presentation by Ciara Green, who came to tell us how they do it in the United Kingdom. She works for GDS, the Government Digital Service, and talked about the transformation of government services that, to simplify, started around 2010 with quite a short letter (11 pages) by Martha Lane Fox after she was asked to oversee a review of the state of government services at the time. Sure, 11 pages seems long for a letter, but it is short in a world where you usually get hundreds of pages of analysis that in the end is not to the point. The letter was.

After this, government services all came under a single domain, gov.uk, and many other good things happened. The UK is way ahead of Slovakia – historically, and of course mentally (despite Brexit and all the lies leading up to it) – so it comes as no surprise that they decided to focus on value and also used current agile methodologies.

They knew what happens if you deliver over many years and then surprise your customer or users – invariably not with a good surprise. So they started to deliver fast and often, tested a lot, tested with real users including seniors, and focused on UX. Just like Risto, Ciara argued for making things simple. It is very easy to make things complex and take longer, and we should do the opposite. We should start with needs, real-world needs, and remind ourselves of them often. And we should do less (which reminds me of the powerful “less but better” mantra).

Another interesting point was “Good services are verbs. Bad services are nouns.” Of course there are also other components, various registers, but in the end the focus should be on services and the activities (e.g. life situations) they cover. Sure, the verbs are sometimes a bit unusual. One very important service is called Verify, and it verifies the identity of the user with various partners (banks, Royal Mail, and more) because in the UK there is no central register of citizens. So they can do this without keeping personal data (I don’t know the details), while here in Slovakia we have been building various registers for years and they often add more problems than they solve.

Funnily enough, when she talked about some services and a name was used, it still functioned as a noun in the sentence – quite naturally. So I believe the word class used for the names may not be the most important thing, but using verbs may remind us what we are trying to solve.

Back to Slovak reality

Ciara’s talk was pure sci-fi for us. She works for a government agency where they develop services in an agile way. How crazy is that? Pretty crazy, if you say it in Slovakia. Slovak services are portioned out among many companies, most of them with a political background (not official, of course), and we have spent around 800 million Euro on government IT that looks like a very bad joke. Each ministry takes care of its own sandbox, and when there is some initiative to unify how IT is done, it is executed extremely badly.

For example, there is a central portal for public services that acts as a hub and connects various parties in government IT. However, this “service” is good mostly for the provider of the service, not for its users. The protocols are crazy complicated, and if you need to connect to it (whether you want to or are forced to, which is more likely) you need to conform to some strict plan for testing, etc. There is no way to do it in an agile fashion; it only separates you from the service you want to talk to. It adds another barrier between you and the other party, not only technically but also organizationally.

It is said that one minister told a young woman working at the ministry, horrified by how the state works, that she should not be naive, that sometimes things are as they are and we have to be realistic. He reportedly pointed at government IT and the bad companies who suck money out of it. Now this is all a matter of speculation, but the words could have been said. The tragedy is twofold.

First: The companies do what they are allowed to do. It’s not that the bad companies do whatever they want on their own; they do it with connections to officials of the government and various bureaus. As crazy as it sounds, there are stories of someone who worked for some company now working for the state and managing projects his previous employer delivers. Stories like this are uncovered on virtually a daily basis now.

Second: Even if it were true and the bad companies did whatever they wanted – then the state totally failed to do its basic job. It actually failed in the first case too, but here it seems to be a very weak state, not the state our officials depict to us.

Final words

While the Slovak reality is pretty bleak, it was very refreshing to see that it can be done, and how it’s done somewhere else. It’s nice to see that agile can work – even more, that it can work in a state agency. And that state agencies can deliver true value when they really focus on it. We have also learned that the state can regulate how much it wants at once. This can – and should – be done in IT, but also in infrastructure projects like motorways (another anti-example from Slovakia). It gives you better quality for a lower price and, surprisingly, it still gets done sooner in the end!

In any case, there is a long way for Slovakia and Slovaks to get to the point where we can focus on value and don’t have to fight an elemental lack of political culture (in which I include the wasting, misuse and defrauding of public money as well).

Neither Risto nor Ciara brought any political topics in, but some of the Slovak political “folklore” obviously affected the questions that were asked afterwards. Corruption was mentioned more than once. But these were areas where our speakers couldn’t help us (oh, how I envy them).

The presented topics were so interesting for us that the UX parts were often left aside a bit – although focusing on value and the user from the start is a pretty useful recommendation. But as with anything simple, it is much harder to do than something complicated and big.

Bratislava World Usability Day 2016 and the future of design

By a lucky coincidence I visited the World Usability Day (WUD) event here in Bratislava. It was held on November 10th, 2016 – like every other event of the same name around the world. The theme this year was Sustainability, but for those of us working with and for the public sector it was even more attractive thanks to the guests from UK and Estonian government agencies that implement or oversee government services – services for real people, citizens.

I will talk about government services in the follow-up post. This one will be more about design and how I feel about it. Mind you, I’m a software developer with some experience with real users – I always prefer to hear from them, although listening blindly to your users is also not a recipe for success. I’m not a designer. But I’m also a user of many things – and not only modern technology gadgets. Maybe I have some twisted programmer’s perspective, but that doesn’t make me any less a user.

Design of everyday things

Before going on, let me divert to a book I’m currently reading – The Design of Everyday Things. I’ll probably never be a great designer, but there are many basic aspects of design we can learn about and use every day in software development. In the book I also found many funny examples of frustrating experiences – experiences we all have to go through sometimes.

I’m personally torn between progress and stability. I understand progress is inevitable – and in many cases it doesn’t affect the design. Technology performance and capacity get higher while everything gets smaller at the same time – this doesn’t mean we have to change how we interact with computers or computer-based devices like smartphones. On the other hand, we can – and we even should, because previous UIs were insufficient and current performance allows us to do so much better. Are we doing better?

Everybody now experiments with design, but I doubt they test it properly. I wonder how Google – which definitely has the facilities and resources – tested the move of the “add document” button to the bottom-right corner. Nobody I met who used it on a computer, rather than a tiny screen, could find that button. Then you have products developed by a single developer – how should they experiment with design? How much should they learn beforehand? And how much of what they learn can inhibit their creativity?

One of the ideas of the book is that the importance of design will only grow. I have to agree. How is it possible that you need to set the current time on your oven to be able to bake a cake (and not just on one brand)? If we screw up ovens after decades of them working, how can we design revolutionary devices? But maybe we can – perhaps the problem is not with designing new types of devices, where we expect some searching. Perhaps we’re just too meddlesome and can’t resist redesigning what doesn’t need redesigning anymore.

Role of Sustainability or Sustainability of Roles

Back to WUD 2016 and the presentation that had the designated theme in its title, presented by Lukáš “Bob” Mrvan (of Avast). Videos from the Bratislava event can be found here – and while the page is in Slovak, it is easy to find the videos there – and most of them are in English (all I mentioned are, at least). A pity they are not made as a split screen between the slides and the presenter, and that they don’t show the slides more often.

Sustainability definitely resonated throughout the presentation. This may seem annoying to some but not to me as I’m convinced our current lifestyle is unsustainable.

Another interesting idea was that too often we focus on the technical part of the design and not on the whole experience. For example, Bob talked about their call centre – they needed to replace an insufficient application, but the most important change might have been designing their call scripts properly. Of course this wasn’t the first time I’d heard about this more holistic approach. So, just as the book says the importance of design will grow, Bob claims the role of the designer will change. And I agree.

But all this obviously raises more questions. Maybe we need dedicated design experts on big projects, but what about small ones? How much of the design essentials must we take in to deliver useful software? How much should an analyst, a developer, a tester know about design? And how do we keep track of it when it develops like crazy nowadays? How do we distinguish lasting advice from fashion trends?

Focus on people…

Part of the presentation discussed the speed of progress and its acceleration, talking about the exponential Moore’s law vs our slow linear improvements in IQ. I take these only as visualization aids for the idea that change is indeed inevitable. But when someone puts an exponential curve on a linear scale and says “look at the pace of change since 2000”, I can move myself to 2000 and say “look at the crazy pace of change since 1985”. The rate is still the same; it just affects more and more of our lives, that’s all.
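The “same pace, wherever you stand” point is just a property of exponentials. Assuming a fixed doubling period of two years (the usual Moore’s law figure – my assumption, not something from the talk), the growth factor over any 15-year window is identical regardless of the starting year:

```shell
# Growth factor over a 15-year window with doubling every 2 years.
# The starting year cancels out: the factor is 2^(15/2) everywhere.
for start in 1985 2000 2015; do
  awk -v s="$start" 'BEGIN { printf "%d-%d: factor %.0f\n", s, s+15, 2^(15/2) }'
done
```

Each window shows the same factor of roughly 181; only the absolute numbers differ.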

Yes, society changes and the design of things should get better and easier. The exponential curve doesn’t tell us anything different now than at any time before. But right now it governs the lives of virtually everyone (or soon it will). What to do with that is beyond a discussion about design, but design is affected too.

…not just users, but workers as well

But there is one positive in these facts. Knowing that people evolve more slowly than technology, we can focus on them – learn how we work, something about the psychology (and psychopathology) of design, how we interact with things. This knowledge will last; it’s a much better investment than learning something about the newest framework. Learning the technology is also necessary, of course, but we should find time for the bigger, more important ideas as well.

Bob mentioned it can be difficult to persuade our managers to give us time for learning, and added a chart of the performance of top organizations vs average ones. The top organizations also have higher levels of employee satisfaction, and a learning culture is part of that. These are all known facts, documented in many books, some of them decades old.

Some believe that in our line of work we should educate ourselves in our free time – and while I agree with this to a degree, I reject the idea that we should just come prepared, anytime, for anything at work. If an organization doesn’t want us to practice at work at all, it can’t expect we will do it at home, especially later in our lives with families. Practicing solo is also different from practicing as a team.

To wrap it up

Bob’s presentation was much more cultural than technical. This seems to be the trend at conferences nowadays. It is a good shift overall, although not every such presentation is of good quality. This one was among the better ones, definitely on the inspiring side of the spectrum. Bob also organized an exhibition about design and is active in the community – so he has experiences of his own to present on the topic.

One of the questions about design is – do we need revolutionary changes, or will evolutionary ones suffice? Bob was more on the revolutionary side, it seemed to me. I understand the need for these in new areas. But with many existing devices, revolutionary changes make me personally tired – especially phones and web applications.

Productivity is directly tied to the design of things. If we need to relearn how to work with a phone every other year, I don’t call that good progress. Like switching the back and menu buttons? I have two phones, each with those buttons on opposite sides!

Applications come and go and nothing is developed for a reasonable time. Smart TVs are called a failure because people refused them, but producers refuse the idea that their smart hubs (or whatever they call them) suck. They don’t improve the applications there. It was reported years ago that YouTube on Samsung smart TVs does not work with an external keyboard – and it still doesn’t. If we don’t care about improving applications evolutionarily as well, revolution will not bring anything good.

With this I’ll finish this post – mostly about design – and in the next one I’ll talk about government services. Those should also be about design, but they are much more about politics, especially here in Slovakia.

AWS did it again, this time with Lightsail

I still remember how the AWS (Amazon Web Services) re:Invent 2015 conference impressed me. AWS is extremely successful, but they keep pushing innovations at us. I have played with their smallest EC2 instances a couple of times, considering whether or not I want my own server.

This year at re:Invent 2016, happening this week, Andy Jassy, CEO of Amazon Web Services, announced many new services in his keynote. One that resonated with me immediately was Lightsail. It is an extremely affordable server with plenty to offer even in its weakest configuration. For $5 per month you get 512MB of memory, 1 vCPU, a 20GB SSD and 1TB of data transfer. See also this blog post for more information.

With such a reasonable base price I decided to try it – by the way, the first month is free! My plans were nothing big; I actually just wanted to move my homepage there. But you have the flexibility of a Linux box ready for you anytime you fancy.

I spun up my Lightsail instance in no time, chose Amazon Linux just for a change (considering my Ubuntu preference), and with Google’s help I got nginx and PHP 7 up and running very quickly indeed. I used the in-browser SSH, but because it’s not quite the console I like (though it’s close – even Shift+PgUp/Down works as expected), I wanted to put my public key into ~/.ssh/authorized_keys. I didn’t know how to copy/paste it from my computer, but when you press Ctrl+Alt+Shift in the web SSH it opens a sidebar where you can paste anything into the clipboard, and right-click does the rest.
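For the record, once you can paste into the session, the key setup itself boils down to the usual few lines (the key string below is a placeholder for your own public key):

```shell
# Ensure the SSH directory exists with strict permissions, then append
# the public key; sshd (with default StrictModes) can refuse keys when
# these files are too permissive.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "ssh-rsa AAAA...your-public-key... you@laptop" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```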

I really liked this experience, I like the price as well, and the in-browser SSH comes in very handy when you are at a place where port 22 is blocked for whatever reason. (I’m sure it has something to do with compliance, but don’t ask me to understand it.) I’m definitely keeping my new pet server, although I know that cattle is more common now. Especially in the cloud.

Eclipse 5 years later – common formatter quest

It was 5 years ago when I compared IntelliJ IDEA and Eclipse IDE in a series of blog posts (links to all installments at the bottom of that post). I needed to migrate from IDEA to Eclipse, tried it for a couple of months and then found out that I could actually go on with IDEA. More than that, a couple of years later many other developers used IDEA – some in its free Community edition, others invested in the Ultimate (comparison here).

Today I have different needs – I just want to “develop” a formatter for our project that conforms to what we already use in IDEA. I don’t know of any automatic solution. So I’ll install Eclipse, tune its formatter until a reformat of the sources produces no differences in my local version of the project, and then add that formatter to the project for the convenience of our Eclipse developers.

Importing Maven project in Eclipse

I went to their download page, downloaded, started the executable and followed the wizard. No surprises here – Eclipse Mars.2 started. With File/Import… I imported our Maven project – sure, that wizard is overwhelming with all its options, but I managed. Eclipse went on to install some Maven plugin support. This is unknown to IDEA users – but it’s more a problem of the Maven model, which doesn’t offer everything needed for IDE integration, especially when it comes to plugins. It also means that plugins without an additional helper for Eclipse are still not properly supported anyway. In any case, it means that Eclipse will invade your POM with some org.eclipse.m2e plugins. Is that bad? Maybe not – Gradle builds also tend to support IDEs explicitly.

Eclipse definitely needed to restart after this step (but you can say no).

SVN support

We use Subversion to manage our sources. I remembered that this was not built in – and it still isn’t. Eclipse still has this universal-platform feeling; I’m surprised it knows Java and Maven out of the box.

But let’s say Subversion support is not that essential. I wasn’t sure how to add it – so I googled. Subversive is the plugin you may try. How do you install it? Help/Install New Software… does the trick. I don’t know why it doesn’t offer some reasonable default in the Work with input – it selects the software repository, which is not at all clear from “work with”. I chose a URL ending with releases/mars, typed “subv…” into the filter field below – and waited, without any spinner or other notification.

Eventually it indeed found some Subversive plugin…s – many of them, actually. I chose Subversive SVN Team Provider, which was the right thing to do. Confirm, OK, license, you know the drill… restart.

But this still does not give you any SVN options. I don’t remember how IDEA detects SVN on a project and just offers it, but I definitely don’t remember any torture like this. Let’s try the Subversive documentation – I have no problem reading. Or watching the Getting started video linked therein. 🙂

And here we go – perspectives! I wonder how other IDEs manage without reshuffling your whole screen. Whatever, Eclipse == perspectives, let’s live with it. But why should I add a repository when its URL is already somewhere in the .svn directory in the root of the project? When I switched to the SVN Repository Exploring perspective, Eclipse asked for an SVN connector. Oh, so much flexibility. Let’s use SVN Kit 1.8.11: ok, ok, license, ok. Restart again? No, not this time – let’s wait until it installs more and do it all at once. This time, that was the wrong decision.

I followed the video to add the SVN repository, but it failed for lack of a connector. Were I not writing this as I go, I’d probably have remembered that I had installed one. 🙂 But I wasn’t sure – maybe I had cancelled it – so let’s try SVN Kit, sounds good. It failed with “See error log for details.” Ok, I give up, let’s try Native JavaHL 1.8.14 (also by Polarion). This one worked. Restart needed. Oh, … this time I really did restart, as my earlier mistake dawned on me.

I checked the list of installed software, but the SVN plugins didn’t seem to be there – rather confusing. But if you go to Window/Preferences, choose Team/SVN in the tree, then the SVN Connector tab – there you can find the connectors you can use. Sure enough, I had both of them there. My fault, happy ending.

So I added the SVN repository, but as the Getting started video went on, I knew I was in trouble. It doesn’t say how to associate an existing SVN project on my disk with a repo. I found the answer (on Stack Overflow, of course). Where should I right-click so that the Team menu shows an enabled Share project…? In the Package Explorer, of course. I added one project, hooray! Now it shows SVN information next to it. But then I noticed there is also Share projects… – I don’t want to do it one by one, right? Especially when Eclipse does not show the projects in their natural directory structure (this sucks). So I selected all my projects, but now the Team menu didn’t offer any Share at all!

Ok, this would have thrown me off balance at 20, but by now I know that whatever can go wrong will go wrong. It was that one project already associated with SVN – I had to deselect it to make Eclipse understand what I wanted. Strictly speaking there is some logic in hiding that menu item, but as a user I think otherwise. So now we are SVN-ready after all!

I updated the project (not using the other perspective) and got no information on whether it did anything or not – IDEA shows you the updated files without getting in your way. I should have used Synchronize, I know…

Oh, and it’s lunch time, perfect – I really need a break.

Quick Diff

This one comes for free with IDEA; with Eclipse we have to turn it on. It’s that thing showing your changes in a sidebar (or gutter).

Window/Preferences, filter for “quick” and there you see it under General/Editors/Text Editors. Well, it says enabled, but you want to check Show differences in overview ruler too. In my case I also want to change the colours to IDEA-ish yellow/green/red. (Who came up with these Sun/enterprise violetish colours for everything in Eclipse?) What to use as the reference source? Well, after IDEA, the “version on disk” option makes no sense. I chose SVN Working Copy Base in the hope it does what I think it does (shows my actual outgoing changes that are to be committed).

Outgoing changes contain unmanaged files!

Ah yes, I recall something like this. This is the most stupid aspect of the SVN integration – it does not respect how SVN works. After seeing my outgoing changes in the Team Synchronizing perspective (probably my least favourite and the most confusing one for me), I was really afraid to click Team/Commit… But as the three dots indicate, there is one more dialog before any damage is done – and only managed files are checked by default. So commit looks good, but this disrespect of SVN’s underlying principles in the outgoing changes is terrible. Eclipse users will tell you to add the files to ignore, but that is just a workaround – you can then see in the repository all the files people needed to ignore for the sake of the stupid team synchronization. Just don’t auto-add unmanaged files, show some respect!

Eclipse Code Style options

With quick diff ready I can finally start tuning my formatter. There is some good news and some bad news. Well, it’s no news actually – nothing has changed since 2011. Code Style in IDEA is something you can set up for many languages in one place. It also includes imports. In Eclipse, when you filter for “format” in Preferences, you see Formatter under Java/Code Style and more options under XML/XML Files/Editor. These are completely separate parts and you cannot save them as one bundle. For imports you have Java/Code Style/Organize Imports.

In my case it doesn’t make sense to use project-specific settings. What I change now will become workspace-specific, which is OK with me – but only because I don’t want to use Eclipse for any other projects with different settings (that would probably either kill me, or I’d have to put them into separate workspaces).

And then we have the Java/Code Style/Clean Up configuration (this is what Source/Clean Up… runs) and Java/Editor/Save Actions to configure and put into the project(s) as well. Plenty of stuff you need to take care of separately.

Line wrapping and joining

One of the most important things we do with our code in terms of readability is line wrapping – and one thing I expect from any formatter is an option to respect my line breaks. Eclipse offers “Never join lines” on the Line Wrapping and Comments tabs. It seems you have to combine it with the “Wrap where necessary” option for most items on the Line Wrapping tab, but it still does not allow you to split a line arbitrarily – it joins the lines back on reformat, to be precise.

Sometimes I want to say “wrap it HERE” and keep it that way. In IDEA I can wrap before the = in an assignment or after it – and it respects both. I don’t know about any specific line-break/wrapping option for this particular case. Eclipse respects the wrap after, but not the one before – it re-joins the lines in the latter case. Sure, I don’t really mind, as I prefer the break after = as well. But obviously, Eclipse does not respect my line breaks as much as IDEA does.
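To make the two wrapping styles concrete, here is a tiny compilable sketch (the class and variable names are made up for illustration):

```java
public class WrapStyles {
    static String joined() {
        // Wrap after '=' – both IDEA and Eclipse preserve this on reformat:
        String afterEquals =
                "wrapped after the assignment operator";
        // Wrap before '=' – IDEA preserves it, Eclipse joins the lines back:
        String beforeEquals
                = "wrapped before the assignment operator";
        return afterEquals + " / " + beforeEquals;
    }

    public static void main(String[] args) {
        System.out.println(joined());
    }
}
```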

Just to be absolutely clear, I don’t mind when a standalone { is joined to the previous line when the rules say so. There are good cases when control structures should be reformatted even when wrapped – but these revolve mostly around parentheses, braces or brackets.

While investigating the “Never join lines” behaviour I also noticed people complaining about the Eclipse Mars formatter compared to the Luna one. Do they rewrite the formatters all the time, or do they just make them better? Do they have good tests for them? I don’t know. Sure, formatters are beasts and we all want different things from them.

Exporting Eclipse settings

Let’s say you select the top right link Configure Project Specific Settings… in particular settings (e.g. Organize Imports). This opens the Project Specific Configuration dialog – but do I know what its scope is when I select my top-level project? Actually, I don’t even see my top-level project (the parent POM in our case), only a subset of my open projects based on who-knows-what criteria. That is ridiculous.

I ended up exporting the settings using the Export All… button – but you have to do it separately for every section you need. In our case that’s Clean Up, Formatter, Organize Imports and Save Actions. I simply don’t know how to do it better. I’ll add these exported XML configs into SVN, but everybody has to import them manually.

IDEA with its project (where a project really is a project in the common sense) and modules (which may be Maven “projects”, but are in fact just parts of the main project) makes much more sense. Also, when you copy the code style into the project in IDEA, you can be sure we’re talking about all of the code style. If I add it to our SVN, everybody can use it.

You can also export the Code Style as XML – but a single one. Or you can export all of the IDE settings and choose (to a degree) which portions you want to import. While this is also a manual solution, you need to do it only once, with a single exported config.

(This situation is still not as bad as with key bindings, where after all these years you still can’t create your own new Scheme in a normal fashion inside the Eclipse IDE.)

Conclusion

Maybe the way of the Go language, where formatting is part of the standard toolchain, is the best – although if it meant joining lines anywhere, I definitely wouldn’t like it either.

I can bash the IDEA formatter a bit too. For me it’s perfectly logical to prefer annotations for fields and methods on a separate line, but NOT to wrap them when they are already on the same line. Just keep the damn lines a bit different when I want it this way! Something like a soft format with a preferred way of formatting new code. This is currently not possible all the way. I can set the IDEA formatter so that it keeps annotations on separate lines and also respects them on the same line – but then all new code gets annotations on the same line by default.

This concept of combining “how I prefer it” with “what I want preserved even if it’s not the way I’d do it” is generally not how formatters work now. I believe it would be a great way for them to work. It can partially be solved by formatting only changed lines, but that has its own drawbacks – especially when the indentation is not unified yet.

How I unknowingly deviated from JPA

In a post from January 2015 I wrote about the possibility of using plain foreign key values instead of @ManyToOne and @OneToOne mappings in order to avoid eager fetch. It built on JPA 2.1, as it needed the ON clause not available before, and on EclipseLink, which is the reference implementation of the specification.

To be fair, there are ways to make to-one relationships lazy, sure, but they are not portable and JPA does not guarantee them. They rely on bytecode magic and a properly configured ORM. Otherwise lazy to-one mapping wouldn’t have spawned so many questions around the internet. And that’s why we decided to try it without them.

Total success

We applied this style to our project and we liked it. We didn’t have to worry about random fetch cascades – in complex domain models these often trigger many dozens of fetches. Sure, it can be “fixed” with the second-level cache, but that’s another thing – we could stop worrying about the cache too. Now we could think about caching the things we wanted, not caching everything possibly reachable even when we don’t need it. The second-level cache should not exist for the sole reason of making flawed eager fetch bearable.

When we needed a Breed for a Dog we could simply do:

Breed breed = em.find(Breed.class, dog.getBreedId());

Yes, it is noisier than dog.getBreed(), but explicit solutions come with a price. We can still implement the method on the entity, but it must somehow access the entityManager – directly or indirectly – and that adds an infrastructure dependency and makes the entity more active-record-ish. We did it, no problem.
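The whole pattern can be sketched in plain Java like this – a minimal sketch where a Map stands in for the persistence context, so in the real code the lookup would be em.find(Breed.class, dog.getBreedId()); the names follow the post’s Dog/Breed example:

```java
import java.util.HashMap;
import java.util.Map;

// "Keep the foreign key, drop the relation" style, without any JPA machinery.
public class FkStyleDemo {
    static class Breed {
        final long id;
        final String name;
        Breed(long id, String name) { this.id = id; this.name = name; }
    }

    static class Dog {
        final long breedId; // plain FK value instead of a @ManyToOne Breed field

        Dog(long breedId) { this.breedId = breedId; }

        long getBreedId() { return breedId; }
    }

    static final Map<Long, Breed> BREEDS = new HashMap<>();

    static Breed findBreed(long id) {
        return BREEDS.get(id); // em.find(Breed.class, id) in the JPA version
    }

    public static void main(String[] args) {
        BREEDS.put(1L, new Breed(1L, "Collie"));
        Dog dog = new Dog(1L);
        // The fetch happens only when we explicitly ask for it:
        Breed breed = findBreed(dog.getBreedId());
        System.out.println(breed.name);
    }
}
```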

Now this can be done in JPA of any version and probably with any ORM. The trouble is with queries. They require an explicit join condition, and for that we need ON. For inner joins WHERE is sufficient, but any outer join obviously needs the ON clause. We don’t have a dog.breed path to join, we need to join breed ON dog.breedId = breed.id. But this is no problem really.
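For illustration, the two query flavours side by side as JPQL strings (entity and field names follow the post’s example; these are just string constants, not executed against any provider here):

```java
public class FkQueries {
    // Inner join: a WHERE condition on the plain FK column is enough.
    static final String INNER =
            "select d, b from Dog d, Breed b where d.breedId = b.id";

    // Outer join: only the ON clause can carry the join condition,
    // because there is no d.breed path to navigate.
    static final String OUTER =
            "select d, b from Dog d left join Breed b on d.breedId = b.id";

    public static void main(String[] args) {
        System.out.println(INNER);
        System.out.println(OUTER);
    }
}
```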

We really enjoyed this style while still benefiting from many perks of JPA like convenient and customizable type conversion, unit of work pattern, transaction support, etc.

I’ll write a book!

Having enough experience, and not knowing I was already outside the scope of the JPA specification, I decided to conjure up a neat little book called Opinionated JPA. The name says it all: it was meant to be a book that adds a bit to the discussion about how to use and tweak JPA in case eager fetches really backfire on you and you don’t mind tuning it down a bit. It was meant to be a book about fighting with JPA less.

Alas, it backfired on me in the most ironic way. I wrote a lot of material around the topic before I got to the core part. Sure, I felt I should not postpone it too long, but I wanted to build an argument, do the research and so on. What never occurred to me was that I should test the approach with some other JPA provider too. And that’s what is so ironic.

In recent years I have learned a lot about JPA. I have the JPA specification open every other day to check something, I cross-reference bugs between EclipseLink and Hibernate, I try to find the final argument in the specification itself – I really felt good at all this. But I never checked whether a query with left join breed ON dog.breedId = breed.id works in anything other than EclipseLink (the reference implementation, mind you!).

Shattered dreams

It does not. Today, I can even add “obviously”. The JPA 2.1 specification defines joins in section 4.4.5 as follows (selected important grammar rules):

join::= join_spec join_association_path_expression [AS] identification_variable [join_condition]
join_association_path_expression ::=
  join_collection_valued_path_expression |
  join_single_valued_path_expression |
  TREAT(join_collection_valued_path_expression AS subtype) |
  TREAT(join_single_valued_path_expression AS subtype)
join_spec::= [ LEFT [OUTER] | INNER ] JOIN
join_condition ::= ON conditional_expression

The trouble here is that breed in left join breed does not conform to any alternative of the join_association_path_expression.

Of course my life goes on, I’ve got a family to feed, I’ll ask my colleagues for forgiveness and try to rebuild my professional credit. I can even say: “I told myself so!” – because the theme that JPA can surprise you again and again keeps repeating in my story.

Opinionated JPA revisited

What does it mean for my opinionated approach? Well, it works with EclipseLink! I’ll just drop JPA from the equation. I tried to be JPA-pure for many years, but even then I never ruled out proprietary ORM features as “evil”. I don’t believe in an easy JPA provider switch anyway. You can stick to the most basic JPA elements and keep the ability to switch, but I’d rather utilize the chosen library better.

If you switch from Hibernate, where to-one relationships seem to work lazily when you ask for it, to EclipseLink, you will need some non-trivial tweaking to get there. If the JPA spec mandated lazy support instead of defining it as a mere hint, I wouldn’t mess around with this topic at all. But I understand the topic goes deeper, as Java language features don’t allow it easily. With an explicit proxy wrapping the relation it is possible, but then we’re spoiling the domain. Still, with bytecode manipulation being rather ubiquitous now, I think they could have done it and removed this vague point once and for all.

Not to mention the very primitive alternative – let the user explicitly choose, at the moment of usage, that he does not want to cascade eager fetches. He’d get a Breed object when he calls dog.getBreed(), but this object would not be managed and would contain only the breed’s ID – exactly what the user asked for. There is no room for confusion here, and it at least gives us the option to break the deadly fetch cascade.
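A sketch of that primitive alternative – purely illustrative plain Java, since no such JPA feature exists: the getter returns an unmanaged stub carrying only the FK value, nothing fetched.

```java
public class BreedStubDemo {
    static class Breed {
        Long id;
        String name; // stays null in the stub – nothing was fetched
    }

    static class Dog {
        Long breedId;

        Dog(Long breedId) { this.breedId = breedId; }

        // Instead of cascading an eager fetch, return an unmanaged stub
        // that carries only the ID – exactly what the FK column contains.
        Breed getBreed() {
            Breed stub = new Breed();
            stub.id = breedId;
            return stub;
        }
    }

    public static void main(String[] args) {
        Breed breed = new Dog(7L).getBreed();
        System.out.println(breed.id + ", name=" + breed.name);
    }
}
```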

And the book?

Well, the main argument is now limited to EclipseLink and not to JPA. Maybe I should rename the book to Opinionated ORM with EclipseLink (and Querydsl). I wouldn’t like to leave it at the level of an essay about JPA and various “horror stories”, although even that may help people decide for or against it. If you don’t need ORM after all, use something different – like Querydsl over SQL, or alternatives like jOOQ.

I’ll probably still describe this strategy, but not as the main point anymore. The main point now is that JPA is a very strict ORM, limited in options to control its fetching behavior. These options are delegated to JPA providers, and relying on them may lock you in nearly as much as not being JPA compliant at all.

Final concerns

But even if I accept that I’m stuck with an EclipseLink feature… is it a feature? Wouldn’t it be better if the reference implementation strictly complained about invalid JPQL, just like Hibernate does? Put aside the thought that Hibernate is a perfect JPA 2.1 implementation – it does not implement other things and is not strict in different areas.

What if EclipseLink reconsiders and removes this extension? I doubt the next JPA will support this type of path after JOIN, although that would save my butt (which is not so important after all). I honestly believed I was still on the standard motorway, just a little bit on the shoulder perhaps. Now I know I’m away from any mainstream… and the only way back is to re-introduce all the to-one relations into our entities – which first kills the performance, then we turn on the cache for everything, which hopefully does not kill the memory, but definitely does not help. Not to mention that we actually need a distributed cache across multiple applications over the same database.

In the most honest attempt to get out of the quagmire before getting stuck deep in it, I inadvertently found myself neck-deep already. ORM indeed is The Vietnam of Computer Science.

Last three years with software

A long time ago I decided to blog about my technology struggles – mostly with software, but also with consumer devices. I don’t know why it happened on Christmas Eve, though. Two years later I repeated the format. And here we are, three years after that – so the next post can be expected in four years, I guess. Actually, I split this one into two – one for software, mostly based on professional experience, and another for consumer technology.

Without further ado, let’s dive into it… well… the dive will obviously be pretty shallow. Let’s skim the stuff I worked with, stuff I like and some I don’t.

Java case – Java 8 (verdict: 5/5)

This time I’m adding my personal rating right into the header – a little change from the previous post, where it was at the end.

I love Java 8. Sure, it’s not Scala or anything even more progressive, but in the context of Java’s philosophy it was a huge leap, and lambdas especially really changed my life. BTW: check out this interesting Erik Meijer talk about category theory and (among other things) how it relates to Java 8 and its method references. Quite fun.

Having worked with Java 8 for 17 months now, I can’t imagine going back. Not only because of lambdas and streams and related details like Map.computeIfAbsent, but also because of the date and time API, default methods on interfaces – and the list could go on.
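A quick tour of the bits mentioned above, as one compilable snippet (the example data is made up):

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class Java8Tour {
    static Map<Integer, List<String>> groupByLength(List<String> words) {
        Map<Integer, List<String>> byLength = new HashMap<>();
        for (String w : words) {
            // computeIfAbsent replaces the old get-or-create-then-put dance:
            byLength.computeIfAbsent(w.length(), k -> new ArrayList<>()).add(w);
        }
        return byLength;
    }

    public static void main(String[] args) {
        // Lambdas, streams and method references:
        List<String> upper = Arrays.asList("ant", "maven", "gradle").stream()
                .filter(s -> s.length() > 3)
                .map(String::toUpperCase)
                .collect(Collectors.toList());
        System.out.println(upper); // [MAVEN, GRADLE]

        System.out.println(groupByLength(Arrays.asList("a", "bb", "cc")));

        // The new date/time API:
        LocalDate nextDay = LocalDate.of(2015, 12, 24).plusDays(1);
        System.out.println(nextDay); // 2015-12-25
    }
}
```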

JPA 2.1 (no verdict)

ORM is an interesting idea and I can claim around 10 years of experience with it, although the term itself is not that important. I read books about it in my quest to understand it (many programmers don’t bother). The idea is kinda simple, but it has many tweaks – mainly when it comes to relationships. JPA 2.1 as an upgrade is good and I like where things are going, but I like the concept itself less and less over time.

My biggest gripe is the little control over “to-one” loading, which is difficult to make lazy (more like impossible without some nasty tricks) and can result in chain loading even if you are not interested in the related entity at all. I think there is a reason why things like jOOQ cropped up (although I personally don’t use it). There are some tricks to get rid of these problems, but they come at a cost. Typically: don’t map to-one relationships, keep them as plain foreign key values. You can always fetch the related entity with a query.

That leads to the bottom line – be explicit, it pays off. Sure, it doesn’t work universally, but any time I leaned towards an explicit solution I felt a lot of relief from the struggles I had gone through before.

I don’t rate JPA, because I try to rely on fewer and fewer ORM features. JPA is not a bad effort, but it is so Java EE-ish – it does not support modularity, and the providers are not easy to change anyway.

Querydsl (5/5)

And when you work with JPA queries a lot, get some help – I can only recommend Querydsl. I’ve been recommending this library for three years now – it has never failed me, never let me down, and it has often amazed me. This is how the criteria API should have looked.

It has a strong metamodel that allows you to do crazy things. We based a kinda universal filtering layer on it, whatever the query is. We even filter queries with joins – even on the joined fields. But again, we can do that because our queries and their joins are not ad-hoc, they are explicit. 🙂 Because you should know your queries, right?

Sure, Querydsl is not perfect, but it is as powerful as JPQL (or as limited, for that matter) and more expressive than the JPA criteria API. Bugs are fixed quickly (personal experience), the developers care… what more to ask?

Docker (5/5)

Docker stormed into our lives – practically for some, at least through the media for others. We don’t use it that much, because lately I’m bound to Microsoft Windows and SQL Server. But I experimented with it a couple of times for development support – we ran Jenkins in a container, for instance. And I’m watching it closely, because it rocks and will rock. Not sure what I’m talking about? Just watch the DockerCon 2015 keynote by Solomon Hykes and friends!

Sure – their new Docker Toolbox accidentally screwed up my Git installation, so I’d rather install Linux in VirtualBox and test Docker inside it without polluting my Windows even further. But these are just minor problems in this (r)evolutionary tidal wave. And one simply must love the idea of immutable infrastructure – especially when demonstrated by someone like Jérôme Petazzoni (for the merit itself, not that he’s my idol beyond the professional scope :-)).

Spring 4 and on (4/5)

I have been aware of Spring since the dawn of microcontainers – and Spring emerged victorious (sort of). A friend of mine once mentioned how much he was impressed by Rod Johnson’s presentation about Spring many years ago. How structured his talk and speech were – the story about how he disliked all those logs pouring out of your EE application server… and that’s how Spring was born (sort of).

However, my real exposure to Spring only started in 2011 – but it was very intense. And again, I have read more about it than most of my colleagues. And just like with JPA – the more I read, the less I know, or so it seems. Spring is big. Start some typical application and read the logs – and you can see the EE of the 2010s (sort of).

It’s not that I don’t like Spring, but I guess its authors (and there are many of them now) simply can’t see anymore what a beast they have created over the years. Sure, there is Spring Boot, which reflects all the current trends – don’t deploy into a container but start the container from within, plus all its automagic features, monitoring, clever defaults and so on. But that’s it. You don’t do more yourself, but you’d better know about it all. Or not? Recently I got to one of the newer Uncle Bob articles – Make the Magic go away. And there is undeniably much to it.

Spring’s developers do their best, but the truth is that many developers adopt Spring just because “it simply works” – while they don’t know how, and very often it does not (sort of). You actually should know more about it – at least some basics – to be really effective with it. Of course, this magic problem is not limited to Spring (or JPA), but these two are the leaders of the “it simply works” movement.

However you look at it, it’s still “enterprise” – and that means complexity. Sometimes essential, but mostly accidental. Well, that’s also part of the Java landscape.

Google Talk (RIP)

And now for this post’s biggest letdown. Google stopped supporting their beautifully simple chat client without any reasonable replacement. The Chrome application just doesn’t seem right to me – and it genuinely annoys me with its chat icon that hangs on the desktop, sometimes over my focused application, and I can’t relocate it easily… simply put, it does not behave like a normal application. Which means it behaves badly.

I switched to Pidgin, but there are issues. Pidgin sometimes misses a message in the middle of a conversation – that was the biggest surprise. I double-checked: when someone reportedly asked me something again, I went to my Gmail account and indeed saw the message in the Chat archive, but not in my client. And if I get messages while offline, nothing notifies me.

I activated the chat in Gmail after all (against my wishes, though), merely to be able to see any missing messages. But sadly, the situation with Google Talk/chat (or Hangouts, I don’t care) is dire if you expect a normal desktop client. 😦

My Windows toolset

Well – now away from Java, let’s hop onto my typical developer’s Windows desktop. I have mentioned some of my favourite tools before, some of them a couple of times, I guess. So let’s do it quickly – bullet style:

  • Right after some “real browser” (my first download on fresh Windows) I actually download Rapid Environment Editor. Setting Windows environment variables suddenly feels normal again.
  • Git for Windows – even if I didn’t use git itself, just for its bash – it’s worth it…
  • …but I still complement the bash with GnuWin32 packages for whatever is missing…
  • …and run it in better console emulator, recently it’s ConEmu.
  • Notepad2 binary.
  • And the rest like putty, WinSCP, …
  • Also, on Windows 8 and 10 I can’t imagine living without Classic Shell. Windows 10 is a bit better, but its Start menu is simply unusable for me – the classic Start menu was so much faster with the keyboard!

As a developer I also use some other languages and tools, mostly JVM based:

  • Ant, Maven, Gradle… obviously.
  • Groovy, of course, probably the most popular alternative JVM language. Not to mention that groovysh is a good REPL until Java 9 arrives (recently delayed beyond 2016).
  • VirtualBox, recently joined by Vagrant and hopefully also something like Chef/Puppet/Ansible. And this leads us to my plans.

Things I want to try

I have always been a friend of automation. I’ve been using Windows for many years now, but my preference for UNIX tools is obvious. Try to download and spin up a virtual machine for Windows and for Linux and you’ll see the difference. Linux just works, and tools like Vagrant know where to download the images, etc.

With Windows, people are not even sure how – or whether – they can publish prepared images (talking about development only, of course), because nobody can really understand the licenses. Microsoft has started to offer prepared Windows virtual machines – primarily for web development though, no server-class OS (not that I appreciate Windows Server anyway). They even offer Vagrant boxes, but try to download one and run it as-is. For me, Vagrant refused to connect to the started VirtualBox machine, any reasonable instructions are missing (nothing Vagrant-specific is in the linked instructions), no Vagrantfile is provided… honestly, quite a lame attempt at making my life easier. I still appreciate the virtual machines, though.

And then there are the expiration periods… I just can’t imagine preferring any Microsoft product/platform for development (let alone production, obviously). The whole culture of automation on Windows is just completely different – read anything from “nonexistent for many” through “very difficult” to “made artificially restricted”. No wonder many Linux people can script and so few Windows guys can. Licensing terms are to blame as well. And virtual machine sizes for Windows are ridiculous too – although Microsoft is reportedly trying to do something in this field and offer a reasonably small base image for containerization.

Anyway, back to the topic. Automation is what I want to improve. I’m doing it anyway, but recently the progress has not been as good as I wished. I fell behind with Gradle, I didn’t use Docker as much as I’d like, etc. Well – but life is not only work, is it? 😉

Conclusion

The good thing is that there are many tools available for Windows that make a developer’s (and former Linux user’s) life so much easier. And if you look at Java and its whole ecosystem, it seems to be alive and kicking – so everything looks good on this front as well.

Maybe you ask: “What does 5/5 mean anyway? Is it perfect?” Well, probably not, but at least it means I’m satisfied – happy even! Without happiness it’s not a 5, right?

Exploring the cloud with AWS Free Tier (2)

In the first part of this “diary” I found a cloud provider for my developer’s testing needs – Amazon’s AWS. This time we will mention some hiccups one may encounter when performing basic operations around an EC2 instance. Finally, we will prepare a Docker image for ourselves, although this is not really AWS specific – at least not in our basic scenario.

Shut it down!

When you shut down your desktop computer, you see what it does. I’ve been running Windows for some years now, although I was a Linux guy before (blame gaming and home music recording). On servers, no doubt, I prefer Linux every time. But I honestly don’t remember what happens if I enter the shutdown now command without further options.

If I see the computer going on and on although my OS is already down, I just turn it off and remember to use the -h switch next time. But when “my computer” runs far away and only some dashboard shows what is happening, you simply don’t know for sure. There is no room for “mechanical sympathy”.

Long story short – always use shutdown now -h on your AMI instance if you really want to stop it. Of course, check the instance’s Shutdown Behavior setting – by default it’s Stop, and that’s probably what you want (Terminate would delete the instance altogether). With the magical -h you’ll soon see the state of the instance go through stopping to stopped – without it, the instance just hangs there running, but not really reachable.

Watch those volumes

When you shut down your EC2 instances they stop consuming “instance-hours”. On the other hand, if you spin up 100 t2.micro instances and run them for an hour, you’ll spend 100 hours of your 750-hour monthly limit. It’s easy to understand this way of “spending”.

However, volumes (the disk space for your EC2 instance) work a bit differently. They are reserved for you and they are billed for the whole time you have them available – whether the instance runs or not. Also, how much of the space you actually use does NOT matter; your reserved space (typically 8 GiB for a t2.micro instance if you use the defaults) is what counts. Two sleeping instances for a whole month would not hit the limit, but three would – and the 4 GiB above the 20 GiB/month would be billed to you (prorated by the time you are above the limit as well).
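The arithmetic, as a throwaway sketch – the 20 GiB/month free-tier figure is taken from the paragraph above, so check the current limits before relying on it:

```java
public class EbsMath {
    // GiB provisioned above the free tier for a full month of availability.
    // Usage inside the volume doesn't matter, only the reserved size does.
    static int overageGib(int instances, int gibPerVolume, int freeTierGib) {
        return Math.max(0, instances * gibPerVolume - freeTierGib);
    }

    public static void main(String[] args) {
        System.out.println(overageGib(2, 8, 20)); // 16 GiB reserved -> 0 over
        System.out.println(overageGib(3, 8, 20)); // 24 GiB reserved -> 4 over
    }
}
```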

In any case, the Billing Management Console is your friend here, and AWS definitely provides you with all the data necessary to see where you are with your usage.

Back to Docker

I wanted to play with Docker before I decided to couple it with the cloud exploration. AWS provides the so-called EC2 Container Service (ECS) to give you more power when managing containers, but today we will not go there. We will create a Docker image manually, right on our EC2 instance. I’d rather take baby steps than skip some “maturity levels” without understanding the basics.

When I want to “deploy” a Java application in a container, I first want to create some Java base image for it. So let’s connect to our EC2 instance and do it.

Java 32-bit base image

Let’s create our base image for Java applications first. Create a directory (any name will do, but something like java-base sounds reasonable) and this Dockerfile in it:

FROM ubuntu:14.04
MAINTAINER virgo47

# We want wget in any case; update the package lists first
RUN apt-get -qqy update
RUN apt-get -qqy install wget

# For 32-bit Java we need to enable 32-bit binaries
RUN dpkg --add-architecture i386
RUN apt-get -qqy update
RUN apt-get -qqy install libc6:i386 libncurses5:i386 libstdc++6:i386

ENV HOME /root

# Install 32-bit JAVA
WORKDIR $HOME
RUN wget -q --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u60-b27/jdk-8u60-linux-i586.tar.gz
RUN tar xzf jdk-8u60-linux-i586.tar.gz
ENV JAVA_HOME $HOME/jdk1.8.0_60
ENV PATH $JAVA_HOME/bin:$PATH

Then to build it (you must be in the directory with Dockerfile):

$ docker build -t virgo47/jaba .

Jaba stands for “java base”. And to test it:

$ docker run -ti virgo47/jaba
root@46d1b8156c7c:~# java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) Client VM (build 25.60-b23, mixed mode)
root@46d1b8156c7c:~# exit

My application image

Now I want to run my HelloWorld application in that base image. That means creating another image based on virgo47/jaba. Create another directory (myapp) and the following Dockerfile:

FROM virgo47/jaba
MAINTAINER virgo47

WORKDIR /root/
COPY HelloWorld.java ./
RUN javac HelloWorld.java
CMD java HelloWorld

Easy enough, but before we can build it we need that HelloWorld.java too. I guess anybody can do it, but for the sake of completeness:

public class HelloWorld {
        public static void main(String... args) {
                System.out.println("Hello, world!");
        }
}

Now let’s build it:

$ docker build -t virgo47/myapp .

And to test it:

$ docker run -ti virgo47/myapp
Hello, world!

So it actually works! But we should probably deliver a JAR file directly into the image build instead of compiling the sources during the build. Can we automate it? Sure we can, but maybe in another post.

To wrap up…

I hope I’ll get to Amazon’s ECS later, because the steps above work and serve as Docker(file) practice, but they definitely are not real-world material. At the very least you may run it all from your local machine as a combination of scp/ssh, instead of creating Dockerfiles and other sources on the remote machine – because that doesn’t make sense, of course. We need to build the Docker image as part of our build process, publish it somewhere and just download it to the target environment. But let’s step away from Docker and back to AWS.

In the meantime one big AWS event occurred – AWS re:Invent 2015. I have to admit I wasn’t aware of it at all until now; I just got email notifications about the event and the keynotes as an AWS user. I am aware of other conferences – I was happy enough to attend some European Sun TechDays (how I miss those :-)), TheServerSide Java Symposiums (miss those too) and one DEVOXX – but just judging from the videos, re:Invent was really mind-blowing.

I don’t know what more to say, so I’m over and out for now. It will probably take me another couple of weeks to gather more concrete impressions of AWS, but I plan to add a third part – hopefully again loosely coupled with Docker.