Eclipse 5 years later – common formatter quest

It was five years ago that I compared IntelliJ IDEA and Eclipse IDE in a series of blog posts (links to all installments at the bottom of that post). I needed to migrate from IDEA to Eclipse, tried it for a couple of months and then found out that I could actually go on with IDEA. More than that, a couple of years later many other developers used IDEA – some in its free Community edition, others invested into the Ultimate edition (comparison here).

Today I have different needs – I just want to “develop” a formatter for our project that conforms to what we already use in IDEA. I don’t know about any automatic solution. So I’ll install Eclipse, tune its formatter until reformatting the sources produces no differences in my local copy of the project, and then I’ll just add that formatter to the project for the convenience of our Eclipse developers.

Importing a Maven project in Eclipse

I went to their download page, downloaded, started the executable and followed the wizard. There were no surprises here; Eclipse Mars.2 started. With File/Import… I imported our Maven project – sure, that wizard is overwhelming with all its options, but I handled it. Eclipse went on with installing some Maven plugin support. This is unknown to IDEA users – but it’s more a problem of the Maven model, which doesn’t offer everything an IDE needs for integration, especially when it comes to plugins. It also means that plugins without an additional Eclipse helper are still not properly supported anyway. In any case, it means that Eclipse will invade your POM with some org.eclipse.m2e plugins. Is it bad? Maybe not – Gradle builds also tend to support IDEs explicitly.

Eclipse definitely needed to restart after this step (but you can say no).

SVN support

We use Subversion to manage our sources. I remembered that this was not built-in – and it still is not. Eclipse still has this universal-platform feeling; I’m surprised it knows Java and Maven out of the box.

But let’s say Subversion is not that essential. I wasn’t sure how to add it – so I googled. Subversive is the plugin you may try. How do I install it? Help/Install New Software… does the trick. I don’t know why it does not offer some reasonable default in the Work with input – it chooses the software repository, which is not at all clear from the label “work with”. I chose a URL ending with releases/mars, typed “subv…” into the filter field below and – waited, without any spinner or other notification.

Eventually it indeed found some Subversive plugin… or rather plugins – many of them, actually. I chose Subversive SVN Team Provider, which was the right thing to do. Confirm, OK, license, you know the drill… restart.

But this still does not give you any SVN options. I don’t remember how IDEA detects SVN on a project and just offers it, but I definitely don’t remember any torture like this. Let’s try the Subversive documentation – I have no problem reading. Or watching the Getting started video linked therein. 🙂

And here we go – perspectives! I wonder how other IDEs manage without reshuffling your whole screen. Whatever, Eclipse == perspectives, let’s live with it. But why should I add a repository when its URL is already somewhere in the .svn directory in the root of the project? Switching to the SVN Repository Exploring perspective, Eclipse asked for an SVN connector. Oh, so much flexibility. Let’s use SVN Kit 1.8.11, ok, ok, license, ok. Restart again? No, not this time, let’s wait until it installs more and do it all at once. This time that was the wrong decision.

I followed the video to add the SVN repository, but it failed because it had no connector. Were I not writing this as I go, I’d probably have remembered that I had installed one. 🙂 But I wasn’t sure – maybe I had cancelled it – so let’s try SVN Kit, sounds good. It failed with “See error log for details.” Ok, I give up, let’s try Native JavaHL 1.8.14 (also by Polarion). This one works. Restart needed. Oh, … this time I really restarted, as my earlier mistake dawned on me.

I checked the list of installed software, but the SVN plugins don’t seem to be there – this is rather confusing. But if you go to Windows/Preferences, choose Team/SVN in the tree, then the SVN Connector tab – there you can find the connectors you can use. Sure enough, I had both of them there. My fault, happy ending.

So I added the SVN repository, but as the Getting started video went on, I knew I was in trouble. It doesn’t say how to associate an existing SVN project on my disk with a repository. I found the answer (on Stack Overflow, of course). Where should I right-click so that the Team menu shows an enabled Share project…? In Package Explorer, of course. I added one project, hooray! Now it shows SVN information next to it. But then I noticed there is also Share projects… – I don’t want to do it one by one, right? Especially when Eclipse does not show the projects in their natural directory structure (this sucks). I selected all my projects, but now the Team menu didn’t offer any Share at all!

Ok, this would have thrown me out of balance at 20, but now I know that whatever can go wrong will go wrong. That one project was already associated with SVN – I had to deselect it to let Eclipse understand what I wanted. Strictly speaking, there is some logic in removing that menu item, but as a user I think otherwise. So now we are SVN-ready after all!

I updated the project (without switching to another perspective) – no information whether it did anything or not. IDEA shows you the updated files without getting in your way. I should have used Synchronize, I know…

Oh, and it’s lunch time, perfect – I really need a break.

Quick Diff

This one comes for free with IDEA; in Eclipse we have to turn it on. It’s the thing showing your changes in a sidebar (or gutter).

Windows/Preferences, filter for “quick” and there you see it under General/Editors/Text Editors. Well, it says enabled, but you want to check Show differences in overview ruler too. In my case I also want to change the colours to IDEA-ish yellow/green/red. (Who came up with these Sun/enterprise violetish colours for everything in Eclipse?) What to use as the reference source? Well, coming from IDEA the “version on disk” option makes no sense. I chose SVN Working Copy Base in the hope it does what I think it does (shows my actual outgoing changes that are to be committed).

Outgoing changes contain unmanaged files!

Ah yeah, I recall something like this. This is the most stupid aspect of the SVN integration – it does not respect how SVN works. After seeing my outgoing changes in the Team Synchronizing perspective (probably my least favourite and the most confusing one for me) I was really afraid to click Team/Commit… But as the three dots indicate, there is one more dialog before any damage is done – and only managed files are checked by default. So commit looks good, but treating unmanaged files as outgoing changes disregards SVN’s underlying principles and is terrible. Eclipse users will tell you to add the files to ignore, but that is just a workaround – you can then see in the repository all the files people needed to ignore for the sake of this stupid team synchronization. Just don’t auto-add unmanaged files, show some respect!

Eclipse Code Style options

With quick diff ready I can finally start tuning my formatter. There is some good news and some bad news. Well, it’s not really news – nothing has changed since 2011. Code Style in IDEA is something you set up in one place for many languages, and it also includes imports. In Eclipse, when you filter for “format” in Preferences, you see Formatter under Java/Code Style and more options under XML/XML Files/Editor. These are completely separate parts and you cannot save them as one bundle. For imports you have Java/Code Style/Organize Imports.

In my case it doesn’t make sense to use project-specific settings. What I change now will become workspace-specific, which is OK with me – but only because I don’t want to use Eclipse for any other projects with different settings (that would probably either kill me or force me to put them into separate workspaces).

And then we have Java/Code Style/Clean Up (this is what Source/Clean Up… applies) and Java/Editor/Save Actions to configure and put into the project(s) as well. Plenty of stuff you need to take care of separately.

Line wrapping and joining

One of the most important things we do with our code in terms of readability is line wrapping – and one thing I expect from any formatter is an option to respect my line breaks. Eclipse offers “Never join lines” on the Line Wrapping and Comments tabs. It seems you have to combine it with the “Wrap where necessary” setting for most items on the Line Wrapping tab, but it still does not allow you to split a line arbitrarily – to be precise, it joins the lines back on reformat.

Sometimes I want to say “wrap it HERE” and keep it that way. In IDEA I can wrap before the = in an assignment, or after it – and IDEA respects both. I don’t know about any specific line-break/wrapping option for this particular case. Eclipse respects the wrap after the =, but not the one before – it re-joins the lines in the latter case. Sure, I don’t mind, I prefer the break after = as well. But obviously, Eclipse will not respect my line breaks as much as IDEA does.
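
A made-up example of what I mean (the names are illustrative only):

// Break after '=' – both IDEA and Eclipse keep this on reformat:
Map<String, List<Item>> itemsByCategory =
    loadItemsByCategory(config);

// Break before '=' – IDEA keeps it, Eclipse joins the lines back together:
Map<String, List<Item>> itemsByCategory
    = loadItemsByCategory(config);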

Just to be absolutely clear, I don’t mind when a standalone { is joined to the previous line when rules say so. There are good cases when control structures should be reformatted even when wrapped – but these are cases revolving mostly around parentheses, braces or brackets.

While investigating the “Never join lines” behaviour I also noticed that people complain about the Eclipse Mars formatter compared to the Luna one. Do they rewrite formatters all the time or do they just make them better? Do they have good tests for them? I don’t know. Sure, formatters are beasts and we all want different things from them.

Exporting Eclipse settings

Let’s say you select the Configure Project Specific Settings… link in the top right corner of a particular settings page (e.g. Organize Imports). This opens the Project Specific Configuration dialog – but do I know what its scope is when I select my top-level project? Actually, I don’t even see my top-level project (the parent POM in our case), only a subset of my open projects based on I-don’t-know-what criteria. That is ridiculous.

I ended up exporting settings using the Export All… button – but you have to do it separately for each section you want. In our case that’s Clean Up, Formatter, Organize Imports and Save Actions. I simply don’t know how to do it better. I’ll add these exported XML configs into SVN, but everybody has to import them manually.

IDEA, with its project (where a project really is a project in the common sense) and modules (which may be Maven “projects”, but are in fact just parts of the main project), makes much more sense. Also, in IDEA, when you copy the code style into the project you can be sure we’re talking about all of the code style. If I add it to our SVN, everybody can use it.

You can also export the Code Style as XML – but it’s a single file. Or you can export all of the IDE settings and choose (to a degree) which portions you want to import. While this is also a manual solution, you only need to do it once with a single exported config.

(This situation is still not as bad as with key bindings, where after all these years you still can’t create your own new Scheme in a normal fashion inside the Eclipse IDE.)

Conclusion

Maybe the way of the Go language, where formatting is part of the standard toolchain, is the best way – although if it means joining lines anywhere, I definitely wouldn’t like it either.

I can bash the IDEA formatter a bit too. For me it’s perfectly logical to prefer annotations for fields and methods on a separate line, but NOT to wrap them when they are on the same line. Just keep the damn lines a bit different when I want them that way! Something like a soft format with a preferred way of formatting new code. This is currently not possible all the way. I can set the IDEA formatter so that it keeps annotations on separate lines and also respects them on the same line – but then all new code gets annotations on the same line by default.
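
To illustrate with a made-up field (any annotation would do):

// What I'd like new code to default to:
@NotNull
private String name;

// ...while an existing line written like this should survive a reformat untouched:
@NotNull private String name;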

This concept of combining “how I prefer it” with “what I want to keep preserved even if it’s not the way I’d do it” is generally not how formatters work now. I believe it would be a great way for them to work. It can partially be approximated by formatting only changed lines, but that has its own drawbacks – especially when the indentation is not unified yet.

Templating localization with gender and case

In this article we will explore the problem of how to localize messages that repeat a lot while only a small part of the message varies. The variable part will typically be some domain object, and the repeating messages are various confirmations, errors and similar messages revolving around these domain objects.

Our code examples will be in Java, using the standard Java MessageFormat and a version using the ICU project – ICU4J, to be precise. The ICU library is rather taxing on disk space (10 MB if you download it using Maven, as it contains all possible localization data) and its patterns are more verbose than the java.text ones, but there is one big advantage. Besides ICU4C (the C version) also available on the ICU site, there are JavaScript implementations as well. Maybe not official ones, but they refer to ICU, and this sounds promising if you need the same message resources on both the Java backend and the JavaScript frontend.

Before we go on I’ll point to various resources regarding best practices of localization. Virtually any bigger project where localization matters has something similar – and among other natural things (like prefer Unicode or never use hardcoded strings) there is one notoriously repeated warning: Don’t concatenate localized strings.

The reason for this becomes very obvious when you try any language other than your default, because even similar languages structure some things differently. Sometimes the number goes first and then the date, sometimes the other way around – things like that. How does this relate to our problem?

Example sentences

Imagine an application where you can select multiple things in a table and delete them, or delete using some filter. We need a message that announces the result to the user. We can also edit an object and we want to see that it was successfully saved. This is enough for our needs. Our domain objects are: Transaction and Client.

English language

Example sentences with bold showing our noun (domain object) that can change and italics for other affected words:

  • Transaction was successfully saved.
  • No transactions were deleted. (singular variation possible)
  • One transaction was deleted.
  • 2 transactions were deleted. (or more)

Problems to note:

  • We need Transaction or transaction depending on the position in the sentence. Solution: either use a whole phrase per domain object (noun), or, for insertion into some template format, use two different keys or a single key with some pre-processing that (de)capitalizes the first letter. This either needs to be supported at the message-format level or we can do it in code after the whole sentence is completely formatted. We cannot choose/change the casing of a single inserted word, because in various languages it can sit at different positions in the sentence.
  • We need plural/singular for a noun – possibly combined with the need for various casings of the first character. We also need to show the parameter (the number), possibly with word versions for some cases (“No”). Finally, we need to use was/were appropriately. Solution: some choice format mechanism, mostly readily available.

Overall, it’s no problem to add new domain objects (nouns) and put them into the sentences somehow. But this ties us to English rather too much. Let’s look at a different language to see the whole problem.

Slovak language

The same sentences in Slovak (bold for noun and italics for other affected words):

  • Transakcia bola úspešne uložená. (The subject noun is in Slovak “nominative” case, and “bola … uložená” is affected by the feminine gender of “transakcia”. This sentence is needed only in singular.)
  • Žiadna transakcia nebola zmazaná. (Word “transakcia” is in singular “nominative”, this time with lowercase. The rest of the sentence reflects case for “none deleted” with all three words affected by the feminine gender of “transakcia”. Just like in English, plural variation is possible. Unlike English, word “nebola” means “was not” – sentences with two negatives are normal in Slovak and these don’t cancel out.)
  • Jedna transakcia bola zmazaná. (Singular nominative, feminine affecting the whole sentence, this time with positive “bola zmazaná”, that is “was deleted”.)
  • 2 transakcie boli zmazané. (For cases 2-4: Plural nominative, feminine.)
  • 5 transakcií bolo zmazaných. (For cases 5 and more: Plural genitive, feminine. Here the number plays the “nominative” role and the whole subject here is roughly like “five (of) transactions” and instead of “of” Slovak uses genitive case.)

Alternative example for “Transaction was successfully saved”, something like “Saving of transaction was successful”:

  • Ukladanie transakcie bolo úspešné. (Here none of the words are affected by the noun’s gender; they wouldn’t even be affected by the number. However, the noun itself is no longer in the “nominative” case; instead it is in the “genitive”. This cannot be generalized to some “object” case (in addition to the normal “subject” case), as objects in Slovak can be in different cases, mostly “accusative”. This information also cannot be handled in the code, not even code related to a single message localization, because it is specific to a single language, here Slovak. We explore the possibilities with various approaches below.)

Problems:

  • Many words in the template itself are affected by the gender of the noun. Solution: Localize complete messages. Using templates we need to obtain the gender information somehow (part of the key? another key? what about languages where it does not matter at all?) and use it as a parameter for the formatting mechanism (some choice/select format).
  • We may need to use various cases of the same noun, depending on the message or even a parameter of the message (like the count of transactions).

And what about the Client?

You can imagine the messages with “client” in English – just replace the single word. No inflection, no cases, just respect the number and the letter casing depending on the position in the sentence.

Things are different in Slovak though. Let’s see the sentence about the Client being saved, but first let’s repeat the one about the transaction for comparison:

  • Transakcia bola úspešne uložená. (Singular, feminine, noun in nominative.)
  • Klient bol úspešne uložený. (Singular, but masculine, nominative.)

A rather innocent change in English is a nightmare in Slovak if you want to somehow reuse the structure of the message. So what are our options?

Approach 1: whole sentences

Pure Java solution

Imaginary localization file Simple.properties:

transaction.saved=Transaction was successfully saved.
transaction.deleted={0,choice,0#No transaction was|1#One transaction was\
  |1<{0} transactions were} deleted.

client.saved=Client was successfully saved.
client.deleted={0,choice,0#No client was|1#One client was\
  |1<{0} clients were} deleted.

The last word “deleted” might be included in each branch of the sentence too – and this is actually recommended when your choice already covers a significant portion of the sentence anyway.
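
For instance, the deletion key above could read like this instead (a variant sketch, not in the accompanying sources):

transaction.deleted={0,choice,0#No transaction was deleted.\
  |1#One transaction was deleted.\
  |1<{0} transactions were deleted.}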

The same for Slovak in Simple_sk.properties:

transaction.saved=Transakcia bola úspešne uložená.
transaction.saved.alt=Ukladanie transakcie bolo úspešné.
transaction.deleted={0,choice,0#Žiadna transakcia nebola zmazaná|1#Jedna transakcia bola zmazaná\
  |1<{0} transakcie boli zmazané|4<{0} transakcií bolo zmazaných}.

client.saved=Klient bol úspešne uložený.
client.saved.alt=Ukladanie klienta bolo úspešné.
client.deleted={0,choice,0#Žiadny klient nebol zmazaný|1#Jeden klient bol zmazaný\
  |1<{0} klienti boli zmazaní|4<{0} klientov bolo zmazaných}.

And some demo program to print it out:

import java.text.MessageFormat;
import java.util.Arrays;
import java.util.Locale;
import java.util.MissingResourceException;
import java.util.ResourceBundle;

public class SimpleApproach {
    public static void main(String[] args) {
        showDemo("transaction", Locale.getDefault());
        showDemo("client", Locale.getDefault());
        showDemo("transaction", Locale.forLanguageTag("sk"));
        showDemo("client", Locale.forLanguageTag("sk"));
    }

    private static void showDemo(String domainObject, Locale locale) {
        System.out.println("\nLOCALE: " + locale);
        print(locale, domainObject + ".saved");
        print(locale, domainObject + ".saved.alt");
        for (Integer count : Arrays.asList(0, 1, 2, 4, 5, 99)) {
            print(locale, domainObject + ".deleted", count);
        }
    }

    private static void print(Locale locale, String messageKey, Object... args) {
        String message = format(locale, messageKey, args);
        System.out.println(messageKey + Arrays.toString(args) + ": " + message);
    }

    private static String format(Locale locale, String key, Object... args) {
        ResourceBundle bundle = ResourceBundle.getBundle("Simple", locale);
        try {
            String pattern = bundle.getString(key);
            return new MessageFormat(pattern, locale)
                .format(args);
        } catch (MissingResourceException e) {
            return "";
        }
    }
}

ICU4J solution

I’ll use the ICU4J MessageFormat instead of the one from java.text. The usage is actually the same in both cases; only the import statement and the loaded resource differ. ICU4J allows not only positional arguments but also named ones. For this reason we also changed how the arguments are provided, because named parameters in a map are much cleaner. But first the resource files – SimpleIcu.properties:

transaction.saved=Transaction was successfully saved.
transaction.deleted={count,plural,=0 {No transaction was}\
  one {One transaction was}\
  other {{count} transactions were}} deleted.

client.saved=Client was successfully saved.
client.deleted={count,plural,=0 {No client was}\
  one {One client was}\
  other {{count} clients were}} deleted.

And for Slovak – SimpleIcu_sk.properties:

transaction.saved=Transakcia bola úspešne uložená.
transaction.saved.alt=Ukladanie transakcie bolo úspešné.
transaction.deleted={count,plural,=0 {Žiadna transakcia nebola zmazaná}\
  one {Jedna transakcia bola zmazaná}\
  few {{count} transakcie boli zmazané}\
  other {{count} transakcií bolo zmazaných}}.

client.saved=Klient bol úspešne uložený.
client.saved.alt=Ukladanie klienta bolo úspešné.
client.deleted={count,plural,=0 {Žiadny klient nebol zmazaný}\
  one {Jeden klient bol zmazaný}\
  few {{count} klienti boli zmazaní}\
  other {{count} klientov bolo zmazaných}}.

Program listing:

import java.util.Arrays;
import java.util.Collections;
import java.util.Locale;
import java.util.Map;
import java.util.MissingResourceException;
import java.util.ResourceBundle;

import com.ibm.icu.text.MessageFormat;

public class SimpleIcuApproach {
    public static void main(String[] args) {
        showDemo("transaction", Locale.getDefault());
        showDemo("client", Locale.getDefault());
        showDemo("transaction", Locale.forLanguageTag("sk"));
        showDemo("client", Locale.forLanguageTag("sk"));
    }

    private static void showDemo(String domainObject, Locale locale) {
        System.out.println("\nLOCALE: " + locale);
        print(locale, domainObject + ".saved", Collections.emptyMap());
        print(locale, domainObject + ".saved.alt", Collections.emptyMap());
        for (Integer count : Arrays.asList(0, 1, 2, 4, 5, 99)) {
            print(locale, domainObject + ".deleted", Collections.singletonMap("count", count));
        }
    }

    private static void print(Locale locale, String messageKey, Map args) {
        String message = format(locale, messageKey, args);
        System.out.println(messageKey + args + ": " + message);
    }

    private static String format(Locale locale, String key, Map args) {
        ResourceBundle bundle = ResourceBundle.getBundle("SimpleIcu", locale);
        try {
            String pattern = bundle.getString(key);
            return new MessageFormat(pattern, locale)
                .format(args);
        } catch (MissingResourceException e) {
            return "";
        }
    }
}

The message output is the same in both cases except for the args part, but that is part of the label, not part of the final message. Please note that the demo listings don’t work efficiently with resource bundles, as they call getBundle for every formatting. Do it better in production, or use some other abstraction above it, e.g. Spring.

Pros and cons

It’s easy! It’s easy to read the messages, or at least as easy as it gets. The trouble is that every time you add a new domain object you need to more or less replicate all the generic messages for it. Here we have only two messages, but imagine there are ten of them or more. This is not exactly DRY, but in the localization world it is quite a safe way to play it. The trouble is that if you decide to change a generic message you have to do it in many places (the consequence of duplication).

Can we do a bit better? Can we template the messages and combine them with some forms of those domain object names?

Approach 2: single template and noun nesting

We already foreshadowed all the problems our template solution must overcome. Concatenation is out of the question – but we’re playing it nicely with templates, which allow the structure of the sentence to be completely different in various languages. But we still have to:

  • Treat the starting letter of the sentence somehow (when it is a sentence – but let’s say we always know).
  • Use a specific inflection form in a template when the list of forms (cases and numbers) is not known to the code. It cannot be known, of course, as the list may vary depending on the language. For nouns in English we just need singular/plural, but for Slovak you need nominative/genitive/accusative in both singular and plural forms – there are even more cases, but the unused ones are not important.
  • The template message needs to know the gender of the noun (domain object), as it may affect a lot of words in it.

Reading the list of problems: we need to retrieve some information about the domain object noun without our code really understanding it and pass it somehow to the template. So we don’t know much in advance about the information we need about the noun. Maybe we could narrow it down to the case/number combination, but it actually doesn’t matter. We know there are multiple forms of the noun – but the code doing our templating doesn’t know the names of the forms. Only the template and the noun know them – and only in a particular language.

We don’t even know which forms a particular template needs – the writer of the template knows. We can only settle on a set of needed forms for the nouns. If we need a new form, for instance for some new template, we have to add the new form to all existing nouns. But this design is still orthogonal. The question is – how do we offer all the available forms to the template?

ICU4J solution

I’ll start with ICU4J because it allows the named parameters we already used in the simple approach. We can just fill the map with all possible forms of the word and let the template do the job. So how can we get a map of all the forms? Will we have multiple keys in the bundle with some suffix? I don’t like that. Let’s do something more brutal – we put a serialized map into a single key. I’d go for JSON, but as I don’t want to import any JSON library, I’ll use some special characters I really don’t expect in the names of the domain objects.

English resource file TemplateIcu.properties is pretty boring:

domain.object.transaction=sg:transaction,pl:transactions
domain.object.client=sg:client,pl:clients

object.saved={sg} was successfully saved.
object.deleted={count,plural,=0 {No {sg} was}\
  one {One {sg} was}\
  other {{count} {pl} were}} deleted.

We have both domain object names in singular (sg) and plural (pl) forms. Templating for English is obviously a no-brainer; with only two templates for two objects we are already saving a lot of characters. How about the Slovak property file TemplateIcu_sk.properties?

domain.object.transaction=gend:fem,nom:transakcia,gen:transakcie,\
  pl:transakcie,plgen:transakcií
domain.object.client=gend:mas,nom:klient,gen:klienta,\
  pl:klienti,plgen:klientov

object.saved={nom} {gend, select, mas {bol úspešne uložený}\
  fem {bola úspešne uložená}\
  other {!!!}}.
object.saved.alt=Ukladanie {gen} bolo úspešné.
object.deleted={gend, select,\
  mas {{count,plural,\
    =0 {Žiadny {nom} nebol zmazaný}\
    one {Jeden {nom} bol zmazaný}\
    few {{count} {pl} boli zmazaní}\
    other {{count} {plgen} bolo zmazaných}}} \
  fem {{count,plural,\
    =0 {Žiadna {nom} nebola zmazaná}\
    one {Jedna {nom} bola zmazaná}\
    few {{count} {pl} boli zmazané}\
    other {{count} {plgen} bolo zmazaných}}}\
  other {!!!}}.

Ok, at this moment it’s longer than the original file, but I’d say with the third domain object we would already save some characters – granted we don’t need more forms and genders. Eventually, for Slovak at least, we would add neuter gender too, but that’s about it.

Now the code that runs it:

import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.MissingResourceException;
import java.util.ResourceBundle;

import com.ibm.icu.text.MessageFormat;

public class TemplateIcuApproach {
    public static void main(String[] args) {
        showDemo("transaction", Locale.getDefault());
        showDemo("client", Locale.getDefault());
        showDemo("transaction", Locale.forLanguageTag("sk"));
        showDemo("client", Locale.forLanguageTag("sk"));
    }

    private static void showDemo(String domainObject, Locale locale) {
        System.out.println("\nLOCALE: " + locale);
        print(locale, domainObject, "object.saved", Collections.emptyMap());
        print(locale, domainObject, "object.saved.alt", Collections.emptyMap());
        for (Integer count : Arrays.asList(0, 1, 2, 4, 5, 99)) {
            print(locale, domainObject, "object.deleted", Collections.singletonMap("count", count));
        }
    }

    private static void print(Locale locale, String domainObject, String messageKey, Map args) {
        ResourceBundle bundle = ResourceBundle.getBundle("TemplateIcu", locale);
        Map objectInfo = parseObjectInfo(bundle.getString("domain.object." + domainObject));
        // not generified, sorry; we know that objectInfo is mutable, so we do it this way
        objectInfo.putAll(args);
        String message = format(bundle, locale, messageKey, objectInfo);
        if (sentenceRequiresCapitalization(message, true)) {
            message = Character.toUpperCase(message.charAt(0)) + message.substring(1);
        }
        System.out.println(messageKey + args + ": " + message);
    }

    private static boolean sentenceRequiresCapitalization(String message, boolean isSentence) {
        return isSentence && message != null && !(message.isEmpty())
            && Character.isLowerCase(message.charAt(0));
    }

    // no sanity checking here, but there should be some
    private static Map parseObjectInfo(String objectInfoString) {
        Map map = new HashMap();
        for (String form : objectInfoString.split(" *, *")) {
            String[] sa = form.split(" *: *");
            map.put(sa[0], sa[1]);
        }
        return map;
    }

    private static String format(ResourceBundle bundle, Locale locale, String key, Map args) {
        try {
            String pattern = bundle.getString(key);
            return new MessageFormat(pattern, locale)
                .format(args);
        } catch (IllegalArgumentException e) {
            return e.getMessage();
        } catch (MissingResourceException e) {
            return "";
        }
    }
}

Remember again that we should not call getBundle as often as I do here – take this just as a proof of concept. 🙂 The code also performs capitalization of the first letter when the message is a sentence – that’s the true argument in the call. This information could be embedded into the format too, but that could be awkward. The best solution would be some custom formatter for an argument, something like {nom, capitalizeFirst}, but I can’t find any mechanism to do that with this MessageFormat class. Still, that is a rather minor issue.

Actually, there may be a better alternative – just use two separate keys for the nominative, that is “Nom” and “nom” (sg/Sg in the English resources), with the respective casing of the first letter. We don’t need this distinction for the other forms, at least not here, not yet. Most of the cases will hardly ever appear at the start of a sentence.
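
For instance, the Slovak entry could carry both casings of the nominative (a sketch, not in the accompanying sources):

domain.object.transaction=gend:fem,Nom:Transakcia,nom:transakcia,gen:transakcie,\
  pl:transakcie,plgen:transakcií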

We can easily create a tool that checks the validity of the keys starting with “domain.object.” and indicates those that don’t have the expected forms for a particular language. And we can also easily use this solution in JavaScript – which is very important for us.

Pure Java solution

It is not completely impossible to do this in Java, but you’d need to use indexes – and that sucks. If you need to find all occurrences of the nominative, it’s easy with ICU’s named parameters – just search for {nom}, which is a pretty distinct pattern. With Java you’d have to decide on fixed indexes for the forms and then … put the real arguments at indexes after these? What?! What if I discover I need to add another form? Will I renumber all arguments (only for that language, mind you) to make room for it? And searching? Should I search for {0} as the nominative? What about cases where we don’t use domain objects in the message and the zeroth argument means something completely different?

Honestly, we will skip this solution altogether. It’s not viable.

Pros and cons

Before I wrote the proof of concept I was not sure whether I’d try this – but now I’m pretty sure we will! The key was to come up with the answer to the question: how do we feed the template message pattern with all possible forms? How do we do it without the code knowing about languages and concrete form names? With the map structure serialized in the domain object key this is all easy.

Sure, the generic messages are a bit messier, but we don’t repeat ourselves anymore. It seems that the gender-select/plural combo works for most cases. There are probably even more demanding messages, but I believe the deletion message with a count shows the worth of this solution.

Multiple inserted words?

What if we want to insert multiple nouns? What if the sentence is something like “Transaction for client XY is above limit”? Here “transaction” and “client” are the domain objects and “XY” is a concrete name of a client (not a problem – we, hopefully, don’t want to inflect that). If we merge the form maps for domain.object.transaction and domain.object.client, one will override the other. That’s not good. What we need is to give these guys some role.

Before going on, I have to admit that this example is not a good one, because if only transactions can break limits and we always want to show a client there, this would be a message with a key like transaction.above.limit, and all inserted words in all languages would be there verbatim – no insertion, only the name of the client would really be an argument. So before we get carried away by the opportunity to use it everywhere, we need to think and prefer NOT to use it when it’s not necessary. Just because we have our “multi-form” dictionary of domain objects doesn’t mean we want to use it. There is one legitimate case when you might want to – when you expect that the word for “transaction” and/or “client” may change. But even if you think it may be useful for American vs British English or similar – again, don’t. You want different bundles for both language variants with the country specified.

We should not go on with a wrong example, right? So how about the sentence “There are some transactions for a client XY – do you want to delete them as well?” From a business point of view this is still silly, because we probably don’t want to delete anything like this, but at least we can see that this sentence can be used over and over for various combinations of domain objects.

I won’t implement the whole solution here; let’s just shed a bit of light on it. Messages in English – this time we introduce singular with an indefinite article (sgwa):

domain.object.transaction=sg:transaction,sgwa:a transaction,pl:transactions
domain.object.client=sg:client,sgwa:a client,pl:clients

object.delete.constraint.warning=There are some {slave_pl} for {master_sgwa} {name} - do you...?

The same in Slovak:

domain.object.transaction=gend:fem,nom:transakcia,gen:transakcie,\
  pl:transakcie,plgen:transakcií
domain.object.client=gend:mas,nom:klient,gen:klienta,\
  pl:klienti,plgen:klientov

object.delete.constraint.warning=Pre {master_gen} {name} existujú {slave_pl} - zmažeme...?

This time it was easy – no gender – but you can expect it to get more complicated. The point is that we indicate the role of the domain object with a prefix, like master_. Now how do we call the message formatting to work like this? I’ll offer a snippet of code; its API can be groomed, but you’ll probably finish this design with your own needs in mind anyway:

...
print(loc, "object.delete.constraint.warning",
    Collections.singletonMap("name", "SuperCo."),
    new DomainObject("client", "master"),
    new DomainObject("transaction", "slave"));
...

private static void print(
    Locale locale, String messageKey, Map args, DomainObject... domainObjects)
{
    ResourceBundle bundle = ResourceBundle.getBundle("TemplateIcu", locale);

    // not generified, sorry
    Map finalArgs = new HashMap(args);
    for (DomainObject domainObject : domainObjects) {
        finalArgs.putAll(domainObject.parseObjectInfo(bundle));
    }

    String message = format(bundle, locale, messageKey, finalArgs);
    System.out.println(messageKey + args + ": " + message);
}

// format method like before, parseObjectInfo is embedded into following class:
public class DomainObject {
    private final String domainObject;
    private final String role;

    public DomainObject(String domainObject, String role) {
        this.domainObject = domainObject;
        this.role = role;
    }

    public DomainObject(String domainObject) {
        this.domainObject = domainObject;
        this.role = null;
    }

    // no sanity checking here, but there should be some
    Map parseObjectInfo(ResourceBundle bundle) {
        String objectInfoString = bundle.getString("domain.object." + domainObject);
        Map map = new HashMap();
        for (String form : objectInfoString.split(" *, *")) {
            String[] sa = form.split(" *: *");
            String key = role != null ? role + '_' + sa[0] : sa[0];
            map.put(key, sa[1]);
        }
        return map;
    }
}

That’s it! Not just 1 insertion anymore, but N – and we know only two cardinalities, 1 and N, right?

Your API for translation sucks!

There is no API – these are just demo programs. I’d aim for something much more fluent indeed. The resource bundle would be available somehow, for instance for the current user’s locale. Then the code could go like this:

String message = new LocalizedMessage(resourceBundle, "delete.object")
  .withParam("count", deletedCount)
  .forDomainObject(editor.getDomainObjectName()) // optional role parameter possible
  .format();

This code can live in some unified “object(s) deleted” confirmation dialog that can be reused across many specific editors. The editor merely needs to provide the name of the domain object (the generic one like “transaction”, not the name of the particular instance). I guess this API makes sense and it’s easy to get there.
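
A minimal sketch of how such a fluent helper could be wired together, reusing the parsing from the demo above – the class and method names follow the snippet, everything else is an assumption:

import java.util.HashMap;
import java.util.Map;
import java.util.MissingResourceException;
import java.util.ResourceBundle;

import com.ibm.icu.text.MessageFormat;

// Sketch only: a fluent wrapper around the bundle + ICU MessageFormat combination
// used in the demos; not production code (no caching, minimal error handling).
public class LocalizedMessage {
    private final ResourceBundle bundle;
    private final String key;
    private final Map<String, Object> args = new HashMap<>();

    public LocalizedMessage(ResourceBundle bundle, String key) {
        this.bundle = bundle;
        this.key = key;
    }

    public LocalizedMessage withParam(String name, Object value) {
        args.put(name, value);
        return this;
    }

    public LocalizedMessage forDomainObject(String domainObject) {
        return forDomainObject(domainObject, null);
    }

    // optional role parameter, prefixing the form keys like master_gen
    public LocalizedMessage forDomainObject(String domainObject, String role) {
        String forms = bundle.getString("domain.object." + domainObject);
        for (String form : forms.split(" *, *")) {
            String[] sa = form.split(" *: *");
            args.put(role != null ? role + '_' + sa[0] : sa[0], sa[1]);
        }
        return this;
    }

    public String format() {
        try {
            return new MessageFormat(bundle.getString(key), bundle.getLocale()).format(args);
        } catch (MissingResourceException e) {
            return "";
        }
    }
}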

Conclusion

Writing this post was part of the exploratory process; I wasn’t sure what the conclusion would be. I now see that the template solution is viable, but it builds on a MessageFormat that supports named parameters. As we already use ICU to align our backend (ICU4J) and frontend (we use yahoo/intl-messageformat), we can build on that.

I didn’t go into performance characteristics, but we’re talking about enterprisey software already – the whole message formatting is not cheap anyway. The unified template is definitely more complex to parse; at least the gender select is added on top of the original patterns. We can cache the domain object maps – of course, only after we change the way we merge them with the actual arguments, as my solution above adds the arguments into this map. But otherwise I don’t believe it’s a big deal. I never compared ICU with java.text performance-wise, but we need ICU features and don’t see any problems yet.
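
A hypothetical sketch of that caching – the parsed form map is kept per locale and domain object and copied before the actual arguments are merged in, so the cached map is never mutated (unlike the demo code above):

import java.util.HashMap;
import java.util.Map;
import java.util.ResourceBundle;
import java.util.concurrent.ConcurrentHashMap;

public class DomainObjectFormsCache {
    private final Map<String, Map<String, String>> cache = new ConcurrentHashMap<>();

    public Map<String, Object> argsFor(ResourceBundle bundle, String domainObject, Map<String, Object> args) {
        String cacheKey = bundle.getLocale() + "/" + domainObject;
        Map<String, String> forms = cache.computeIfAbsent(cacheKey,
            key -> parse(bundle.getString("domain.object." + domainObject)));
        Map<String, Object> merged = new HashMap<>(forms); // copy – don't touch the cached map
        merged.putAll(args);
        return merged;
    }

    // same parsing as in the demo, just with generics
    private static Map<String, String> parse(String objectInfoString) {
        Map<String, String> map = new HashMap<>();
        for (String form : objectInfoString.split(" *, *")) {
            String[] sa = form.split(" *: *");
            map.put(sa[0], sa[1]);
        }
        return map;
    }
}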

So, yes, templating of often-used messages with some snippets changing in them is possible and makes sense. No, it’s not concatenation. I believe we’re not doing anything internationally illegal here. 🙂 (GitHub project with sources)

How I unknowingly deviated from JPA

In a post from January 2015 I wrote about the possibility of using plain foreign key values instead of @ManyToOne and @OneToOne mappings in order to avoid eager fetching. It built on JPA 2.1, as it needed the ON clause not available before, and on EclipseLink, which is the reference implementation of the specification.

To be fair, there are ways to make to-one relationships lazy, sure, but they are not portable and JPA does not guarantee them. They rely on bytecode magic and a properly configured ORM. Otherwise lazy to-one mapping wouldn’t have spawned so many questions around the internet. And that’s why we decided to try it without them.

Total success

We applied this style to our project and we liked it. We didn’t have to worry about random fetch cascades – in complex domain models these often trigger many dozens of fetches. Sure, it can be “fixed” with a second-level cache, but that’s another thing – we could stop worrying about the cache too. Now we could think about caching the things we wanted, not caching everything possibly reachable even if we don’t need it. A second-level cache should not exist for the sole reason of making flawed eager fetching bearable.

When we needed a Breed for a Dog we could simply do:

Breed breed = em.find(Breed.class, dog.getBreedId());

Yes, it is noisier than dog.getBreed(), but explicit solutions come with a price. We can still implement the method on the entity, but it must somehow access the EntityManager – directly or indirectly – and that adds an infrastructure dependency and makes it more active-record-ish. We did it, no problem.

Now this can be done in any JPA version and probably with any ORM. The trouble is with queries. They require an explicit join condition, and for that we need ON. For inner joins WHERE is sufficient, but any outer join obviously needs the ON clause. We don’t have a dog.breed path to join, so we need to join breed ON dog.breedId = breed.id. But this is no problem really.
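
For illustration, such a query could look like this (reusing the Dog/Breed example; the parameter value is made up). It runs on EclipseLink, but as explained below it turned out not to be standard JPQL:

List<Dog> dogs = em.createQuery(
        "select d from Dog d left join Breed b on d.breedId = b.id where b.name = :name", Dog.class)
    .setParameter("name", "Labrador")
    .getResultList();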

We really enjoyed this style while still benefiting from many perks of JPA like convenient and customizable type conversion, unit of work pattern, transaction support, etc.

I’ll write a book!

Having enough experience, and not knowing I was already outside the scope of the JPA specification, I decided to conjure up a neat little book called Opinionated JPA. The name says it all – it should have been a book that adds a bit to the discussion about how to use and tweak JPA in case it really backfires on you with these eager fetches and you don’t mind tuning it down a bit. It should have been a book about fighting with JPA less.

Alas, it backfired on me in the most ironic way. I wrote a lot of material around it before I got to the core part. Sure, I felt I should not postpone it too long, but I wanted to build an argument, do the research and so on. What never occurred to me was that I should have tested it with some other JPA provider too. And that’s what is so ironic.

In recent years I have learned a lot about JPA. I have the JPA specification open every other day to check something, I cross-reference bugs between EclipseLink and Hibernate, trying to find the final argument in the specification – I really felt good at all this. But I never checked whether a query with left join breed ON dog.breedId = breed.id works in anything other than EclipseLink (the reference implementation, mind you!).

Shattered dreams

It does not. Today I can even add “obviously”. The JPA 2.1 specification defines joins in section 4.4.5 as (selected important grammar rules):

join::= join_spec join_association_path_expression [AS] identification_variable [join_condition]
join_association_path_expression ::=
  join_collection_valued_path_expression |
  join_single_valued_path_expression |
  TREAT(join_collection_valued_path_expression AS subtype) |
  TREAT(join_single_valued_path_expression AS subtype)
join_spec::= [ LEFT [OUTER] | INNER ] JOIN
join_condition ::= ON conditional_expression

The trouble here is that breed in left join breed does not conform to any alternative of the join_association_path_expression.

Of course my life goes on, I’ve got a family to feed, I’ll ask my colleagues for forgiveness and try to build up my professional credit again. I can even say: “I told myself so!” Because the theme that JPA can surprise you again and again keeps repeating in my story.

Opinionated JPA revisited

What does it mean for my opinionated approach? Well, it works with EclipseLink! I’ll just drop JPA from the equation. I tried to stay pure JPA for many years, but even then I never ruled out proprietary ORM features as “evil”. I don’t believe in an easy JPA provider switch anyway. You can use only the most basic JPA elements and be able to switch, but I’d rather utilize the chosen library better.

If you switch from Hibernate, where to-one seems to work lazily when you ask for it, to EclipseLink, you will need some non-trivial tweaking to get there. If the JPA spec mandated lazy support instead of defining it as a mere hint, I wouldn’t mess around with this topic at all. But I understand that the topic is deeper, as Java language features don’t allow it easily. With an explicit proxy wrapping the relation it is possible, but then we’re spoiling the domain. Still, with bytecode manipulation being rather ubiquitous now, I think they could have done it and removed this vague point once and for all.

Not to mention a very primitive alternative – let the user explicitly state that he does not want to cascade eager fetches at the moment of usage. He’ll get a Breed object when he calls dog.getBreed(), but this object will not be managed and will contain only the breed’s ID – exactly what the user asked for. There is no room for confusion here, and it at least gives us the option to break the deadly fetch cascade.
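
A hypothetical sketch of what that could look like if you wrote the getter by hand today (no provider offers exactly this, and the setId setter is assumed):

public Breed getBreed() {
    // returns an unmanaged placeholder carrying only the ID – explicitly breaking
    // the fetch cascade; loading the full Breed stays a separate, deliberate call
    Breed breed = new Breed();
    breed.setId(breedId);
    return breed;
}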

And the book?

Well, the main argument is now limited to EclipseLink and not to JPA. Maybe I should rename it to Opinionated ORM with EclipseLink (and Querydsl). I wouldn’t like to leave it on the level of an essay about JPA and various “horror stories”, although even that may help people decide for or against it. If you don’t need ORM after all, use something different – like Querydsl over SQL, or alternatives like JOOQ.

I’ll probably still describe this strategy, but not as the main point anymore. The main point now is that JPA is a very strict ORM, limited in the options it gives you to control fetching behavior. These options are delegated to JPA providers, and that may lock you to a provider nearly as much as not being JPA compliant at all.

Final concerns

But even when I accept that I’m stuck with an EclipseLink feature… is it a feature? Wouldn’t it be better if the reference implementation strictly complained about invalid JPQL just like Hibernate does? Put aside the thought that Hibernate is a perfect JPA 2.1 implementation – it does not implement other things and is not strict in other areas.

What if EclipseLink reconsiders and removes this extension? I doubt the next JPA will support this type of path after JOINs, although that would save my butt (which is not so important after all). I honestly believed I was still on the standard motorway, just a little bit onto the shoulder perhaps. Now I know I’m away from any mainstream… and the only way back is to re-introduce all the to-one relations into our entities, which first kills the performance; then we turn on the cache for everything, which hopefully does not kill memory but definitely does not help. Not to mention that we actually need a distributed cache across multiple applications over the same database.

In the most honest attempt to get out of the quagmire before getting stuck deep in it, I inadvertently found myself neck-deep already. ORM indeed is The Vietnam of Computer Science.

Last three years with software

A long time ago I decided to blog about my technology struggles – mostly with software but also with consumer devices. I don’t know why it happened on Christmas Eve though. Two years later I repeated the format. And here we are three years after that – so the next post can be expected in four years, I guess. Actually, I split this one into two – one for software, mostly based on professional experience, and the other for consumer technology.

Without further ado, let’s dive into this… well… “dive” – it will obviously be pretty shallow. Let’s skim the stuff I worked with, stuff I like and some I don’t.

Java case – Java 8 (verdict: 5/5)

This time I’m adding my personal rating right into the header – a little change from the previous post, where it was at the end.

I love Java 8. Sure, it’s not Scala or anything even more progressive, but in the context of Java’s philosophy it was a huge leap, and lambdas especially really changed my life. BTW: check out this interesting talk by Erik Meijer about category theory and (among other things) how it relates to Java 8 and its method references. Quite fun.

Having worked with Java 8 for 17 months now, I can’t imagine going back. Not only because of lambdas, streams and related details like Map.computeIfAbsent, but also because of the date and time API, default methods on interfaces – and the list could go on.
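
For illustration, the kind of small win computeIfAbsent with a lambda gives you (a made-up snippet):

Map<String, List<String>> dogsByBreed = new HashMap<>();
// pre-Java 8: get the list, check for null, create it, put it back...
// with Java 8 the grouping collapses into one line:
dogsByBreed.computeIfAbsent("retriever", breed -> new ArrayList<>()).add("Rex");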

JPA 2.1 (no verdict)

ORM is an interesting idea and I can claim around 10 years of experience with it, although the term itself is not that important. I also read books about it in my quest to understand it (many programmers don’t bother). The idea is kinda simple, but it has many tweaks – mainly when it comes to relationships. JPA 2.1 as an upgrade is good, I like where things are going, but I like the concept itself less and less over time.

My biggest gripe is the little control you have over “to-one” loading, which is difficult to make lazy (more like impossible without some nasty tricks) and can result in chain loading even if you are not interested in the related entity at all. I think there is a reason why things like JOOQ cropped up (although I personally don’t use it). There are some tricks to get rid of these problems, but they come at a cost. Typically – don’t map these to-one relationships, keep them as foreign key values. You can always fetch the related stuff with a query.
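
A sketch of that “keep the foreign key as a value” style – the entity and column names are just illustrative:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Dog {
    @Id
    private Long id;

    private String name;

    // instead of: @ManyToOne(fetch = FetchType.LAZY) private Breed breed;
    @Column(name = "breed_id")
    private Long breedId;

    public Long getBreedId() {
        return breedId;
    }
}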

That leads to the bottom line – be explicit, it pays off. Sure, it doesn’t work universally, but anytime I leaned towards explicit solutions I felt a lot of relief from the struggles I had gone through before.

I don’t rate JPA, because I try to rely on fewer and fewer ORM features. JPA is not a bad effort, but it is so Java EE-ish; it does not support modularity and the providers are not easy to change anyway.

Querydsl (5/5)

And when you work with JPA queries a lot, get some help – I can only recommend Querydsl. I’ve been recommending this library for three years now – it has never failed me, never let me down, and often it has amazed me. This is how the criteria API should have looked.

It has a strong metamodel allowing you to do crazy things with it. We based a kinda universal filtering layer on it, whatever the query is. We even filter queries with joins, even on joined fields. But again – we can do that because our queries and their joins are not ad hoc, they are explicit. 🙂 Because you should know your queries, right?
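
A minimal sketch of such an explicit join in Querydsl (assuming Querydsl 4’s JPAQueryFactory and generated QDog/QBreed metamodel classes; the names are illustrative):

QDog dog = QDog.dog;
QBreed breed = QBreed.breed;

List<Dog> retrievers = new JPAQueryFactory(entityManager)
    .selectFrom(dog)
    .leftJoin(breed).on(dog.breedId.eq(breed.id))   // explicit join condition, no mapped path
    .where(breed.name.startsWith("Retriever"))
    .fetch();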

Sure, Querydsl is not perfect, but it is as powerful as JPQL (or as limited, for that matter) and more expressive than the JPA criteria API. Bugs are fixed quickly (personal experience), the developers care… what more could you ask for?

Docker (5/5)

Docker stormed into our lives – for some practically, for others at least through the media. We don’t use it that much, because lately I’m bound to Microsoft Windows and SQL Server. But I experimented with it a couple of times for development support – we ran Jenkins in a container, for instance. And I’m watching it closely because it rocks and will keep rocking. Not sure what I’m talking about? Just watch the DockerCon 2015 keynote by Solomon Hykes and friends!

Sure – their new Docker Toolbox accidentally screwed up my Git installation, so I’d rather install Linux on VirtualBox and test Docker inside it without polluting my Windows even further. But these are just minor problems in this (r)evolutionary tidal wave. And one just has to love the idea of immutable infrastructure – especially when demonstrated by someone like Jérôme Petazzoni (for the merit itself, not that he’s my idol beyond the professional scope :-)).

Spring 4 and on (4/5)

I have been aware of Spring since the dawn of microcontainers – and Spring emerged victorious (sort of). A friend of mine once mentioned how much he was impressed by Rod Johnson’s presentation about Spring many years ago. How structured his talk and speech were – the story about how he disliked all those logs pouring out of your EE application server… and that’s how Spring was born (sort of).

However, my real exposure to Spring started in 2011 – but it was very intense. And again, I read more about it than most of my colleagues. And just like with JPA – the more I read, the less I know, or so it seems. Spring is big. Start some typical application and read those logs – and you can see the EE of the 2010s (sort of).

It’s not that I don’t like Spring, but I guess its authors (and how many they are now) simply can’t see anymore what a beast they have created over the years. Sure, there is Spring Boot, which reflects all the current trends – like don’t deploy into a container but start the container from within, plus all of its automagic features, monitoring, clever defaults and so on. But that’s it. You don’t do more yourself, but you’d better know about it. Or not? Recently I got to one of the newer Uncle Bob articles – called Make the Magic go away. And there is undeniably much to it.

Spring developers do their best, but the truth is that many developers just adopt Spring because “it just works”, while they don’t know how – and very often it does not (sort of). You actually should know more about it – or at least some basics, for that matter – to be really useful. Of course, this magic problem is not only about Spring (or JPA), but these are the leaders of the “it simply works” movement.

But however you look at it, it’s still “enterprise” – and that means complexity. Sometimes essential, but mostly accidental. Well, that’s also part of the Java landscape.

Google Talk (RIP)

And this is this post’s biggest letdown. Google stopped supporting their beautifully simple chat client without any reasonable replacement. The Chrome application just doesn’t seem right to me – and it genuinely annoys me with its chat icon that hangs on the desktop, sometimes over my focused application, and I can’t relocate it easily… simply put, it does not behave like a normal application. That means it behaves badly.

I switched to Pidgin, but there are issues. Pidgin sometimes misses a message in the middle of a conversation – that was the biggest surprise. I double-checked: when someone repeated a question I had supposedly already received, I went to my Gmail account and really saw the message in the chat archive, but not in my client. And if I get messages while offline, nothing notifies me.

I activated the chat in my Gmail after all (against my wishes though), merely to be able to see any missing messages. But sadly, the situation with Google Talk/chat (or Hangouts, I don’t care) is dire if you expect a normal desktop client. 😦

My Windows toolset

Well – now away from Java, let’s hop onto my typical developer’s Windows desktop. I have mentioned some of my favourite tools before, some of them a couple of times I guess. So let’s do it quickly – bullet style:

  • Just after some “real browser” (my first download on the fresh Windows) I actually download Rapid Environment Editor. Setting Windows environment variables suddenly feels normal again.
  • Git for Windows – even if I didn’t use git itself, just for its bash – it’s worth it…
  • …but I still complement the bash with GnuWin32 packages for whatever is missing…
  • …and run it in better console emulator, recently it’s ConEmu.
  • Notepad2 binary.
  • And the rest like putty, WinSCP, …
  • Also, on Windows 8 and 10 I can't imagine living without Classic Shell. Windows 10 is a bit better, but its Start menu is simply unusable for me – the classic Start menu was so much faster with the keyboard!

As a developer I also sport some other languages and tools, mostly JVM based:

  • Ant, Maven, Gradle… obviously.
  • Groovy, of course – probably the most popular alternative JVM language. Not to mention that groovysh is a good REPL until Java 9 arrives (recently delayed beyond 2016).
  • VirtualBox, recently joined by Vagrant and hopefully also something like Chef/Puppet/Ansible. And this leads us to my plans.

Things I want to try

I have always been a friend of automation. I've been using Windows for many years now, but my preference for UNIX tools is obvious. Try to download and spin up a virtual machine for Windows and for Linux and you'll see the difference. Linux just works, and tools like Vagrant know where to download the images, etc.

With Windows, people are not even sure how (or whether) they can publish prepared images (talking about development only, of course), because nobody can really understand the licenses. Microsoft started to offer prepared Windows virtual machines – primarily for web development though, no server-class OS (not that I appreciate Windows Server anyway). They even offer a Vagrant variant, but try to download it and run it as is. For me, Vagrant refused to connect to the started VirtualBox machine, any reasonable instructions are missing (nothing specific to Vagrant is in the linked instructions), no Vagrantfile is provided… honestly, quite a lame attempt at making my life easier. I still appreciate the virtual machines.

But then there are those expiration periods… I just can't imagine preferring any Microsoft product/platform for development (and then for production, obviously). The whole culture of automation on Windows is just completely different – anything from "nonexistent for many" through "very difficult" to "artificially restricted". No wonder many Linux people can script and too few Windows guys can. Licensing terms are to blame as well. And virtual machine sizes for Windows are also ridiculous – although Microsoft is reportedly trying to do something in this field and offer a reasonably small base image for containerization.

Anyway, back to the topic. Automation is what I want to improve. I'm still doing it anyway, but recently the progress has not been as good as I wished. I fell behind with Gradle, I didn't use Docker as much as I'd like to, etc. Well – but life is not only about work, is it? 😉

Conclusion

Good thing is there are many tools available for Windows that make developer’s (and former Linux user’s) life so much easier. And if you look at Java and its whole ecosystem, it seems to be alive and kicking – so everything seems good on this front as well.

Maybe you ask: “What does 5/5 mean anyway?” Is it perfect? Well, probably not, but at least it means I’m satisfied – happy even! Without happiness it’s not 5, right?

Expression evaluation in Java (4)

Previously we looked at various options for evaluating expressions, then we implemented our own evaluator in ANTLR v4, and then we complicated it a bit with more types and more operations. But it still doesn't make sense without variables. What good would any expression-based rule engine be if we couldn't change the input parameters? 🙂

But before that…

All the code related to this blog post is here on GitHub; the package is called expr3 this time, as it is our third iteration of the ANTLR solution (even though we are in the 4th post). Tests are here. We will focus on variables, but there are some changes in expr3 beyond the variables themselves.

  • The grammar supports the NULL literal and identifiers (for variables) – but we will get to this.
  • ExpressionCalculatorVisitor now accepts even more types and the method convertToSupportedType takes care of this (a rough sketch follows after this list). This is important to support a reasonable palette of object types for variables (and it will work the same way for return values from functions later).
  • Numbers can be represented as Integer or BigDecimal. If a number fits into the Integer range (and is not a decimal number, of course) it is represented with Integer, otherwise BigDecimal is used. This does complicate arithmetic and relational (comparison) operations – we need some promotions here and there, etc. As of now it is coded in a bit of a crude way; had the implicit conversion rules been more complicated, a more sophisticated solution would be better.
  • Java Date and Time API types can be used as variable values, but they are converted to ISO extended strings (which still allows comparison!). LocalDate, LocalDateTime and Instant are supported in this demo.
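
What exactly convertToSupportedType does is in the repo – the following is only my rough sketch of the rules described above (the method name comes from the post, the body is an assumption):

private Object convertToSupportedType(Object value) {
    if (value == null || value instanceof Integer
        || value instanceof String || value instanceof Boolean) {
        return value;
    }
    if (value instanceof Number) {
        BigDecimal bd = new BigDecimal(value.toString());
        // integral and within Integer range -> Integer, otherwise BigDecimal
        if (bd.stripTrailingZeros().scale() <= 0
            && bd.compareTo(BigDecimal.valueOf(Integer.MIN_VALUE)) >= 0
            && bd.compareTo(BigDecimal.valueOf(Integer.MAX_VALUE)) <= 0) {
            return bd.intValueExact();
        }
        return bd;
    }
    // Java Date and Time API values become ISO extended strings (still comparable)
    if (value instanceof LocalDate || value instanceof LocalDateTime
        || value instanceof Instant) {
        return value.toString();
    }
    throw new IllegalArgumentException("Unsupported value type: " + value.getClass());
}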

As I said, I won't focus on these changes – they are in the code, and while they affect how we treat variables, they are not inherently related to introducing them. I also won't talk about the related tests (like literal resolution into Integer vs BigDecimal) – again, they are in the repo.

Identifiers and null

When we're working with variables, we need to write them into the expression somehow – and that's where identifiers come in. As you'd expect, identifiers represent the variables (or rather their values) in their respective places in the expression, so they are one kind of elemental expression, just like the various literals. The second thing we may need is the NULL value. This is not strictly necessary in all contexts, but null is so common in our Java world that I decided to support it too. Our expr rule in its fullness looks like this:

expr: STRING_LITERAL # stringLiteral
    | BOOLEAN_LITERAL # booleanLiteral
    | NUMERIC_LITERAL # numericLiteral
    | NULL_LITERAL # nullLiteral
    | op=('-' | '+') expr # unarySign
    | expr op=(OP_MUL | OP_DIV | OP_MOD) expr # arithmeticOp
    | expr op=(OP_ADD | OP_SUB) expr # arithmeticOp
    | expr op=(OP_LT | OP_GT | OP_EQ | OP_NE | OP_LE | OP_GE) expr # comparisonOp
    | OP_NOT expr # logicNot
    | expr op=(OP_AND | OP_OR) expr # logicOp
    | ID # variable
    | '(' expr ')' # parens
    ;

Null literal is predictably primitive:

NULL_LITERAL : N U L L;

Identifiers are not very complicated either, and I guess they are pretty much similar to Java syntax:

ID: [a-zA-Z$_][a-zA-Z0-9$_.]*;

Various tests for null in the expression (without variables first) may look like this:

    @Test
    public void nullComparison() {
        assertEquals(expr("null == null"), true);
        assertEquals(expr("null != null"), false);
        assertEquals(expr("5 != null"), true);
        assertEquals(expr("5 == null"), false);
        assertEquals(expr("null != 5"), true);
        assertEquals(expr("null == 5"), false);
        assertEquals(expr("null > null"), false);
        assertEquals(expr("null < null"), false);
        assertEquals(expr("null <= null"), false);
        assertEquals(expr("null >= null"), false);
    }

We don’t need much for this – resolving the NULL literal is particularly simple:

@Override
public Object visitNullLiteral(NullLiteralContext ctx) {
    return null;
}

We also modified visitComparisonOp – now it starts like this:

@Override
public Boolean visitComparisonOp(ExprParser.ComparisonOpContext ctx) {
    Comparable left = (Comparable) visit(ctx.expr(0));
    Comparable right = (Comparable) visit(ctx.expr(1));
    int operator = ctx.op.getType();
    if (left == null || right == null) {
        return left == null && right == null && operator == OP_EQ
            || (left != null || right != null) && operator == OP_NE;
    }
...

The rest deals with non-null values, etc. We may also let this method return null when null is involved anywhere except for EQ/NE – right now it returns false. It depends on what logic we want.
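
If we preferred the "unknown" logic instead, a minimal sketch of that variation could look like the helper below (my own assumption, not what expr3 does) – it would be called only when one of the sides is null, and its possibly-null result would have to be handled by the callers:

private static Boolean nullAwareComparison(Object left, Object right, int operator) {
    if (operator == OP_EQ) {
        return left == null && right == null;
    }
    if (operator == OP_NE) {
        return left != null || right != null;
    }
    return null; // <, >, <=, >= involving null is "unknown"
}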

Variable resolver

Variable resolving inside the calculator class is also quite simple. We need something that resolves variables – that's the variableResolver field, initialized in the constructor and used in visitVariable:

private final ExpressionVariableResolver variableResolver;

public ExpressionCalculatorVisitor(ExpressionVariableResolver variableResolver) {
    if (variableResolver == null) {
        throw new IllegalArgumentException("Variable resolver must be provided");
    }
    this.variableResolver = variableResolver;
}

@Override
public Object visitVariable(VariableContext ctx) {
    Object value = variableResolver.resolve(ctx.ID().getText());
    return convertToSupportedType(value);
}

Anything this resolver returns is converted to supported types as mentioned in the introduction. ExpressionVariableResolver is again very simple:

public interface ExpressionVariableResolver {
    Object resolve(String variableName);
}

And how can we implement this? In Java 8 you must just love it – here is a piece of a test:

private ExpressionVariableResolver variableResolver;

@BeforeMethod
public void init() {
    variableResolver = var -> null;
}

@Test
public void primitiveVariableResolverReturnsTheSameValueForAnyVarName() {
    variableResolver = var -> 5;
    assertEquals(expr("var"), 5);
    assertEquals(expr("anyvarworksnow"), 5);
}

I use a field that is set in the init method to a default "implementation" that always returns null. In another test method I change it so that it always returns 5, regardless of the actual variable name (parameter var), as the test clearly demonstrates. The next test is more useful, because its resolver returns a value only for one specific variable name:

@Test
public void variableResolverReturnsValueForOneVarName() {
    variableResolver = var -> var.equals("var") ? 5 : null;
    assertEquals(expr("var"), 5);
    assertEquals(expr("var != null"), true);
    assertEquals(expr("var == null"), false);

    assertEquals(expr("anyvarworksnow"), null);
    assertEquals(expr("anyvarworksnow == null"), true);
}

Now the actual name of the variable (identifier) must be "var", otherwise the resolver returns null again. You might have heard that lambdas work nicely as super-short test implementations – and yes, they do.

You may wonder why I use a field instead of a local variable in the test method itself. A local variable would be better contained and would prevent any accidental leaks (although that @BeforeMethod covers me on this). The trouble is that variableResolver is used deeper in the expr(…) method and I didn't want to add it as a parameter everywhere, hence the field:

private Object expr(String expression) {
    ParseTree parseTree = ExpressionUtils.createParseTree(expression);
    return new ExpressionCalculatorVisitor(variableResolver)
        .visit(parseTree);
}

Any real-life implementation?

The variable resolvers in the tests were obviously very primitive, so let's try something more realistic. The first attempt is also very simple, but indeed realistic. Remember Bindings in Java's ScriptEngine? It actually extends Map – so how about a resolver that wraps an existing Map<String, Object> (mapping variable names to their values)? OK, it – again – may be too primitive:

ExpressionVariableResolver resolver = var -> map.get(var);

Bah, we need a bigger challenge! Let's say I have a Java Bean or any POJO and I want to explicitly specify my variable names and how they should be resolved against that object. This may be a method call, like a getter, so right now we don't have the values readily available in a collection (or a map).

The important thing to realize here is that the resolver will be different from object to object, because for different objects it needs to provide different values. However, the way it obtains the values will be the same. We will wrap this "way" into a VariableMapper that knows how to get values from an object of a specific type (using generics) – and it will also help us resolve the value for a specific instance. The tests show how I intend to use it:

private VariableMapper<SomeBean> variableMapper;
private ParseTree myNameExpression;
private ParseTree myCountExpression;

@BeforeClass
public void init() {
    variableMapper = new VariableMapper<SomeBean>()
        .set("myName", o -> o.name)
        .set("myCount", SomeBean::getCount);
    myNameExpression = ExpressionUtils.createParseTree("myName <= 'Virgo'");
    myCountExpression = ExpressionUtils.createParseTree("myCount * 3");
}

@Test
public void myNameExpressionTest() {
    SomeBean bean = new SomeBean();
    ExpressionCalculatorVisitor visitor = new ExpressionCalculatorVisitor(
        var -> variableMapper.resolveVariable(var, bean));

    assertEquals(visitor.visit(myNameExpression), false); // null comparison is false
    bean.name = "Virgo";
    assertEquals(visitor.visit(myNameExpression), true);
    bean.name = "ABBA";
    assertEquals(visitor.visit(myNameExpression), true);
    bean.name = "Virgo47";
    assertEquals(visitor.visit(myNameExpression), false);
}

@Test
public void myCountExpressionTest() {
    SomeBean bean = new SomeBean();
    ExpressionCalculatorVisitor visitor = new ExpressionCalculatorVisitor(
        var -> variableMapper.resolveVariable(var, bean));

    // assertEquals(visitor.visit(myCountExpression), false); // NPE!
    bean.setCount(3f);
    assertEquals(visitor.visit(myCountExpression), new BigDecimal("9"));
    bean.setCount(-1.1f);
    assertEquals(visitor.visit(myCountExpression), new BigDecimal("-3.3"));
}

public static class SomeBean {
    public String name;
    private Float count;

    public Float getCount() {
        return count;
    }

    public void setCount(Float count) {
        this.count = count;
    }
}

VariableMapper can live longer – you set it up and then reuse it. Its configuration is its state (the set methods); the concrete object is merely an input parameter. The variable resolver itself works like a closure around the concrete instance. Keep in mind that instantiating the calculator visitor is cheap, and the visiting itself is something you have to do anyway. Creating the parse tree is expensive, but we don't repeat that between tests – and that's probably how you want to use it in your application too. Cache the parse trees, create visitors – even with state specific to a single calculation – and then throw them away (see the sketch below). This is also the safest approach from a threading perspective. You don't want to use a calculator that "closes over" another target object in another thread in the middle of your visiting business. 🙂
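
To illustrate the pattern, here is a minimal sketch (the cache and the evaluate method are my own invention – only createParseTree, VariableMapper and the visitor come from the code above):

// parse trees are expensive – cache them; visitors are cheap – one per computation,
// each closing over its own target object
private static final Map<String, ParseTree> PARSE_TREES = new ConcurrentHashMap<>();

public Object evaluate(String expression, SomeBean bean) {
    ParseTree parseTree = PARSE_TREES.computeIfAbsent(
        expression, ExpressionUtils::createParseTree);
    return new ExpressionCalculatorVisitor(
        var -> variableMapper.resolveVariable(var, bean))
        .visit(parseTree);
}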

OK, so what does that VariableMapper look like?

public class VariableMapper<T> {
    private Map<String, Function<T, Object>> variableValueFunctions = new HashMap<>();

    public VariableMapper<T> set(String variableName, Function<T, Object> valueFunction) {
        variableValueFunctions.put(variableName, valueFunction);
        return this;
    }

    public Object resolveVariable(String variableName, T object) {
        Function<T, Object> valueFunction = variableValueFunctions.get(variableName);
        if (valueFunction == null) {
            throw new ExpressionException("Unknown variable " + variableName);
        }
        return valueFunction.apply(object);
    }
}

As said, it keeps the configuration, but not the state of the object used in a concrete calculation – that's what the variable resolver does (and again, it's a lambda – one simply can't resist in this case). Sure, you could combine the VariableResolver with the mapping configuration too, but that would either 1) work in a single-threaded environment only, or 2) require you to reconfigure the mapping for each resolver in each thread. It simply doesn't make sense. The mapper (long-lived) keeps the "way" to get stuff from an object of some type in a particular computation context, while the variable resolver (short-lived) merely closes over the concrete instance.

Of course, our mapper could stand some improvements – it would be good if one could "seal" the configuration so that no further set calls are allowed after that (probably throwing IllegalStateException).
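
One possible sketch of such sealing (just an idea, not in the repo) – a flag plus a guard in set:

private boolean sealed;

/** Forbids any further configuration changes. */
public VariableMapper<T> seal() {
    sealed = true;
    return this;
}

public VariableMapper<T> set(String variableName, Function<T, Object> valueFunction) {
    if (sealed) {
        throw new IllegalStateException("VariableMapper is sealed, no more set calls allowed");
    }
    variableValueFunctions.put(variableName, valueFunction);
    return this;
}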

Conclusion

So here we are, supporting even more types (Integer/BigDecimal), but – most importantly – variables! As you can see, now every computation can bring a different result. That's why it's advisable to rethink how you want to instantiate your visitors, especially in a multi-threaded environment.

Our ExpressionVariableResolver interface is very simple – it takes only the variable name – so if you want to resolve from something stateful (and probably mutable) it's important to wrap around it somehow. The variable resolver doesn't know how to get stuff from an object, because there is no such input parameter. That's why we introduced VariableMapper, which supports getting values from an object of some type (generic). And we "implement" the variable resolver as a lambda that closes over the configured variable mapper and the object that is then fed to its resolveVariable method. This method, in contrast to the variable resolver's resolve, takes the object as a parameter.

It doesn't have to be an object – you may implement other ways to get variable values in different contexts; you just have to wrap around that context (in our case an object) somehow. I dare say that Java 8 functional programming capabilities make it so much easier…

Still, the main hero here is ANTLR v4, of course. Now our expression evaluator truly makes sense. I’m not promising any continuation of this series, but maybe I’ll talk about functions too. Although I guess you can easily implement them yourselves by now.

Exploring the cloud with AWS Free Tier (2)

In the first part of this "diary" I found a cloud provider for my developer's testing needs – Amazon's AWS. This time we will mention some hiccups one may encounter when doing basic operations around an EC2 instance. Finally, we will prepare a Docker image for ourselves, although this is not really AWS specific – at least not in our basic scenario.

Shut it down!

When you shut down your desktop computer, you see what it does. I've been running Windows for some years now, although I was a Linux guy before (blame gaming and music home recording). On servers, no doubt, I prefer Linux every time. But I honestly don't remember what happens if I enter the shutdown now command without further options.

If I see the computer going on and on although my OS is down already, I just turn it off and remember to use -h switch the next time. But when “my computer” runs far away and only some dashboard shows what is happening, you simply don’t know for sure. There is no room for “mechanical sympathy”.

Long story short – always use shutdown now -h on your EC2 instance if you really want to stop it. Of course, check the instance's Shutdown Behavior setting – by default it's Stop and that's probably what you want (Terminate would delete the instance altogether). With the magical -h you'll soon see the state of the instance go through stopping to stopped – without it, the instance just hangs there running, but not really reachable.

Watch those volumes

When you shut down your EC2 instances, they stop consuming "instance-hours". On the other hand, if you spin up 100 t2.micro instances and run them for an hour, you'll spend 100 hours of your 750-hour monthly limit. It's easy to understand this way of "spending".

However, volumes (disk space for your EC2 instances) work a bit differently. They are reserved for you and they are billed for the whole time you have them available – whether the instance runs or not. Also, how much of the space you really use is NOT important – your reserved space (typically 8 GiB for a t2.micro instance if you use the defaults) is what counts. Two sleeping instances for the whole month would not hit the limit, but three would – and the 4 GiB above the 20 GiB/month would be billed to you (depending on how long you stay above the limit as well).

In any case, Billing Management Console is your friend here and AWS definitely provides you with all the necessary data to see where you are with your usage.

Back to Docker

I wanted to play with Docker before I decided to couple it with the cloud exploration. AWS provides the so-called EC2 Container Service (ECS) to give you more power when managing containers, but today we will not go there. We will create a Docker image manually, right on our EC2 instance. I'd rather take baby steps than skip some "maturity levels" without understanding the basics.

When I want to “deploy” a Java application in a container, I want to create some Java base image for it first. So let’s connect to our EC2 instance and do it.

Java 32-bit base image

Let’s create our base image for Java applications first. Create a dir (any name will do, but something like java-base sounds reasonable) and this Dockerfile in it:

FROM ubuntu:14.04
MAINTAINER virgo47

# We want WGET in any case
RUN apt-get -qqy install wget

# For 32-bit Java we need to enable 32-bit binaries
RUN dpkg --add-architecture i386
RUN apt-get -qqy update
RUN apt-get -qqy install libc6:i386 libncurses5:i386 libstdc++6:i386

ENV HOME /root

# Install 32-bit JAVA
WORKDIR $HOME
RUN wget -q --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u60-b27/jdk-8u60-linux-i586.tar.gz
RUN tar xzf jdk-8u60-linux-i586.tar.gz
ENV JAVA_HOME $HOME/jdk1.8.0_60
ENV PATH $JAVA_HOME/bin:$PATH

Then to build it (you must be in the directory with Dockerfile):

$ docker build -t virgo47/jaba .

Jaba stands for “java base”. And to test it:

$ docker run -ti virgo47/jaba
root@46d1b8156c7c:~# java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) Client VM (build 25.60-b23, mixed mode)
root@46d1b8156c7c:~# exit

My application image

Now I want to run my HelloWorld application in that base image. That means creating another image based on virgo47/jaba. Create another directory (myapp) and the following Dockerfile:

FROM virgo47/jaba
MAINTAINER virgo47

WORKDIR /root/
COPY HelloWorld.java ./
RUN javac HelloWorld.java
CMD java HelloWorld

Easy enough, but before we can build it we need that HelloWorld.java too. I guess anybody can do it, but for the sake of completeness:

public class HelloWorld {
        public static void main(String... args) {
                System.out.println("Hello, world!");
        }
}

Now let’s build it:

$ docker build -t virgo47/myapp .

And to test it:

$ docker run -ti virgo47/myapp
Hello, world!

So it actually works! But we should probably deliver a JAR file directly into the image build instead of compiling it during the build. Can we automate it? Sure we can, but maybe in another post.

To wrap up…

I hope I'll get to Amazon's ECS later, because the things above work and serve as Docker(file) practice, but they definitely are not for the real world. You may at least run it all from your local machine as a combination of scp/ssh, instead of creating Dockerfiles and other sources on the remote machine – because that doesn't make sense, of course. We need to build the Docker image as part of our build process, publish it somewhere and just download it to the target environment. But let's get away from Docker and back to AWS.

In the meantime one big AWS event occurred – AWS re:Invent 2015. I have to admit I wasn't aware of it at all until now; I just got email notifications about the event and the keynotes as an AWS user. I am aware of other conferences – I was happy enough to attend some European Sun TechDays (how I miss those :-)), TheServerSide Java Symposiums (miss those too) and one DEVOXX – but just judging from the videos, re:Invent was really mind-blowing.

I don't know what more to say, so I'm over and out for now. It will probably take me another couple of weeks to form more concrete impressions of AWS, but I plan to add a third part – hopefully again loosely coupled with Docker.

Expression evaluation in Java (3)

In the first part of the series we evaluated expressions to get the result, and in the second part we parsed the expression with our own grammar. Today we will expand on the topic – we will introduce more types and more operations. Variables are left for the next installment, as this part will be long enough without them anyway.

As always, the complete example is available on GitHub and its structure is pretty much similar to the first simple version of our ANTLR-based expression evaluator. And what will we do today?

  • We will expand our numbers from integers to floating point (implemented with BigDecimals).
  • We will add string and boolean types too, along with relational operators (comparisons).
  • Relational operators can be written as >= but also as GE, which is extremely handy when expressions are part of an XML configuration. In addition, keywords will be case insensitive.

We will introduce the grammar changes from the end of the file to the beginning – you can download our new grammar in its entirety.

Case-insensitive keywords

It would be cool if there was some easy way to tell ANTLR that a particular keyword (a string literal in the grammar) is case insensitive – there are many languages that work this way. But there is no magical switch, so we have to help ourselves with tiny parts of token rules called fragments. The following part goes at the very end of the grammar file.

fragment DIGIT : [0-9];

fragment A : [aA];
fragment B : [bB];
fragment C : [cC];
fragment D : [dD];
fragment E : [eE];
fragment F : [fF];
fragment G : [gG];
fragment H : [hH];
fragment I : [iI];
fragment J : [jJ];
fragment K : [kK];
fragment L : [lL];
fragment M : [mM];
fragment N : [nN];
fragment O : [oO];
fragment P : [pP];
fragment Q : [qQ];
fragment R : [rR];
fragment S : [sS];
fragment T : [tT];
fragment U : [uU];
fragment V : [vV];
fragment W : [wW];
fragment X : [xX];
fragment Y : [yY];
fragment Z : [zZ];

Now when we want a case-insensitive keyword we write O R instead of 'or', like this:

OP_OR: O R | '||';

But we will get to operators later, let’s do the literals and company first.

Literals and whitespaces

We will support boolean literals both as a word and as a single letter (blame my recent experience with the R language). Number literals must cover floating-point numbers too. And finally there are strings as well. Whitespace handling is unchanged:

BOOLEAN_LITERAL: T R U E | T
    | F A L S E | F
    ;

NUMERIC_LITERAL : DIGIT+ ( '.' DIGIT* )? ( E [-+]? DIGIT+ )?
    | '.' DIGIT+ ( E [-+]? DIGIT+ )?
    ;

STRING_LITERAL : '\'' ( ~'\'' | '\'\'' )* '\'';

WS: [ \t\r\n]+ -> skip;

Operators

Nothing special here, we just add more of them.

OP_LT: L T | '<';
OP_GT: G T | '>';
OP_LE: L E | '<=';
OP_GE: G E | '>=';
OP_EQ: E Q | '=' '='?;
OP_NE: N E | N E Q | '!=' | '<>';
OP_AND: A N D | '&&';
OP_OR: O R | '||';
OP_NOT: N O T | '!';
OP_ADD: '+';
OP_SUB: '-';
OP_MUL: '*';
OP_DIV: '/';
OP_MOD: '%';

Equality can be written either as in SQL or as in Java, because there is no assignment statement in our grammar. All relational and logical operators have both symbol and keyword forms. If you miss XOR, you can add it yourself, of course.

And the expression rule…

Finally we got to the expression rule, which got a bit richer:

result: expr;

expr: STRING_LITERAL # stringLiteral
    | BOOLEAN_LITERAL # booleanLiteral
    | NUMERIC_LITERAL # numericLiteral
    | op=('-' | '+') expr # unarySign
    | expr op=(OP_MUL | OP_DIV | OP_MOD) expr # arithmeticOp
    | expr op=(OP_ADD | OP_SUB) expr # arithmeticOp
    | expr op=(OP_LT | OP_GT | OP_EQ | OP_NE | OP_LE | OP_GE) expr # comparisonOp
    | OP_NOT expr # logicNot
    | expr op=(OP_AND | OP_OR) expr # logicOp
    | '(' expr ')' # parens
    ;

You may notice one additional rule – result. We will use this for a single special reason covered later. Now is the time to generate the classes from the grammar and then to reimplement the visitor class.

BTW: Our new version of the calculator visitor will not extend ExprBaseVisitor<Integer> with a type argument (Integer) anymore, as the result may be of any supported type – String, BigDecimal or Boolean. And we don't know up front what the result will be, so we simply don't state the type argument at all.
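
The class header then looks roughly like this (the raw generated base type, no type argument – just a sketch, the full class is in the repo):

    public class ExpressionCalculatorVisitor extends ExprBaseVisitor {
        // individual visit methods declare their own return types:
        // String, BigDecimal, Boolean – or Object where we can't know in advance
    }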

Implementing literals

Before we implement anything, we should have some expectations – and that means tests. My test class is not a perfect one-case-per-test affair, but it does the job.

    @Test
    public void booleanLiterals() {
        assertEquals(expr("t"), true);
        assertEquals(expr("True"), true);
        assertEquals(expr("f"), false);
        assertEquals(expr("faLSE"), false);
    }

    @Test
    public void stringLiteral() {
        assertEquals(expr("''"), "");
        assertEquals(expr("''''"), "'");
        assertEquals(expr("'something'"), "something");
    }

    @Test
    public void numberLiterals() {
        assertEquals(expr("5"), new BigDecimal("5"));
        assertEquals(expr("10.35"), new BigDecimal("10.35"));
    }

The expr method in the test is still implemented as before. Let's focus on the visitor implementation now. The parts we need for this test to pass are here:

    @Override
    public String visitStringLiteral(StringLiteralContext ctx) {
        String text = ctx.STRING_LITERAL().getText();
        text = text.substring(1, text.length() - 1)
            .replaceAll("''", "'");
        return text;
    }

    @Override
    public Boolean visitBooleanLiteral(BooleanLiteralContext ctx) {
        return ctx.BOOLEAN_LITERAL().getText().toLowerCase().charAt(0) == 't';
    }

    @Override
    public BigDecimal visitNumericLiteral(NumericLiteralContext ctx) {
        String text = ctx.NUMERIC_LITERAL().getText();
        return stringToNumber(text);
    }

    private BigDecimal stringToNumber(String text) {
        BigDecimal bigDecimal = new BigDecimal(text);

        return bigDecimal.scale() < 0
            ? bigDecimal.setScale(0, roundingMode)
            : bigDecimal;
    }

Arithmetic operations

Now we will implement arithmetic with BigDecimals, including unary signs and parentheses.

    @Test
    public void arithmeticTest() {
        assertEquals(expr("5+5.1"), new BigDecimal("10.1"));
        assertEquals(expr("5-5.1"), new BigDecimal("-0.1"));
        assertEquals(expr("0.3*0.1"), new BigDecimal("0.03"));
        assertEquals(expr("0.33/0.1"), new BigDecimal("3.3"));
        assertEquals(expr("1/3"), new BigDecimal("0.333333333333333"));
        assertEquals(expr("10%3"), new BigDecimal("1"));
    }

    @Test
    public void unarySignTest() {
        assertEquals(expr("-5"), new BigDecimal("-5"));
        assertEquals(expr("-+-5"), new BigDecimal("5"));
        assertEquals(expr("-(3+5)"), new BigDecimal("-8"));
    }

The implementation in the visitor respects our defined maximal scale, which is very important for cases like 1/3. Without a scale and rounding mode set, such a division would throw an ArithmeticException (non-terminating decimal expansion).

    public static final int DEFAULT_MAX_SCALE = 15;
    private int maxScale = DEFAULT_MAX_SCALE;
    private int roundingMode = BigDecimal.ROUND_HALF_UP;

    @Override
    public BigDecimal visitArithmeticOp(ExprParser.ArithmeticOpContext ctx) {
        BigDecimal left = (BigDecimal) visit(ctx.expr(0));
        BigDecimal right = (BigDecimal) visit(ctx.expr(1));
        return bigDecimalArithmetic(ctx, left, right);
    }

    private BigDecimal bigDecimalArithmetic(ExprParser.ArithmeticOpContext ctx, BigDecimal left, BigDecimal right) {
        switch (ctx.op.getType()) {
            case OP_ADD:
                return left.add(right);
            case OP_SUB:
                return left.subtract(right);
            case OP_MUL:
                return left.multiply(right);
            case OP_DIV:
                return left.divide(right, maxScale, roundingMode).stripTrailingZeros();
            case OP_MOD:
                return left.remainder(right);
            default:
                throw new IllegalStateException("Unknown operator " + ctx.op);
        }
    }

    @Override
    public BigDecimal visitUnarySign(UnarySignContext ctx) {
        BigDecimal result = (BigDecimal) visit(ctx.expr());
        boolean unaryMinus = ctx.op.getText().equals("-");
        return unaryMinus
            ? result.negate()
            : result;
    }

    @Override
    public Object visitParens(ParensContext ctx) {
        return visit(ctx.expr());
    }

We may appreciate scale 15 for interim results, but maybe we want a lower scale for the final result. Let's do that now.

Result rule with different scale

Let's say we require a scale of just 6 for final results. We will implement the result rule for that:

    public static final int DEFAULT_MAX_RESULT_SCALE = 6;
    private int maxResultScale = DEFAULT_MAX_RESULT_SCALE;

    /** Maximum BigDecimal scale used during computations. */
    public ExpressionCalculatorVisitor maxScale(int maxScale) {
        this.maxScale = maxScale;
        return this;
    }

    /** Maximum BigDecimal scale for result. */
    public ExpressionCalculatorVisitor maxResultScale(int maxResultScale) {
        this.maxResultScale = maxResultScale;
        return this;
    }

    @Override
    public Object visitResult(ExprParser.ResultContext ctx) {
        Object result = visit(ctx.expr());
        if (result instanceof BigDecimal) {
            BigDecimal bdResult = (BigDecimal) result;
            if (bdResult.scale() > maxResultScale) {
                result = bdResult.setScale(maxResultScale, roundingMode);
            }
        }
        return result;
    }

We also introduced two methods setting the maximum interim and result scales. Both of them return this, so they can be chained right after the constructor of ExpressionCalculatorVisitor. One last change is required in ExpressionUtils – instead of calling the expr rule on the parser we need to use the result rule (a sketch follows). Otherwise the util class looks exactly like before.
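
For completeness, a sketch of how the util method may look after the change (the real class is in the repo; the only point here is the parser.result() call at the end):

    public static ParseTree createParseTree(String expression) {
        ANTLRInputStream input = new ANTLRInputStream(expression);
        ExprLexer lexer = new ExprLexer(input);
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        ExprParser parser = new ExprParser(tokens);
        return parser.result(); // previously we returned parser.expr()
    }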

Of course we have to fix the test – for 1/3 we now expect just 0.333333 instead of the 15 digits like before.
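
For example (the test method name is mine; the expectation matches the new default result scale of 6):

    @Test
    public void divisionResultScale() {
        assertEquals(expr("1/3"), new BigDecimal("0.333333"));
    }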

Logical operations

After previous problems this is a piece of cake. First the test:

    @Test
    public void logicalOperatorTest() {
        assertEquals(expr("F && F"), false);
        assertEquals(expr("F && T"), false);
        assertEquals(expr("T and F"), false);
        assertEquals(expr("T AND T"), true);
        assertEquals(expr("F || F"), false);
        assertEquals(expr("F || T"), true);
        assertEquals(expr("T or F"), true);
        assertEquals(expr("T OR T"), true);
        assertEquals(expr("!T"), false);
        assertEquals(expr("not T"), false);
        assertEquals(expr("!f"), true);
    }

And the visitor implementation:

    @Override
    public Boolean visitLogicOp(ExprParser.LogicOpContext ctx) {
        boolean left = (boolean) visit(ctx.expr(0));

        switch (ctx.op.getType()) {
            case OP_AND:
                return left && booleanRightSide(ctx);
            case OP_OR:
                return left || booleanRightSide(ctx);
            default:
                throw new IllegalStateException("Unknown operator " + ctx.op);
        }
    }

    private boolean booleanRightSide(ExprParser.LogicOpContext ctx) {
        return (boolean) visit(ctx.expr(1));
    }

    @Override
    public Boolean visitLogicNot(ExprParser.LogicNotContext ctx) {
        return !(Boolean) visit(ctx.expr());
    }

Relational operators

Let’s do some comparisons and (non) equalities. Test first:

    @Test
    public void relationalOperatorTest() {
        assertEquals(expr("1 > 0.5"), true);
        assertEquals(expr("1 > 1"), false);
        assertEquals(expr("1 >= 0.5"), true);
        assertEquals(expr("1 >= 1"), true);
        assertEquals(expr("5 == 5"), true);
        assertEquals(expr("5 != 5"), false);
        assertEquals(expr("'a' > 'b'"), false);
        assertEquals(expr("'a' >= 'b'"), false);
        assertEquals(expr("'a' < 'b'"), true);
        assertEquals(expr("'a' <= 'b'"), true);
        assertEquals(expr("true == true"), true);
        assertEquals(expr("true == f"), false);
        assertEquals(expr("true eq t"), true);
    }

Again, I realize that these should be many separate tests and the coverage of cases is also not that great, but let’s move on to the implementation:

    @Override
    public Boolean visitComparisonOp(ExprParser.ComparisonOpContext ctx) {
        Comparable left = (Comparable) visit(ctx.expr(0));
        Comparable right = (Comparable) visit(ctx.expr(1));
        int operator = ctx.op.getType();
        if (left == null || right == null) {
            return left == null && right == null && operator == OP_EQ
                || (left != null || right != null) && operator == OP_NE;
        }

        //noinspection unchecked
        int comp = left.compareTo(right);
        switch (operator) {
            case OP_EQ:
                return comp == 0;
            case OP_NE:
                return comp != 0;
            case OP_GT:
                return comp > 0;
            case OP_LT:
                return comp < 0;
            case OP_GE:
                return comp >= 0;
            case OP_LE:
                return comp <= 0;
            default:
                throw new IllegalStateException("Unknown operator " + ctx.op);
        }
    }

Nothing special here – we utilize the fact that Strings, Booleans and BigDecimals are all comparable.

Conclusion

What did we see and what more can we do? Given the sheer scope of the changes, we postponed variables to the next part of the series. Of course, variables (and functions!) are very important for expression evaluation – otherwise we're talking about a constant result for each expression anyway, right?

We have created a flexible grammar that is kind of a mix between SQL and Java. Of course, you may not like having so many alternatives for all the operators, but in our context it doesn't pose any problem. It might have in the context of a general programming language (e.g. the = vs == operators).

Another thing is that numbers are BigDecimals – always. This is kind of Groovy-like, where 3 / 2 is not 1 as a Java programmer would expect – and such division is also much slower than integer division. We may introduce integers and support smart division based on the context (like Java does after all, except that its floating point is IEEE 754 and not BigDecimal-based). But not now.

Now you can download the complete example – and if you don’t know how to download part of the repo, you may try this command:

svn export https://github.com/virgo47/litterbin/trunk/demos/antlr-expr/src/main/java/expr2/

Tests are in src/test of course. See you next with variables and maybe even functions.