Exploring the cloud with AWS Free Tier (2)

In the first part of this “diary” I found a cloud provider for my developer’s testing needs – Amazon’s AWS. This time we will mention a few hiccups you may encounter when doing basic operations around your EC2 instance. Finally, we will prepare a Docker image for ourselves, although this part is not really AWS specific – at least not in our basic scenario.

Shut it down!

When you shut down your desktop computer, you see what it does. I’ve been running Windows for some years now, although I was a Linux guy before (blame gaming and music home recording). On servers, no doubt, I prefer Linux every time. But I honestly don’t remember what happens if I enter the shutdown now command without further options.

If I see the computer still going on and on although my OS is already down, I just turn it off and remember to use the -h switch the next time. But when “my computer” runs far away and only some dashboard shows what is happening, you simply don’t know for sure. There is no room for “mechanical sympathy”.

Long story short – always use shutdown -h now on your AMI instance if you really want to stop it. Of course, check the instance’s Shutdown Behavior setting – by default it’s Stop and that’s probably what you want (Terminate would delete the instance altogether). With the magical -h you’ll soon see the state of the instance go through stopping to stopped – without it, the instance just hangs there running, but not really reachable.

Watch those volumes

When you shut down your EC2 instances, they stop consuming “instance-hours”. On the other hand, if you spin up 100 t2.micro instances and run them for an hour, you’ll spend 100 hours of your 750-hour monthly limit. This way of “spending” is easy to understand.

However, volumes (disk space for your EC2 instance) work a bit differently. They are reserved for you and they are billed for the whole time you have them available – whether the instance runs or not. Also, how much of the space you really use is NOT important. Your reserved space (typically 8 GiB for a t2.micro instance if you use the defaults) is what counts. Two sleeping instances for the whole month would not hit the limit, but three would – and the 4 GiB above the 20 GiB/month allowance would be billed to you (prorated by how long you stay above the limit).
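To make that “spending” math explicit, here is a tiny back-of-the-envelope sketch (the class and method names are mine, purely illustrative; the numbers – 8 GiB per volume, 20 GiB free – come from the paragraph above):

```java
public class EbsFreeTierMath {

    /** GiB reserved above the Free Tier allowance; volumes bill whether instances run or not. */
    public static int overageGib(int instances, int gibPerVolume, int freeGib) {
        return Math.max(0, instances * gibPerVolume - freeGib);
    }

    public static void main(String[] args) {
        System.out.println(overageGib(2, 8, 20)); // two sleeping instances: 16 GiB, still free
        System.out.println(overageGib(3, 8, 20)); // three instances: 24 GiB, 4 GiB billable
    }
}
```

Note it is the reserved size that goes into the formula, not the used space.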

In any case, the Billing Management Console is your friend here and AWS definitely provides all the data necessary to see where you are with your usage.

Back to Docker

I wanted to play with Docker even before I decided to couple it with cloud exploration. AWS provides the so-called EC2 Container Service (ECS) to give you more power when managing containers, but today we will not go there. We will create a Docker image manually, right on our EC2 instance. I’d rather take baby steps than skip some “maturity levels” without understanding the basics.

When I want to “deploy” a Java application in a container, I want to create some Java base image for it first. So let’s connect to our EC2 instance and do it.

Java 32-bit base image

Let’s create our base image for Java applications first. Create a dir (any name will do, but something like java-base sounds reasonable) and this Dockerfile in it:

FROM ubuntu:14.04

# We want wget in any case
RUN apt-get -qqy update
RUN apt-get -qqy install wget

# For 32-bit Java we need to enable 32-bit binaries
RUN dpkg --add-architecture i386
RUN apt-get -qqy update
RUN apt-get -qqy install libc6:i386 libncurses5:i386 libstdc++6:i386

ENV HOME /root
WORKDIR $HOME

# Install 32-bit JAVA
RUN wget -q --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u60-b27/jdk-8u60-linux-i586.tar.gz
RUN tar xzf jdk-8u60-linux-i586.tar.gz
ENV JAVA_HOME $HOME/jdk1.8.0_60
ENV PATH $JAVA_HOME/bin:$PATH

Then build it (you must be in the directory with the Dockerfile):

$ docker build -t virgo47/jaba .

Jaba stands for “java base”. And to test it:

$ docker run -ti virgo47/jaba
root@46d1b8156c7c:~# java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) Client VM (build 25.60-b23, mixed mode)
root@46d1b8156c7c:~# exit

My application image

Now I want to run my HelloWorld application in that base image. That means creating another image based on virgo47/jaba. Create another directory (myapp) and the following Dockerfile:

FROM virgo47/jaba

WORKDIR /root/
COPY HelloWorld.java ./
RUN javac HelloWorld.java
CMD java HelloWorld

Easy enough, but before we can build it we need that HelloWorld.java too. I guess anybody can do it, but for the sake of completeness:

public class HelloWorld {
        public static void main(String... args) {
                System.out.println("Hello, world!");
        }
}

Now let’s build it:

$ docker build -t virgo47/myapp .

And to test it:

$ docker run -ti virgo47/myapp
Hello, world!

So it actually works! But we should probably deliver a JAR file directly into the image build rather than compiling the source during the build. Can we automate it? Sure we can, but maybe in another post.
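Just to sketch where this is heading – a hypothetical Dockerfile for the JAR-based variant (myapp.jar is an assumed artifact produced by some build process we don’t have yet):

```dockerfile
FROM virgo47/jaba

WORKDIR /root/
# Ship a prebuilt JAR instead of compiling inside the image
COPY myapp.jar ./
CMD java -jar myapp.jar
```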

To wrap up…

I hope I’ll get to Amazon’s ECS later, because while the things above work and are decent Docker(file) practice, they definitely are not for the real world. At the very least you could run it all from your local machine as a combination of scp/ssh, instead of creating Dockerfiles and other sources on the remote machine – because that doesn’t make sense, of course. We need to build the Docker image as part of our build process, publish it somewhere and just download it to the target environment. But let’s get away from Docker and back to AWS.

In the meantime one big AWS event occurred – AWS re:Invent 2015. I have to admit I wasn’t aware of it at all until now; I just got email notifications about the event and the keynotes as an AWS user. I am aware of other conferences – I was happy enough to attend some European Sun TechDays (how I miss those :-)), TheServerSide Java Symposiums (miss those too) and one DEVOXX – but just judging from the videos, re:Invent was really mind-blowing.

I don’t know what more to say, so I’m over and out for now. It will probably take me another couple of weeks to gather more concrete impressions of AWS, but I plan to add a third part – hopefully again loosely coupled with Docker.

The pain with sourcecode in WordPress

I don’t know why, but I always felt better when I prepared my blog posts somewhere else (Google Docs in my case) and then just pasted them into wordpress.com. I wanted some backup – though there is a WordPress feature for that. But the most important factor was their editor. It was able to mess up the post irreversibly (when undo doesn’t help), so I simply refused to use it in the first place. Google has versioning, reliable undo, frequent auto-saves, etc.

The biggest issue was always related to posting source code examples. In the “good old” WordPress editor I had a workflow that worked well enough when pasting code from Google Docs; I had to go through it and fix some details, but it was usable.

And then their new editor (Beep beep boop) came. What was easy before became nearly impossible. Not to be all negative: normally formatted text can be easily copied/pasted from Google Docs to WordPress, so I don’t have to care about hyperlinks and formatting anymore. But source code – which had its issues before – became virtually “unpastable”. At least not directly.

With the old editor I often just started in HTML view, pasted the plain text there and formatted headers, bolds and links in Visual mode. Annoying, but working. We will not discuss this scenario though, because it requires a lot of work after the paste. We will focus on the Visual mode of the new editor – and how unreliable it is when you just want to copy/paste something into it. After all, I expected more to work with the new editor – but it didn’t deliver.

Basic workflow

It’s pretty easy. I open the WordPress editor, paste the post title and then paste the whole post in Visual mode. At the bottom of it I have my tags for the post. Tags are another stupid new issue with the new editor, but they don’t kill as much time as source code does. I use source code a lot in my posts; it’s my line of work after all.

So after the whole post is there, it’s time to go over all the sources and fix them one by one. Well, let’s see what happens when you paste code from Google Doc directly:

[copy-paste-source-from-googledoc]

You can paste with Ctrl+V or Ctrl+Shift+V – but the latter is not useful when you need formatting in your post, and you always do. Ctrl+Shift+V would be useful when fixing the code blocks one by one, but it doesn’t work when you paste directly from Google Docs anyway. It just screws them up in a different manner.

Spoiler: You can copy/paste directly in HTML mode. But we will stay in Visual mode for a bit longer and introduce more of its annoyances.

Copy/paste inside the post breaks it too!

Ok, this really got me. Maybe we can have some understanding when an editor doesn’t accept stuff pasted from a different product – but this one cannot even accept its own stuff. I wanted to rewrite the code tags in order to get syntax highlighting. Let’s see how that went:

[copy-paste-demages-sources]

Which leads us to another problem – undo. It completely breaks sources that were OK before. And not just the one nearby, but others too:

[undo-damages-other-sources]

This renders undo an extremely unreliable feature. There is no way to undo this mess after it happens. You have to fix it manually or reload the page to get the last working draft (if you made changes since, that’s your loss). It will tell you about a newer version and ask you to restore it, which you – of course – don’t want now. To get rid of this, click Save Draft (and after that, reload). Trash probably removes the whole post, I guess, so there is no way to discard just the current changes.

Any way to copy stuff?

As I mentioned, you can try it in HTML mode – and in this case that seems to be a working solution. But how about Visual mode? If you copy/paste things a lot – funny that I do, because I fight copy/paste coding mercilessly – you might have noticed that the source program often affects how things are pasted. In this case it results in a mess whether you use Ctrl+V or add Shift (which normally works as paste-without-formatting; not sure what it’s supposed to do in the Beep-beep-poo editor though).

But what if we drag the text through something that makes it plain text? So I tried to paste it into a notepad first (Notepad2 here, but it doesn’t make a difference) and then copy it from there:

[paste-via-notepad]

A bit longer “video”, and again not without a twist. We have to Ctrl+Shift+V from the plain-text editor, otherwise the pasted text contains &nbsp; entities… which may be harmless (I haven’t tried), but I simply don’t want them there. I expect some “canonical” representation of what I work on in my editor, not low-level HTML stuff.

Undo misbehaves here as well – after a simple paste only the pasted code is messed up; after Undo, other code blocks are affected too.


Tags

This again seems like a win of Web 2.0 over reason. Before, I could copy/paste all my tags separated by commas – at once. Not anymore. If you do so, they all form one giant tag. You simply have to write (or paste) them one by one. While writing, a comma works as a separator and instantly turns the word into a tag bubble. But goodbye to “batch” copy/paste.

Final words

The biggest trouble is how brittle it all is. How easy it is to damage all the nicely formatted source code with a single Undo. Would you seriously consider writing in such an editor? I merely tip-toe around it, paste my post, fix the mess and Publish as soon as I can. It is possible to edit simple typos later. But anything beyond that…? Well, my trust is not there.

If WordPress developers ever change the editor again, I hope they will go for usability that actually includes copy/paste as well. I don’t know, maybe I’m just a weirdo, “lucky” to find all these corner cases (it’s not only WordPress after all) – maybe I’d make a great tester. But I seriously don’t think I’m pushing anything to its limits.

After all – I just copy/pasted in this case! :-)

BTW: My previous post was copy/pasted into the Visual editor to preserve all the formatting and links, and then I switched to HTML where I easily pasted all the code blocks (one by one). That seems to be my workflow of choice for some time. The Visual editor and code simply don’t go together.

Expression evaluation in Java (3)

In the first part of the series we evaluated expressions to get the result, in the second part we parsed the expression with our own grammar. Today we will expand on the topic and introduce more types and more operations. Variables are left for the next installment, as this part will be long enough without them anyway.

As always, the complete example is available on GitHub and its structure is pretty similar to the first simple version of our ANTLR-based expression evaluator. And what will we do today?

  • We will expand our numbers from integers to floating point (implemented with BigDecimals).
  • We will add strings and booleans types too, along with relational operators (comparing).
  • Relational operators can be written as >= but also as GE, which is extremely handy when expressions are part of an XML configuration. In addition, keywords will be case-insensitive.

We will introduce the grammar changes from the end of the file to the beginning – you can also download our new grammar in its entirety.

Case-insensitive keywords

It would be cool if there were some easy way to tell ANTLR that a particular keyword (a string literal in the grammar) is case-insensitive – many languages work this way. But there is no magical switch, so we have to help ourselves with tiny named pieces of token rules called fragments. The following part goes at the very end of the grammar file.

fragment DIGIT : [0-9];

fragment A : [aA];
fragment B : [bB];
fragment C : [cC];
fragment D : [dD];
fragment E : [eE];
fragment F : [fF];
fragment G : [gG];
fragment H : [hH];
fragment I : [iI];
fragment J : [jJ];
fragment K : [kK];
fragment L : [lL];
fragment M : [mM];
fragment N : [nN];
fragment O : [oO];
fragment P : [pP];
fragment Q : [qQ];
fragment R : [rR];
fragment S : [sS];
fragment T : [tT];
fragment U : [uU];
fragment V : [vV];
fragment W : [wW];
fragment X : [xX];
fragment Y : [yY];
fragment Z : [zZ];

Now when we want a case-insensitive keyword, we write O R instead of ‘or’, like this:

OP_OR: O R | '||';

But we will get to operators later, let’s do the literals and company first.

Literals and whitespaces

We will support boolean literals both as a word and as a single letter (blame my recent experience with the R language). Number literals must cover floating-point numbers too. And finally there are strings as well. Whitespace handling is unchanged:

BOOLEAN_LITERAL : T R U E | T
    | F A L S E | F
    ;

NUMERIC_LITERAL : DIGIT+ ( '.' DIGIT* )? ( E [-+]? DIGIT+ )?
    | '.' DIGIT+ ( E [-+]? DIGIT+ )?
    ;

STRING_LITERAL : '\'' ( ~'\'' | '\'\'' )* '\'';

WS: [ \t\r\n]+ -> skip;


Operators

Nothing special here, we just add more of them.

OP_LT: L T | '<';
OP_GT: G T | '>';
OP_LE: L E | '<=';
OP_GE: G E | '>=';
OP_EQ: E Q | '=' '='?;
OP_NE: N E | N E Q | '!=' | '<>';
OP_AND: A N D | '&&';
OP_OR: O R | '||';
OP_NOT: N O T | '!';
OP_ADD: '+';
OP_SUB: '-';
OP_MUL: '*';
OP_DIV: '/';
OP_MOD: '%';

Equality can be written as in SQL (=) or as in Java (==), because there is no assignment statement in our grammar. All relational and logical operators have both symbolic and keyword forms. If you miss XOR, you can add it yourself, of course.

And the expression rule…

Finally we got to the expression rule, which got a bit richer:

result: expr;

expr: STRING_LITERAL # stringLiteral
    | BOOLEAN_LITERAL # booleanLiteral
    | NUMERIC_LITERAL # numericLiteral
    | op=('-' | '+') expr # unarySign
    | expr op=(OP_MUL | OP_DIV | OP_MOD) expr # arithmeticOp
    | expr op=(OP_ADD | OP_SUB) expr # arithmeticOp
    | expr op=(OP_LT | OP_GT | OP_EQ | OP_NE | OP_LE | OP_GE) expr # comparisonOp
    | OP_NOT expr # logicNot
    | expr op=(OP_AND | OP_OR) expr # logicOp
    | '(' expr ')' # parens
    ;

You may notice one additional rule – result. We will use this for a single special reason covered later. Now is the time to generate the classes from the grammar and then to reimplement the visitor class.

BTW: Our new version of the calculator visitor will not extend ExprBaseVisitor<Integer> anymore, as the result may be of any supported type – String, BigDecimal or Boolean – and we don’t know in advance which one. So we simply don’t state the type parameter at all.

Implementing literals

Before we implement anything, we should have some expectations – and that means tests. My test class is not a perfect one-case-per-test affair, but it does the job.

    @Test
    public void booleanLiterals() {
        assertEquals(expr("t"), true);
        assertEquals(expr("True"), true);
        assertEquals(expr("f"), false);
        assertEquals(expr("faLSE"), false);
    }

    @Test
    public void stringLiteral() {
        assertEquals(expr("''"), "");
        assertEquals(expr("''''"), "'");
        assertEquals(expr("'something'"), "something");
    }

    @Test
    public void numberLiterals() {
        assertEquals(expr("5"), new BigDecimal("5"));
        assertEquals(expr("10.35"), new BigDecimal("10.35"));
    }

The expr method in the test is still implemented as before. Let’s focus on the visitor implementation now. The parts we need for this test to pass are here:

    @Override
    public String visitStringLiteral(StringLiteralContext ctx) {
        String text = ctx.STRING_LITERAL().getText();
        // strip the surrounding quotes and unescape doubled single quotes
        text = text.substring(1, text.length() - 1)
            .replaceAll("''", "'");
        return text;
    }

    @Override
    public Boolean visitBooleanLiteral(BooleanLiteralContext ctx) {
        return ctx.BOOLEAN_LITERAL().getText().toLowerCase().charAt(0) == 't';
    }

    @Override
    public BigDecimal visitNumericLiteral(NumericLiteralContext ctx) {
        String text = ctx.NUMERIC_LITERAL().getText();
        return stringToNumber(text);
    }

    private BigDecimal stringToNumber(String text) {
        BigDecimal bigDecimal = new BigDecimal(text);

        return bigDecimal.scale() < 0
            ? bigDecimal.setScale(0, roundingMode)
            : bigDecimal;
    }

Arithmetic operations

Now we will implement arithmetic with BigDecimals, including unary signs and parentheses.

    @Test
    public void arithmeticTest() {
        assertEquals(expr("5+5.1"), new BigDecimal("10.1"));
        assertEquals(expr("5-5.1"), new BigDecimal("-0.1"));
        assertEquals(expr("0.3*0.1"), new BigDecimal("0.03"));
        assertEquals(expr("0.33/0.1"), new BigDecimal("3.3"));
        assertEquals(expr("1/3"), new BigDecimal("0.333333333333333"));
        assertEquals(expr("10%3"), new BigDecimal("1"));
    }

    @Test
    public void unarySignTest() {
        assertEquals(expr("-5"), new BigDecimal("-5"));
        assertEquals(expr("-+-5"), new BigDecimal("5"));
        assertEquals(expr("-(3+5)"), new BigDecimal("-8"));
    }

The implementation in the visitor respects our defined maximal scale, which is very important for cases like 1/3. Without a scale and rounding mode set, such a division would throw an ArithmeticException.

    public static final int DEFAULT_MAX_SCALE = 15;
    private int maxScale = DEFAULT_MAX_SCALE;
    private int roundingMode = BigDecimal.ROUND_HALF_UP;

    @Override
    public BigDecimal visitArithmeticOp(ExprParser.ArithmeticOpContext ctx) {
        BigDecimal left = (BigDecimal) visit(ctx.expr(0));
        BigDecimal right = (BigDecimal) visit(ctx.expr(1));
        return bigDecimalArithmetic(ctx, left, right);
    }

    private BigDecimal bigDecimalArithmetic(ExprParser.ArithmeticOpContext ctx, BigDecimal left, BigDecimal right) {
        switch (ctx.op.getType()) {
            case OP_ADD:
                return left.add(right);
            case OP_SUB:
                return left.subtract(right);
            case OP_MUL:
                return left.multiply(right);
            case OP_DIV:
                return left.divide(right, maxScale, roundingMode).stripTrailingZeros();
            case OP_MOD:
                return left.remainder(right);
            default:
                throw new IllegalStateException("Unknown operator " + ctx.op);
        }
    }

    @Override
    public BigDecimal visitUnarySign(UnarySignContext ctx) {
        BigDecimal result = (BigDecimal) visit(ctx.expr());
        boolean unaryMinus = ctx.op.getText().equals("-");
        return unaryMinus
            ? result.negate()
            : result;
    }

    @Override
    public Object visitParens(ParensContext ctx) {
        return visit(ctx.expr());
    }
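The 1/3 case is easy to demonstrate in isolation – a minimal standalone sketch of why the explicit scale matters (plain JDK BigDecimal, nothing from the project):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DivisionScaleDemo {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal("3");
        try {
            one.divide(three); // 1/3 has a non-terminating decimal expansion
        } catch (ArithmeticException e) {
            System.out.println("unbounded divide failed: " + e.getMessage());
        }
        // With an explicit scale and rounding mode the division succeeds:
        System.out.println(one.divide(three, 15, RoundingMode.HALF_UP)); // 0.333333333333333
    }
}
```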

We may appreciate a scale of 15 for interim results, but maybe we want a lower scale for the final result. Let’s do that now:

Result rule with different scale

Let’s say we require just a scale of 6 for final results. We will implement the result rule for that:

    public static final int DEFAULT_MAX_RESULT_SCALE = 6;
    private int maxResultScale = DEFAULT_MAX_RESULT_SCALE;

    /** Maximum BigDecimal scale used during computations. */
    public ExpressionCalculatorVisitor maxScale(int maxScale) {
        this.maxScale = maxScale;
        return this;
    }

    /** Maximum BigDecimal scale for the result. */
    public ExpressionCalculatorVisitor maxResultScale(int maxResultScale) {
        this.maxResultScale = maxResultScale;
        return this;
    }

    @Override
    public Object visitResult(ExprParser.ResultContext ctx) {
        Object result = visit(ctx.expr());
        if (result instanceof BigDecimal) {
            BigDecimal bdResult = (BigDecimal) result;
            if (bdResult.scale() > maxResultScale) {
                result = bdResult.setScale(maxResultScale, roundingMode);
            }
        }
        return result;
    }

We also introduced two methods for setting the maximal interim and result scale. Both return this, so they can be chained right after the constructor of ExpressionCalculatorVisitor. One last change is required in ExpressionUtils – instead of calling the expr rule on the parser we need to call the result rule. Otherwise the util class looks exactly as before.

Of course we have to fix the test and for 1/3 expect just 0.333333 instead of 15 digits like before.

Logical operations

After previous problems this is a piece of cake. First the test:

    @Test
    public void logicalOperatorTest() {
        assertEquals(expr("F && F"), false);
        assertEquals(expr("F && T"), false);
        assertEquals(expr("T and F"), false);
        assertEquals(expr("T AND T"), true);
        assertEquals(expr("F || F"), false);
        assertEquals(expr("F || T"), true);
        assertEquals(expr("T or F"), true);
        assertEquals(expr("T OR T"), true);
        assertEquals(expr("!T"), false);
        assertEquals(expr("not T"), false);
        assertEquals(expr("!f"), true);
    }

And the visitor implementation:

    @Override
    public Boolean visitLogicOp(ExprParser.LogicOpContext ctx) {
        boolean left = (boolean) visit(ctx.expr(0));

        // the right side is visited lazily, so && and || short-circuit
        switch (ctx.op.getType()) {
            case OP_AND:
                return left && booleanRightSide(ctx);
            case OP_OR:
                return left || booleanRightSide(ctx);
            default:
                throw new IllegalStateException("Unknown operator " + ctx.op);
        }
    }

    private boolean booleanRightSide(ExprParser.LogicOpContext ctx) {
        return (boolean) visit(ctx.expr(1));
    }

    @Override
    public Boolean visitLogicNot(ExprParser.LogicNotContext ctx) {
        return !(Boolean) visit(ctx.expr());
    }

Relational operators

Let’s do some comparisons and (non) equalities. Test first:

    @Test
    public void relationalOperatorTest() {
        assertEquals(expr("1 > 0.5"), true);
        assertEquals(expr("1 > 1"), false);
        assertEquals(expr("1 >= 0.5"), true);
        assertEquals(expr("1 >= 1"), true);
        assertEquals(expr("5 == 5"), true);
        assertEquals(expr("5 != 5"), false);
        assertEquals(expr("'a' > 'b'"), false);
        assertEquals(expr("'a' >= 'b'"), false);
        assertEquals(expr("'a' < 'b'"), true);
        assertEquals(expr("'a' <= 'b'"), true);
        assertEquals(expr("true == true"), true);
        assertEquals(expr("true == f"), false);
        assertEquals(expr("true eq t"), true);
    }

Again, I realize that these should be many separate tests and the coverage of cases is also not that great, but let’s move on to the implementation:

    @Override
    public Boolean visitComparisonOp(ExprParser.ComparisonOpContext ctx) {
        Comparable left = (Comparable) visit(ctx.expr(0));
        Comparable right = (Comparable) visit(ctx.expr(1));
        int operator = ctx.op.getType();
        if (left == null || right == null) {
            return left == null && right == null && operator == OP_EQ
                || (left != null || right != null) && operator == OP_NE;
        }

        //noinspection unchecked
        int comp = left.compareTo(right);
        switch (operator) {
            case OP_EQ:
                return comp == 0;
            case OP_NE:
                return comp != 0;
            case OP_GT:
                return comp > 0;
            case OP_LT:
                return comp < 0;
            case OP_GE:
                return comp >= 0;
            case OP_LE:
                return comp <= 0;
            default:
                throw new IllegalStateException("Unknown operator " + ctx.op);
        }
    }

Nothing special here – we utilize the fact that Strings, Booleans and BigDecimals are all comparable.
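That fact is easy to verify with the plain JDK types (a standalone sanity check, not code from the project):

```java
import java.math.BigDecimal;

public class ComparableTypesDemo {
    public static void main(String[] args) {
        // One compareTo call covers all three supported result types.
        System.out.println("a".compareTo("b") < 0);                    // strings: lexicographic
        System.out.println(Boolean.FALSE.compareTo(Boolean.TRUE) < 0); // booleans: false < true
        System.out.println(new BigDecimal("0.5")
            .compareTo(new BigDecimal("1")) < 0);                      // numbers: by value
        // Bonus: compareTo ignores scale, unlike BigDecimal.equals():
        System.out.println(new BigDecimal("1.0").compareTo(new BigDecimal("1")) == 0);
    }
}
```

The last line is also why the visitor uses compareTo for OP_EQ instead of equals – 1.0 and 1 should be equal in an expression language.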


What did we see and what more can we do? Given the sheer scope of the changes, we postponed variables to the next part of the series. Of course, variables (and functions!) are very important for expression evaluation – otherwise we’re talking about a constant result for each expression anyway, right?

We have created a flexible grammar that is kind of a mix between SQL and Java. You may not like having so many alternatives for all the operators, but in our context it doesn’t pose any problem. It might have – in the context of a general programming language (e.g. = vs == operators).

Another thing is that numbers are BigDecimals – always. This is kinda Groovy-like, where 3 / 2 is not 1 as a Java programmer would expect, and this division is also much slower than integer division. We may introduce integers and support smart division based on the context (like Java does after all, except that its floating point is IEEE 754 and not BigDecimal-based). But not now.
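A quick side-by-side of the two division semantics (plain Java, just to illustrate the point):

```java
import java.math.BigDecimal;

public class DivisionSemantics {
    public static void main(String[] args) {
        System.out.println(3 / 2);             // Java int division truncates: 1
        System.out.println(new BigDecimal("3")
            .divide(new BigDecimal("2")));     // exact decimal result: 1.5
    }
}
```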

Now you can download the complete example – and if you don’t know how to download part of the repo, you may try this command:

svn export https://github.com/virgo47/litterbin/trunk/demos/antlr-expr/src/main/java/expr2/

Tests are in src/test of course. See you next with variables and maybe even functions.

Exploring the cloud with AWS Free Tier (1)

This will not be a typical blog post, but rather some kind of diary. Of course, I’ll try not to write complete nonsense, but as this will be a rather exploratory effort, it may happen. Sorry. My interest in the cloud is personally-professional. I may use it in my line of work, but I wanted to “touch” the cloud myself to get a feeling for it. Reading is cool, but it’s just not enough. Everybody is talking about it and I know I’m way behind – but lo, here I am, going to experiment with the cloud as a curious developer.

My personal needs

My goals? I want to run some Docker containers in the cloud, maybe connect them somehow – so it will be about learning Docker better as well. Last year was very fast in that field (it was only the second year for Docker after all!) and there’s a lot I missed since I covered some basics of the technology in the summer 2014.

And I also wanted to know what it is like to manage my own cloud – even a small one, preferably free. I didn’t check many players. It is difficult to orient oneself among them, and I’m not going to enter my credit card details at some company unknown to me. Talking about Docker, I needed either direct container hosting (not sure if/how that is provided) or IaaS – that is, some Linux machine where I’d run Docker myself.

Finding my provider

Originally I wanted to test Google’s cloud, but that one was not available for personal use – which surprised me a bit. I knew Rackspace, but wasn’t able to find a reasonable offer – something under 10 bucks a month. So I tried Amazon’s AWS – I like Amazon after all. :-) You may also check this article – 10 IaaS providers who provide free cloud resources.

What attracted me to AWS the most is their Free Tier offering that can be used for a whole year! I don’t need much power, I don’t need excessive connectivity, I just need some quiet developer’s machine. A year is an incredible option here – many times I start a trial and don’t get enough “real CPU/brain time” to play with it because of other obligations. But a year?! I’ll definitely be able to use AWS at least a bit during that time. A longer span also gives you a better perspective.

Finally, AWS has a lot of documentation, instructions, videos, help pages, etc. I also quickly got the feeling that they try hard to give you all the tools to know how much you’d pay (which is probably a feature of any good cloud) – even if you are using the Free Tier. We’ll get to that later.

First steps with AWS

So I registered at AWS – it took a couple of straightforward steps and one phone call they made so I could enter a PIN. I chose Basic support (no change to my zero price), confirmed here, confirmed there – and suddenly I was introduced to my AWS Console with tons of icons.

What next? Let’s stick to the plan and Google something about Docker and AWS. Actually, I did this before I even started with AWS, of course. This page lists all the necessary steps. So we need some EC2 Linux instance – t2.micro with the Amazon Linux AMI is available in the Free Tier. Let’s launch it! Or, because I’m the manual kind of guy (kinda rare, reportedly), let’s check how to do it or see the video. It doesn’t seem that difficult after all.

I chose my closest region in the AWS Console (Frankfurt, or eu-central) and created my instance. I clicked through all the Next buttons just to walk and read through the setup, but I didn’t change anything. In the course of this I created my key pair, downloaded the private key and – my instance was ready! Even if you don’t name it right away, it’s easy to do later in the table of instances when you click into the cell in the Name column:

[aws-ec2-instances]

In general – I have to say I’m very satisfied with the overall look-and-feel of the AWS web. It provides tons of options and information, but so far I’ve always found what I wanted. You either look around the screen and see what you want, or click in the top menu (e.g. Billing & Cost Management is under your login name on the right) – or Google it really quickly.

Let’s connect to my Linux box!

If you know how to use a private key with SSH, this will be extra easy for you. You just need to know which user to log in as. Again – I googled – and you can find either this article from their Getting Started section, or another article from the Instance Lifecycle section.

Whatever method or client you used to SSH into your instance (I used command line ssh that comes with git bash), you’ll find yourself in an ever familiar CLI. And because we wanted to play with Docker, we – of course – follow another help page. This time called Docker Basics.

Well, the docker info command ran, everything seems OK – we will continue some other time when the night is younger.

Next morning first impressions…

I already mentioned that you have full control over your spending, and the AWS guys really offer you all the information needed. For example, when I woke up today after setting up and launching my EC2 t2.micro instance late last night, I saw this report in my Billing & Cost Management Dashboard:

[aws-free-tier-top-services]

Sure, I don’t know exactly how to read it yet – or how exactly it’s calculated, because the instance must have been running for more than 5 hours. But at least I know where I am with my “spending”.

…and evening observations

I definitely should read up on how those instances are billed – and when. When I logged out from the instance I saw no change in the report (not even after hours), but when I logged in later, it suddenly jumped by a couple of hours. This is probably when the billing update is triggered. Talking about a couple of hours, this is no problem at all.

Another thing to remember is that when you restart the instance it takes an instance-hour right away. That is, every started hour is billed. We’re still talking about prices like 0.0something for a t2.micro hour, so again, not a real deal-breaker. But with the 750-hour limit of the Free Tier, one should not try to shutdown/restart the instance 750 times in a day. :-)

The pricing information is detailed but clear and it also plainly states: “Pricing is per instance-hour consumed for each instance, from the time an instance is launched until it is terminated or stopped. Each partial instance-hour consumed will be billed as a full hour.”
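The quoted rule boils down to a simple integer round-up. Just to make the arithmetic explicit, here is a tiny hypothetical helper – nothing AWS provides, only my illustration of the billing rule:

```java
public class InstanceHours {

    // "Each partial instance-hour consumed will be billed as a full hour."
    public static long billedHours(long runtimeSeconds) {
        if (runtimeSeconds <= 0) {
            return 0;
        }
        return (runtimeSeconds + 3599) / 3600; // integer round-up to whole hours
    }
}
```

So a 10-second experiment and a 59-minute one cost the same single instance-hour – which is exactly why 750 restarts in a day would eat the whole monthly limit.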

What next?

One day is not enough time to draw any conclusions, but so far I’m extremely satisfied with AWS. Next we will try to deploy some Docker images to it.

Expression evaluation in Java (2)

Previously we considered our options for evaluating expressions in Java. Let’s recap them quickly:

  • Evaluate the string expression in one go. Find a library or use the built-in ScriptEngine (my favourite choice for most cases). Various universal expression language libraries may be more useful if you use beans and/or call Java methods in the expressions.
  • Parse the string into some kind of AST (or ParseTree in ANTLR4).
  • Use the tree as you wish. Interpret the expression, generate different representations of the same expression, etc.

In the first post we covered the first option. Now we’re going to look into the other two.

Parser with interpreter

As I mentioned, there are many parsers out there, libraries, blog posts, etc. I’ll use ANTLR4 to implement my grammar and expression evaluator. ANTLR seems to be the state-of-the-art technology and ANTLR v4 takes grammar definition, and its separation from the actual application, even further. There is also a great book – The Definitive ANTLR 4 Reference (written by Terence Parr, ANTLR’s author) – which I highly recommend if you have even a semi-serious ANTLR application on your mind.

Why do we need a grammar and some custom code instead of a simple expression evaluator? Because we want not only to evaluate the expression, but also to transform it and build other representations from it. Our expressions are like SQL WHERE clauses (the part after WHERE) and we want to generate Querydsl Expressions from them. With expression evaluation we can check whether some object meets some criteria; with a Querydsl query we can find all such objects in a database table – and both are important for us. But for now we will focus just on evaluation.

A couple of other disclaimers:

  • I’ll not explain the ANTLR grammar syntax beyond the most basic points or dig deep into details of ANTLR that are covered in many other tutorials – or in the aforementioned book.
  • I’ll not cover the build of the project either, although of course there are plugins that can generate sources from an ANTLR grammar as part of the build. I’ll just mention the command-line and IDEA ways to do it.

Our first grammar will be very simple; we will support a couple of operations, integer literals and parentheses, but no variables – just to warm up. The whole example is on GitHub.

Let there be a grammar!

An ANTLR grammar file specifies grammatical rules and lexical rules. Lexical rules recognize tokens; grammatical rules then build upon these tokens. Our simple grammar file looks like this:

/*
Use IDEA with ANTLR v4 plugin to generate classes:
- first set-up (right-click on the file, Configure ANTLR...)
- setup output directory to match this module's src/main/java
- set package for generated classes: expr1.grammar
- check "generate parse tree visitor", uncheck listener above (not necessary)

Command line when using downloaded "Complete ANTLR 4.5.1 Java binaries jar" (version may vary):
- know where the JAR is downloaded
- run the command in the expr1/grammar package directory (where Expr.g4 is)
- java -jar ~/Downloads/antlr-4.5.1-complete.jar -o . -package expr1.grammar -visitor -no-listener Expr.g4
- -o . means this directory (can be omitted), in IDEA it somehow adds package to the output directory
  but this command does not do that, it merely sets package in sources

*.tokens files are NOT necessary for runtime, they are not added to version control
*/
grammar Expr;

expr: expr op=(OP_MUL | OP_DIV) expr # arithmetic
    | expr op=(OP_ADD | OP_SUB) expr # arithmetic
    | INT # int
    | '(' expr ')' # parens
    ;

OP_ADD: '+';
OP_SUB: '-';
OP_MUL: '*';
OP_DIV: '/';

INT: [0-9]+;
WS: [ \t\r\n]+ -> skip;

Lexical rules start with an uppercase letter, grammatical ones with a lowercase one. Later we will be able to hook onto the grammatical rules, or even onto their alternatives, as in this case. Alternatives are marked by # followed by the alternative name and we will be able to write code for them separately. There is much more about this in the book I mentioned.

ANTLR4 grammars are super intuitive – just by mentioning OP_MUL/DIV before OP_ADD/SUB we defined the operator precedence. Still, we want them all processed in the same method, hence the same alternative name. For now just remember to recall these ideas when you see our Visitor implementation. :-)

Generating helper classes

We need to ask ANTLR to generate some support classes for us, so we can build our application on top of them. The comment at the beginning of the grammar file is for me and my colleagues, so we know how to do that. Of course, this can be part of the build and the classes would go somewhere under target/generated-sources. Please generate these classes now, we will need them in the next steps.

ANTLR allows us to separate the grammar from the concrete application nicely. When we look at it we see inherent limitations (like no way to enter floating-point numbers), but it doesn’t say whether a number will be Integer or BigInteger. It also doesn’t say whether * will really be multiplication, although we would be shooting ourselves in the foot repurposing this operator. But it can be done. Grammar (how it looks) and application (what it does) are cleanly separated.
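By the way, to appreciate what ANTLR generates for us, here is what a hand-written equivalent of this little grammar could look like – a hypothetical recursive-descent evaluator (the class and method names are mine; nothing here is ANTLR code). The two arithmetic alternatives become two methods, and the precedence we got “for free” from rule ordering has to be encoded manually in the call chain:

```java
public class HandMadeCalc {
    private final String input;
    private int pos;

    public HandMadeCalc(String expression) {
        // the equivalent of the WS lexer rule: whitespace is simply skipped
        this.input = expression.replaceAll("\\s+", "");
    }

    public static int eval(String expression) {
        return new HandMadeCalc(expression).expr();
    }

    // expr: term (('+' | '-') term)*  -- lower precedence, binds loosely
    private int expr() {
        int left = term();
        while (pos < input.length() && (peek() == '+' || peek() == '-')) {
            char op = input.charAt(pos++);
            int right = term();
            left = (op == '+') ? left + right : left - right;
        }
        return left;
    }

    // term: factor (('*' | '/') factor)*  -- higher precedence, binds tighter
    private int term() {
        int left = factor();
        while (pos < input.length() && (peek() == '*' || peek() == '/')) {
            char op = input.charAt(pos++);
            int right = factor();
            left = (op == '*') ? left * right : left / right;
        }
        return left;
    }

    // factor: INT | '(' expr ')'
    private int factor() {
        if (peek() == '(') {
            pos++; // consume '('
            int result = expr();
            pos++; // consume ')'
            return result;
        }
        int start = pos;
        while (pos < input.length() && Character.isDigit(peek())) {
            pos++;
        }
        return Integer.parseInt(input.substring(start, pos));
    }

    private char peek() {
        return input.charAt(pos);
    }
}
```

It works for our toy grammar, but every grammar change means surgery in this code – exactly the coupling the grammar file avoids.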

Parse tree

ANTLR offers two ways to work with the parsed “code” based on our grammar. You can implement a ParseTreeListener, which is event-based, or a ParseTreeVisitor, where the flow control during processing is more explicit. If you know SAX and StAX (XML push parser vs pull parser), it is something similar. In any case we need a ParseTree first, so let’s get it:

public static ParseTree createParseTree(String expression) {
    ANTLRInputStream input = new ANTLRInputStream(expression);
    ExprLexer lexer = new ExprLexer(input);
    CommonTokenStream tokens = new CommonTokenStream(lexer);
    ExprParser parser = new ExprParser(tokens);
    return parser.expr();
}

ParseTree parseTree = createParseTree("5+3");

On the second line we use our lexer (tokenizer, if you will) that ANTLR generated for us from the grammar. We feed it with a CharStream – implemented by ANTLRInputStream. On the next two lines we do the same thing with the tokenized input, utilizing the generated class ExprParser. We call the expr() method on it – this matches the first rule from our grammar. It returns a generated implementation of a ParseTree, but we will stick with the most abstract interface as we don’t need anything specific.

Visitor calculating the expression

On the last line of the previous listing we parse some expression and now… we need to implement the application logic. In our case we will use a Visitor. ANTLR generated the ExprVisitor interface for us, but even better, it also provides a skeleton implementation ExprBaseVisitor that we can extend without the need to implement everything. Let’s extend it and call it ExpressionCalculatorVisitor.

package expr1;

import expr1.grammar.ExprBaseVisitor;
import expr1.grammar.ExprParser;

public class ExpressionCalculatorVisitor extends ExprBaseVisitor<Integer> {

    @Override
    public Integer visitInt(ExprParser.IntContext ctx) {
        return Integer.valueOf(ctx.INT().getText());
    }

    @Override
    public Integer visitParens(ExprParser.ParensContext ctx) {
        return visit(ctx.expr());
    }

    @Override
    public Integer visitArithmetic(ExprParser.ArithmeticContext ctx) {
        int left = visit(ctx.expr(0));
        int right = visit(ctx.expr(1));

        switch (ctx.op.getType()) {
            case ExprParser.OP_ADD:
                return left + right;
            case ExprParser.OP_SUB:
                return left - right;
            case ExprParser.OP_MUL:
                return left * right;
            case ExprParser.OP_DIV:
                return left / right;
            default:
                throw new IllegalStateException("Unknown operator " + ctx.op);
        }
    }
}

Our visitor returns Integer, because it can work with a single type anyway. Anything more complex would probably fall back to Object, or use some value wrapper. This example should return 8 for our 5+3 expression. But you never know until you test it, right?

Writing test

Testing the expression calculator was super easy and lightweight compared to other tests in our project, many of which start the whole Spring context. I’ll use TestNG, but JUnit would work in the same way. Here is our Expr1Test:

package expr1;

import static org.testng.Assert.assertEquals;

import org.antlr.v4.runtime.tree.ParseTree;
import org.testng.annotations.Test;

@Test
public class Expr1Test {

	public void integerLiteralStaysInteger() {
		assertEquals(expr("5"), 5);
	}

	public void plusAddsNumbers() {
		assertEquals(expr("5+4"), 9);
	}

	public void minusCanProduceNegativeNumber() {
		assertEquals(expr("5-8"), -3);
	}

	public void whitespacesAreIgnored() {
		assertEquals(expr("5\n- 8\t + 3"), 0);
	}

	public void multiplyPrecedesPlus() {
		assertEquals(expr("2+3*3"), 11);
		assertEquals(expr("2*3+3"), 9);
	}

	public void parenthesisChangePrecedence() {
		assertEquals(expr("(2+3)*3"), 15);
		assertEquals(expr("2*(3+3)"), 12);
	}

	@Test(expectedExceptions = ExpressionException.class,
		expectedExceptionsMessageRegExp = "no viable alternative at input '-'")
	public void cannotEnterNegativeNumber() {
		expr("-5");
	}

	private Object expr(String expression) {
		ParseTree parseTree = ExpressionUtils.createParseTree(expression);
		return new ExpressionCalculatorVisitor()
			.visit(parseTree);
	}
}
I used a shortcut method called expr to parse and evaluate my expression – and just compared the results. But with the code so far there is one failing test – cannotEnterNegativeNumber.

Custom error handling

Brushing aside the silliness of the idea that we can’t enter a negative number (especially when it can be a valid result :-)), the problem here is that by default the parser reports the error but does not throw any exception. Let’s add this feature now – here is the complete listing of ExpressionUtils:

package expr1;

import expr1.grammar.ExprLexer;
import expr1.grammar.ExprParser;
import org.antlr.v4.runtime.ANTLRErrorListener;
import org.antlr.v4.runtime.ANTLRInputStream;
import org.antlr.v4.runtime.BaseErrorListener;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.RecognitionException;
import org.antlr.v4.runtime.Recognizer;
import org.antlr.v4.runtime.tree.ParseTree;

public class ExpressionUtils {

	private static final ANTLRErrorListener ERROR_LISTENER = new ExceptionThrowingErrorListener();

	public static ParseTree createParseTree(String expression) {
		ANTLRInputStream input = new ANTLRInputStream(expression);
		ExprLexer lexer = new ExprLexer(input);
		lexer.removeErrorListeners();
		lexer.addErrorListener(ERROR_LISTENER);
		CommonTokenStream tokens = new CommonTokenStream(lexer);
		ExprParser parser = new ExprParser(tokens);
		parser.removeErrorListeners();
		parser.addErrorListener(ERROR_LISTENER);
		return parser.expr();
	}

	public static class ExceptionThrowingErrorListener extends BaseErrorListener {
		@Override
		public void syntaxError(Recognizer<?, ?> recognizer, Object offendingSymbol,
			int line, int charPositionInLine, String msg, RecognitionException e)
		{
			throw new ExpressionException(msg);
		}
	}
}
What we did here is that we removed the original listener (which would otherwise still print the error to stderr) and added a listener that throws an exception with the same message instead. The exception implementation is on GitHub and isn’t worth the listing.


Of course, we merely scratched the surface of ANTLR and custom language implementations, but we managed to do so much with so little code! You saw rules and their alternatives in action and how they manifested in the generated visitor class.

Now you can probably imagine how to extend the idea – and we will do exactly that in my next post. We will add variables, because without them expression evaluation isn’t that useful for most customization purposes. We will also add more types and logical operations. So it will be fun, I promise.

Expression evaluation in Java (1)

There are times when you need to evaluate some expression in your program. Sometimes you can create something that is not text/string based, that is you construct the expression in a different way – typically so it is easier to validate, evaluate, etc. Sometimes, nothing beats the textual representation. It’s easy and quick to write, and I prefer it any time to various click-based solutions (typically based on some tree representation).

But the expression is a tree after all and we need to parse it somehow. Or do we? In my first sentence I said I needed to evaluate the expression. Can we forgo the parsing altogether?

This will be just the first of (probably) two posts, and after some general thoughts I will focus on direct string expression evaluation. Parsing and what comes after it will follow in another post.

Parsing the grammar or what?

If you know tools like ANTLR, you may be asking what I am going to parse here. Is it a grammar written in some meta-language? No – I’m not interested in the grammar itself now at all. The grammar is given and I want to parse an expression written in that grammar. If we used ANTLR, we’d be working with classes generated from the grammar.

Also, just to be sure, anytime I say parse/parsing I mean it in a broader sense – both the lexer and parser working together (except for cases where I explicitly mention tokenization).

Evaluate or parse as well?

Unless you internally represent the expression as a tree already, you will need to parse it from the string/stream – or something has to do it for you. The next question is whether you want to work with the expression tree and interpret it yourself, or whether you are really interested only in the result of the expression evaluation.

Evaluation in any case means that I can feed the same expression (whatever its representation is) with various inputs – which is valuable when the expression contains variables. Without variables repeated evaluation doesn’t make much sense anyway. For instance, in the case of Java’s ScriptEngine, this is what Bindings do.

So what are our options?

  • Expression evaluation from string to result in one go. I’m not interested in a tree or any internals, I just want results. The string input may be compiled for efficiency if it is to be evaluated many times.
  • I have a tree representation of the particular expression based on well-known types (custom or from some API, it doesn’t matter) and I interpret it. This implies that the tree construction is supported by UI, the tree is stored in a meaningful way, etc.
    I don’t see any easy way to compile it here, unless we produce a string representation first and then follow the previous bullet. I’d go this path only if compilation provided a big performance win (after thorough benchmarks).
  • I have string input, but I want to parse it first and then follow the previous bullet. There are many parsers out there, many blog posts about the topic, etc. However, I’d just take ANTLR and use that. You generate the classes from a grammar and you’re done; then it’s just pure Java with a bit of antlr-runtime. It’s the state-of-the-art technology; reinventing the wheel here is pointless unless you have some extraordinary requirements.

Obviously – the first option hides both parsing and interpreting (or compiling) the parsed result. What it doesn’t provide is any introspection or any way to transform one representation of the expression into another (except for the most simple string replacements).

Another thing to consider is how you represent the expression:

  • String in a specific grammar – this is most straightforward. When I write “1+2” you can imagine the semantics of such a grammar right away. You can also guess the type of the returned value (some number), just as you can for an expression like “a>30” (boolean). It’s easy to store in a DB, a file, anywhere. But it’s furthest from anything a computer can directly work with, although many frameworks offer some shortcut such as an eval(…) method.
  • Tree representing the expression – but hey, what does it look like? In JVM memory it’s obvious – there are objects of various AST nodes (or ParseTree in ANTLR4). But how do you persist this? Serialize? Good, you may not need to read it, and although Java serialization is not very good it may suffice for this. Or will it be a tree-like structure in a database? Then you need to process it somehow to form the tree in memory, but it may indeed be much easier than parsing a raw string. Or you may store it as a string after all – like XML, JSON, or anything else that supports tree serialization in some form. Then you are parsing, but not with your custom parser; you’re using well-known tools for ubiquitous formats and just transforming the input into a tree. You’re not using any custom grammar – maybe just a custom XML/JSON schema or serialization mechanism. In any case, the core representation is the tree – and the tree only.
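To make the “core representation is the tree” point concrete, here is a hypothetical minimal node hierarchy with a hand-rolled JSON form – my own sketch, not ANTLR’s ParseTree or any real serialization API:

```java
public abstract class ExprNode {

    public abstract String toJson();

    // leaf: integer literal
    public static class Num extends ExprNode {
        private final int value;

        public Num(int value) {
            this.value = value;
        }

        @Override
        public String toJson() {
            return String.valueOf(value);
        }
    }

    // inner node: binary operation with two subtrees
    public static class BinOp extends ExprNode {
        private final String op;
        private final ExprNode left;
        private final ExprNode right;

        public BinOp(String op, ExprNode left, ExprNode right) {
            this.op = op;
            this.left = left;
            this.right = right;
        }

        @Override
        public String toJson() {
            return "{\"op\":\"" + op + "\",\"left\":" + left.toJson()
                    + ",\"right\":" + right.toJson() + "}";
        }
    }
}
```

An expression like 1+2 becomes new ExprNode.BinOp("+", new ExprNode.Num(1), new ExprNode.Num(2)) and serializes to a small JSON object – and reading it back is a job for any off-the-shelf JSON library, no custom grammar needed.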

Just evaluate my string, please!

Here you want your expression to be just “executed”. If you want to do it many times for the same expression, then compilation (if provided) typically makes it faster. Let’s see how easily you can do this with Java’s ScriptEngine aka JSR 223 (use your favourite IDE(A) to help you with imports ;-)):

String expr = "1+1";
ScriptEngine scriptEngine = new ScriptEngineManager().getEngineByName("ecmascript");
CompiledScript compiledScript = ((Compilable) scriptEngine).compile(expr);
Object result = compiledScript.eval();
System.out.println("result = " + result + " (" + result.getClass().getName() + ")");

Easy, isn’t it? And it is compiled! We used the built-in JavaScript engine that has been available in the JDK since 1.6 (Rhino in 6/7 and then the famous Nashorn since Java 8).

But hey, something is missing – we don’t want the result 2 every time. Let’s try a different expression with some variables:

String expr = "a < b";
ScriptEngine scriptEngine = new ScriptEngineManager().getEngineByName("ecmascript");
CompiledScript compiledScript = ((Compilable) scriptEngine).compile(expr);
Bindings bindings = scriptEngine.createBindings();
bindings.put("a", 30);
bindings.put("b", 25);
Object result = compiledScript.eval(bindings);
System.out.println("result = " + result + " (" + result.getClass().getName() + ")");

Here we have an expression with variables, so we create Bindings (which implements Map) and provide values for a and b. The Bindings are used as a parameter for eval – and the result is naturally Boolean.

Other options for evaluation?

When I needed expression evaluation for configuring the Java Simon callback mechanism, it was in the times of Java 5 – and no scripting engine. First I used JEval (it was on dev.java.net then; I’m not sure whether it’s the same project), but when I moved Java Simon to a Maven build, JEval was lagging behind and I was looking for another solution.

So I moved to MVEL, which was way too powerful for my needs – and the JAR was also around 1 MB. But MVEL (MVFLEX Expression Language) is more than just some expression evaluator – it really is an expression language, like you know from JSP or Spring (SpEL). It traverses beans, can call methods, etc. If this is what you need, then look for this kind of beast (EL).

Consider also the extensibility of the solution. Will you need to add custom functions? These are all things that are already covered by existing libraries. All you have to do is choose. I, personally, didn’t need this kind of power and was happy with the script engine. It is also more powerful than I need, but at least it is built in already. (Although technically it’s just an API and your non-Oracle JDK may not provide any concrete engine.)

Expression string preprocessing

Whatever evaluator you use, it may happen that the supported grammar is not exactly what you want. Sometimes it’s about details. Consider writing expressions into an XML configuration – all those <, > and && are not very XML friendly. You may use a CDATA section (better) or character entities (ugly).

Or you may introduce some simple replacement preprocessing and support LT, GT and AND (and others) instead!

// code in main(...)
String expr = "a > 10 AND NOT (b == 20)";
expr = preprocessExpression(expr);
// ...the rest of the code follows as before

private static String preprocessExpression(String expr) {
    return expr
        .replaceAll(" AND ", " && ")
        .replaceAll(" OR ", " || ")
        .replaceAll(" NOT ", " !");
}

If you expect a lot of replacing for a lot of expressions, you may use precompiled Patterns; but if you preprocess each expression just once and then store it as a CompiledScript, the gain is probably not that big.
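For completeness, a variant with precompiled Patterns could look like this hypothetical sketch (the class name is mine; the behaviour matches the replaceAll version above, just without recompiling the regexes on every call):

```java
import java.util.regex.Pattern;

public class ExprPreprocessor {

    // compiled once, reused for every preprocessed expression
    private static final Pattern AND = Pattern.compile(" AND ");
    private static final Pattern OR = Pattern.compile(" OR ");
    private static final Pattern NOT = Pattern.compile(" NOT ");

    public static String preprocess(String expr) {
        expr = AND.matcher(expr).replaceAll(" && ");
        expr = OR.matcher(expr).replaceAll(" || ");
        expr = NOT.matcher(expr).replaceAll(" !");
        return expr;
    }
}
```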

I used this strategy in Java Simon where I compare ns measurements, and to make that easier I even provide shortcuts for seconds, millis and micros (us) – check the code here. Very simple replacements, but they can help a lot. The user might not even guess that there is JavaScript evaluation behind it.

How to validate used variables?

Imagine you want to allow only some variables in your expressions – which is reasonable. Using ScriptEngine, you know what you put into the Bindings; any other variable would fail anyway. How to check the expression in this case? Compilation is not enough, of course.

The easiest way to check the expression is to prepare bindings with arbitrary values (just the types must be right, of course) – and then let the expression evaluate with these bindings. This checks that everything clicks. There is a possible problem though: in the case of expressions where not all parts must be evaluated (like a || b when a is true), the question is whether the names in the not-evaluated part are checked or not. With lazy evaluation some things may slip – this definitely happens with the Nashorn engine, for instance.

What you can do is somehow check all identifier-like “words” in your string – you can use a regex to extract them one by one. But it is cumbersome, of course. This is much more natural in the case of parsed expressions stored in a syntax tree. But that one is for another post.
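Such a regex sweep could be sketched like this – a hypothetical helper, and note that it naively collects every identifier-like word, including function names or JavaScript keywords, so it errs on the side of false positives:

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IdentifierExtractor {

    // identifier-like word: letter or underscore, then letters/digits/underscores
    private static final Pattern IDENTIFIER = Pattern.compile("[A-Za-z_][A-Za-z0-9_]*");

    public static Set<String> identifiers(String expr) {
        Set<String> found = new LinkedHashSet<>();
        Matcher matcher = IDENTIFIER.matcher(expr);
        while (matcher.find()) {
            found.add(matcher.group());
        }
        return found;
    }
}
```

You would then simply check the returned set against your whitelist of allowed variable names.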


If we don’t need to work with the parsed syntax tree, the options mentioned here are preferable to using some parser and then interpreting the expression represented as a tree. Especially when compilation is available, the result may be quite fast – of course depending on the cost of entering the runtime of the expression evaluator. It’s also mentally easy to understand – you want to evaluate the expression, and that’s what you see in the code.

In the next post we will look at the cases when you want a little bit more than just the evaluation.

Why Gradle doesn’t provide “provided”?

Honestly, I don’t know. I’ve been watching Gradle for around 3 years already, but except for primitive demos I didn’t have the courage to switch to it. And – believe it or not – the provided scope was the biggest practical obstacle in my case.

What is provided, anyway?

Ouch, now I got myself too. I know when to use it, or better said – I know with what kind of dependencies I use it. Use it with any dependency (mostly an API) that is provided (hence the name, I guess :-)) by the runtime platform where your artifact will run. A typical case is javax:javaee-api:7.0. You want to compile classes that use various Java EE APIs. This one is kind of a “javaee-all” and you can find separate dependencies for particular JSRs. But why not make your life easier when you don’t pack this into your final artifact (WAR/EAR) anyway?

So it seems to be like compile (Maven’s default scope for dependencies) except that it should not be packed into the WAR’s lib directory, right? I guess so, except that provided is not transitive, so you have to name it again and again, while compile dependencies are taken from upstream projects.
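For comparison, in Maven all of this is just one scope element on the dependency – a minimal pom.xml fragment:

```xml
<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>7.0</version>
    <!-- on the compile classpath, but not packed into WAR/lib and not transitive -->
    <scope>provided</scope>
</dependency>
```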

BTW: This is why I like writing blog posts – I have to make it clear to myself (sometimes not for the first time, of course). Maven’s dependency scopes are nicely described here.

But Gradle has provided!

Without being strict about what Gradle is and what its plugins are: when you download Gradle, you can use this kind of scope – if you use the ‘war’ plugin, just like in this simple example. If you want to run it (and other examples from this post), just try the following commands in git-bash (or adjust as necessary):

$ svn export https://github.com/virgo47/litterbin/trunk/demos/gradle-provided/
$ cd gradle-provided
$ gradle -b build-war.gradle build

Works like a charm – but it’s a WAR! The good thing is you can now really check that the provided dependency is not in the built WAR file; only Guava sits there in the WEB-INF/lib directory. But often we just need JARs. Actually, when you modularize your project, you mostly work with JARs that are put together into a couple of final artifacts (WAR/EAR). That doesn’t mean you don’t need Java EE imports in these JARs – on the contrary.

So this providedCompile is dearly missed in Gradle’s java plugin. And we have to work around it.

Just Google it!

I tried. Too many results. Various results. Different solutions, different snippets. And nothing worked for me.

The main reason for my failures must have been the fact that I tried to apply various StackOverflow answers or blog advice to an existing project. I should have tried to create something super-simple first.

Recently I created my little “litterbin” project on GitHub. It contains any tests, demos or issue reproductions I need to share (mostly with my later self, or when I’m on a different computer). And today, finally, I tried to prove my latest “research” into the provided scope – you can check various gradle.build files using the vanilla approach or the propdeps plugins (read further). You can also “svn export” (download) the project as I showed above and play with it.

My final result without using any fancy plugin is this:

apply plugin: 'maven'
apply plugin: 'java'
apply plugin: 'idea'

repositories {
    mavenCentral()
}

configurations {
    provided
}

sourceSets {
    main {
        compileClasspath += configurations.provided
    }
    test.compileClasspath += configurations.provided
    test.runtimeClasspath += configurations.provided
}

// if you use 'idea' plugin, otherwise fails with: Could not find method idea() for arguments...
idea {
    module {
        /*
         * If you omit [ ] around, it fails with: Cannot change configuration ':provided' after it has been resolved
         * This is due Gradle 2.x using Groovy 2.3 that does not allow += for single elements addition.
         * More: https://discuss.gradle.org/t/custom-provided-configuration-not-working-with-gradle-2-0-rc2-in-multi-project-mode/2459
         */
        scopes.PROVIDED.plus += [configurations.provided]
        downloadJavadoc = true
        downloadSources = true
    }
}

dependencies {
    compile 'com.google.guava:guava:17.0'
    provided 'javax:javaee-api:7.0'
}

In the comments you can see the potential problems.

With a strictly contained proof-of-concept “project” I can finally be sure what works and what doesn’t. If it works here and doesn’t work when combined with something else, the problem is somewhere else (or in the interaction of various parts of the build). Before, I always tried to migrate some multi-module build from Maven, and although I tried to do it incrementally, it simply got over my head when I wanted to tackle provided dependencies.

Just use something pre-cooked!

If you want the provided scope you can also use something that just gives it to you. The Spring Boot plugin does, for instance, but it may also add something you don’t want. In this StackOverflow answer it was suggested to use the propdeps plugin managed by Spring. This just adds the scopes you may want – and nothing else. Let’s try it! I went to the page and copied the snippets – the build looked like this:

apply plugin: 'maven'
apply plugin: 'java'
apply plugin: 'idea'

repositories {
    mavenCentral()
}

buildscript {
    repositories {
        maven { url 'http://repo.spring.io/plugins-release' }
    }
    dependencies {
        classpath 'org.springframework.build.gradle:propdeps-plugin:0.0.6'
    }
}

configure(allprojects) {
    apply plugin: 'propdeps'
    apply plugin: 'propdeps-maven'
    // following line causes Cannot change configuration ':provided' with Gradle 2.x (uses += without [ ] internally)
    apply plugin: 'propdeps-idea'
    apply plugin: 'propdeps-eclipse'
}

dependencies {
    compile 'com.google.guava:guava:17.0'
    provided 'javax:javaee-api:7.0'
}

As the added comment suggests, it wasn’t a complete success. Without the IDEA plugin and its section, it worked. But the error with the IDEA parts was this:

Cannot change configuration ':provided' after it has been resolved.

You google and eventually find this discussion, where the key message by Peter Niederwieser (core Gradle developer) is:

Gradle 2 updated to Groovy 2.3, which no longer supports the use of ‘+=’ for adding a single element to a collection. So instead of ‘scopes.PROVIDED.plus += configurations.provided’ it’s now ‘scopes.PROVIDED.plus += [configurations.provided]’.

The funny part is that it is actually fixed in spring-projects/gradle-plugins version 0.0.7 – they have just forgotten to update the examples in the README. :-) So yeah, with 0.0.7 instead of 0.0.6 in the example, it works fine.

How can this stop you?

Maybe the provided scope is not that trivial. Scope is actually not the right word in the Gradle world, but my mentality and vocabulary are rooted in the Maven world after all the years. If provided were obvious and easy, they’d probably have resolved this never-ending story already. Now the issue is polluted with advocates for the scope (yeah, I didn’t resist either) and it’s difficult to understand what the problem is on the side of the Gradle team, except it seems they’ve just been ignoring it for a couple of years.

The original reporter claimed it doesn’t make sense to stay with Maven for this – and he is right. He is also right that many developers don’t understand how Configuration works (true for me as well) and how it relates to ClassLoader (true again). I’ve read a Gradle book and many parts of the manual; the trouble is that my problems were always about migrating existing Maven builds. Not big ones, but definitely multi-module with provided dependencies. And it really is not easy from this position.

I successfully used Gradle for one-time projects, demos, etc. Every time I try to learn something new about it. I acknowledge that the build domain is a hard one. Gradle has good documentation, but that doesn’t mean it’s always easy to find the right recipe. I never worked with a team where someone was dedicated to this task, and I was mostly (and sadly) the most knowledgeable member when it came to builds – with tons of other stuff on my hands. (Sorry for the rant. It springs from the fact that builds are considered a secondary matter, or worse. And there are too many primary concerns anyway.)

When one doesn’t know how to get to the “provided” scope – which was available “for free” in Maven – any obstacle seems much bigger than it really is. There is simply too much we don’t know when we tackle Gradle for the first time. Nobody tells you “don’t use propdeps-plugin:0.0.6, try 0.0.7 instead”.

Or you get a Gradle-like message “Cannot change configuration ‘:provided’ after it has been resolved”, which is probably perfectly OK from the Gradle point of view – it nicely covers the underlying technology. But it also hides the root cause: Groovy 2.3 simply doesn’t support += without wrapping the right side into […] – and even that only in some cases:

// correct line, but fails without [ ]
idea { module { scopes.PROVIDED.plus += [configurations.provided] }}

Even --stacktrace --debug will not help you find the root cause. Maybe if you debugged the build in an IDE, but I’m definitely not there yet (not with Gradle; I do debug Maven builds sometimes).

I hope you can now appreciate how subtle the whole problem is and how much difficulty it may cause.

provided or providedCompile?

And that is another trick – people call it differently. “providedCompile” is probably more Gradle-like (and available with the war plugin), while “provided” is what we are used to from Maven. Now imagine you experiment with various solutions for introducing this kind of scope – that is, you test different plugins. And they all call it differently. Every time, you have to go to your dependency list and fix it there, or wonder why it doesn’t work when you forget. It just adds to the chaos when you are already navigating unknown territory.
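To illustrate the naming mismatch, here is the same dependency declared under two different conventions (plugin and configuration names as they existed around Gradle 2.x; the servlet-api dependency is just an example):

```groovy
// with the war plugin, the configuration is called providedCompile
apply plugin: 'war'

dependencies {
    providedCompile 'javax.servlet:javax.servlet-api:3.1.0'
}

// with propdeps-plugin (or a hand-rolled configuration),
// it is typically called provided instead
dependencies {
    provided 'javax.servlet:javax.servlet-api:3.1.0'
}
```

Switch plugins and every affected line in your dependency block has to be renamed – forget one and the build breaks in a non-obvious way.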

And it also nicely underlines how much is missing from the java plugin out of the box. Because “it is supported in the ‘war’ plugin” is not a satisfactory answer. I want to use Java EE imports in my JAR that may later be put into a WAR. Or I may run it in an embedded container that will be declared with different dependencies. “This mostly affects only library developers” is also not true. Sure, it affects my Java Simon (which is a library), but I used the provided scope for JAR modules on every single project in my past.

Now imagine this is your first battle with Gradle (which it more or less was in my case). How am I supposed to be confident about releasing to Maven Central? It reportedly works, but then, for experienced Gradle users everything is easy…


During my research I also found the article Provided Scope in Gradle. I don’t know how accurate it is for Gradle 2.x, or whether the Android guys have solved it somehow since. The author added nice pictures and also started with the “What is provided anyway?” question (I swear it was a natural choice for my first subheader too :-)). And again, it just shows how complicated Gradle builds get when something is not available out of the box.

It doesn’t mean I don’t want to get to a Gradle build. I don’t like Maven’s rigidity – although I appreciate the conventions, and I’ll follow them with my Gradle builds too. But sometimes you just want to switch something from false to true – and it takes 10 lines of XML. You may say, meh! But it means you see less on screen, builds are less readable, etc. And we already agreed, I hope, that building is a (potentially) complex domain. Readability is a must.

Sure, there is something like polyglot Maven, but the lack of flexibility is still an issue. I’m absolutely convinced that Gradle is the way to go. I tried it for simple things and I liked it, and I have no doubt I’ll learn it well enough to master bigger builds too.

Hopefully, provided will not be a problem anymore. :-)
