Believe in build automation

I was probably lucky to have a great colleague who turned our installation/upgrade process into a single install.sh script. The script contained a uuencoded ZIP as well; it unzipped the content (a couple of Java EARs), checked the application server to see whether all the resources were set up and then deployed the EARs. Having this experience around 2005, I got totally hooked on the whole idea of automation.

2005? Wasn’t it late already?

I don’t remember exactly, maybe it was even 2006. In any case, it wasn’t late for me. It was the next step in my personal development and I was able to fully embrace it. It may sound foolish, but we did not measure the hours saved against those we spent getting to this single-command upgrade of our system. We didn’t have many other automation experiences. We had no integration server (not sure we even knew what it was) and we struggled with automated testing. Even if we had wanted to measure, we would probably have failed, because we simply were not there yet.

There is this idea of a maturity model – that when there are multiple levels of knowledge or understanding or skill or whatever, you simply can’t learn the highest one right away, or even skip over any to get higher. Without living the steps one by one there is no solid foundation for the next one. You don’t need a formalized maturity model; very often there simply is one – call it natural progression.

Of course, I don’t mean to bring in a “maturity model” to hold you back. Aim high, but definitely reinforce your foundations when you feel they don’t work. Not all models are prescribed; sometimes it’s more of an exploration. But talking about build automation, you can find maturity models for it too. (Funny enough, one of them mentions Vagrant/Packer at the top of the pyramid – and these two tools, and the last two weeks I spent with them, made me write this post. :-))

Personal and team maturity

We were a small team, far from the best in the business, but we were quite far ahead in our neighbourhood. We were following trends, not blindly, but there was no approval process to stop us from trying out interesting things. By 2010 we had our integration server (continuous build), managed to write automated tests for newly developed features and even created some for critical older parts.

Then I changed my job and went to a kind of established software house, and I was shocked how desperate the state of automation was there. Some bosses thought we had something like an integration server, but nobody really knew of one. What?! And of course, testing takes time, especially automated, and so on and so forth… There was no way I could just push them. I gave some talks about it; some people were on the same page, some tried to catch up.

We got Jenkins/Sonar up and running, but automated testing was lagging behind. We had a really important system that should have had some tests – but there were none. People tried, but failed. They did not pursue the goal; they saw only the problems (it takes time and adds code you have to maintain) but did not see the benefits. There are cases when doing more of the wrong thing does not make it any better (“let’s do even more detailed specification, so that coding can be done by cheaper people”), but there are other cases when doing the right thing the wrong way requires a different approach. It requires learning (reading, courses) and practice. There is no magic that will get you from zero to the fourth maturity level in any discipline.

You can have mature individuals, but the team (or division, or even company) can still be immature in what you pursue. And how it turns out strongly depends on the company culture. The other way around is much easier – coming into a more mature team means you can quickly get up and running at that level, although it is still important to catch up with those foundations, unless you want to do what you do only superficially. This may be acceptable for some areas; it may even be really efficient, as not everybody can know everything in all the depth.

Proof vs belief

I really believe I got lucky to get hooked. You can throw books at people, you can argue and show them success stories. Just as with anything based on “levels of maturity”, you can’t simply show them the result. They don’t see it with your eyes. It never “just works”, and there are many agile/lean failures showing it – mostly based on following the practices only, forgetting values and principles (which is an analogy of a maturity model too).

I got my share of evidence; I saw the shell script allowing us those “one-click” updates. But for some this would not be enough. I was always inclined to learning and self-improvement. Sure, there are days when I do the repetitive work “manually” – I guess those are the days I run out of mental fuel. But in most cases I was trying to “program” my problems away.

I would rather mess around with vim macros for a while, even if it didn’t pay off for the 20 lines I needed to change at that moment. Next time, when I needed to tweak 2000 lines, I was ready. I never thought about kata, but now I know that was what I was doing. I was absolutely sure about why I did it. I didn’t see the lost time; what I imagined was my neurons bending around the problem first, just to bend later around the whole class of problems, around the pattern. I didn’t care about proof, I believed.

I see the proof now, some 15 years down the path. I see where believing got me and where many other programmers are stuck. I see how easily I now write a test that would have been a mind-blowing mission impossible just a couple of years ago. And I know that the path ahead is still a long one. I’m definitely much closer to the top of the trends than at any time before.

Final thoughts

I don’t think it was that install.sh script alone that brought the power of automation to my attention. I believe it was one of the defining moments – my favourite one when I try to convince people with a personal story. It saved us incredible amounts of time. It was a kind of documentation too, the best one, because it actually ran. I would understand a lot of this much later, and it’s crystal clear to me now.

But it was belief in the first place. That made it easier for me. When I met a culture that didn’t support these ideas (at least not in practice) I was already strong enough to do it my way anyway. And I saw it paying off over and over and over again. While my colleagues declared “no time for testing”, I was in fact saving time with testing. People are so busy writing lines of code; maybe if they thought more or started with a test they would get to the result faster – with the benefit of higher quality packed in too.

How can we hope for continuous integration when we don’t start small with tests and other scripted tools? There are so many automation tools out there and the concepts are so well understood already that staying stuck in the 2000s is just pure laziness. The bad kind. It doesn’t matter if it’s a single person or the whole company.

Last three years with software

A long time ago I decided to blog about my technology struggles – mostly with software but also with consumer devices. I don’t know why it happened on Christmas Eve though. Two years later I repeated the format. And here we are, three years after that. So the next post can be expected in four years, I guess. Actually, I split this one in two – one post for software, mostly based on professional experience, and the other for consumer technology.

Without further ado, let’s dive into this… well… as dives go, this one will obviously be pretty shallow. Let’s skim the stuff I worked with, stuff I like and some I don’t.

Java case – Java 8 (verdict: 5/5)

This time I’m adding my personal rating right into the header – a little change from the previous post, where it was at the end.

I love Java 8. Sure, it’s not Scala or anything even more progressive, but in the context of the Java philosophy it was a huge leap, and lambdas especially really changed my life. BTW: Check this interesting Erik Meijer talk about category theory and (among other things) how it relates to Java 8 and its method references. Quite fun.

Having worked with Java 8 for 17 months now, I can’t imagine going back. Not only because of lambdas and streams and related details like Map.computeIfAbsent, but also because of the date and time API, default methods on interfaces – and the list could go on.
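Just to illustrate – a minimal sketch of the features mentioned above, not tied to any real project:

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class Java8Taste {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("build", "automation", "belief");

        // computeIfAbsent replaces the usual get/check-for-null/put dance
        Map<Integer, List<String>> byLength = new TreeMap<>();
        for (String word : words) {
            byLength.computeIfAbsent(word.length(), k -> new ArrayList<>()).add(word);
        }
        System.out.println(byLength); // {5=[build], 6=[belief], 10=[automation]}

        // streams + lambdas make transformations one-liners
        System.out.println(words.stream().map(String::toUpperCase).collect(Collectors.toList()));

        // and the new date and time API is finally sane
        System.out.println(LocalDate.of(2015, 12, 24).plusDays(30)); // 2016-01-23
    }
}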

JPA 2.1 (no verdict)

ORM is an interesting idea and I can claim around 10 years of experience with it, although the term itself is not always important. I also read books about it in my quest to understand it (many programmers don’t bother). The idea is kinda simple, but it has many tweaks – mainly when it comes to relationships. JPA 2.1 as an upgrade is good, I like where things are going, but I like the concept itself less and less over time.

My biggest gripe is the little control you get over “to-one” loading, which is difficult to make lazy (more like impossible without some nasty tricks) and can result in chain loading even if you are not interested in the related entity at all. I think there is a reason why things like jOOQ cropped up (although I personally don’t use it). There are some tricks to get rid of these problems, but they come at a cost. Typically – don’t map these to-one relationships, keep them as foreign key values. You can always fetch the related entity with a query.
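For illustration, a minimal sketch of that explicit style – the Dog/Breed entities are hypothetical, not from any real project:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Dog {
    @Id
    Long id;
    String name;

    // raw foreign key instead of @ManyToOne Breed – no mapped relationship,
    // hence no surprise chain loading of Breed and whatever Breed drags in
    @Column(name = "breed_id")
    Integer breedId;
}

// and only when the related entity is really needed:
// Breed breed = em.find(Breed.class, dog.breedId);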

That leads to the bottom line – be explicit, it pays off. Sure, it doesn’t work universally, but any time I leaned toward the explicit solution I felt a lot of relief from the struggles I had gone through before.

I don’t rate JPA, because I try to rely on fewer and fewer ORM features. JPA is not a bad effort, but it is so Java EE-ish – it does not support modularity and the providers are not easy to change anyway.

Querydsl (5/5)

And when you work with JPA queries a lot, get some help – I can only recommend Querydsl. I’ve been recommending this library for three years now – it has never failed me, never let me down, and often it amazed me. This is how the criteria API should have looked.

It has a strong metamodel that allows you to do crazy things with it. We based a kind of universal filtering layer on it, whatever the query is. We even filter queries with joins, even on joined fields. But again – we can do that because our queries and their joins are not ad hoc, they are explicit. :-) Because you should know your queries, right?
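For a flavour, a minimal Querydsl sketch – assuming a generated QDog metamodel class and the Querydsl 4 JPAQueryFactory:

QDog dog = QDog.dog;
List<Dog> dogs = new JPAQueryFactory(entityManager)
    .selectFrom(dog)
    // type-safe paths generated from the entity, no strings to mistype
    .where(dog.name.startsWith("Re")
        .and(dog.breedId.eq(4)))
    .orderBy(dog.name.asc())
    .fetch();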

Sure, Querydsl is not perfect, but it is as powerful as JPQL (or as limited, for that matter) and more expressive than the JPA criteria API. Bugs are fixed quickly (personal experience), the developers care… what more to ask?

Docker (5/5)

Docker stormed into our lives – for some practically, for others at least through the media. We don’t use it that much, because lately I’m bound to Microsoft Windows and SQL Server. But I experimented with it a couple of times for development support – we ran Jenkins in a container, for instance. And I’m watching it closely, because it rocks and will rock. Not sure what I’m talking about? Just watch the DockerCon 2015 keynote by Solomon Hykes and friends!

Sure – their new Docker Toolbox accidentally screwed up my Git installation, so I’d rather install Linux on VirtualBox and test Docker inside it without polluting my Windows even further. But these are just minor problems in this (r)evolutionary tidal wave. And one just must love the idea of immutable infrastructure – especially when demonstrated by someone like Jérôme Petazzoni (for the merit itself, not that he’s my idol beyond professional scope :-)).

Spring 4 and on (4/5)

I have been aware of Spring since the dawn of microcontainers – and Spring emerged victorious (sort of). A friend of mine once mentioned how much he was impressed by Rod Johnson’s presentation about Spring many years ago. How structured his talk and speech was – the story about how he disliked all those logs pouring out of your EE application server… and that’s how Spring was born (sort of).

However, my real exposure to Spring started in 2011 – but it was very intense. And again, I read more about it than most of my colleagues. And just like with JPA – the more I read, the less I know, so it seems. Spring is big. Start some typical application and read the logs – and you can see the EE of the 2010s (sort of).

It’s not that I don’t like Spring, but I guess its authors (and there are many of them now) simply can’t see anymore what a beast they have created over the years. Sure, there is Spring Boot, which reflects all the current trends – like don’t deploy into a container, start the container from within instead – plus all its automagic features, monitoring, clever defaults and so on. But that’s it. You don’t do more, but you’d better know about it. Or not? Recently I got to one of Uncle Bob’s newer articles – called Make the Magic go away. And there is undeniably much to it.

Spring developers do their best, but the truth is that many developers just adopt Spring because “it just works”, while they don’t know how – and very often it does not (sort of). You actually should know more about it – or at least some basics for that matter – to be really effective. Of course – this magic problem is not only about Spring (or JPA), but these are the leaders of the “it simply works” movement.

But however you look at it, it’s still “enterprise” – and that means complexity. Sometimes essential, but mostly accidental. Well, that’s also part of the Java landscape.

Google Talk (RIP)

And this is this post’s biggest letdown. Google stopped supporting their beautifully simple chat client without any reasonable replacement. The Chrome application just doesn’t seem right to me – and it genuinely annoys me with its chat icon that hangs on the desktop, sometimes over my focused application; I can’t relocate it easily… simply put, it does not behave like a normal application. That means it behaves badly.

I switched to Pidgin, but there are issues. Pidgin sometimes misses a message in the middle of a conversation – that was the biggest surprise. I double-checked: when someone reportedly asked me something again, I went to my Gmail account and indeed saw the message in the Chat archive, but not in my client. And if I get messages while offline, nothing notifies me.

I activated the chat in my Gmail after all (against my wishes though), merely to be able to see any missing messages. But sadly, the situation with Google Talk/chat (or Hangouts, I don’t care) is dire when you expect a normal desktop client. :-(

My Windows toolset

Well – now away from Java; we will hop onto my typical developer’s Windows desktop. I have mentioned some of my favourite tools before, some of them a couple of times I guess. So let’s do it quickly – bullet style:

  • Right after some “real browser” (my first download on a fresh Windows) I actually download Rapid Environment Editor. Setting Windows environment variables suddenly feels normal again.
  • Git for Windows – even if I didn’t use git itself, just for its bash – it’s worth it…
  • …but I still complement the bash with GnuWin32 packages for whatever is missing…
  • …and run it in better console emulator, recently it’s ConEmu.
  • Notepad2 binary.
  • And the rest like putty, WinSCP, …
  • Also, on Windows 8 and 10 I can’t imagine living without Classic Shell. Windows 10 is a bit better, but its Start menu is simply unusable for me – the classic Start menu was so much faster with the keyboard!

As a developer I also sport some other languages and tools, mostly JVM based:

  • Ant, Maven, Gradle… obviously.
  • Groovy, of course, probably the most popular alternative JVM language. Not to mention that groovysh is a good REPL until Java 9 arrives (recently delayed beyond 2016).
  • VirtualBox, recently joined by Vagrant and hopefully also something like Chef/Puppet/Ansible. And this leads us to my plans.

Things I want to try

I was always a friend of automation. I’ve been using Windows for many years now, but my preference for UNIX tools is obvious. Try to download and spin up a virtual machine for Windows and one for Linux and you’ll see the difference. Linux just works, and tools like Vagrant know where to download images, etc.

With Windows, people are not even sure how/whether they can publish prepared images (talking about development only, of course), because nobody can really understand the licenses. Microsoft started to offer prepared Windows virtual machines – primarily for web development though, no server-class OS (not that I appreciate Windows Server anyway). They even offer Vagrant boxes, but try to download one and run it as is. For me, Vagrant refused to connect to the started VirtualBox machine, any reasonable instructions are missing (nothing specific to Vagrant is in the linked instructions), no Vagrantfile is provided… honestly, quite a lame job of making my life easier. I still appreciate the virtual machines, though.

But then there are those expiration periods… I just can’t imagine preferring any Microsoft product/platform for development (let alone for production, obviously). The whole culture of automation on Windows is just completely different – read anything from “nonexistent for many” through “very difficult” to “made artificially restricted”. No wonder many Linux people can script and too few Windows guys can. Licensing terms are to blame as well. And virtual machine sizes for Windows are also ridiculous – although Microsoft is reportedly trying to do something in this field to offer a reasonably small base image for containerization.

Anyway, back to the topic. Automation is what I want to improve. I’m still doing it anyway, but recently the progress has not been as good as I wished it to be. I fell behind with Gradle, I didn’t use Docker as much as I’d like to, etc. Well – but life is not only work, is it? ;-)

Conclusion

The good thing is that there are many tools available for Windows that make a developer’s (and former Linux user’s) life so much easier. And if you look at Java and its whole ecosystem, it seems to be alive and kicking – so everything seems good on this front as well.

Maybe you ask: “What does 5/5 mean anyway? Is it perfect?” Well, probably not, but at least it means I’m satisfied – happy even! Without happiness it’s not a 5, right?

Expression evaluation in Java (4)

Previously we looked at various options for evaluating expressions, then we implemented our own evaluator in ANTLR v4, and then we complicated it a bit with more types and more operations. But it still doesn’t make sense without variables. What good would any expression-based rule engine be if we couldn’t change the input parameters? :-)

But before that…

All the code related to this blog post is here on GitHub; the package is called expr3 this time, as it is our third iteration of the ANTLR solution (even though we are in the 4th post). Tests are here. We will focus on variables, but there are some changes made in expr3 beyond variables themselves.

  • Grammar supports NULL literal and identifiers (for variables) – but we will get to this.
  • ExpressionCalculatorVisitor now accepts even more types, and the method convertToSupportedType takes care of this. This is important to support a reasonable palette of object types for variables (and it will work the same for return values from functions later). A sketch of what this method may look like follows this list.
  • Numbers can be represented as Integer or BigDecimal. If the number fits into the Integer range (and is not a decimal number, of course) it is represented as an Integer, otherwise BigDecimal is used. This does complicate arithmetic and relational (comparison) operations – we need some promotions here and there, etc. As of now it is coded in a bit of a crude way; had the implicit conversion rules been more complicated, a more sophisticated solution would be better.
  • Java Date and Time API types can be used as variable values, but they will be converted to ISO extended strings (which still allows comparison!). LocalDate, LocalDateTime and Instant are supported in this demo.
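To make the two conversion bullets concrete, here is a sketch of what such a convertToSupportedType may do – the method name comes from the repo, the body here is illustrative only:

import java.math.BigDecimal;
import java.time.Instant;
import java.time.LocalDate;
import java.time.LocalDateTime;

static Object convertToSupportedType(Object value) {
    if (value instanceof Number) {
        BigDecimal bd = new BigDecimal(value.toString());
        try {
            return bd.intValueExact(); // Integer if it fits and has no fraction
        } catch (ArithmeticException e) {
            return bd; // otherwise keep it as BigDecimal
        }
    }
    if (value instanceof LocalDate || value instanceof LocalDateTime
        || value instanceof Instant) {
        return value.toString(); // ISO extended string, still comparable
    }
    return value; // String, Boolean and null pass through
}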

As I said, I’ll not focus on these changes; they are in the code, and while they affect how we treat variables, they are not inherently related to introducing them. I’ll also not talk about the related tests (like literal resolution into Integer vs BigDecimal) – again, it’s all in the repo.

Identifiers and null

When we’re working with variables, we need to write them into the expression somehow – and that’s where identifiers come in. As you’d expect, identifiers represent the variables (or rather their values) in their respective places in the expression, so they are one kind of elemental expression, just like the various literals. The second thing we may need is the NULL value. This is not strictly necessary in all contexts, but null is so common in our Java world that I decided to support it too. Our expr rule in its fullness looks like this:

expr: STRING_LITERAL # stringLiteral
    | BOOLEAN_LITERAL # booleanLiteral
    | NUMERIC_LITERAL # numericLiteral
    | NULL_LITERAL # nullLiteral
    | op=('-' | '+') expr # unarySign
    | expr op=(OP_MUL | OP_DIV | OP_MOD) expr # arithmeticOp
    | expr op=(OP_ADD | OP_SUB) expr # arithmeticOp
    | expr op=(OP_LT | OP_GT | OP_EQ | OP_NE | OP_LE | OP_GE) expr # comparisonOp
    | OP_NOT expr # logicNot
    | expr op=(OP_AND | OP_OR) expr # logicOp
    | ID # variable
    | '(' expr ')' # parens
    ;

Null literal is predictably primitive:

NULL_LITERAL : N U L L;

Identifiers are not very complicated either, and I guess they are pretty much similar to Java syntax:

ID: [a-zA-Z$_][a-zA-Z0-9$_.]*;

Various tests for null in the expression (without variables first) may look like this:

    @Test
    public void nullComparison() {
        assertEquals(expr("null == null"), true);
        assertEquals(expr("null != null"), false);
        assertEquals(expr("5 != null"), true);
        assertEquals(expr("5 == null"), false);
        assertEquals(expr("null != 5"), true);
        assertEquals(expr("null == 5"), false);
        assertEquals(expr("null > null"), false);
        assertEquals(expr("null < null"), false);
        assertEquals(expr("null <= null"), false);
        assertEquals(expr("null >= null"), false);
    }

We don’t need much for this – resolving the NULL literal is particularly simple:

@Override
public Object visitNullLiteral(NullLiteralContext ctx) {
    return null;
}

We also modified visitComparisonOp – now it starts like this:

@Override
public Boolean visitComparisonOp(ExprParser.ComparisonOpContext ctx) {
    Comparable left = (Comparable) visit(ctx.expr(0));
    Comparable right = (Comparable) visit(ctx.expr(1));
    int operator = ctx.op.getType();
    if (left == null || right == null) {
        return left == null && right == null && operator == OP_EQ
            || (left != null || right != null) && operator == OP_NE;
    }
...

The rest of the method deals with non-null values, etc. We might also let this method return null when null is involved anywhere except for EQ/NE; right now it returns false. It depends on what logic we want.
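If we preferred that SQL-like logic, the null branch could look like this instead (a sketch, not what the repo does):

if (left == null || right == null) {
    switch (operator) {
        case OP_EQ:
            return left == null && right == null;
        case OP_NE:
            return left != null || right != null;
        default:
            return null; // comparing anything else with null is "unknown"
    }
}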

Variable resolver

Variable resolving inside the calculator class is also quite simple. We need something that resolves variables – that’s the variableResolver field, initialized in the constructor and used in visitVariable:

private final ExpressionVariableResolver variableResolver;

public ExpressionCalculatorVisitor(ExpressionVariableResolver variableResolver) {
    if (variableResolver == null) {
        throw new IllegalArgumentException("Variable resolver must be provided");
    }
    this.variableResolver = variableResolver;
}

@Override
public Object visitVariable(VariableContext ctx) {
    Object value = variableResolver.resolve(ctx.ID().getText());
    return convertToSupportedType(value);
}

Anything this resolver returns is converted to the supported types, as mentioned in the introduction. ExpressionVariableResolver itself is again very simple:

public interface ExpressionVariableResolver {
    Object resolve(String variableName);
}

And how can we implement this? With Java 8 you just must love it – here is a piece of a test:

private ExpressionVariableResolver variableResolver;

@BeforeMethod
public void init() {
    variableResolver = var -> null;
}

@Test
public void primitiveVariableResolverReturnsTheSameValueForAnyVarName() {
    variableResolver = var -> 5;
    assertEquals(expr("var"), 5);
    assertEquals(expr("anyvarworksnow"), 5);
}

I use a field that is set in the init method to a default “implementation” that always returns null. In another test method I change it to one that always returns 5, regardless of the actual variable name (parameter var), as the test clearly demonstrates. The next test is more useful, because its resolver returns a value only for a specific variable name:

@Test
public void variableResolverReturnsValueForOneVarName() {
    variableResolver = var -> var.equals("var") ? 5 : null;
    assertEquals(expr("var"), 5);
    assertEquals(expr("var != null"), true);
    assertEquals(expr("var == null"), false);

    assertEquals(expr("anyvarworksnow"), null);
    assertEquals(expr("anyvarworksnow == null"), true);
}

Now the actual name of the variable (identifier) must be “var”, otherwise the resolver returns null again. You might have heard that lambdas can work as super-short test implementations – and yes, they can.

You may wonder why I have the field instead of using a local variable in the test method itself. That would be better contained and would prevent any accidental leaks (although the @BeforeMethod covers me on this). The trouble is that variableResolver is used deeper in the expr(…) method and I didn’t want to add it as a parameter everywhere, hence the field:

private Object expr(String expression) {
    ParseTree parseTree = ExpressionUtils.createParseTree(expression);
    return new ExpressionCalculatorVisitor(variableResolver)
        .visit(parseTree);
}

Any real-life implementation?

The variable resolvers in the tests were obviously very primitive, so let’s try something more realistic. The first try is also very simple, but realistic indeed. Remember Bindings in Java’s ScriptEngine? It actually extends Map – so how about a resolver that wraps an existing Map<String, Object> (mapping variable name to its value)? OK, this – again – may be too primitive:

ExpressionVariableResolver resolver = var -> map.get(var);
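Since resolve(String) matches Map.get, a method reference does the same job – just a stylistic alternative:

ExpressionVariableResolver resolver = map::get;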

Bah, we need a bigger challenge! Let’s say I have a Java Bean or any POJO and I want to explicitly specify my variable names and how they should be resolved against that object. This may be a method call, like a getter, so this time we don’t have the values readily available in a collection (or a map).

The important thing to realize here is that the resolver will be different from object to object, because for different objects it needs to provide different values. However, the way it obtains the values will be the same. So we will wrap this “way” into a VariableMapper that knows how to get values from an object of a specific type (using generics) – and it will also help us resolve the value for a specific instance. The tests show how I intend to use it:

private VariableMapper<SomeBean> variableMapper;
private ParseTree myNameExpression;
private ParseTree myCountExpression;

@BeforeClass
public void init() {
    variableMapper = new VariableMapper<SomeBean>()
        .set("myName", o -> o.name)
        .set("myCount", SomeBean::getCount);
    myNameExpression = ExpressionUtils.createParseTree("myName <= 'Virgo'");
    myCountExpression = ExpressionUtils.createParseTree("myCount * 3");
}

@Test
public void myNameExpressionTest() {
    SomeBean bean = new SomeBean();
    ExpressionCalculatorVisitor visitor = new ExpressionCalculatorVisitor(
        var -> variableMapper.resolveVariable(var, bean));

    assertEquals(visitor.visit(myNameExpression), false); // null comparison is false
    bean.name = "Virgo";
    assertEquals(visitor.visit(myNameExpression), true);
    bean.name = "ABBA";
    assertEquals(visitor.visit(myNameExpression), true);
    bean.name = "Virgo47";
    assertEquals(visitor.visit(myNameExpression), false);
}

@Test
public void myCountExpressionTest() {
    SomeBean bean = new SomeBean();
    ExpressionCalculatorVisitor visitor = new ExpressionCalculatorVisitor(
        var -> variableMapper.resolveVariable(var, bean));

    // assertEquals(visitor.visit(myCountExpression), false); // NPE!
    bean.setCount(3f);
    assertEquals(visitor.visit(myCountExpression), new BigDecimal("9"));
    bean.setCount(-1.1f);
    assertEquals(visitor.visit(myCountExpression), new BigDecimal("-3.3"));
}

public static class SomeBean {
    public String name;
    private Float count;

    public Float getCount() {
        return count;
    }

    public void setCount(Float count) {
        this.count = count;
    }
}

A VariableMapper can live longer – you set it up and then reuse it. Its configuration is its state (the set methods); the concrete object is merely an input parameter. The variable resolver itself works like a closure around the concrete instance. Keep in mind that instantiating the calculator visitor is cheap, and the visiting itself is something you have to do anyway. Creating the parse tree is expensive, but we don’t repeat that between tests – and that’s probably how you want to use it in your application too. Cache the parse trees, create visitors – even with state specific to a single calculation – and then throw them away. This is also safest from a threading perspective. You don’t want to use a calculator that “closes over” another target object in another thread in the middle of your visiting business. :-)
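In application code that might look something like this – a sketch only; the cache field and the evaluate method are assumptions, the rest reuses classes from this post:

private final Map<String, ParseTree> parseTreeCache = new ConcurrentHashMap<>();

Object evaluate(String expression, SomeBean bean) {
    // expensive parsing happens at most once per distinct expression
    ParseTree parseTree = parseTreeCache.computeIfAbsent(
        expression, ExpressionUtils::createParseTree);
    // cheap, short-lived visitor closing over the current bean
    return new ExpressionCalculatorVisitor(
        var -> variableMapper.resolveVariable(var, bean))
        .visit(parseTree);
}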

OK, so what does that VariableMapper look like?

public class VariableMapper<T> {
    private Map<String, Function<T, Object>> variableValueFunctions = new HashMap<>();

    public VariableMapper<T> set(String variableName, Function<T, Object> valueFunction) {
        variableValueFunctions.put(variableName, valueFunction);
        return this;
    }

    public Object resolveVariable(String variableName, T object) {
        Function<T, Object> valueFunction = variableValueFunctions.get(variableName);
        if (valueFunction == null) {
            throw new ExpressionException("Unknown variable " + variableName);
        }
        return valueFunction.apply(object);
    }
}

As said, it keeps the configuration, but not the state of the object used in a concrete calculation – that’s what the variable resolver does (and again, using a lambda – one simply can’t resist in this case). Sure, you could combine the VariableResolver with the mapping configuration too, but that would either 1) work in a single-threaded environment only, or 2) force you to reconfigure the mapping for each resolver in each thread. It simply doesn’t make sense. The mapper (long-lived) keeps the “way” of getting stuff from an object of some type in a particular computation context, while the variable resolver (short-lived) merely closes over the concrete instance.

Of course, our mapper could stand some improvements. It would be good if one could “seal” the configuration so that no more set calls were allowed after that (probably throwing IllegalStateException).
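A minimal sketch of how that sealing might look – the seal method and the sealed flag are hypothetical, not in the repo:

private boolean sealed;

public VariableMapper<T> seal() {
    sealed = true;
    return this;
}

public VariableMapper<T> set(String variableName, Function<T, Object> valueFunction) {
    if (sealed) {
        throw new IllegalStateException("VariableMapper is already sealed");
    }
    variableValueFunctions.put(variableName, valueFunction);
    return this;
}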

Conclusion

So here we are, supporting even more types (Integer/BigDecimal) and – most importantly – variables! As you can see, every computation can now bring a different result. That’s why it’s advisable to rethink how you instantiate your visitors, especially in a multi-threaded environment.

Our ExpressionVariableResolver interface is very simple – it gets only the variable name – so if you want to resolve against something stateful (and probably mutable) it’s important to wrap around it somehow. The variable resolver doesn’t know how to get stuff from an object, because there is no such input parameter. That’s why we introduced the VariableMapper, which supports getting values from an object of some (generic) type. And we “implement” the variable resolver as a lambda that closes over the configured variable mapper and the object that is then fed to its resolveVariable method. This method, in contrast to the variable resolver’s resolve, takes the object as a parameter.

It doesn’t have to be an object – you may implement other ways to get variable values in different contexts; you just have to wrap around that context (in our case an object) somehow. I dare say Java 8’s functional programming capabilities make it so much easier…

Still, the main hero here is ANTLR v4, of course. Now our expression evaluator truly makes sense. I’m not promising any continuation of this series, but maybe I’ll talk about functions too. Although I guess you can easily implement them yourselves by now.

Exploring the cloud with AWS Free Tier (2)

In the first part of this “diary” I found a cloud provider for my developer testing needs – Amazon’s AWS. This time we will mention some hiccups one may encounter when doing basic operations around an EC2 instance. Finally, we will prepare a Docker image for ourselves, although this part is not really AWS-specific – at least not in our basic scenario.

Shut it down!

When you shut down your desktop computer, you see what it does. I’ve been running Windows for some years now, although I was a Linux guy before (blame gaming and music home recording). On servers, no doubt, I prefer Linux every time. But I honestly don’t remember what happens if I enter the shutdown now command without further options.

If I see the computer going on and on although my OS is already down, I just turn it off and remember to use the -h switch next time. But when “my computer” runs far away and only some dashboard shows what is happening, you simply don’t know for sure. There is no room for “mechanical sympathy”.

Long story short – always use shutdown -h now on your EC2 instance if you really want to stop it. Of course, check the instance’s Shutdown Behavior setting – by default it’s Stop, and that’s probably what you want (Terminate would delete the instance altogether). With the magical -h you’ll soon see the state of the instance go through stopping to stopped – without it, the instance just hangs there running, but not really reachable.

Watch those volumes

When you shut down your EC2 instances they stop consuming “instance-hours”. On the other hand, if you spin up 100 t2.micro instances and run them for an hour, you’ll spend 100 hours of your 750-hour monthly limit. It’s easy to understand this way of “spending”.

However, volumes (the disk space for your EC2 instances) work a bit differently. They are reserved for you and they are billed for the whole time you have them available – whether the instance runs or not. Also, how much of the space you really use is NOT important; your reserved space (typically 8 GiB for a t2.micro instance if you use the defaults) is what counts. Two sleeping instances for the whole month would not hit the limit, but three would – 3 × 8 GiB is 24 GiB, which is 4 GiB above the 20 GiB/month of free tier storage, and that excess would be billed to you (prorated by how long you stay above the limit as well).

In any case, the Billing Management Console is your friend here, and AWS definitely provides you with all the necessary data to see where you are with your usage.

Back to Docker

I wanted to play with Docker before I decided to couple it with the cloud exploration. AWS provides the so-called EC2 Container Service (ECS) to give you more power when managing containers, but today we will not go there. We will create a Docker image manually, right on our EC2 instance. I’d rather take baby steps than skip some “maturity levels” without understanding the basics.

When I want to “deploy” a Java application in a container, I want to create some Java base image for it first. So let’s connect to our EC2 instance and do it.

Java 32-bit base image

Let’s create our base image for Java applications first. Create a dir (any name will do, but something like java-base sounds reasonable) and this Dockerfile in it:

FROM ubuntu:14.04
MAINTAINER virgo47

# Refresh the package lists first (the base image ships without them)
RUN apt-get -qqy update

# We want WGET in any case
RUN apt-get -qqy install wget

# For 32-bit Java we need to enable 32-bit binaries
RUN dpkg --add-architecture i386
RUN apt-get -qqy update
RUN apt-get -qqy install libc6:i386 libncurses5:i386 libstdc++6:i386

ENV HOME /root

# Install 32-bit JAVA
WORKDIR $HOME
RUN wget -q --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u60-b27/jdk-8u60-linux-i586.tar.gz
RUN tar xzf jdk-8u60-linux-i586.tar.gz
ENV JAVA_HOME $HOME/jdk1.8.0_60
ENV PATH $JAVA_HOME/bin:$PATH

Then to build it (you must be in the directory with the Dockerfile):

$ docker build -t virgo47/jaba .

Jaba stands for “java base”. And to test it:

$ docker run -ti virgo47/jaba
root@46d1b8156c7c:~# java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) Client VM (build 25.60-b23, mixed mode)
root@46d1b8156c7c:~# exit

My application image

Now I want to run my HelloWorld application on top of that base image. That means creating another image based on virgo47/jaba. Create another directory (myapp) and the following Dockerfile:

FROM virgo47/jaba
MAINTAINER virgo47

WORKDIR /root/
COPY HelloWorld.java ./
RUN javac HelloWorld.java
CMD java HelloWorld

Easy enough, but before we can build it we need that HelloWorld.java too. I guess anybody can write it, but for the sake of completeness:

public class HelloWorld {
        public static void main(String... args) {
                System.out.println("Hello, world!");
        }
}

Now let’s build it:

$ docker build -t virgo47/myapp .

And to test it:

$ docker run -ti virgo47/myapp
Hello, world!

So it actually works! But we should probably deliver a JAR file directly into the image build rather than compiling the sources during the build. Can we automate it? Sure we can, but maybe in another post.

To wrap up…

I hope I’ll get to Amazon’s ECS later, because while the things above work and are decent Docker(file) practice, they definitely are not real-world material. You may at least run it all from your local machine as a combination of scp/ssh, instead of creating Dockerfiles and other sources on the remote machine – because that, of course, doesn’t make sense. We need to build the Docker image as part of our build process, publish it somewhere and just download it to the target environment. But let’s get away from Docker and back to AWS.

In the meantime one big AWS event occurred – AWS re:Invent 2015. I have to admit I wasn’t aware of it at all until now; I just got email notifications about the event and the keynotes as an AWS user. I am aware of other conferences – I was happy enough to attend some European Sun TechDays (how I miss those :-)), TheServerSide Java Symposiums (miss those too) and one DEVOXX – but just judging from the videos, re:Invent was really mind-blowing.

I don’t know what more to say, so I’m over and out for now. It will probably take me another couple of weeks to get more concrete impressions of AWS, but I plan to add a third part – hopefully again loosely coupled with Docker.

The pain with sourcecode in WordPress

I don’t know why, but I always felt better preparing my blog posts somewhere else (Google Docs in my case) and then just pasting them into wordpress.com. I wanted some backup – though there is a WordPress feature for that. But the most important factor was their editor. It was able to mess up the post irreversibly (when undo doesn’t help), so I simply refused to use it in the first place. Google has versioning, reliable undo, frequent auto-saves, etc.

The biggest issue was always related to posting source code examples. In the “good old” WordPress editor I had a workflow that worked well enough when I was pasting code from Google Docs – I had to go through it and fix some details, but it was usable.

And then their new editor (Beep beep boop) came. What was easy before became nearly impossible. Not to be all negative: normal formatted text can be easily copied/pasted from Google Docs to WordPress, so I don’t have to care about hyperlinks and formatting anymore. But the source code – which had its issues before – became virtually “unpastable”. At least not directly.

With the old editor I often just started in HTML view, pasted the plain text there and formatted headers and bolds and links in Visual mode. Annoying, but working. But we will not discuss this scenario, because it requires a lot of work after the paste. We will focus on the Visual mode of the new editor – on how unreliable it is when you want to just copy/paste something into it. After all, I expected more to work with the new editor – but it didn’t deliver.

Basic workflow

It’s pretty easy. I open the WordPress editor, paste the post title and then paste the whole post in Visual mode. At the bottom of it I add my tags for the post. Tags are just another new stupid issue with the new editor, but they don’t kill as much time as source code does. I use sources a lot in my posts; it’s my line of work after all.

So after the whole post is there, it’s time to go over all the sources and fix them one by one. Well, let’s see what happens when you paste code from Google Docs directly:

[animation: copy-paste-source-from-googledoc]

You can paste with Ctrl+V or Ctrl+Shift+V – but the latter is not useful when you need formatting in your post, and you always do. Ctrl+Shift+V would be useful for fixing the code blocks one by one, but it doesn’t work when you paste directly from Google Docs anyway. It just screws them up in a different manner.

Spoiler: You can copy/paste directly in HTML mode. But we will stay in Visual mode for a bit longer and introduce more of its annoyances.

Copy/paste inside the post breaks it too!

OK, this really got me. Maybe we can have some understanding when an editor doesn’t accept stuff pasted from a different product – but this one cannot even accept its own stuff. I wanted to rewrite the tags around the code in order to get syntax highlighting. Let’s see how that went:

[animation: copy-paste-demages-sources]

Which leads us to another problem – undo. It completely breaks sources that were OK before. And not just the nearby one, but others too:

[animation: undo-damages-other-sources]

This renders undo an extremely unreliable feature. There is no way to undo this mess after it happens. You have to fix it manually or reload the page to get the last working draft (if you made changes since then – your fault). The editor will tell you about a newer version and ask you to restore it, which you – of course – don’t want now. To get rid of this, click Save Draft (after the reload). Trash probably removes the whole post, I guess, so there is no way to discard just the current changes.

Any way to copy stuff?

As I mentioned, you can try it in HTML mode, if that leads to success – and in this case it seems to be a working solution. But what about Visual mode? If you copy/paste things a lot – funny that I do, because I fight copy/paste coding mercilessly – you might have noticed that the source program often affects how things are pasted. In this case it results in a mess whether you use Ctrl+V or the same with Shift added (which normally works as paste-without-formatting; not sure what it’s supposed to do in the Beep-beep-poo editor though).

But what if we drag the text through something that makes it plain text? So I tried to paste it into a notepad first (Notepad2 here, but it doesn’t make a difference) and then copy it from there:

[animation: paste-via-notepad]

A bit longer “video”, and again it is not without a twist. We have to Ctrl+Shift+V from the plain text editor, otherwise the pasted text contains non-breaking spaces… which may be harmless (haven’t tried), but I simply don’t want them there. I expect some “canonical” representation of what I work on in my editor, not low-level HTML stuff.

Undo misbehaviour is documented again – after a simple paste only the pasted code is messed up; after Undo, other code blocks are affected too.

Tags

This again seems like a win of Web 2.0 over reason. Before, I could just copy/paste all my tags separated with commas – at once. Not anymore. If you do so, they all form one giant tag. You simply have to write (or paste) them one by one. While writing, a comma works as a separator and instantly turns the word into a tag bubble. But goodbye to “batch” copy/paste.

Final words

The biggest trouble is how brittle it all is. How easy it is to damage all the nicely formatted source code with a single Undo. Would you seriously consider writing in such an editor? I merely tip-toe around it: paste my post, fix the mess and Publish as soon as I can. It is possible to edit simple typos later. But anything beyond that…? Well, my trust is not there.

If WordPress developers ever change the editor again, I hope they will go for usability that actually includes copy/paste as well. I don’t know, maybe I’m just a weirdo, “lucky” enough to find all these corner cases (it’s not only WordPress after all) – maybe I’d make a great tester. But I seriously don’t think I’m pushing anything to its limits.

After all – I just copy/pasted in this case! :-)

BTW: My previous post was copy/pasted into the Visual editor to preserve all the formatting and links, and then I switched to HTML mode where I easily pasted all the code (one block at a time). That seems to be my workflow of choice for now. The Visual editor and code simply don’t go together.

Expression evaluation in Java (3)

In the first part of the series we evaluated expressions to get a result; in the second part we parsed the expressions with our own grammar. Today we will expand on the topic – we will introduce more types and more operations. Variables are left for the next installment, as this part will be long enough without them anyway.

As always – the complete example is available on GitHub and its structure is pretty similar to the first simple version of our ANTLR-based expression evaluator. And what will we do today?

  • We will expand our numbers from integers to floating point (implemented with BigDecimals).
  • We will add string and boolean types too, along with relational operators (comparisons).
  • Relational operators can be written as >= but also as GE, which is extremely handy when expressions are part of an XML configuration. In addition, keywords will be case insensitive.

We will introduce the grammar changes from the end of the file to the beginning – you can download our new grammar in its entirety.

Case-insensitive keywords

It would be cool if there were some easy way to tell ANTLR that a particular keyword (a string literal in the grammar) is case insensitive, as there are many languages that work this way. But there is no magical switch, so we have to help ourselves with tiny pieces of token rules called fragments. The following part goes at the very end of the grammar file.

fragment DIGIT : [0-9];

fragment A : [aA];
fragment B : [bB];
fragment C : [cC];
fragment D : [dD];
fragment E : [eE];
fragment F : [fF];
fragment G : [gG];
fragment H : [hH];
fragment I : [iI];
fragment J : [jJ];
fragment K : [kK];
fragment L : [lL];
fragment M : [mM];
fragment N : [nN];
fragment O : [oO];
fragment P : [pP];
fragment Q : [qQ];
fragment R : [rR];
fragment S : [sS];
fragment T : [tT];
fragment U : [uU];
fragment V : [vV];
fragment W : [wW];
fragment X : [xX];
fragment Y : [yY];
fragment Z : [zZ];

Now when we want a case-insensitive keyword, we write O R instead of 'or', like this:

OP_OR: O R | '||';

But we will get to operators later, let’s do the literals and company first.

Literals and whitespaces

We will support boolean literals both as a word and as a single letter (blame my recent experience with the R language). Number literals must cover floating point numbers too. And finally, there are strings here as well. Whitespace handling is without any change at all:

BOOLEAN_LITERAL: T R U E | T
    | F A L S E | F
    ;

NUMERIC_LITERAL : DIGIT+ ( '.' DIGIT* )? ( E [-+]? DIGIT+ )?
    | '.' DIGIT+ ( E [-+]? DIGIT+ )?
    ;

STRING_LITERAL : '\'' ( ~'\'' | '\'\'' )* '\'';

WS: [ \t\r\n]+ -> skip;

Operators

Nothing special here, we just add more of them.

OP_LT: L T | '<';
OP_GT: G T | '>';
OP_LE: L E | '<=';
OP_GE: G E | '>=';
OP_EQ: E Q | '=' '='?;
OP_NE: N E | N E Q | '!=' | '<>';
OP_AND: A N D | '&&';
OP_OR: O R | '||';
OP_NOT: N O T | '!';
OP_ADD: '+';
OP_SUB: '-';
OP_MUL: '*';
OP_DIV: '/';
OP_MOD: '%';

Equality can be written either as in SQL or as in Java, because there is no assignment statement in our grammar. All relational and logical operators have both the symbols and the keywords. If you miss XOR, you can add it yourself, of course.
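So, in the test style used later in this post, both spellings should evaluate the same (a sketch of what we are aiming for):

assertEquals(expr("5 >= 4 && 3 != 2"), true);
assertEquals(expr("5 ge 4 and 3 ne 2"), true);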

And the expression rule…

Finally we got to the expression rule, which got a bit richer:

result: expr;

expr: STRING_LITERAL # stringLiteral
    | BOOLEAN_LITERAL # booleanLiteral
    | NUMERIC_LITERAL # numericLiteral
    | op=('-' | '+') expr # unarySign
    | expr op=(OP_MUL | OP_DIV | OP_MOD) expr # arithmeticOp
    | expr op=(OP_ADD | OP_SUB) expr # arithmeticOp
    | expr op=(OP_LT | OP_GT | OP_EQ | OP_NE | OP_LE | OP_GE) expr # comparisonOp
    | OP_NOT expr # logicNot
    | expr op=(OP_AND | OP_OR) expr # logicOp
    | '(' expr ')' # parens
    ;

You may notice one additional rule – result. We will use it for a single special reason covered later. Now is the time to generate the classes from the grammar and then reimplement the visitor class.

BTW: Our new version of the calculator visitor will not extend ExprBaseVisitor<Integer> with the (Integer) type parameter anymore, as the result may be of any supported type – String, BigDecimal or Boolean. And we don’t know up front what the result will be, so we just don’t state the type parameter at all.

Implementing literals

Before we implement anything, we should have some expectations – and that means tests. My test class is not a perfect one-case-per-test affair, but it does the job.

    @Test
    public void booleanLiterals() {
        assertEquals(expr("t"), true);
        assertEquals(expr("True"), true);
        assertEquals(expr("f"), false);
        assertEquals(expr("faLSE"), false);
    }

    @Test
    public void stringLiteral() {
        assertEquals(expr("''"), "");
        assertEquals(expr("''''"), "'");
        assertEquals(expr("'something'"), "something");
    }

    @Test
    public void numberLiterals() {
        assertEquals(expr("5"), new BigDecimal("5"));
        assertEquals(expr("10.35"), new BigDecimal("10.35"));
    }

The expr method in the test is still implemented as before. Let’s focus on the visitor implementation now. The parts we need for this test to pass are here:

    @Override
    public String visitStringLiteral(StringLiteralContext ctx) {
        String text = ctx.STRING_LITERAL().getText();
        text = text.substring(1, text.length() - 1)
            .replaceAll("''", "'");
        return text;
    }

    @Override
    public Boolean visitBooleanLiteral(BooleanLiteralContext ctx) {
        return ctx.BOOLEAN_LITERAL().getText().toLowerCase().charAt(0) == 't';
    }

    @Override
    public BigDecimal visitNumericLiteral(NumericLiteralContext ctx) {
        String text = ctx.NUMERIC_LITERAL().getText();
        return stringToNumber(text);
    }

    private BigDecimal stringToNumber(String text) {
        BigDecimal bigDecimal = new BigDecimal(text);

        return bigDecimal.scale() < 0
            ? bigDecimal.setScale(0, roundingMode)
            : bigDecimal;
    }
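A small note on the scale() < 0 branch: a literal like 2E+3 parses into a BigDecimal with scale -3, and setScale(0, roundingMode) just normalizes it to plain 2000 – no actual rounding happens there.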

Arithmetic operations

Now we will implement arithmetic with BigDecimals, including unary signs and parentheses.

    @Test
    public void arithmeticTest() {
        assertEquals(expr("5+5.1"), new BigDecimal("10.1"));
        assertEquals(expr("5-5.1"), new BigDecimal("-0.1"));
        assertEquals(expr("0.3*0.1"), new BigDecimal("0.03"));
        assertEquals(expr("0.33/0.1"), new BigDecimal("3.3"));
        assertEquals(expr("1/3"), new BigDecimal("0.333333333333333"));
        assertEquals(expr("10%3"), new BigDecimal("1"));
    }

    @Test
    public void unarySignTest() {
        assertEquals(expr("-5"), new BigDecimal("-5"));
        assertEquals(expr("-+-5"), new BigDecimal("5"));
        assertEquals(expr("-(3+5)"), new BigDecimal("-8"));
    }

The implementation in the visitor respects our defined maximal scale, which is very important for cases like 1/3. Without a scale and rounding mode set, such a division would throw an ArithmeticException (non-terminating decimal expansion).

    public static final int DEFAULT_MAX_SCALE = 15;
    private int maxScale = DEFAULT_MAX_SCALE;
    private int roundingMode = BigDecimal.ROUND_HALF_UP;

    @Override
    public BigDecimal visitArithmeticOp(ExprParser.ArithmeticOpContext ctx) {
        BigDecimal left = (BigDecimal) visit(ctx.expr(0));
        BigDecimal right = (BigDecimal) visit(ctx.expr(1));
        return bigDecimalArithmetic(ctx, left, right);
    }

    private BigDecimal bigDecimalArithmetic(ExprParser.ArithmeticOpContext ctx, BigDecimal left, BigDecimal right) {
        switch (ctx.op.getType()) {
            case OP_ADD:
                return left.add(right);
            case OP_SUB:
                return left.subtract(right);
            case OP_MUL:
                return left.multiply(right);
            case OP_DIV:
                return left.divide(right, maxScale, roundingMode).stripTrailingZeros();
            case OP_MOD:
                return left.remainder(right);
            default:
                throw new IllegalStateException("Unknown operator " + ctx.op);
        }
    }

    @Override
    public BigDecimal visitUnarySign(UnarySignContext ctx) {
        BigDecimal result = (BigDecimal) visit(ctx.expr());
        boolean unaryMinus = ctx.op.getText().equals("-");
        return unaryMinus
            ? result.negate()
            : result;
    }

    @Override
    public Object visitParens(ParensContext ctx) {
        return visit(ctx.expr());
    }

We may appreciate scale 15 for interim results, but maybe we want a lower scale for the final result. Let’s do that now.

Result rule with different scale

Let’s say we require just a scale of 6 for final results. We will implement the result rule for that:

    public static final int DEFAULT_MAX_RESULT_SCALE = 6;
    private int maxResultScale = DEFAULT_MAX_RESULT_SCALE;

    /** Maximum BigDecimal scale used during computations. */
    public ExpressionCalculatorVisitor maxScale(int maxScale) {
        this.maxScale = maxScale;
        return this;
    }

    /** Maximum BigDecimal scale for result. */
    public ExpressionCalculatorVisitor maxResultScale(int maxResultScale) {
        this.maxResultScale = maxResultScale;
        return this;
    }

    @Override
    public Object visitResult(ExprParser.ResultContext ctx) {
        Object result = visit(ctx.expr());
        if (result instanceof BigDecimal) {
            BigDecimal bdResult = (BigDecimal) result;
            if (bdResult.scale() > maxResultScale) {
                result = bdResult.setScale(maxResultScale, roundingMode);
            }
        }
        return result;
    }

We also introduced two methods setting the maximal interim and result scales. Both of them return this, so they can be chained right after the constructor of the ExpressionCalculatorVisitor. One last change is required in ExpressionUtils – instead of calling the expr rule on the parser we need to call the result rule. Otherwise the util class looks exactly like before.
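Putting it together, usage might look like this (a sketch, assuming the no-arg constructor of this iteration’s visitor):

Object result = new ExpressionCalculatorVisitor()
    .maxScale(15)
    .maxResultScale(6)
    .visit(ExpressionUtils.createParseTree("1/3"));
// result is the BigDecimal 0.333333 – six decimal places instead of 15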

Of course we have to fix the test – for 1/3 we now expect just 0.333333 instead of 15 digits like before.

Logical operations

After the previous problems this is a piece of cake. First the test:

    @Test
    public void logicalOperatorTest() {
        assertEquals(expr("F && F"), false);
        assertEquals(expr("F && T"), false);
        assertEquals(expr("T and F"), false);
        assertEquals(expr("T AND T"), true);
        assertEquals(expr("F || F"), false);
        assertEquals(expr("F || T"), true);
        assertEquals(expr("T or F"), true);
        assertEquals(expr("T OR T"), true);
        assertEquals(expr("!T"), false);
        assertEquals(expr("not T"), false);
        assertEquals(expr("!f"), true);
    }

And the visitor implementation:

    @Override
    public Boolean visitLogicOp(ExprParser.LogicOpContext ctx) {
        boolean left = (boolean) visit(ctx.expr(0));

        switch (ctx.op.getType()) {
            case OP_AND:
                return left && booleanRightSide(ctx);
            case OP_OR:
                return left || booleanRightSide(ctx);
            default:
                throw new IllegalStateException("Unknown operator " + ctx.op);
        }
    }

    private boolean booleanRightSide(ExprParser.LogicOpContext ctx) {
        return (boolean) visit(ctx.expr(1));
    }

    @Override
    public Boolean visitLogicNot(ExprParser.LogicNotContext ctx) {
        return !(Boolean) visit(ctx.expr());
    }

Relational operators

Let’s do some comparisons and (non-)equalities. Test first:

    @Test
    public void relationalOperatorTest() {
        assertEquals(expr("1 > 0.5"), true);
        assertEquals(expr("1 > 1"), false);
        assertEquals(expr("1 >= 0.5"), true);
        assertEquals(expr("1 >= 1"), true);
        assertEquals(expr("5 == 5"), true);
        assertEquals(expr("5 != 5"), false);
        assertEquals(expr("'a' > 'b'"), false);
        assertEquals(expr("'a' >= 'b'"), false);
        assertEquals(expr("'a' < 'b'"), true);
        assertEquals(expr("'a' <= 'b'"), true);
        assertEquals(expr("true == true"), true);
        assertEquals(expr("true == f"), false);
        assertEquals(expr("true eq t"), true);
    }

Again, I realize these should be many separate tests and that the coverage of cases is not that great either, but let’s move on to the implementation:

    @Override
    public Boolean visitComparisonOp(ExprParser.ComparisonOpContext ctx) {
        Comparable left = (Comparable) visit(ctx.expr(0));
        Comparable right = (Comparable) visit(ctx.expr(1));
        int operator = ctx.op.getType();
        // nulls are equal only to each other and not equal to anything else
        if (left == null || right == null) {
            return left == null && right == null && operator == OP_EQ
                || (left != null || right != null) && operator == OP_NE;
        }

        //noinspection unchecked
        int comp = left.compareTo(right);
        switch (operator) {
            case OP_EQ:
                return comp == 0;
            case OP_NE:
                return comp != 0;
            case OP_GT:
                return comp > 0;
            case OP_LT:
                return comp < 0;
            case OP_GE:
                return comp >= 0;
            case OP_LE:
                return comp <= 0;
            default:
                throw new IllegalStateException("Unknown operator " + ctx.op);
        }
    }

Nothing special here – we utilize the fact that Strings, Booleans and BigDecimals are all comparable.
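One consequence worth noting: because we use compareTo and not equals, BigDecimal comparison ignores scale – so in our language 1.0 == 1 evaluates to true. In plain Java:

    new BigDecimal("1.0").equals(new BigDecimal("1"));    // false – equals is scale-sensitive
    new BigDecimal("1.0").compareTo(new BigDecimal("1")); // 0 – compareTo treats them as equal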

Conclusion

What did we see and what more can we do? Because of the sheer scope of the changes we postponed variables to the next part of the series. Of course, variables (and functions!) are very important for expression evaluation – otherwise we’re talking about a constant result for each expression anyway, right?

We have created a flexible grammar that is kinda a mix between SQL and Java. Of course, you may not like having so many alternatives for all the operators, but in our context it doesn’t pose any problem. It might have in the context of a general programming language (e.g. = vs == operators).

Another thing is that numbers are BigDecimals – always. This is kinda Groovy-like, where 3 / 2 is not 1 as a Java programmer would expect, and this division is also much slower than integer division. We may introduce integers and support smart division based on the context (like Java does after all, except that its floating point is IEEE 754 and not BigDecimal-based). But not now.
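A quick illustration in plain Java (not from the project, just a demonstration of the difference):

    System.out.println(3 / 2);                   // 1 – integer division
    System.out.println(new BigDecimal(3)
        .divide(new BigDecimal(2)));             // 1.5 – exact BigDecimal division
    // non-terminating expansions like 1/3 throw ArithmeticException unless
    // you provide an explicit scale and rounding – which is why our visitor
    // divides with maxScale and roundingMode:
    System.out.println(new BigDecimal(1)
        .divide(new BigDecimal(3), 6, RoundingMode.HALF_UP)); // 0.333333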

Now you can download the complete example – and if you don’t know how to download part of the repo, you may try this command:

svn export https://github.com/virgo47/litterbin/trunk/demos/antlr-expr/src/main/java/expr2/

Tests are in src/test, of course. See you next time with variables and maybe even functions.

Exploring the cloud with AWS Free Tier (1)

This will not be a typical blog post, but rather a kind of diary. Of course, I’ll try not to write complete nonsense, but as it will be a rather exploratory effort, it may happen. Sorry. My interest in the cloud is personal as well as professional. I may use it in my line of work, but I wanted to “touch” the cloud myself to get a feeling for it. Reading is cool, but it’s just not enough. Everybody is talking about it, I know I’m way behind – but lo, here I am, about to experiment with the cloud as a curious developer.

My personal needs

My goals? I want to run some Docker containers in the cloud, and maybe connect them somehow – so it will be about learning Docker better as well. Last year was very fast-moving in that field (it was only the second year for Docker after all!) and there’s a lot I’ve missed since I covered the basics of the technology in the summer of 2014.

And I also wanted to know what it is like to manage my own cloud – even if it is small, preferably free. I didn’t check many players. It is difficult to orient yourself among them and I’m not going to enter my credit card details at some company unknown to me. Talking about Docker, I needed either some direct container hosting (not sure if/how that is provided), or IaaS – that is, some Linux machine where I’d run Docker.

Finding my provider

Originally I wanted to test Google’s cloud, but that one was not available for personal use – which surprised me a bit. I knew Rackspace, but wasn’t able to find a reasonable offer – something under 10 bucks a month. So I tried Amazon’s AWS – I like Amazon after all. :-) You may also check this article – 10 IaaS providers who provide free cloud resources.

What attracted me to AWS the most is their Free Tier offering that can be used for a whole year! I don’t need much power, I don’t need excessive connectivity, I just need some quiet developer’s machine. Also, one year is an incredible span – many times I start some trial and don’t get enough “real CPU/brain time” to play with it because of other obligations. But a year?! I’ll definitely be able to use AWS at least a bit during that time. A longer span also gives you a better perspective.

Finally, AWS has a lot of documentation, instructions, videos, help pages, etc. I also quickly got the feeling that they are trying hard to give you all the tools to know how much you’d pay (which is probably a feature of any good cloud) – even if you are using the Free Tier. We’ll get to that later.

First steps with AWS

So I registered at AWS – it took a couple of straightforward steps and one call they made to let me enter some PIN. I chose Basic support (no change to my zero price), confirmed here, confirmed there – and suddenly I was introduced to my AWS Console with tons of icons.

What next? Let’s stick to the plan and Google something about Docker and AWS. Actually, I did this before I even started with AWS, of course. This page lays out all the necessary steps. So we need some EC2 Linux instance – t2.micro with the Amazon Linux AMI is available in the Free Tier. Let’s launch it! Or, because I’m the manual kind of guy (kinda rare, reportedly), let’s check how to do it or see the video. It doesn’t seem that difficult after all.

I chose the region closest to me in the AWS Console (Frankfurt, or eu-central) and created my instance. I clicked through all the Next buttons just to walk and read through the setup, but I didn’t change anything. In the course of this I created my key pair, downloaded the private key and – my instance was ready! Even if you don’t name it right away, it’s easy to do it later in the table of instances when you click into the cell in the Name column:

[Image: aws-ec2-instances – the instances table with the Name column]

In general – I have to say I’m very satisfied with the overall look-and-feel of the AWS web. It provides tons of options and information, but so far I’ve always found what I wanted. You either look around the screen and see what you want, or click through the top menu (e.g. Billing & Cost Management is under your login name on the right) – or Google it really quickly.

Let’s connect to my Linux box!

If you know how to use a private key with SSH, this will be extra easy for you. You just need to know which user to use for the login. Again – I googled – and you can find either this article from their Getting Started section, or another article from the Instance Lifecycle section.
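For the Amazon Linux AMI the user is ec2-user, so the connection looks roughly like this (the key file name and the hostname below are just placeholders for your own):

    ssh -i my-aws-key.pem ec2-user@ec2-XX-XX-XX-XX.eu-central-1.compute.amazonaws.com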

Whatever method or client you use to SSH into your instance (I used the command-line ssh that comes with git bash), you’ll find yourself in an ever-familiar CLI. And because we wanted to play with Docker, we – of course – follow another help page, this time called Docker Basics.
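From what I remember, the setup on Amazon Linux boils down to roughly the following (a sketch from memory – double-check the linked guide):

    sudo yum update -y
    sudo yum install -y docker
    sudo service docker start
    # let ec2-user run docker without sudo (takes effect after re-login)
    sudo usermod -a -G docker ec2-user
    docker info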

Well, the docker info command has run, everything seems OK – we will continue some other time, when the night is younger.

Next morning first impressions…

I mentioned already that you have full control over your spending and the AWS guys really offer you all the information needed. For example, when I woke up today, after setting up and launching my EC2 t2.micro instance late last night, I saw this report in my Billing & Cost Management Dashboard:

[Image: aws-free-tier-top-services – Free Tier usage by service]

Sure, I don’t know exactly how to read it yet – or how exactly it’s calculated, because the instance must have been running for more than 5 hours. But at least I know where I am with my “spending”.

…and evening observations

I definitely should read up on how those instances are billed – and when. When I logged out of the instance I saw no change in the report (not even after hours), but when I logged in later, it had suddenly jumped by a couple of hours. This is probably about when the billing update is triggered. Talking about a couple of hours, this is no problem at all.

Another thing to remember is that when you restart the instance, it takes an instance-hour right away – that is, every started hour is billed. We’re still talking about prices like $0.0something for a t2.micro hour, so again, not a real deal-breaker. But with the 750-hour limit of the Free Tier, one should not try to shutdown/restart the instance 750 times in a day. :-)
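If I read the Free Tier terms correctly, those 750 hours are per month – and since even the longest month has only 24 × 31 = 744 hours, a single t2.micro running non-stop fits within the limit with a few hours to spare.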

The pricing information is detailed but clear and it also plainly states: “Pricing is per instance-hour consumed for each instance, from the time an instance is launched until it is terminated or stopped. Each partial instance-hour consumed will be billed as a full hour.”

What next?

One day is not enough time to draw any conclusions, but so far I’m extremely satisfied with AWS. Next we will try to deploy some Docker images to it.
