AWS did it again, this time with Lightsail

I still remember how the AWS (Amazon Web Services) re:Invent 2015 conference impressed me. AWS is extremely successful, but they keep pushing innovations at us. I had played with their smallest EC2 instances a couple of times, considering whether or not I wanted my own server.

This year at re:Invent 2016, happening this very week, Andy Jassy, CEO of Amazon Web Services, announced many new services in his keynote. One that resonated with me immediately was Lightsail. It is an extremely affordable server with plenty to offer even in its weakest configuration. For $5 per month you get 512 MB of memory, 1 vCPU, a 20 GB SSD and 1 TB of data transfer. See also this blog post for more information.

With such a reasonable base price I decided to try it – by the way, the first month is free! My plans were nothing big; I actually just wanted to move my homepage there. But you still get the flexibility of a Linux box, ready for you anytime you fancy.

I spun up my Lightsail instance in no time, chose Amazon Linux just for a change (considering my Ubuntu preference), and with Google’s help I got nginx and PHP 7 up and running very quickly indeed. I used the in-browser SSH, but because it’s not quite the console I like (though close – even Shift+PgUp/PgDn works as expected) I wanted to put my public key into ~/.ssh/authorized_keys. I didn’t know how to copy/paste it from my computer, but when you press Ctrl+Alt+Shift in the web SSH it opens a sidebar where you can paste anything into its clipboard – and a right-click will do the rest.
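For the record, once the key is pasted into the session, putting it in place on the server is just a couple of commands (a sketch – the key string below is obviously a placeholder for your real public key):

```shell
# create ~/.ssh with the permissions sshd insists on, then append the key
ssh_dir="$HOME/.ssh"
mkdir -p "$ssh_dir" && chmod 700 "$ssh_dir"
echo "ssh-rsa AAAA...placeholder... you@your-machine" >> "$ssh_dir/authorized_keys"
chmod 600 "$ssh_dir/authorized_keys"
```

After that, plain ssh with your private key works from home – no web console needed.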

I really liked this experience, I like the price as well, and the in-browser SSH comes in very handy when you are at a place where port 22 is blocked for whatever reason. (I’m sure it has something to do with compliance, but don’t ask me to understand it.) I’m definitely keeping my new pet server, although I know cattle are more common than pets now. Especially in the cloud.

Exploring the cloud with AWS Free Tier (2)

In the first part of this “diary” I found a cloud provider for my developer’s testing needs – Amazon’s AWS. This time we will go through some hiccups you may encounter when performing basic operations on an EC2 instance. Finally, we will prepare a Docker image for ourselves, although this part is not really AWS specific – at least not in our basic scenario.

Shut it down!

When you shut down your desktop computer, you see what it does. I’ve been running Windows for some years now, although I was a Linux guy before (blame gaming and home music recording). On servers, no doubt, I prefer Linux every time. But I honestly don’t remember what happens if I enter the shutdown now command without further options.

If I see the computer going on and on although my OS is down already, I just turn it off and remember to use the -h switch the next time. But when “my computer” runs far away and only some dashboard shows what is happening, you simply don’t know for sure. There is no room for “mechanical sympathy”.

Long story short – always use shutdown -h now on your EC2 instance if you really want to stop it. Of course, check the instance’s Shutdown Behavior setting – by default it’s Stop, and that’s probably what you want (Terminate would delete the instance altogether). With the magical -h you’ll soon see the state of the instance go through stopping to stopped – without it, the instance just hangs there running, but not really reachable.

Watch those volumes

When you shut down your EC2 instances, they stop consuming “instance-hours”. On the other hand, if you spin up 100 t2.micro instances and run them for an hour, you’ll spend 100 hours of your 750-hour monthly limit. This way of “spending” is easy to understand.

However, volumes (disk space for your EC2 instances) work a bit differently. They are reserved for you and they are billed for all the time you have them available – whether the instance runs or not. Also, how much of the space you really use is NOT important; your reserved space (typically 8 GiB for a t2.micro instance if you use the defaults) is what counts. Two sleeping instances for the whole month would not hit the limit, but three would – and the 4 GiB above the 20 GiB/month limit would be billed to you (depending on how long you stay above the limit as well).
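A quick back-of-the-envelope check with those numbers (8 GiB per default t2.micro volume against the 20 GiB/month limit – just the figures from this post, check your own Free Tier terms):

```shell
# each default t2.micro root volume reserves 8 GiB; the limit is 20 GiB/month
limit_gib=20
volume_gib=8
for instances in 1 2 3; do
  total=$((instances * volume_gib))
  over=$((total > limit_gib ? total - limit_gib : 0))
  echo "$instances instance(s): $total GiB reserved, $over GiB over the limit"
done
```

The third instance tips the reserved total to 24 GiB, i.e. 4 GiB over the limit – running or not.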

In any case, the Billing Management Console is your friend here, and AWS definitely provides you with all the necessary data to see where you are with your usage.

Back to Docker

I wanted to play with Docker even before I decided to couple it with this cloud exploration. AWS provides the so-called EC2 Container Service (ECS) to give you more power when managing containers, but today we will not go there. We will create a Docker image manually, right on our EC2 instance. I’d rather take baby steps than skip some “maturity levels” without understanding the basics.

When I want to “deploy” a Java application in a container, I first want to create some Java base image for it. So let’s connect to our EC2 instance and do it.

Java 32-bit base image

Let’s create our base image for Java applications first. Create a dir (any name will do, but something like java-base sounds reasonable) and this Dockerfile in it:

FROM ubuntu:14.04
MAINTAINER virgo47

# We want wget in any case; the Ubuntu base image ships with empty
# package lists, so refresh them before the first install
RUN apt-get -qqy update
RUN apt-get -qqy install wget

# For 32-bit Java we need to enable 32-bit binaries
RUN dpkg --add-architecture i386
RUN apt-get -qqy update
RUN apt-get -qqy install libc6:i386 libncurses5:i386 libstdc++6:i386

ENV HOME /root

# Install 32-bit Java (the cookie header accepts Oracle's license for us)
WORKDIR $HOME
RUN wget -q --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u60-b27/jdk-8u60-linux-i586.tar.gz
RUN tar xzf jdk-8u60-linux-i586.tar.gz
ENV JAVA_HOME $HOME/jdk1.8.0_60
ENV PATH $JAVA_HOME/bin:$PATH

Then to build it (you must be in the directory with the Dockerfile):

$ docker build -t virgo47/jaba .

Jaba stands for “java base”. And to test it:

$ docker run -ti virgo47/jaba
root@46d1b8156c7c:~# java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) Client VM (build 25.60-b23, mixed mode)
root@46d1b8156c7c:~# exit

My application image

Now I want to run my HelloWorld application on top of that base image. That means creating another image based on virgo47/jaba. Create another directory (myapp) and the following Dockerfile:

FROM virgo47/jaba
MAINTAINER virgo47

WORKDIR /root/
COPY HelloWorld.java ./
RUN javac HelloWorld.java
CMD java HelloWorld

Easy enough, but before we can build it we need that HelloWorld.java too. I guess anybody can do it, but for the sake of completeness:

public class HelloWorld {
        public static void main(String... args) {
                System.out.println("Hello, world!");
        }
}

Now let’s build it:

$ docker build -t virgo47/myapp .

And to test it:

$ docker run -ti virgo47/myapp
Hello, world!

So it actually works! But we should probably deliver a prebuilt JAR file into the image instead of compiling the sources during the build. Can we automate it? Sure we can, but maybe in another post.
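Just as a hedged sketch of where that would go – assuming our normal build (Maven, Gradle…) produced a runnable myapp.jar for us (the file name and the runnable-JAR setup are hypothetical), the Dockerfile could shrink to something like this:

```dockerfile
FROM virgo47/jaba
MAINTAINER virgo47

WORKDIR /root/
# myapp.jar is built outside the image and only copied in -
# no compilation happens during docker build
COPY myapp.jar ./
CMD java -jar myapp.jar
```

The image then stays a pure packaging step, which is what we want from a build pipeline anyway.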

To wrap up…

I hope I’ll get to Amazon’s ECS later, because while the things above work and are decent Docker(file) practice, they definitely are not for the real world. You may at least run it all from your local machine as a combination of scp/ssh instead of creating Dockerfiles and other sources on the remote machine – because that doesn’t make sense, of course. We should build the Docker image as part of our build process, publish it somewhere and just download it to the target environment. But let’s get away from Docker and back to AWS.

In the meantime one big AWS event occurred – AWS re:Invent 2015. I have to admit I wasn’t aware of it at all until now; I just got email notifications about the event and the keynotes as an AWS user. I am aware of other conferences – I was happy enough to attend some European Sun TechDays (how I miss those :-)), TheServerSide Java Symposiums (miss those too) and one DEVOXX – but just judging from the videos, re:Invent was really mind-blowing.

I don’t know what more to say, so I’m over and out for now. It will probably take me another couple of weeks to gather more concrete impressions of AWS, but I plan to add a third part – hopefully again loosely coupled with Docker.

Exploring the cloud with AWS Free Tier (1)

This will not be a typical blog post, but rather some kind of a diary. Of course, I’ll try not to write complete nonsense, but as this will be a rather exploratory effort, it may happen anyway. Sorry. My interest in the cloud is personally-professional: I may use it in my line of work, but I wanted to “touch” the cloud myself to get a feeling for it. Reading is cool, but it’s just not enough. Everybody is talking about it, and I know I’m way behind – but lo, here I am, going to experiment with the cloud as a curious developer.

My personal needs

My goals? I want to run some Docker containers in the cloud and maybe connect them somehow – so it will be about learning Docker better as well. Last year moved very fast in that field (it was only the second year for Docker, after all!) and there’s a lot I have missed since I covered the basics of the technology in the summer of 2014.

And I also wanted to know what it is like to manage my own cloud – even a small one, preferably free. I didn’t check many players; it is difficult to orient yourself among them, and I’m not going to enter my credit card details at some company unknown to me. Talking about Docker, I needed either some direct container hosting (not sure if/how that is provided) or IaaS – that is, some Linux machine where I run Docker myself.

Finding my provider

Originally I wanted to test Google’s cloud, but that one was not available for personal use – which surprised me a bit. I knew Rackspace, but wasn’t able to find a reasonable offer there – something under 10 bucks a month. So I tried Amazon’s AWS – I like Amazon, after all. 🙂 You may also check this article – 10 IaaS providers who provide free cloud resources.

What attracted me to AWS the most is their Free Tier offering, which can be used for a whole year! I don’t need much power, I don’t need excessive connectivity, I just need some quiet developer’s machine. A year is an incredible option, too – many times I start some trial and don’t get enough “real CPU/brain time” to play with it because of other obligations. But a year?! I’ll definitely be able to use AWS at least a bit during that time. A longer span also gives you a better perspective.

Finally, AWS has a lot of documentation, instructions, videos, help pages, etc. I also quickly got the feeling that they try hard to give you all the tools to know how much you’d pay (which is probably a feature of any good cloud) – even if you are using the Free Tier. We’ll get to that later.

First steps with AWS

So I registered at AWS – it took a couple of straightforward steps and one call they made to let me enter a PIN. I chose Basic support (no change to my zero price), confirmed here, confirmed there – and suddenly I was introduced to my AWS Console with tons of icons.

What next? Let’s stick to the plan and Google something about Docker and AWS. Actually, I did this before I even started with AWS, of course. This page lists all the necessary steps. So we need some EC2 Linux instance – t2.micro with the Amazon Linux AMI is available in the Free Tier. Let’s launch it! Or, because I’m the manual kind of guy (kinda rare, reportedly), let’s first check how to do it or see the video. It doesn’t seem that difficult after all.

I chose the region closest to me in the AWS Console (Frankfurt, or eu-central-1) and created my instance. I clicked through all the Next buttons just to walk and read through the setup, but I didn’t change anything. In the course of this I created my key pair, downloaded the private key and – my instance was ready! Even if you don’t name it right away, it’s easy to do it later in the table of instances when you click into the cell in the Name column:

In general – I have to say I’m very satisfied with the overall look-and-feel of the AWS web console. It provides tons of options and information, but so far I have always found what I wanted. You either look around the screen and see what you want, or click through the top menu (e.g. Billing & Cost Management is under your login name on the right) – or Google it really quickly.

Let’s connect to my Linux box!

If you know how to use a private key with SSH, this will be extra easy for you. You just need to know which user to use for the login. Again – I googled – and you can find either this article from their Getting Started section or another article from the Instance Lifecycle section.

Whatever method or client you use to SSH into your instance (I used the command-line ssh that comes with git bash), you’ll find yourself in an ever-familiar CLI. And because we wanted to play with Docker, we – of course – follow another help page, this time called Docker Basics.

Well, the docker info command ran, everything seems OK – we will continue some other time, when the night is younger.

Next morning first impressions…

I already mentioned that you have full control over your spending and that the AWS guys really offer you all the information needed. For example, when I woke up today, after setting up and launching my EC2 t2.micro instance late last night, I saw this report in my Billing & Cost Management Dashboard:

Sure, I don’t know exactly how to read it yet – or how exactly it’s calculated, because the instance must have been running for more than 5 hours. But at least I know where I am with my “spending”.

…and evening observations

I definitely should read up on how – and when – those instances are billed. When I logged out of the instance I saw no change in the report (not even after hours), but when I logged in later, it suddenly jumped by a couple of hours. This is probably about when the billing update is triggered. As we are talking about a couple of hours, this is no problem at all.

Another thing to remember is that when you restart the instance, it takes an instance-hour right away. That is, every started hour is billed. We’re still talking about prices like $0.0-something per t2.micro hour, so again, not a real deal-breaker. But with the 750-hour limit of the Free Tier, one should not shut down/restart the instance 750 times in a day. 🙂

The pricing information is detailed but clear and it also plainly states: “Pricing is per instance-hour consumed for each instance, from the time an instance is launched until it is terminated or stopped. Each partial instance-hour consumed will be billed as a full hour.”
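To put the 750-hour limit into perspective (using just the Free Tier figures quoted above):

```shell
# 750 free instance-hours per month vs. hours in the longest month
free_hours=750
month_hours=$((31 * 24))
echo "Hours in a 31-day month: $month_hours"
echo "Spare instance-hours:    $((free_hours - month_hours))"
```

So a single t2.micro can run non-stop even through a 31-day month with 6 instance-hours to spare – which also shows why those 750 needless restarts would eat the whole allowance.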

What next?

One day is not enough time to draw any conclusions, but so far I’m extremely satisfied with AWS. Next we will try to deploy some Docker images to it.