Migrating from Heroku (and Linode) to Docker on AWS
I’ve long been a huge fan of Heroku. They’ve made it super easy to deploy and scale web applications without getting bogged down in server administration. Also, their free tier has been very generous, which made Heroku a perfect place to run weekend projects. (And my clients have happily paid plenty of money to Heroku over the years, so nobody’s been losing out.)
Heroku’s costs and limitations
Lately, the costs of using Heroku for weekend projects have been creeping upwards:
- Hobby databases with more than 10,000 rows cost $9/month per application.
- SSL certificates cost $20/month per application. But increasingly, thanks to the arrival of HTTP 2.0 and widespread eavesdropping, it’s looking like we should just encrypt everything, so SSL is going to become increasingly important.
- Heroku is beta-testing new types of dynos, and it looks like the free dynos are going to get a lot weaker. There may be a new hobbyist dyno for $7/month per application.
Now, I value my time pretty highly, but if I’m reading the tea leaves correctly, it looks like I might be paying as much as $9+$20+$7 = $36/month per application, unless Heroku also overhauls their add-on pricing. And there are other issues with Heroku:
- Heroku dynos aren’t going to win any awards for speed.
- Heroku only supports web applications, so I still need to pay Linode for an actual server.
So what am I looking for? I just want some professional-looking way to throw my weekend hacks up in the cloud. Oh, and it would be nice if my hobby projects used the same toolchain as my professional ones.
Deploying to an Amazon t2.small using Docker Machine
I was never impressed by Amazon’s old t1.micro instances—they were slow and laggy. But the new t2.small instances have 2GB of RAM, and they feel fairly snappy. And they cost $15–20/month.
Let’s begin by creating a t2.small on EC2 using docker-machine:
docker-machine create -d amazonec2 \
--amazonec2-access-key=$AWS_ACCESS_KEY_ID \
--amazonec2-secret-key=$AWS_SECRET_ACCESS_KEY \
--amazonec2-instance-type=t2.small \
--amazonec2-vpc-id=$MY_VPC_ID \
example
Here, $MY_VPC_ID is the ID of the default “virtual private cloud” that seems to have come with my account. This command will create an Ubuntu 14.04 server, install Docker, set up SSH keys, and configure SSL certificates for Docker authentication. We also need to go to our newly-created docker-machine security group and open up the HTTP port.
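If you’d rather script that step than click through the console, something along these lines should work with the AWS CLI (a sketch, assuming the CLI is installed and configured with your credentials; docker-machine is the name of the security group that docker-machine creates by default):

```shell
# Open port 80 to the world in the docker-machine security group.
aws ec2 authorize-security-group-ingress \
  --group-name docker-machine \
  --protocol tcp --port 80 \
  --cidr 0.0.0.0/0
```
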
While we’re at it, let’s set up an Elastic IP address using the AWS console, and bind it to our server using the EC2 command-line tools and the nifty jq JSON parser:
INSTANCE_ID="`docker-machine inspect example | jq -r .Driver.InstanceId`"
ec2-associate-address $MY_ELASTIC_IP -i "$INSTANCE_ID" \
-O $AWS_ACCESS_KEY_ID -W $AWS_SECRET_ACCESS_KEY
Finally, let’s point our local docker client at the new server, and deploy a test application:
eval "$(docker-machine env example)"
docker run -d -p 80:5000 --name test luisbebop/docker-sinatra-hello-world
Try accessing $MY_ELASTIC_IP in a web browser. When you’re done, tear it down:
docker stop test && docker rm test
Setting up nginx and a simple application using docker-compose
Create a directory nginx containing a Dockerfile with the following contents:
FROM nginx
ADD conf.d /etc/nginx/conf.d
Then create a file conf.d/test.conf containing the following, replacing test.example.com with a domain name that you’ve pointed at your new server:
upstream test {
  server test:5000;
}

server {
  listen 80;
  server_name test.example.com;

  location / {
    proxy_pass http://test;
  }
}
This allows us to serve multiple hostnames from one IP address. We can also set up SSL, http password authentication, or any other nginx options we want.
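For instance, a hypothetical HTTPS version of the same virtual host might look something like this (a sketch only; the certificate paths are placeholders, and you’d need to copy your real certificate and key into the nginx-proxy image and open port 443):

```nginx
server {
  listen 443 ssl;
  server_name test.example.com;

  # Placeholder paths; bake your real certificate and key into the image.
  ssl_certificate     /etc/nginx/ssl/test.example.com.crt;
  ssl_certificate_key /etc/nginx/ssl/test.example.com.key;

  location / {
    proxy_pass http://test;
  }
}
```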
Next, we want to build a new docker image using this configuration:
docker build -t nginx-proxy .
And now we can install docker-compose and create a file docker-compose.yml containing:
test:
  image: "luisbebop/docker-sinatra-hello-world"
  restart: "always"

nginx:
  image: "nginx-proxy"
  restart: "always"
  links:
    - "test"
  ports:
    - "80:80"
All that’s left to do is to launch our new suite of applications:
docker-compose up -d
# Take a look.
docker ps
Packaging up a Heroku-style application with a buildpack
If you want to package up a pre-existing Heroku application with a Procfile, you can use buildpack-runner as a starting point. For example, here’s how I packaged a Java web application that listened on port 8080:
FROM centurylink/buildpack-runner
EXPOSE 8080
CMD ["/start", "web"]
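For reference, the "web" argument to /start names a process type from the application’s Procfile. A minimal Procfile for a Java app might look something like this (the jar path is purely illustrative):

```
web: java -jar target/app.jar
```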
This can then be built into a named image, added to our docker-compose.yml, and given a *.conf file (and a links entry in our nginx setup). Of course, if our application needed a database, we’d also need to pass in some environment variables in docker-compose.yml. We could either run our database in a Docker container, or we could set up an Amazon RDS database.
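As a sketch of the container-based option, the docker-compose.yml entries might look something like this (postgres is the official image; the DATABASE_URL value, password, and database name are placeholders, not anything the hello-world app actually reads):

```yaml
db:
  image: "postgres"
  restart: "always"

test:
  image: "luisbebop/docker-sinatra-hello-world"
  restart: "always"
  links:
    - "db"
  environment:
    - "DATABASE_URL=postgres://postgres:secret@db:5432/postgres"
```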
Next steps
Need a mail server? A private docker registry? You can use off-the-shelf images or build your own, and just toss them into docker-compose.yml. Then re-run docker-compose up -d. Don’t forget to open any new ports in your EC2 security group.
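For example, adding a private registry could be as little as one more entry in docker-compose.yml (registry:2 is the official image; before using it seriously you’d want to put TLS and authentication in front of it, perhaps via the nginx proxy above):

```yaml
registry:
  image: "registry:2"
  restart: "always"
  ports:
    - "5000:5000"
```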
Once you get the basics down, you might want to check out dokku or flynn, which provide a more Heroku-like experience. But it’s pretty easy to wire up basic hobby projects using docker-machine and docker-compose, and to get something which is faster and cheaper than Heroku—and considerably more flexible.