Sheldon robot: the web app on Docker

In a previous post, I described how I’m planning to use a web app to communicate with my robot, Sheldon. Well, it just occurred to me that this is a perfect use of containerisation. Time to learn something about Docker.

[Image: The app as it currently looks on my iPhone]

A web app is the perfect solution for this because HTML and JavaScript offer a very simple way to throw together a user interface (UI). With the robot running a websockets server (currently on an ESP8266 board), communication is also very simple.

True, the stateless, request-response nature of HTTP means that it’s not ideal for all applications – such as monitoring a continuous stream of telemetry. I think I’m going to be looking into Bluetooth for that. But for a simple control panel, it’s fine.

I put together a basic website, built on HTML, PHP, JavaScript, Bootstrap and jQuery. I can access it – and control the robot – from any device of my choosing, such as my laptop, desktop, phone or tablet, as the whim takes me.

One issue that did crop up, though, was where to host it. I mostly develop on my iMac. But this isn’t very portable. There might be times when I want the web site running on my laptop, or perhaps my LattePanda Alpha, which I have earmarked as the robot base station.

Portable solution

I came up with a solution that I thought was clever. On each machine that might conceivably host the site I installed Apache and configured a virtual host. The document root of this host is a directory that is automatically shared among all the machines using NextCloud (although Dropbox or the cloud file sharing service of your choice would have worked as well). Any changes I make to the site’s files are automatically reflected across all the machines.
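For what it’s worth, each machine’s virtual host looked something along these lines – the server name and paths here are just placeholders, with the document root pointing into the synced folder:

# Hypothetical virtual host – ServerName and paths are placeholders
&lt;VirtualHost *:80&gt;
    ServerName sheldon.local
    DocumentRoot /home/me/Nextcloud/sheldon/public_html
    &lt;Directory /home/me/Nextcloud/sheldon/public_html&gt;
        Require all granted
    &lt;/Directory&gt;
&lt;/VirtualHost&gt;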

It’s not really that clever, is it?

It means having to install Apache and PHP on all the machines, and possibly making sure the versions stay in sync. I don’t need to run Apache on these machines for any other purpose, so it’s all a bit of a waste of resources when the robot’s not operating.

Containerise!

This is where containerisation excels, especially as I’m not looking to persist any data. The web app is interactive and lives in the moment.

Over the past couple of years I’ve looked at Docker … and then quickly looked away. I really didn’t have a use case for it. Now I do.

The first step – as is so often the case in learning a new technology – was to watch YouTube. Brad Traversy, of the Traversy Media channel, has produced many fine videos, particularly on web development, that have proven to be a godsend in getting me over that first hurdle with a new technology. So it proved with Docker. He has a two-part series. Here’s the first part.

I don’t know about you, but when I’m learning something new I tend to run ahead of the tutor (whether that’s a video or a book), taking what I’ve learned so far and trying to stretch and adapt it. I make a lot of mistakes and usually end up throwing away my first efforts.

The first thing I did was search Docker Hub for an image that had Apache integrated with PHP. It turned out that php:7.2-apache was just what I needed. Which is all very well, but how do I get my website files into the container?

In the folder in which I’m creating this container, I put a folder called public_html with all the files needed for the site. Here’s a look at the container folder.
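In outline it’s this (the contents of public_html being simply whatever the site needs):

.
├── Dockerfile
├── docker-compose.yml
└── public_html/    (the site’s HTML, PHP and JavaScript files)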

At first, that docker-compose.yml file wasn’t there – we’ll get to that later. I was doing everything with the Dockerfile file. This started off looking like this:

FROM php:7.2-apache
COPY public_html/ /var/www/html/

The second line solved the problem mentioned above. On building the image, all the files (including those in subfolders) would effectively be copied to the /var/www/html directory in the container. With a shell open and CD’d to this folder, I could then build the container image with:

docker image build -t robot_server .

That creates an image called robot_server. And then I could create and run the container with:

docker container run -d -p 8084:80 --name roboserve robot_server

That creates a container running in the background (-d) called roboserve, with the web server exposed on port 8084 (both for localhost and for remote connections to this machine).
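Something along these lines is enough to confirm the container is up and the site is being served (same port mapping as above):

docker container ls
curl -I http://localhost:8084/

And it worked. Except, not quite.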

Now, I’m not going to provide a guide to learning Docker here – the videos do that much better than I could. I’m just showing how getting ahead of yourself can cause problems.

The issue was that the files were being copied, but were ending up in the container owned by root:root, whereas the owner and group both needed to be www-data. This wasn’t obvious at first – the site ran okay because the web server had the necessary read permissions. But the app has a page where you can read a log file – and delete it if desired. Except that Apache didn’t have permission to delete it. So I cobbled together a clumsy fix, which was to change the Dockerfile to this:

# Dockerfile
FROM php:7.2-apache
COPY public_html/ /var/www/html/
RUN chown -R www-data /var/www/html/*
RUN chgrp -R www-data /var/www/html/*

Now, to be fair (to me), that actually worked. But it’s not the most elegant solution. Plus, there’s still that ungainly long ‘docker container run…’ command to grapple with. (Yes, yes, I know, shell scripts, yada, yada).
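A tidier alternative, I’ve since realised, would be to set the ownership as part of the copy itself – reasonably recent versions of Docker let COPY take a --chown flag – something like:

# Dockerfile
FROM php:7.2-apache
COPY --chown=www-data:www-data public_html/ /var/www/html/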

In Brad Traversy’s second video we learn about a better way – docker-compose.

I ditched the Dockerfile file and instead created that docker-compose.yml file, which looks like this:

#docker-compose.yml
version: '3'
services:
  http:
    image: php:7.2-apache
    container_name: roboserve
    restart: always
    volumes: ['./public_html:/var/www/html']
    ports:
      - '8084:80'

Now I can start the container with:

docker-compose up -d

And stop it with:

docker-compose down
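And while it’s up, following the container’s output is handy for spotting Apache or PHP grumbles:

docker-compose logs -f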

Because public_html is bind-mounted from the host, anything written to the log file is persisted across runs of the container (at least on a given machine). And that’s a bonus, because I don’t really need persistence anyway.

The directory containing all the files above is shared across machines via NextCloud, like I did with my previous approach. But this is a much better solution because, in effect, I’m also sharing all the Apache and PHP config. I could, for example, upgrade the PHP version and do it just once, in the container, rather than having to ensure that all machines that might run this site are individually upgraded.
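For example, moving to a newer PHP release should, in principle, be a one-line change of image tag in docker-compose.yml (assuming the corresponding official image exists):

    image: php:7.3-apache   # was php:7.2-apache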

All I need to do now is start installing Docker on everything…

[UPDATE: later that day] Okay, so I spoke too soon.

I installed Docker on Ubuntu 18.04 running on the LattePanda Alpha, using this guide (plus sudo apt install docker-compose).

When I ran the robot container, I hit the same problem with permissions (presumably because a bind mount keeps the host’s file ownership, so nothing ends up owned by www-data). So it’s back to Plan A – using the Dockerfile as shown above, and an edited docker-compose.yml file that looks like this:

#docker-compose.yml
version: '3'
services:
  http:
    container_name: roboserve
    build: .
    restart: always
    ports:
      - '8084:80'

So I’ve removed the image and volumes entries and put back in the ‘build: .’ entry. It works, but I’m not sure it’s how you’re meant to do it.

Lesson learned: watching two videos and spending an hour or so on the Interwebz does not make you an expert in Docker. Who knew?

[UPDATE: 26/01/2019] A couple more minor changes. I’ve added the following line to the Dockerfile file:

RUN cp "$PHP_INI_DIR/php.ini-development" "$PHP_INI_DIR/php.ini"

This puts the development version of php.ini in place (error reporting turned up, and so on), which is fine for our purposes.
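For reference, the whole Dockerfile now looks something like this (the earlier ownership fix included):

# Dockerfile
FROM php:7.2-apache
COPY public_html/ /var/www/html/
RUN cp "$PHP_INI_DIR/php.ini-development" "$PHP_INI_DIR/php.ini"
RUN chown -R www-data /var/www/html/*
RUN chgrp -R www-data /var/www/html/*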

I’ve also reinstated (again) the volumes entry in docker-compose.yml. This creates a bind mount, so I can edit, add and delete files outside of the container, which is handy for development.

And I’d forgotten about the networking needs of websockets. The thing about Docker containers is that they are, well, contained. There’s no direct way into them from the outside world other than via the ports you choose to expose for the services you define. This sandboxing is one of the strengths of containers because it improves their security.

There is a way of getting a container to use the host’s networking directly (network_mode: host), but it’s of limited use – it doesn’t work on the Mac desktop version of Docker, for instance. Alternatively, you can fiddle with iptables routing – but you’d have to do that on each machine, which is precisely the kind of configuration malarkey that Docker is supposed to eliminate. So the best option is to expose specific ports.

I use websockets, running on port 8181 (because, why not?), to send messages to the robot from the webserver, and get responses back. So I had to expose that port. My docker-compose.yml now looks like this:

#docker-compose.yml
version: '3'
services:
  http:
    container_name: sheldon
    build: .
    restart: always
    volumes: ['./public_html:/var/www/html']
    ports:
      - '8084:80'
      - '8181:8181'

(Oh yeah, I also changed the name of the container.)

[UPDATE: 05/03/2019] The files for everything mentioned here (and a bit more) are now on my GitHub pages. Use at your own risk.
