Containers & Co

The basics and beyond

by David Markus, v006

(press space to continue)

Quick help:

  • space : continue to next page
  • up/down/left/right : navigate
  • f : full screen
  • o : overview
  • s : speaker mode
  • alt-click : zoom
  • this is a link

About me

  • David Markus
  • AWS Advocate
  • Docker Enthusiast
  • Kubernetes Fan
  • Almost 20 years in IT
    • 15 years @Atos: Unix / Storage / Transition Architect
    • Currently @X-talent: Cloud Architect

Part 1

Containers

an introduction

Containers?

More and more companies are integrating containers into their development and production processes. One of the reasons is that containers provide a fast and easy way to develop and run applications.

So, what do you think is a container?

Some say it’s

  • A way to isolate stuff from the rest
  • Something like a VM, but with less overhead
  • A kind of chroot or jail
  • A means to package an app
  • What docker does
  • A way to share apps across systems
  • A big thing to put things in

And the fun thing is: they’re not all wrong!

To be precise:

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

(definition by Docker)

Containers use the resource isolation features of the Linux kernel such as kernel namespaces (isolates an application’s view of the operating environment including process trees, network, user IDs and mounted file systems) and cgroups (provides resource limiting, including the CPU, memory, block I/O and network), and a union-capable file system such as aufs and others.

(definition by Kaï Wähner @ Voxxed)
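The namespace part of that definition is easy to see for yourself: on a Linux host, every process already lives in a set of namespaces, and a container is simply a process whose namespaces (and cgroups) differ from the host's. A minimal peek, assuming a Linux machine (the /proc and /sys paths below don't exist on Mac/Windows):

```shell
# List the namespaces of the current shell process (Linux only).
# Each entry (pid, net, mnt, uts, ipc, user, ...) is one isolation axis
# that a container runtime gives its own instance of.
ls -l /proc/self/ns 2>/dev/null || echo "not a Linux host"

# cgroups, the resource-limiting half, are exposed by the kernel here:
ls /sys/fs/cgroup 2>/dev/null || echo "no cgroup filesystem found"
```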

A brief history of Containers

The idea to segregate apps isn’t new. A long time ago, Unix V7 brought ‘chroot’ in 1979, making it possible to segregate file access for each process.

This idea evolved gradually via ‘Jails’ to the first ‘Containers’ on Solaris in 2004. Over the years this idea became more and more popular, sparking several projects and startups around this concept.

One of these projects saw the light of day in March 2013, Docker was its name. It rapidly became the most popular containerization software of all.

A more detailed history can be found here

Various Container platforms

A Container…

… is an abstraction at the application layer, packaging code and dependencies together.

  • A container only holds the required code and its dependencies
  • Each container runs as an isolated process in user space
  • But shares the underlying kernel
  • Multiple containers can run on the same machine
  • Containers require far less resources than VMs

How does it compare to Virtual Machines?

Virtual Machines are another proven method to isolate applications. But what are the differences?

A Virtual Machine…

… is an abstraction of physical hardware turning one server into many.

  • Allows multiple system environments on a single physical computer
  • A VM consists of virtual devices backed by physical resources
  • Each VM includes a full copy of an OS, its libraries and its applications

Spot the differences

Let’s have a quick demo:

Let’s compare a standard Ubuntu VM running a simple webapp with a container running the same webapp.

to the demo

Containers win by 3 points

  • Efficiency
  • Portability
  • Flexibility

What containers can solve

So what about ‘thin-apps’, ‘app-v’, etc?

They’re still great for virtualizing desktop applications, but for server applications and development, containers are unrivaled. Containers also enable traditional monolithic applications to be broken up and delivered as a set of reusable microservices.

Beyond the container: Orchestration

Coffee break

or grab a tea, or something. Let’s not be judgemental.

Part 2

Docker

Getting started

Docker editions

Docker comes in two flavors:

  • Community Edition (CE)
  • Enterprise Edition (EE)

More info about the differences

Installing Docker

To run Docker containers it is handy to have Docker CE installed. Please check the Docker Docs to read how to install Docker Desktop for your environment (Mac, Linux or Windows).

Docker Overview

Docker Hub

Docker Hub is a registry that hosts repositories of container images, both public and private, for customers and teams as well as the community.

the Hub

A quick demo

Docker Engine

This is where the magic happens. Docker follows a client-server approach: the server is the Docker host running the Docker daemon (dockerd), which listens for API calls and manages objects like images, containers, networks and volumes.

Dockerfile

Docker can build images automatically by reading the instructions from a Dockerfile, a text file that contains all the commands, in order, needed to build a given image.

It is possible to provide a bunch of options to the build command to customize an image, but using a text-file that contains all commands might be easier.

The Dockerfile is read and executed from top to bottom, which means the order of the instructions matters.

Docker images are built in layers and if during a rebuild a layer is unchanged, it is pulled from cache, speeding up the build-process. If a layer did change, that one and all subsequent layers need to be rebuilt from scratch.

So make sure to put frequently changing parts (like your application code) as close to the bottom as possible, and stable but time-consuming steps (like installing dependencies) near the top, to keep build times short.
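The ordering rule can be boiled down to a minimal Dockerfile sketch (a hypothetical Python app, not the demo's actual file):

```dockerfile
# Stable, expensive steps first: these layers stay cached across rebuilds.
FROM python:2.7-alpine
COPY requirements.txt .
RUN pip install -r requirements.txt

# Frequently changing application code last: editing a source file only
# invalidates the layers from this point on.
COPY . .
```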

We’ll go over an example Dockerfile in a quick demo

Connecting Containers

One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads. Docker containers and services do not even need to be aware that they are deployed on Docker, or whether their peers are also Docker workloads or not.

To let two containers communicate with each other, you could look up the IP addresses of the running containers and use those. But… that’s quite tedious. It’s easier to leverage the built-in networking.

Docker comes with a network subsystem which is pluggable using drivers. It comes with the following by default:

  • bridge: the default network driver. Allows containers connected to the same bridge network on the same Docker host to communicate, while denying access to containers that are not connected to it.
  • host: binds the container’s network stack directly to that of the Docker host, making the application available on the host’s IP address. Only available on Linux (and for Docker Swarm services).
  • none: disable all networking for this container.

Besides the defaults the following other options are available:

  • overlay: spans a network across multiple Docker hosts and enables swarm services to communicate with each other.
  • macvlan: makes the container appear as a physical host on the network by assigning a MAC address to it. Might be handy for legacy applications that expect a direct network connection.
  • Network Plugins: 3rd party network plugins for Docker.

Making a container mutable

Remember from demo 2 that a container is immutable by default? Docker offers three ways to make data persistent:

  • volumes: created and managed by docker, isolated from core functionality of the host machine. Can be mounted into multiple containers at the same time. Can be named or anonymous.
  • bind mount: mounts any local file or directory on the host machine into a container. The source is both accessible from within the container AND(!) from the local host. Nice for development purposes, but to be avoided in production.
  • tmpfs mount: a non-persistent in-memory volume. Linux only.
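Roughly, the three types map to run-time flags as follows; a sketch with made-up volume names and paths (the `-v` and `--tmpfs` flags themselves are real Docker CLI options):

```shell
docker container run -v mydata:/data alpine ls /data       # named volume 'mydata'
docker container run -v "$PWD":/src alpine ls /src         # bind mount of the current directory
docker container run --tmpfs /scratch alpine df /scratch   # in-memory tmpfs mount (Linux only)
```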

Let’s see mounts and networks in action

Docker Compose

Running multiple containers by hand can be a hassle. You could of course script a lot, but a far easier way is to use Docker’s own docker-compose:

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

The Compose YAML file is a declarative way to describe what the environment should look like, similar to Ansible and CloudFormation. The flask demo’s docker-compose.yml file looks like this:

version: '3'

services:
  redis:
    image: 'redis:4-alpine'
    ports:
      - '6379:6379'
    volumes:
      - 'redis4flask:/data'

  flask:
    build: '.'
    depends_on:
      - 'redis'
    env_file:
      - '.env'
    ports:
      - '5000:5000'

volumes:
  redis4flask: {}

Demo time!

You might now be ready to build your own app from scratch. So let’s dig into a nice cool WordPress example. Have fun!

WordPress Demo!

Demo 7: Voting with Microservices

To show the power of Docker containers once more: a quick demo of microservices in action:

A nice Flask frontend in Python, connected to Redis to collect votes; a C#/.NET worker that consumes the votes and stores them in a Postgres database; and a Node.js webapp that shows the results in realtime. All these goodies up and running with one command…

Go to Demo 7

Docker cleanup

By now most of the Docker basics have been covered; all that remains is a reminder to clean up the mess left behind by the demos.

Cleanup demo

Resources:

For the NEXT time:

Orchestration with: Kubernetes // EKS

Part 3

Demos

Roll up your sleeves

Please download and extract the zipfile if you want to play around with the demos.

Demo 1: VM vs Container

As this particular demo is a bit cumbersome to share, I’ve made a video of it instead.

I’ve created two VMs running Ubuntu 18.04 LTS and Alpine 3.9.0, both running a simple Python flask app that starts automatically. Each VM is started with a script, which reports back when the application is ready.

The same app is also packaged in a Docker container and is started with a similar script. So… sit back, relax and enjoy the movie.

The results

Metric         Ubuntu VM   Alpine VM   Docker Container
Start Time     36,5s       24,5s       0,9s
CPU Usage      13%         15%         0,05%
Memory Usage   741 MB      288 MB      19 MB
Disk Usage     4501 MB     823 MB      72 MB

Spot the differences and check the overhead of a VM!

…return to theory

Demo 2: First Docker Steps

Assuming you have Docker Desktop installed (if not, go here) let’s start with the famous words Hello, World:

docker image ls # check existing images
docker run hello-world # Get and run the container
docker container run hello-world ## spot the differences in command and output
docker image ls # check images again and notice hello-world

Note that running a container without the image present will download it first from the repository.

Both 'docker run hello-world' and 'docker container run hello-world' do the same thing, although the latter uses the fairly new ‘management commands’. It is advised to use these from now on, as the old-style commands might get deprecated at some point.

Also note that on the second run, it starts a lot faster as the image is already present locally.

Images can be tagged to your liking, for versioning. You can even upload your images to Docker Hub.

docker image tag hello-world <username>/hello # tag an image
docker image push <username>/hello   # upload image to docker

Note that <username> needs to be changed to your own account-name on docker.

If you now visit the Hub and log in, you should see the ‘hello’ image.

This way you can easily share your images or use them on other environments.

Now let’s run a complete Linux distro as a container and log into its shell. Let’s use Alpine Linux, only 5 MB in size. It is also a nice way to show that an image is immutable by default.

‘Everything that happens in a container, stays in a container’

docker container run -it alpine sh # Run Alpine linux and run a shell
ls -al # check state of current directory.
touch VEGAS # create a new file
ls -al # make sure VEGAS exists.
exit # Exit the shell.

Now run the container again and check for the file:

docker container run -it alpine sh
ls -la # Look for VEGAS
ls -la # Look harder...
exit

To make a container mutable, data volumes can be added, which will be shown a few demos later.

…return to theory

Demo 3: The dockerfile

Okay, let’s have a look at an average Dockerfile:

# Start the file with a "FROM" instruction,
# specifying the base image to build on.
# This one uses an official Python image as a base, using Alpine Linux.
FROM python:2.7-alpine

# Create an 'app' directory
RUN mkdir /app
# Change the working directory to /app, like 'cd'
WORKDIR /app

# Copy requirements.txt into the container...
COPY ./requirements.txt .
# ...then install the required packages
RUN pip install -r requirements.txt

# Copy the rest as well
COPY . .

# Add custom metadata with LABEL
LABEL maintainer="Foo Bar <foo@bar.com>"\
      version="1.0"

# Run the command flask with host and port options when the container launches
CMD flask run --host=0.0.0.0 --port=5000

To play around with this Dockerfile, go in your terminal to the “3_simpleflask” folder of the demo directory.

cd <dir-to>/3_simpleflask # Go to demo directory

docker image ls # check current Images
docker image build -t flask1 . # Build image using dockerfile in current
# directory.
docker image inspect flask1 # At the end of the JSON output notice the 'Layers'
# sections with about 8 'sha256' hashes.
docker image build -t flask1 . # Build again and notice all steps are taken
# from cache.

Let’s change the label in the dockerfile a bit and rebuild the image. Pay close attention to the steps being rebuilt.

docker image build -t flask1 .

Another cool thing to check: see what happens to the rebuild time when you move ‘COPY . .’ directly below ‘WORKDIR /app’. On my laptop: 1,3s before the move vs 6,5s after.

Now, let’s see if the image works:

docker container run -it flask1

Hmm… it returned an error, saying it’s missing some environment variables. They can be added to the Dockerfile (e.g. with an ENV instruction), or added to the command:

docker container run -it -e FLASK_APP=app.py flask1

It seems to work, but when you try to connect to http://localhost:5000, no-one’s home. Let’s map a local port to a container port, and throw in some other options as well:

docker container run -it -p 5000:5000 -e FLASK_APP=app.py --rm \
 --name superflask -d flask1

When you reload the site, you’re welcomed by a nice message that ‘Flask is running!’ Well done!

A quick explanation of the options:

  • -it: interactive tty
  • -p: map port
  • -e: environment variable
  • --rm: remove container automatically when stopped
  • --name: assign a name to the container
  • -d: run container in the background

Some last things to show:

docker container ls # should show your superflask container
docker container stop superflask # stop it
docker container ls # verify the '--rm' option did its job

…return to theory

Demo 4: Volumes and networks

Wouldn’t it be nice to create a cool counter on a website and have flask talk to a database of some sort to ‘remember’ the number of visitors?

Let’s start by creating some ‘infra’:

cd ../4_linking-Containers # Go to demo 4 directory
docker network ls # show existing networks
docker network create --driver bridge backend # create backend network
docker network inspect backend # check new network

docker volume ls # show existing volumes
docker volume create redis4flask # create volume
docker volume inspect redis4flask # inspect volume

Let’s add a Redis container in the mix and start a Flask container, both using the newly created ‘infra’:

docker container run --rm -dit -p 6379:6379 --name redis \
--network backend -v redis4flask:/data redis:4-alpine  # create Redis container
docker image build -t flask-counter . # build the flask container
docker container run --rm -dit -p 5000:5000 --name flask-counter \
--network backend flask-counter # run flask container

When you visit http://localhost:5000 you’ll see that flask is running and has a nice cool counter. Many thanks to Redis for keeping count. By the way: the Redis container was downloaded ‘as-is’ from the Docker Hub and runs without any tweaks. Nice!

Other noteworthy things are the ‘--network’ option, which defines the network a container belongs to, and the ‘-v’ (volume) option, which adds a volume to a container to offer persistence.

Some last notes: it is better to start Redis before the app. Also notice I’ve added some ENV variables to the Dockerfile to make the run command a bit simpler. And as both containers are connected to the ‘backend’ network, they can reach each other freely, even by container name thanks to Docker’s embedded DNS.

Now, please stop the running containers:

docker container stop flask-counter redis # Stop 2 containers at once!

…return to theory

Demo 5: docker-compose

Let’s first have a closer look at the docker-compose.yml file. Open it in your favorite editor or simply cat the file in the terminal:

cd ../5_docker-compose # Go to demo 5 directory
cat docker-compose.yml

The file defines services, volumes and (not in this example) networks. As you can see there will be two services: a redis and a flask container.

Redis will be run from an image pulled from the Docker Hub, whereas flask will be built from the Dockerfile in the local directory. The ports and volumes lines are self-explanatory. depends_on means flask requires redis to be running. And env_file points to a file with environment variables.
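The .env file itself is just a plain key=value list; a hypothetical example (the demo's actual variable names may differ):

```
FLASK_APP=app.py
FLASK_ENV=development
```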

Buildtime!

docker-compose build # builds images from dockerfile
docker-compose pull # pulls images from the hub
docker-compose up -d # starts the containers in the correct order. In this case: redis first as flask depends on it
docker-compose ps # check running containers related to the yaml-file
docker container ls # shows any running container

Check http://localhost:5000 to see if the app is running.

docker-compose down # Stops containers and removes resources.

One command to rule them all might be easier:

docker-compose up --build -d # Build and starts all resources in the background
docker-compose down # Don't forget to bring it down when done

To read more about docker-compose CLI commands

…return to theory

Demo 6: DIY Wordpress

By now you might want to get your hands dirty yourself. So let’s build a WordPress site from scratch.

Note: for the lazy ones, or for reference, I’ve created a 6_wordpress_demo_for_lazy_people directory.

1: create a new directory and go into it

cd .. # just in case you were still in the demo 5 dir.
mkdir 6_wordpress_demo # make the directory
cd 6_wordpress_demo

2: Create a new docker-compose.yml (or .yaml, both work) file which defines two services: mariadb and wordpress.

version: '3.3'

services:
   db:
     image: mariadb:10.2

   wordpress:
     image: wordpress:latest

Let’s add some environment variables and port info:

version: '3.3'

services:
   db:
     image: mariadb:10.2
     environment:
       MYSQL_ROOT_PASSWORD: verysecretpassword
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: anotherverysecretpassword

   wordpress:
     image: wordpress:latest
     ports:
       - "8000:80"
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: anotherverysecretpassword
       WORDPRESS_DB_NAME: wordpress

This would already create a working setup, but it isn’t very rock-solid…

Add some persistence to the mix with volumes:

version: '3.3'

services:
   db:
     image: mariadb:10.2
     volumes:
       - db_data:/var/lib/mysql
     environment:
       MYSQL_ROOT_PASSWORD: verysecretpassword
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: anotherverysecretpassword

   wordpress:
     image: wordpress:latest
     volumes:
       - ./wp-content:/var/www/html/wp-content:rw
     ports:
       - "8000:80"
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: anotherverysecretpassword
       WORDPRESS_DB_NAME: wordpress
volumes:
    db_data: {}

For the database we’ll create a named volume; for WordPress we’ll create a local ‘wp-content’ directory later. Almost done…

Finish up the file by adding a database dependency for wordpress and a rule for what to do in case of a failure:

version: '3.3'

services:
   db:
     image: mariadb:10.2
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: verysecretpassword
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: anotherverysecretpassword

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     volumes:
       - ./wp-content:/var/www/html/wp-content:rw
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: anotherverysecretpassword
       WORDPRESS_DB_NAME: wordpress
volumes:
    db_data: {}

Cool! The yaml file is done, but…

3: …let’s not forget to create a local folder for the wordpress volume:

mkdir wp-content

This is quite neat, as this enables you to interact with the wp-content directory, for example to add or adjust themes or plugins.

4: Now it’s time to take it for a spin:

docker-compose up -d # Run Wordpress in detached mode

And connect to it by visiting http://localhost:8000. You should see the setup page of Wordpress. Have some fun with it!

To stop it:

docker-compose down

And you’re done with this demo.

Just for your information: this is a slightly adjusted version of a Docker Compose example from Docker. They have more to play around with, like Django or Rails. But not before you finish my last two demos ;)

…return to theory

Demo 7: Voting with Microservices

As said: a bunch of microservices ready to be launched on your command:

cd ../7_microservices/example-voting-app
docker-compose up --build -d

And we’re off. Your terminal will go on and on and on for a few minutes, pulling and building everything. When done, visit http://localhost:5000 to cast a vote.

Even more fun: you can ask others on the same network to show their preference by pointing them to your machine’s IP address on port 5000.

You can check the results in realtime on http://localhost:5001. Neat, no?

When done toying around, let’s stop the containers:

docker-compose stop # Tip: try to find out what the difference is
# between 'stop' and 'down'  :)

This demo is taken from dockersamples on GitHub. It holds more goodies than shown here and I plan to talk about them in another session, so stay tuned!

…return to theory

Demo 8: Clean up your mess :)

If you did all the demos, you’ve created a nice list of images, containers, volumes and networks, about 2,6 GB in total. To clean this up, do the following:

First let’s get some insight into the resource hunger of your containers and images:

docker system info # shows system-wide information, including container information
docker system df # show docker disk usage
docker container ls # shows running containers
docker container ls -a # shows all containers
docker image ls -a # shows all images

As you can see, there is some urgent need to cleanup some mess.

NOTE: these commands assume you only have the Demo leftovers in your Docker environment. If not: be careful!

docker system df  # show docker disk usage
docker system prune # removes unused data
docker container ls # show active containers
docker container ls -a # show all containers, incl. stopped
docker container stop $(docker container ls -a -q) # stop ALL(!) containers, use at your own risk!
docker container rm flask1 # removes one container
docker container rm $(docker container ls -a -q) # remove ALL(!) containers.
docker container ls -a # verify all is gone.
docker image ls -a # show all images
docker image rm flask1 redis # remove flask1 and redis image
docker image rm $(docker image ls -a -q) # remove ALL(!) images, use at your own risk!
docker system df # check disk usage, notice decrease in usage
docker volume ls # show volume usage
docker volume rm $(docker volume ls -q) # remove ALL(!) volumes, use at your own risk!
docker network ls # show networks
docker network rm $(docker network ls -q) # remove ALL(!) networks, use at your own risk! Note that the default networks can't be removed.
docker system df # verify all is gone

…return to theory

Questions?

Contact me at david[dot]markus[at]x[dash]talent[dot]nl

Thank you

No, really: Thank you!

Credits:

Photos:

“Containers in Rotterdam port”, Andrii Stashko, 2012, source: flickr, CC BY-NC-ND 2.0

“Real coffee”, Olle Svenson, 2009, source: flickr, CC BY 2.0

“What?”, Véronique Debord-Lazaro, 2010, source: flickr, CC BY-SA 2.0

“End of the road”, Keith Ewing, 2018, source: flickr, CC BY-NC 2.0

“John Cleese”, Monty Python, source: giphy

“more John Cleese”, Monty Python, source: giphy

Site:

Theme: Reveal-Hugo, by dzello

Generator: Hugo

Sources:

Docker

Voxxed

acloud.guru

Version history

Version  Date      Change
v001     jan 2019  Started with Container Intro, using Reveal-Hugo for the first time
v002     feb 2019  Added Docker Basics section, created default ending and credits sections
v003     mar 2019  Created and tested demos, added demo section
v004     apr 2019  Presentation online, added logo
v005     may 2019  Fixed demo 7, added version history + custom css, shuffled demo 7 slides, added link to help
v006     jan 2020  Added some slides, updated demo 7

Go back to start