In the previous exercise you pulled down images from Docker Store to run in your containers. Then you ran multiple instances and noted how each instance was isolated from the others. We hinted that this is used in many production IT environments every day but obviously we need a few more tools in our belt to get to the point where Docker can become a true time & money saver.
The first thing you may want to do is figure out how to create your own images. While there are over 700K images on Docker Store, it is almost certain that none of them are exactly what you run in your data center today. Even something as common as a Windows OS image would get its own tweaks before you actually run it in production. In the first lab, we created a file called “hello.txt” in one of our container instances. If that instance of our Alpine container was something we wanted to re-use in future containers and share with others, we would need to create a custom image that everyone could use.
We will start with the simplest form of image creation, in which we simply commit one of our container instances as an image. Then we will explore a much more powerful and useful method for creating images: the Dockerfile.
We will then see how to get the details of an image through inspection and explore its filesystem to gain a better understanding of what happens under the hood.
Let’s start by running an interactive shell in an alpine container:
docker run -it alpine sh
As you know from earlier labs, you just grabbed the image called alpine from Docker Store and are now running the sh shell inside that container.
To customize things a little bit we will install a package called figlet in this container. Your container should still be running so type the following commands at your alpine container command line:
apk add figlet
figlet "Hello Docker"
You should see the words “Hello Docker” printed out in large ASCII characters on the screen.
For reference:
/ # apk add figlet
fetch https://dl-cdn.alpinelinux.org/alpine/v3.20/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.20/community/x86_64/APKINDEX.tar.gz
(1/1) Installing figlet (2.2.5-r3)
Executing busybox-1.36.1-r28.trigger
OK: 9 MiB in 15 packages
/ # figlet "Hello Docker"
_ _ _ _ ____ _
| | | | ___| | | ___ | _ \ ___ ___| | _____ _ __
| |_| |/ _ \ | |/ _ \ | | | |/ _ \ / __| |/ / _ \ '__|
| _ | __/ | | (_) | | |_| | (_) | (__| < __/ |
|_| |_|\___|_|_|\___/ |____/ \___/ \___|_|\_\___|_|
Go ahead and exit from this container:
exit
Now let us pretend this new figlet application is quite useful and you want to share it with the rest of your team. You could tell them to do exactly what you did above and install figlet into their own container, which is simple enough in this example. But if this were a real-world application where you had just installed several packages and run through a number of configuration steps, the process could get cumbersome and become quite error-prone. Instead, it would be easier to create an image you can share with your team.
To start, we need to get the ID of this container using the docker ps command (do not forget the -a option, as non-running containers are not returned by the ps command).
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ed48b18e2856 alpine "sh" 10 minutes ago Exited (127) 17 seconds ago frosty_rhodes
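If you have been running a lot of containers and the list gets long, docker ps also accepts a -l (latest) flag that shows only the most recently created container, which can make it easier to spot the ID you need:
docker ps -l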
Before we create our own image, we might want to inspect all the changes we made. Try typing the command docker container diff <container ID> for the container you just created.
For example:
$ docker container diff ed48b18e2856
C /root
A /root/.ash_history
C /usr
C /usr/share
A /usr/share/figlet
A /usr/share/figlet/fonts
A /usr/share/figlet/fonts/block.flf
A /usr/share/figlet/fonts/jis0201.flc
A /usr/share/figlet/fonts/8859-2.flc
A /usr/share/figlet/fonts/8859-7.flc
A /usr/share/figlet/fonts/smscript.flf
A /usr/share/figlet/fonts/mini.flf
A /usr/share/figlet/fonts/term.flf
A /usr/share/figlet/fonts/646-ca2.flc
A /usr/share/figlet/fonts/646-dk.flc
A /usr/share/figlet/fonts/646-it.flc
A /usr/share/figlet/fonts/646-pt2.flc
A /usr/share/figlet/fonts/hz.flc
A /usr/share/figlet/fonts/smshadow.flf
A /usr/share/figlet/fonts/646-se.flc
A /usr/share/figlet/fonts/digital.flf
A /usr/share/figlet/fonts/shadow.flf
A /usr/share/figlet/fonts/small.flf
A /usr/share/figlet/fonts/ushebrew.flc
A /usr/share/figlet/fonts/646-fr.flc
A /usr/share/figlet/fonts/script.flf
A /usr/share/figlet/fonts/standard.flf
A /usr/share/figlet/fonts/utf8.flc
A /usr/share/figlet/fonts/646-se2.flc
A /usr/share/figlet/fonts/lean.flf
A /usr/share/figlet/fonts/ivrit.flf
A /usr/share/figlet/fonts/646-gb.flc
A /usr/share/figlet/fonts/banner.flf
A /usr/share/figlet/fonts/646-kr.flc
A /usr/share/figlet/fonts/646-pt.flc
A /usr/share/figlet/fonts/8859-8.flc
A /usr/share/figlet/fonts/uskata.flc
A /usr/share/figlet/fonts/646-ca.flc
A /usr/share/figlet/fonts/646-cn.flc
A /usr/share/figlet/fonts/646-irv.flc
A /usr/share/figlet/fonts/646-cu.flc
A /usr/share/figlet/fonts/646-es2.flc
A /usr/share/figlet/fonts/646-no2.flc
A /usr/share/figlet/fonts/8859-9.flc
A /usr/share/figlet/fonts/koi8r.flc
A /usr/share/figlet/fonts/646-es.flc
A /usr/share/figlet/fonts/big.flf
A /usr/share/figlet/fonts/bubble.flf
A /usr/share/figlet/fonts/frango.flc
A /usr/share/figlet/fonts/upper.flc
A /usr/share/figlet/fonts/646-de.flc
A /usr/share/figlet/fonts/646-hu.flc
A /usr/share/figlet/fonts/8859-3.flc
A /usr/share/figlet/fonts/mnemonic.flf
A /usr/share/figlet/fonts/slant.flf
A /usr/share/figlet/fonts/646-no.flc
A /usr/share/figlet/fonts/8859-4.flc
A /usr/share/figlet/fonts/8859-5.flc
A /usr/share/figlet/fonts/646-jp.flc
A /usr/share/figlet/fonts/646-yu.flc
A /usr/share/figlet/fonts/ilhebrew.flc
A /usr/share/figlet/fonts/moscow.flc
A /usr/share/figlet/fonts/smslant.flf
C /usr/bin
A /usr/bin/showfigfonts
A /usr/bin/figlist
A /usr/bin/chkfont
A /usr/bin/figlet
C /var
C /var/cache
C /var/cache/apk
A /var/cache/apk/APKINDEX.e0297a25.tar.gz
A /var/cache/apk/APKINDEX.f86367c4.tar.gz
C /etc
C /etc/apk
C /etc/apk/world
C /lib
C /lib/apk
C /lib/apk/db
C /lib/apk/db/scripts.tar
C /lib/apk/db/triggers
C /lib/apk/db/installed
You should see a list of all the files that were added to (A) or changed (C) in the container when you installed figlet. Docker keeps track of all of this information for us. This is part of the layer concept we will explore in a few minutes.
Now, to create an image we need to “commit” this container. Commit creates an image locally on the system running the Docker engine. Run the following command, using the container ID you retrieved, in order to commit the container and create an image out of it.
docker container commit CONTAINER_ID
For example:
$ docker container commit ed48b18e2856
sha256:74149b97e8ff53a6ca4ef5732424592b1b6493d80dd047eeb16a6d89741d960c
That’s it - you have created your first image! Once it has been committed, we can see the newly created image in the list of available images.
docker image ls
You should see something like this:
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 74149b97e8ff 41 seconds ago 10.8MB
alpine latest 1d34ffeaf190 10 days ago 7.79MB
Note that the image we pulled down in the first step (alpine) is listed here along with our own custom image. However, our custom image has no information in the REPOSITORY or TAG columns, which would make it tough to identify exactly what was in this container if we wanted to share it amongst multiple team members.
Adding this information to an image is known as tagging an image. From the previous command, get the ID of the newly created image and tag it so it’s named ourfiglet:
docker image tag <IMAGE_ID> ourfiglet
For example:
docker image tag 74149b97e8ff ourfiglet
Now we have the more friendly name “ourfiglet” that we can use to identify our image.
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
ourfiglet latest 74149b97e8ff 6 minutes ago 10.8MB
alpine latest 1d34ffeaf190 10 days ago 7.79MB
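As an aside, when you tag an image without specifying a tag, Docker applies the default tag latest (as you can see in the TAG column above). If you wanted an explicit version instead, you could supply one after a colon; the v1 name here is just an illustration:
docker image tag 74149b97e8ff ourfiglet:v1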
Now we will run a container based on the newly created ourfiglet image:
docker container run ourfiglet figlet hello
As the figlet package is present in our ourfiglet image, the command returns the following output:
_ _ _
| |__ ___| | | ___
| '_ \ / _ \ | |/ _ \
| | | | __/ | | (_) |
|_| |_|\___|_|_|\___/
This example shows that we can create a container, add all the libraries and binaries to it, and then commit it in order to create an image. We can then use that image just as we would use images pulled down from the Docker Store. We still have a slight issue in that our image is only stored locally. To share the image we would want to push it to a registry somewhere. This is beyond the scope of this lab, but you can get a free Docker ID, run these labs, and push to Docker Hub from your own system using Docker for Windows or Docker for Mac if you want to try this out.
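For reference, pushing would look roughly like this once you have a Docker ID and have signed in with docker login; substitute your own ID for <your-docker-id>, which is just a placeholder here:
docker image tag ourfiglet <your-docker-id>/ourfiglet
docker image push <your-docker-id>/ourfiglet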
As mentioned above, this approach of manually installing software in a container and then committing it to a custom image is just one way to create an image. It works fine and is quite common. However, there is a more powerful way to create images. In the following exercise we will see how images are created using a Dockerfile, which is a text file that contains all the instructions to build an image.
Instead of creating a static binary image, we can use a file called a Dockerfile to create an image. The final result is essentially the same, but with a Dockerfile we are supplying the instructions for building the image, rather than just the raw binary files. This is useful because it becomes much easier to manage changes, especially as your images get bigger and more complex.
For example, if a new version of figlet is released we would either have to re-create our image from scratch, or run our image and upgrade the installed version of figlet. In contrast, a Dockerfile would include the apk commands we used to install figlet so that we - or anybody using the Dockerfile - could simply recompose the image using those instructions.
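As a rough sketch (not one of the files used in this lab), a Dockerfile for our figlet image could be as short as this, using the same apk command we typed by hand earlier:
FROM alpine
RUN apk add figlet
Building it with docker image build and running the result with a figlet hello command would behave just like the ourfiglet image we committed above.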
It is kind of like the old adage:
Give a sysadmin an image and their app will be up-to-date for a day, give a sysadmin a Dockerfile and their app will always be up-to-date.
Ok, maybe that’s a bit of a stretch but Dockerfiles are powerful because they allow us to manage how an image is built, rather than just managing binaries. In practice, Dockerfiles can be managed the same way you might manage source code: they are simply text files so almost any version control system can be used to manage Dockerfiles over time.
We will use a simple example in this section and build a “hello world” application in Node.js. Do not be concerned if you are not familiar with Node.js: Docker (and this exercise) does not require you to know all these details.
We will start by creating a file in which we retrieve the hostname and display it.
NOTE: You should be at the Docker host’s command line ($). If you see a command line that looks similar to / # then you are probably still inside your alpine container from the previous exercise. Type exit to return to the host command line.
Type the following content into a file named index.js. You can use vi, vim or several other Linux editors in this exercise. If you need assistance with the Linux editor commands to do this, see this footnote[^1].
var os = require("os");
var hostname = os.hostname();
console.log("hello from " + hostname);
The file we just created is the JavaScript code for our application. As you can probably guess, Node.js will simply print out a “hello” message. We will Docker-ize this application by creating a Dockerfile. We will use alpine as the base OS image, add a Node.js runtime and then copy our source code into the image. We will also specify the default command to be run upon container creation.
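Incidentally, if you happen to have Node.js installed on your host (it is not required for this lab), you could sanity-check the script before containerizing it:
node index.js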
Create a file named Dockerfile and copy the following content into it. Again, help with creating this file using a Linux editor is in this footnote [^2].
FROM alpine
RUN apk update && apk add nodejs
COPY . /app
WORKDIR /app
CMD ["node","index.js"]
Let’s build our first image out of this Dockerfile and name it hello:v0.1:
docker image build -t hello:v0.1 .
You should see output similar to this:
$ docker image build -t hello:v0.1 .
[+] Building 3.4s (9/9) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 131B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:latest 0.0s
=> [1/4] FROM docker.io/library/alpine 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 4.79kB 0.0s
=> [2/4] RUN apk update && apk add nodejs 2.7s
=> [3/4] COPY . /app 0.1s
=> [4/4] WORKDIR /app 0.0s
=> exporting to image 0.5s
=> => exporting layers 0.5s
=> => writing image sha256:0a5f595ca89e37fd21152670da539b641531c8f 0.0s
=> => naming to docker.io/library/hello:v0.1 0.0s
Now let’s check that the newly created image exists:
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
hello v0.1 0a5f595ca89e 34 seconds ago 66MB
ourfiglet latest 74149b97e8ff 20 minutes ago 10.8MB
alpine latest 1d34ffeaf190 10 days ago 7.79MB
We then start a container to check that our application runs correctly:
docker container run hello:v0.1
You should then see output similar to the following (though the ID will be different).
hello from 1f59348d731a
What just happened? We created two files: our application code (index.js), a simple bit of JavaScript that prints out a message, and the Dockerfile, the instructions the Docker engine uses to build our custom image. This Dockerfile starts from the alpine base image, installs the Node.js runtime, copies our source code into the image, sets the working directory, and specifies the default command to run when a container starts.
Recall that in previous labs we put commands like echo "hello world" on the command line. With a Dockerfile we can specify precise commands to run for everyone who uses this container. Other users do not have to build the container themselves once you push your container up to a repository (which we will cover later) or even know what commands are used. The Dockerfile allows us to specify how to build a container so that we can repeat those steps precisely every time, and we can specify what the container should do when it runs. There are actually multiple methods for specifying the commands and accepting parameters a container will use, but for now it is enough to know that you have the tools to create some pretty powerful containers.
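For example, the CMD we baked into our image is only a default: anything you pass after the image name at run time replaces it. The ls /app below is just an illustration; it lists the files we copied into the image instead of running our script:
docker container run hello:v0.1 ls /app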
There is something else interesting about the images we build with Docker. When running they appear to be a single OS and application, but the images themselves are actually built in layers. If you scroll back and look at the output from your docker image build command you will notice that the build ran through a series of steps: a few internal steps where Docker loads the Dockerfile and build context and fetches metadata for the base image from Docker Store or other places, followed by the numbered steps [1/4] through [4/4], one for each instruction in our Dockerfile. Each of those instructions contributes to one or more layers of the final image. Layers are an important concept. To explore this, we will go through another set of exercises.
First, check out the image you created earlier by using the history command (remember to use the docker image ls command from earlier exercises to find your image IDs):
docker image history <image ID>
For example:
$ docker image history fff3ebe6bffa
IMAGE CREATED CREATED BY SIZE COMMENT
fff3ebe6bffa 3 minutes ago CMD ["node" "index.js"] 0B buildkit.dockerfile.v0
<missing> 3 minutes ago WORKDIR /app 0B buildkit.dockerfile.v0
<missing> 3 minutes ago COPY . /app # buildkit 4.33kB buildkit.dockerfile.v0
<missing> 7 minutes ago RUN /bin/sh -c apk update && apk add nodejs … 58.2MB buildkit.dockerfile.v0
<missing> 10 days ago /bin/sh -c #(nop) CMD ["/bin/sh"] 0B
<missing> 10 days ago /bin/sh -c #(nop) ADD file:e3abcdba177145039… 7.79MB
What you see is the list of intermediate container images that were built along the way to creating your final Node.js app image. Some of these intermediate images will become layers in your final container image. In the history command output, the original Alpine layers are at the bottom of the list and then each customization we added in our Dockerfile is its own step in the output. This is a powerful concept because it means that if we need to make a change to our application, it may only affect a single layer! To see this, we will modify our app a bit and create a new image.
Type the following in to your console window:
echo "console.log(\"this is v0.2\");" >> index.js
This will add a new line to the bottom of your index.js file from earlier so your application will output one additional line of text. Now we will build a new image using our updated code. We will also tag our new image to mark it as a new version so that anybody consuming our images later can identify the correct version to use:
docker image build -t hello:v0.2 .
You should see output similar to this:
[+] Building 0.2s (9/9) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 131B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:latest 0.0s
=> [1/4] FROM docker.io/library/alpine 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 1.59kB 0.0s
=> CACHED [2/4] RUN apk update && apk add nodejs 0.0s
=> [3/4] COPY . /app 0.1s
=> [4/4] WORKDIR /app 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:12b890dc72c0921dc061a566dd352c751e5f9e4 0.0s
=> => naming to docker.io/library/hello:v0.2 0.0s
Notice something interesting in the build steps this time. In the output it goes through the same steps, but notice that the RUN step now says CACHED. Docker recognized that we had already built some of these layers in our earlier image builds and since nothing had changed in those layers it could simply use a cached version of the layer, rather than pulling down code a second time and running those steps. Docker’s layer management is very useful to IT teams when patching systems, updating or upgrading to the latest version of code, or making configuration changes to applications. Docker is intelligent enough to build the container in the most efficient way possible, as opposed to repeatedly building an image from the ground up each and every time.
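If you ever do want to force Docker to rebuild every layer from scratch - for example, to pick up a newer version of a package installed in a RUN step - you can pass the --no-cache flag to the build:
docker image build --no-cache -t hello:v0.2 .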
Now let us reverse our thinking a bit. What if we get a container from Docker Store or another registry and want to know a bit about what is inside the container we are consuming? Docker has an inspect command for images and it returns details on the container image, the commands it runs, the OS and more.
The alpine image should already be present locally from the exercises above (use docker image ls to confirm). If it’s not, run the following command to pull it down:
docker image pull alpine
Once we are sure it is there, let’s inspect it.
docker image inspect alpine
There is a lot of information in there.
We will not go into all the details here but we can use some filters to just inspect particular details about the image. You may have noticed that the image information is in JSON format. We can take advantage of that to use the inspect command with some filtering info to just get specific data from the image.
Let’s get the list of layers:
docker image inspect --format "{{ json .RootFS.Layers }}" alpine
Alpine is just a small base OS image so there’s just one layer:
["sha256:02f2bcb26af5ea6d185dcf509dc795746d907ae10c53918b6944ac85447a0c72"]
Now let’s look at our custom Hello image. You will need the image ID (use docker image ls if you need to look it up):
docker image inspect --format "{{ json .RootFS.Layers }}" <image ID>
Our Hello image is a bit more interesting (your sha256 hashes will vary):
$ docker image inspect --format "{{ json .RootFS.Layers }}" hello:v0.2
["sha256:02f2bcb26af5ea6d185dcf509dc795746d907ae10c53918b6944ac85447a0c72","sha256:b3c4cbb9c5c5be9c9229f72570e6b32d4114375a8cbe98cde09fa3e46cf463d5","sha256:a857a8b2f3d0ca330b92f0cadbff927053e09dea7cb9c203df665ba0c490562b","sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"]
We have four layers in our application image. Recall that in our Dockerfile we had the base Alpine image (the FROM command), then a RUN command to install some packages, then a COPY command to add in our JavaScript code, plus a WORKDIR instruction that contributes an essentially empty layer. Those are our layers! If you look closely, you can even see that both alpine and hello are using the same base layer, which we know because they have the same sha256 hash.
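The same filtering trick works for other fields in the JSON. For instance, this would show the default command baked into our image (the .Config.Cmd field):
docker image inspect --format "{{ json .Config.Cmd }}" hello:v0.2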
The tools and commands we explored in this lab are just the beginning. Docker Enterprise Edition includes private Trusted Registries with Security Scanning and Image Signing capabilities so you can further inspect and authenticate your images. In addition, there are policy controls to specify which users have access to various images, who can push and pull images, and much more.
Another important note about layers: each layer is immutable. As an image is created and successive layers are added, the new layers keep track of the changes from the layer below. When you start the container running there is an additional layer used to keep track of any changes that occur as the application runs (like the “hello.txt” file we created in the earlier exercises). This design principle is important for both security and data management. If someone mistakenly or maliciously changes something in a running container, you can very easily revert back to its original state because the base layers cannot be changed. Or you can simply start a new container instance which will start fresh from your pristine image. And applications that create and store data (databases, for example) can store their data in a special kind of Docker object called a volume, so that data can persist and be shared with other containers. We will explore volumes in a later lab.
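Purely as a preview of that later lab, creating a named volume and mounting it into a container looks roughly like this; the mydata name and /data mount point are just illustrative:
docker volume create mydata
docker container run -it -v mydata:/data alpine sh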
Up next, we will look at more sophisticated applications that run across several containers and use Docker Compose and Docker Swarm to define our architecture and manage it.
[^1]: Type vi index.js then once the editor loads hit the i key. You can now type each of the commands as shown in the example. When you are finished hit the <esc> key, then type :wq and that will save the file and take you back to the command prompt. You can type ls at the command prompt to ensure your index.js file is there, or type cat index.js to make sure all the code is in the file. If you make a mistake in the editor and you have a hard time navigating the editor it might be easier to start fresh: simply type <esc> and then :wq if you are in the editor, and then when you are back at the command line type rm index.js to delete the file and start again.
[^2]: Type vi Dockerfile then once the editor loads hit the i key. Type in each line of the Dockerfile code as shown in the example - capitalization is important! - then hit the <esc> key followed by :wq. To verify your Dockerfile exists and is correct, type cat Dockerfile. If you make a mistake in the editor and you have a hard time navigating the editor it might be easier to start fresh: simply type <esc> and then :wq if you are in the editor, and then when you are back at the command line type rm Dockerfile and start again.