In this lab you will learn about key Docker Networking concepts. You will get your hands dirty by going through examples of a few basic networking concepts, learn about Bridge networking, and NAT/PAT.
The docker network command is the main command for configuring and managing container networks. Run the docker network command from the first terminal.
docker network
Usage: docker network COMMAND
Manage networks
Commands:
connect Connect a container to a network
create Create a network
disconnect Disconnect a container from a network
inspect Display detailed information on one or more networks
ls List networks
prune Remove all unused networks
rm Remove one or more networks
Run 'docker network COMMAND --help' for more information on a command.
The command output shows the usage as well as all of the docker network sub-commands. As you can see from the output, the docker network command allows you to create new networks, list existing networks, inspect networks, and remove networks. It also allows you to connect containers to networks and disconnect them.
Run the docker network ls command to view the existing container networks on the current Docker host.
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
fbb373b26121 bridge bridge local
b0cb60bd9f99 host host local
2d74bb663874 none null local
The output above shows the container networks that are created as part of a standard installation of Docker.
New networks that you create will also show up in the output of the docker network ls command.
You can see that each network gets a unique ID and NAME. Each network is also associated with a single driver. Notice that the “bridge” network and the “host” network have the same name as their respective drivers.
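If you ever need to script against this output, it parses cleanly with standard shell tools. The sketch below pastes the lab's table inline so it runs without a Docker host; on a live host, docker network ls --format '{{.Name}} {{.Driver}}' is usually the more robust option.

```shell
# Print the NAME of every network that uses the "bridge" driver.
# The table is pasted from this lab so the sketch is self-contained;
# on a real host you would pipe in `docker network ls` instead.
ls_output='NETWORK ID     NAME      DRIVER    SCOPE
fbb373b26121   bridge    bridge    local
b0cb60bd9f99   host      host      local
2d74bb663874   none      null      local'

# NR > 1 skips the header row; in the data rows, NAME is field 2
# and DRIVER is field 3.
printf '%s\n' "$ls_output" | awk 'NR > 1 && $3 == "bridge" { print $2 }'
```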
The docker network inspect command is used to view network configuration details, including the name, ID, driver, IPAM driver, subnet info, connected containers, and more.
Use docker network inspect <network> to view the configuration details of the container networks on your Docker host. The command below shows the details of the network called bridge.
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "fbb373b26121d6839aecd9cd2a601c7cf1bbed48e9a62c511ead337ceff0c409",
"Created": "2024-06-02T14:10:24.933043209Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
NOTE: The syntax of the docker network inspect command is docker network inspect <network>, where <network> can be either the network name or the network ID. In the example above we are showing the configuration details for the network called “bridge”. Do not confuse this with the “bridge” driver.
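When you only need one or two fields, you rarely want the whole JSON document. docker network inspect accepts a Go template via -f (for example, docker network inspect -f '{{(index .IPAM.Config 0).Subnet}}' bridge), or you can post-process the JSON with shell tools. A small sketch, using a fragment of the IPAM block shown above so it runs without a Docker host:

```shell
# Pull the Subnet and Gateway values out of an inspect-style JSON fragment.
# On a live host you would pipe in `docker network inspect bridge` instead
# of the inline fragment below.
json='{ "Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1" }'
subnet=$(printf '%s\n' "$json" | sed -n 's/.*"Subnet": "\([^"]*\)".*/\1/p')
gateway=$(printf '%s\n' "$json" | sed -n 's/.*"Gateway": "\([^"]*\)".*/\1/p')
echo "subnet=$subnet gateway=$gateway"
```

On hosts with jq installed, docker network inspect bridge | jq -r '.[0].IPAM.Config[0].Subnet' achieves the same thing.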
The docker info command shows a lot of interesting information about a Docker installation.
Run the docker info command and locate the list of network plugins.
docker info
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 24.0.7
Storage Driver: overlay2
<Snip>
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
<Snip>
The output above shows the bridge, host, ipvlan, macvlan, null, and overlay drivers.
Every clean installation of Docker comes with a pre-built network called bridge. Verify this with the docker network ls command.
docker network ls
NETWORK ID NAME DRIVER SCOPE
3430ad6f20bf bridge bridge local
a7449465c379 host host local
06c349b9cc77 none null local
The output above shows that the bridge network is associated with the bridge driver. It’s important to note that the network and the driver are connected, but they are not the same. In this example the network and the driver have the same name - but they are not the same thing!
The output above also shows that the bridge network is scoped locally. This means that the network only exists on this Docker host. This is true of all networks using the bridge driver - the bridge driver provides single-host networking.
All networks created with the bridge driver are based on a Linux bridge (a.k.a. a virtual switch).
Install the brctl command and use it to list the Linux bridges on your Docker host. On this lab's Alpine-based host, install it with apk (on Debian/Ubuntu hosts the equivalent is sudo apt-get install bridge-utils).
apk update
apk add bridge
Then list the bridges on your Docker host by running brctl show.
brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.02423d8a0502 no
The output above shows a single Linux bridge called docker0. This is the bridge that was automatically created for the bridge network. You can see that it has no interfaces currently connected to it.
You can also use the ip addr command to view the details of the docker0 bridge.
ip addr
2: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:3d:8a:05:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
The bridge network is the default network for new containers. This means that unless you specify a different network, all new containers will be connected to the bridge network.
Create a new container by running docker run -dt alpine sleep infinity.
docker run -dt alpine sleep infinity
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
d25f557d7f31: Pull complete
Digest: sha256:77726ef6b57ddf65bb551896826ec38bc3e53f75cdde31354fbffb4f25238ebd
Status: Downloaded newer image for alpine:latest
58ef6958da41b4705e1fbddc6d34df25ec60477bbb1877c0ac33508af6144559
This command creates a new container based on the alpine:latest image and runs the sleep command to keep the container running in the background. You can verify the alpine container is up by running docker ps.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
58ef6958da41 alpine "sleep infinity" 7 seconds ago Up 5 seconds strange_saha
As no network was specified on the docker run command, the container is added to the bridge network.
Run the brctl show command again.
$ brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.02423d8a0502 no veth3ce888e
Notice how the docker0 bridge now has an interface connected. This interface connects the docker0 bridge to the new container just created.
You can inspect the bridge network again, by running docker network inspect bridge, to see the new container attached to it.
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "fbb373b26121d6839aecd9cd2a601c7cf1bbed48e9a62c511ead337ceff0c409",
"Created": "2024-06-02T14:10:24.933043209Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"58ef6958da41b4705e1fbddc6d34df25ec60477bbb1877c0ac33508af6144559": {
"Name": "strange_saha",
"EndpointID": "4893b1e4e95b15dc01e0e52526a651e79d93c4629967ff7901eabef0d5550e33",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
The output of the previous docker network inspect command shows the IP address of the new container. In the example above it is “172.17.0.2”, but yours might be different.
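As a sanity check, the container's address should fall inside the subnet reported for the bridge network (172.17.0.0/16 above). The check is a bitwise AND against the network mask; here is a sketch in plain shell, using the lab's example values (substitute your own):

```shell
# Does 172.17.0.2 fall inside 172.17.0.0/16? Compare the network portion
# of both addresses after masking off the host bits.
ip_to_int() {
    # Convert a dotted-quad address to a 32-bit integer.
    old_ifs=$IFS; IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

subnet=172.17.0.0
prefix=16
container_ip=172.17.0.2   # example value from this lab; yours may differ

mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
if [ $(( $(ip_to_int "$container_ip") & mask )) -eq $(( $(ip_to_int "$subnet") & mask )) ]; then
    echo "$container_ip is inside $subnet/$prefix"
else
    echo "$container_ip is OUTSIDE $subnet/$prefix"
fi
```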
Ping the IP address of the container from the shell prompt of your Docker host by running ping -c5 <IPv4 Address>. Remember to use the IP of the container in your environment.
$ ping -c5 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.241 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.109 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.093 ms
64 bytes from 172.17.0.2: seq=3 ttl=64 time=0.119 ms
64 bytes from 172.17.0.2: seq=4 ttl=64 time=0.136 ms
--- 172.17.0.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.093/0.139/0.241 ms
The replies above show that the Docker host can ping the container over the bridge network. We can also verify that the container can reach the outside world. Let's log into the container, install the ping program, and then ping www.github.com.
First, we need the ID of the container started in the previous step. You can run docker ps to get it.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
58ef6958da41 alpine "sleep infinity" 8 minutes ago Up 8 minutes strange_saha
Next, let's run a shell inside that alpine container by running docker exec -it <CONTAINER ID> /bin/sh.
$ docker exec -it strange_saha /bin/sh
/ #
Next, we need to install the ping program, so let's run apk update && apk add iputils-ping.
/ # apk update && apk add iputils-ping
fetch https://dl-cdn.alpinelinux.org/alpine/v3.20/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.20/community/x86_64/APKINDEX.tar.gz
v3.20.0-45-g671e1735083 [https://dl-cdn.alpinelinux.org/alpine/v3.20/main]
v3.20.0-56-gc4f89b9e95e [https://dl-cdn.alpinelinux.org/alpine/v3.20/community]
OK: 24148 distinct packages available
(1/2) Installing libcap2 (2.70-r0)
(2/2) Installing iputils-ping (20240117-r0)
Executing busybox-1.36.1-r28.trigger
OK: 8 MiB in 16 packages
Let's ping www.github.com by running ping -c5 www.github.com.
/ # ping -c5 www.github.com
PING github.com (140.82.114.4) 56(84) bytes of data.
64 bytes from lb-140-82-114-4-iad.github.com (140.82.114.4): icmp_seq=1 ttl=49 time=1.18 ms
64 bytes from lb-140-82-114-4-iad.github.com (140.82.114.4): icmp_seq=2 ttl=49 time=1.17 ms
64 bytes from lb-140-82-114-4-iad.github.com (140.82.114.4): icmp_seq=3 ttl=49 time=1.16 ms
64 bytes from lb-140-82-114-4-iad.github.com (140.82.114.4): icmp_seq=4 ttl=49 time=1.45 ms
64 bytes from lb-140-82-114-4-iad.github.com (140.82.114.4): icmp_seq=5 ttl=49 time=1.23 ms
--- github.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 1.163/1.238/1.452/0.109 ms
Finally, let's disconnect our shell from the container by running exit.
exit
We should also stop this container to clean up after this test, by running docker stop <CONTAINER ID>.
docker stop strange_saha
This shows that the new container can ping the internet and therefore has a valid and working network configuration.
In this step we'll start a new NGINX container and map port 8080 on the Docker host to port 80 inside the container. This means that traffic hitting the Docker host on port 8080 will be passed on to port 80 inside the container.
NOTE: If you start a new container from the official NGINX image without specifying a command to run, the container will run a basic web server on port 80.
Start a new container based off the official NGINX image by running docker run --name web1 -d -p 8080:80 nginx.
$ docker run --name web1 -d -p 8080:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
09f376ebb190: Pull complete
5529e0792248: Pull complete
9b3addd3eb3d: Pull complete
57910a8c4316: Pull complete
7b5f78f21449: Pull complete
b7923aa4e8a6: Pull complete
785625911f12: Pull complete
Digest: sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d
Status: Downloaded newer image for nginx:latest
b3c75bac8bbf04027e79288e945d8e57bf92abbd67ffc20caa2db515f89cb7c1
Review the container status and port mappings by running docker ps.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b3c75bac8bbf nginx "/docker-entrypoint.…" About a minute ago Up About a minute 0.0.0.0:8080->80/tcp web1
The top line shows the new web1 container running NGINX. Take note of the command the container is running as well as the port mapping: 0.0.0.0:8080->80/tcp maps port 8080 on all host interfaces to port 80 inside the web1 container. This port mapping is what makes the container's web service accessible from external sources (via the Docker host's IP address on port 8080).
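Under the hood, on Linux hosts using Docker's iptables integration, this mapping is implemented as a destination-NAT rule in the nat table. A hypothetical listing is shown below (the rule shape and chain name can vary by Docker version, and the container address here is illustrative, not read from a live host):

```
iptables -t nat -S DOCKER
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
```

Any TCP segment arriving on a non-docker0 interface with destination port 8080 gets its destination rewritten to the container's address and port before routing; this is the NAT behaviour mentioned at the start of the lab.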
Now that the container is running and mapped to a port on a host interface you can test connectivity to the NGINX web server.
To complete the following task you will need the IP address of your Docker host. It must be an address you can reach (e.g. if your lab is hosted in Azure, this will be the instance's public IP, the one you SSH'd into). Point your web browser to port 8080 on that IP address. If you try connecting to the same IP address on a different port number, the connection will fail.
If for some reason you cannot connect from a web browser, you can test from your Docker host itself using the curl 127.0.0.1:8080 command.
curl 127.0.0.1:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
If you try to curl the same IP address on a different port number, it will fail.
NOTE: The port mapping is actually Port Address Translation (PAT): the Docker host's IP address and port 8080 are translated to the container's private IP address and port 80.
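The PAT step can be pictured as a tiny lookup table: the kernel maps (host IP, 8080) to (container IP, 80), and only published ports have entries. A toy simulation in shell, using the lab's example addresses (hypothetical values, not read from a live host):

```shell
# Toy model of the PAT step for `-p 8080:80`: inbound packets to the
# published host port have their destination rewritten; all other ports
# have no entry in the translation table.
host_port=8080
container_ip=172.17.0.2   # illustrative; your container's address may differ
container_port=80

translate() {
    # $1 is the packet's destination as "ip:port" seen on the wire.
    if [ "${1##*:}" = "$host_port" ]; then
        echo "${container_ip}:${container_port}"   # DNAT: rewritten destination
    else
        echo "no mapping"                          # unpublished port: refused
    fi
}

translate "203.0.113.10:8080"   # the published port reaches the container
translate "203.0.113.10:9090"   # any other port fails, as noted above
```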