There are a lot of tutorials showing how to link containers using the docker run --link option, but the --link flag is a deprecated feature of Docker and may eventually be removed.
I will show you how to link containers via docker network instead, providing a Dockerfile template for your Golang application, but the focus here is really on the process.
It’s there already…
Assuming you haven’t created any networks yet, executing docker network ls should list the default Docker networks:
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
d9f12b2d877c        bridge              bridge              local
5c8a44eced42        host                host                local
ba164e14133c        none                null                local
These three are created automatically.
- bridge – used by default when connecting containers.
- host – the container shares the host's network stack, so its network configuration is identical to the host's; services listen directly on host ports (no port mapping involved).
- none – adding your containers to this network isolates them (no network interfaces except loopback).
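You can get the same listing programmatically. Below is a minimal sketch using the Docker Engine Go SDK (assuming the github.com/docker/docker/client package is available); it mirrors docker network ls:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	// Connect to the local Docker daemon using the standard
	// environment variables (DOCKER_HOST and friends).
	cli, err := client.NewEnvClient()
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of: docker network ls
	networks, err := cli.NetworkList(context.Background(), types.NetworkListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range networks {
		fmt.Printf("%.12s  %-10s  %-10s  %s\n", n.ID, n.Name, n.Driver, n.Scope)
	}
}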
The bridge
Let’s focus on the default network (bridge), which is represented by the docker0 network interface:
$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::42:7aff:fe3e:c021  prefixlen 64  scopeid 0x20<link>
        ether 02:42:47:bc:3a:eb  txqueuelen 0  (Ethernet)
        RX packets 422458  bytes 27057040 (25.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 518829  bytes 1643834763 (1.5 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
You can also list Linux bridges with brctl:
$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.024247bc3aeb       no
Running this command shows details about the bridge network:
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "d9f12b2d877c1a82c7f8f87279afe3985e599060564486adee653a9bcebccd0e",
        "Created": "2017-03-29T18:41:55.073026151+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
Pay attention to “Containers”: {}. It’s empty.
The docker run command automatically adds new containers to this network:
$ docker run -itd --name=container_demo1 busybox
dd11ddf700be0f61d6346b7f15283c14e1ca1c6eb303536d3ec3d27b15896e4b
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "d9f12b2d877c1a82c7f8f87279afe3985e599060564486adee653a9bcebccd0e",
        "Created": "2017-03-29T18:41:55.073026151+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "dd11ddf700be0f61d6346b7f15283c14e1ca1c6eb303536d3ec3d27b15896e4b": {
                "Name": "container_demo1",
                "EndpointID": "5ada8fdf60d20a8ff037ae6026dcdbd22b280d27de1306f2ccb8d83389ef4776",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
This time the list of Containers is not empty:
"Containers": {
"dd11ddf700be0f61d6346b7f15283c14e1ca1c6eb303536d3ec3d27b15896e4b": {
"Name": "container_demo1",
"EndpointID": "5ada8fdf60d20a8ff037ae6026dcdbd22b280d27de1306f2ccb8d83389ef4776",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
A bridge network is useful in cases where you want to run a relatively small network on a single host.
Define your own network
The basics are covered, so the next step is to create our own networks, where we can define most of the parameters within the structure we saw when running docker network inspect.
I suggest running docker network to see the available commands:
Usage:  docker network COMMAND

Manage networks

Options:
      --help   Print usage

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.
Creating a new one (a bridge by default) is straightforward:
$ docker network create network_demo1
1d66cf38be951bb713fce59178b3a296d638974812d12f83c7945003d7d5bfaf
but you probably won’t create new ones that way. The Docker daemon automatically chooses and assigns a subnet for that network, so it could overlap with another subnet in your infrastructure that is not managed by Docker. That’s why it’s recommended to use the --subnet option when creating a network:
$ docker network create -d bridge --subnet 172.25.0.0/16 --attachable network_demo2
abd205fb5d7a587daddc056f57e65efacdd824faa1cba5de8114488bbecb1c80
- -d lets you choose the network driver (bridge here)
- --subnet specifies a subnetwork
- --attachable enables manual attachment of standalone containers (false by default), meaning we can use docker run to start a container within the scope of this new network. Strictly speaking, this flag matters for swarm overlay networks; local bridge networks accept standalone containers either way. The sketch after this list sets the same options from code.
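If you'd rather create the network programmatically, the options above map directly onto the Docker Engine Go SDK. A minimal sketch, assuming the github.com/docker/docker/client package is available:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/network"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewEnvClient()
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of:
	// docker network create -d bridge --subnet 172.25.0.0/16 --attachable network_demo2
	resp, err := cli.NetworkCreate(context.Background(), "network_demo2", types.NetworkCreate{
		Driver:     "bridge",
		Attachable: true,
		IPAM: &network.IPAM{
			Config: []network.IPAMConfig{{Subnet: "172.25.0.0/16"}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created network:", resp.ID)
}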
Add MongoDB to the network
Great! We have a network, so let’s make use of it:
$ docker run --network=network_demo2 --ip=172.25.3.3 -itd --name mongo -p 27017:27017 mongo
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                      NAMES
79c6b716d0f6        mongo               "docker-entrypoint..."   54 seconds ago      Up 52 seconds       0.0.0.0:27017->27017/tcp   mongo
docker logs mongo should show the logs produced by the mongo container. Among the many lines you should find something like:
[conn1] end connection 127.0.0.1:43656 (1 connection now open)
or jump directly to the MongoDB console:
$ docker exec -ti mongo mongo
MongoDB shell version v3.4.2
connecting to: mongodb://127.0.0.1:27017
...
>
If you try to remove your network at this point, you’ll get an error:
$ docker network rm network_demo2
Error response from daemon: network network_demo2 has active endpoints
…but we don’t want to do that 🙂
Dockerise your Golang application
If you want to connect to the mongo container from your application, use the container name mongo wherever you reference the DB host (for instance: mongodb://mongo:27017). The Docker embedded DNS server enables name resolution for containers connected to the same user-defined network.
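To illustrate, here is a minimal sketch of such a connection using the mgo driver (gopkg.in/mgo.v2); the database and collection names are hypothetical placeholders:

package main

import (
	"fmt"
	"log"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func main() {
	// "mongo" is the container name; Docker's embedded DNS
	// resolves it to the container's IP on network_demo2.
	session, err := mgo.Dial("mongodb://mongo:27017")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// "demo" and "greetings" are placeholder names, for illustration only.
	c := session.DB("demo").C("greetings")
	if err := c.Insert(bson.M{"msg": "hello from network_demo2"}); err != nil {
		log.Fatal(err)
	}

	var result struct {
		Msg string `bson:"msg"`
	}
	if err := c.Find(bson.M{}).One(&result); err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Msg)
}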
If you haven’t dockerised your Golang code before, you can use this Dockerfile template, replacing your_github_user/your_code/source (and all the other placeholder values) with your own:
FROM golang:latest
LABEL maintainer="your@email"
LABEL name="MyGolangApplication"
LABEL version="0.0.1"
# these two commands just make the build output more verbose:
RUN go version
RUN go env
# Move current project to a valid go path
COPY . /go/src/github.com/your_github_user/your_code/source
WORKDIR /go/src/github.com/your_github_user/your_code/source
# install dependencies (like Revel, etc.):
# -v enables verbose and debug output
# -u use the network to update the named packages
# -fix runs the fix tool on the downloaded packages
# before resolving dependencies or building the code
RUN go get -v -u -fix gopkg.in/some_library
RUN go get -v -u -fix github.com/some_other_library
# build your code:
# build your code; the resulting binary is already executable,
# so there's no need to chmod it, and don't RUN it here -
# that would execute it at build time, not when the container starts:
RUN go build -o your_main your_main.go
# if you have any port to expose, expose it here:
EXPOSE 8080
# run your app when the container starts, for instance
# (assuming the revel command-line tool has been installed,
# e.g. with RUN go get github.com/revel/cmd/revel):
CMD ["revel", "run", "github.com/your_github_user/your_code/source"]
Save your Dockerfile and build an image from it:
$ docker build -t your_golang_application .
Add your Golang application to the network
Assuming it built successfully, run it with additional parameters that place your container in your network:
$ docker run --network=network_demo2 --ip=172.25.3.4 -itd --name YourDemoProject -p 8080:8080 your_golang_application
ff5dad9d188f0148c0aa3e312ff7f31c543ebc356d327431d7412d21d8741968
Now inspect the network to check that both the mongo and YourDemoProject containers belong to the same network:
$ docker network inspect network_demo2
[
    {
        "Name": "network_demo2",
        "Id": "abd205fb5d7a587daddc056f57e65efacdd824faa1cba5de8114488bbecb1c80",
        "Created": "2017-03-30T11:33:33.468339362+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Containers": {
            "79c6b716d0f6d25147a6b164f21dce57e521097c5fa6fa733861bb2619a5dd32": {
                "Name": "mongo",
                "EndpointID": "d94d409dbe8ac92836a073f5bbecf123028147c28aa0ba810ba0197cadca7bd1",
                "MacAddress": "01:42:ac:17:03:03",
                "IPv4Address": "172.25.3.3/16",
                "IPv6Address": ""
            },
            "ff5dad9d188f0148c0aa3e312ff7f31c543ebc356d327431d7412d21d8741968": {
                "Name": "YourDemoProject",
                "EndpointID": "7e0d2a246f4c99e15b4b65747f098cc6652a6d8b35571113f16eee3df1339eb3",
                "MacAddress": "01:42:ac:17:03:04",
                "IPv4Address": "172.25.3.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Testing the connection
Jump into the container’s shell:
$ docker exec -ti YourDemoProject bash
and ping mongo:
$ ping mongo
PING mongo (172.25.3.3): 56 data bytes
64 bytes from 172.25.3.3: icmp_seq=0 ttl=64 time=0.104 ms
64 bytes from 172.25.3.3: icmp_seq=1 ttl=64 time=0.155 ms
or simply run your app and check if it accesses the database.
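If you want a programmatic check from Go before involving a database driver, a plain TCP dial against the service name is enough. A minimal sketch, run from a container attached to network_demo2:

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// "mongo" resolves via Docker's embedded DNS;
	// 27017 is MongoDB's default port.
	conn, err := net.DialTimeout("tcp", "mongo:27017", 5*time.Second)
	if err != nil {
		log.Fatalf("mongo is not reachable: %v", err)
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}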
Summary
The purpose of this article was to get you familiar with the basics of networking Docker containers. I hope to see fewer places where the deprecated --link flag is used, as things seem to change quickly in the Docker world. It’s worth mentioning one feature that user-defined networks do not support (and --link does): sharing environment variables between containers. You can, however, use other mechanisms, such as shared volumes, to achieve that in a more controlled way.
Having the foundations of container networking in place is a good starting point for exploring more advanced scenarios.