WireGuard Remote Access to Docker Containers

Using WireGuard to access services running inside a set of containers on a remote host with a Point to Point Topology isn’t much different than using WireGuard to access services running on the same host outside of any containers. This article will show you how.

Each scenario we cover will work similarly to the Point to Point Configuration guide, where we connect one endpoint, “Endpoint A”, to a second endpoint, “Endpoint B”, over WireGuard, allowing Endpoint A to access a web server running on Endpoint B. In this article, the web server on Endpoint B will be running in a container not exposed via the host’s network.

If the containers on Endpoint B were running in host network-mode (ie using the --network=host flag with docker run, or the network_mode: host setting with docker-compose), or if their network services were exposed to the host’s network (using the --publish flag with docker run, or the ports setting with docker-compose), we could in fact just use the Point to Point Configuration guide instead of this article. In that case, those network services are accessible directly in the host’s network namespace, just as if they were running on the host outside of any containers.

But rather than using host network-mode, in this guide we will use a user-defined bridge network to access the containers on Endpoint B. This is the best practice with containers, as it provides better security via network isolation: By default, only containers sharing the same bridge network can access each other’s network services. This guide will show you how to adjust these secure defaults to allow limited external access through WireGuard under three scenarios:

  1. WireGuard on the Host

  2. WireGuard in a Container

  3. WireGuard in a Container Without Masquerading

WireGuard on the Host

[Diagram: WireGuard Remote Access Through Host]

The easiest way to enable access from one host to isolated containers on another is to run WireGuard on the container host. First, set up a user-defined bridge network on the container host (Endpoint B), and connect to it the containers you want to expose to the remote host.

With the Docker command-line (CLI) tools, create a new network like the following, with some private-use subnet that you’re not already using (like 192.168.123.0/24 in this example) and network name (wg-network):

$ sudo docker network create \
    --subnet 192.168.123.0/24 \
    wg-network

Then start up the containers you want to expose, specifying the network name and an available IP address in that network for each. Note that the first IP address in the subnet (192.168.123.0) is reserved for the subnet itself, and the second IP address (192.168.123.1) Docker will use by default for the network’s gateway — so we’ll use the next available address in the subnet (192.168.123.2) for our example container:

$ sudo docker run \
    --name example-web-server \
    --network wg-network \
    --ip 192.168.123.2 \
    --rm \
    nginx

If you’ve already started the container without connecting it to the network, you can connect it later with the following command:

$ sudo docker network connect \
    --ip 192.168.123.2 \
    wg-network \
    example-web-server

Make sure you specify an explicit IP address for each container (instead of letting Docker choose), as you will need to use the container’s IP address to access the network services exposed by the container.
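If you ever need to double-check the address a container actually received, you can query it with docker inspect; as a quick sketch, the Go template below prints the container’s address on each network it’s attached to:

```shell
# Print the container's IP address on each attached network
sudo docker container inspect example-web-server \
    --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'
```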

Alternatively, you can use Docker Compose to set up the network and containers. For example, using the Docker Compose version 3.5+ syntax, you can create a similar wg-network to the above, and connect a similar example-web-server container to it:

# /srv/wg-network/docker-compose.yml
version: '3.5'

networks:
  wg-network:
    ipam:
      config:
      - subnet: 192.168.123.0/24

services:
  example-web-server:
    image: nginx
    networks:
      wg-network:
        ipv4_address: 192.168.123.2

Start up the network and container by running sudo docker-compose up from the same directory as the docker-compose.yml file.

You can access the example container by running the following command on the container host (Endpoint B):

$ curl 192.168.123.2:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

However, the container is not yet accessible from outside the container host. So let’s set up the WireGuard network that will enable access to the container from the remote host, Endpoint A.

First, on Endpoint A, create the following WireGuard configuration file at /etc/wireguard/wg0.conf:

# /etc/wireguard/wg0.conf

# local settings for Endpoint A
[Interface]
PrivateKey = AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEE=
Address = 10.0.0.1/32
ListenPort = 51821

# remote settings for Endpoint B
[Peer]
PublicKey = fE/wdxzl0klVp/IR8UcaoGUMjqaWi3jAd7KzHKFS6Ds=
Endpoint = 203.0.113.2:51822
AllowedIPs = 10.0.0.2/32
AllowedIPs = 192.168.123.0/24

Replace 203.0.113.2 in the Endpoint setting for Endpoint B with the actual IP address of Endpoint B from the perspective of Endpoint A. Also generate your own key pairs for Endpoint A and B, and use them instead. See the Point to Point Configuration guide for details.

The only difference between the WireGuard configuration for Endpoint A here and in the Point to Point Configuration guide is that here we also include the subnet of the Docker network we just set up, 192.168.123.0/24, as an AllowedIPs setting for Endpoint A’s connection to Endpoint B.
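One convenient side effect of that extra AllowedIPs line: when you bring the interface up, wg-quick will automatically add a route on Endpoint A for the Docker subnet, steering that traffic into the tunnel. Once the tunnel is up, you can verify the route like so (exact output may vary by distro):

```shell
# On Endpoint A, after bringing the tunnel up, check the route
# that wg-quick added for the Docker network's subnet:
ip route show 192.168.123.0/24
# expect something like: 192.168.123.0/24 dev wg0 scope link
```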

Next, on Endpoint B, create the following WireGuard configuration file at /etc/wireguard/wg0.conf (using your own keys for Endpoint A and B to match your config for Endpoint A):

# /etc/wireguard/wg0.conf

# local settings for Endpoint B
[Interface]
PrivateKey = ABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBFA=
Address = 10.0.0.2/32
ListenPort = 51822
PreUp = iptables -N DOCKER-USER || true
PreUp = iptables -I DOCKER-USER -i wg0 -d 192.168.123.0/24 -j ACCEPT
PostDown = iptables -D DOCKER-USER -i wg0 -d 192.168.123.0/24 -j ACCEPT

# remote settings for Endpoint A
[Peer]
PublicKey = /TOE4TKtAqVsePRVR+5AA43HkAK5DSntkOCO7nYq5xU=
AllowedIPs = 10.0.0.1/32

The only difference between this and the Point to Point Configuration guide for Endpoint B is that here we also set up an iptables rule to allow access to our custom Docker network, 192.168.123.0/24, from this WireGuard interface (wg0):

iptables -I DOCKER-USER -i wg0 -d 192.168.123.0/24 -j ACCEPT

Docker will usually set up the DOCKER-USER chain for us; but on system boot it might not have done so yet, so the first PreUp command in the above WireGuard config for Endpoint B makes sure the DOCKER-USER chain exists before the second PreUp command adds a rule to it. Note that we’re also using the -I flag for this rule instead of the -A flag, so that the rule will be inserted at the top of the DOCKER-USER chain, in case Docker has already created the chain and added its default rule to it.
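After bringing the interface up on Endpoint B, you can confirm the rule landed where expected by listing the DOCKER-USER chain:

```shell
# On Endpoint B, list the DOCKER-USER chain with rule numbers;
# the ACCEPT rule for wg0 should appear above Docker's default rule:
sudo iptables -L DOCKER-USER -v -n --line-numbers
```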

Start up these new WireGuard interfaces on Endpoint A and B (eg sudo wg-quick up wg0), and you’ll be able to access the example container by running the following command on the remote host (Endpoint A):

$ curl 192.168.123.2:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

WireGuard in a Container

[Diagram: WireGuard Remote Access Through Container]

If you’d rather run WireGuard inside a container on Endpoint B, instead of outside on the host, you can do that, too. When running WireGuard in a container, the simplest way to enable remote access to other containers is to connect them directly to the network namespace of the WireGuard container (eg using a --network container:wg-server flag with docker run, or the network_mode: 'service:wg-server' setting with docker-compose).

That approach is covered by the Use for Container Network section of the Building, Using, and Monitoring WireGuard Containers guide. With that approach, the WireGuard configuration for Endpoint A and Endpoint B inside the WireGuard container are exactly the same as the configuration in the Point to Point Configuration guide.

But in this guide, we’ll cover a slightly more complicated, but more flexible approach. Like the above WireGuard on the Host section, we’ll create a user-defined bridge network to which we’ll connect all the containers we want to expose through WireGuard. But since we’re running WireGuard in a container, too, we’ll also connect the WireGuard container to the bridge network.

First, save the WireGuard configuration for the WireGuard container to its own directory somewhere convenient on the host, like in the /srv/wg-network/wg-server directory:

# /srv/wg-network/wg-server/wg0.conf

# local settings for Endpoint B
[Interface]
PrivateKey = ABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBFA=
Address = 10.0.0.2/32
ListenPort = 51822
PreUp = iptables -t nat -A POSTROUTING -d 192.168.123.0/24 -j MASQUERADE

# remote settings for Endpoint A
[Peer]
PublicKey = /TOE4TKtAqVsePRVR+5AA43HkAK5DSntkOCO7nYq5xU=
AllowedIPs = 10.0.0.1/32

Note that this config is exactly the same as we used for the container’s host in the WireGuard on the Host scenario above, except with a different iptables rule:

iptables -t nat -A POSTROUTING -d 192.168.123.0/24 -j MASQUERADE

For this scenario, where WireGuard is running within a container instead of on the host, we need a packet masquerading rule so that the other containers which will receive forwarded packets from Endpoint A through this container will know where to reply back. (We don’t need a rule like this under the WireGuard on the Host scenario, because the containers can just send packets back to Endpoint A through their default gateway, which goes to the host’s own network namespace, which has the routes needed for the return trip.)

Next, we’ll set up the containers. With the Docker command-line (CLI) tools, create a new network like this, with a private-use subnet that you’re not already using (like 192.168.123.0/24 in this example) and network name (wg-network):

$ sudo docker network create \
    --subnet 192.168.123.0/24 \
    wg-network

Then start up the WireGuard container, as well as the containers you want to expose through WireGuard. For each container, specify the network name we just created, and an available IP address in the network:

$ sudo docker run \
    --cap-add NET_ADMIN \
    --name wg-server \
    --network wg-network \
    --ip 192.168.123.123 \
    --publish 51822:51822/udp \
    --rm \
    --volume /srv/wg-network/wg-server:/etc/wireguard \
    procustodibus/wireguard

$ sudo docker run \
    --name example-web-server \
    --network wg-network \
    --ip 192.168.123.2 \
    --rm \
    nginx

We’re using the Pro Custodibus base WireGuard image for our WireGuard container. It automatically starts up a WireGuard interface for each WireGuard configuration file it finds in the container’s /etc/wireguard directory (and in the example above, we’ve mapped /srv/wg-network/wg-server on the host, where we placed the above wg0.conf file, to /etc/wireguard in the container). We also publish UDP port 51822 of this container, so that the remote host (Endpoint A) can connect to it.
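Once the container is running, you can check the tunnel from the host with docker exec. For example (assuming the image includes the wg and iptables tools, as the PreUp rule in its config already requires):

```shell
# Check tunnel status from inside the WireGuard container; a recent
# "latest handshake" line means Endpoint A has connected:
sudo docker exec wg-server wg show wg0

# Confirm the PreUp masquerade rule is in place:
sudo docker exec wg-server iptables -t nat -S POSTROUTING
```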

The order in which you start up containers doesn’t matter. If you had already started up a container you want to expose via WireGuard, you can connect it with the following command:

$ sudo docker network connect \
    --ip 192.168.123.2 \
    wg-network \
    example-web-server

Alternatively, you can use Docker Compose to set up the network and containers. For example, using the Docker Compose version 3.5+ syntax, you can create a similar wg-network to the above, and connect similar wg-server and example-web-server containers to it:

# /srv/wg-network/docker-compose.yml
version: '3.5'

networks:
  wg-network:
    ipam:
      config:
      - subnet: 192.168.123.0/24

services:
  wg-server:
    image: procustodibus/wireguard
    cap_add:
    - NET_ADMIN
    networks:
      wg-network:
        ipv4_address: 192.168.123.123
    ports:
    - 51822:51822/udp
    volumes:
    - ./wg-server:/etc/wireguard

  example-web-server:
    image: nginx
    networks:
      wg-network:
        ipv4_address: 192.168.123.2

Make sure you place the docker-compose.yml file in the directory above the WireGuard configuration file; in the above example, we’ve placed the docker-compose.yml file at /srv/wg-network/docker-compose.yml, and the wg0.conf file at /srv/wg-network/wg-server/wg0.conf.
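With that layout, the files end up arranged like this:

```
/srv/wg-network/
├── docker-compose.yml
└── wg-server/
    └── wg0.conf
```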

Finally, on Endpoint A (the remote host), create the following WireGuard configuration file at /etc/wireguard/wg0.conf:

# /etc/wireguard/wg0.conf

# local settings for Endpoint A
[Interface]
PrivateKey = AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEE=
Address = 10.0.0.1/32
ListenPort = 51821

# remote settings for Endpoint B
[Peer]
PublicKey = fE/wdxzl0klVp/IR8UcaoGUMjqaWi3jAd7KzHKFS6Ds=
Endpoint = 203.0.113.2:51822
AllowedIPs = 10.0.0.2/32
AllowedIPs = 192.168.123.0/24

Note that this config is exactly the same as we used for the WireGuard on the Host scenario above. Like that scenario, make sure you replace 203.0.113.2 in the Endpoint setting with the actual IP address of Endpoint B (from the perspective of Endpoint A), and use your own key pairs for Endpoint A and B. See the Point to Point Configuration guide for details about these settings.

The only thing “special” about this configuration is that, just like the WireGuard on the Host scenario above, we’re including the subnet of the Docker network we just set up, 192.168.123.0/24, as an AllowedIPs setting.

Start up the WireGuard interface on Endpoint A (eg sudo wg-quick up wg0), and you’ll be able to access the example container by running the following command on Endpoint A:

$ curl 192.168.123.2:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

WireGuard in a Container Without Masquerading

[Diagram: WireGuard Remote Access Through Masquerade]

The one drawback to the WireGuard in a Container scenario above is that the iptables rule we had to add to the WireGuard container will rewrite the source IP address of the packets it forwards (to use the container’s own IP address, 192.168.123.123). The other containers to which it forwards packets will see this address instead of the packets’ original source addresses. If for access control or logging purposes within the containers you want to maintain the original source address, you can — with some extra work.

The extra work is to manually add a route (or routes) to the network namespace of each container to which the WireGuard container forwards packets. In the above scenario, we assigned the WireGuard container an IP address of 192.168.123.123, and the container’s WireGuard configuration allows only packets from 10.0.0.1/32 (Endpoint A). So, for that scenario, this is the one route we’d have to add to each (non-WireGuard) container:

ip route add 10.0.0.1/32 via 192.168.123.123

If the WireGuard container’s WireGuard config contained multiple AllowedIPs address blocks, we’d have to add a route for each block. For example, if the config contained the following:

...
AllowedIPs = 10.0.0.1/32
...
AllowedIPs = 10.1.2.3/32, 10.1.2.99/32
...
AllowedIPs = 192.168.234.0/24
...

We’d have to add four routes, one for each address block:

ip route add 10.0.0.1/32 via 192.168.123.123
ip route add 10.1.2.3/32 via 192.168.123.123
ip route add 10.1.2.99/32 via 192.168.123.123
ip route add 192.168.234.0/24 via 192.168.123.123

(Although if you know you won’t break the container’s access to other services by combining some blocks, you can do so — like to combine 10.1.2.3/32 and 10.1.2.99/32 into 10.1.2.0/24 for the above example.)

However, in order to add routes to the container, you have to run the ip route add command either a) from the host using the container’s namespace, or b) from within the container itself.

Add the Route From the Host

To add the route from the host using the container’s namespace, first start up the container:

$ sudo docker run \
    --name example-web-server \
    --network wg-network \
    --ip 192.168.123.2 \
    --rm \
    nginx

Then identify the PID (process ID) of the container, using the container’s name (example-web-server):

$ sudo docker container inspect example-web-server -f '{{.State.Pid}}'
12345

Finally, use the nsenter tool with that PID to run the ip route add command:

$ sudo nsenter -t 12345 -n ip route add 10.0.0.1/32 via 192.168.123.123

Note that if you shut down the container and start it up again, you’ll have to look up the new PID of the container, and add the route again, using the new PID.
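Since you’ll have to repeat this after every restart, it may be convenient to combine the PID lookup and the route command into one step, for example:

```shell
# Look up the container's current PID, then add the route
# inside its network namespace in one go
pid=$(sudo docker container inspect example-web-server -f '{{.State.Pid}}')
sudo nsenter -t "$pid" -n ip route add 10.0.0.1/32 via 192.168.123.123
```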

Once you’ve added the route, you can use the same commands and configuration from the WireGuard in a Container scenario above — just without the iptables rule in the WireGuard container (ie omit the PreUp setting in the WireGuard config for Endpoint B). Everything will work the same, except the other containers will see the original source address on packets forwarded from the WireGuard container, instead of the WireGuard container’s own IP address.

Just like the WireGuard in a Container scenario, you’ll be able to access the example container by running the following command on Endpoint A:

$ curl 192.168.123.2:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

Add the Route From The Container

To add the route from within the container itself, you need to first install the iproute2 package in the container (or, ideally, the container’s image), make sure you start up the container with the NET_ADMIN capability, and then run the ip route command from within the container. How you do this exactly depends on the container’s image, but usually you have to:

  1. Create a custom image with a custom Dockerfile in which you install the iproute2 package; and

  2. Run the image with a custom command in which you first run the ip route add command, and then run the default command from the custom image’s base image.

For our example web server, we would create a custom Dockerfile (placed at /srv/wg-network/example-web-server/Dockerfile) like the following, which overrides the base nginx image to install the iproute2 package:

# /srv/wg-network/example-web-server/Dockerfile
FROM nginx
RUN apt-get update && apt-get install -y iproute2

Then we would create a custom command script (placed at /srv/wg-network/example-web-server/command.sh, and marked executable with chmod +x) that first runs the ip route add command, and then the command to start nginx:

#!/bin/sh -e
# (/srv/wg-network/example-web-server/command.sh)
ip route add 10.0.0.1/32 via 192.168.123.123
nginx -g 'daemon off;'

We’d build the custom image like so, giving it a tag of custom-nginx:

$ sudo docker build \
    --tag custom-nginx \
    /srv/wg-network/example-web-server

After setting up our user-defined bridge network wg-network and starting the WireGuard container just like the WireGuard in a Container scenario, we’d run our custom image with our custom command (and also granting it the NET_ADMIN capability):

$ sudo docker run \
    --cap-add NET_ADMIN \
    --name example-web-server \
    --network wg-network \
    --ip 192.168.123.2 \
    --rm \
    --volume /srv/wg-network/example-web-server:/custom \
    custom-nginx \
    /custom/command.sh

Alternatively, we could build our custom image and run it with our custom command via a docker-compose.yml file like the following:

# /srv/wg-network/docker-compose.yml
version: '3.5'

networks:
  wg-network:
    ipam:
      config:
      - subnet: 192.168.123.0/24

services:
  wg-server:
    image: procustodibus/wireguard
    cap_add:
    - NET_ADMIN
    networks:
      wg-network:
        ipv4_address: 192.168.123.123
    ports:
    - 51822:51822/udp
    volumes:
    - ./wg-server:/etc/wireguard

  example-web-server:
    build: example-web-server
    cap_add:
    - NET_ADMIN
    command: /custom/command.sh
    networks:
      wg-network:
        ipv4_address: 192.168.123.2
    volumes:
    - ./example-web-server:/custom

With the route added, the same WireGuard configuration from the WireGuard in a Container scenario above will work — without needing the iptables rule in the WireGuard container (ie omit the PreUp setting in the WireGuard config for Endpoint B). Our example container will see the original source address on packets forwarded from the WireGuard container, instead of the WireGuard container’s IP address (and will be able to route replies back correctly).

Just like the WireGuard in a Container scenario, you can access the example container by running the following command on Endpoint A:

$ curl 192.168.123.2:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
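If you want to confirm the original source address really is preserved, make a request from Endpoint A and then check the example web server’s logs (the official nginx image writes its access log to stdout, so it shows up in the container logs). With the masquerade rule omitted, the logged client address should be Endpoint A’s WireGuard address (10.0.0.1), rather than the WireGuard container’s address (192.168.123.123):

```shell
# On Endpoint B's host, show the most recent access-log entries:
sudo docker logs example-web-server | tail
```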