Everything You Need to Know about Using Docker Compose


Docker Compose is an automated, declarative way to create and manage complex multi-container Docker applications. It’s a tool for building entire distributed systems with containerized components. In this article, we provide some insight into how you might use it in your development workflow.


You’ve come to the right place if you want to learn how to use Docker Compose to build repeatable Docker containers. In this step-by-step Docker Compose tutorial, you’ll learn how to build basic containers, map ports with Docker Compose, and handle sophisticated multi-container scenarios.

Are you all set? Let’s get started!

Prerequisites

If you want to follow along with this tutorial step by step, make sure you have the following items:

  1. SSH enabled on a fresh install of Ubuntu Server LTS. The Docker host in this tutorial runs Ubuntu Server LTS 20.04.1.
  2. VS Code installed on a PC (optional). This tutorial uses Visual Studio Code 1.52.1 to SSH to the Docker host and run commands.
  3. The official SSH extension for Visual Studio Code, installed and connected to the Docker host (optional).

Related: Remote SSH and VS Code

What is Docker Compose all about?

In Docker, single commands can grow incredibly lengthy. Consider the following example, which creates a container for the BookStack wiki application.

```shell
# PUID / PGID:   UID/GID of the user who will own the application/files
# DB_USER / DB_PASS / DB_HOST / DB_DATABASE: database connection details
# APP_URL:       the URL your application will be accessed at
#                (required for a reverse proxy to operate correctly)
# -p 80:80/tcp:  web UI port
docker create \
  --name=bookstack \
  -e PUID=<uid> \
  -e PGID=<gid> \
  -e DB_USER=<database_user> \
  -e DB_PASS=<database_password> \
  -e DB_HOST=<database_host> \
  -e DB_DATABASE=<database_name> \
  -e APP_URL=<app_url> \
  -v /path/to/config:/config \
  -p 80:80/tcp \
  --restart unless-stopped \
  linuxserver/bookstack:version-v0.31.4
```

The number of flags and conditions necessary for a functional container setup grows as the complexity of a docker environment grows. When multi-container configurations are introduced, the Docker command line becomes unwieldy and difficult to debug.

Docker Compose uses a config file instead of incredibly lengthy Docker commands to construct repeatable Docker containers. Mistakes are easy to see and container interactions are easier to describe when utilizing a structured config file.
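To preview where this article is headed, the same bookstack container could be described declaratively in a Docker Compose config file along the lines of the sketch below (the PUID value, volume path, and URL are placeholders carried over from the command above, not working values):

```yaml
version: "3.7"
services:
  bookstack:
    container_name: "bookstack"
    image: "linuxserver/bookstack:version-v0.31.4"
    environment:
      - PUID=1000   # UID of the user who will own the application files
      - PGID=1000
    volumes:
      - "/path/to/config:/config"
    ports:
      - "80:80"
    restart: unless-stopped
```

Every flag from the single long command becomes a labeled line in the file, which is much easier to review and to diff.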

When working with container dependencies or multi-container settings, Docker Compose soon becomes indispensable.

Docker Compose is an excellent approach to get started with Infrastructure as Code without the hassles of distributed systems like Kubernetes.

YAML is the config file format used by Docker Compose. Like JSON or HTML, YAML is a structured, machine-readable language. YAML is designed to be as human-readable as possible while retaining that structure.

Whitespace is significant in YAML, so files must be formatted carefully. Many of the examples in this article are done in VS Code, since it takes care of a lot of that heavy lifting for you.
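As a minimal sketch of why the formatting matters, the indentation below is the only thing telling YAML that caddy belongs to services and that ports belongs to caddy:

```yaml
services:        # top-level key
  caddy:         # indented two spaces: a child of services
    ports:       # indented again: a child of caddy
      - "80:80"  # a list item belonging to ports
```

Replace any of that indentation with a tab, or change its depth, and the file no longer parses the same way.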

Docker Compose Installation

Now is the time to get your hands dirty. It’s time to install Docker Compose, assuming you’re connected to your Docker server.

Docker Compose is a separate package from the Docker runtime. But installing Docker Compose will also install the Docker runtime, so you’ll kill two birds with one stone!

Run the following two commands to install Docker Compose and the Docker runtime.

```shell
# Update the software repository, then install docker-compose with any required dependencies.
# The -y argument bypasses confirmation prompts.
sudo apt update -y
sudo apt install docker-compose -y
```

The Docker Compose installation command

You should now establish a folder structure to hold containers after they’ve been installed.

Creating a Docker Compose Folder Structure

Before you can use Docker Compose to build a container, you should first create a folder for it. A folder structure not only keeps your containers organized; you’ll also discover that the placement of various Docker configuration files affects the behavior of various Docker commands, and Docker Compose is no exception.

The docker-compose.yaml configuration file is the most crucial part of Docker Compose. As previously stated, this configuration file specifies how the Docker runtime should construct a container.

When you execute Docker Compose, it looks for its configuration file in the same directory where you run the command. Because of this behavior, it’s best to give each Docker Compose project its own folder.

Each folder may only have one Docker Compose configuration file.

To demonstrate building a Docker container with Docker Compose, you’ll use a small fileserver named Caddy. First, create a folder structure to hold the future container and its configuration file.

Caddy is a Go-based fileserver that works similarly to Apache httpd or nginx. Caddy was built from the ground up to be simple to use (and will automatically generate or serve an index.html page). That combination makes Caddy a great pick for beginners.

Assuming you’re logged onto your Docker host, make the following folder structure:

  1. Make a folder named containers in your home directory. This folder will serve as a convenient storage location for this and other containers.
  2. Make a subdirectory named caddy within the containers folder. The Docker Compose configuration file and the Caddy container will both be in this folder.
  3. Finally, in the container folder, caddy, create a blank text file named docker-compose.yaml, which will serve as the Docker Compose configuration file.
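The three steps above can be sketched as shell commands (assuming you want the structure under your home directory):

```shell
# Create the containers folder, a caddy subfolder,
# and a blank Docker Compose configuration file.
mkdir -p ~/containers/caddy
touch ~/containers/caddy/docker-compose.yaml
```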

You can now begin filling up the Docker Compose configuration file with a Docker Compose configuration after creating the folder structure and Docker Compose configuration file.

Docker Compose Configuration File Creation

A docker-compose.yaml file for the caddy container looks like this in its most basic form. Copy and paste the code below into the Docker Compose configuration file you produced previously in your preferred Linux text editor or using VS Code.

```yaml
version: "3.7"
services:
  caddy:
    container_name: "caddy"
    image: "caddy:latest"
    ports:
      - "80:80"
```

Let’s have a look at each of the options:

  • version specifies the version of the docker-compose file format. Each new Docker Compose version introduces breaking changes to the specification, so the version is critical for Docker Compose to determine which features it needs. Ubuntu 20.04.1 LTS supports version 3.7 as of this writing.

The complete Docker Compose 3.x specification can be found here. Every option available in Docker Compose is listed in the linked documentation.

  • services contains the specifications for the actual containers. You can define multiple containers in this section.
  • The first service is named caddy (this name is purely for reference).
  • container_name is the name Docker gives the container and must be unique.
  • image is the name of the Docker image. This example specifies the caddy image from Docker Hub. The name or number after the colon is the tag, which usually denotes the version.

Port Mapping

That last option, in particular, deserves special attention:

The ports directive in Docker Compose lets you define one or more mappings from the host to the container. In the example above, you mapped port 80 on the host to port 80 in the container. The port numbers do not need to match, however: you can just as easily map host port 8800 to container port 80.
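For example, mapping host port 8800 to container port 80 looks like this (the site would then be reachable at http://<your ip>:8800):

```yaml
ports:
  - "8800:80"
```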

You may also specify several ports, as seen below.

```yaml
ports:
  - "80:80"
  - "443:443"
```

This maps both ports 80 and 443 on the host (a common configuration for web servers, to serve both HTTP and HTTPS).

The available ports are defined by the image builder when the image is created. Make sure to check the documentation for the image you’re working with, on Docker Hub or the maintainer’s website, for the mappable ports. It’s pointless to map a port that isn’t being used!

Let’s look at how to actually execute the container with that in mind.

Container Execution

The docker-compose.yaml file should now be inside your containers/caddy folder. The Caddy container must now be created and started.

Run the following command in your terminal to start the Docker containers described in the docker-compose.yaml file.

```shell
# You must execute this command in the same folder as the docker-compose.yaml file.
# The -d option runs the containers *detached*, starting them in the background.
sudo docker-compose up -d
```

Notice that when executing sudo docker-compose up -d, you didn’t have to provide the path to the docker-compose.yaml file. Because many commands are relative to the folder containing docker-compose.yaml, Docker Compose expects you to run them from within that folder.

Now verify that the container is up and running by navigating to http://<your ip>. This guide is using http://homelab-docker for reference.

In the video below, you can watch this process in VS Code while SSHed into the Docker host:

A container constructed using Docker Compose is shown.

Success! You’ve just finished using Docker Compose to launch a container from a configuration file. Let’s look at how you control the status of your container now that you’ve completed the first crucial step.

Detached Container Management Commands

The -d option was used to start the caddy container in the previous section, which started the container in a detached state. When a container is detached, it continues to run in the background. That raises a question, though: how do you manage that container now that you don’t have direct control of it?

Docker Compose includes a set of commands that will manage containers launched using a docker-compose.yaml file to tackle this problem:

  • docker-compose restart restarts a container that is currently running. This is not the same as re-running docker-compose up -d: the restart command only restarts the existing container, whereas docker-compose up -d recreates the container if the config file has changed.
  • docker-compose stop will stop a running container without destroying it. Conversely, docker-compose start will start the container again.
  • docker-compose down will stop and destroy running containers. This is where having bind mounts comes in handy (read more below).
  • docker-compose pull will pull the current version of the Docker image (or images) from the repository. If you’re using the latest tag, you can follow with sudo docker-compose down && sudo docker-compose up -d to replace the container with the latest version. Using docker-compose pull is a convenient way to update containers quickly with minimal downtime.
  • docker-compose logs will show the logs of the running (or stopped) container. You can also address individual containers (if there are multiple containers defined in the compose file) with docker-compose logs <container name>.

Running docker-compose with no extra parameters displays the entire set of commands, which is also mentioned in the documentation.

Now that you have a working container, let’s look at serving content stored locally on your machine.

Using Docker Compose to Create Bind Mounts

Docker uses bind mounts to link important user data to storage on your server’s local filesystem. First, create some content for the container to host:

  1. Create a new folder named files within the /containers/caddy folder on the Docker host.

2. Inside the files folder, create a new file named index.html containing the HTML below. This will be the main page served by the Caddy webserver.

<body><h2>hello world!</h2></body>

3. Make the following changes to your Docker Compose configuration file. The example below adds the volumes section and points a bind mount at the newly created files folder, making it accessible to the container.

```yaml
version: "3.7"
services:
  caddy:
    container_name: "caddy"
    image: "caddy:latest"
    ports:
      - "80:80"
    volumes:
      # ./ refers to a folder relative to the docker-compose file
      - "./files:/usr/share/caddy"
```

4. Execute docker-compose up -d once again. Docker Compose will now detect the updated config file and recreate your container.

5. Open a browser and go to the container’s website. It should now be serving the “hello world!” page.

You can see these steps in the animation below:

Using Docker Compose to create a bind mount

You’re now hosting content stored on your local machine! But what if your content lives on a network share or another external source?

Using Docker Volumes and Docker Compose

You’ll probably need that container to access files someplace else, such as a network share, after you’ve created a basic container using Docker Compose. If that’s the case, you may tell Docker Compose to utilize Docker volumes in the container’s configuration file.

For demonstration purposes, this tutorial will create a Network File System (NFS) server on the Docker host. Outside of a demonstration, serving local content over an NFS mount serves no practical purpose; normally you would mount an NFS volume from an external source, such as a NAS or a remote server.

Create an NFS Share

Create an NFS share on the Docker host for this tutorial if you don’t already have one. To do so:

  1. Install the NFS server package with sudo apt install nfs-kernel-server -y.

2. Run the following commands to add the containers folder as an NFS export (similar to a Windows CIFS share).

```shell
# Add a line to the /etc/exports config file to create an NFS share for /home/homelab/containers.
# The share is only exposed to localhost (to restrict access from other machines).
echo '/home/homelab/containers localhost(rw,sync,no_root_squash,no_subtree_check)' | sudo tee -a /etc/exports
# Restart the NFS server to load the modified configuration.
sudo systemctl restart nfs-kernel-server
```

3. Now run showmount -e localhost to see whether the host exposes the NFS share. This command displays all presently accessible NFS shares as well as who has access to them.

The /home/homelab/containers directory is visible in the screenshot below, but only to the localhost machine (which is the same server running the Docker host).

In Ubuntu 20.04, you may create an NFS share.

If you see the folder /home/<username>/containers in the output, the NFS share is set up.

How to Create a Docker Named Volume

Now that the NFS share is established, you must tell Docker how to access it. With Docker Compose, you do so by creating a named volume in the Docker Compose configuration file.

Docker uses named volumes to abstract network-based file shares: CIFS (Windows) shares, NFS (Linux) shares, AWS S3 buckets, and more. By creating a named volume, you let Docker handle the difficult work of figuring out how to talk to the network share, and the container can simply treat it as local storage.

To create a named volume, follow these steps:

  1. Open the configuration file for Docker Compose (docker-compose.yaml). The file should be in the /containers/caddy folder if you’re following along.

2. Add a volumes section after the services section in the Docker Compose configuration file. Your configuration file should look like the one below. The MyWebsite volume is created under the volumes section, and the required specifications (such as the IP, NFS settings, and path) are defined inside that named volume. In the services section, the volumes option is changed to reference the named volume rather than a local folder.

```yaml
version: "3.7"
services:
  caddy:
    container_name: "caddy"
    image: "caddy:latest"
    ports:
      - "80:80"
    volumes:
      - "MyWebsite:/usr/share/caddy"
volumes:
  MyWebsite:
    driver_opts:
      type: "nfs"
      o: "addr=localhost,nolock,soft,rw"
      device: ":/home/homelab/containers/caddy/files"
```

3. Run docker-compose up -d to construct and start the container once you’ve specified the named volume pointing to the NFS share in the Docker Compose configuration file. The container and website should be up and running again if all goes well.

Using VS Code to configure NFS client settings in Docker Compose

4. Return to the container’s web page. The contents of index.html should display as though the file were mounted locally. Behind the scenes, however, that file is mounted over the network via the NFS server.

Using Docker to demonstrate access to the index.html file through an NFS share

You can now incorporate all kinds of network storage into your containers thanks to Docker Compose’s ability to mount external Docker volumes. Docker Compose, on the other hand, can do more than simply create individual containers or volumes. Let’s look at some more advanced multi-container situations.

Because the caddy container will no longer be used in this tutorial, you may delete it using docker-compose down.

Using Docker Compose to Create Multiple Containers

The majority of Docker containers will not function in a vacuum. Service dependencies, such as databases or independent web services that communicate through an API, are common in Docker containers.

You can use Docker Compose to group containers together in a single file. By declaring multiple containers in one file, the containers can communicate with their dependent services, and complicated container layouts stay organized.

Let’s set up a popular wiki application named BookStack to demonstrate such a scenario.

BookStack is popular wiki software known for its hierarchical layout (as opposed to a flat layout, such as MediaWiki’s) and its simplicity of use.

To work correctly, BookStack, like many other web apps, needs a separate database, as well as the information needed to communicate with it. Docker Compose excels at setting up such a scenario.

Creating the BookStack Docker Compose File

BookStack does not maintain an official Docker image; however, linuxserver.io maintains one on BookStack’s behalf. While there is a suggested Docker Compose configuration file on the image’s Docker Hub page, this article will build a fresh configuration file while explaining the concepts.

On the Docker host, you should:

  1. Create a BookStack folder first. If you followed along with the previous sections, you should already have a /containers folder. In it, create a folder named bookstack.

2. Inside the bookstack folder, create a blank Docker Compose configuration file named docker-compose.yaml.

In VS Code, create the Bookstack folder structure.

3. Open the Docker Compose configuration file and define two new containers: bookstack and bookstack_db (mariadb).

```yaml
version: "3.7"
services:
  bookstack:
    container_name: "bookstack"
    image: "ghcr.io/linuxserver/bookstack"
    ports:
      - "8080:80"
    depends_on:
      - "bookstack_db"
  bookstack_db:
    container_name: "bookstack_db"
    image: "mariadb"
    volumes:
      - "./db:/var/lib/mysql"
```

So far, this docker-compose.yaml file mostly uses concepts already introduced: you have two services (bookstack and bookstack_db), each with an image, and a bind mount for the database files. The bookstack container also has a port mapping from host port 8080 to internal port 80.

Because Docker containers have such a minimal overhead, it’s typical to create a distinct database container for each web application. This provides for a wider separation of responsibilities. This is in stark contrast to typical database arrangements, in which a single database may service hundreds of web applications.

The depends_on option is the one new item in the file above. This option tells Docker to start the containers in the specified order; here, it instructs Docker to launch the bookstack_db container first.

Using Environment Variables to Set Up Container Communication

The configuration file from the previous section is still incomplete. While you’ve defined two services (containers), they’re not communicating! The bookstack container doesn’t know how to talk to the bookstack_db container. Let’s fix that with environment variables.

The most frequent method of giving variables to Docker containers is via environment variables. These are runtime variables (or variables set in the docker-compose.yaml configuration file) that offer information about the container’s requirements.

The person who builds the Docker image defines the environment variables. They will change based on the Docker image you’re using, and you’ll need to consult the creator’s instructions to figure out which environment variables to use.

Environment variables may be defined in two ways: directly in the docker-compose.yaml file or in a separate file.

A separate file is usually the best option, particularly if the variables include sensitive information like passwords. A docker-compose.yaml file may be shared or even posted to a public GitHub repository. Keeping sensitive data in a separate file decreases the risk of an unintentional security compromise.
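As an illustration of the idea (using a throwaway file name, not one from this tutorial), you can keep a secret in its own file and lock the permissions down so only the owner can read it:

```shell
# Write a secret to a standalone env file, then restrict it to the owner.
echo 'DB_PASS=MySecurePassword' > example.env
chmod 600 example.env
# Show the resulting permission bits.
stat -c '%a' example.env
```

The compose file can then reference example.env without ever containing the secret itself.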

Create two environment variable files on the Docker host: one for the bookstack container and one for the bookstack_db container.

  1. Create a new file named bookstack.env in the /containers/bookstack folder with the following content:

```
# APP_URL is the IP address or hostname of your server. This tutorial uses homelab-docker.
APP_URL=http://homelab-docker:8080
# DB_HOST is the container name you gave the database container.
DB_HOST=bookstack_db
# DB_USER is specified in the bookstack_db environment file.
DB_USER=bookstack_user
# DB_PASS is likewise set in the bookstack_db environment file.
DB_PASS=MySecurePassword
```

2. Create a new file named bookstack_db.env in the /containers/bookstack folder with the following content:

```
# Keep the root password for the database private and secure.
MYSQL_ROOT_PASSWORD=MySecureRootPassword
# The database BookStack will use.
MYSQL_DATABASE=bookstack
# The user and password BookStack will use to connect.
MYSQL_USER=bookstack_user
MYSQL_PASSWORD=MySecurePassword
```

3. As a best practice, make sure that neither env file can be read by other users.

```shell
chmod 600 bookstack.env bookstack_db.env
```

Because both the bookstack.env and bookstack_db.env files contain sensitive data, you should restrict read access to them.

4. Update /containers/bookstack/docker-compose.yaml to reference the two environment files.

```yaml
version: "3.7"
services:
  bookstack:
    container_name: "bookstack"
    image: "ghcr.io/linuxserver/bookstack"
    ports:
      - "8080:80"
    env_file:
      - "./bookstack.env"
    depends_on:
      - "bookstack_db"
  bookstack_db:
    container_name: "bookstack_db"
    image: "mariadb"
    env_file:
      - "./bookstack_db.env"
    volumes:
      - "./db:/var/lib/mysql"
```

5. Using Docker Compose, launch the bookstack and bookstack_db containers.

```shell
sudo docker-compose up -d
```

Each of the steps in this section is shown below in VS Code.

Using VS Code to set up environment variables and the Docker Compose file

Monitoring Docker Compose Logs

The Docker engine and Docker Compose work together to conduct a variety of activities in the background. It’s useful to be able to keep track of what’s going on, particularly when dealing with numerous containers at once.

Use the logs command, for example, to keep track of the bookstack container. Once you see [services.d] done in the log output, you can navigate to the BookStack URL.

```shell
sudo docker-compose logs bookstack
```

Using the logs command in docker-compose

At this point, you should have a completely working wiki operating in its own Docker container, complete with its own database!

If you ever want to start your BookStack environment from scratch, run docker-compose down and delete the db folder inside the bookstack folder.

Docker Networking and Compose

Up to this point, you haven’t learned much about the communication and networking side of how containers work together. Let’s change that.

When you define multiple containers in a single docker-compose.yaml file, they’re all assigned to the same network (typically named name-of-parent-folder_default).

When you execute docker-compose up -d, you can see the network built for the containers, as seen below.

The docker-compose-created default network appears.

Docker creates internal DNS records for all containers assigned to the same network. That’s why you set the database host to bookstack_db in the environment variables earlier: the name bookstack_db is really a DNS record that resolves to the IP address of the database container.

You also don’t have to rely on Docker Compose to create networks automatically. You can manually define internal and external networks. Manually defining networks is ideal when you have a container that needs to communicate with a container defined in a different docker-compose.yaml file. You could expose the ports, or you can create a network that both containers join!

When you begin configuring networks explicitly, you must also explicitly declare the default network. As soon as you start specifying networks in Docker Compose, it stops creating the default network automatically.

Modify the docker-compose.yaml file in the bookstack folder to use an externally created network.

  1. Create the external network with docker network create my_external_network.

2. In docker-compose.yaml, define the external network:

```yaml
version: "3.7"
services:
  bookstack:
    container_name: "bookstack"
    image: "ghcr.io/linuxserver/bookstack"
    ports:
      - "8080:80"
    env_file:
      - "./bookstack.env"
    depends_on:
      - "bookstack_db"
    networks:
      - "my_external_network"
      - "bookstack_default"
  bookstack_db:
    container_name: "bookstack_db"
    image: "mariadb"
    env_file:
      - "./bookstack_db.env"
    volumes:
      - "./db:/var/lib/mysql"
    networks:
      - "bookstack_default"
networks:
  my_external_network:
    external: true
  bookstack_default:
```

3. Recreate the containers using docker-compose up -d. As illustrated below, your two containers are now connected to two networks.

The networks specified inside a docker-compose file are highlighted.

The bookstack container is now connected to a network that is specified outside. This enables you to construct a new container that converts the HTTP traffic from the bookstack to HTTPS before it exits Docker (referred to as a reverse-proxy).

Specifying a User to Run a Container

All Docker containers run as a sandboxed root user by default. This is the same as running a virtual machine logged in with the default Administrator account. While this isn’t usually an issue, it poses security concerns if the sandbox is ever breached.

File permissions are another difficulty with running as root. If you attempt to delete the database folder inside the bookstack folder, you’ll find that you can’t, since its contents are owned by root.

While not every image supports running as a non-root user, linuxserver.io images in particular provide environment variables to choose the user that runs inside the container. You can do this by setting PUID=1000 and PGID=1000 in the bookstack.env file.
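Added to the bookstack.env file, that looks like the fragment below (PUID and PGID are the variable names linuxserver.io images use for this; confirm against the image’s documentation):

```
# Run the application inside the container as user 1000, group 1000.
PUID=1000
PGID=1000
```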

The default user ID and group ID for the first user in Ubuntu are 1000:1000 (though yours may differ). The article A Windows Guy in a Linux World: Users and File Permissions has further information on user IDs and group IDs.
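To check your own IDs on the Docker host, the id command prints them:

```shell
# Print the current user's numeric user ID and primary group ID.
id -u
id -g
```

On a fresh Ubuntu install, the first user created typically sees 1000 for both.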

You can also use the user option in docker-compose to force a UID and GID, but this is not advised, since most containers will not operate properly when forced to run as a different user.

Choosing a Restart Policy

If you’d like containers built with Docker Compose to restart on failure, use the restart policy by adding a restart: <option> parameter under the container settings in docker-compose.yaml.

```yaml
restart: "no"            # never restart automatically (the default)
restart: always          # always restart the container if it stops
restart: on-failure      # restart only if the container exits with an error
restart: unless-stopped  # always restart unless explicitly stopped
```

When this option is set, containers will automatically restart on failure, which helps preserve uptime in the event of a power outage.
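In context, the option sits alongside the other container settings. A sketch for the earlier caddy container:

```yaml
services:
  caddy:
    container_name: "caddy"
    image: "caddy:latest"
    restart: unless-stopped  # restart automatically unless explicitly stopped
    ports:
      - "80:80"
```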

Manually Setting DNS Entries for Containers

Docker has a “hosts file,” just as Windows and Linux do. You can force a hostname to resolve to a certain IP address by using the extra_hosts option in a config file. This can be handy when you have DNS constraints, such as split DNS or a test server you want to talk to temporarily.

```yaml
extra_hosts:
  - "somehost:x.x.x.x"
  - "otherhost:x.x.x.x"
```

Running Commands in the Container

You can use the docker-compose run command to execute commands inside a container after it has launched. For example, you might want to open a Bash terminal inside your bookstack container. To do that, run the command below.

```shell
sudo docker-compose run bookstack bash
```

Conclusion

You should now have enough knowledge to follow along with the bulk of docker-compose tutorials available online. This understanding will greatly enhance your ability to go into the realm of Docker and Infrastructure as Code web app development.

