Mastering Docker: Understanding Docker Engine and Docker Images

Unlocking Docker Potential: Deep Dive into Docker Engine and Images

This guide offers an in-depth exploration of the fundamental elements of Docker, with practical examples, valuable tips, and expert insights to help you realize Docker's full potential.

Docker Engine

The Docker Engine is a crucial Docker platform component responsible for constructing, operating, and maintaining containers, providing a runtime environment and enabling user interaction via the Docker command-line interface.

Docker Daemon (dockerd)

The Docker Daemon, also known as dockerd, is a background service on the host system that manages Docker containers and images, listening to Docker API requests.

As the Docker Engine's server process, dockerd manages container lifecycle operations, scheduling, and interaction with the host operating system's kernel, running as a daemon in the background.

Example: To start the Docker Daemon, you typically don't need to interact with it directly. It runs in the background when you start Docker on your system.

APIs (Application Programming Interfaces)

Docker Engine provides APIs for communication between client applications and Docker daemon, enabling developers to control containers, images, networks, and volumes, integrating Docker functionality into workflows.
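As a quick sketch, the same API that the CLI uses can be queried directly over the daemon's Unix socket. This assumes Docker is installed and listening on the default socket path; the guard below lets the snippet run harmlessly on machines without Docker.

```shell
# Query the Docker Engine API over its Unix socket (default path assumed)
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
    # Same data the CLI shows with `docker ps`, as raw JSON
    curl -s --unix-socket "$SOCK" http://localhost/containers/json
else
    echo "Docker daemon socket not found at $SOCK"
fi
```

The CLI commands discussed below are thin wrappers over API endpoints like this one.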

Docker Command-Line Interface (CLI)

The Docker CLI is a user-friendly command-line tool that enables interaction with Docker Engine, allowing users to build container images, manage networks, and inspect container states.

Example: To run a simple Nginx web server container, follow the steps below.

Running an Nginx Container

Follow these steps to run an Nginx web server in a Docker container, using the official Nginx image from Docker Hub.

  • Step 1: Pull the Nginx Image

  • Step 2: Run the Nginx Container

  • Step 3: Access Nginx in a Web Browser

Step 1: Pull the Nginx Image

docker pull nginx

Step 2: Run the Nginx Container

Use the docker run command to start a container from the Nginx image you just pulled

docker run -d -p 80:80 --name my-nginx nginx
  • '-d': Detaches the container and runs it in the background.

  • '-p 80:80': Maps port 80 on your host to port 80 in the container, allowing you to access the Nginx web server from the web browser on your host.

  • '--name my-nginx': Gives the container a unique name ("my-nginx").

Step 3: Access Nginx in a Web Browser

Open http://localhost in a web browser to see the default Nginx welcome page on your local machine, or use the server's IP address if Docker is running on a remote server.

💡
Customize the Nginx configuration by mounting a local configuration file into the container or by creating a custom Docker image with your configuration changes.
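For the custom-image route, a minimal sketch might look like this (nginx.conf here is a hypothetical configuration file sitting next to the Dockerfile):

```dockerfile
# Hypothetical custom Nginx image with a baked-in configuration
FROM nginx:latest
COPY nginx.conf /etc/nginx/nginx.conf
```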

The CLI allows users to interact with Docker, build, run, manage, and troubleshoot containers using commands like docker run, docker build, and docker ps.

💡
This command instructs Docker to launch an Nginx container in detached mode ('-d') and map host port 80 to container port 80.

Docker Objects

Docker Engine is in charge of building and maintaining numerous Docker objects, such as:

  • Docker Image

  • Containers

  • Networks

  • Storage

  • Security

  • Monitoring

  • Services

Docker Image

Docker Engine offers tools for creating and managing Docker images, allowing users to create or modify existing images, store them in a local cache, and access them from Docker registries.

Example: To build a custom Docker image from a Dockerfile, create a file named Dockerfile with the following content

FROM ubuntu:20.04
RUN apt-get update && apt-get install -y curl

Build the image using this command

docker build -t my-ubuntu .

This creates a Docker image named my-ubuntu based on Ubuntu 20.04 with the curl package installed.
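To sanity-check the image, you can start a throwaway container from it. The guard below is only there so the snippet degrades gracefully on machines without Docker.

```shell
if command -v docker >/dev/null 2>&1; then
    # --rm removes the container once the command exits
    docker run --rm my-ubuntu curl --version
else
    echo "Docker is not installed on this machine"
fi
```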

Container

Docker Engine uses a container runtime to execute containers on the host system, typically containerd (with runc) by default, though alternative runtimes can also be used.

Example: When you run a container, Docker Engine uses its runtime to create an isolated environment. For instance, if you run a Python application, it ensures that the Python interpreter and necessary libraries are available within the container.

Networks

Docker Engine offers built-in networking for containers to communicate with each other and external networks, allowing users to create custom networks for better isolation and control.

Example: Create a custom Docker network

docker network create my-network

Run two containers on this network and allow them to communicate

docker run -d --network my-network --name container1 ubuntu
docker run -d --network my-network --name container2 ubuntu

container1 and container2 can communicate over the my-network bridge network.

Storage

Docker Engine manages container storage through layered file systems and offers options such as volumes and bind mounts for persisting data beyond a container's lifecycle.

Example: When you create a container, Docker Engine creates a read-only layer from the image and a writable layer for the container. Any changes made within the container are stored in this writable layer, preserving the image's original state.

💡
Docker Engine manages container storage using layered file systems.
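The copy-on-write idea behind those layers can be sketched with plain directories (a rough analogy only, not real overlay storage): the "image layer" stays untouched while changes land in a separate "container layer".

```shell
# Rough analogy for layered storage using plain directories
workdir=$(mktemp -d)
mkdir -p "$workdir/image-layer" "$workdir/container-layer"

# The read-only image layer ships a file
echo "original content" > "$workdir/image-layer/config.txt"

# "Modifying" the file inside the container first copies it up into the
# writable layer; the image layer itself is never changed
cp "$workdir/image-layer/config.txt" "$workdir/container-layer/config.txt"
echo "modified in container" > "$workdir/container-layer/config.txt"

cat "$workdir/image-layer/config.txt"      # still: original content
cat "$workdir/container-layer/config.txt"  # modified in container
rm -rf "$workdir"
```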

Security

Docker Engine offers security features like container isolation, resource usage control, and security scanning, ensuring the protection of containers and the host system.

Example: Docker uses namespaces and cgroups to isolate containers; processes inside a container cannot see or interfere with processes outside its namespace, which enhances security and isolation.
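Resource usage control is exposed through flags on docker run. A hedged sketch: the limit values and the container name limited-nginx are arbitrary examples, and the guard keeps the snippet runnable on machines without Docker.

```shell
if command -v docker >/dev/null 2>&1; then
    # Cap the container at 256 MB of memory and half a CPU core
    docker run -d --memory=256m --cpus=0.5 --name limited-nginx nginx
else
    echo "Docker is not installed on this machine"
fi
```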

Monitoring

Docker Engine provides logging and monitoring features for users to track container behavior, troubleshoot issues, and integrate with third-party monitoring and logging solutions.

Example: You may use the following command to inspect the logs of a running container

docker logs <container_id>
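A couple of commonly useful variations (my-nginx refers to the container started earlier; the guard keeps the snippet runnable on machines without Docker):

```shell
if command -v docker >/dev/null 2>&1; then
    # Show only the 50 most recent log lines, each prefixed with a timestamp
    docker logs --tail 50 --timestamps my-nginx
else
    echo "Docker is not installed on this machine"
fi
```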

Services

💡
Services are a higher-level abstraction in Docker Swarm.

Docker Engine's Swarm Mode is a native clustering and orchestration feature that simplifies container application deployment, enabling users to manage a swarm of Docker nodes to scale multi-container applications.

Example: To create a Docker Swarm and deploy services, you would initialize a Swarm on one node, add more nodes as workers or managers, and then deploy services using Docker Compose or the Docker CLI.
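As a sketch, that flow on a single machine might look like this (the service name web, the replica count, and the ports are illustrative; the guard keeps the snippet runnable on machines without Docker):

```shell
if command -v docker >/dev/null 2>&1; then
    # Turn this node into a single-node swarm manager
    docker swarm init
    # Deploy an Nginx service with three replicas behind host port 8080
    docker service create --name web --replicas 3 -p 8080:80 nginx
else
    echo "Docker is not installed on this machine"
fi
```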

Docker Image

Docker images are lightweight, executable packages containing code, a runtime environment, libraries, and system tools. They are used to create Docker containers, portable environments that run applications consistently across different systems.

Creating Docker Images

Docker images are generally produced from a Dockerfile, a text file containing a collection of instructions for generating the image. The steps involved in creating a Docker image:

  • Selecting a base image (Linux distribution)

  • Installing dependencies

  • Application code copying

  • Configuring environment variables

  • Setting up runtime commands

Selecting a base image

  • A base image serves as the foundation for your Docker image.

  • It includes the necessary operating system and libraries.

  • Official Linux distribution images (Ubuntu, Alpine, CentOS) or customized images suited to certain use cases (Python, Node.js) are popular options.

Using a smaller base image, such as Alpine Linux, can reduce image size and enhance efficiency, which is especially important for production use.

# Use Alpine Linux as the base image
FROM alpine:3.14

Installing dependencies

  • Installing dependencies ensures that your application has access to the required runtime resources.

  • Using package managers like 'apt' (for Debian-based systems) or 'apk' (for Alpine Linux), install additional software or libraries for your application after creating a base image.

# Install Python and required packages using the package manager
RUN apk add --no-cache python3 py3-pip

Application code copying

  • Use the 'COPY' instruction to add your application code to the Docker image. This step copies files from your local build context into the container's file system.

  • To reduce image size and speed up builds, copy only the essential files; a .dockerignore file can exclude the rest.

# Copy the application code into the container
COPY . /app
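A .dockerignore file in the build context is the usual way to keep unnecessary files out of the image; an illustrative example:

```
# .dockerignore: paths excluded from the build context
.git
__pycache__/
*.pyc
.env
node_modules/
```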

Configuring environment variables

  • Environment variables are used to configure your application's runtime behavior. They can be specified directly in the Dockerfile or as parameters supplied to the container during runtime.

  • Environment variables can be used to configure database connections, API keys, and application settings.

# Set an environment variable
ENV API_KEY=myapikey
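At runtime, processes inside the container read these values from the environment like any other variable; a minimal shell sketch (the fallback value not-set is illustrative):

```shell
# Read API_KEY from the environment, falling back to a placeholder
API_KEY="${API_KEY:-not-set}"
echo "Using API key: $API_KEY"
```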

Setting up runtime commands

  • The 'CMD' or 'ENTRYPOINT' instruction specifies the command to run when the container starts. This might be a shell command, a script, or the executable for the main program.

  • When executing the container, use 'CMD' to specify a command that can be overridden. To establish a fixed entry point with parameters that may be appended at runtime, use 'ENTRYPOINT'.

# Define the command to run when the container starts
CMD ["python3", "app.py"]
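Put together, the snippets above form a complete, illustrative Dockerfile. WORKDIR is added so the command runs from the copied code, and python3 is the interpreter name the Alpine package installs.

```dockerfile
# Illustrative Dockerfile combining the steps above
FROM alpine:3.14
RUN apk add --no-cache python3 py3-pip
WORKDIR /app
COPY . /app
ENV API_KEY=myapikey
CMD ["python3", "app.py"]
```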

💡
Docker images enable developers to package applications into portable units, offering isolation, portability, and resource efficiency. They create consistent containers across different environments with your application, dependencies, and configurations.

to be continued...

Summary
Docker Engine is the core Docker component, enabling container creation and management and offering a user-friendly CLI, resource management, networking, storage, security, and monitoring features. Docker image creation involves a Dockerfile, which includes choosing a base image, installing dependencies, copying application code, configuring environment variables, and setting up runtime commands. This process creates a self-contained package that runs consistently across different systems. Docker images offer advantages like isolation, portability, and resource efficiency, but require careful configuration and management.

"Mastering Docker: Understanding Docker Engine and Docker Images" is a comprehensive and invaluable resource tailored for developers, DevOps engineers, and system administrators. We hope you find this guide insightful and useful in your journey with Docker.

Stay tuned for the upcoming articles in the series, where we'll discuss more interesting topics related to Docker. Subscribe to our channel to ensure you don't miss any part of this enlightening journey!

Thank you for reading our blog. Our top priority is your success and satisfaction. We are ready to assist with any questions or additional help.

Warm regards,

Kamilla Preeti Samuel,

Content Editor

ByteScrum Technologies Private Limited! 🙏