# Docker Containers

## Introduction
Docker containers are lightweight, standalone, executable packages that include everything needed to run an application: code, runtime, system tools, libraries, and settings. Containers isolate software from its surroundings, ensuring it behaves the same way across development, staging, and production environments.
Unlike traditional virtual machines, containers share the host system's OS kernel, making them more efficient and faster to start. This fundamental difference has revolutionized how developers build, ship, and run applications.
## Understanding Containers

### What is a Container?
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
### Containers vs. Virtual Machines
Containers differ significantly from virtual machines (VMs) in their architecture and resource utilization.
| Feature | Containers | Virtual Machines |
| --- | --- | --- |
| Size | Megabytes | Gigabytes |
| Startup time | Seconds | Minutes |
| OS | Shares host OS kernel | Requires full guest OS |
| Isolation | Process-level | Hardware-level |
| Resource overhead | Minimal | Substantial |
## Docker Container Basics

### Container Lifecycle
A Docker container goes through several stages during its lifecycle:
- Created: Container is created but not started
- Running: Container is running with all processes active
- Paused: Container processes are temporarily stopped
- Stopped: Container processes are stopped but the container exists
- Deleted: Container is removed and no longer exists
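Each of these states maps onto a Docker CLI command. As a quick sketch, here is how a hypothetical container named `demo` moves through its lifecycle:

```bash
docker create --name demo nginx   # Created: exists, but nothing is running
docker start demo                 # Running
docker pause demo                 # Paused: processes are frozen in place
docker unpause demo               # Running again
docker stop demo                  # Stopped: processes ended, container kept
docker rm demo                    # Deleted: container no longer exists
```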
### Key Container Commands
Here are some essential Docker commands for working with containers:
```bash
# Create and start a container
docker run -d --name my-container nginx

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# Stop a container
docker stop my-container

# Start a stopped container
docker start my-container

# Remove a container
docker rm my-container
```
## Creating Your First Container
Let's walk through creating and running your first Docker container:
### Step 1: Pull an Image
First, we'll pull the official Nginx web server image:
```bash
docker pull nginx
```
Output:
```text
Using default tag: latest
latest: Pulling from library/nginx
a2abf6c4d29d: Pull complete
a9edb18cadd1: Pull complete
589b7251471a: Pull complete
186b1aaa4aa6: Pull complete
b4df32aa5a72: Pull complete
a0bcbecc962e: Pull complete
Digest: sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
```
### Step 2: Run a Container
Now, let's run a container from the Nginx image:
```bash
docker run -d -p 8080:80 --name my-nginx nginx
```
This command:

- `-d`: Runs the container in detached mode (background)
- `-p 8080:80`: Maps port 8080 on your host to port 80 in the container
- `--name my-nginx`: Names your container "my-nginx"
- `nginx`: Specifies the image to use
### Step 3: Check Running Containers
Verify that your container is running:
```bash
docker ps
```
Output:
```text
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                  NAMES
3a7c37fb5a93   nginx   "/docker-entrypoint.…"   5 seconds ago   Up 4 seconds   0.0.0.0:8080->80/tcp   my-nginx
```
### Step 4: Access Your Container
Now you can access the Nginx welcome page by visiting `http://localhost:8080` in your browser.
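You can also check from the command line:

```bash
curl http://localhost:8080
```

This should print the HTML of the default Nginx welcome page.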
### Step 5: View Container Logs
To see the container's logs:
```bash
docker logs my-nginx
```
Output:
```text
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
172.17.0.1 - - [11/Mar/2025:14:30:22 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/98.0.4758.102 Safari/537.36"
```
### Step 6: Stop and Remove the Container
When you're done, stop and remove the container:
```bash
docker stop my-nginx
docker rm my-nginx
```
## Working with Container Data

### Volumes
Containers are ephemeral by nature, meaning any data created inside a container is lost when the container is removed. Docker volumes solve this problem by providing persistent storage.
```bash
# Create a volume
docker volume create my-data

# Run a container with a mounted volume
docker run -d --name my-nginx -v my-data:/usr/share/nginx/html -p 8080:80 nginx
```
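To see the persistence in action, here is a quick sketch (the file name is illustrative): write a file into the volume, remove the container, and confirm the file survives in a new container.

```bash
# Write a file into the volume through the running container
docker exec my-nginx sh -c 'echo "hello" > /usr/share/nginx/html/test.html'

# Remove the container; the volume is untouched
docker stop my-nginx && docker rm my-nginx

# Mount the same volume into a new container; the file is still there
docker run -d --name my-nginx-2 -v my-data:/usr/share/nginx/html -p 8080:80 nginx
docker exec my-nginx-2 cat /usr/share/nginx/html/test.html   # prints "hello"
```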
### Bind Mounts
Bind mounts link a container directory to a host directory, ideal for development:
```bash
# Mount a local directory into the container
# (quoting $(pwd) guards against spaces in the path)
docker run -d --name my-nginx -v "$(pwd)/html":/usr/share/nginx/html -p 8080:80 nginx
```
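As a minimal sketch, assuming an html/ directory under your current working directory, edits on the host are visible in the container immediately:

```bash
mkdir -p html
echo '<h1>Hello from a bind mount</h1>' > html/index.html
docker run -d --name my-nginx -v "$(pwd)/html":/usr/share/nginx/html -p 8080:80 nginx

curl http://localhost:8080   # serves your local index.html
echo '<h1>Updated</h1>' > html/index.html
curl http://localhost:8080   # reflects the change, no restart needed
```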
## Creating a Custom Container Image

### Writing a Dockerfile
A Dockerfile is a text document containing instructions to build a Docker image:
```dockerfile
# Use the official Node.js image as the base
FROM node:16

# Set the working directory
WORKDIR /app

# Copy package.json and package-lock.json first, so the
# npm install layer is cached when only source code changes
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy application code
COPY . .

# Expose port
EXPOSE 3000

# Command to run the application
CMD ["npm", "start"]
```
### Building the Image
Build your custom image from the Dockerfile:
```bash
docker build -t my-node-app .
```
### Running Your Custom Container
Run a container from your custom image:
```bash
docker run -d -p 3000:3000 --name my-app my-node-app
```
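To confirm the app started cleanly, tail the container's logs (press Ctrl+C to stop following):

```bash
docker logs -f my-app
```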
## Practical Example: Multi-Container Application
Let's create a simple application with a Node.js backend and MongoDB database:
### Step 1: Create Network

```bash
docker network create my-app-network
```
### Step 2: Run MongoDB Container
```bash
docker run -d \
  --name mongodb \
  --network my-app-network \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=password \
  mongo
```
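To verify that MongoDB is up and attached to the network, check its logs and the network's container list:

```bash
# Look for a "waiting for connections" message near the end
docker logs mongodb 2>&1 | tail -n 5

# The "Containers" section should list mongodb
docker network inspect my-app-network
```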
### Step 3: Create Node.js Dockerfile
```dockerfile
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
```
### Step 4: Create app.js
```javascript
const express = require('express');
const mongoose = require('mongoose');

const app = express();

// Connect to MongoDB, using the container name as the hostname
// (containers on the same Docker network resolve each other by name)
mongoose.connect('mongodb://admin:password@mongodb:27017/admin', {
  useNewUrlParser: true,
  useUnifiedTopology: true
})
  .then(() => console.log('Connected to MongoDB'))
  .catch(err => console.error('Could not connect to MongoDB', err));

// Define a simple schema
const Note = mongoose.model('Note', new mongoose.Schema({
  text: String,
  createdAt: { type: Date, default: Date.now }
}));

app.use(express.json());

// Routes
app.get('/', (req, res) => {
  res.send('API is running');
});

app.get('/notes', async (req, res) => {
  const notes = await Note.find();
  res.send(notes);
});

app.post('/notes', async (req, res) => {
  const note = new Note({ text: req.body.text });
  await note.save();
  res.send(note);
});

// Start server
const port = 3000;
app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
```
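The Dockerfile in Step 3 copies package*.json, so the build context also needs a package.json. A minimal sketch, written here as a shell heredoc (the version pins are illustrative; a mongoose 5.x release matches the connection options used above):

```bash
cat > package.json <<'EOF'
{
  "name": "notes-app",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.17.1",
    "mongoose": "^5.13.0"
  }
}
EOF
```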
### Step 5: Build and Run Node.js Container
```bash
docker build -t notes-app .

docker run -d \
  --name notes-api \
  --network my-app-network \
  -p 3000:3000 \
  notes-app
```
Now you have a multi-container application with a MongoDB database and Node.js backend, connected through a Docker network.
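To verify the two containers can talk to each other, exercise the API with curl (using the port mapping above):

```bash
# Create a note
curl -X POST -H "Content-Type: application/json" \
  -d '{"text": "Hello from Docker"}' \
  http://localhost:3000/notes

# List notes; the response should include the note you just created
curl http://localhost:3000/notes
```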
## Container Best Practices
- Keep containers small: Use minimal base images like Alpine Linux.
- One process per container: Follow the single responsibility principle.
- Use non-root users: Enhance security by avoiding root privileges.
- Leverage Docker Compose: For multi-container applications.
- Implement health checks: Monitor container health (see the sketch after this list).
- Tag images properly: Never rely on the "latest" tag in production.
- Use volumes for persistent data: Avoid storing data in containers.
- Clean up regularly: Remove unused containers, images, and volumes.
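For example, the health-check recommendation can be applied at `docker run` time without changing the image. The flags below are standard Docker options; the probe command itself is illustrative and assumes curl is available in the image:

```bash
docker run -d --name web \
  --health-cmd='curl -f http://localhost/ || exit 1' \
  --health-interval=30s \
  --health-timeout=5s \
  --health-retries=3 \
  nginx

# After the first few checks, STATUS shows (healthy) or (unhealthy)
docker ps --filter name=web
```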
Example of using a non-root user in a Dockerfile:
```dockerfile
FROM node:16-alpine

# Create app directory
WORKDIR /app

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy app source
COPY . .

# Create a non-root user and group
RUN addgroup -g 1001 appuser && \
    adduser -D -u 1001 -G appuser appuser

# Change ownership of the app directory
RUN chown -R appuser:appuser /app

# Switch to the non-root user
USER appuser

EXPOSE 3000

CMD ["npm", "start"]
```
## Summary
Docker containers provide a consistent, lightweight, and efficient way to package and run applications. They solve the "it works on my machine" problem by ensuring consistency across development, testing, and production environments.
In this guide, we've covered:
- What containers are and how they differ from VMs
- Basic Docker container commands
- Creating and running your first container
- Working with container data using volumes
- Building custom container images with Dockerfiles
- Creating multi-container applications
- Container best practices
## Additional Resources
- Learn More:
  - Docker Official Documentation
  - Docker Hub - Find official and community images
- Practice Exercises:
  - Containerize your favorite web application
  - Create a multi-container setup with a frontend, backend, and database
  - Build a CI/CD pipeline that uses containers
- Advanced Topics:
  - Docker Compose for multi-container applications
  - Docker Swarm for container orchestration
  - Kubernetes for production-grade container orchestration