Have you ever found yourself saying "but it works on my machine" when your code fails in production? Or spent hours setting up development environments for new team members? Maybe you've struggled with dependency conflicts between different projects?
Docker solves these problems by packaging your application and all its dependencies into a standardized unit called a container. In this tutorial, I'll show you how to use Docker to create, run, and manage containers. By the end, you'll have all the tools you need to containerize your own applications.
What is Docker?
Docker is a platform that enables developers to build, package, and run applications in containers. Containers are lightweight, standalone executable packages that include everything needed to run an application: code, runtime, system tools, libraries, and settings.
The beauty of containers is that they run consistently regardless of the environment – be it your local machine, a test server, or production cloud infrastructure. This consistency eliminates the "it works on my machine" problem and makes deployment much more predictable.
Getting Started with Docker
Before diving into commands, let's make sure you have Docker installed. Head over to Docker's website to download it for your machine.
After installation, verify everything is working by opening a terminal and running:
docker --version
Let's also check that the Docker daemon is running:
docker info
If you see system information rather than an error, you're ready to go!
Running Your First Container
Let's start by running an existing Docker image. The most basic example is the "Hello World" container:
docker run hello-world
When you run this command, Docker will:
- Look for the `hello-world` image locally
- If it doesn't find it, download it from Docker Hub (the default public registry)
- Create a container from that image and run it
- Output a message explaining what just happened
- Exit the container when the process completes
That's the basic workflow of Docker in a nutshell! Let's try something a bit more useful – running a web server:
docker run --name my-nginx -p 8080:80 -d nginx
This command:
- Creates a container named `my-nginx`
- Maps port 8080 on your host to port 80 in the container
- Runs it in detached mode (`-d`) so it runs in the background
- Uses the official `nginx` image
Now open your browser and navigate to `http://localhost:8080`. You should see the Nginx welcome page. Congratulations – you're running a web server in a container!
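If you prefer the terminal, you can hit the same server with curl (assuming it's installed on your host) – you should get the welcome page's HTML back:

curl http://localhost:8080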
Understanding Docker Images
A Docker image is a read-only template containing instructions for creating a Docker container. Think of it as a snapshot of a file system plus some metadata about how to run it.
Pulling Images
Before running a container, you need an image. While `docker run` will automatically pull images if they don't exist locally, you can also pull images explicitly:
docker pull ubuntu:22.04
This pulls the Ubuntu 22.04 image from Docker Hub. The part after the colon is called a tag, which usually specifies the version.
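If you omit the tag entirely, Docker assumes `latest`, which simply points at whatever the publisher most recently tagged as such – pinning an explicit tag keeps your setup reproducible:

docker pull ubuntu        # same as ubuntu:latest
docker pull ubuntu:22.04  # pinned to a specific release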
Listing Images
To see what images you have locally:
docker images
This shows all images, their tags, sizes, and when they were created.
Removing Images
To remove an image you no longer need:
docker rmi nginx
Note that you can't remove images that are being used by containers unless you force it, which is generally not recommended.
Building Your Own Images with Dockerfiles
Pre-built images are great, but the real power of Docker comes from creating your own images tailored to your applications.
What is a Dockerfile?
A Dockerfile is a text document containing commands that Docker uses to assemble an image. It's like a recipe for creating your Docker image.
Let's create a simple Node.js application and containerize it.
First, create a new directory for your project:
mkdir docker-node-example && cd docker-node-example
Create a simple `app.js` file:
const http = require('http');

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello from my containerized Node.js app!\n');
});

const port = 3000;
server.listen(port, () => {
  console.log(`Server running at http://localhost:${port}/`);
});
Now create a `package.json` file:
{
  "name": "docker-node-example",
  "version": "1.0.0",
  "description": "A simple Node.js app for Docker demo",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {}
}
Now, let's create the Dockerfile:
# Use an official Node.js LTS image as our base
FROM node:20-alpine
# Set the working directory
WORKDIR /app
# Copy package.json and package-lock.json first to leverage Docker cache
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Expose the port the app will run on
EXPOSE 3000
# Command to run the application
CMD ["npm", "start"]
Let me explain what each line does:
- `FROM` specifies the base image
- `WORKDIR` sets the working directory inside the container
- `COPY` copies files from your host to the container
- `RUN` executes commands during the build
- `EXPOSE` informs Docker which ports the container listens on
- `CMD` provides the default command to run when starting the container
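One thing worth adding alongside the Dockerfile is a `.dockerignore` file, so that `COPY . .` doesn't drag local clutter into the image. A minimal version for this project might look like this:

node_modules
npm-debug.log
.git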
Building Your Image
Now let's build the image from our Dockerfile:
docker build -t my-node-app .
The `-t` flag tags the image with a name, and the `.` at the end sets the build context to the current directory, which is also where Docker looks for the Dockerfile by default.
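You can also bake a version into the tag, which makes it easier to keep several builds around side by side:

docker build -t my-node-app:1.0 .

Without an explicit version, Docker tags the image as `my-node-app:latest`.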
Running Your Custom Image
Once the build completes, run your container:
docker run --name node-app -p 3000:3000 -d my-node-app
Navigate to `http://localhost:3000` in your browser, and you should see your Node.js application's output.
Managing Containers
Now that we've created some containers, let's look at how to manage them.
Listing Containers
To see all running containers:
docker ps
To see all containers, including stopped ones:
docker ps -a
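The default output is quite wide. If you only care about a few columns, `docker ps` accepts a Go-template `--format` flag:

docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"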
Stopping and Starting Containers
To stop a running container:
docker stop node-app
To start a stopped container:
docker start node-app
Executing Commands in a Container
Sometimes you need to run commands inside a running container. For example, to open a shell in the Node.js container:
docker exec -it node-app sh
The `-it` flags provide an interactive terminal. Once inside, you can run commands as if you were logged into a server. Type `exit` to leave the container's shell.
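You don't have to open a full shell – `docker exec` can also run a single command, which is handy for quick checks:

docker exec node-app ls /app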
Removing Containers
To remove a container (it must be stopped first unless you use `-f`):
docker rm node-app
To stop and remove in one command:
docker rm -f node-app
Docker Compose for Multiple Services
Real-world applications often have multiple components – a web server, a database, caching services, etc. Docker Compose makes it easy to define and run multi-container applications.
What is Docker Compose?
Docker Compose is a tool for defining and running multi-container Docker applications. It uses a YAML file to configure your application's services, networks, and volumes.
Let's create a simple application with a Node.js API and a MongoDB database.
First, create a new project:
mkdir docker-compose-example && cd docker-compose-example
Create a simple Express.js API that connects to MongoDB in `app.js`:
const express = require('express');
const mongoose = require('mongoose');

const app = express();
const port = process.env.PORT || 3000;

// Connect to MongoDB
mongoose.connect('mongodb://mongo:27017/example')
  .then(() => console.log('MongoDB Connected'))
  .catch(err => console.log(err));

// Define a simple model
const Item = mongoose.model('Item', { name: String, date: { type: Date, default: Date.now } });

// Routes
app.get('/', (req, res) => {
  res.send('Hello from Docker Compose!');
});

app.get('/items', (req, res) => {
  Item.find().then(items => {
    res.json(items);
  }).catch(err => {
    res.status(500).json({ error: err.message });
  });
});

app.get('/add-item', (req, res) => {
  const item = new Item({ name: 'Item ' + Math.floor(Math.random() * 100) });
  item.save().then(() => {
    res.send('Item added!');
  }).catch(err => {
    res.status(500).json({ error: err.message });
  });
});

app.listen(port, () => {
  console.log(`Server running at http://localhost:${port}`);
});
Create a `package.json` file:
{
  "name": "docker-compose-example",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.18.2",
    "mongoose": "^7.6.0"
  }
}
Create a Dockerfile for the Node.js app:
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Now, create a `docker-compose.yml` file that defines both services:
version: "3.8"
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - mongo
    environment:
      - NODE_ENV=development
    volumes:
      - ./:/app
      - /app/node_modules
  mongo:
    image: mongo:6
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
volumes:
  mongodb_data:
This file defines:
- Two services: `app` (our Node.js application) and `mongo` (MongoDB database)
- Port mappings for both services
- A startup dependency (`depends_on`) so `mongo` is started before `app`
- Environment variables for the app
- Volume mappings for code and database persistence
Running Docker Compose
To start all the services defined in your `docker-compose.yml`:
docker-compose up -d
The `-d` flag runs it in detached mode (in the background).
Now you can:
- Visit `http://localhost:3000` to see your app
- Visit `http://localhost:3000/add-item` to add random items to MongoDB
- Visit `http://localhost:3000/items` to see all items in the database
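The same endpoints work from the terminal if you prefer curl:

curl http://localhost:3000/add-item
curl http://localhost:3000/items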
Managing Compose Applications
To see the logs from all services:
docker-compose logs
Or for a specific service:
docker-compose logs app
To stop all services but keep the containers:
docker-compose stop
To stop and remove containers, networks, and volumes:
docker-compose down
If you want to remove volumes too:
docker-compose down -v
Docker Administration Tips
Here are some additional commands and tips for working with Docker:
Viewing Container Logs
To see the logs from a container:
docker logs node-app
Add `-f` to follow the logs in real time:
docker logs -f node-app
Inspecting Containers
To see detailed information about a container:
docker inspect node-app
This returns a JSON object with all the configuration details.
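The full JSON is a lot to read, so `docker inspect` also accepts a Go-template `--format` flag to extract a single field – for example, the container's IP address on the default bridge network:

docker inspect --format '{{.NetworkSettings.IPAddress}}' node-app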
Checking Container Resource Usage
To see CPU, memory, and network usage:
docker stats
Cleaning Up Docker Resources
Over time, you'll accumulate unused images, containers, and volumes. Here's how to clean them up:
Remove all stopped containers:
docker container prune
Remove all unused images:
docker image prune
Remove all unused volumes:
docker volume prune
Or, to clean everything at once:
docker system prune
Add the `-a` flag to remove unused images as well:
docker system prune -a
Be careful with these commands – they permanently delete resources!
Common Docker Gotchas and Tips
Here are some issues you might run into and how to solve them:
Container File Permissions
Files created inside a container might be owned by `root`, causing permission issues when they're mounted to your host. To fix this, run the container as a non-root user – the official Node.js images already include a `node` user (UID 1000) you can switch to in your Dockerfile:
FROM node:20-alpine

# Set the working directory
WORKDIR /app

# Copy files with correct ownership (the "node" user is built into the official images)
COPY --chown=node:node . .

# Switch to the non-root user
USER node

# Rest of your Dockerfile...
Container Networking
If containers can't see each other, check that they're on the same network. Docker Compose handles this automatically: all the services in a compose file join a shared default network and can reach each other by service name, which is how our `app` service found `mongo`. If you're running containers directly with `docker run`, you can create a custom network:
docker network create my-network
And then use it when running containers:
docker run --network my-network --name app1 my-app
docker run --network my-network --name app2 my-database
In the second container, you can now reach the first using its name (`app1`).
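You can verify the connection with a quick one-off command – this assumes the `app2` image includes ping, which most (but not all) base images do:

docker exec app2 ping -c 1 app1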
Making Your Containers Robust
For production, make sure your containers handle crashes gracefully:
Set up proper signal handling in your app so it shuts down gracefully when Docker sends a SIGTERM signal, as sketched below.
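Here's a minimal sketch of what that handling might look like, reusing the Express setup from earlier – the exact cleanup steps will depend on your app:

const express = require('express');

const app = express();
const port = process.env.PORT || 3000;

const server = app.listen(port, () => {
  console.log(`Server running at http://localhost:${port}`);
});

// Docker sends SIGTERM on "docker stop"; finish in-flight requests, then exit
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully');
  server.close(() => {
    // Close other resources here (e.g. the MongoDB connection) before exiting
    process.exit(0);
  });
});

One related gotcha: signals reach your process more reliably when the container runs node directly (`CMD ["node", "app.js"]`) rather than through npm, which doesn't always forward them.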
Consider using health checks:
services:
  app:
    # ...
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
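Two caveats about this healthcheck: it assumes the app actually exposes a `/health` endpoint (our example doesn't yet), and that curl exists inside the image – the Alpine-based Node images don't include it by default, so you'd need `RUN apk add --no-cache curl` in the Dockerfile, or a Node-based check instead. A hypothetical route could be as simple as:

// Hypothetical health endpoint to pair with the healthcheck above
app.get('/health', (req, res) => {
  res.status(200).send('OK');
});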
Use restart policies in Docker Compose:
services:
  app:
    # ...
    restart: always
Wrapping Up
You've now got all the basic tools you need to start using Docker in your development workflow! Let's recap what we've covered:
- Running existing Docker images
- Building your own images with Dockerfiles
- Managing containers (creating, starting, stopping, removing)
- Using Docker Compose for multi-container applications
- Basic Docker administration and cleanup
Docker makes development and deployment more consistent and reproducible. No more "it works on my machine" excuses - with Docker, if it works in your container, it will work the same way everywhere else.
Docker is still maturing as a technology, but it has already revolutionized how we develop and deploy applications. It's an exciting time to be jumping in, with new features being added regularly. Though the first version of Docker was only released in 2013, the adoption rate in our industry has been phenomenal because it solves a genuine need: isolating project dependencies from your OS.
The next step is to integrate Docker into your CI/CD pipeline for automated testing and deployment. You might also want to look into container orchestration for production: Docker Swarm is built in and easy to start with, while Kubernetes has become the dominant platform in that space.
I hope you've found this guide helpful! If you have any questions or run into issues, feel free to leave a comment below.