A Beginner's Guide to Docker Images and Containers
You’ve just joined a frontend team, and everyone’s talking about “spinning up containers” and “pulling images.” Your React app works perfectly on your machine, but deploying it feels like a mystery. Docker solves this exact problem—and understanding Docker image basics is simpler than you might think.
This guide covers what Docker images and containers actually are, how they relate to each other, and how to use them in your frontend workflow for local development, testing, and simple deployments.
Key Takeaways
- Docker images are read-only blueprints, while containers are running instances of those images.
- Use explicit tags like node:20-alpine instead of latest for predictable, reproducible builds.
- Multi-stage builds keep your production images small and secure.
- Never store persistent data inside containers—use volumes instead.
- Run containers as non-root users and scan images regularly for vulnerabilities.
What Are Docker Images and Containers?
A Docker image is a read-only package containing everything needed to run an application: code, runtime, libraries, and configuration. Think of it as a class in object-oriented programming—a blueprint that defines structure but doesn’t execute anything on its own.
A container is a running instance of that image. When you execute docker run, Docker creates an isolated environment from the image where your application actually runs. You can spin up multiple containers from the same image, each operating independently.
This Docker image vs container distinction matters: images are static templates stored on disk, while containers are live processes with their own filesystem and network.
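You can see this distinction directly on the command line. A quick sketch, assuming Docker is installed and the example names app-one and app-two are not already in use:

```shell
# Pull the image once; it sits on disk as a read-only template
docker pull node:20-alpine

# Start two independent containers from the same image
docker run -d --name app-one node:20-alpine sleep 300
docker run -d --name app-two node:20-alpine sleep 300

# Images and containers are listed separately
docker images node   # the static template
docker ps            # the live processes

# Clean up both containers
docker rm -f app-one app-two
```

Both containers share the same underlying image layers, but each gets its own writable filesystem and network namespace.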
Docker images follow the OCI (Open Container Initiative) specification, meaning they work across different container runtimes—not just Docker. This standardization ensures your images remain portable.
Understanding Registries and Tags
Images live in registries—Docker Hub being the most common public one. When you reference an image like node:20-alpine, you’re specifying a repository (node) and a tag (20-alpine).
Here’s something that trips up beginners: the latest tag isn’t magic. It doesn’t automatically point to the newest version. It’s simply a default tag that image maintainers may or may not update. Always use explicit tags like node:20-alpine for predictable builds.
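You can check what a tag actually resolves to on your machine. A small sketch of the relevant commands:

```shell
# Pull an explicit tag rather than relying on latest
docker pull node:20-alpine

# Show the content digest each local tag points at
docker images --digests node
```

The digest uniquely identifies the exact image contents, so comparing digests tells you whether two tags (or the same tag on two machines) refer to the same image.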
Running Your First Container
Let’s run a simple container using the official Node.js image:
```shell
docker run -it --rm node:20-alpine node -e "console.log('Hello from Docker!')"
```
The -it flags enable interactive mode with a terminal. The --rm flag automatically removes the container when it exits.
For a more practical example, you can run a development server. First, create a project directory with your frontend code, then:
```shell
docker run -d -p 3000:3000 -v $(pwd):/app -w /app node:20-alpine sh -c "npm install && npm run dev"
```
The -d flag runs the container in the background. The -p 3000:3000 maps port 3000 inside the container to port 3000 on your machine. The -v flag mounts your current directory into the container, and -w sets the working directory.
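Because -d detaches the container, a few follow-up commands help you manage it. The name dev-server below is just an example; add --name dev-server to the run command above to use it:

```shell
docker ps                  # find the running container
docker logs -f dev-server  # stream the dev server's output
docker stop dev-server     # stop it when you're done
docker rm dev-server       # remove the stopped container
```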
Building Custom Images with Dockerfiles
For real projects, you’ll create custom images. Here’s a Dockerfile for a React application:
```dockerfile
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```
This demonstrates a multi-stage build—a key Docker best practice. The first stage builds your app; the second stage copies only the production files into a minimal nginx image. Your final image stays small and secure.
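One detail worth pairing with this Dockerfile: a .dockerignore file keeps node_modules, build output, and secrets out of the build context, which speeds up COPY . . and prevents your host's node_modules from overwriting the freshly installed dependencies. A minimal example:

```
node_modules
dist
.git
.env
```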
Build and run it:
```shell
docker build -t my-frontend-app .
docker run -d -p 8080:80 my-frontend-app
```
Docker Volumes and Persistence
Containers are ephemeral—when a container is removed, any data written to its filesystem disappears with it. For local development, use bind mounts to sync your source code:
```shell
docker run -v $(pwd)/src:/app/src -p 3000:3000 my-frontend-app
```
For data that needs to persist (like database files), use named volumes:
```shell
docker volume create app-data
docker run -v app-data:/data my-app
```
Understanding Docker volumes and persistence is essential: never store important data solely inside a container’s filesystem.
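A few commands for inspecting and backing up volumes, a sketch assuming the app-data volume from above exists:

```shell
docker volume ls                 # list named volumes
docker volume inspect app-data   # show where Docker stores the data

# Back up the volume by mounting it alongside a bind-mounted host directory
docker run --rm -v app-data:/data -v $(pwd):/backup alpine \
  tar czf /backup/app-data.tar.gz -C /data .
```

The backup trick works because a throwaway container can mount a named volume and a host directory at the same time, copying between the two.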
Docker Security Basics
A few practices keep your containers safer:
Run as non-root. Add a user in your Dockerfile:
```dockerfile
RUN adduser -D appuser
USER appuser
```
Use minimal base images. Alpine-based images have fewer vulnerabilities than full distributions.
Scan images regularly. Tools like Docker Scout or Trivy identify known vulnerabilities.
Never bake secrets into images. Environment variables or secret management tools handle credentials—hardcoding them creates security risks that persist in image layers.
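For example, instead of baking a credential into a layer with ENV, pass it at runtime. The variable name API_KEY here is illustrative:

```shell
# Pass a single variable at runtime
docker run -e API_KEY="$API_KEY" my-app

# Or keep variables in a local file that is never committed or copied into the image
docker run --env-file .env my-app
```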
Simplifying Multi-Container Setups with Compose
When your frontend needs a backend API and database locally, Docker Compose orchestrates everything:
```yaml
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
  api:
    build: ./api
    ports:
      - "4000:4000"
```
Run docker compose up and both services start together. Use docker compose down to stop and remove all containers.
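Compose also covers the patterns from earlier sections. A sketch extending the file above with a bind mount for live reloading and a startup-ordering hint; the paths and service names are illustrative:

```yaml
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    volumes:
      - ./frontend/src:/app/src   # bind mount for live reload
    depends_on:
      - api                       # start the API container first
  api:
    build: ./api
    ports:
      - "4000:4000"
```

Note that depends_on only controls start order, not readiness—the API container starting is not the same as the API being able to accept requests.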
Conclusion
Docker images are blueprints; containers are running instances. Keep images small with multi-stage builds, use explicit tags instead of latest, separate state from containers using volumes, and rebuild images regularly to catch security updates. These fundamentals will serve you whether you’re running a dev environment or deploying a simple frontend application.
FAQs
What is the difference between a Docker image and a container?
A Docker image is a read-only template containing your application code, dependencies, and configuration. A container is a running instance of that image. You can create multiple containers from the same image, each running independently with its own isolated filesystem and network.

Does the latest tag always give me the newest version?
No. The latest tag doesn't automatically update to the newest version. It's just a default tag that maintainers may or may not keep current. Using explicit version tags like node:20-alpine ensures reproducible builds and prevents unexpected breaking changes when images are updated.

How do I persist data from a container?
Use Docker volumes to persist data outside the container's filesystem. Named volumes store data in a Docker-managed location, while bind mounts link to specific directories on your host machine. Never rely on a container's internal filesystem for important data.

Why use multi-stage builds?
Multi-stage builds let you use one image for building your application and a different, smaller image for running it. This keeps your production images lightweight by excluding build tools and dependencies, reducing both image size and potential security vulnerabilities.