Docker: The Plain English Guide

Forget the hype. This guide cuts through the noise and tells you what Docker is, why you should care, and how it's different from the old way of doing things (we're looking at you, VMs).

Table of Contents

  • The Foundation: What is a Container?
  • The Docker Revolution
  • Containers vs. Virtual Machines (VMs)
  • The Docker Ecosystem and Beyond
  • Why Should a Developer Use Containers?
  • How to Install Docker
  • Run Your First Container: Hello World
  • Basic Docker CLI Commands

The Foundation: What is a Container?

First, let's get one thing straight: Docker didn't invent containers. They just made them easy enough for normal people to use.

A container is basically a way to trap your application and all its junk (libraries, tools, config files) into a neat little box. This box can run anywhere—your laptop, a server, the cloud—and your app will work exactly the same. No more "it works on my machine" excuses.

This magic is pulled off by two core Linux kernel tricks:

  1. Namespaces: These provide isolation. They wrap a global system resource in an abstraction that makes it appear to the processes inside the namespace that they have their own isolated instance of that resource. For example, a containerized process can get its own private network stack, its own process tree (where its first process sees itself as PID 1), and its own set of users.

  2. Control Groups (cgroups): These are the resource police. They stop one greedy container from hogging all the CPU and memory, ensuring every container plays nice and shares the hardware.

So, a container is just a regular process with some walls built around it, running on the same OS kernel as everything else.
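
If you already have Docker installed (installation is covered below), you can poke at those walls yourself. This is just a quick illustration; the tiny alpine image is simply a convenient choice:

    # Inside the container, the process tree starts over: our command runs as PID 1
    docker run --rm alpine ps

    # The container also gets its own hostname and network stack
    docker run --rm alpine hostname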

The Docker Revolution

Before Docker, using container technology (like LXC) was a massive pain. It was a tool for hardcore system admins only.

Docker, launched in 2013, changed the game by giving us a simple toolkit and a clear workflow:

  • Docker Engine: The core runtime that builds and runs containers on the host machine.

  • Dockerfile: The recipe. It's a plain text file that tells Docker exactly how to build the box for your app.

    Example Dockerfile for a simple Node.js app:

    # Use an official Node.js runtime as a parent image
    FROM node:18-alpine
    
    # Set the working directory inside the box
    WORKDIR /usr/src/app
    
    # Copy the dependency list
    COPY package*.json ./
    
    # Install the dependencies
    RUN npm install
    
    # Copy the rest of the app's code
    COPY . .
    
    # Tell Docker the app uses port 8080
    EXPOSE 8080
    
    # This is the command to start the app
    CMD [ "node", "server.js" ]
  • Docker Image: The blueprint. Once you follow the Dockerfile recipe, you get an image. It's a read-only template of your app's box. You can save it, share it, and version it.

  • Docker Container: The running thing. A container is a live, running instance of an image. You can start it, stop it, and throw it away.

  • Docker Hub: A giant warehouse for images. Think of it like GitHub, but for Docker images instead of code.

This ecosystem made it dead simple to Build, Ship, and Run any app, anywhere, without headaches.
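
To make "Build, Ship, and Run" concrete, here's roughly what that workflow looks like for the Node.js Dockerfile above. The image name my-node-app and <your-username> are placeholders, not anything defined by this repo:

    # Build: turn the Dockerfile in the current directory into an image
    docker build -t my-node-app .

    # Run: start a container from that image, mapping port 8080 to your machine
    docker run -d -p 8080:8080 my-node-app

    # Ship: tag the image and push it to Docker Hub (run `docker login` first)
    docker tag my-node-app <your-username>/my-node-app:1.0
    docker push <your-username>/my-node-app:1.0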


Containers vs. Virtual Machines (VMs)

This is where people get confused, but it's simple. Both give you isolation, but they do it very differently.

Virtual Machines (VMs): The Old Way

A VM pretends to be an entire computer. It needs a full copy of an operating system (like Windows or another Linux) to run. This is why they are huge (gigabytes) and slow to start (minutes).

Analogy: A VM is like building a brand new, fully-furnished house for every app you want to run.

+---------------------+  +---------------------+
|      App A          |  |      App B          |
+---------------------+  +---------------------+
|   Bins / Libs       |  |   Bins / Libs       |
+---------------------+  +---------------------+
|    Guest OS         |  |    Guest OS         |
+---------------------+  +---------------------+
+----------------------------------------------+
|                  Hypervisor                  |
+----------------------------------------------+
|                  Host OS                     |
+----------------------------------------------+
|                  Hardware                    |
+----------------------------------------------+

Container Architecture

Containers virtualize the operating system, not the hardware. All containers on a host share the host OS kernel. This makes them extremely lightweight, fast, and efficient because they don't carry the overhead of a full guest OS.

+-----------+ +-----------+ +-----------+
|   App A   | |   App B   | |   App C   |
+-----------+ +-----------+ +-----------+
| Bins/Libs | | Bins/Libs | | Bins/Libs |
+-----------+ +-----------+ +-----------+
+-----------------------------------------+
|            Container Engine             |
+-----------------------------------------+
|                 Host OS                 |
+-----------------------------------------+
|                 Hardware                |
+-----------------------------------------+
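
One quick way to see that shared kernel in action: ask a container for its kernel version and it reports the host's (on Docker Desktop the "host" is the small Linux VM it runs). The alpine image here is just a convenient, tiny example:

    # Compare the two -- the container has no kernel of its own
    uname -r
    docker run --rm alpine uname -r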

Comparison Table

| Feature        | Virtual Machines (VMs)                                | Containers                                       |
|----------------|-------------------------------------------------------|--------------------------------------------------|
| Isolation      | Strong: Full hardware and kernel isolation.           | Weaker: Process-level isolation, shared kernel.  |
| Size           | Large: Gigabytes (GBs), includes a full OS.           | Small: Megabytes (MBs), includes only app deps.  |
| Startup Time   | Slow: Minutes, as it boots a full OS.                 | Fast: Milliseconds to seconds.                   |
| Resource Usage | High: Significant CPU and memory overhead per VM.     | Low: Minimal overhead, very efficient.           |
| Portability    | Limited: Tied to the hypervisor configuration.        | High: Runs on any OS with a container engine.    |
| Best For       | Running different OSs on one server; full isolation.  | Microservices, CI/CD pipelines, app packaging.   |

The Docker Ecosystem and Beyond

The Docker journey didn't stop with single containers. The ecosystem has grown to manage complex, distributed applications.

  • Docker Compose: A tool for defining and running multi-container applications. With a single docker-compose.yml file, you can spin up an entire application stack (e.g., a web server, database, and caching service) with one command (there's a minimal example right after this list).

  • Container Orchestration: When running hundreds or thousands of containers across a cluster of machines, you need an orchestrator to manage scheduling, scaling, networking, and health.

    • Kubernetes (K8s): The industry-standard, open-source platform for container orchestration. It was originally developed by Google.
    • Docker Swarm: Docker's native, simpler orchestration tool integrated into the Docker Engine.
  • Open Container Initiative (OCI): To prevent fragmentation, Docker helped create the OCI. It establishes open industry standards for container runtimes and image formats, ensuring that containers built with Docker can run on other OCI-compliant runtimes, and vice-versa. This guarantees portability and prevents vendor lock-in.
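
To give Docker Compose some substance, here's a minimal sketch of a two-service stack: an nginx web server plus a Redis cache. The service names, images, and port mapping are illustrative choices, not part of this repo. A docker-compose.yml for that stack might look like this:

    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"
      cache:
        image: redis:7-alpine

Bring it up and tear it down with the commands below (depending on your install, the command is docker compose, the plugin, or the older docker-compose binary):

    # Start every service defined in the file, in the background
    docker compose up -d

    # Stop and remove the whole stack
    docker compose down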



Why Should a Developer Use Containers?

Alright, enough theory. Here's why this matters for your day-to-day coding:

  • Consistency: The container you build on your laptop is the exact same one that goes to testing and production. If it works for you, it'll work for everyone else. This kills the "but it works on my machine" bug forever.

  • Clean Environment: No more installing 5 different versions of a database or language on your machine. All project dependencies are inside the container. When the project is over, you just delete the container, and your machine is left clean.

  • Fast Setup: A new team member can be coding in minutes. Instead of a long setup document, they just run one command (like docker-compose up) to get the entire application stack running.



How to Install Docker

Getting Docker is easy. Here's the quick and dirty guide. For the most up-to-date steps, always check the official Docker docs.

  • Windows & Mac: The best way is to install Docker Desktop. It's a graphical application that includes the Docker Engine, the docker command-line tool, and other goodies.

    • On Windows, you'll need WSL 2 (Windows Subsystem for Linux), but the installer usually helps you set that up.
  • Linux: You'll install Docker Engine directly. The steps vary by Linux distro (like Ubuntu, Fedora, CentOS), but it's usually a few commands. For Ubuntu, it looks something like this:

    # Update your package list
    sudo apt-get update
    # Install Docker's dependencies
    sudo apt-get install ca-certificates curl
    # Add Docker's official GPG key
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
    sudo chmod a+r /etc/apt/keyrings/docker.asc
    # Add the Docker repository to Apt sources
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
      $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    # Install Docker Engine
    sudo apt-get update
    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

    Again, check the official docs for your specific distro!
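
Whichever route you take, it's worth a quick sanity check once the install finishes. On Linux, adding yourself to the docker group is a common convenience so you can skip sudo; it takes effect after you log out and back in:

    # Confirm the client can reach the Docker daemon
    docker version

    # Optional (Linux only): run docker without sudo from your next login
    sudo usermod -aG docker $USER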


Run Your First Container: Hello World

Okay, Docker is installed. Let's make sure it works. The hello-world container is the simplest way to test your setup.

Just run this command in your terminal:

docker run hello-world

When you run this, Docker will:

  1. Check if you have the hello-world image on your machine.
  2. Since you don't, it will download (or "pull") it from Docker Hub.
  3. It will then create and run a new container from that image.
  4. The container will print a confirmation message and then exit.

You should see output that looks like this:

Hello from Docker!
This message shows that your installation appears to be working correctly.
... (some more text explaining the steps)

If you see that message, congratulations! Your Docker installation is working.

Basic Docker CLI Commands

You've run hello-world, but that's just the beginning. Here are the essential commands you'll use every day.

We'll use the nginx web server image as an example, as it's small and useful.

Managing Containers

  • Run a container: This command downloads the nginx image (if you don't have it) and starts a new container named my-web-server.

    • -d runs the container in the background (detached).
    • -p 8080:80 maps port 8080 on your machine to port 80 inside the container.
    docker run -d -p 8080:80 --name my-web-server nginx

    Now you can visit http://localhost:8080 in your browser and see the Nginx welcome page!

  • List running containers:

    docker ps
  • List all containers (including stopped ones):

    docker ps -a
  • Stop a container: You can use the container's name or ID.

    docker stop my-web-server
  • Start a stopped container:

    docker start my-web-server
  • Remove a container: The container must be stopped first.

    docker rm my-web-server
  • View container logs: See the output from a container. To follow the log output in real-time, add the -f flag.

    docker logs -f my-web-server
  • Execute a command inside a running container: This is great for debugging. The following command opens an interactive shell (sh) inside the my-web-server container.

    docker exec -it my-web-server sh
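
Putting a few of those together: after starting my-web-server, you can confirm it's actually serving traffic and then tear it down. docker rm -f is simply a shortcut that stops and removes in one step:

    # Hit the Nginx welcome page from the terminal
    curl http://localhost:8080

    # Stop and remove the container in one go
    docker rm -f my-web-server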

Managing Images

  • List local images:

    docker images
  • Download an image from Docker Hub:

    docker pull ubuntu:22.04
  • Remove an image: You can't remove an image if a container is using it. You must remove the container first.

    docker rmi nginx
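
Images pile up over time. Two housekeeping commands worth knowing; the -a variant is the aggressive one, so read the prompt before confirming:

    # Remove dangling images (untagged leftovers from old builds)
    docker image prune

    # Remove every image not referenced by at least one container
    docker image prune -a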

About

This repository demonstrates containerization workflows using Docker, tailored for DevOps and full-stack engineers. It includes practical examples, Dockerfiles, and automation scripts that streamline development and deployment.
