What's all the fuss about Docker?
Why is everyone excited?
Docker is, simply put, a container. How is this at all exciting? Think about how long you spend setting up any system, whether it's a web server, a build environment, or any other service. Do you want to do all of that again when you move to a new machine? What about updating the operating system underneath it, dealing with config file changes and the other cruft that gets left behind and causes problems over time? Not having to deal with any of this is why people like Docker.
What docker is, and more importantly, what it's not.
Some people assume that Docker is just another kind of virtualization, like a virtual machine. While it is true that Docker spins up a minimal Linux environment, it does this fresh every single time a container starts, and when the container stops, it throws it all away. That sounds like it would be a massive pain every time you wanted to add new content, but this wasn't overlooked when Docker was designed.
Getting started with Docker
The getting started link for Docker is here so you can install it properly no matter what platform you are on. Once Docker is up and running, come back and let's see what it can do for you.
Give this command a run in a terminal/command prompt, then navigate a web browser to http://127.0.0.1 and you should see a web page appear.

```shell
docker run -d -p 80:80 docker/getting-started
```
Let's break down what that command is doing:

- `docker` is the basic command to directly control Docker.
- `run` tells it that you want to run something inside of a container.
- `-d` tells Docker to run it in the background, as a daemon.
- `-p 80:80` forwards port 80 from your local machine, where Docker is running, to port 80 in the container. This is the standard HTTP web port, which is what lets you access that server. The URL could have been typed as http://127.0.0.1:80, but port 80 is implied for HTTP, so it didn't need to be typed.
- `docker/getting-started` is the Docker image running inside the container, and that image starts the web server you see when you load the page.
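Once the container is running, a few everyday commands help you see and control it. This is just a sketch; `<id>` is a placeholder for the container ID you'd copy from the `docker ps` output.

```shell
# List running containers; note the CONTAINER ID and NAMES columns
docker ps

# Follow the logs of a container (replace <id> with the ID from docker ps)
docker logs -f <id>

# Stop the container and remove it when you're done
docker stop <id>
docker rm <id>
```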
Typing out commands can get very confusing, especially as things get more complex, so I would recommend learning docker-compose early. You may need to install it separately on your system. The same command as above would be saved in a file called docker-compose.yml, which is YAML formatted, as the filename implies.
```yaml
---
version: "2.3"
services:
  getting-started:
    image: docker/getting-started
    ports:
      - 80:80
```
Make it useful
Docker compose files can be brought up by running `docker-compose up -d`. The `-d` runs it in the background. If you omit it, you can watch what's going on and stop it with Ctrl-C like any normal command.
```yaml
---
version: "3" # Specifies the compose version
services: # The list of services is below
  nginxBlog: # The only service will be this blog, running on nginx
    image: nginx # Runs the official nginx image
    container_name: blog # Sets the container name to keep track of it more easily
    ports:
      - 80:80 # Opens up port 80 to let you access the blog
    volumes:
      - /mnt/data/blog:/usr/share/nginx/html # Passes /mnt/data/blog from the host through to where nginx expects a web page to be
    restart: unless-stopped # Automatically restarts the service when Docker restarts, the host reboots, etc.
```
This is (partially) how this blog gets to you. What happens if I start this container on another machine? Do I have to upload my blog to all of them and keep them in sync? Not at all. It just takes an edit to how Docker accesses the data.
```yaml
---
version: "3" # Specifies the compose version
services: # The list of services is below
  nginxBlog: # The only service will be this blog, running on nginx
    image: nginx # Runs the official nginx image
    container_name: blog # Sets the container name to keep track of it more easily
    ports:
      - 80:80 # Opens up port 80 to let you access the blog
    volumes:
      - blog:/usr/share/nginx/html # This time the named volume defined below is mounted where nginx expects a web page
    restart: unless-stopped # Automatically restarts the service when Docker restarts, the host reboots, etc.
volumes:
  blog:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.25.51"
      device: ":/mnt/data/blog"
```
This lets Docker manage a mount through NFS (assuming NFS is available on that machine). That means you can use this file on any computer that has access to that NFS share.
Speaking of managing multiple computers with Docker: why bother choosing what goes where when you don't care which machine hosts a service? Docker swarm has you covered. I'll link the getting started guide here as a reference, but I'll highlight some of the things I was confused about going in, as well as some other benefits of running a swarm.
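Getting a swarm going only takes two commands. The IP address below is a hypothetical manager address, and the real join token comes from the output of `swarm init`:

```shell
# On the machine that will be the manager (IP is an example):
docker swarm init --advertise-addr 192.168.25.10

# swarm init prints a "docker swarm join" command with a token;
# run that printed command on each worker node, e.g.:
docker swarm join --token <token-from-init> 192.168.25.10:2377
```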
Short list of upsides
- High reliability services. Can run multiple instances in case one is restarting/crashing/overloaded
- Can automatically use any node that joins the swarm, with little to no effort after joining
- Can easily reboot machines for updates, and docker containers stay up, or automatically come back up on another machine
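The "multiple instances" point maps to the `deploy` section of a stack file. As a sketch, reusing the blog service from earlier (replica count is just an example):

```yaml
---
version: "3"
services:
  nginxBlog:
    image: nginx
    ports:
      - 80:80
    deploy:
      replicas: 3 # swarm keeps three copies of the blog running across the nodes
      restart_policy:
        condition: any # if a replica dies, start another wherever there is capacity
```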
Questions that I and others have had
Q: How do I know what IP address to access?
A: Docker swarm includes a load balancer (the routing mesh). You can hit any machine in the swarm on the published port, and the request will be routed to a node that is actually running the service.
Q: What if I need something specific for the container?
A: Docker swarm includes the concept of tagging. You may want to separate workloads that need ARM or x86_64 CPUs. You may also tag a system like a Raspberry Pi as "low_ram" so a Minecraft server doesn't decide to try to start there. Tags are arbitrary, so you can craft them to your needs.
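In a stack file, tags show up as placement constraints. The `low_ram` label and the image name below are purely illustrative (labels are attached with `docker node update --add-label low_ram=true <node>`):

```yaml
services:
  minecraft:
    image: itzg/minecraft-server # example image, just for illustration
    deploy:
      placement:
        constraints:
          - node.platform.arch == x86_64 # only schedule on x86_64 nodes
          - node.labels.low_ram != true # skip nodes tagged as low_ram
```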
Q: How do I update the container?
A: Most of the time, you don't. The implied tag for containers is `:latest`, which pulls down the newest version of the image each time the container is recreated. If you pin a version, you decide when to change the version tag, but Docker does the rest for you.
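Pinning versus tracking `:latest` is a one-line difference in the compose file (the version number below is only an example):

```yaml
services:
  nginxBlog:
    # implied tag, pulls the newest image whenever the container is recreated:
    # image: nginx
    # pinned: you bump this tag yourself when you're ready to update
    image: nginx:1.25
```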
Other uses for docker
Docker isn't limited to running services for servers. You can use it as a container to test applications without installing them on your system directly. This is also great for dev environments, as there are no more "works on my system" bugs, since everything inside the container is identical on every system. I'll link an article on how to do this with Rust, but it should translate well to most projects.
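As a rough sketch of the dev-environment idea, here is a minimal Dockerfile for a Rust project. The Rust version and commands are assumptions for illustration, not taken from the linked article:

```dockerfile
# Start from the official Rust image; pin whatever version your team agrees on
FROM rust:1.75
WORKDIR /app
# Copy the project in and build it inside the container,
# so every machine builds with exactly the same toolchain
COPY . .
RUN cargo build --release
# Default to running the test suite; the container *is* "my system" now
CMD ["cargo", "test"]
```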
Docker is a great way to carry around services, build environments, and many other things that help you think less about "how do I get there" and more about your actual goal. When I wanted to spin this blog up, I didn't care which web server I had to use or how it went together. I just started an nginx instance in Docker, and I'm done forever. Hopefully this has helped you see what's so great about Docker. Feel free to reach out with questions, and I'll update the page with any common ones.