Hosting (Aka, how this page gets to you)
TL;DR: VPS --> zerotier --> docker swarm --> docker container
The long answer
Nginx
Opening ports is generally a security risk, so I wanted to self host without opening ports where possible. With a cheap VPS running nginx, I'm able to reverse proxy back to where my docker swarm is. One would think that still requires opening ports, but the magic of zerotier makes that a non-issue. I'll talk about this later.
Reverse proxies
Configuring a reverse proxy is quite simple in nginx.
server {
    server_name blog.kdb424.xyz;
    listen 80;
    listen [::]:80;

    access_log /var/log/nginx/reverse-access.log;
    error_log /var/log/nginx/reverse-error.log;

    location / {
        proxy_pass http://planex.far:8197;
    }
}
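After dropping in a config like this, it's worth validating it and reloading nginx before trusting it with traffic. A minimal sketch, assuming a systemd-based distro:

```shell
# Check the configuration for syntax errors before applying it
sudo nginx -t

# Reload nginx without dropping existing connections
sudo systemctl reload nginx
```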
Hosts file
$ cat /etc/hosts
127.0.1.1 ubuntu
192.168.194.161 planex.far
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Zerotier
Zerotier allows me to create a virtual network without setting up a complex VPN, and to treat all of my machines as if they were on a private subnet, but without an exit node. It's an ephemeral network, creating connections as needed. In the above example, proxy_pass http://planex.far:8197; points to an address that shouldn't exist. It's added to my /etc/hosts for convenience, but a direct zerotier IP would work just as well. A simple ping test shows what's going on.
$ ping planex.far -c 1
PING planex.far (192.168.194.161) 56(84) bytes of data.
64 bytes from planex.far (192.168.194.161): icmp_seq=1 ttl=64 time=10.4 ms
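Getting a machine onto the network is just a couple of zerotier-cli commands. The network ID below is a placeholder; yours comes from your zerotier account:

```shell
# Join the virtual network (16-character network ID is a placeholder)
sudo zerotier-cli join 1234567890abcdef

# Confirm the network came up and the interface got an address
sudo zerotier-cli listnetworks
```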
Docker swarm
Docker swarm conveniently takes the same compose files that you are used to using, and deploys them to a swarm. Swarms consist of multiple machines and can offer failover modes, load balancing, and many other nice things, though in this example I'll just show how I deploy this blog to my swarm.
---
version: "3"
services:
  nginxBlog:
    image: nginx
    container_name: blog
    ports:
      - 8197:80
    restart: unless-stopped
    volumes:
      - blog:/usr/share/nginx/html

volumes:
  blog:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.25.51,rw,nfsvers=4"
      device: ":/mnt/data/blog"
The reason we don't use a locally mounted drive is that we need to ensure that whichever machine in the swarm starts this service has access to the same data.
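Deploying the compose file above to the swarm is a single command. A sketch, assuming the file is saved as blog.yml and the stack is named blog (both names are my choice here, not anything special):

```shell
# Deploy (or update) the stack across the swarm
docker stack deploy -c blog.yml blog

# Check that the service came up
docker service ls
```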
Why a swarm and not just run docker on each machine?
Swarms offer things like load balancing, which includes finding the correct machine running a service even if you point at a different node in the cluster. If you had three machines in your swarm, you could access the port number on any of them, and the load balancer would transparently direct you to the service. Another reason is maintenance. If you have to restart or shut down a machine, services will simply be moved to another machine in the cluster, or stay up if you already had multiple instances running, giving you little to no downtime, with no thinking required as you maintain, bring up, or take down machines.
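As a concrete example of that maintenance story: draining a node tells the swarm to move its services elsewhere before you touch it. The node name below is hypothetical; use whatever `docker node ls` shows for your machine:

```shell
# Move all services off the node before maintenance
docker node update --availability drain planex

# Bring it back into rotation afterwards
docker node update --availability active planex
```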
Static site generator
As you may have seen from the footer of every page, this blog is created with Pelican, which is a static site generator. The source code can actually be seen here.
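Rebuilding the site is just a matter of running Pelican over the content directory. A minimal sketch, assuming a standard Pelican project layout with a pelicanconf.py at the root:

```shell
# Generate the static site into the output directory
pelican content -o output -s pelicanconf.py

# Preview it locally before publishing
pelican --listen
```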
Conclusion
Hopefully this gives you a better idea of how I host things while keeping my ports closed and secure. If you have any questions or comments on this, feel free to reach out, and I'll be glad to chat about it, or do more writeups on specifics if they're commonly asked about.