Octoprint all the things

Why you absolutely want to try this.

Octoprint is a simple concept: a web server that gives you control of your printer. That alone is useful if your printer doesn't have a screen, but it can do a lot more than just print a gcode file. I'll touch on a few features of Octoprint that make it compelling to many users.

Send a print directly from a slicer

Many modern slicers, such as PrusaSlicer, SuperSlicer, and Slic3r, have the ability to communicate directly with Octoprint. This saves the hassle of shuffling a flash drive around and trying to remember what settings went into a gcode file if you've switched filament since you sliced it. No need to wonder any more. Just slice, and send it over!

Instructions for PrusaSlicer and SuperSlicer can be found here

Monitor your prints from anywhere

Octoprint lets you watch a rendering of the gcode as the printer executes it, and you can add a webcam to monitor progress visually. Never wonder whether your print is going OK while you're away and not watching it. Just check in remotely once in a while and get back to doing things! If it failed, you can even stop the print remotely.

Timelapses

Once you have a webcam for checking in, you may as well make a timelapse, and Octoprint has that feature built right in! It's great for watching how prints fail to help you tune, or just fun to see them played back at super speed.

Keep track of prints, both successful and failed.

We all like to print things, but we rarely keep track of how much filament we go through or what it costs. There are plugins that track how much filament each print used and what it cost, so if you want to print another copy, you can see at a glance roughly what it will cost and how much of your spool you'll need.

Kick off prints anywhere

Ever been away from your printer, knowing it was ready to go, but not wanting to wait until you were back home to start a print? Thanks to the camera, you can check in, and even start a print, all on the go. There are even mobile apps for iOS and Android, and those are just two examples.

Many many more plugins

These are not the only things Octoprint can do. There are tons of user plugins that extend the functionality massively, or just change the way it looks. I don't run a ton of plugins myself, but I'll list a few that I think others would enjoy.

Afterthoughts

So far I'm having a lot of fun with my printer and getting amazing results, and Octoprint makes it all the better. It's free and can run on as little as a Raspberry Pi Zero, so there's little reason not to give it a shot. More things like this will be coming in the future.

Prusa Mini: Initial thoughts

How good can it be for $400?

Short answer: Amazing. Buy one right now, then come back to read what you just got for your money.

The longer answer is probably why you are here, so I should stop stalling. I've played with others' printers on and off for years. I hear people who own 3D printers compare them to a project car: something you sink money and time into that gives you nothing but problems, but that you enjoy for the hobby. That is exactly what I didn't get with the Prusa Mini.

So how many issues did I have?

First off, I decided that a kit was not for me on my first printer, so I got it assembled. Right out of the box, I put in a few screws and plugged in a few wires. Because I had no clue what I was doing, it took an hour or two, but overall it was pretty easy. I loaded up the Galaxy Black PLA that was packed in with the printer for testing, set the Z offset with the on-screen instructions, plugged the included USB drive into the printer, found something that looked interesting, and hit print. With zero effort, and no mistakes, the first print came out looking stunning, and everything I've printed since in the same Galaxy Black has come out looking just as good.

But I have to complain about something

PETG IS ABSOLUTELY A NIGHTMARE TO WORK WITH! I printed about half of my spool of Galaxy Black PLA, then loaded up some PETG, switched the bed for the textured sheet as instructed, switched out the spool, and set the Z height for the new bed. Off to the races, right? No... Conveniently, Prusa ships profiles for their filament in PrusaSlicer, so basically everything you'd need to tune for the printer or the filament is done for you. Every spool is even labeled with the temperature range for the nozzle and bed. Easy enough, right? Print after print, PETG failed, lifting from the bed, sometimes even on the first layer. I contacted support, who were absolutely fantastic, generating test prints for me, offering advice on things that could be physically wrong, and more. They stayed on the chat with me for several hours before I gave up for the day and went to sleep. I ultimately discovered that the edges of my bed were running colder than expected because a draft from the nearby window was cooling the plate faster than it could compensate. I bumped up the heatbed temperature and haven't had sticking issues since. I'll ramble more about materials in a separate post, but know that some materials are amazing, and some need a bit of work.

This sounds mostly bad so far. Why do I want this?

While I did complain a bit about PETG, that ended up being mostly user error. Most people don't keep a printer next to a window in the Pacific Northwest, where nights cool rapidly, so this likely won't be an issue for you. I could also have asked around and learned that a simple, cheap pop-up enclosure would have fixed many of my issues. For only $400 USD, I got a printer that prints Prusa PLA right out of the box, even in subpar temperature conditions, to the point that I was convinced it should all be that easy, because it required no thought. I could start printing a keyboard case, leave for work after watching one layer go down, and trust that when I got home I would have a keyboard case. And I did.

Conclusion

These are just my initial thoughts on this printer, and I'll be writing more on 3D printing; I think I'll have a lot of topics that others will find interesting. This is a great printer out of the box, but there's a world more that can be done without even risking the near perfection you get from the start. For now, I'll just leave this as a teaser.

Zerotier. When opening ports isn't ok

Zerotier Primer

To understand this, you'll first need the very basics of how IPv4 networks work. I'll only gloss over things here, and you won't need to know a lot in order to actually use Zerotier. This is merely what you should understand in order to know why it's useful.

Subnets

Most devices these days sit behind a firewall and an IPv4 router. An IPv4 address is made up of four octets (0 to 255) separated by dots, and may look like 1.1.1.1 or 192.168.0.1. These addresses are broken into two groups: private and public.

Private

  • 10.0.0.0/8 IP addresses: 10.0.0.0 - 10.255.255.255
  • 172.16.0.0/12 IP addresses: 172.16.0.0 - 172.31.255.255
  • 192.168.0.0/16 IP addresses: 192.168.0.0 - 192.168.255.255

These ranges are considered private, so you can assign them to devices behind a router without conflicting with anything else in the world.

Public

Anything outside of these ranges is considered public. These could be DNS servers like 1.1.1.1 or 8.8.8.8, or the websites you browse every day. Normally you use DNS names, but under the hood it's these numbers in the public range that you are actually talking to.
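As a quick illustration, here's a toy shell function that classifies an address as private or public using the ranges listed above. It's only a sketch: it pattern-matches the text of the address and doesn't validate that each octet is in range.

```shell
# Toy classifier for the RFC 1918 private ranges listed above.
# A sketch only: no validation that the input is a well-formed address.
is_private() {
    case "$1" in
        10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[01].*) echo private ;;
        *) echo public ;;
    esac
}

is_private 192.168.0.1   # private
is_private 8.8.8.8       # public
```

The `172.1[6-9]`/`172.2[0-9]`/`172.3[01]` patterns together cover the awkward 172.16.0.0/12 block, which doesn't fall on a clean octet boundary like the other two ranges.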

Why does this matter?

Your home network has its private range of addresses that lets all of your devices talk to each other. Want to copy files back and forth, or SSH from one box to another? It's all good to go. When you aren't at home, however, these addresses aren't accessible to you. This is where Zerotier comes into play.

Zerotier, as a service, gives you a virtual private address, and every device on your Zerotier network gets one too. Unlike your regular network addresses, these can be used to talk to your devices from anywhere, as long as they are on your Zerotier network, just the same as if you were home. The possibilities are truly endless here.
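In practice, joining a network is only a couple of commands once the ZeroTier client is installed. This is a sketch: the network ID below is a placeholder for the 16-digit hex ID of your own network, and these commands need root.

```shell
# Requires the ZeroTier client to be installed and its service running.
sudo zerotier-cli info                  # confirm the service is up and online
sudo zerotier-cli join <network-id>     # placeholder: your 16-digit hex network ID
sudo zerotier-cli listnetworks          # see the virtual address you were assigned
```

On a private network, you'll also have to authorize the new member from the network's management console before it actually receives an address.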

Why not a VPN?

A VPN is similar in concept to Zerotier: you join a network and can talk to other devices. The main difference is that with Zerotier there is no "router" in the middle. Let's say you have 3 devices.

  • Laptop
  • VPN server
  • Desktop

Every time your laptop talks to your desktop, the traffic has to route through the VPN server. This may not seem to matter, but if the connection between either device and the VPN is slow, or the VPN itself is slow, the whole connection is slow. Visually, the connection would look something like this.

Laptop <-> VPN <-> Desktop

Zerotier, on the other hand, doesn't route your traffic through a central server. It's an ethereal network that connects devices directly in a P2P system; no Zerotier server sits in the middle. This gives you much faster connections, lower latency, and fewer links that can fail. Visualized, it would look like this

Laptop <-> Desktop

Security

There is an alternative to this, though it has its drawbacks: port forwarding. This allows anyone with access to your public address to talk to that port. Let's say you open port 22 on your home server to the world so you can access a shell remotely. This exposes it to anyone in the world who may want to try to get in. You'll have to harden your security to ensure you aren't broken into, and still absorb the system load of constant attacks, as bots scan for and try to break into every public address on the internet. Zerotier, on the other hand, doesn't open that port to the public, and only those allowed on your network can access the machine.

If you want to read into the security of Zerotier itself, here is a link to that. It's far more complex than anything I could go over.

Conclusion

I believe Zerotier is great for everyone from small users all the way up to big businesses that want the convenience of private networks without the complications and downsides of a VPN. It's fast, secure, and flexible, as any good networking tool should be.

Bonus section: Where I use Zerotier

I use Zerotier in a lot of places, and run many networks. A non-exhaustive list:

  • Nginx reverse proxy through zerotier
  • Accessing an NFS server remotely, securely
  • "LAN" gaming with friends who join a network
  • Accessing my 3D printer remotely without opening a port
  • SSH between all of my controlled machines no matter what firewalls are in place

What's all the fuss about docker?

Why is everyone excited?

Docker is, simply put, a way to run containers. How is that at all exciting? Think about any time you set up a system, and how long you spend setting it up, whether it's a web server, a build environment, or any other service. Do you want to do that all over again when you move systems? What about updating the operating system under it and having to deal with config file changes and the other cruft that gets left behind and causes problems over time? Not having to deal with any of this is why people like Docker.

What docker is, and more importantly, what it's not.

Some people assume that Docker is just another virtual machine. While it's true that Docker spins up a minimal Linux environment, it does this every single time the container starts, and when it stops, it throws it all away. This seems like it would make adding new content to a container a massive pain, but that wasn't overlooked when Docker was designed.

Getting started with Docker

The getting started link for Docker is here so you can install it properly no matter what platform you are on. Once docker is up and running, feel free to come back. Otherwise, let's see what docker can do for you.

Basic example

Give this command a run in a terminal/command prompt, then navigate a web browser to http://127.0.0.1 and you should see a web page load.

docker run -d -p 80:80 docker/getting-started

Let's break down what that command is doing. docker is the base command that directly controls Docker. run tells it you want to run something inside a container. -d tells Docker to run it in the background, as a daemon. -p 80:80 forwards port 80 on your local machine (where Docker is running) to port 80 in the container; this is the standard HTTP port, which is what lets you reach the server. The URL could have been typed as http://127.0.0.1:80, but 80 is implied for HTTP, so it didn't need to be typed. docker/getting-started is the image running inside the container, and that image starts the web server you see when you load the page.
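Once the container is up, a few follow-up commands let you inspect it and shut it down again. Sketched here; the container ID placeholder is whatever `docker ps` reports on your machine.

```shell
docker ps                     # list running containers and their IDs/names
docker logs <container-id>    # see the container's output (placeholder ID)
docker stop <container-id>    # stop the container when you're done
```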

Docker-compose

Typing commands can get very confusing, especially as things get more complex, so I recommend learning docker-compose early. You may need to install it separately on your system. The same command as above would be saved in a file called docker-compose.yml, which is YAML formatted, as the filename implies.

---
version: "2.3"
services:
    getting-started:
        image: docker/getting-started
        ports:
            - 80:80

Make it useful

Docker compose files can be brought up by running docker-compose up -d. The -d runs it in the background; if you omit it, you can watch what's going on and stop it with Ctrl-C like any normal command.

---
version: "3"  # Specifies the compose version

services:  # The list of services are below
    nginxBlog:  # The only service will be this blog, running on nginx
        image: nginx  # Runs on the nginx official image
        container_name: blog  # Sets the name of the container to keep track easier
        ports:
            - 80:80  # opens up port 80 to let you access the blog
        volumes:
            # Passes through /mnt/data/blog from the host to where nginx expects a web page to be
            - /mnt/data/blog:/usr/share/nginx/html
        restart: unless-stopped  # Automatically restarts the service when Docker restarts, the host reboots, etc.

This is (partially) how this blog gets to you. But what happens if I start this container on another machine? Do I have to upload my blog to all of them and keep them in sync? Not at all. It just takes an edit to how Docker accesses the data.

---
version: "3"  # Specifies the compose version

services:  # The list of services are below
    nginxBlog:  # The only service will be this blog, running on nginx
        image: nginx  # Runs on the nginx official image
        container_name: blog  # Sets the name of the container to keep track easier
        ports:
            - 80:80  # opens up port 80 to let you access the blog
        volumes:
            # This time we will pass the volume from below through to the container.
            - blog:/usr/share/nginx/html
        restart: unless-stopped  # Automatically restarts the service when Docker restarts, the host reboots, etc.
        
volumes:
    blog:
        driver: local
        driver_opts:
            type: nfs
            o: "addr=192.168.25.51"
            device: ":/mnt/data/blog"

This lets Docker manage a mount through NFS (assuming NFS is available on that machine). It means you can use this file on any computer that has access to that NFS export.

Docker swarm

Speaking of managing multiple computers with Docker, why bother choosing what goes where when you don't care which machine hosts a service? Docker swarm has you covered. I'll link the getting started guide here as a reference, but I'll highlight some of the things I was confused about going in, as well as some other benefits of running a swarm.

Short list of upsides

  • High reliability services. Can run multiple instances in case one is restarting/crashing/overloaded
  • Automatically can use any node that joins the swarm with little to no effort after joining
  • Can easily reboot machines for updates, and docker containers stay up, or automatically come back up on another machine

Questions I and others have had


Q: How do I know what IP address to access?

A: Docker swarm includes a load balancer. You can access any machine in the swarm on the port you want, and it will serve it to you properly.


Q: What if I need something specific for the container?

A: Docker swarm includes the concept of tagging (node labels). You may want to separate things that need ARM or x86_64 CPUs. You might also tag a low-RAM system like a Raspberry Pi so a Minecraft server doesn't try to start there. Tags are arbitrary, so you can craft them to your needs.
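As a sketch of how tagging looks in a compose file deployed to a swarm (the label name and image here are examples, not anything from this blog's setup):

```yaml
---
version: "3"

services:
    minecraft:
        image: itzg/minecraft-server  # example image, swap in your own
        deploy:
            placement:
                constraints:
                    # only run on nodes NOT labeled low_ram
                    - node.labels.low_ram != true
```

The label itself would be set on a node with something like docker node update --label-add low_ram=true <node-name>. Note that the deploy: section only takes effect in swarm mode (docker stack deploy), not plain docker-compose up.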


Q: How do I update the container?

A: You don't think about it most of the time. The implied tag for images is :latest, which pulls down the newest version of the image when the service is created or updated. If you pin a version, it's up to you to change the version tag, but Docker does the rest for you.

Other uses for docker

Docker isn't limited to running services on servers. You can use it as a container to test applications without installing them directly on your system. It's also great for dev environments, since there are no more "works on my machine" bugs: everything inside the container is the same on all systems. I'll link an article on how to do this with Rust, but it should translate well to most projects.

Conclusion

Docker is a great way to carry around services, build environments, and many other things that let you think less about "how do I get there" and more about your actual goal. When I wanted to spin this blog up, I didn't care about how the web server went together. I just started an nginx instance in Docker, and I was done. Hopefully this has helped you see what's so great about Docker. Feel free to reach out with questions, and I'll update the page with common ones.

ZFS. It's not a filesystem, it's an ecosystem

What is a filesystem?

All computers need to give you access to files. This seems obvious at first, but most people don't think about how those files get stored. Files need to be stored on a disk (or a network, but let's focus on disk), and that disk needs a way to know where files are, how big they are, etc. This is the job of a filesystem. Some common ones that people may recognize are

  • NTFS
  • FAT32
  • EXT4
  • HFS+
  • APFS

These are just a few examples that can be found on different operating systems, and you are bound to recognize at least one.

What's the point? Isn't keeping track of files easy?

Different filesystems are built with different goals, or different operating systems, in mind. As a quick example, HFS+ was built before SSDs existed and is optimized for spinning disk drives. That doesn't mean you can't use it on a solid state drive, but the performance won't be ideal. This is what gave rise to APFS on the Mac, which is built with only SSDs in mind. Once again, you can use it on spinning disk drives, but there it won't perform as well as HFS+.

Another big area filesystems are optimized for is features. More modern filesystems may offer things like on-disk compression, to save space while losing no data, and permissions, to prevent users from accessing, modifying, or running files they aren't allowed to, and much more. Not all filesystems are created equal; each has upsides and downsides.

ZFS is a filesystem, but it's also not

Why explain what a filesystem is if ZFS is not one? Well, ZFS is not just a filesystem. It includes a filesystem as a component, but it is far more. I won't explain everything it offers here, just some of the more useful features that I take advantage of.

Redundant Array of Independent Disks (RAID)

RAID is a complex topic, so I'll only get into the basics here. It allows you to use more than one disk (SSD, spinning disk, etc.) as a single logical drive. There are many approaches to RAID, from hardware RAID cards, to software in your BIOS/EFI, to LVM. One of the main drawbacks of hardware RAID is that if your RAID card dies, you lose your data unless you find an exact replacement for the card. ZFS, on the other hand, keeps your data accessible as long as enough of the disks show up. ZFS also allows some special RAID types, discussed later, that aren't possible with traditional RAID without complex layers of software set up on top of it. You can read a bit more about ZFS vs hardware RAID here.

Combining RAID types (VDEV)

Storing lots of data means that combining multiple RAID types is sometimes more cost or performance efficient. A common example is RAID 10: a set of RAID 1 mirrors with a RAID 0 stripe on top. It would look something like this.

RAID 10

In ZFS, these groups of disks are called VDEVs. In the image above, each VDEV holds 2 disks, and the stripe over all VDEVs is known as a "pool". Every ZFS array has at least 1 pool and 1 VDEV, even if it's a single disk.
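To make that concrete, a pool laid out like the RAID 10 picture above is a single command. This is only a sketch: the pool and disk names are placeholders, and in practice you'd use stable /dev/disk/by-id names like the ones in the pool listings in this post.

```shell
# A stripe of two mirror VDEVs: the ZFS take on RAID 10.
# "tank" and the disk names are placeholders.
zpool create tank mirror sda sdb mirror sdc sdd
```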

Here is an example of a ZFS root filesystem used in one of my servers.

╰─$ zpool list zroot -v
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot         476G   199G   277G        -         -    14%    41%  1.00x    ONLINE  -
  nvme0n1p2   476G   199G   277G        -         -    14%  41.7%      -    ONLINE

Layers of ZFS

ZFS has some unique properties as far as filesystems go. I won't list all of the layers, as some are optional, but I'll highlight a few of the important ones to know about.

ARC

ZFS has a RAM cache called the ARC (Adaptive Replacement Cache). It lets frequently accessed data be served much faster than from disk, even a fast SSD, since RAM is always faster.

L2ARC

This is an optional second-level ARC that can be placed on an SSD to speed up reads once RAM is full. It's generally only used on massive arrays, since the ARC is very efficient at deciding what to cache on smaller arrays, and an L2ARC has a drawback of its own: its index takes up some RAM.

ZIL/SLOG

The ZIL is the ZFS Intent Log. This is where ZFS records the data it intends to write, so it can verify a write completed correctly before committing it to disk. This is great in case a power outage or kernel panic stops the system in the middle of a write: if the data wasn't written properly, it won't be committed to disk, and there won't be corruption. The ZIL normally lives on the same disk(s) as the filesystem, but some arrays add a special device called a SLOG, usually an SSD, to hold these intents, freeing the normal disks to only write good data. You can read further on this topic here.
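Adding a SLOG to an existing pool is a one-liner. Sketched here with placeholder pool and device names; mirroring the log device is the safer option, since it briefly holds data that hasn't reached the main disks yet.

```shell
# Add an SSD as a dedicated SLOG device (placeholder names)
zpool add tank log nvme0n1

# Or, alternatively, a mirrored SLOG so one SSD failing can't
# lose in-flight writes:
zpool add tank log mirror nvme0n1 nvme1n1
```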

Special VDEV

A special vdev is a vdev type unique to ZFS. ZFS tracks files and blocks by size, and small files and metadata are exactly where spinning disks are weakest. A special vdev made of SSDs takes on those small files and blocks for you. This gives a massive increase in performance while keeping overall storage cost low: the bulk storage is handled by the slow spinning disks, while the SSDs are used where they are best. This is a fantastic read on the topic.
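Setting one up looks roughly like this. A sketch with placeholder names; note that a special vdev holds real data (not just a cache), so it should be mirrored, and losing it means losing the pool.

```shell
# Mirrored pair of SSDs as a special vdev for metadata and small blocks
zpool add tank special mirror sda sdb

# Optionally also send small file blocks (up to 32K here) to the special vdev
zfs set special_small_blocks=32K tank
```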

Filesystem, and RAID, what else?

I could spend the rest of existence rambling about everything that ZFS can do, so I'll leave a list of other features that are worth looking into.

Conclusion

These are the features that make ZFS the ultimate ecosystem, and not just a filesystem, for my NAS/SAN use case, as well as data protection for even my single disks, letting me back up and restore quickly with snapshots, and send/recv faster than any other method available. I've accidentally deleted TBs of data by targeting the wrong disk in an rm operation, only to undelete the files in under 5 seconds with a snapshot. I've moved countless TBs over a network at full 10-gigabit speeds in ways that cp and rsync could never come close to matching. I've even torture tested machines by pulling RAM out while data was being sent, just to see if I could cause corruption, and found none (data that hadn't been sent yet was missing, but everything that was sent was saved properly). In my opinion this is unmatched by any other filesystem, including BTRFS, but that's a rant for another day.
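The snapshot workflow I'm describing is only a few commands. Sketched here; the dataset, snapshot, and host names are all placeholders.

```shell
zfs snapshot tank/data@before-cleanup   # instant, nearly free snapshot
# ...accidentally rm the wrong thing...
zfs rollback tank/data@before-cleanup   # the files are back in seconds

# Replicate a snapshot to another machine over SSH
zfs send tank/data@before-cleanup | ssh backup-host zfs recv backup/data
```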

Further Reading

  • OpenZFS wiki
  • Wikipedia ZFS page

Bonus

Below is an example of the array currently live in my SAN, serving everything including this page. It consists of three 10TB spinning disks, plus two 500GB SSDs acting as an L2ARC as well as a special VDEV in a mirror.

╰─$ zpool list tank -v
NAME                                      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank                                     27.7T  8.71T  19.0T        -         -     0%    31%  1.00x    ONLINE  -
  raidz1                                 27.3T  8.64T  18.6T        -         -     0%  31.7%      -    ONLINE
    ata-WDC_WD100EMAZ-00WJTA0_1EG9UBBN       -      -      -        -         -      -      -      -    ONLINE
    ata-WDC_WD100EMAZ-00WJTA0_1EGG56NZ       -      -      -        -         -      -      -      -    ONLINE
    ata-WDC_WD100EMAZ-00WJTA0_2YJXTUWD       -      -      -        -         -      -      -      -    ONLINE
special                                      -      -      -        -         -      -      -      -  -
  mirror                                  428G  70.5G   358G        -         -     9%  16.5%      -    ONLINE
    sda5                                     -      -      -        -         -      -      -      -    ONLINE
    ata-CT500MX500SSD1_2005E286AD8B          -      -      -        -         -      -      -      -    ONLINE
cache                                        -      -      -        -         -      -      -      -  -
  ata-CT500MX500SSD1_1904E1E57733-part1  34.7G  31.2G  3.51G        -         -     0%  89.9%      -    ONLINE