I’m a retired Unix admin. It was my job from the early '90s until the mid '10s, and I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home even though I have a decent understanding of how it works: although I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of “interesting” reading and training.
It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.
I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?
Dude, I’m kinda you. I just jumped into Docker over the summer… feel stupid for not doing it sooner. There is just so much pre-made content, tutorials, you name it. It’s very mature.
I spent a weekend containerizing all my home services… totally worth it, and easy as pi[hole] in a container!
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- DNS: Domain Name Service/System
- Git: Popular version control system, primarily for code
- HTTP: Hypertext Transfer Protocol, the Web
- IP: Internet Protocol
- LXC: Linux Containers
- NAS: Network-Attached Storage
- PIA: Private Internet Access brand of VPN
- Plex: Brand of media server package
- RAID: Redundant Array of Independent Disks for mass storage
- SMTP: Simple Mail Transfer Protocol
- SSD: Solid State Drive mass storage
- SSH: Secure Shell for remote terminal access
- SSL: Secure Sockets Layer, for transparent encryption
- VPN: Virtual Private Network
- VPS: Virtual Private Server (opposed to shared hosting)
- k8s: Kubernetes container management package
- nginx: Popular HTTP server
It’s basically a VM without the drawbacks of a VM, so why would you not? It’s hecking awesome.
I would absolutely look into it. Many years ago when Docker emerged, I did not understand it and called it “hipster shit”. But a lot of the people around me who used Docker at that time did not understand it either. Some lost data, some had services that stopped working and they had no idea how to fix it.
Years passed and containers stayed, so I started to have a closer look and tried to understand it: what you can do with it and what you cannot. As others here said, I also had to learn how to troubleshoot, because stuff now runs inside a container and you don’t just copy a new binary or library into a container to try to fix something.
Today, my homelab runs 50 containers and I am not looking back. When I rebuilt my homelab this year, I went full Docker. The most important reason for me was: every application I run dockerized is predictable and isolated from the others (on the binary side; the network side is another story). The problem I had earlier, when running everything directly on the box in Linux, was things like one application needing PHP 8.x while another, older one still only runs with PHP 7.x. Or multiple applications depending on a specific library, and after updating it one app works while the other doesn’t anymore because it would need an update too. Running an apt upgrade was always a very exciting moment… and not in a good way. With Docker I do not have these problems. I can update each container on its own. If something breaks in one container, it does not affect the others.
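To make that concrete, here is a minimal sketch of how two apps can be pinned to different PHP versions side by side (the service names, image tags and paths are just illustrative, not my actual stack):

```yaml
# Hypothetical example: two apps, each pinned to its own PHP version.
services:
  legacy-app:
    image: php:7.4-apache          # older app stays on PHP 7.x
    volumes:
      - ./legacy-app:/var/www/html
    ports:
      - "8081:80"

  modern-app:
    image: php:8.3-apache          # newer app gets PHP 8.x
    volumes:
      - ./modern-app:/var/www/html
    ports:
      - "8082:80"
```

Each container carries its own interpreter and libraries, so updating one cannot break the other.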
Another big plus is the backups you can do. I back up every docker-compose file plus the data for each container with Kopia. Since barely anything is installed in Linux directly, I can spin up a VM, restore my backups with Kopia and start all containers again to test my backup strategy. Stuff just works. No fiddling with the Linux system itself, adjusting tons of config files and installing hundreds of packages to get all my services up and running again when I have a hardware failure.
I really started to love Docker, especially in my Homelab.
Oh, and you would think resource usage is high when everything is containerized? My 50 containers right now consume less than 6 GB of RAM, and I run stuff like Jellyfin, Pi-hole, Home Assistant, Mosquitto, multiple Kopia instances, multiple Traefik instances with CrowdSec, Logitech Media Server, Tandoor, Zabbix and a lot of other things.
It seems like docker would be heavy on resources since it installs & runs everything (mysql, nginx, etc.) numerous times (once for each container), instead of once globally. Is that wrong?
You would think so, yes. But to my surprise, my well over 60 containers so far consume less than 7 GB of RAM, according to htop. Also, containers can of course network and share services. For external access, for example, I run only one instance of Traefik. Or one coturn for Nextcloud and Synapse.
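The sharing works by putting containers on a common Docker network. A rough, hypothetical sketch (the network name, hostname and labels are placeholders, and the Traefik instance itself is defined elsewhere): create the network once with `docker network create proxy`, then any stack can join it and be picked up by the single reverse proxy:

```yaml
# Hypothetical stack joining a pre-existing "proxy" network
# that one shared Traefik instance also sits on.
services:
  jellyfin:
    image: jellyfin/jellyfin
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.lan`)"

networks:
  proxy:
    external: true   # created once, shared by all stacks
```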
Hi, also used to be a sysadmin and I like things that are simple and work. I like Docker.
Besides what you already noticed (that most software can be found packaged for Docker) here are some other advantages:
- It’s much lighter on resources and more efficient than virtual machines.
- It provides a way to automate installs (Docker Compose) that’s (much) easier to get started with than things like Ansible (see the compose sketch after this list).
- It provides a clear separation between configuration, runtime, and persistent data and forces you to get organized.
- You can group related services.
- You can control interdependencies, privileges, shared access to resources, etc.
- You can define simple or complex virtual networking topologies between containers as you like.
- It adds extra security (for whatever that’s worth to you).
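To make the compose point concrete, here is a minimal, hypothetical docker-compose.yml (image, ports, paths and timezone are example values only). One short file captures the install, its configuration, and where the persistent data lives:

```yaml
# Hypothetical single-service stack: Pi-hole with its state on the host.
services:
  pihole:
    image: pihole/pihole:latest
    restart: unless-stopped
    environment:
      TZ: "Europe/Berlin"                        # example value
    volumes:
      - ./pihole/etc-pihole:/etc/pihole          # persistent data on the host
      - ./pihole/etc-dnsmasq.d:/etc/dnsmasq.d
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"                            # web UI on a non-default host port
```

A single `docker compose up -d` brings it up, and the same file reproduces the identical setup on any other machine.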
A brief description of my own setup, for ideas, feel free to ask questions:
- Router running OpenWRT + server in a regular PC.
- Server has 32 GB of RAM (a bit overkill for now, a Black Friday upgrade; it ran with 4 GB for years), an Intel CPU with embedded GPU, the OS on an M.2 SSD, and 8 HDD bays in Linux software RAID (md).
- OS is Debian stable barebones, only Docker, SSH and NFS are installed on the host directly. Tip: use whatever Linux distro you know and like best.
- Docker is installed from their own repository, not from Debian’s.
- Everything else runs from docker containers, including things like CUPS or Samba.
- I define all containers with compose, and map all persistent data to host storage. This way, if I lose a container or even the whole OS, I just re-provision from the compose definitions and pick up right where I left off. In fact, destroying and recreating containers cleanly is common practice with Docker (see the sketch below).
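A hypothetical illustration of that pattern (the image, paths and port are placeholders for whatever service you run): all state sits in bind-mounted host directories next to the compose file, so `docker compose down` followed by `docker compose up -d` rebuilds the container from scratch and it picks up the same data:

```yaml
# Hypothetical example: disposable container, persistent host data.
services:
  mosquitto:
    image: eclipse-mosquitto:2
    restart: unless-stopped
    volumes:
      - ./mosquitto/config:/mosquitto/config   # config lives on the host
      - ./mosquitto/data:/mosquitto/data       # so does the broker's state
    ports:
      - "1883:1883"
```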
Learning Docker and Compose is not very hard, especially with your background on the job.
If you have specific requirements, e.g. storage or exposing services over the internet, please ask.
Note: don’t start with Podman or rootless Docker, start with regular Docker. It will be 10x easier. You can transition to the others later if you want.
It seems like docker would be heavy on resources since it installs & runs everything (mysql, nginx, etc.) numerous times (once for each container), instead of once globally. Is that wrong?
There’s nothing stopping you from running a single instance of those and only adding databases and config for each app. The configs that come with projects set them up individually because they need to offer full, self-contained examples, but those configs are only meant as a guideline.
Also keep in mind that the overhead of just running multiple instances isn’t very big. The resources are consumed when you start having connections, using CPU and storing data, and those costs are going to be the same no matter how many instances you have.
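A rough sketch of the single-instance approach (image tags, names and passwords are placeholders; each app’s database and user still has to be created once, and real setups would use secrets rather than inline passwords): one MariaDB container serving several apps, each pointing at the same `db` host:

```yaml
# Hypothetical shared database: one MariaDB instance, one database per app.
services:
  db:
    image: mariadb:11
    restart: unless-stopped
    environment:
      MARIADB_ROOT_PASSWORD: "changeme"   # placeholder only
    volumes:
      - ./mariadb:/var/lib/mysql          # one data directory for all databases

  nextcloud:
    image: nextcloud:latest
    environment:
      MYSQL_HOST: db                      # same MariaDB instance...
      MYSQL_DATABASE: nextcloud           # ...separate database per app
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: "changeme"
    depends_on:
      - db

  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: "changeme"
    depends_on:
      - db
```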