Well, after a while in the container world I've come to realise that keeping all these containers up to date is hard work and time consuming with plain docker compose. I've recently learnt that Portainer may come in handy here. I believe that feeding the yaml file through Portainer allows it to take control of updates. Correct?
I have a TrueNAS Scale machine with a VM running my containers, as I find it's the easiest approach for secure backups: I replicate the VM to another small server just in case.
But I have several layers to maintain. I don't like the idea of apps on TrueNAS as I'm worried I don't have full control of app backups. Is there a simpler way to keep my containers up to date?
Yeah, so don't use it! Use Nix instead.
I run about 70 containers. I really don't find it that difficult. I do run a Watchtower fork, but I run it with `--run-once --cleanup`. I do that once a month, after I feel confident that everyone else has done all the beta testing on the new updates for me. So hats off to all you guys who just YOLO your updates. You are an invaluable resource to the selfhosting community. Thank you.

As far as Linux updates, I'm running Ubuntu Jammy, so those updates don't usually introduce breaking changes and I complete them as they become available.

I use Portainer, but I am unaware of any auto-update features for Docker containers. You can feed it a new yaml and it will replace or recreate the container based on that yaml, but it doesn't do it automatically. Portainer is just a handy way to consolidate all your container administration in one place in lieu of using the terminal.
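If it helps, a monthly one-shot run like that can be sketched as a cron entry. The paths and the `containrrr/watchtower` image tag below are my assumptions; swap in whichever fork you actually use:

```shell
# /etc/cron.d/watchtower-monthly (hypothetical) -- run once a month at 04:00 on the 1st
# needs access to the Docker socket; adjust the image to your Watchtower fork
0 4 1 * * root docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --run-once --cleanup
```

With `--run-once` the container exits after a single update pass, so nothing keeps polling registries in the background between runs.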
There are other options for updating your containers, like WUD or similar. They will alert you that there is an update, but you have to manually initiate it. Anecdotally, I've only encountered one breaking change, and that was when Portainer updated but was incompatible at the time with the current version of Docker, or something like that. Memory is foggy this morning. It took about an hour to find a fix and implement it, so it wasn't an excruciating changeup.
I messed around in Portainer before, and I think OP is possibly referring to their feature where it can watch a git repo and, any time a change occurs, it'll try to do a pull and recreate the container.
Please do report back. I am always down to learn new tricks.
Are you updating thousands of stacks every week? I update a couple of critical things maybe once a month, and the other stuff maybe twice a year.
I don’t recommend auto updates, because updates break things and dealing with that is a lot of work.
I’ve had auto updates on since day one, and the only thing that has ever broken was when filebrowser changed to filebrowser quantum. I just needed to update my config.
I guess it depends what you run, and how the projects/containers are configured to handle updates and “breaking changes” in particular.
But also, I’m being a bit broad with the term “breaking changes”. Other kinds of “breaking changes” that aren’t strictly crashing the software, but that still cause work, include projects that demand a manual database migration before being operational, a config change, or just a UI change that will confuse a user.
The point is, a lot of projects demand user attention which completely eclipses the effort required to execute a docker update.
Well, I also use Stash, and every couple weeks I get prompted to do a database schema upgrade. So I click the button and a few seconds later I’m back to using it.
A while back, some of the arrs started requiring authentication, so I had to create a password.
But outside of those scenarios, I don’t think I’ve seen any significant changes. There’s always slight changes, but I’m pulling updates because I want those changes.
If there is some unusual case where a change is really unwanted, I’ll downgrade and/or restore from backup.
I don’t recommend auto updates, because updates break things and dealing with that is a lot of work.
Learnt this the hard way. Been version pinning ever since.
You’ve done the hard work building the compose file. Push that file to a private GitHub repository, set up Renovate bot, and it’ll create PRs to update those containers on whatever cadence and rules you want (such as auto-updating bug fixes from certain registries).
Then you just need to set up SSH access to the VM running the containers and a simple GitHub Action to push the updated compose file and run docker compose up. That’s what I do, and it means updates are just a case of merging a PR when it suits me.
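For anyone who wants to copy this pattern, a minimal workflow could look roughly like the sketch below. The repo layout, secret names, and the `appleboy/ssh-action` choice are my assumptions, not necessarily the commenter's exact setup:

```yaml
# .github/workflows/deploy.yml -- hypothetical sketch
name: Deploy compose stack
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Restart stack over SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VM_HOST }}
          username: ${{ secrets.VM_USER }}
          key: ${{ secrets.VM_SSH_KEY }}
          script: |
            cd /opt/stacks/myapp    # placeholder path on the VM
            git pull
            docker compose pull
            docker compose up -d
```

Merging a Renovate PR into main then triggers the deploy automatically.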
Also, I would suggest ditching the VM and just running the docker commands directly on the TrueNAS host: far less overhead, one less OS to maintain, and it makes shared resources (like a GPU) easier to manage.
You should look at restic or Kopia for backups; they are super efficient and encrypted. All my docker data is backed up hourly, and thanks to the way it handles snapshots, I have backups going back literally years that don’t actually take up much space.
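As a rough idea of what that looks like with restic: you initialise a repository once, then each run only uploads changed blocks and retention is handled by `forget --prune`. The repo location, password file, and retention policy below are placeholders:

```shell
# one-time setup: restic -r sftp:backup@nas:/srv/restic init
export RESTIC_REPOSITORY=sftp:backup@nas:/srv/restic   # placeholder repo
export RESTIC_PASSWORD_FILE=/root/.restic-pass         # placeholder path

# hourly: back up the docker data dirs, then thin out old snapshots
restic backup /opt/stacks
restic forget --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune
```

Because snapshots are deduplicated, keeping months of history costs little extra space, which is how the "years of backups" above stays cheap.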
I use Tugtainer to update my containers; makes it pretty simple.
Tugtainer
I always giggle
Since there’s no lack of solutions here, I’m going to add one more. If you write a bash script to update the containers, you can have it run via a systemd service, which is very easy to set up, and it’ll work the same as running the command on your computer.
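For the systemd route, that's two small unit files plus your script. The names, paths, and weekly schedule here are made up for illustration:

```
# /etc/systemd/system/update-containers.service  (hypothetical)
[Unit]
Description=Pull and restart compose stacks

[Service]
Type=oneshot
ExecStart=/usr/local/bin/update-containers.sh

# /etc/systemd/system/update-containers.timer  (hypothetical)
[Unit]
Description=Run container updates weekly

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now update-containers.timer`; `Persistent=true` makes a missed run fire at the next boot.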
Use Nix. It is MUCH more deterministic and capable than silly Docker. It can even be used to deterministically create docker images if you really need them.
Am I mistaken, or isn’t Nix a package manager, where Docker is a container system? They’re related, but not really comparable.
You’re badly misinformed.
Nix is a language, a package manager (the biggest in the world), a dev environment scaffolding, a systemd orchestration tool, a full Linux distribution, and pretty much anything that you can describe infrastructure-as-code as. You can literally do almost anything with Nix. You can even build entire OCI/Docker images with just one Nix file.
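For the curious, the image-building claim refers to nixpkgs' `dockerTools`. A minimal sketch (the package choice and names are arbitrary):

```nix
# image.nix -- build with `nix-build image.nix`, then `docker load < result`
{ pkgs ? import <nixpkgs> {} }:
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

The result is a reproducible OCI tarball built entirely from the Nix expression, with no Dockerfile involved.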
Wow it does all that? Definitely not what we need, then.
I think you would be fine just installing the apps in TrueNAS. You can have snapshots, you can have remote backup with e.g. StorJ and updating is so easy. I was also doing stuff manually but eventually found out that it’s not worth it. And realistically I won’t stop using TrueNAS anytime soon.
TrueNAS apps are just docker containers that were written by someone else anyway. You can always turn one into a custom app and see its internal composition, or just make a custom app and choose the image and settings yourself, exactly like in Portainer.
https://github.com/mag37/dockcheck
I run this script whenever I want to update my containers. Then it’s simple to individually select the containers to update, or all of them.
At work I use Kubernetes and quite like it (upgrading containers without downtime FTW), but I didn’t bother trying to set up the infrastructure myself. Some argue it’s not worth the effort for self hosting, I dunno.
What I do like to use is Dockge, to keep docker but also keep your sanity. It even offers a single button for “docker compose pull”, which is great if you don’t have too many compose files / stacks. Combine it with a simple shell script to batch pull/build all stacks in one go, plus some backup solution, and it’s actually nice to use and does all that I need. I love CLIs, but I’ve had situations where the GUI came in very handy.
```shell
#!/bin/bash
# note: this will update and START all dockge stacks, even if you stopped them before
shopt -s nullglob
for proj in /opt/dockge /opt/stacks/*/; do
  echo "> $proj"
  docker compose -f "$proj/compose.yaml" up --pull always --build --detach
  echo ""
done
```
Podman is an alternative to Docker which integrates better with systemd and it also offers a way to automatically update containers.
I actually tried to switch to Podman from Docker, but I have a major holdup. In my Docker setup for my arr stack I have gluetun, and basically I set it up by mapping the services’ ports on gluetun and giving the other services a depends_on, to make sure gluetun is up before the rest. However, I’ve looked several times at how to do this on Podman, but no luck. Does anyone here have an idea how this works?
Short version, add this to your Quadlet file (with whatever service your gluetun Quadlet starts):
```
[Unit]
Requires=gluetun.service
After=gluetun.service
```

An article I found helpful when starting with Quadlets, which can even replace Docker Compose: https://mo8it.com/blog/quadlet/
I am not sure yet what Quadlets are, but I will check. Thanks!
Been using a Quadlet Podman arr stack for a year or two; pretty damn bulletproof once set up, easier to read, rootless, SELinux enabled, systemd controlled, updated with podman auto-update. Worth the time to learn.
podlet can help you hit the ground running. It can create Quadlet files out of Podman commands or even (Docker) Compose files. 90% of the time it works every time ;}, but even the oopses get you most of the way there.
My arr stack is set up in a pod which means they all have their own gluetun network and come up as one, but you can just use Network=container:gluetun in container files.
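Put together, a `.container` file for one of the arrs could look like the sketch below. The image and file names are hypothetical, and it assumes a gluetun Quadlet that generates gluetun.service:

```
# ~/.config/containers/systemd/sonarr.container  (hypothetical)
[Unit]
Requires=gluetun.service
After=gluetun.service

[Container]
Image=lscr.io/linuxserver/sonarr:latest
Network=container:gluetun
AutoUpdate=registry

[Install]
WantedBy=default.target
```

`Network=container:gluetun` shares gluetun's network namespace (the Podman equivalent of Docker's `network_mode: service:gluetun`), and `AutoUpdate=registry` opts the container into `podman auto-update`.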
I’ve not really got much experience myself with either docker/podman, but I think you’re looking for podman’s [quadlets](https://www.redhat.com/en/blog/quadlet-podman)?
Since Podman is based around systemd services managing the containers, why not have a look at systemd .service files? I know you can set dependencies in those and so you can say that your other containers can’t start unless gluetun successfully starts first.
Nice thanks for the tip! I will look into it and see if I can do something about it
Yaknow, now that I know it’s tightly coupled to systemd, I especially don’t care about Podman. Thank you genuinely for resolving any curiosity about it, however.
It’s not tightly coupled to anything. It just ships with a systemd generator allowing you to manage containers, pods or networks with systemd if you want. And lots of people are noticing the benefits of that arrangement.
That sounds heavy and complicated. Terraform + plain docker is super easy and makes the machines trivial to replace, as well redeploying updating their containers without downtime.
And I don’t have to learn a damn thing about systemd’s nonsense. Nor do I have to learn a single bit of k8s yaml braindamage.
That sounds heavy and complicated.
It’s neither. A systemd generator just transforms a simple 15 line container text file to a simple 20 line service text file, and then the container lifecycle and dependencies are managed by systemd like any other system or user service.
bro just run the fucking container.
And auto rollback to the previous image if a container fails after an update.
And it doesn’t run as root by default! (which Docker does, IIRC, but that can be turned off)
Podman and quadlets are the way to go for monoserver services
Unless you forgot
`AutoUpdate=registry` in the `.container` file for half of them, like I did. But yeah, I switched to Podman over a year ago and I’m not looking back.
I just started playing with Dockhand and it looks like it has a built-in update schedule mechanism. It fills a comparable role to Portainer, so maybe check that out.
What I really like about Dockhand is the built-in vulnerability checker for images.