Well, after a while in the container world I've come to realise that keeping all these containers up to date with plain Docker Compose is hard work and time-consuming. I've recently learnt that Portainer may come in handy here. I believe that feeding the YAML file through Portainer allows it to take control of updates. Correct?

I have a TrueNAS Scale machine with a VM running my containers, as I find it's the easiest approach for secure backups: I replicate the VM to another small server just in case.

But I have several layers to maintain. I don't like the idea of apps on TrueNAS, as I'm worried I don't have full control of app backups. Is there a simpler way to keep my containers up to date?

  • Decronym@lemmy.decronym.xyzB · edited 7 days ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters More Letters
    Git Popular version control system, primarily for code
    HTTP Hypertext Transfer Protocol, the Web
    IP Internet Protocol
    SSH Secure Shell for remote terminal access
    SSL Secure Sockets Layer, for transparent encryption
    TCP Transmission Control Protocol, most often over IP
    k8s Kubernetes container management package

    6 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.

    [Thread #214 for this comm, first seen 5th Apr 2026, 15:30] [FAQ] [Full list] [Contact] [Source code]

  • exu@feditown.com · 7 days ago

    Podman is an alternative to Docker which integrates better with systemd and it also offers a way to automatically update containers.
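    The auto-update mechanism exu mentions works through Podman's Quadlet integration; a minimal sketch (the image name and file path are illustrative, not from the thread):

    ```ini
    # ~/.config/containers/systemd/whoami.container — hypothetical example
    [Container]
    Image=docker.io/traefik/whoami:latest
    # opt in to automatic updates: pull from the registry when a newer image exists
    AutoUpdate=registry

    [Install]
    WantedBy=default.target
    ```

    Updates are then applied on a schedule by enabling podman-auto-update.timer (systemctl --user enable --now podman-auto-update.timer), or on demand with podman auto-update.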

    • ZeDoTelhado@lemmy.world · 7 days ago

      I actually tried to switch from Docker to Podman, but I have a major hold-up. In my Docker setup for my arr stack I have Gluetun, and the way I set it up is by publishing the services' ports on the gluetun container, and giving the other services a depends_on to make sure Gluetun is up before the rest. I've looked several times for how to do this on Podman, but no luck. Does anyone here have an idea how this works?

      • jabberwock@lemmy.dbzer0.com · 7 days ago

        Short version: add this to your Quadlet file (using whatever service name your gluetun Quadlet generates):

        [Unit]
        Requires=gluetun.service
        After=gluetun.service
        

        An article I found helpful when starting with Quadlets, which can even replace Docker Compose: https://mo8it.com/blog/quadlet/
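        Put together, a dependent service's Quadlet file could look like this sketch (the qBittorrent image and file name are hypothetical, assuming your gluetun Quadlet generates gluetun.service):

        ```ini
        # qbittorrent.container — hypothetical example
        [Unit]
        # don't start until the VPN container is up, and stop if it goes down
        Requires=gluetun.service
        After=gluetun.service

        [Container]
        Image=docker.io/linuxserver/qbittorrent:latest
        # share gluetun's network namespace, like network_mode: "service:gluetun" in Compose
        Network=container:gluetun

        [Install]
        WantedBy=default.target
        ```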

          • MalReynolds@slrpnk.net · 7 days ago

            Been using a Quadlet Podman arr stack for a year or two; pretty damn bulletproof once set up. Easier to read, rootless, SELinux-enabled, systemd-controlled, updated with podman auto-update. Worth the time to learn.

            podlet can help you hit the ground running. It can create Quadlet files out of Podman commands or even (Docker) Compose files. 90% of the time it works every time ;}, but even the oopses get you most of the way there.

            My arr stack is set up in a pod, which means the containers all share gluetun's network and come up as one, but you can just use Network=container:gluetun in container files.
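            The pod variant can be sketched with Quadlet .pod units (Podman 5.0+; the pod and image names below are invented for illustration):

            ```ini
            # arr.pod — defines the shared pod
            [Pod]
            PodName=arr
            ```

            ```ini
            # sonarr.container — joins the pod instead of having its own network
            [Container]
            Image=docker.io/linuxserver/sonarr:latest
            Pod=arr.pod
            ```

            These are two separate files; Quadlet wires the container's unit to depend on the pod's unit automatically.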

      • 4am@lemmy.zip · 7 days ago

        Since Podman is built around systemd services managing the containers, why not have a look at systemd .service files? You can set dependencies in those, so you can say that your other containers can't start unless gluetun starts successfully first.

        • greyscale@lemmy.grey.ooo · 7 days ago

          Y'know, now that I know it's tightly coupled to systemd, I especially don't care about Podman. Thank you genuinely for resolving any curiosity about it, however.

          • UnityDevice@lemmy.zip · 7 days ago

            It’s not tightly coupled to anything. It just ships with a systemd generator allowing you to manage containers, pods or networks with systemd if you want. And lots of people are noticing the benefits of that arrangement.

            • greyscale@lemmy.grey.ooo · 7 days ago

              That sounds heavy and complicated. Terraform + plain Docker is super easy and makes the machines trivial to replace, as well as redeploying and updating their containers without downtime.

              And I don’t have to learn a damn thing about systemd’s nonsense. Nor do I have to learn a single bit of k8s yaml braindamage.

              • UnityDevice@lemmy.zip · 7 days ago

                That sounds heavy and complicated.

                It’s neither. A systemd generator just transforms a simple 15-line container text file into a simple 20-line service text file, and then the container lifecycle and dependencies are managed by systemd like any other system or user service.

    • K3CAN@lemmy.radio · 7 days ago

      And auto rollback to the previous image if a container fails after an update.
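      The rollback is keyed off the systemd unit failing to come back up after the update, so a health check helps surface silent breakage; a hedged sketch for recent Podman (the image, port, and endpoint are invented):

      ```ini
      [Container]
      Image=docker.io/example/webapp:latest
      AutoUpdate=registry
      # mark the unit ready only once the app actually answers;
      # an update that breaks the app then fails the unit and rolls back
      HealthCmd=curl -fsS http://localhost:8080/healthz
      Notify=healthy
      ```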

    • Elvith Ma'for@feddit.org · 7 days ago

      Unless you forgot AutoUpdate=registry in the .container file for half of them, like I did.

      But yeah, I switched to Podman over a year ago and I’m not looking back.

  • Damarus@feddit.org · 7 days ago

    With Portainer you can do GitOps and have something like Dependabot notify you of updates. I don’t believe Portainer offers completely unattended container updates.

    If you want it automated, use one of the recent Watchtower forks. It’s not generally recommended, as automated updates may break things or introduce malware through compromised accounts, but it has worked pretty well for my personal stuff. I wouldn’t recommend it for business use.

    • cron@feddit.org · 7 days ago

      With Portainer Business, you could easily build an update procedure yourself: just create webhooks for the stacks you want to update and run a daily curl script that triggers those hooks.
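      That can be as little as one crontab entry per stack (the webhook UUID below is a placeholder; Portainer shows the real URL when you create the webhook):

      ```
      # trigger a stack re-pull every day at 04:00
      0 4 * * * curl -fsS -X POST "https://portainer.example.com/api/stacks/webhooks/00000000-0000-0000-0000-000000000000"
      ```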

      • slazer2au@lemmy.world · 7 days ago

        You get 3 business licences for free, so there isn’t a reason to stick with the Community Edition for small environments.

        • Midnight Wolf@lemmy.world · 7 days ago

          My first used licence ‘expired’ after a year, even though I’m 99% sure they’re supposed to be perpetual. So I’m on my second, waiting for that to expire too…

  • ultimate_worrier@lemmy.dbzer0.com · 7 days ago

    Use Nix. It is MUCH more deterministic and capable than silly Docker. It can even be used to deterministically create docker images if you really need them.

    • BozeKnoflook@lemmy.world · 7 days ago

      Am I mistaken, or isn’t Nix a package manager, where Docker is a container system? They’re related, but really not comparable.

      • ultimate_worrier@lemmy.dbzer0.com · 7 days ago

        You’re badly misinformed.

        Nix is a language, a package manager (the biggest in the world), a dev-environment scaffolding tool, a systemd orchestration tool, a full Linux distribution (NixOS), and pretty much anything you can describe as infrastructure-as-code. You can do almost anything with Nix. You can even build entire OCI/Docker images from just one Nix file.

  • monkeyman512@lemmy.world · 7 days ago

    I just started playing with Dockhand and it looks like it has a built-in update-schedule mechanism. It fills a comparable role to Portainer, so maybe check that out.

    • Lemmchen@feddit.org · 7 days ago

      What I really like about Dockhand is the built-in vulnerability checker for images.

  • Kushan@lemmy.world · 7 days ago

    You’ve done the hard work building the compose file. Push that file to a private GitHub repository and set up Renovate bot, and it’ll create PRs to update those containers on whatever cadence and rules you want (such as auto-updating bug fixes from certain registries).

    Then you just need to set up SSH access to the VM running the containers and a simple GitHub Action to push the updated compose file and run docker compose up. That’s what I do, and it means updates are just a case of merging a PR when it suits me.
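    A deploy workflow along those lines might look like this sketch (the secret name, host, and paths are all placeholders, not from the thread):

    ```yaml
    # .github/workflows/deploy.yml — hypothetical example
    name: deploy
    on:
      push:
        branches: [main]
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Push compose file and restart the stack
            env:
              SSH_KEY: ${{ secrets.DEPLOY_SSH_KEY }}
            run: |
              install -m 600 /dev/null key && printf '%s' "$SSH_KEY" > key
              scp -i key -o StrictHostKeyChecking=accept-new compose.yaml user@vm.example:/srv/stack/
              ssh -i key -o StrictHostKeyChecking=accept-new user@vm.example \
                'cd /srv/stack && docker compose up -d --pull always'
    ```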

    Also, I would suggest ditching the VM and just running the Docker commands directly on the TrueNAS host: far less overhead, one less OS to maintain, and it makes shared resources (like a GPU) easier to manage.

    You should look at restic or Kopia for backups; they are super efficient and encrypted. All my Docker data is backed up hourly, and thanks to the way it handles snapshots, I have backups going back literally years that don’t actually take up much space.

  • SkyNTP@lemmy.ml · 7 days ago

    Are you updating thousands of stacks every week? I update a couple of critical things maybe once a month, and the other stuff maybe twice a year.

    I don’t recommend auto updates, because updates break things and dealing with that is a lot of work.

    • IratePirate@feddit.org · 7 days ago

      I don’t recommend auto updates, because updates break things and dealing with that is a lot of work.

      Learnt this the hard way. Been version pinning ever since.

    • frongt@lemmy.zip · 7 days ago

      I’ve had auto updates on since day one, and the only thing that has ever broken was when filebrowser changed to filebrowser quantum. I just needed to update my config.

      • SkyNTP@lemmy.ml · 7 days ago

        I guess it depends what you run, and how the projects/containers are configured to handle updates and “breaking changes” in particular.

        But also, I’m being a bit broad with the term “breaking changes”. Other kinds of “breaking changes” that don’t strictly crash the software but still cause work include projects that demand a manual database migration before being operational again, a config change, or just a UI change that will confuse a user.

        The point is, a lot of projects demand user attention which completely eclipses the effort required to execute a docker update.

        • frongt@lemmy.zip · 7 days ago

          Well, I also use Stash, and every couple weeks I get prompted to do a database schema upgrade. So I click the button and a few seconds later I’m back to using it.

          A while back, some of the arrs started requiring authentication, so I had to create a password.

          But outside of those scenarios, I don’t think I’ve seen any significant changes. There’s always slight changes, but I’m pulling updates because I want those changes.

          If there is some unusual case where a change is really unwanted, I’ll downgrade and/or restore from backup.

  • Caveman@lemmy.world · 7 days ago

    Since there’s no lack of solutions here, I’m going to add one more. If you can write a bash script to update the containers, you can run it from a systemd service, which is very easy to set up and works the same as running the commands on your computer.
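    A sketch of that wiring, assuming a hypothetical /usr/local/bin/update-containers.sh script:

    ```ini
    # /etc/systemd/system/update-containers.service
    [Unit]
    Description=Pull and restart container stacks

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/update-containers.sh
    ```

    To run it on a schedule rather than on demand, pair it with a matching update-containers.timer (e.g. OnCalendar=daily) and enable that instead of the service.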

  • dieTasse@feddit.org · 7 days ago

    I think you would be fine just installing the apps in TrueNAS. You can have snapshots, you can have remote backup with e.g. StorJ and updating is so easy. I was also doing stuff manually but eventually found out that it’s not worth it. And realistically I won’t stop using TrueNAS anytime soon.

    • dustyData@lemmy.world · 7 days ago

      TrueNAS apps are just Docker containers that were written by someone else anyway. You can always turn one into a custom app and see its internal composition, or just make a custom app and choose the image and settings yourself, exactly like in Portainer.

  • AzuraTheSpellkissed@lemmy.blahaj.zone · 7 days ago

    At work I use Kubernetes and quite like that (upgrading containers without downtime FTW), but I didn’t bother trying to set up the infrastructure myself. Some argue it’s not worth the effort for self-hosting, I dunno.

    What I do like to use is Dockge, to keep Docker but also keep your sanity. It even offers a single button for docker compose pull, which is great if you don’t have too many compose files / stacks. Combine it with a simple shell script to batch pull/build all stacks in one go, plus some backup solution, and it’s actually nice to use and does all that I need. I love CLIs, but I’ve had situations where the GUI came in very handy.

    #!/bin/bash
    # note: this will update and START all Dockge stacks, even ones you stopped before
    shopt -s nullglob
    for proj in /opt/dockge /opt/stacks/*/; do
      echo "> $proj"
      docker compose -f "$proj/compose.yaml" up --pull always --build --detach
      echo ""
    done