What do people use and recommend for this? I’ve read a bit about Portainer, but I’m still learning and don’t know what the best solutions are.

Today I have a handful of selfhosted services running on my home machine - mostly installed directly, but a couple running as docker containers. As the scale of my selfhosting has grown, I’ve realized that things would be a lot easier to manage if each service was run as its own container, so that installed services don’t actually affect things on my base OS.

The solution I’m looking for would make it easy (possibly a web UI) for me to monitor, modify, update, and remove containerized services, including networking and storage.

  • irmadlad@lemmy.world · 4 days ago

    I’ve read a bit about portainer, but I’m still learning

    I started with Portainer, and I still use it. It checks all the boxes for me. I would be remiss if I didn’t mention there are other platforms for managing Docker containers, such as Podman, Dockge, etc. Like I said, I started with Portainer, and I know how to drive that bus, so I stuck with it.

  • talkingpumpkin@lemmy.world · 4 days ago

    In your shoes (and, in fact, in mine) I’d try to move away from interactive tools and into file-driven ones.

    Personally I use NixOS, run WUD (What’s Up Docker) to be notified of available updates, and manually test/update the containers once in a while (every couple of weeks or so?)

    There are a bazillion other solutions (from stuff like ansible/chef/puppet, to docker-compose, to kubernetes, to… a hand-written bash script) - the idea is to set stuff up via files that you can version, reference, and write comments in, rather than using some GUI for interactive steps that you’ll forget to document in some wiki.
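    To make the file-driven idea concrete, the docker-compose route can be as small as one versioned file per service. A minimal sketch (the image name, port, and paths here are just illustrative, not a recommendation):

    # docker-compose.yml - one file you can version and comment
    services:
      freshrss:
        image: freshrss/freshrss:latest
        ports:
          - "8080:80"
        volumes:
          - ./data:/var/www/FreshRSS/data   # data survives container recreation
        restart: unless-stopped

    Then docker compose up -d brings it up, and the file lives in git alongside your comments.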

    Monitoring is a whole different beast from configuring: you’ll probably be better off using something that does just that instead of some all-in-one solution. Try looking into something like Beszel before going for the full Prometheus/Grafana stack.

  • fozid@feddit.uk · 4 days ago

    I recently moved from docker compose to podman quadlets. It took a bit of effort, but it’s fully FOSS and, for me, it’s set-and-forget. I have about 30 containers across about 12 services, set to auto-update, and it all runs through systemd.
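    For reference, a quadlet is just a systemd-style unit file that Podman turns into a service. A minimal sketch (the image and paths are illustrative, not my exact setup):

    # ~/.config/containers/systemd/web.container
    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80
    Volume=%h/web/html:/usr/share/nginx/html
    AutoUpdate=registry   # picked up by podman-auto-update.timer

    [Install]
    WantedBy=default.target

    After systemctl --user daemon-reload, systemctl --user start web runs it like any other unit.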

    • motruck@lemmy.zip · 4 days ago

      This is the direction I’m headed. Goodbye Docker. Quadlets everywhere. I’m in the process of converting docker run scripts currently. Any tips or gotchas you can share that you learned?

      • fozid@feddit.uk · 4 days ago

        Permissions were a pain to get right for my volumes and the data I migrated. Other than that, it’s really not that complicated; it was just a case of converting my compose files to systemd service files and starting the update timer.

  • jimmy90@lemmy.world · 4 days ago

    try NixOS

    all your containers and other services will be managed through one re-usable file

    if your server has >= 8GB of RAM, then Proxmox gives a nice built-in interface. I use it to make NixOS LXC containers, in which I run my containers - which does actually make sense.

  • GunnarGrop@lemmy.ml · 5 days ago

    I’d absolutely recommend Kubernetes (k3s/rke2) or podman quadlets. Quadlets are a lot easier to get started with, but are still very flexible.

    I’d recommend against using Portainer. I tried it quite recently and did not like it at all. A lot of features are paywalled, and it was overall just a frustrating experience. I’ve heard it was a lot better some years ago.

  • Joker@lemmy.ml · 5 days ago

    I personally like Dockge; it’s simple and lightweight, and I like that the web UI has a good phone interface.

  • eodur@piefed.social · 5 days ago

    If you want robust (and a ton to learn) go with k3s for a lightweight Kubernetes deployment and FluxCD.

    If you want simpler go with docker-compose and doco-cd.

    With a GitOps workflow you define it all in files in a git repo, then the server automatically deploys and updates. IMHO it’s much easier to maintain long term than click-ops.

  • gedaliyah@lemmy.world · 5 days ago

    I was using CLI exclusively for a year or so, but recently added DockMon and it’s helped with updates and at-a-glance management.

  • K3CAN@lemmy.radio · 5 days ago

    I’ll second podman quadlets. Good security, full integration with systemd, pods allow applications to easily share a namespace, and you can manage graphically through Cockpit if you really want to.

  • Daniel Quinn@lemmy.ca · 5 days ago

    Kubernetes. For a homelab, the stripped-down k3s is fantastic and surprisingly easy to get going.

    Once you’ve got Kubernetes set up, you can lean on all the many tools already out there for things like deploying complex projects (Helm) and monitoring (Prometheus/Grafana). OpenLens is a nice piece of software you can use to monitor and control your cluster too, as is k9s.

    • Jul (they/she)@piefed.blahaj.zone · 5 days ago

      What do you use for repeatable recovery and deployment of systems?

      I’ve looked at ArgoCD and FluxCD. ArgoCD was too flaky. When I made changes to Helm files, it would often fail to deploy them, and the UI often wouldn’t really show the detailed errors from things like Helm syntax errors, so it was a pain to troubleshoot.

      FluxCD was just really a pain to configure in the first place, and I didn’t want to learn Kustomize when I already have Helm charts.

      And neither really supported staged deployments or dealt with dependent services well, so I couldn’t get them to deploy the infrastructure-level Helm charts like PostgreSQL before deploying the services that depend on them. Technically, with Kubernetes, deployment order shouldn’t matter, but in reality ArgoCD would deploy the other stuff first and wait for it to come up - and it never came up because the dependencies weren’t there, which caused it to choke a lot.

      Just an example of the issues I’ve had. But I really want an easy way to make lots of small changes to charts and deploy them quickly as well as being able to quickly recover the cluster from backups if something catastrophic happens like a fire without having to manually deploy each chart. Just curious how others handle it or if it’s always manual deployment of charts via CLI only.

      • Daniel Quinn@lemmy.ca · edited · 4 days ago

        I’ve used FluxCD in the past and have looked into ArgoCD, but honestly, I’ve not seen any big benefit from either. I use k8s both at home and at work, and in both cases we do “imperative” deploys: you run helm install ... either directly or via CI, and stuff is deployed.

        So for example at my last job, our GitLab CI just had a section triggered exclusively for merges into master that ran helm install ... for all three environments. We had three values.yaml files, one for each environment, and when we wanted to deploy a new version, the process was:

        1. Create a tag for our release version (e.g. 1.2.3) and push it to the repo. This would trigger a build and push the resulting image to the container registry.
        2. Push an update to the repo with the new tag set in the appropriate Helm values file. If we wanted to deploy 1.2.3 to development but not yet to staging or production, then the tag: value in each of the environment files would look like this:
        • k8s/chart/environments/development.yaml: tag: 1.2.3
        • k8s/chart/environments/staging.yaml: tag: 1.2.2
        • k8s/chart/environments/production.yaml: tag: 1.2.2

        Once that change is pushed, the CI will automatically apply it with helm install ... and make sure that all three environments are what they’re supposed to be.
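        That CI step can be sketched roughly like this (the release name, chart path, and namespaces are illustrative, not our exact setup):

        # Re-apply each environment's values; `helm upgrade --install`
        # is idempotent, so unchanged environments are a no-op.
        for env in development staging production; do
          helm upgrade --install myapp ./k8s/chart \
            --values "k8s/chart/environments/${env}.yaml" \
            --namespace "$env" --create-namespace
        done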

        As for dependent services, that should all be in your Helm chart so they’re stood up and torn down together. The specific case you mention about “Service A” being dependent on “Service B” but stood up before “Service B” is ready is a classic problem, but easily solved:

        The dependent service (“A” in this case) should have an entrypoint that checks for everything else before starting. Here’s what I’m using right now in a project:

        #!/bin/sh
        # Block until PostgreSQL is accepting connections on its port,
        # then hand off to the container's main command.
        
        while ! nc -z postgres 5432; do
          echo "Waiting for postgres..."
          sleep 0.1
        done
        echo "PostgreSQL started"
        
        # Signal readiness (e.g. for a probe that checks this file)
        touch /tmp/ready
        
        exec "$@"
        

        I’ve even got some code that checks that all the Django migrations have run first, for the same situation. The Kubernetes philosophy is that any container should be able to die at any time and eventually be brought back up, and every container needs to be prepared for this. Typically this means your containers should operate on the basis of “if I can’t work, die, and hope the problem is solved by the time Kubernetes redeploys me”.
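        The touch /tmp/ready line in the entrypoint pairs naturally with a readiness probe in the pod spec. Roughly, as a sketch (the field values are illustrative):

        # In the dependent service's pod spec
        readinessProbe:
          exec:
            command: ["cat", "/tmp/ready"]
          initialDelaySeconds: 2
          periodSeconds: 5

        Kubernetes only routes traffic to the pod once the probe succeeds, i.e. once the file exists.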

    • thejml@sh.itjust.works · 5 days ago

      This is how I went and what I’d recommend. That said, it’s a bit of a steep learning curve, as not everything in the self-hosted/homelab community comes with Helm charts.

      • motruck@lemmy.zip · 4 days ago

        Generally my suggestion is: if you use Kubernetes at work or want to learn it, self-host that way to learn. If not, don’t add all the complexity - use Docker, docker compose, Podman, etc. Kubernetes is overkill for 90% of the use cases most companies have, but here we are.

    • valar@lemmy.ca (OP) · 5 days ago

      Thanks, what have you liked about switching to this from portainer?

      • AHorseWithNoNeigh@piefed.social · 4 days ago

        I concur with the other user: the logs are much easier to access and better organized. The compact feel is much more suited to my preference.

      • Kupi@sh.itjust.works · 4 days ago

        I just recently switched from Portainer to Dockhand, and I really like it. The UI is great, and the setup and config weren’t too complicated. I like that I can put both of my servers into one instance and update all of my containers from Dockhand instead of manually. The other thing I like is being able to view the logs for my containers. Idk if it’s a me thing, but whenever I tried to view logs in Portainer I could never scroll up, as it would update and send me back to the bottom. Again, I could’ve just been doing something wrong, but it always bothered me, and I don’t have that issue with Dockhand.

    • valar@lemmy.ca (OP) · 5 days ago

      Thanks, I’ve heard of this too. It’s hard to tell what the use-case differences between all of these are. I’ll have to do more research into how they work.

      • Evil_Shrubbery@thelemmy.club · 4 days ago

        Well, I use it but it’s not directly what I think you are asking about.

        I just use Proxmox (hypervisor) to run vm/lxc which run docker - and I just have backup images of those.

        It has pros & cons, but it’s not a Docker backup; it’s just that, by chance, it’s OK for the little use I need it for.

        I would go for one of the other recommended solutions, but perhaps consider a Proxmox layer underneath if you want a full image backup of the server itself.

    • Lka1988@lemmy.dbzer0.com · 4 days ago

      I’ll second Dockge. It works alongside Docker containers and doesn’t try to shove configs into nonstandard locations and whatnot. Plus if you have multiple Docker instances, you can install Dockge on each of them and link them all together. Very handy.

  • RanchBranch@anarchist.nexus · 5 days ago

    I personally have switched over to Komodo after using Portainer for years. Never looking back - I love it. It works perfectly and can do GUI, compose files, and repos for Docker. I also have multiple machines running stuff, and it lets me fiddle with everything in one UI.

      • RanchBranch@anarchist.nexus · 5 days ago

        I like the fact that there isn’t a distinction between the community edition and the business edition. It’s all the same thing for Komodo, whereas I felt like Portainer had a bunch of random things locked away behind the “Business Edition”, and that just rubbed me the wrong way. If I’m self-hosting something, I feel like I should be able to access all portions of it. The GUI is a little different, but once you’re used to it, I feel like it makes more sense for the most part. It has a nice way to connect other machines, so I can monitor all of the different machines in my network that are hosting things. I also wanted to mess around with some of the automation features, but I haven’t had as much time to dick around with that as I would like. I also wanted to start doing stuff from a personal Forgejo, and it was super easy to integrate. (No idea how easy it is on Portainer, as I had already jumped ship at that point)

      • RanchBranch@anarchist.nexus · 5 days ago

        Honestly, I’m not entirely certain I did it right, but it was super easy. I literally spun up Komodo, spun down Portainer without shutting any of the other containers/stacks down, then added the same stacks back through the GUI option in Komodo with the same exact compose/title/env options. It literally just recognized that the containers already running on my server were the correct ones and “added” them back to the stack in Komodo. I vaguely remember reading that there is a more “correct” way to do it, but I only read about it after the fact.