• DonutsRMeh@lemmy.world · 12 days ago

    Set up my Audiobookshelf server successfully. Also, I just realized that the Synology NAS I’ve had running for a couple of years without really using it much can be mounted onto my Debian server, which I use a lot, as mass storage, and it works just fine. Mind blown. I now have plenty of storage after struggling for a while. Lmao.
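
    Mounting a Synology share over NFS is one way to do this. A hypothetical /etc/fstab entry for the Debian box (hostname, export path, and mount point are assumptions; the shared folder also needs NFS enabled and the Debian host allowed in its NFS permissions on the Synology side):

    ```
    # Hypothetical: mount a Synology NFS export as mass storage on Debian
    synology.local:/volume1/storage  /mnt/nas  nfs  vers=4.1,hard,_netdev  0 0
    ```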

    • shark@lemmy.org (OP) · 12 days ago

      Set up my audiobookshelf server successfully.

      I’ve been meaning to do this for a while. Do you put ebooks in it too, or just audiobooks and podcasts? I’ve been using BookLore for my ebooks, and really like it – I just wish it were a little faster.

      • DonutsRMeh@lemmy.world · 10 days ago

        Strictly audiobooks. Mostly for my wife, because that’s all she listens to. For me, it’s 90% ebooks on my Kobo and 10% audiobooks. I only listen to an audiobook when I’m doing something around the house or driving. I’ve also built my own Android ABS client, tailored to my liking.

  • baller_w@lemmy.zip · 12 days ago

    I migrated openaw from Docker running on my Raspberry Pi to an old NUC I had lying around. Backed it mainly with models from OpenRouter or my local Ollama instance; for very difficult tasks it uses Anthropic. Added it to my GitHub repo and implemented Plane for task management. Added a subagent for coding and have it work on touch-up or research tasks I don’t have personal time for. Made an SDLC document that it follows so I can review all of its work. Added a cron job so it checks for work every hour. It ran out of tasks in five days. Work quality: C+, but it’s a hell of a lot better than having nothing.
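
    The hourly check can be a plain crontab entry; a hypothetical sketch (the script path and log file are made up):

    ```
    # Hypothetical crontab line: look for new tasks at the top of every hour
    0 * * * * /opt/agent/check-for-work.sh >> /var/log/agent-cron.log 2>&1
    ```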

    It helped research and implement SilverBullet for personal notes management in one shot.

    I also migrated all of my services’ DNS resolution to Cloudflare so I get automatic TLS handoff, and set up nginx with deny rules so any app I don’t want exposed doesn’t get proxied.
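
    A minimal sketch of the deny-rule idea, assuming a LAN-only app behind nginx (the server name, subnet, and upstream port are all assumptions):

    ```nginx
    server {
        listen 443 ssl;
        server_name internal-app.example.com;

        allow 192.168.1.0/24;  # LAN clients may reach the proxy
        deny  all;             # everyone else gets 403, nothing is proxied

        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }
    ```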

    This weekend I’m resurrecting my HomeAssistant build.

  • GnuLinuxDude@lemmy.ml · 12 days ago

    I’ve been self-hosting for years, but with a recent move came a fresh opportunity to do my network a bit differently. I’m now running a capable OpenWRT router, and support for AdGuard Home is practically built into OpenWRT. I just needed to configure it right and set it up, and the documentation was comprehensive enough.

    For years I had kept a Debian VM running Pi-hole. I kept it ultra lean, with a cloud kernel, 3 GB of disk space, and 160 MB of RAM, just so it could control its own network stack, and I’d set devices to manually use its IP address to be covered. AGH seems to be essentially the same thing as Pi-hole. With my new setup the entire network is covered automatically, without having to configure any device. And yes, I know I could’ve done the same before by forwarding DNS lookups to the Pi-hole, but I was always afraid it would cause a problem for me and I’d need an easy way to back out of the adblocking. Subjectively, over about 6 years, only a couple of worthless websites blocked me.

    I haven’t yet gotten to the point where I’m also trying to intercept hardcoded DNS lookups, but soon… It’s not urgent for me because I don’t have sinister devices that do that.
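
    For when that day comes: the usual OpenWRT approach is a firewall redirect that DNATs any outbound port-53 traffic back to the router (a sketch of the standard recipe, to be adapted to the local setup):

    ```
    # Hypothetical /etc/config/firewall section: hijack hardcoded DNS
    # from LAN clients and send it back to the router's resolver
    config redirect
            option name 'Intercept-DNS'
            option src 'lan'
            option src_dport '53'
            option proto 'tcp udp'
            option target 'DNAT'
    ```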

  • Ebby@lemmy.ssba.com · 12 days ago

    I finally got around to installing Jellyfin. Still trying to get hardware transcoding working. I think I have it set up, but it still wants to use the CPU. I’m thinking it’s permissions, but I ran out of time.

    Fun project.

    • BaconWrappedEnigma@lemmy.nz · 12 days ago

      I think QSV is the new “easiest” way if you have an Intel CPU. Here are some docker compose values that might help:

          group_add:
            - "110"
            - "44"
          devices:
            - /dev/dri/renderD128:/dev/dri/renderD128
      

      110 is render

      44 is video

      You can grep render /etc/group to find your values.
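
      getent works too; its output is name:passwd:GID:members, so field 3 is the number group_add needs. A quick sketch:

      ```shell
      # Print "name=GID" for the render and video groups; getent's output
      # is colon-separated, with the numeric GID in field 3.
      getent group render video | awk -F: '{print $1 "=" $3}'
      ```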

      I found CPU transcoding to be about as effective as GPU acceleration for my small media server setup. Nvidia wasn’t worth it for me.

  • harsh3466@lemmy.ml · 12 days ago

    I got a test box set up with NixOS and a config that runs all of my services. I wanted to test its declarative rebuild promise, so I:

    1. Filled the services with some of my backed-up data (a copy of the data, not the actual backup)
    2. Ran it for a few days, using some of the services
    3. Backed up the data of the NixOS test server, as well as the NixOS config
    4. Reinstalled NixOS on the test box, brought in the config, and rebuilt it.

    And it worked!!! All services came back with the data, and all configuration was correct.

    I’m going to keep testing, and depending on how that goes I may switch my prod server and NAS to NixOS.
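
    For anyone curious what the declarative part looks like, a minimal hypothetical configuration.nix fragment (the service choices are just examples):

    ```nix
    # Services declared here are recreated by "nixos-rebuild switch" on a
    # fresh install; only the data needs restoring separately.
    { config, pkgs, ... }:
    {
      services.jellyfin.enable = true;
      services.postgresql.enable = true;
    }
    ```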

    • smiletolerantly@awful.systems · 12 days ago

      Very cool!

      Re: the backup/restore of state in NixOS: I found myself writing the same things over and over again for each VM/service, so I finally wrote this wrapper module (in action e.g. here for Jellyfin), which configures both the backup services and timers, as well as adding a simple rsync-restore-jellyfin command to the system packages. In case you find this useful and don’t already have your own abstractions, or a sufficiently different use case 😄

  • TheRagingGeek@lemmy.world · 12 days ago

    This week I saw my 3-machine cluster flailing, trying to stay online. Digging around, I identified it as an issue with communication with my NAS. It was running NFSv3, so I swapped that to NFSv4.1 and did some tuning, and now my services have never been faster!
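
    For reference, a hypothetical fstab entry for that kind of mount (hostname, export path, and transfer sizes are assumptions; rsize/wsize are the usual tuning knobs):

    ```
    # Hypothetical: NFSv4.1 mount with 1 MiB transfer sizes
    nas.local:/export/media  /mnt/media  nfs  vers=4.1,rsize=1048576,wsize=1048576,hard,_netdev  0 0
    ```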

  • Klox@lemmy.world · 12 days ago

    I’m redoing everything I have from scratch. This week I have FreeIPA set up from OpenTofu + Ansible configs, which also enroll most of my other servers against FreeIPA. I am still migrating TrueNAS to use FreeIPA’s Kerberos realm for auth, and I need to chown a lot of files for the new UIDs and GIDs homed in FreeIPA. After that, I’m setting up FreeRADIUS for auth to switches, APs, and WiFi. And then after that, I’m back to overhauling my k8s stack. I have Talos VMs running but didn’t finish patching in Cilium. And after that, the real fun begins.

  • aksdb@lemmy.world · 12 days ago

    Finally took the time to set up Woodpecker CI to replace Drone. Also finally linked it not only to my self-hosted Gitea, but also to GitHub, so I can automate a few builds there as well.

    In the process I also learned that I can set up a whole bunch of pods in a single kube definition for podman/quadlets, which allows me to have a much cleaner setup. Previously I was only aware that you could define a single pod with multiple containers. It makes sense, but it never occurred to me before.
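
    A sketch of the multi-pod idea (names and images are assumptions): one Kubernetes-style YAML with several Pod documents separated by ---, referenced from a single quadlet .kube unit via Yaml=/path/to/stack.yaml:

    ```yaml
    # stack.yaml: two pods in one kube definition for podman kube play / quadlets
    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
        - name: nginx
          image: docker.io/library/nginx:alpine
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: cache
    spec:
      containers:
        - name: redis
          image: docker.io/library/redis:alpine
    ```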

  • sorghum@sh.itjust.works · 12 days ago

    The Nextcloud AIO instance that hadn’t been working since September suddenly started working after I updated it. This was all after their forums did fuck all to help except tell me to get gud. I knew the problem wasn’t on me or my config, and I feel so vindicated.

    • bobslaede@feddit.dk · 12 days ago

      Have you had a look at OpenCloud? Not many add-ons, but it’s a simple-ish cloud drive with docs and such, and it doesn’t use many resources.

      • sorghum@sh.itjust.works · 12 days ago

        I have an instance running, but haven’t had a ton of time to dedicate to getting it the way I need it. I need a calendar that is accessible anonymously via the web, so people know my availability. For the file server, CalDAV, and CardDAV, I was able to find separate solutions.

  • thelocalhostinger@lemmy.world · 12 days ago

    Decided to buy a Raspberry Pi; it arrived, I installed Pi-hole on it, and I put it in my dad’s house, all in a few days. Biggest win: I just took action and did it, instead of researching, brainstorming, and writing stuff down for weeks and then never executing.

  • silenium_dev@feddit.org · 12 days ago

    I already had Keycloak set up, but a few services don’t support OIDC or SAML (Jellyfin, Reposilite), so I’ve deployed lldap and connected those services and Keycloak to it. Now I really have a single user across all services.

    • WhyJiffie@sh.itjust.works · 12 days ago

      How did you migrate your existing accounts to this system? Or did you just make a new account from scratch?

      • silenium_dev@feddit.org · 12 days ago

        I recreated the Keycloak account from LDAP and then manually patched the databases of all OIDC-based services to the new account UUID, so the existing accounts are linked to the new Keycloak account.

        I have two Keycloak accounts: one in the master realm for administrative purposes, and one in the apps realm for all my services, so I didn’t break access to Keycloak.