Time to start documenting it!
At 71, I have to document. I started a long time ago. I worked for a mec. contractor long ago, and the rule was: ‘If you didn’t write it down, it didn’t happen.’ That just carried over to everything I do.
Do you write down what you write down on the internet?
As in a blog or wiki? I do not because I am not authoritative. What I know came from reading, doing, screwing it up, ad nauseam. When something finally clicks for me, I write it down because 9 times out of 10, I will need that info later. But my writing would be so full of inaccuracies that it would be embarrassing and possibly lead someone astray.
It’s how cults start!
I’ve started to take a lot more notes at work. I guess there will be a time when I take notes of what month it is!
You may jest, but there are times when I can’t remember what I had for breakfast. They say that you never truly forget anything, but that our recall mechanism fades over time. For a myriad of reasons, including age, my recall mechanism is shit.
Oof, depends what you had and your definition of healthy. I am hopeful that technology helps when I am that age, only a few years off, but AI agents seem to be a start. Just need to let go of those big-data fears.
Don’t look too closely you can jinx it.
NEVER1!!!11!!
I haven’t messed with my raspberry pi in maybe a month… And I think one of my backups got corrupted, because every night I receive an email saying that it failed, along with tons of errors. Hmm, maybe I should get to that soon…
Off topic, warning: this comment section is making me want to learn things
It’s been 2 days off reddit and my brain has opinions other than “aaaargh” or “meh”.
Proceed with caution
I had an automatic reboot of all VMs and the hypervisor because of a kernel update at night. Nextcloud decided to start in maintenance mode and Jellyfin refused to start because the cache folder didn’t have enough space left. Authentik also complained about an outdated provider configuration…
Need to investigate the Nextcloud and Authentik issues over the weekend 🤗
I haven’t messed much with my servers in 2 years. I think that means I’ll hit my ROI in another 5 :)
Actually, one thing I want to do is switch from services being on a subdomain to services being on a path.
immich.myserver.com -> myserver.com/immich
jellyfin.myserver.com -> myserver.com/jellyfin

I’m getting tired of having to update DNS records every time I want to add a new service.
I guess the tricky part will be making sure the services support this kind of routing…
Wildcard CNAME pointing to your reverse proxy who then figures out where to route the request to? That’s what I’ve been doing - this way there’s no need to ever update DNS at all :)
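For the curious, that’s a single DNS record plus host-based routing at the proxy. Everything below is a placeholder sketch, not anyone’s actual setup:

```
; zone file fragment: one wildcard covers every current and future service
*.myserver.com.   300  IN  CNAME  proxy.myserver.com.
```

The reverse proxy then matches on the Host header (e.g. nginx server_name immich.myserver.com), so adding a service becomes a proxy-config change and never a DNS change.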
I find the path a bit clunky because the apps themselves will oftentimes get confused (especially front-ends). So keeping everything “bare” wrt path, and just on “separate” subdomains is usually my preferred approach.
In Nginx you can do rewrites so services think they are at the root.
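A minimal sketch of that, with the service, port, and prefix all assumptions (8096 is Jellyfin’s usual default port, but check your own):

```nginx
# Serve Jellyfin at myserver.com/jellyfin/ while the app believes it lives at /
location /jellyfin/ {
    # the trailing slash on proxy_pass strips the /jellyfin/ prefix before forwarding
    proxy_pass http://127.0.0.1:8096/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

One caveat: this only rewrites the request path. Apps that emit absolute links in their HTML still need their own base-URL setting (Jellyfin has one) or a sub_filter hack.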
Why are you having to update your DNS records when you add a new service? Just set up a wildcard A record to send *.myserver.com to the reverse proxy and you never have to touch it again. If your DNS doesn’t let you set wildcard A records, then switch to a better DNS.
Not OP but a lot of people probably use pi-hole which doesn’t support wildcards for some inane reason
That’s my case. I send every new subdomain to my nginx IP on pi-hole and then use nginx as a reverse proxy
That was my exact setup as well until I switched to a different router which supported both custom DNS entries and blocklists, thereby making the pi-hole redundant
I run opnsense, so I need to dump pi-hole. But I don’t have the energy right now to do that.
Pi-Hole was pretty straightforward at the time and I did not look back since then. Annoying, but easy.
It does support it, you just have to add it to dnsmasq. I have it set up under misc.dnsmasq_lines like so:

address=/proxy.example.com/192.0.0.100
local=/proxy.example.com/

Then I have my proxied service reachable under service.proxy.example.com

I switched to Technitium and I’ve been pretty happy. Seems very robust, and as a bonus it was easy to use it to stop DNS leaks (each upstream has a static route through a different Mullvad VPN, and since they’re queried in parallel, a VPN connection can go down without losing any DNS… maybe this is how pihole would have handled it too, though).
And of course, wildcards supported no problem.
I had the same idea, but the solution I thought about is finding a way to define my DNS records as code, so I can automate the deployment. The pain is tolerable so far (I have maybe 30 subdomains?), so I haven’t done anything yet.
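For what it’s worth, the records-as-code idea doesn’t need much tooling: keep the services in one list under version control and render whatever your DNS host can import. A toy sketch in Python (the service names and IP are made up; a real setup would push via your provider’s API or a tool like OctoDNS):

```python
# Render a BIND-style zone fragment from a single list of services,
# so adding a service is a one-line diff in version control.
SERVICES = ["immich", "jellyfin", "nextcloud"]  # hypothetical names
PROXY_IP = "192.0.2.10"                         # documentation-range IP

def render_zone(domain: str) -> str:
    """Emit one A record per service, all pointing at the reverse proxy."""
    lines = [f"{name}.{domain}. 300 IN A {PROXY_IP}" for name in sorted(SERVICES)]
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_zone("myserver.com"))
```

The point is less the output format than having a single source of truth you can diff and roll back.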
Alternatively if you’re tired of manual DNS configuration:
FreeIPA, like AD but fer ur *Nix boxes
Configures users, sudoer group, ssh keys, and DNS in one go.
Also lotta services can be integrated using LDAP auth too.
So far I’ve got proxmox, jellyfin, zoneminder, mediawiki, and forgejo authing against freeipa on top of my samba shares.
Ansible works too, just because it uses ssh, but I’ve yet to figure out how to build ansible inventories dynamically off of freeIPA host groups. Seen a coupla old scripts but that’s about it.
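I haven’t seen a maintained one either, but the transform itself is small. Assuming you can pull host groups out of FreeIPA (via the ipa CLI or JSON-RPC; the fetch is the part you’d have to fill in), shaping them into the JSON an Ansible dynamic inventory script must print looks roughly like this:

```python
import json

def to_ansible_inventory(hostgroups: dict[str, list[str]]) -> str:
    """Turn {group_name: [fqdn, ...]} (e.g. collected from `ipa hostgroup-find`)
    into Ansible dynamic-inventory JSON. `_meta.hostvars` is required by
    Ansible even when empty, so it isn't re-invoked per host."""
    inventory: dict = {"_meta": {"hostvars": {}}}
    for group, hosts in hostgroups.items():
        inventory[group] = {"hosts": hosts}
    return json.dumps(inventory, indent=2)

if __name__ == "__main__":
    # hypothetical host groups; a real script would query FreeIPA here
    print(to_ansible_inventory({"webservers": ["jellyfin.lan", "mediawiki.lan"]}))
```

Point Ansible at the script with -i and it calls it with --list; the FreeIPA query is the only missing piece.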
Current freeipa plugin for it seems more about automagic deployment of new domains.
Having a very similar infrastructure, I would love to know if you ever find anything that works for this. I’ve been maintaining a SnipeIT instance manually, but that’s a real PITA. Tried the same with ITSM-NG, but haven’t even looked into it for months.
Can’t believe nobody here mentioned NixOS so far? How about moving all of your configs into a flake and managing all of your systems with it?
I already have Ansible to manage my systems, and I like to have the same base between my PC and my server to build muscle memory.
If I was managing a pc fleet I would consider NixOS, but I don’t see the appeal right now.
Okay, but why not create more work for yourself by rebuilding everything from scratch?
I made a git repo and started putting all of my dotfiles in Stow, and then I forgot why I was doing it in the first place.
So that when setting up a new system, you can migrate all your user configuration easily, while also version-controlling it.
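For anyone curious, the Stow layout is just one directory per “package”, each mirroring $HOME (package names here are examples, not from the thread):

```
dotfiles/
  git/.gitconfig
  vim/.vimrc
  zsh/.zshrc
```

Running stow git vim zsh from inside dotfiles/ symlinks each file into the parent directory (your home, if dotfiles lives there), and the whole repo stays under git.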
git commit --message 'So that when setting up a new system, you can migrate all your user configuration easily, while also version-controlling it.'
heck i really wish we could all throw a party together. part swap, stories swap. show off cool shit for everyone to copy.
help each other fill in the missing pieces
y’all seem like cool peeps meme-ing about shit nobody else gets!
time to test the backups!
Always a white knuckle event for me
https://wiki.hackerspaces.org/List_of_Hacker_Spaces
Also check out meetup.com for linux user groups and other events.
You just described a convention.
I wish it was stable
I had a drive die yesterday
Time to distro-hop!
That’s not a homelab, that’s a home server.
Living the good life
Nothing to install? Not with that attitude!
Start a 10" rack.
No, try migrating all your docker containers to podman.
Don’t encourage me.
And then try turning on SELinux!
I set my homelab up on Bazzite immutable with podman and SELinux. It took a while to work everything out and have it boot up into a valid state hahaha
Any reason you chose Bazzite for your homelab distro? First I’ve heard of someone doing that!
Wouldn’t an immutable OS be overall a pretty good idea for a stable server?
At the start I just wanted a desktop machine that runs Steam through Sunshine/Moonlight, so hardware support and gaming stuff were very important.
My homelab used to run on my laptop when it could all fit within a couple hundred GB and I was the only user, but moving it was tricky. Since I’m a programmer I’m not afraid of this stuff, so I just spent the hours figuring out one problem at a time.
I ended up figuring out how to whitelist the HDD in SELinux, make it accessible in podman, manually edit fstab because the tools didn’t work, write a systemd service for startup, and set up automatic login. I’ve already forgotten all of it, and I would not have had to do any of this on a bog-standard Ubuntu server.
Respect! I too often take it for granted that it’s a privilege for my gaming rig and my homelab server to be separate boxes.
My server is Almalinux, my laptop is Mint, and my gaming rig is Nobara. But if I had to consolidate everything into one machine, I’d pick Nobara.
I came to the same conclusion; Nobara would have been best.
It’s not that difficult to get SELinux working with podman quadlets, especially if you run things rootless. I have a kerberized service account for each application I host and my quadlets are configured to run under those. I very rarely encounter applications that simply can’t be run rootless, and I usually can find an adequate alternative. I think right now the only thing that runs as root is one of the talk or collabora containers in my nextcloud stack. No selinux issues either.
I use podman-compose with system accounts and I don’t have a ton of issues. The biggest one is that I can’t seem to get Bluetooth and pip working on Home Assistant at the same time. Most of the servers I manage have SELinux and it works fine as long as I use :z/:Z with bind mounts.

A few years ago, I set up a VPS for my friend’s business; at the time, I didn’t know how to work with SELinux so I just turned it off. I tried to flip it back on, and it somehow bricked the system. We had to restore from a backup. Since then, I’ve been afraid to enable it on my flagship homelab server.
Are you sure it really bricked it? When you turn it on, on the next boot it needs to go over all the files and retag them (relabeling), and that can take a significant amount of time.
Honestly, I don’t know what happened, but it was unreachable via SSH and the web console. There shouldn’t have been a ton of files to tag since it was an Almalinux system that started with SELinux enabled, and all we added was a container app or two.
that started with SELinux enabled
That does not matter; it needs to go over all of them. I don’t know how long it takes with an SSD, but with an HDD it can take half an hour or more, even on a mostly base system. And the kernel starts doing this very early, before even systemd or other processes are running, so no SSH, but the web console should have been working to see what it’s doing.
Just did that last weekend. Nothing to do anymore. 😢
Did you do Quadlets?
Yes of course. Had to spend a couple of hours fixing permission related issues.
But did you run them as rootful or the intended rootless way.
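For reference, a rootless quadlet is just a unit file in the user’s config dir; the image and port below are placeholders:

```
# ~/.config/containers/systemd/whoami.container
# (picked up after `systemctl --user daemon-reload`)
[Unit]
Description=Example rootless container via quadlet

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
```

Then systemctl --user start whoami runs it under your own UID. Rootful quadlets live in /etc/containers/systemd/ instead.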
I had problems getting apps with multiple containers working in quadlets (definitely a knowledge issue on my part, but I didn’t feel the time spent learning it was beneficial; I’ll probably revisit it while learning kubernetes), so I went back to podman with docker compose.
And then migrate all your podman containers to proxmox
I test in my Homeproduction