

Just lol at Synology trying to do an Nvidia
There are plenty of N100/N350 motherboards with 6 SATA ports on AliExpress; grab them while you can.
Synology is like Ubiquiti in the self-hosted community: sure it’s self-hosted, but it’s definitely not yours. At the end of the day, you have to live with their decisions.
Terramaster lets you run your own OS on their machines. That’s basically what a homelabber wants: a good chassis and components. I couldn’t see a reason to buy a Synology after Terramaster and Ugreen started ramping up their product lines that let you run whatever OS you wanted. Synology at this point is for people who either don’t know what they’re doing or want to remain hands-off with storage management (which is valid; you don’t want to do more work when you get home from work). Unfortunately, such customers are now left in the lurch: it’s either TrueNAS or trusting some other company to keep your data safe.
Alpine isn’t exactly fortified either; it needs some work too. Ideally you’d run a deblobbed kernel with the KSPP recommendations applied, use MAC, harden permissions, and install hardened_malloc. I don’t recall if there are CIS Benchmarks or STIGs for Alpine, but those are very important too. These are my basic steps for hardening anything. Alpine does have the advantage of being lean from the start, though. Ideally you’d also compile your packages with hardened flags like on Gentoo, but for a regular container and VM host that might be too much (or not; it depends on your appetite for this stuff).
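For reference, this is the kind of thing I mean by hardened flags, sketched as a Gentoo make.conf. The exact set is a judgment call; these are just common, well-known hardening options, and Gentoo’s hardened profile already sets much of this for you:

```
# /etc/portage/make.conf (sketch; tune for your hardware and profile)
# Stack canaries plus fortified libc wrappers for common overflow classes
CFLAGS="-O2 -pipe -fstack-protector-strong -D_FORTIFY_SOURCE=2"
CXXFLAGS="${CFLAGS}"
# Full RELRO so the GOT is read-only after startup
LDFLAGS="${LDFLAGS} -Wl,-z,relro -Wl,-z,now"
```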
Your complaint is genuine and I assure you that the sentiment is shared amongst many people here. I do not like that sub for its excessively tight policies. You must also consider that Reddit has its eye on that sub since it might spread awareness to other Reddit users and harm Reddit’s bottom line.
Either way, I stick to Lemmy and Kbin. Reddit doesn’t let me create accounts over Tor and I2P anymore, which means I’m not going to be able to participate anyway.
I’m looking at Buildbot.
I don’t get it. Where is the idea that “Fedora focuses on security” coming from? Fedora requires about as much work as any other distro to harden.
I personally use Alpine because I trust BusyBox to have a smaller attack surface than the standard Linux utilities.
Not if you’re running a FOSS ROM (at least you’d hope that’s the case), unless the firmware is compromised (which, if true, would affect everyone, so I’m sure somebody has their eye on such things).
Unfortunately, you’re done here. You’re going to need a new number if you value your privacy. I can never trust any big company; you can wave the GDPR in their faces all you want, but with a spineless EU and so much power concentrated in these companies, all you can do is trust that they actually delete your data. I’m sure you realise that’s a silly venture.
As long as you can trust Apple, sure
I wish they did. I can’t believe non-profits are suing each other.
Oh, I get it: auto-pull the repos to the master node’s local storage in case something bad happens, and when it does, use the automatically pulled (and hopefully current) code to fix what broke.
Good idea
Downvoted. You didn’t read the rules of the sub. Yes, morally speaking, you should be allowed to do it. But if you read their notice you’ll realise exactly why you aren’t allowed to. Anyway, welcome to Lemmy.
Well, it’s a tougher question to answer for an active-active config than for a master-slave one, because the former needs the minimum possible latency as requests are bounced all over the place. For the latter, I’ll probably set it up to pull every 5 minutes, so up to 5 minutes of lag (assuming someone doesn’t push right as the master node goes down).
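For the pull side, something like this run from cron every 5 minutes is what I have in mind. A rough sketch; the mirror directory and repo layout are placeholders for my own setup:

```python
#!/usr/bin/env python3
"""Refresh local bare mirrors from the master node.

Rough sketch for a cron job run every 5 minutes. Each repo inside
MIRROR_DIR is assumed to have been created once with
'git clone --mirror <master-url>'.
"""
import subprocess
from pathlib import Path

MIRROR_DIR = Path("/srv/git-mirrors")  # placeholder path

def refresh(repo: Path) -> None:
    # 'git remote update --prune' fetches every ref from the master and
    # drops refs that were deleted there, keeping the mirror exact.
    subprocess.run(["git", "remote", "update", "--prune"], cwd=repo, check=True)

if __name__ == "__main__":
    for repo in sorted(MIRROR_DIR.glob("*.git")):
        refresh(repo)
```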
I don’t think the likes of GitHub work on a master-slave configuration; they’re probably on the active-active side of things for performance. I’m surprised I couldn’t find anything on this from Codeberg though. You’d think they’d have solved this problem already and published something about it. Maybe I missed it.
I didn’t find anything in the official Git book either; which one do you recommend?
Thanks for the comment. There’s no special use case: it’ll just be me and a couple of friends using it anyway. But I would like to make it highly available. It doesn’t need to be five servers; two or three would be fine too, but I don’t think the number changes the concept.
Ideally I’d want all servers to be updated in real time, but it’s not strictly necessary. I want to run it this way because I want to experience what the big cloud providers run for their distributed Git services.
Thanks for the idea about update hooks, I’ll read more about it.
Well, the other choice was Reddit, so I decided to post here (Reddit flags my IP and doesn’t let me create an account easily). I might ask on a couple of other forums too.
Thanks
This is a fantastic comment. Thank you so much for taking the time.
I wasn’t planning to run a GUI for my Git servers unless really required, so I’ll probably use SSH. Thanks, yes, that makes the reverse-proxy part a lot easier.
I think your idea of having a designated “master” (server 1) and rolling updates out to the rest of the servers is brilliant. The replication procedure becomes a lot easier this way, and it removes the need for the reverse proxy too! I can just use Keepalived and set up priorities to make one of them the master and the rest backups for failover. It also won’t do round-robin, so no special handling for sticky sessions! This is great news for the networking side of this project.
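A minimal sketch of the Keepalived side of that (the interface, VIP, and password are placeholders for my network; the instance with the highest priority holds the virtual IP):

```
# /etc/keepalived/keepalived.conf (sketch; eth0, VIP, and password are placeholders)
vrrp_instance GIT_VIP {
    state MASTER              # the backups use 'state BACKUP'
    interface eth0
    virtual_router_id 51
    priority 150              # backups get lower values, e.g. 100 and 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme    # placeholder
    }
    virtual_ipaddress {
        192.168.1.50/24       # the VIP clients use to reach the git service
    }
}
```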
Hmm, you said to enable pushing repos to the remote git repo instead of having it pull? I was going to create a WireGuard tunnel and have it accessible from my network for some stuff, but I guess pushing makes sense.
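If I do go the push route, I’m picturing a post-receive hook (a close cousin of the update hook you mentioned) on the master that mirrors every accepted push out to the replicas. A minimal sketch, assuming the replica remotes (hypothetical names here) are configured once per repo:

```python
#!/usr/bin/env python3
# post-receive hook on the master repo (sketch).
# Assumes each replica was added as a remote once per repo, e.g.:
#   git remote add replica1 ssh://git@server2/srv/git/myproject.git
import subprocess
import sys

REPLICAS = ["replica1", "replica2"]  # placeholder remote names

for remote in REPLICAS:
    # '--mirror' pushes all refs (branches, tags, and deletions) so the
    # replica stays an exact copy of this repo after every accepted push.
    result = subprocess.run(["git", "push", "--mirror", remote])
    if result.returncode != 0:
        print(f"warning: replication to {remote} failed", file=sys.stderr)
```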
Thanks again for the wonderful comment.
Sorry, I don’t understand. What happens when my k8s cluster goes down, taking my git server with it?
You can’t protect your data if you use those apps. Pick one.