Ah NFS… It’s so good when it works! When it doesn’t though, figuring out why is like trying to navigate someone else’s house in pitch dark.
That makes zero sense. Where did you get that idea from?
For reference, here are their docs describing key management. https://tailscale.com/blog/tailscale-key-management
I found Tailscale to be easier to install and configure than ZeroTier, and also to have better performance.
I have never used Twingate.
Hey! Sorry you had these bad experiences.
My setup is on Debian testing
and is documented on this blog post: https://blog.c10l.cc/09122023-debian-gaming
I don’t have an Nvidia card but other than that, this should give you a head start, including virtual surround on headphones if that’s your thing!
I promise it’s not a lot of work and I tried to make it all easy to follow (feedback welcome though!).
If you decide to give it a go, let me know how it went!
Speculating is great for troubleshooting. Every time someone speculates a possible cause, it’s possible to devise a way to test it. It’s called hypothesising. Each tested hypothesis, regardless of the actual results, helps to further the understanding of the problem.
I’ve been using glauth + Authelia for a couple years with no issues and almost zero maintenance.
Yes, absolutely. Ideally there would be an automated check that runs periodically and alerts if things don’t work as expected.
Monitoring whether the backup task succeeded is important, but that’s the easy part of ensuring it works.
A backup is only working if it can be restored. If you don’t test that you can restore it in case of disaster, you don’t really know if it’s working.
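To make that concrete, here’s a minimal sketch of an automated restore test using plain `tar` (all paths are made-up placeholders, and `tar` stands in for whatever backup tool you actually use): take the backup, restore it to a scratch directory, and diff the two.

```shell
#!/bin/sh
# Hypothetical restore test: back up, restore elsewhere, compare.
set -eu

SRC=/tmp/restore-demo/data        # stand-in for the data being backed up
BACKUP=/tmp/restore-demo/backup.tar.gz
RESTORE=/tmp/restore-demo/restore

rm -rf /tmp/restore-demo
mkdir -p "$SRC" "$RESTORE"
echo "important data" > "$SRC/file.txt"

# Take the backup, then restore it to a completely separate location.
tar -czf "$BACKUP" -C "$(dirname "$SRC")" "$(basename "$SRC")"
tar -xzf "$BACKUP" -C "$RESTORE"

# The backup only counts as working if the restored copy matches the source.
diff -r "$SRC" "$RESTORE/data" && echo "restore OK"
```

Run something like this from a cron job or systemd timer and alert on a non-zero exit, and you’ve got the periodic check described above.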
Ah got it. I didn’t know there was a free tier!
How do you use ChatGPT anonymously? It requires a valid login linked to a payment method. It doesn’t get any less anonymous than that.
The main “instability” I’ve found with `testing` or `sid` is just that, because new packages are added quickly, you’ll sometimes get dependency clashes.
Pretty much every time, the package manager will take care of keeping things sane and won’t upgrade a package that would cause an incompatibility.
The main issue is if at some point you decide to install something that has conflicting dependencies with something you already have installed. Those are usually solvable with a little `aptitude`-fu, as long as there are versions available to sort things out neatly.
A better first step towards newer packages is probably `stable` with `backports`, though.
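For reference, here’s a config sketch of that route, assuming Debian 12 “bookworm” (adjust the codename to your release; the kernel package is just an example):

```shell
# Enable the backports repo, then opt in per package with -t.
# Backports are pinned low by default, so nothing upgrades to them
# unless you explicitly ask.
echo 'deb http://deb.debian.org/debian bookworm-backports main' |
  sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
sudo apt install -t bookworm-backports linux-image-amd64   # example package
```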
Not much use going to Ubuntu or Mint unless you have specific issues with Debian that don’t happen on those. Even then, it may be one `apt install` away from a fix.
If you want to try out BSD, power to you. I wouldn’t experiment on a backup computer though, unless by backup you just mean you want to have the spare hardware and will format it with Debian if you ever need to make it your main computer anyway.
Otherwise, just run Debian!
Up until a few months ago, Vulkan was very unstable on BG3, but it’s been fine for a while. I haven’t compared performance or smoothness though; I just default to Vulkan and it’s been fine.
I don’t mind the order of path, arguments and options, but what the hell is the deal with long arguments with a single dash? i.e. `-name` instead of `--name`
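For anyone who hasn’t run into this: `find`’s predicates really are single-dash “long” options, a quirk that predates the GNU `--long-option` convention. A quick illustration (the paths are made up):

```shell
# find's "long" options take a single dash: -name, -type, -mtime, ...
mkdir -p /tmp/find-demo
touch /tmp/find-demo/a.log /tmp/find-demo/b.txt
find /tmp/find-demo -type f -name '*.log'
```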
Stability is no longer an advantage when you’re cherry-picking from Sid lol.
This makes no sense. When 95% of the system is based on Debian `stable`, you get pretty much full stability of the base OS. All you need to pull in from the other releases is Mesa and related packages.
Perhaps the kernel as well, but I suspect they’re compiling their own with relevant parameters and features for the SD anyway, so not even that.
Why would they manually package them? Just grab the packages you need from `testing` or `sid`. This way you keep the solid Debian `stable` base OS and still bring in the latest and greatest of the things that matter for gaming.
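If anyone wants to try this themselves, the usual approach is apt pinning, so `stable` stays the default and `testing` is only used when explicitly asked. A sketch (the Mesa package is just an example):

```
# /etc/apt/sources.list.d/testing.list
deb http://deb.debian.org/debian testing main

# /etc/apt/preferences.d/99-testing
# Priority below the default 500 means nothing upgrades to testing
# unless you request it with -t.
Package: *
Pin: release a=testing
Pin-Priority: 100
```

Then something like `sudo apt install -t testing mesa-vulkan-drivers` pulls just that package (and its dependencies) from `testing`.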
You don’t and likely never will get a fully open stack for those GPUs. Even the latest Radeon cards have a lot of closed-source binary blobs for firmware.
Where the line is drawn between the driver and the firmware blobs makes a massive difference though. Look at the recent case of AMD trying (and failing) to license HDMI 2.1+ for their open source drivers.
I don’t think I’ve ever come across a DNS provider that blocks wildcards.
I’ve been using wildcard DNS, and wildcard certificates to accompany it, both at home and professionally in large-scale services (think hundreds to thousands of applications) for many years without an issue.
The problem described in that forum is real (and in fact is pretty much how the recent attack on Fritz!Box users works), but in practice I’ve never seen it be an issue in a service VM or container. A very easy way to avoid it completely is to just not declare your host domain the same as the one in DNS.
If they’re all resolving to the same IP and using a reverse proxy for name-based routing, there’s no need for multiple A records. A single wildcard should suffice.
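As an example of what that looks like with Caddy (the domain, IP and ports are placeholders, not a recommendation of Caddy specifically):

```
# DNS: one wildcard A record covers every service
#   *.home.example.com  ->  203.0.113.10

# Caddyfile on the proxy host: name-based routing to each backend
jellyfin.home.example.com {
    reverse_proxy localhost:8096
}
git.home.example.com {
    reverse_proxy localhost:3000
}
```

New services then only need a new proxy entry, not a new DNS record.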
The public keys can be stored anywhere; it doesn’t matter. That’s why they’re called public: they’re not private, they’re not sensitive, they’re not a secret.