Ah, see, I’m Canadian so that only works like two months out of the year when we’re able to emerge from our igloos…
Just this guy, you know?
Or burned out because they get pulled into every project that’s gone off the rails.
Times like this I’m glad I have not one but two friends who are backyard beekeepers. They are more than happy to give away the enormous amount of honey they collect each year…
Gentle heating in a hot water bath or the microwave will liquify that honey again.
Take it to an electronics recycling center. Seriously.
If you already have a homelab, you plan to replace it, you don’t want to repair it, and you don’t have an obvious use case for another machine (it’s just another computer; you either have the need for another computer or you don’t), then holding onto it is just hoarding.
Yes, I’m aware of the security tradeoffs with testing, which is why I’ve started refraining from mentioning it as an option; pedants like to pop out of the woodwork and bring up this exact issue every damn time.
Also, testing absolutely gets “security support”; the issue is that security fixes don’t land in testing immediately, so there can be some delay. As per the FAQ:
Security for testing benefits from the security efforts of the entire project for unstable. However, there is a minimum two-day migration delay, and sometimes security fixes can be held up by transitions. The Security Team helps to move along those transitions holding back important security uploads, but this is not always possible and delays may occur.
That’s seriously overstating things. I’ve been running testing or sid for years and years, and I can only remember a handful of times where anything meaningfully broke. And typically it’s dependency breakages, not actual software breakages.
For the target users of Debian stable? No.
Debian stable is for servers or other applications where security and predictability are paramount. For that application I absolutely do not want a lot of package churn. Quite the opposite.
Meanwhile Sid provides a rolling release experience that in practice is every bit as stable as any other rolling release distro.
And if I have something running stable and I really need to pull in the latest of something, I can always mix and match.
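For anyone curious what that mixing looks like in practice, here’s a rough sketch using backports and apt pinning. The suite names and “some-package” are just placeholders for whatever you actually need; adjust to your release.

```
# Sketch only: suite names and "some-package" are placeholders.
# Option 1: grab a newer version from backports.
echo "deb http://deb.debian.org/debian bookworm-backports main" | \
  sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
sudo apt install -t bookworm-backports some-package

# Option 2: add testing at a low pin priority so nothing upgrades to it
# automatically, then pull individual packages on request.
echo "deb http://deb.debian.org/debian testing main" | \
  sudo tee /etc/apt/sources.list.d/testing.list
printf 'Package: *\nPin: release a=testing\nPin-Priority: 100\n' | \
  sudo tee /etc/apt/preferences.d/testing
sudo apt update
sudo apt install -t testing some-package
```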
What makes Debian unique is that it offers a spectrum of options for different use cases and then lets me choose.
If you don’t want that, fine, don’t use Debian. But for a lot of us, we choose Debian because of how it’s managed, not in spite of it.
So don’t run stable on a desktop? If you want a bleeding edge rolling release, that’s what sid is for.
Oh, no worries, just figured I’d add that extra little bit of detail as it’s a useful hook into a lot of other git concepts.
For folks unaware, the technical git term here is a ‘ref’. Everything that points to a commit is a ref, whether it’s HEAD, the tip of a branch, or a tag. If a git manpage mentions a ‘ref’, that’s what it’s talking about.
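A quick way to see that for yourself (the branch and tag names below are just examples):

```
git show-ref              # every branch and tag, plus the commit each one points to
cat .git/HEAD             # HEAD is usually a symbolic ref, e.g. "ref: refs/heads/main"
git symbolic-ref HEAD     # same thing, resolved by git itself
git rev-parse v1.0        # resolve any ref name to the object hash it points at
```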
If you have an Android phone I can’t recommend Genius Scan enough. Fast, accurate, lots of features. I use it with syncthing by exporting the files to a folder that’s configured to sync the paperless input folder.
Just want to say thank you! Paperless is one of the first things I recommend to anyone considering self hosting their infra. Amazing piece of work!
“Huh weird, I tried to use it and it’s not working. Welp, guess I better fix it…”
That’s a goal, but it’s hardly the only goal.
My goal is to get a synthesis of search results across multiple engines while eliminating tracking URLs and other garbage. In short it’s a better UX for me first and foremost, and self-hosting allows me to customize that experience and also own uptime/availability. Privacy (through elimination of cookies and browser fingerprinting) is just a convenient side effect.
That said, on the topic of privacy, it’s absolutely false to say that by self-hosting you get the same effect as using the engines directly. Intermediating my access to those search engines means things like cookies and fingerprinting cannot be used to link my search history to my browsing activity.
Furthermore, in my case I host SearX on a VPS that’s independent of my broadband connection, which means even my IP address can’t be used to correlate my activity.
Oh god, I’m old…
Your first two paragraphs make the picture worse, not better.
As for your last, I’m not writing an economics thesis. It was a quick analysis to illustrate a problem no sane person disputes: streaming services have substantially driven down revenue for artists, to the point that for many it’s genuinely impossible to create their art while making a living wage.
Is it better than piracy? Sure. At least the artists are getting something (well, unless you drop below Spotify’s streaming cutoff, in which case you can get fucked). But it’s still a shitty deal and gives consumers someone else to blame as artists slowly bleed out.
Not funny once you realize all doctors are actually lizard people in human skin suits performing experiments on us. QED sucker!