Wait, you can train the Futo keyboard? I tried it a while ago, noticed the poor accuracy, and decided to shelve it for a while.
Gotcha, that sounds like searx is a good option then. At first I thought you were giving a reason why searx isn’t a suitable alternative, since the GP comment is downvoted a bunch.
Can’t you just not enable the Yandex backend if you’re selfhosting searx?
At least with radicle, all the forks will still exist even if the authoritative copy is taken down. And even then, since radicle works like BitTorrent, anybody who pinned the main repo would still be seeding it, so it would be very hard to scrub it completely. The main challenge with radicle is getting an active contributor with some reputation to maintain their copy there; otherwise there’s no momentum and nobody will pin the countless mirrors published by randos.
Either that or charging a microtransaction for loading the page. But yeah, the goal is to make it cost an amount that’s insignificant to a regular user but adds up to something huge at the scale of a spam farm. It’s the same rationale behind hashing passwords with multiple rounds: it adds a tiny lag when you log in correctly, but an insane amount of work for an offline attacker checking every phrase in a cracking dictionary, because the cost is paid per guess. (In the online scenario you just block them after a few attempts.)
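To make the “adds up” part concrete, here’s a rough Python sketch of the password-hashing version of this idea (the iteration count is the knob; the exact timings obviously depend on the machine):

```python
import hashlib
import os
import time

password = b"correct horse battery staple"
salt = os.urandom(16)
ITERATIONS = 600_000  # roughly the OWASP-recommended count for PBKDF2-HMAC-SHA256

# One legitimate login: a single slow key derivation.
start = time.perf_counter()
hashlib.pbkdf2_hmac("sha256", password, salt, ITERATIONS)
elapsed = time.perf_counter() - start
print(f"one login: {elapsed:.2f}s")  # a few tenths of a second, barely noticeable

# An offline attacker pays that same cost for every single guess:
print(f"1M dictionary guesses: ~{elapsed * 1_000_000 / 86_400:.0f} days on this machine")
```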
Hah, I wish we could ignore them. It seems to vary from ISP to ISP in the US, but our small-town ISP turns off your connection and puts you behind a captive portal, forcing you to click through and acknowledge what you did wrong before your connection is turned back on.
Our ISP sends 3 strike letters :(
I’ve done a backup swap with friends a couple of times. Security wasn’t much of a worry since we connected to each other’s boxes over SSH, WireGuard, or similar and used tools that support encryption. The biggest challenge was that everyone in my selfhosting friend group prefers different protocols, so we had to figure out what each of us wanted to use to connect and access filesystems and set that up. The second challenge was ensuring uptime, and that the remote access we set up for each other stayed up. That’s what killed the project: we all eventually stopped maintaining the remote access and nobody seemed to care. If I were to do it again I would make sure all participants have alerts monitoring their shared endpoint.
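If it helps anyone, the alert I have in mind can be as simple as something like this (a rough sketch; the host, port, and email addresses are placeholders, and in practice you’d probably just point Uptime Kuma or healthchecks.io at the endpoint instead):

```python
import smtplib
import socket
from email.message import EmailMessage

# Hypothetical endpoint a backup partner exposes for me (placeholder values).
HOST, PORT = "backups.example.net", 22  # SSH; WireGuard is UDP so it needs a different check

def endpoint_up(host: str, port: int, timeout: float = 10.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not endpoint_up(HOST, PORT):
    msg = EmailMessage()
    msg["Subject"] = f"backup swap endpoint {HOST}:{PORT} is down"
    msg["From"] = "alerts@example.net"
    msg["To"] = "me@example.net"
    msg.set_content("Remote access for the backup swap needs fixing.")
    with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA is running
        smtp.send_message(msg)
```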
I tried the .ps one and it worked for me
Federation sounded interesting, so I looked at the website, and it sounds like on-prem instances can’t yet federate with people using “cloud” (which I guess is the hosted version); they can only federate with other on-prem instances.
It looks promising though and would be cool to host my own instance and still chat with friends.
I can’t wait until the Immich photo editor gets enabled, and hopefully it eventually duplicates all the Google Photos editor features, because that’s the only reason I keep the Google Photos app around.
You can achieve a similar thing using VLANs. Usually they’re isolated by default, but you can add specific rules that allow traffic between VLANs when it meets certain criteria (specific ports, specific types of traffic, traffic to or from specific hosts, or any combination of those). So you can imagine client isolation as having each client on their own VLAN, except without needing a different subnet for each client.
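To make the “isolated by default, explicit exceptions” idea concrete, here’s a toy Python model of inter-VLAN allow rules (not real firewall syntax; the VLAN IDs, ports, and hosts are made up for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Rule:
    src_vlan: int
    dst_vlan: int
    dst_port: Optional[int] = None  # None = any port
    dst_host: Optional[str] = None  # None = any host

# Inter-VLAN traffic is denied by default; each rule is an explicit exception.
RULES = [
    Rule(src_vlan=10, dst_vlan=20, dst_port=445, dst_host="192.168.20.5"),  # SMB, to the NAS only
    Rule(src_vlan=10, dst_vlan=30, dst_port=80),                            # HTTP to anything on VLAN 30
]

def allowed(src_vlan: int, dst_vlan: int, dst_port: int, dst_host: str) -> bool:
    if src_vlan == dst_vlan:
        return True  # same VLAN: inter-VLAN policy doesn't apply
    return any(
        r.src_vlan == src_vlan
        and r.dst_vlan == dst_vlan
        and r.dst_port in (None, dst_port)
        and r.dst_host in (None, dst_host)
        for r in RULES
    )

print(allowed(10, 20, 445, "192.168.20.5"))  # True: matches the SMB rule
print(allowed(20, 10, 22, "192.168.10.2"))   # False: no matching rule, default deny
```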
To add to the other reply: client isolation is about controlling whether an AP, switch, or router willingly forwards traffic between clients. Because of that, it doesn’t help if someone listens to packets over the air before they’ve been received by the AP. For that kind of security you need a WiFi-specific measure, and I think “Enhanced Open” is what you’d be interested in. It lets you have an open, passwordless WiFi but generates temporary encryption keys for each connected client; from there it behaves as if it were using WPA, so you don’t need to enter a password but your traffic is encrypted and protected from anyone else listening in on the WiFi.
If you combine both, you get a network where each device is isolated both over the air and from a routing perspective, so that each device only sees an Internet connection and no other devices.
The same way filebot and every other tool does: the file needs to have some label, either an absolute episode number or a season + episode number. I’m not aware of any tool that can look at the contents of the video and figure out which episode it is visually, without any information from the filename. But I’d be happy to be proven wrong, because I would be impressed.
Sonarr/radarr does analyze the content somewhat but that’s just for gathering resolution, codec, HDR, audio languages, and subtitle information, which can all be added to the filename format for inclusion during renaming.
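For the curious, the filename matching these tools do is roughly along these lines (a simplified Python sketch, not Sonarr’s or filebot’s actual logic; real tools handle far more naming schemes than these two patterns):

```python
import re

# Season+episode labels like "S02E05", or a bare absolute episode number.
SEASON_EPISODE = re.compile(r"[Ss](\d{1,2})[Ee](\d{1,3})")
ABSOLUTE = re.compile(r"(?:^|[ ._-])(\d{1,4})(?:[ ._-]|$)")

def identify(filename: str):
    # Check the more specific season/episode label first.
    if m := SEASON_EPISODE.search(filename):
        return {"season": int(m.group(1)), "episode": int(m.group(2))}
    if m := ABSOLUTE.search(filename):
        return {"absolute_episode": int(m.group(1))}
    return None  # no label in the name -> nothing to match against metadata

print(identify("Show.Name.S02E05.1080p.mkv"))   # {'season': 2, 'episode': 5}
print(identify("[Group] Show Name - 112.mkv"))  # {'absolute_episode': 112}
```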
I second using sonarr/radarr: once imported, it detects episodes and lets you one-click rename to a specific format and folder organization.
If you don’t want any of the other features of sonarr/radarr (like having a way to filter and manage your collection to see what’s in what quality or from what release group, searching multiple indexers with a single search, being able to send a specific search result to a downloader and have it automatically imported and organized when complete, or auto-downloading based on requests using scoring rules that you set), there’s also filebot, which a lot of people seem to like and which seems to be just for matching with online metadata and renaming.
But I haven’t tried filebot, since I like the extra features and capabilities of sonarr/radarr. They make it easy to manage several library folders: an archive for anything that’s been reviewed, is complete, and is in a quality/codec I’m satisfied with, plus an active folder for currently airing shows, which is also where I keep auto-downloaded stuff I haven’t reviewed yet.
I use a NUC10i7FNKN, and since transcoding is almost entirely done by the dedicated Quick Sync hardware in the CPU, you don’t end up actually using the CPU much. So I’m sure it would work on an older generation or the i5 version. I don’t know much about the N100, but it looks like it would be very capable: supposedly it boosts to 3+ GHz and it’s on a 10nm node compared to my NUC’s 14nm. The GPU has the same number of execution units, though, so I’m not sure the Quick Sync transcoding performance is that different. I saw someone mention 3 simultaneous 4K transcodes, and I think I got about that much on mine. Generally, for Quick Sync performance you just compare the Intel HD or UHD Graphics model (like HD 630, UHD 730, etc.) and the number of execution units, and that should correlate with performance. Also check the Wikipedia page for Quick Sync for codec compatibility (under the “Hardware decoding and encoding” section), but anything recent will handle most stuff you’d need: https://en.m.wikipedia.org/wiki/Intel_Quick_Sync_Video
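For reference, a Quick Sync transcode boils down to an ffmpeg invocation along these lines (a rough sketch of the kind of command Plex/Jellyfin generate internally, not their exact one; it assumes an ffmpeg build with QSV support and access to the iGPU, and the filenames are placeholders):

```python
import subprocess

# Decode H.264 and re-encode to HEVC, both on the Quick Sync hardware,
# leaving the CPU mostly idle. Audio is passed through untouched.
subprocess.run(
    [
        "ffmpeg",
        "-hwaccel", "qsv",           # hardware-accelerated decode
        "-c:v", "h264_qsv",          # QSV H.264 decoder
        "-i", "input_4k_h264.mkv",   # placeholder input file
        "-c:v", "hevc_qsv",          # QSV HEVC encoder
        "-preset", "fast",
        "-c:a", "copy",              # don't touch the audio
        "output_hevc.mkv",           # placeholder output file
    ],
    check=True,
)
```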
I actually run my arrstack on a Synology; it has official support for Docker and docker-compose. Granted, I do have a higher-powered model (the DS1621xs+), but most of the arrstack is fairly low-power friendly.
You can also get away with running Plex on a NAS, but I would only do it if 1. your NAS has a Quick Sync-capable CPU and you get that enabled properly, or 2. you go with a direct-streaming-only / no-transcoding setup, which means checking the codec support for all client devices and either only downloading exactly the supported codecs or pre-transcoding everything.
What I do is run Plex/JF on a separate NUC and point it at the NAS using a network mount. Just don’t use a network mount for the Plex app database (the same may apply to JF too); only mount the media files themselves. Running Plex with its database on a network mount is a big no-no for various reasons.
I use a Synology NAS, which has official support for Docker / docker-compose, to run my arrstack, and it has n+2 btrfs redundancy. Then for running Plex and Jellyfin I use an Intel NUC10i7 with Quick Sync, with the NAS media folder mounted over the network, using a direct gigabit link between the two so that the traffic stays off my switch.
I could have gotten away with doing it all on the NAS if I’d forgone ECC in favor of Quick Sync, but my first priority for my NAS is keeping personal artifacts safe, so I went with ECC.
The creation tool also lets you just save the ISO. But for some reason the Media Creation Tool gives you a different ISO than the one you get by spoofing a non-Windows user agent on the Windows download website (which gets you a direct link to the ISO instead of making you install the creation tool). And only one of them worked with DISM to repair my system so that Windows Update would run successfully.
Not sure about Facebook, since authenticating for private videos is a hurdle, but for my partner, who uses a Mac, I downloaded Open Video Downloader, which is just a FOSS GUI for ytdl. It also keeps ytdl up to date, which is a requirement for me since I don’t want to be called when it stops working. I think on Windows you have to manually install the MSVC 2010 redistributable, but besides that it seems to just work out of the box.
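Under the hood the GUI is just driving ytdl; the same download can be scripted directly with yt-dlp’s Python API if you prefer (a minimal sketch; the URL is a placeholder, and private videos would additionally need cookies/auth):

```python
from yt_dlp import YoutubeDL  # pip install yt-dlp

# Minimal sketch of what GUIs like Open Video Downloader wrap.
opts = {
    "format": "bestvideo+bestaudio/best",  # best quality, merged if needed
    "outtmpl": "%(title)s.%(ext)s",        # output filename template
}
with YoutubeDL(opts) as ydl:
    ydl.download(["https://example.com/some/video"])  # placeholder URL
```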