• 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • S410@kbin.social to Privacy@lemmy.ml · Android Microphone Snooping · 8 months ago

    Android sends a ton of data, though, even when you’re not doing anything internet-related. It also reacts to “okay, Google”, which wouldn’t be possible if it weren’t listening.

    Now, it obviously doesn’t keep a continuous, lossless audio stream running from the phone to some Google server. But it could be sending text transcribed from audio locally, or snippets of audio whenever it detects speech. That’s relatively normal stuff to collect for analytics, actually.

    Now, data like that could “easily” get “misplaced”, of course, and end up in the ad-shoveling machine… Not necessarily at Google’s hands: it could be any app, really. Facebook, TikTok, some random free-to-play Candy Crush clone, etc. But if that data gets into the interwoven clusterfuck that is the ad industry, it will likely end up affecting the ads shown to the user.



  • Dual-booting is possible and easy: just shrink the Windows partition and install Linux next to it. Make sure not to format the whole drive by mistake, though. A lot of Linux installers want to format the disk by default, so you have to pick manual mode and make sure to shrink (not delete and re-create!) the Windows partition.
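
    For a rough idea of how much room you can carve out, here’s a minimal Python sketch (meant to be run from the existing Windows install; the drive letter and the 20 GiB of headroom left for Windows are assumptions, not recommendations):

    ```python
    # Check free space on the Windows partition before deciding how far
    # to shrink it. shutil.disk_usage() reports the filesystem's numbers.
    import shutil

    usage = shutil.disk_usage("C:\\")
    free_gib = usage.free / 2**30
    headroom_gib = 20  # assumed breathing room to leave for Windows itself
    max_shrink_gib = max(0.0, free_gib - headroom_gib)

    print(f"Free on C: {free_gib:.1f} GiB")
    print(f"Could shrink by roughly {max_shrink_gib:.1f} GiB at most")
    ```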

    As for its usefulness, however… Switching the OS is incredibly annoying. Every time you want to switch, you have to shut down the system completely and boot the other one. That means stopping everything you’re doing, saving all your progress, and then trying to get back up to speed a couple of minutes later. After a while, the constant rebooting gets really old.

    Furthermore, Linux is a completely different system that shares only some surface-level things with Windows. Switching to it basically means re-learning how to use a computer almost from scratch, which is also incredibly frustrating.

    The two things combined very quickly turn into a temptation to just keep using the more familiar system. (Been there, done that.)

    I think I’ll have to agree with people who propose Virtual Machines as a solution.

    Running Linux in a VM on Windows would let you play around with it, tinker a little and see what software is and isn’t available on it. From there you’ll be able to decide if you’re even willing to dedicate more time and effort to learning it.

    If you decide to continue, you can dual-boot Windows and Linux: not to switch between the two, but to be able to back out of the experiment.

    Instead, the roles of the OSes could be reversed: a second copy of Windows could be installed in a VM, which, in turn, would run on Linux.

    That way, you’d still have a way to run some more picky Windows software (that is, software that refuses to work in Wine) without actually booting into Windows.
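
    Here’s a minimal sketch of that reversed setup, assuming QEMU/KVM is installed and a Windows disk image already exists (the image path, RAM amount, and core count are all placeholder assumptions):

    ```python
    # Launch an existing Windows disk image in a QEMU/KVM virtual machine.
    import subprocess

    subprocess.run([
        "qemu-system-x86_64",
        "-enable-kvm",                                # hardware acceleration via KVM
        "-m", "8G",                                   # guest RAM
        "-smp", "4",                                  # virtual CPU cores
        "-drive", "file=windows.qcow2,format=qcow2",  # the Windows disk image
    ])
    ```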

    This approach would maximize exposure to Linux while still letting you back out of the experiment at any moment.


  • S410@kbin.social to Linux@lemmy.ml · I dislike wayland · 8 months ago

    Wayland has its fair share of problems that haven’t been solved yet, but most of those points are nonsense.

    If that person had lived a little over a hundred years ago and written a rant about cars vs. horses instead, it’d have gone something like this:

    Think twice before abandoning Horses. Cars break everything!
    Cars break if you stuff hay in the fuel tank!
    Cars are incompatible with horse shoes!
    You can’t shove your dick in a car’s mouth!

    The rant you’re linking makes about as much sense.


  • Simply disabling registration of new accounts using Tor/VPN should be sufficient and won’t affect existing users.

    A better approach, though, would be to require verification for accounts created that way. Require captchas to prevent automated posting, and automatically flag posts made from new accounts and/or via Tor or a VPN for moderator review.
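
    In code, that flagging rule is tiny. A minimal sketch (the names and the 7-day “new account” window are assumptions, not any real instance’s policy):

    ```python
    # Decide whether a post should be queued for moderator review
    # instead of being blocked outright.
    from datetime import datetime, timedelta, timezone

    NEW_ACCOUNT_WINDOW = timedelta(days=7)  # assumed threshold

    def needs_mod_review(account_created: datetime, via_tor_or_vpn: bool) -> bool:
        is_new = datetime.now(timezone.utc) - account_created < NEW_ACCOUNT_WINDOW
        return is_new or via_tor_or_vpn
    ```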

    There are ways to mitigate spam that aren’t as blunt and overreaching as blanket-banning entire IP ranges. That approach is the dumbest, least competent way of ensuring any kind of security and, honestly, comes awfully close to needless discrimination. Fuck everyone from countries with draconian internet censorship, I guess?


  • Meanwhile, Discord is missing half the features Matrix has. It’s almost as if they’re different projects with similar, but different, goals.

    One tries to be a flexible, interoperable, and secure communication protocol that’s free for anyone to implement and use…

    The other is a for-profit company that cherishes its centralized nature and far-reaching control, which lets it sell you random bells and whistles, collect your data unobstructed, and lure in investors and advertisers.


  • I use Arch + Gnome with VRR patches on my main PC.

    I find it actually easier to use than e.g. Fedora or Ubuntu, due to better documentation and way more packages available in the repos… with many, many more in the AUR!

    By installing all the stuff commonly found on other distros (which many consider bloat), you get basically the same thing as, well, any other distro. I have all the “bloat” like NetworkManager, Gnome, etc., which is known to work well together and tries to be smart and auto-configure a lot of things. Bloat it may be, but I am lazy~

    Personally, I think it’s better to stick to upstream distros whenever possible. For example, Nobara, which is recommended in this thread quite a lot, is maintained by a single person. In reality, it’s not much more than regular Fedora with a couple of tweaks and optimizations. The vast majority of those you could apply yourself on the upstream distro and avoid depending on that one person. It’s a single point of failure, after all.



  • OpenSUSE + KDE is a really solid choice, I’d say.

    The most important Linux advice I have is this: Linux isn’t Windows. Don’t expect things to work the same.
    Don’t try too hard to re-configure things to match the way they are on Windows. If there isn’t an easy way to get a certain behavior, there’s probably a reason for it.


  • Not once did I claim that LLMs are sapient, sentient or even have any kind of personality. I didn’t even use the overused term “AI”.

    LLMs, for example, are something like… a calculator. But for text.

    A calculator for pure numbers is a pretty simple device, all the logic of which can be designed by a human directly.

    When we want to create a solver for systems that aren’t as easily defined, we have to resort to other methods, e.g. “machine learning”.

    Basically, instead of designing all the logic entirely by hand, we create a system that can end up in a finite, yet still nearly infinite, number of states, each of which defines behavior different from the others. By slowly tuning the model using existing data and checking its performance, we (ideally) end up with a solver for something a human mind can’t even break into building blocks, due to the sheer complexity of the system in question (such as a natural language).

    And like a calculator that can derive that 2 + 3 is 5, despite the fact that the number 5 is never mentioned in the input and that particular formula was not part of the suite of tests used to verify that the calculator works correctly, a machine learning model can figure out that “apple slices + batter = apple pie”, assuming it has been tuned (a.k.a. trained) right.
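
    As a toy stand-in for that tuning process (everything here is deliberately simplified; real models have billions of weights, not one):

    ```python
    # Fit a single weight w so that w * x matches y on a few examples,
    # nudging w a little at a time, like the "slow tuning" described above.
    data = [(1, 2), (2, 4), (3, 6)]  # training pairs where y = 2 * x
    w, lr = 0.0, 0.05

    for _ in range(200):             # many small tuning passes
        for x, y in data:
            error = w * x - y
            w -= lr * error * x      # adjust w to shrink the squared error

    print(round(w * 5, 1))  # ~10.0 for x = 5, an input never seen in training
    ```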



  • Learning is, essentially, “algorithmic copy-paste”. The vast majority of things you know, you’ve learned from other people or other people’s works. What makes you more than a copy-pasting machine is the ability to extrapolate from that acquired knowledge to create new knowledge.

    And currently existing models can often do the same! Sometimes they make pretty stupid mistakes, but they often do, in fact, manage to end up with brand new information derived from old stuff.

    I’ve tortured various LLMs with short stories, questions, and riddles that I wrote specifically for the task, asking the models to explain or rewrite them. Surprisingly, they often get things mostly or entirely right, despite it being novel data they’ve never seen before. So there’s definitely some actual learning going on, or at least something so incredibly close to it that it’s nigh impossible to tell the difference.


  • It’s illegal if you copy-paste someone’s work verbatim. It’s not illegal to, for example, summarize someone’s work and write a short version of it.

    As long as overfitting doesn’t happen and the machine learning model actually learns general patterns instead of memorizing training data, it should be perfectly capable of generating output that isn’t copied verbatim from humans. Whom, exactly, is a model plagiarizing if it generates a summarized version of some work you give it, particularly if that work is novel and was created or published after the model was trained?
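
    For what “memorizing training data” looks like in miniature (all strings here are made up for illustration):

    ```python
    # An "overfit" model in caricature: it can only regurgitate its
    # training set verbatim and is useless on anything novel.
    training_data = {"the cat sat": "on the mat"}

    def memorizing_model(prompt: str) -> str:
        return training_data.get(prompt, "<no idea>")

    print(memorizing_model("the cat sat"))  # verbatim copy: plagiarism risk
    print(memorizing_model("a dog stood"))  # fails on novel input
    ```

    A model that actually generalizes does the opposite: it produces sensible output for inputs that appear nowhere in its training set, which is exactly why a summary of a never-before-seen work can’t be a verbatim copy of anyone’s.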