• Donkter@lemmy.world · 10 points · 2 days ago

    Some anti-AI people are so corny. There’s so much to hate about AI; it’s evil in tons of different ways. But this just comes off as ignorant.

    • AbsolutelyNotAVelociraptor@piefed.social · 41 points · 2 days ago

      Then you have yet to meet a ChatGPT relay drone. I’ve met some. A conversation with them is basically you asking them something (usually about their assumed field of expertise) and them relaying back whatever bullshit the chatbot vomits at them. It’s especially fun when you meet them in a work context where they’re supposedly an “expert” who has come to solve an issue for you.

      • jama211@lemmy.world · 2 points · 18 hours ago

        People that dumb would’ve previously been even more wrong than the chatbot they’re using, though.

      • howdy@lemmy.ml · 7 points · 2 days ago

        Yes, I came to the comments to say this! It’s very depressing. People who choose not to use their brains should not have access to LLMs.

    • Zink@programming.dev · 17 points · 2 days ago

      It cuts to ONE OF the roots of the problem. It’s just not the “evil gigacorp” problem.

      It’s the problem of the effect on the user, regardless of how evil or altruistic the AI and its creator are.

      I have lamented in a few comments recently about how many people seem to think the purpose of technology is to make it so they don’t have to put effort into their life. They don’t need to learn, and they don’t need to create. They just need the right technology and a good enough bank balance to pay for it.

      I’m a tech person, but for the last couple of years I have made my hobbies and home life as much as possible about nature, life sciences, and physically interacting with the outdoors: building shit, taking care of my animals, etc. It has been very, very good.

    • porous_grey_matter@lemmy.ml · 16 points · 2 days ago

      Nah, it’s pretty funny; this accurately describes a bunch of people (as accurately as a meme can or should, anyway).

      • EldritchFemininity@lemmy.blahaj.zone · 10 points · 1 day ago

        Many studies have been released recently about the rapid loss of cognitive abilities and skills due to the use of AI. It’s like how driving everywhere causes your muscles to atrophy, except it’s your critical thinking and reasoning skills, and it starts to happen within days or weeks of relying upon AI to do the work for you. Programmers who use AI and then stop have been found to write worse code after they stopped using AI than before they started, even for basic tasks. Reliance becomes dependence as you can no longer do the work yourself.

        This meme is quite literally true.

        • jama211@lemmy.world · 3 points · 18 hours ago

          And those studies are heavily dependent on context, the type of technology involved, etc., but because they fit your preconceived biases against AI, you parrot them blindly. Kind of ironic, don’t you think?

          • Mystic Mushroom [Ze/Zir]@lemmy.dbzer0.com · 1 point · 3 hours ago

            I’ve found that around 80% of FuckAI discourse comes down to preconceived biases and personal dislike, while clinging to the 20% of real but very hyper-specific arguments to present those opinions as factual or objective.

            It’s why the top 3 anti-AI arguments are:

            • Intellectual property
            • Climate change (Carbon footprint, if the subject is LocalLLMs)
            • Brain-rot arguments, which might cite very new, possibly flawed studies, or simply abuse clinical terms as slurs (“schizo”, “psycho”, “delusional”, etc.)
            • its_kim_love@lemmy.blahaj.zone · 1 point · 3 hours ago

              I wouldn’t call that list hyperspecific at all. You forgot:

              • The surveillance state being built on the back of these AI agents
              • Military applications of AI
              • The acceleration of the breakdown of internet social spaces
              • The inability to do a simple Google search without sifting through thousands upon thousands of garbage AI articles
              • Mystic Mushroom [Ze/Zir]@lemmy.dbzer0.com · 1 point · 3 hours ago

                It isn’t, really. Those are their broad, common arguments; the things that unite them.

                The hyper-specific arguments are the ones about environmental impacts. I say they’re hyper-specific because they only relate to corporate AI models, which only exist because of the venture-capital bubble, yet the argument gets applied to all AI models, even the small FOSS ones.

                The mental-health ones are hyper-specific because they only apply to particular unhealthy use cases, yet they’re being applied broadly to everyone (e.g., people who call me a “schizo” for sharing art made by an AI).

                The factual arguments are hyper-specific.

                • its_kim_love@lemmy.blahaj.zone · 1 point · 3 hours ago

                  So if you discount 99% of AI, then we wouldn’t have anything to be upset about? That doesn’t seem very coherent.

                  Edit: This isn’t the first time I’ve heard your arguments. It reminds me of the metaverse arguments, where every failed metaverse is not really the metaverse. The problem is that you don’t get to dictate what counts as AI and what doesn’t. The other options you talk about aren’t the problem, so they’re rightly being ignored in this discussion. Just because there are good guys with guns doesn’t mean the bad guys with guns aren’t a problem, especially when the bad guys have more and bigger guns.

                  • Mystic Mushroom [Ze/Zir]@lemmy.dbzer0.com · 1 point · 3 hours ago

                    This isn’t the first time I’ve heard your arguments. It reminds me of the metaverse arguments, where every failed metaverse is not really the metaverse. The problem is that you don’t get to dictate what counts as AI and what doesn’t.

                    I think these are different arguments, because I’m not saying corporate AI isn’t AI; it’s just not the whole concept. Compare that to the Metaverse, which is in and of itself a corporate project designed solely for a capitalist purpose. I’ve never seen a metaverse built for a hobbyist purpose, but I have seen and used AIs built for hobbyist or community purposes. Just as corporate social media isn’t all social media, corporate AI isn’t all AI; many people, including @db0@lemmy.dbzer0.com, are building out non-corporate solutions. That’s why the arguments can’t be applied equally to everything.

                  • Mystic Mushroom [Ze/Zir]@lemmy.dbzer0.com · 1 point · 3 hours ago

                    No, I think it’s fair to be mad at AI companies and the harm they cause. That anger has to be directed at the actual problem, though, and grounded in fact, not at people who use AI, who would probably use the more efficient open models if they knew about them.

          • its_kim_love@lemmy.blahaj.zone · 1 point · 8 hours ago

            You do realize that studies take time; the larger the scope and context, the longer they take to complete. Of course the first studies to come out are context-dependent. What does that have to do with the price of butter?

            • Mystic Mushroom [Ze/Zir]@lemmy.dbzer0.com · 1 point · 3 hours ago

              Unfortunately, most people in the FuckAI crowd only care about facts when those facts support their feelings. A decline in critical thinking and cognition has been a long time coming, attributable over the past few decades to computer use in general and to declining competency in the US educational system, but that isn’t as supportive of their arguments as something that says “AI exposure is making people stupid and crazy”.