Some anti-AI people are so corny. Like, there's so much to hate about AI; it's evil in tons of different ways. But this just comes off as ignorant.
If anyone wants one of the recent studies disproving this poster: https://ai-project-website.github.io/AI-assistance-reduces-persistence/
It's literally not, though; it's pretty accurate.
Also, I'd rather be corny and sincere than idiotic and fake.
lol right?
You still have to meet a chatgpt relay drone then. I’ve met some. The conversation with them is basically you asking them something (usually about their assumed field of expertise) and them relaying to you whatever bullshit the chatbot vomits to them. Especially fun when you meet them in a working context where they are supposedly an “expert” that comes to solve an issue for you.
People that dumb would previously have been even more wrong than the chatbot they're using, though.
Yes, I came to the comments to say this! It's very depressing. People who choose not to use their brain should not have access to LLMs.
It does actually rot your brain though. Like that is literally true information.
thanks for proving their point
It cuts to ONE OF the roots of the problem. It’s just not the “evil gigacorp” problem.
It’s the problem of the effect on the user, regardless of how evil or altruistic the AI and its creator are.
I have lamented in a few comments recently about how many people seem to think the purpose of technology is to make it so they don’t have to put effort into their life. They don’t need to learn, and they don’t need to create. They just need the right technology and a good enough bank balance to pay for it.
I’m a tech person but for the last couple years I have made my hobbies and home life as much about nature and life sciences and physically interacting with the outdoors, building shit, taking care of my animals, etc. It has been very very good.
well now you’re just being rude
Ignore them, they’re a troll. They don’t have anything real to add to the discussion, and they never did.
Okay
Nah it’s pretty funny, this accurately describes a bunch of people (as accurately as a meme can or should, anyway)
Agreed. Lots of great reasons to hate on AI, but this isn’t one of them.
Many studies have been released recently about the rapid loss of cognitive abilities and skills due to the use of AI. It’s like how driving everywhere causes your muscles to atrophy, except it’s your critical thinking and reasoning skills, and it starts to happen within days or weeks of relying upon AI to do the work for you. Programmers who use AI and then stop have been found to write worse code after they stopped using AI than before they started, even for basic tasks. Reliance becomes dependence as you can no longer do the work yourself.
This meme is quite literally true.
And those studies are very context-dependent and tech-dependent, but because they fit your preconceived biases (you dislike AI generally), you parrot them blindly. Kind of ironic, don't you think?
I've found that around 80% of FuckAI discourse comes down to preconceived biases and personal dislike, with the remaining 20% of real but hyper-specific arguments being clung to in order to present those opinions as factual or objective.
It’s why the top 3 anti-AI arguments are:
I wouldn't call that list hyperspecific at all. You forgot:
Brain rot arguments, which might use very new, possibly flawed studies, or just abuse clinical terms as if they're slurs ("schizo", "psycho", "delusional", etc.)
It isn’t really. It’s their broad and common arguments. The things that unite them.
The hyper-specific arguments are the ones about environmental impacts. I say they're hyper-specific because they only relate to corporate AI models, which only exist because of the venture-capital bubble, yet they get applied to all AI models, even the small FOSS ones.
The mental-health-related ones are hyper-specific because they only apply to particular unhealthy use cases, but they get applied broadly to everyone (e.g. people who call me a "schizo" for sharing art made by an AI).
The factual arguments are hyper specific.
So if you discount 99% of AI then we wouldn’t have anything to be upset about? That doesn’t seem very coherent.
Edit: This isn’t the first time I’m hearing your arguments. It reminds me of the metaverse arguments where every failed metaverse is not really the metaverse. The problem with that is you don’t get to dictate what counts as AI, and what doesn’t. Those other options you talk about aren’t the problem, and so they’re being rightly ignored in this discussion. Just because there are good guys with guns doesn’t mean the bad guys with guns are not a problem, especially when the bad guys have more/bigger guns.
I think these are different arguments, because I'm not saying corporate AI isn't AI; it's just not the whole concept. Compare that to the Metaverse, which is in and of itself a corporate project designed solely for a capitalist purpose. I've never seen a Metaverse built for a hobbyist purpose, but I have seen and used AIs built for hobbyist or community purposes. Just like corporate social media isn't all social media, corporate AI isn't all AI; many people, including @db0@lemmy.dbzer0.com, are building out non-corporate solutions. Which is why the arguments can't be applied equally to everything.
No, I think it's fair to be mad at AI companies and the harm they cause. That anger has to be directed at the actual problem, though, and it needs to be based on facts, not aimed at people who use AI, who would probably use the more efficient open models if they knew about them.
You do realize that studies take time; the larger the scope and context, the longer they take to complete. Of course the first studies to come out are context-dependent. What does that have to do with the price of butter?
Even before the advent of LLMs, more than half of Americans couldn’t read beyond a 6th grade level.
Correlation != causation, but unfortunately the aforementioned problem affects a lot of people.
Unfortunately, most people in the FuckAI crowd only care about facts when those facts support their feelings. The decline in critical thinking and cognition has been a long time coming over the last few decades, attributed to computer use in general as well as to declining competency in the US educational system, but that isn't as supportive of their arguments as something that says "AI exposure is making people stupid and crazy."
I was waiting for when it becomes a Weird Al Yankovic joke. It didn't.