Many studies have been released recently about the rapid loss of cognitive abilities and skills due to the use of AI. It’s like how driving everywhere causes your muscles to atrophy, except it’s your critical thinking and reasoning skills, and it starts to happen within days or weeks of relying upon AI to do the work for you. Programmers who use AI and then stop have been found to write worse code after they stopped using AI than before they started, even for basic tasks. Reliance becomes dependence as you can no longer do the work yourself.
This meme is quite literally true.
And those studies are highly dependent on context, on the specific type of tech, etc., but because they fit your preconceived biases (you dislike AI generally), you parrot them blindly. Kind of ironic, don’t you think?
I’ve found that around 80% of FuckAI discourse comes down to preconceived biases and personal dislike, while clinging to the 20% of real but very hyper-specific arguments to try to pass those opinions off as factual or objective.
It’s why the top 3 anti-AI arguments are:
I wouldn’t call that list hyperspecific at all. You forgot:
Brain rot arguments, which might use very new, possibly flawed studies, or just abuse clinical terms as if they’re slurs (“schizo”, “psycho”, “delusional”, etc.)
It isn’t really. It’s their broad and common arguments. The things that unite them.
The hyper-specific arguments are the ones about environmental impacts. I call them hyper-specific because they only relate to corporate AI models, which only exist because of the venture-capital bubble, yet they get applied to all AI models, even the small FOSS ones.
The mental-health-related ones are hyper-specific because they only apply to particular unhealthy use cases, but they get applied broadly to everyone (e.g. people who call me a “schizo” for sharing art made by an AI).
The factual arguments are hyper-specific.
So if you discount 99% of AI then we wouldn’t have anything to be upset about? That doesn’t seem very coherent.
Edit: This isn’t the first time I’m hearing your arguments. It reminds me of the metaverse arguments where every failed metaverse is not really the metaverse. The problem with that is you don’t get to dictate what counts as AI, and what doesn’t. Those other options you talk about aren’t the problem, and so they’re being rightly ignored in this discussion. Just because there are good guys with guns doesn’t mean the bad guys with guns are not a problem, especially when the bad guys have more/bigger guns.
I think these are different arguments, because I’m not saying corporate AI isn’t AI itself, just that it isn’t the whole concept. Compare that to the Metaverse, which is in and of itself a corporate project designed solely for a capitalist purpose. I’ve never seen a metaverse built for a hobbyist purpose, but I have seen and used AIs built for hobbyist or community purposes. Just like corporate social media isn’t all social media, corporate AI isn’t all AI; many people, including @db0@lemmy.dbzer0.com, are building out non-corporate solutions. Which is why the arguments can’t be applied equally to everything.
The Metaverse wasn’t the only metaverse. I’m talking about it, Decentraland, and dozens of other failed projects. Every time one of them failed, the other grifters would insist it wasn’t the real metaverse. Just like you’re insisting that corporate AI shouldn’t be the focus of the discussion, despite it being literally 99% of the compute, capital, and use cases.
I didn’t say it shouldn’t be a focus of discussion. I’m saying separate it from open source projects and their users, because open source projects and their users don’t help Sam Altman and friends in any way. These projects exist largely in spite of corporate AI.
No, I think it’s fair to be mad at AI companies and the harm they cause. That anger has to be directed at the actual problem, though; it needs to be based on the facts, not aimed at people who use AI, who would probably use the more efficient open models if they knew about them.
Why? Why shouldn’t I be angry at the people who are also complicit? They’re also part of the problem. People who use AI are contributing to most of the issues that you and I listed. What part of their actions makes them innocent in this?
Are they, though? Are people who use AI Horde or their own GPUs “complicit” in “most of the issues”? If most people used their own GPUs, how many of these issues would really remain once you take away the capitalist aspect (including AI being shoved into every website by venture capitalists to prop up their Ponzi scheme so it doesn’t crash and burn)?
You do realize that studies take time; the larger the scope and context, the longer they take to complete. Of course the first studies to come out are context-dependent. What does that have to do with the price of butter?
Even before the advent of LLMs, more than half of Americans couldn’t read beyond a 6th-grade level.
Correlation != causation, but unfortunately the aforementioned problem affects a lot of people.
Unfortunately, most people in the FuckAI crowd only care about facts when those facts support their feelings. That critical thinking and cognitive decline have been worsening for decades, attributed to computer use in general and to declining competency in the US educational system, isn’t as supportive of their arguments as something that says “AI exposure is making people stupid and crazy”.