I wouldn’t call that list hyperspecific at all. You forgot:
It isn’t really. It’s their broad and common arguments, the things that unite them.
The hyperspecific arguments are the ones about environmental impacts. I call them hyperspecific because they only apply to corporate AI models, which only exist because of the venture-capital bubble, yet they’re being applied to all AI models, even the small FOSS ones.
The mental-health-related ones are hyperspecific because they only apply to particular unhealthy use cases, but they’re being applied broadly to everyone (e.g. people who call me a “schizo” for sharing art made by an AI).
The factual arguments are hyperspecific.
So if you discount 99% of AI, we wouldn’t have anything to be upset about? That doesn’t seem very coherent.
Edit: This isn’t the first time I’m hearing your arguments. It reminds me of the metaverse arguments, where every failed metaverse is somehow not really the metaverse. The problem with that is that you don’t get to dictate what counts as AI and what doesn’t. The other options you talk about aren’t the problem, so they’re rightly being ignored in this discussion. Just because there are good guys with guns doesn’t mean the bad guys with guns aren’t a problem, especially when the bad guys have more and bigger guns.
I think these are different arguments, because I’m not saying corporate AI isn’t AI; it’s just not the whole concept. Compare that to the Metaverse, which is in and of itself a corporate project designed solely for a capitalist purpose. I’ve never seen a metaverse built for a hobbyist purpose, but I have seen and used AIs built for hobbyist or community purposes. Just like corporate social media isn’t all social media, corporate AI isn’t all AI; many people, including @db0@lemmy.dbzer0.com, are building out non-corporate solutions. Which is why the arguments can’t be applied equally to everything.
The Metaverse wasn’t the only metaverse. I’m talking about it, Decentraland, and dozens of other failed projects. Every time one of them failed, the other grifters would insist it wasn’t the real metaverse. Just like you’re arguing that corporate AI shouldn’t be the focus of the discussion, despite it accounting for literally 99% of the compute, capital, and use cases.
I didn’t say it shouldn’t be a focus of discussion. I’m saying separate it from open source projects and their users, because those projects and their users don’t help Sam Altman and friends in any way. These projects exist largely in spite of corporate AI.
They actually do. They normalize it.
No, I think it’s fair to be mad at AI companies and the harm they cause. That anger has to be directed at the actual problem, though, and it needs to be based on facts, not aimed at people who use AI, who would probably use the more efficient open models if they knew about them.
Why? Why shouldn’t I be angry at the people who are also complicit? They’re part of the problem too. People who use AI are contributing to most of the issues both you and I listed. What part of their actions makes them innocent in this?
Are they, though? Are people who use AI Horde or their own GPUs “complicit” in “most of the issues”? If most people used their own GPUs, how many of these issues would really remain without the capitalist aspect (including AI being shoved into every website by venture capitalists to prop up their Ponzi scheme so it doesn’t crash and burn)?
I don’t care about the 1% of users. The only time I’ll have a problem with them is when they share their shitty AI art, shitty text, or, in this case, shitty opinions on AI. If that hurts your feelings, then you’re going to have to learn to live with it or ignore it; I don’t care which. You keep switching who you’re defending: in one comment it’s the poor ignorant users who haven’t heard of open source alternatives, and then when I reply to that, it changes to people who are already using those alternatives.
Disengage.