Well, I might get disliked for this opinion, but in some cases it’s perfectly fine for a computer to make a management decision. However, this should also mean that the person in charge of said computer, or the one putting the decision by the computer into actual action, should be the one that gets held responsible. There’s also the thing where it should be questioned how responsible it is to even consider the management decisions of a computer in a specific field. What I’m saying is that there’s no black and white answer here.
I’ve thought about this with regard to AI and work. Every time I sit in a post mortem, it’s about human errors and process fixes.
The day a post mortem ends with “well the AI did it so nothing we can do” is the day I look towards… with dread.
The directors are not going to like this
I asked the computer if I should read the article, it said no. Am I in an abusive relationship?
That is ridiculous, clearly. I’ll use a mainstream search engine, tailor-made to my needs, to make sure that cannot happen
“No networked computers!” Colonial fleet high command standing orders
Cylons hate this little trick.
A complete one-eighty nowadays… “As a highly paid ‘business’ exec I have no ideas… computer, tell me what to do.”
That’s why board executives and businesses are so excited about it
They can finally get rid of McKinsey and blame it on the cheaper and faster trendy butthole logo of the month.
Why would they get rid of McKinsey? That would make dinner at the club super awkward!
It’s not my fault
I was just following orders
It’s just company policy
It’s just a misstep in the algorithm
I’m sorry the computer said layoffs so… Get fucked.
AI says you’re not a citizen. Deported.
I generally agree.
Imagine, however, that a machine objectively makes better decisions than any person. Should we then still trust the human’s decision just to have someone who is accountable?
What is the worth of having someone who is accountable anyway? Isn’t accountability just an incentive for humans not to fuck things up? It’s also nice for pointing fingers if things go bad - but is there actually any value in that?
Imagine however, that a machine objectively makes the better decisions than any person.
You can’t know if a decision is good or bad without a person to evaluate it. The situation you’re describing isn’t possible.
the people who deploy a machine […] should be accountable for those actions.
How is this meaningfully different from just having them make the decisions in the first place? Are they too stupid?
Imagine however, that a machine
That’s hypothetical. In the real world, in human society, the humans who are part of corporations and profit by making/selling these computers must also bear the responsibility.
Tbf that leads to the problem of:
Company/Individual makes a program that is in no way meant for making management decisions.
Someone else comes and deploys that program to make management decisions.
The ones that made that program couldn’t stop the ones that deployed it from deploying it.
Even if the maker aimed to make a decision-making program and marketed it as such, whoever deployed it is ultimately responsible for it. As long as the maker doesn’t fake tests or certifications, of course; I’m sure that would violate many laws.
The premise is that a computer must never make a management decision. Making a program capable of management decisions already fails that premise. The deployment and use of that program to that end is built upon that failure.
I believe those who deploy the machines should be responsible in the first place. The corporations who make/sell those machines should be accountable if they deceptively and intentionally program those machines to act maliciously or in somebody else’s interest.
That’s the neat thing, you can deny accountability by blaming the computer’s decision
Unfortunately, what’s actually happening is humans are being kept in the loop of AI decisions solely to take the blame if the AI screws up.
So the CEOs who bought the AI, the company that sold the AI, and the AI tool itself all get to dodge responsibility for the AI’s failures by blaming a human worker.
For example, this discussion of an AI generated summer reading guide that hallucinated a bunch of non-existent books:
The freelance writer who authored this giant summer reading guide with all its lists had been tasked with doing the work of literally dozens of writers, editors and fact-checkers. We don’t know whether his boss told him he had to use AI, but there’s no way one writer could do all that work without AI.
In other words, that writer’s job wasn’t to write the article. His job was to be the “human in the loop” for an AI that wrote the articles, but on a schedule and with a workload that precluded his being able to do a good job. It’s more true to say that his job was to be the AI’s “accountability sink” (in the memorable phrasing of Dan Davies): he was being paid to take the blame for the AI’s mistakes.
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE A COMPUTER MUST
NEVER MAKE A MANAGEMENT DECISION
You are essentially saying
“Management is essential, replace the common workforce with AI”
Well… If I get fired, I will hold you accountable!
The computer can’t be held accountable, but the programmer and operator can.
I could go on a whole thing about mission rules and command decisions here, but I’m sick of typing for the day.
So when is Musk getting held accountable for making a literal US funded Nazi waifu bot
When the humans win the class war against the lizards.
This endless separation into “managers” and “not managers” is so unproductive. Everyone manages something. That’s why you’re employed.
Everyone manages something.
Most workers manage something and create value. Managers are only managing, remove them and nothing changes - usually things get more optimized, actually.
Sounds like something a manager would say. Some of us produce, create value through our labor, while some sit their fat asses at a desk and only grace the production floor to make everybody’s day just a little more difficult. So you just get on back up there to the big house and let us handle things out here where you can’t hack it.
I manage to get out of bed.
Barely.
I don’t think this is wise at all.
It’s just people putting into words their wish to be able to punish and assign blame, above their wish to be pragmatic.
If software is better at something, there is no reason to be mad at that software.
More than that, the idea that the software vendor could not be held liable is farcical. Of course they could be, or the company running said software. In fact, they’d probably get more shit than managers who regularly get away with ridiculous shit.
I mean wage theft is the biggest form of theft for a reason, and none of the wage thieves are machines (or at least most aren’t).
Almost everything you said here is wrong. Much of it dangerously so.
I like the part where you have no details or arguments, just vibes.
You can’t reason someone out of a position they didn’t reason themselves into, and I cannot figure out how you came to conclusions that incorrect. Just leaving a warning; my reply wasn’t for you.
You’re right about wage theft being common. So that’s something.
This is pure pseudo-intellectualism because you literally have no argument or point.
You have no reasoning and are projecting that onto me because you can’t explain this opinion your feelings have brought you to.
I’m not willing to argue with you. I’ve argued this with you¹ a thousand times, you are not rational. Everyone who reads your shit knows what I’m talking about. Ask them.
¹perhaps with a different name and face, but otherwise indistinguishable. It gets tedious.
With the amount you’ve typed you could have easily typed a rationale. The truth is your opinions don’t hold weight and have no good rationale. That is all.
The burden of proof is on you. Show me one example of a company being held liable (really liable, not a settlement/fine for a fraction of the money they made) for a software mistake that hurt people.
The reality is that a company can make X dollars with software that makes mistakes, and then pay X/100 dollars when that hurts people and goes to court. That’s not a punishment; that’s a cost of doing business. And the company pays that fine while the humans who made those decisions are shielded from further repercussions.
When you said:
the idea that the software vendor could not be held liable is farcical
We need YOU to back that up. The rest of us have seen it never be accurate.
And it gets worse when the software vendor is a step removed: see Flock cameras making big mistakes. The software decided that a car was stolen, but it was wrong. The police intimidated an innocent civilian because the software was wrong. Not only were the police not held accountable, Flock was never even in the picture.
Since when are managers held accountable? Is this new?
TBF Management can barely make any management decisions either…
and are rarely held accountable.