A new wave of AI is poised to transform the technologies we use every day. Trust must be at the core of how we develop and deploy AI — it is not an optional add-on. Mozilla has long championed a world where AI is more trustworthy, investing in startups and advocating for laws.
If you ask how to build a bomb and it tells you, wouldn’t Mozilla get in trouble?
Do gun manufacturers get in trouble when someone shoots somebody?
Do car manufacturers get in trouble when someone runs somebody over?
Do search engines get in trouble if they accidentally link to harmful sites?
What about social media sites getting in trouble for users uploading illegal content?
Mozilla doesn’t need to host an uncensored model, but their open-source AI should allow users to train an uncensored one. So I’m not asking them to host this themselves, which is an important distinction I should have made.
Also, uncensored LLMs exist already, so any damage they could cause is already possible.
Yes, if it can be shown the accident was partly caused by the manufacturer’s negligence — if a safety measure was not in place or did not work properly, or if it happens suspiciously more often with models from this brand. Apart from solid legal trouble, they can get into PR trouble if enough people start to think that way, whether or not it’s true.
That’s very unrelated
Then let me spell it out: If ChatGPT convinces a child to wash their hands with self-made bleach, be sure to expect lawsuits and a shit storm coming for OpenAI.
If that occurs, but no liability can be found on the side of ChatGPT, be sure to expect petitions and a shit storm coming for legislators.
We generally expect individuals and companies to behave in society with the peace and safety of others in mind, including strangers and minors.
Liabilities and regulations exist for these reasons.
Again… this is still missing the point.
Let me spell it out: I’m not asking for companies to host these services, so they would not be held liable.
For this example to be related, ChatGPT would need to be open source and let you plug in your own model. We should have the freedom to plug in our own trained models, even uncensored ones. This is the case with LLaMA and other AI systems right now, and I’m encouraging Mozilla’s AI to allow us to do the same thing.
Why are lolbertarians on lemmy?