Which is exactly what Sarah Silverman is claiming ChatGPT is doing.
And, beyond the individual crime of a person reading a pirated book, again, we’re talking about ChatGPT and other AI magnifying reach and speed beyond what any individual ever could manage, even if they did nothing but read pirated material all day, not unlike websites like The Pirate Bay. Y’know, how those websites constantly get taken down and have to move around the globe to areas beyond the reach of the law, because of the crimes they’re committing.
I’m not like, anti-piracy or anything. But I also don’t think companies should be using pirated software, and my big concern about LLMs isn’t really private use, but corporate use.
Consuming content illegally is by definition a crime, yes. It also has no effect on your output. A summary or review of that content will not be infringing; it will still be fair use.
A more substantial work inspired by that content could be infringing or not, depending on how close it is to the original content, but not on the legality of your viewing of that content.
Nor is it relevant. If you have any success with your copy, you are going to cause far more damage to the original creator than pirating one copy would.
And, beyond the individual crime of a person reading a pirated book, again, we’re talking about ChatGPT and other AI magnifying reach and speed beyond what any individual ever could manage, even if they did nothing but read pirated material all day, not unlike websites like The Pirate Bay. Y’know, how those websites constantly get taken down and have to move around the globe to areas beyond the reach of the law, because of the crimes they’re committing.
I can assure you that The Pirate Bay is quite stable. I would like to point out that none of the AI vendors has actually been convicted of copyright infringement yet. That their use is infringing and a crime is your opinion.
It is also going to be irrelevant, because there are companies that do own massive amounts of copyrighted material and will be able to train their own AIs, both to sell as a service and to cut the labor costs of creating new material. There are also companies, such as Adobe, that got people to agree to license their content for AI training.
So copyright law will not be able to help creators, and there will be a push for more laws and regulation. Depending on what gets pushed through, you can forget about AI that isn’t backed by major corporations, expect reduced fair-use rights (as in unapproved reviews becoming de facto illegal), and perhaps a new push against software that could be used for piracy, such as unregulated video or music players, never mind encoders.
Consuming content illegally is by definition a crime, yes. It also has no effect on your output. A summary or review of that content will not be infringing; it will still be fair use.
That their use is infringing and a crime is your opinion.
“My opinion”? Have you read the headline? It’s not my opinion that matters; it’s that of the plaintiffs in this lawsuit. And this lawsuit does allege that copyright infringement has occurred; it’ll be up to the courts to decide whether the claim holds water.
I’m definitely not sure that GPT-4 or other AI models are copyright-infringing or otherwise illegal. But I think there’s enough that seems questionable to justify a lawsuit for some fact-finding, and honestly, I feel like the law is a few years behind on AI anyway.
But it seems plausible that the AI vendors could be found to be ‘illegally distributing works’, or to have otherwise broken IP laws at some point during their training or operation. A lot depends on what kind of agreements were signed over the contents of the training packages, something I frankly know nothing about and would like to see come to light.
“My opinion”? Have you read the headline? It’s not my opinion that matters; it’s that of the plaintiffs in this lawsuit. And this lawsuit does allege that copyright infringement has occurred; it’ll be up to the courts to decide whether the claim holds water.
No, the opinion that matters is the opinion of the judge. Until there is a decision, there is no copyright infringement.
I’m definitely not sure that GPT-4 or other AI models are copyright-infringing or otherwise illegal. But I think there’s enough that seems questionable to justify a lawsuit for some fact-finding
You sure speak as if you do.
and honestly, I feel like the law is a few years behind on AI anyway.
But it seems plausible that the AI vendors could be found to be ‘illegally distributing works’, or to have otherwise broken IP laws at some point during their training or operation. A lot depends on what kind of agreements were signed over the contents of the training packages, something I frankly know nothing about and would like to see come to light.
I’ve said in my previous post that copyright will not solve these problems, which is what you describe as the law being behind AI. Considering how copyright law ‘caught up with the times’ at the beginning of the internet, I am not optimistic that the changes will be beneficial to society.
Consuming content illegally is by definition a crime, yes.
What law makes it illegal to consume an unauthorized copy of a work?
That’s not a flippant question. I am being absolutely serious. Copyright law prohibits the creation and distribution of unauthorized copies; it does not prohibit the reception, possession, or consumption of those copies. You can only declare content consumption to be “illegal” if there is actually a law against it.
What law makes it illegal to consume an unauthorized copy of a work?
That’s not a flippant question. I am being absolutely serious. Copyright law prohibits the creation and distribution of unauthorized copies; it does not prohibit the reception, possession, or consumption of those copies. You can only declare content consumption to be “illegal” if there is actually a law against it.
I mean, you can do that, but that’s a crime.
Which legal system?