A pre-trained model alone can’t really be open source. Without the source code and full data set used to generate it, a model alone is analogous to a binary.
@sunstoned @Ephera That’s nonsense. You could write the scripts, collect the data, publish it all, but without the months of GPU training you wouldn’t have the trained model, so it would all be worthless. The code used to train all the proprietary models is already open-source; it’s things like PyTorch, TensorFlow, etc. For a model to be open-source means you can download the weights and you are allowed to use it as you please, including modifying it and publishing again. It’s not about the dataset.
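Concretely, “download the weights and use them as you please” amounts to something like this. A minimal sketch, assuming the Hugging Face `transformers` library; the model id is just an example of publicly downloadable weights:

```python
# Minimal sketch: what "download the weights and use them as you please" looks like
# in practice. Assumes the Hugging Face `transformers` library; the model id is just
# an example of openly published weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Use the weights: generate some text.
inputs = tokenizer("Open source means", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# "Modify it and publish again": after further fine-tuning, save and redistribute.
model.save_pretrained("./my-derived-model")
tokenizer.save_pretrained("./my-derived-model")
```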
Quite aggressive there, friend. No need for that.

You have a point that the intensive and costly training process is a factor in the usefulness of a truly open source gigantic model. I’ll assume here that you’re referring to the likes of Llama3.1’s heavy variant or a similarly large LLM. Note that I wasn’t referring to gigantic LLMs specifically when I said “models”; it is a very broad category.
However, that doesn’t change the definition of open source.
If I have an SDK to interact with a binary and “use it as [I] please” does that mean the binary is then open source because I can interact with it and integrate it into other systems and publish those if I wish? :)
@sunstoned Please don’t assume anything, it’s not healthy.
To answer your question - it depends on the license of that binary. You can’t just automatically consider something open-source. Look at the license. Meta, Microsoft and Google routinely misrepresent their licenses, calling them “open-source” even when they aren’t.
But the main point is that you can put a closed-source license on a model trained from open-source data. Unfortunately. You are barking up the wrong tree.
Explicitly stating assumptions is necessary for good communication. That’s why we do it in research. :)
> it depends on the license of that binary
It doesn’t, actually. A binary alone, by definition, is not open source as the binary is the product of the source, much like a model is the product of training and refinement processes.
> You can’t just automatically consider something open source
On this we agree :) which is why saying a model is open source or slapping a license on it doesn’t make it open source.
> the main point is that you can put a closed-source license on a model trained from open-source data

Actually, the ability to legally produce closed-source material depends heavily on how the data is licensed in that case.

This is not the main point, at all. This discussion is regarding models that are released under an open source license. My argument is that they cannot be truly open source on their own.

Just because open source AI is not feasible at the moment is no reason to change the definition of open source.
@dandi8 but you are the one who is changing it. And who said it’s not feasible? Mixtral model is open-source. WizardLM2 is open-source. Phi3:mini is open-source… what’s your point?
But the license of the model is not related to the license of the data used for training, nor the license for the scripts and libraries. Those are three separate things.
> Open-source software (OSS) is computer software that is released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose.

https://en.m.wikipedia.org/wiki/Open-source_software
From Mistral’s FAQ (https://huggingface.co/mistralai/Mistral-7B-v0.1/discussions/8):

> We do not communicate on our training datasets. We keep proprietary some intermediary assets (code and resources) required to produce both the Open-Source models and the Optimized models. Among others, this involves the training logic for models, and the datasets used in training.
>
> Unfortunately we’re unable to share details about the training and the datasets (extracted from the open Web) due to the highly competitive nature of the field.
The training data set is a vital part of the source code because without it, the rest of it is useless. The model is the compiled binary, the software itself.
If you can’t share part of your source code due to the “highly competitive nature of the field” (or whatever other reason), your software is not open source.

I cannot look at Mistral’s source and see that, oh yes, it behaves this way because it was trained on this piece of data in particular - because I was not given access to this data.
I cannot build Mistral from scratch, because I was not given a vital piece of the recipe.
I cannot fork Mistral and create a competitor from it, because the devs specifically said they’re not providing the source because they don’t want me to.
You can keep claiming that releasing the binary makes it open source, but that’s not going to make it correct.
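For what it’s worth, here is roughly what “building from scratch” means; a toy sketch in plain PyTorch (every name here is illustrative, none of it is Mistral’s actual code), showing that the dataset is as much an input to the build as the code is:

```python
# Toy sketch: the weights are the *output* of code + data, the way a binary is the
# output of a compiler + source. Plain PyTorch; everything here is illustrative,
# nothing is Mistral's actual training code.
import torch
from torch import nn

def build_model(dataset):
    """dataset: iterable of (input, target) tensor pairs -- the part that is not published."""
    model = nn.Linear(4, 1)                                  # the architecture: published
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # the training logic
    loss_fn = nn.MSELoss()
    for x, y in dataset:                                     # the data: NOT published
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    return model                                             # the weights: the "binary"

# Without the dataset there is nothing to pass in: the finished artifact can be
# re-used, but the build cannot be reproduced or meaningfully audited.
```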
@dandi8

> The training data set is a vital part of the source code because without it, the rest of it is useless.
This is simply false. The dataset is not the “source code” of a model. You need to delete this notion from your brain. A model is not the same as a compiled binary.

Gee, you sure put a lot of effort into supporting your argument in this comment.
@dandi8 But the proof is in your quote. Open source is a license which allows people to study the source code. The source code of a model is a bunch of float numbers, and you can study it as much as you want in Mixtral and others. Clearly a model can be published without the dataset (Mixtral), and also a model can be closed, hosted, unavailable for study (OpenAI). I think you need to find some argument showing how “source code” of a model = the dataset. It just isn’t so.
That’s like saying the source code of a binary is a bunch of hexadecimal numbers. You can use a hex editor to look at the “source” of every binary but it’s not human readable…
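You can “study” either of them in exactly the same, equally unhelpful sense. A quick sketch, assuming a Unix system and a PyTorch-format checkpoint whose filename is purely illustrative:

```python
# Sketch: "studying" a binary as hex vs. "studying" a model as floats.
# Assumes a Unix system; the checkpoint filename is purely illustrative.
import torch

with open("/bin/ls", "rb") as f:                 # any compiled binary
    print(f.read(16).hex())                      # readable as hexadecimal numbers

state_dict = torch.load("model.bin", map_location="cpu")
first_tensor = next(iter(state_dict.values()))
print(first_tensor.flatten()[:8])                # readable as a bunch of floats

# Neither dump tells you how the artifact was produced or lets you rebuild it.
```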
Yes, the model can be published without the dataset - that makes it, by definition, freeware (free to distribute). It can even be free for commercial use. That doesn’t make it open source.
At best, the tools to generate a model may be open source, but, by definition, the model itself can never be considered open-source unless the training data and the tools are both open-source.
My point precisely :)