labeste opened 1 year ago
@rsxdalv Thanks for the links. It says "See the paper for more details about the training set and corresponding preprocessing." Do we know what this "paper" is? There's no easy link to it, anyway. But even if these are the sources for training this program, it doesn't answer my question. For instance, does this artist know he has sold his music to feed some AI that could possibly one day replace him?
https://www.pond5.com/fr/artist/marscott "Selling my music on Pond5 allows me to focus on what I love: making music. I'm able to experiment with different genres and new ways of creating tracks because Pond5 gives me access to a pool of very diverse customers. The interface is easy to use, and the team is friendly and responsive." Diego Martínez a/k/a Marscott, Music Artist, Chile.
Let's read what Pond5 says to the artists selling their music on its platform (from their FAQ): "How will my content be used?" "Pond5 has a global client base and sales team working for you—your content could be used for a wide variety of projects around the world, ranging from student films to television series and feature films." No mention of Meta/Facebook/AI developers here, of course...
Alright for the "paper". Still, I really don't see anything about the artists sampled and used in the datasets, or any consultation just to check they're aware of all this. In the music industry, if you borrow or sample (should I also say "interpolate"?) another artist to produce a new track, you have to pay royalties (Pond5 actually does this), but you also probably have to ask for permission. We all know stories about an artist who wouldn't let his music be used in some publicity or marketing advert, yet agreed to let it be covered by others. That's why I opened this issue and asked about it. Not really a technical one, but it could still be the biggest of all on this kind of "sorcerer's apprentice" AI development...
The dataset is 'internal' (according to the paper), so until it is released to the public I don't think we will be seeing any details about the creators or who is involved. I wouldn't be quite so harsh on the developers of this model. Given the lack of information on this subject, most likely due to a nondisclosure agreement, we can't expect them to respond with potentially confidential information beyond what is readily available and already published. Although I presume that they took the proper precautions when creating a dataset like this not to infringe on the authors' rights or interests, which, if not the case, could lead to issues later on.
Actually, according to the paper, the dataset is half 'internal' ('10K high-quality music tracks', licensed), with the other half coming from Shutterstock and Pond5 (so 10K too). There's an interesting discussion about all these matters here: https://youtu.be/Wij2wH_Xi1c One of its conclusions is about transparency, and that's why I opened this issue: we don't know who the artists are who made this technology usable, besides the developers, of course. And we don't know whether they are aware this is happening, or whether they are involved beyond what they think they signed up for. Am I harsh, really? When ChatGPT can already replace some basic code developers, and Midjourney and DALL-E can produce pictures for many marketing requirements without really needing to hire a visual artist... Isn't that harsh? Is it too much to ask for some kind of equity and transparency? Well, I'm not convinced myself that 'they made sure to take the proper precautions'. All the GAFAM are known to push and break ethical or legal boundaries in order to gain a monopoly; then, when public opinion becomes aware of the issues, they have a bunch of lawyers to resolve the problem or try to lower the fines... But in the meantime, it can be too late and the harm is done. Meta/Facebook is clearly no exception, and has been fined numerous times.
I purposefully left the ethical implications of work like this out of that perspective so as not to start an argument about it, which was inevitable either way. The main reason I said not to treat the researchers so "harshly" is that researchers working under a major company like Meta Research are not entitled to the work that they do, so pointing a finger at the researchers who have poured hours into their own preferred fields of computer science would be unreasonable. I am not trying to defend Meta, or any other possible monopoly; I want to defend the researchers. I am in favor of the current push to open-source the hardware and software that platforms like ChatGPT and CharacterAI depend on, but I am also aware that this must be regulated on some level (whatever that might look like). So please, let's not start this debate, as I think this is not the place for such a topic (this tracker being mainly for error reports to improve this software). If you want to debate it, I am sure there are multiple Reddit forums or other places to do so.
Back to the topic at hand: I guess you won't be able to get any answers on how this dataset was put together and what restrictions and freedoms were placed on the composers, writers, and band members, unless you manage to find somebody who contributed to it (which seems like a slim chance, looking at it now).
Just a reminder that there are other, completely opaque companies offering description-based music generation. Not open source, no information about the dataset, nothing. In addition, this model is under a very strict license - CC BY NC 4.0. I am sure you are familiar with it. Although it's not a guarantee for all possible future scenarios, we can't say "legally" that they have unleashed music generation.
@rsxdalv Hey, thanks for your input! Two questions:
- Where did you find the model license, here? https://github.com/facebookresearch/audiocraft/blob/main/LICENSE_weights
- What other "opaque companies that offer description-based music generation" are you specifically talking about? Just wondering, I'm trying to learn the space. Thanks!
Also the "CC BY NC 4.0" license does not seem so strict. According to this docs page on Creative Commons it allows sharing for commercial purposes (which I'm guessing is the purpose those 10k/20k artists would care about: https://creativecommons.org/licenses/by-nd/4.0/).
Also the "CC BY NC 4.0" license does not seem so strict. According to this docs page on Creative Commons it allows sharing for commercial purposes (which I'm guessing is the purpose those 10k/20k artists would care about: https://creativecommons.org/licenses/by-nd/4.0/).
Please don't mix up BY ND with BY NC; they are very different. Here's NC: https://creativecommons.org/licenses/by-nc/4.0/
> @rsxdalv Hey, thanks for your input! Two questions:
> - Where did you find the model license, here? https://github.com/facebookresearch/audiocraft/blob/main/LICENSE_weights
> - What other "opaque companies that offer description-based music generation" are you specifically talking about? Just wondering, I'm trying to learn the space. Thanks!
Yes. The model code is MIT, so if you had a perfectly legal dataset and a huge amount of resources, you could make your own MusicGen without worrying about the model weights, which are not MIT.
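To make that split concrete, here is a minimal sketch using the usage pattern from the repo's README (the exact pretrained model identifier varies between audiocraft versions, so treat that string as an assumption):

```python
# Minimal sketch: the audiocraft *code* below is MIT-licensed, but the
# pretrained *weights* fetched by get_pretrained() are CC BY-NC 4.0.
from audiocraft.models import MusicGen

# Downloads the CC BY-NC weights into the MIT-licensed model structure.
# (Identifier assumed from the README; it may differ by version.)
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=8)  # seconds of audio to generate

# Audio generated with these weights inherits the NonCommercial restriction.
wav = model.generate(['lo-fi beat with warm bass'])
```

Nothing stops you from running the same MIT code with weights you trained yourself on your own licensed data; only Meta's released weight files carry the NC clause.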
To round this discussion off, here is a wiki page on CC's definition of NonCommercial (for informational, not legal, use only):
https://wiki.creativecommons.org/wiki/NonCommercial_interpretation
What's the difference between a model and its weights? I thought weights were the model...
A model is the digital representation of a neural network: the structure, the encoder, etcetera (mostly as programmed by the developers). The weights of a model tell it how much, and in what way, to transform its input; they are mostly obtained through a process called training (and fine-tuning). Training is done at network scale: expected inputs and outputs are provided, and an algorithm adjusts the weights and biases of the neurons in the network so that it produces the expected output for a given input.
So, in simple terms, the model is the code, and the weights and biases allow the code to give the expected output for any given input.
Please note that this is a high-level overview and should not be generalized across all neural networks, as they can differ quite a lot depending on their intended use case.
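If a code sketch helps, here is a minimal, generic PyTorch example (illustrative only; it is not audiocraft's actual architecture, and `TinyNet` is a made-up toy network) separating the model definition from its weights:

```python
import torch
import torch.nn as nn

# The "model" is the structure: layers and how data flows through them.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 2)  # structure, written by the developer

    def forward(self, x):
        return self.layer(x)

model = TinyNet()

# The "weights" are the learned numbers living inside that structure.
weights = model.state_dict()
print(list(weights.keys()))  # ['layer.weight', 'layer.bias']

# Training nudges the weights so the structure maps inputs to expected outputs.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, target = torch.randn(8, 4), torch.randn(8, 2)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()  # the structure is unchanged; only the numbers moved
```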
The model is the structure of a "child's" brain, the training dataset is the curriculum, and the final weights after training are the "educated child". But sometimes this is blurred, and the actual functional "app" gets called the model too.
For a well-known reference, ChatGPT takes months to train and has cost millions to arrive at the correct weight values.
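Continuing the hypothetical `TinyNet` sketch above, the analogy in code: two instances share the same "brain structure", and only the loaded weight file makes one of them the "educated child":

```python
import torch  # reuses TinyNet and the trained `model` from the sketch above

# Two networks with identical structure: the same class, the same code.
untrained = TinyNet()
educated = TinyNet()

# The weight file holds only numbers, no code: the "education" itself.
torch.save(model.state_dict(), 'educated.pt')

# Loading those numbers into a fresh instance transfers the "education".
educated.load_state_dict(torch.load('educated.pt'))
```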
@rsxdalv I am confused: can we use the music generated by musicgen for commercial use or not?
No you can't, see the link he posted above: https://wiki.creativecommons.org/wiki/NonCommercial_interpretation
ok, thanks... no wonder nobody is talking about it anymore ;)
All this is interesting, but I'm still struggling to find the answer to this issue ^^
who cares
Hello, grumpy smurf. Well, maybe a few million people, I guess...
Everything is in the title. I'm astonished there's only a mention of the "20K hours of licensed music" without any sources given for it (or they could be hidden somewhere I didn't see). Which artists? Which original material? Without those, you would have to train the program yourself, or the community would. At least that would be a real open-source philosophy. But without mention of the original material - used with or without permission - this could be theft. Thanks in advance for your answers.