msaroufim opened this issue 1 year ago
Hey! I'm interested in contributing to the vision transformer model, but I don't have any prior open source contribution experience. Would it be okay for me to proceed with this project?
Yes please go for it
I am interested as well. What is a video model? I am looking at some video examples from TensorFlow and Keras. Would a spinoff of those suffice? The dataset they use looks like a standard intro video dataset.
Which of these problems can be comfortably run on a 2080 Ti (11 GB VRAM)?
@msaroufim I would like to contribute to examples related to graph neural networks. Is there a specific dataset I should choose for this, or can I pick any dataset of my own choice for the examples?
Yes to all of the above
@msaroufim I am interested in contributing to PyTorch open source, thank you for sharing the list of new examples requested. I would like to contribute to the Diffusion Model, Stable Diffusion, and Vision Transformer sections and will keep you posted as my work progresses. Please let me know your thoughts on my taking up this project. Thank you. Regards, Aditi
Hey! I would love to contribute to Stable Diffusion. Can I take this up?
Hi @Krish2002 yes please go for it
I would love to contribute to FlowNet. Can I take this up?
Hi @IMvision12 please do!
I would like to add a video vision transformer model.
Edit: Video ViTs are already present in torchvision. Can I still go ahead with this idea? Thanks.
@abhi-glitchhg please do, just keep in mind the implementation has to be from scratch and not just call the torchvision constructor.
I would like to contribute to Graph Neural Networks. However, do you have a specific task or model in mind, or can I choose any?
Hi @msaroufim I would like to implement language translation using encoder-decoder architecture. Can I take this?
@JoseLuisC99 any task or model you like! As long as it's from scratch in pure PyTorch.
Hey @msaroufim, I would like to work on Stable Diffusion and some other topics as well. Thanks!
@guptaaryan16 @HemanthSai7 @JoseLuisC99 @abhi-glitchhg assigned some models to you, lemme know if you need any help to get it over the finish line. Thanks!
Can I take up implementing ControlNet-guided diffusion?
Apologies, I could not complete it. If someone else is interested, feel free to take it up - I am no longer working on it.
@msaroufim can I use the transformers library for the tokenizers?
@msaroufim can I take up NeRF?
@msaroufim I'd like to implement a text-to-3D model. At the moment, I'm deciding between Text2Mesh and CLIP-Forge. DreamFusion seems a little complex.
@HemanthSai7 yes, but the model should be pure PyTorch, bonus points for a from-scratch tokenizer! @bhavyashahh Sure! @QasimKhan5x Sounds good, either of those models works.
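On the from-scratch tokenizer point, a minimal sketch of what that could look like, a simple word-level tokenizer with a small vocabulary and special tokens (the class name and defaults are illustrative, not taken from any PR in this thread):

from collections import Counter

class WordTokenizer:
    def __init__(self, texts, min_freq=2, specials=("<pad>", "<unk>", "<bos>", "<eos>")):
        # Count whitespace-separated tokens and keep the frequent ones plus special tokens.
        counts = Counter(tok for text in texts for tok in text.lower().split())
        self.itos = list(specials) + [w for w, c in counts.most_common() if c >= min_freq]
        self.stoi = {w: i for i, w in enumerate(self.itos)}

    def encode(self, text):
        # Map unknown words to <unk> and wrap the sentence in <bos>/<eos>.
        unk = self.stoi["<unk>"]
        ids = [self.stoi.get(tok, unk) for tok in text.lower().split()]
        return [self.stoi["<bos>"]] + ids + [self.stoi["<eos>"]]

    def decode(self, ids):
        return " ".join(self.itos[i] for i in ids)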
Please reference this issue in your PRs, like "Re #1131".
@msaroufim Sorry, but owing to university examinations I will not be able to participate this time. However, if anybody wants to take FlowNet, they are welcome to do so. :)
Wanted to give an update on my task. I have finished preparing the dataset (tokenization, data loading, etc.) for the translation task and will now start on the positional embeddings and other layers.
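For reference, a minimal sketch of the standard sinusoidal positional encoding from "Attention Is All You Need", assuming a batch-first (batch, seq_len, d_model) layout and an even d_model; the class name and defaults are illustrative, not taken from the PR:

import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_len=5000, dropout=0.1):
        super().__init__()
        self.dropout = nn.Dropout(p=dropout)
        position = torch.arange(max_len).unsqueeze(1)                 # (max_len, 1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)                  # even dims
        pe[:, 1::2] = torch.cos(position * div_term)                  # odd dims
        self.register_buffer("pe", pe.unsqueeze(0))                   # (1, max_len, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model); add the precomputed encodings for the first seq_len positions.
        return self.dropout(x + self.pe[:, : x.size(1)])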
Hey! I am wondering whether the Vision Transformer model is taken or not. I am willing to contribute. Or otherwise, would you be interested if I work on the Swin Transformer model? Many thanks.
Thanks @HemanthSai7
@yishengpei vision transformer was completed already but would be happy to review swin transformer
I'm seeing a lot of nan values when I print the attn_output_weights in nn.MultiheadAttention in the decoder block. Is it expected or is it due to a fault in the logic?
import torch

def generate_square_subsequent_mask(seq_len):
    # Lower-triangular causal mask: 0.0 where attention is allowed, -inf where it is blocked.
    mask = (torch.triu(torch.ones(seq_len, seq_len)) == 1).transpose(0, 1)
    mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
    return mask
I'm unable to understand this way of generating masks used in the source code.
> I'm seeing a lot of nan values when I print the attn_output_weights in nn.MultiheadAttention in the decoder block. Is it expected or is it due to a fault in the logic?

Could you please share a repro? That's certainly not expected.
> def generate_square_subsequent_mask(seq_len):
>     mask = (torch.triu(torch.ones(seq_len, seq_len)) == 1).transpose(0, 1)
>     mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
>     return mask
>
> I'm unable to understand this way of generating masks used in the source code.
What exactly is confusing you?
Running it prints a reasonable-looking mask to me:
generate_square_subsequent_mask(10)
tensor([[0., -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf],
[0., 0., -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf],
[0., 0., 0., -inf, -inf, -inf, -inf, -inf, -inf, -inf],
[0., 0., 0., 0., -inf, -inf, -inf, -inf, -inf, -inf],
[0., 0., 0., 0., 0., -inf, -inf, -inf, -inf, -inf],
[0., 0., 0., 0., 0., 0., -inf, -inf, -inf, -inf],
[0., 0., 0., 0., 0., 0., 0., -inf, -inf, -inf],
[0., 0., 0., 0., 0., 0., 0., 0., -inf, -inf],
[0., 0., 0., 0., 0., 0., 0., 0., 0., -inf],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
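For anyone else chasing nan attention weights: below is a small illustrative sketch (not from the PR) of passing this mask to nn.MultiheadAttention, and of the usual culprit, a row that is entirely -inf, which leaves the softmax with nothing to attend to and yields nan:

import torch
import torch.nn as nn

def generate_square_subsequent_mask(seq_len):
    mask = (torch.triu(torch.ones(seq_len, seq_len)) == 1).transpose(0, 1)
    return mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, 0.0)

seq_len, d_model = 5, 16
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)
x = torch.randn(2, seq_len, d_model)

# Causal mask: every query position can attend to at least itself, so the weights stay finite.
_, weights = attn(x, x, x, attn_mask=generate_square_subsequent_mask(seq_len))
print(weights.isnan().any())  # tensor(False)

# A fully masked row (all -inf) gives softmax nothing to normalize over -> nan weights.
bad_mask = generate_square_subsequent_mask(seq_len)
bad_mask[0, :] = float('-inf')
_, weights = attn(x, x, x, attn_mask=bad_mask)
print(weights.isnan().any())  # tensor(True)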
@msaroufim I was trying to create my own mask rule. Found out where I went wrong and fixed it.
@msaroufim I would like to contribute to the OpenAI whisper implementation. Can I take this up?
Sure yeah! That sounds cool.
What is the expected outcome in this context? The model and its pretrained weights can be obtained from OpenAI's Whisper repository. One possible approach is to utilize the model class and showcase the process of training or fine-tuning, as the original repository does not include training functionality.
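One rough sketch of what that fine-tuning could look like, assuming the openai-whisper package is installed and audio is available as 16 kHz waveforms; the train_step helper, the "tiny" checkpoint, and the learning rate are illustrative choices, not part of any existing example:

import torch
import torch.nn.functional as F
import whisper
from whisper.tokenizer import get_tokenizer

model = whisper.load_model("tiny")                    # pretrained weights from OpenAI
tokenizer = get_tokenizer(model.is_multilingual)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def train_step(audio, transcript):
    # audio: 1-D 16 kHz waveform (e.g. from whisper.load_audio); transcript: a string.
    mel = whisper.log_mel_spectrogram(whisper.pad_or_trim(audio))
    mel = mel.unsqueeze(0).to(model.device)           # (1, 80, 3000)
    ids = list(tokenizer.sot_sequence) + tokenizer.encode(transcript) + [tokenizer.eot]
    tokens = torch.tensor([ids], device=model.device)
    logits = model(mel, tokens[:, :-1])               # teacher-forced decoder pass
    loss = F.cross_entropy(logits.transpose(1, 2), tokens[:, 1:])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()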
Hi, I see the GNN example is taken care of as a GCN example. Would other GNN variants such as GAT, GraphSage, etc. be helpful, or would they be counted as kinda duplicates?
@ebrahimpichka I'd love to see variants.
@msaroufim great, I'd like to work on it.
@msaroufim if no one is working on Flownet, can I be assigned?
@mingxzhao it's yours
@msaroufim When you say "Differentiable physics", do you mean physics-informed deep learning? Or do you have some other architecture in mind?
Yeah that's what I had in mind but open to other cool sounding models
Cool! I think that I can try PINNs and another modern solution using graph nets, so please assign me this problem.
Hello, I am new to contributing to PyTorch. @msaroufim, can I try contributing a new example, "UNet Image Segmentation"?
Thank You
Hi folks, yeah please go for it. No need to ask me for permission, just send a PR and tag me so I can review.
Greetings, I just came across this and would also like to be a part of it. As of now, I can see that Toolformer is the only model that is not taken up or assigned to anyone. Is it fine if I work on this? If yes, I would also need some assistance as to where to start. Regards, Sarthak <3
Hey @HemanthSai7 and @msaroufim, I've just submitted a PR for the language translation example. Sorry if I duplicated any work you've been doing, but I have not seen any movement on the language translation example for 6 months, so I thought it would be safe to work on it.
I looked through the PRs that reference this issue and it seems like a lot of models were assigned but never completed, so I'll start working on some of them. For now, I am picking up FlowNet, as I've worked with something similar. I'll push a PR for it soon. Cheers <3
Hi everyone, @svekars and I are looking to increase the number of new contributions to pytorch/examples. This might be especially interesting to you if you've never contributed to an open source project before.
At a high level, we're looking for new interesting models.
So here's what you need to do.
Your implementation needs to include:
- run_python_examples.sh
As an example, this recent contribution by @sudomaze is a good one to follow: https://github.com/pytorch/examples/pull/1003/files
Here are some model ideas:
But I'm quite open to anything we don't have that's cool