pytorch / examples

A set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.
https://pytorch.org/examples
BSD 3-Clause "New" or "Revised" License

New examples requested #1131

Open msaroufim opened 1 year ago

msaroufim commented 1 year ago

Hi everyone, @svekars and I are looking to increase the number of new contributions to pytorch/examples. This might be especially interesting to you if you've never contributed to an open source project before.

At a high level, we're looking for new interesting models.

So here's what you need to do

  1. Check out our contributing guide: https://github.com/pytorch/examples/blob/main/CONTRIBUTING.md
  2. Pick a model idea - I've listed a few below, comment on this task so others know you're working on it
  3. Implement your model from scratch using PyTorch; no external dependencies are allowed, to keep the examples as educational as possible

Your implementation needs to include

  1. A folder with your code that defines
    1. Your model architecture
    2. Training code
    3. Evaluation code
    4. An argparser (a rough skeleton is sketched below)
  2. An entry in run_python_examples.sh so your script runs in CI and doesn't break in the future
  3. A README describing any usage instructions

As an example, this recent contribution by @sudomaze is a good one to follow: https://github.com/pytorch/examples/pull/1003/files
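
For reference, here is a rough, hypothetical skeleton of what such an entry point can look like (the file name, model, and flags below are illustrative, not a required template):

# main.py (hypothetical) - skeleton of a pytorch/examples entry point
import argparse

import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    # Model architecture, written from scratch with no external dependencies.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)

    def forward(self, x):
        return self.fc(x.flatten(1))


def train(model, loader, optimizer, device):
    # Training code: one pass over the training loader.
    model.train()
    for data, target in loader:
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        F.cross_entropy(model(data), target).backward()
        optimizer.step()


def evaluate(model, loader, device):
    # Evaluation code: classification accuracy over the eval loader.
    model.eval()
    correct = 0
    with torch.no_grad():
        for data, target in loader:
            data, target = data.to(device), target.to(device)
            correct += (model(data).argmax(dim=1) == target).sum().item()
    return correct / len(loader.dataset)


def main():
    # The argparser: the flags here are only examples, pick whatever your model needs.
    parser = argparse.ArgumentParser(description="My example")
    parser.add_argument("--epochs", type=int, default=10)
    parser.add_argument("--lr", type=float, default=1e-3)
    parser.add_argument("--dry-run", action="store_true",
                        help="quick single-pass run so CI stays fast")
    args = parser.parse_args()
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = Net().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=args.lr)
    # Build your dataloaders here, then loop for args.epochs epochs
    # calling train(...) and evaluate(...).


if __name__ == "__main__":
    main()

The matching entry in run_python_examples.sh would then just invoke something like python main.py --dry-run so CI exercises the script on every change.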

Here are some model ideas

Model ideas

But I'm quite open to anything we don't have that's cool

niyarrbarman commented 1 year ago

Hey! I'm interested in contributing to the vision transformer model, but I don't have any prior open source contribution experience. Would it be okay for me to proceed with this project?

msaroufim commented 1 year ago

Yes please go for it

2357juan commented 1 year ago

I am interested as well. What is a video model? I am looking at some video examples from TensorFlow and Keras; would a spinoff of one of those suffice? That dataset looks like a standard intro video dataset.

Which of these problems can be comfortably exercised on a 2080 Ti (12 GB VRAM)?

xorsuyash commented 1 year ago

@msaroufim I would like to contribute examples related to graph neural networks. Is there a specific dataset I should use, or can I choose any dataset of my own for the examples?

msaroufim commented 1 year ago

Yes to all of the above

aditikhare007 commented 1 year ago

@msaroufim I am interested in contributing to PyTorch open source, thank you for sharing this "New examples requested" note. I would like to contribute to the diffusion model, Stable Diffusion, and Vision Transformer sections and will keep you posted as my work progresses. Please let me know your thoughts on me taking up this project. Thank you. Regards, Aditi

Krish2002 commented 1 year ago

Hey! I would love to contribute to Stable Diffusion. Can I take this up?

msaroufim commented 1 year ago

Hi @Krish2002 yes please go for it

IMvision12 commented 1 year ago

I would love to contribute to FlowNet. Can I take this up?

msaroufim commented 1 year ago

Hi @IMvision12 please do!

abhi-glitchhg commented 1 year ago

I would like to add a video vision transformer model.

Edit: Video ViTs are already present in torchvision; can I still go ahead with this idea? Thanks

msaroufim commented 1 year ago

@abhi-glitchhg please do, just keep in mind the implementation has to be from scratch and not just call the torchvision constructor

JoseLuisC99 commented 1 year ago

I would like to contribute a graph neural network example. However, is there a specific task or model in mind, or can I choose any?

HemanthSai7 commented 1 year ago

Hi @msaroufim I would like to implement language translation using encoder-decoder architecture. Can I take this?

msaroufim commented 1 year ago

@JoseLuisC99 any task or model you like! As long as it's from scratch in pure PyTorch

guptaaryan16 commented 1 year ago

Hey @msaroufim I would like to work on Stable Diffusion and some other topics as well. Thanks

msaroufim commented 1 year ago

@guptaaryan16 @HemanthSai7 @JoseLuisC99 @abhi-glitchhg assigned some models to you, lemme know if you need any help to get it over the finish line. Thanks!

arunppsg commented 1 year ago

Can I take up implementing ControlNet-guided diffusion?

Apologies, I could not complete it. If someone else is interested, feel free to take it up - I am no longer working on it.

HemanthSai7 commented 1 year ago

@msaroufim can I use the transformers library to use the tokenizers?

bhavyashahh commented 1 year ago

@msaroufim can I take up NeRF?

QasimKhan5x commented 1 year ago

@msaroufim I'd like to implement a text-to-3D model. At the moment, I'm deciding between Text2Mesh and CLIP-Forge. DreamFusion seems a little complex.

msaroufim commented 1 year ago

@HemanthSai7 yes, but the model should be pure PyTorch, bonus points for a from-scratch tokenizer! @bhavyashahh Sure! @QasimKhan5x Sounds good, either of those models works
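
For anyone going the from-scratch tokenizer route, a minimal word-level tokenizer is enough to get a translation example going. A sketch (the class name, special tokens, and threshold here are illustrative, not taken from any existing example):

from collections import Counter


class WhitespaceTokenizer:
    # Minimal word-level tokenizer: lowercase, split on whitespace, fixed vocabulary.
    def __init__(self, specials=("<pad>", "<unk>", "<bos>", "<eos>")):
        self.specials = list(specials)
        self.token_to_id = {}
        self.id_to_token = []

    def build_vocab(self, sentences, min_freq=2):
        # Keep every token that appears at least min_freq times in the corpus.
        counts = Counter(tok for s in sentences for tok in s.lower().split())
        self.id_to_token = self.specials + [t for t, c in counts.most_common() if c >= min_freq]
        self.token_to_id = {t: i for i, t in enumerate(self.id_to_token)}

    def encode(self, sentence):
        unk = self.token_to_id["<unk>"]
        ids = [self.token_to_id.get(tok, unk) for tok in sentence.lower().split()]
        return [self.token_to_id["<bos>"]] + ids + [self.token_to_id["<eos>"]]

    def decode(self, ids):
        return " ".join(self.id_to_token[i] for i in ids
                        if self.id_to_token[i] not in self.specials)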

svekars commented 1 year ago

Please reference this issue in your PRs, like "Re #1131".

IMvision12 commented 1 year ago

@msaroufim Sorry, but owing to university examinations I will not be able to participate this time. However, if anybody wants to take FlowNet, they are welcome to do so. :)

HemanthSai7 commented 1 year ago

Wanted to give an update on my task. I have finished preparing the dataset (tokenization, data loading, etc.) for the translation task and will start on the positional embeddings and the other layers.
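
For reference, the standard sinusoidal positional-encoding module is only a few lines of pure PyTorch. A sketch (batch-first tensors and an even d_model assumed; this is not necessarily the version that will land in the PR):

import math

import torch
import torch.nn as nn


class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_len=5000):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1).float()
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
        pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
        self.register_buffer("pe", pe.unsqueeze(0))   # (1, max_len, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model); add the encoding for the first seq_len positions.
        return x + self.pe[:, : x.size(1)]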

yishengpei commented 1 year ago

Hey! I am wondering whether the Vision Transformer model is taken or not; I am willing to contribute. Otherwise, would you be interested if I worked on the Swin Transformer model? Many thanks.

msaroufim commented 1 year ago

Thanks @HemanthSai7

@yishengpei the vision transformer was already completed, but I'd be happy to review a Swin Transformer.

HemanthSai7 commented 1 year ago

I'm seeing a lot of nan values when I print the attn_output_weights in nn.MultiheadAttention in the decoder block. Is it expected or is it due to a fault in the logic?

HemanthSai7 commented 1 year ago

def generate_square_subsequent_mask(seq_len):
    # Boolean lower-triangular mask: True where attending is allowed.
    mask = (torch.triu(torch.ones(seq_len, seq_len)) == 1).transpose(0, 1)
    # Convert to an additive float mask: 0.0 where allowed, -inf where blocked.
    mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
    return mask

I'm unable to understand this way of generating masks used in the source code.

msaroufim commented 1 year ago

I'm seeing a lot of nan values when I print the attn_output_weights in nn.MultiheadAttention in the decoder block. Is it expected or is it due to a fault in the logic?

Could you please share a repro? That's certainly not expected
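
One common way to end up with NaN attention weights (not necessarily what is happening in your code) is an attention-mask row that is entirely -inf: if a query position is allowed to attend to nothing, the softmax over all -inf scores returns NaN for that row. A minimal, hypothetical repro of that failure mode:

import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=8, num_heads=2)
x = torch.randn(5, 1, 8)  # (seq_len, batch, embed_dim)

# Additive float mask of shape (tgt_len, src_len). Only query 0 may attend to key 0;
# queries 1-4 are masked away from every key, so their softmax rows become NaN.
attn_mask = torch.full((5, 5), float("-inf"))
attn_mask[0, 0] = 0.0

out, weights = mha(x, x, x, attn_mask=attn_mask)
print(weights)  # rows 1-4 of the attention weights come out as NaN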

msaroufim commented 1 year ago

def generate_square_subsequent_mask(seq_len):
    mask = (torch.triu(torch.ones(seq_len, seq_len)) == 1).transpose(0, 1)
    mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
    return mask

I'm unable to understand this way of generating masks used in the source code.

What exactly is confusing you?

Running it prints a reasonable-looking mask to me:

generate_square_subsequent_mask(10)
tensor([[0., -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf],
        [0., 0., -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf],
        [0., 0., 0., -inf, -inf, -inf, -inf, -inf, -inf, -inf],
        [0., 0., 0., 0., -inf, -inf, -inf, -inf, -inf, -inf],
        [0., 0., 0., 0., 0., -inf, -inf, -inf, -inf, -inf],
        [0., 0., 0., 0., 0., 0., -inf, -inf, -inf, -inf],
        [0., 0., 0., 0., 0., 0., 0., -inf, -inf, -inf],
        [0., 0., 0., 0., 0., 0., 0., 0., -inf, -inf],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., -inf],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
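
If it helps, an equivalent and slightly more direct way to build the same additive mask (just a sketch for comparison, not the version used in the source):

import torch

def generate_square_subsequent_mask(seq_len):
    # -inf strictly above the diagonal blocks attention to future positions;
    # 0.0 on and below the diagonal keeps current and past positions visible.
    return torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)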

HemanthSai7 commented 1 year ago

@msaroufim I was trying to create my own mask rule. Found out where I went wrong and fixed it.

MBora commented 1 year ago

@msaroufim I would like to contribute to the OpenAI whisper implementation. Can I take this up?

msaroufim commented 1 year ago

Sure yeah! That sounds cool

MBora commented 1 year ago

What is the expected outcome in this context? The model and its pretrained weights can be obtained from OpenAI's Whisper repository. One possible approach is to utilize the model class and showcase the process of training or fine-tuning, as the original repository does not include training functionality.

ebrahimpichka commented 1 year ago

Hi, I see the GNN example is taken care of as a GCN example. Would other GNN variants such as GAT, GraphSAGE, etc. be helpful, or would they be counted as duplicates?

msaroufim commented 1 year ago

@ebrahimpichka I'd love to see variants.

ebrahimpichka commented 1 year ago

@msaroufim great, I'd like to work on it.

mingxzhao commented 10 months ago

@msaroufim if no one is working on Flownet, can I be assigned?

msaroufim commented 10 months ago

@mingxzhao it's yours

JoseLuisC99 commented 10 months ago

@msaroufim When you say "Differentiable physics", do you mean physics-informed deep learning? Or do you have some other architecture in mind?

msaroufim commented 10 months ago

Yeah that's what I had in mind but open to other cool sounding models

JoseLuisC99 commented 10 months ago

Cool! I think I can try PINNs and another modern approach using graph nets, so please assign this problem to me.

kausthub-kannan commented 9 months ago

Hello, I am new to contributing to PyTorch. @msaroufim can I try contributing a new "U-Net image segmentation" example?

Thank You

msaroufim commented 9 months ago

Hi folks, yeah please go for it. No need to ask me for permission, just send a PR and tag me so I can review

sarthak247 commented 7 months ago

Greetings, I just came across this and would also like to be a part of it. As of now, I can see that Toolformer is the only model that is not taken up or assigned to anyone. Is it fine if I work on this? Also, if yes, I would need some assistance on where to start. Regards, Sarthak <3

NoahSchiro commented 3 months ago

Hey @HemanthSai7 and @msaroufim, I've just submitted a PR for the language translation example. Sorry if I duplicated any work you've been doing, but I have not seen any movement on the language translation example for 6 months, so I thought it would be safe to work on it.