XiangLi1999 / Diffusion-LM

Importance of small vocab size and dimensionality of diffusion space for e2e-tgt experiments #11

Closed jwkirchenbauer closed 1 year ago

jwkirchenbauer commented 2 years ago

Hi,

I noticed that training the diffusion model with the code's default settings results in a small-vocab custom tokenizer and a matching randomly initialized embedding layer (821 -> 16).

I also see that this setting uses a small latent space of dimension 16, which the hidden states are projected to and from during diffusion.

How important were these two factors for training? I see that using the pretrained BERT tokenizer was an option; I assume this doesn't work as well? Were these smaller-scale components required for the diffusion-lm to train stably and converge?
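
For concreteness, here is roughly how I picture that setup (my own sketch using the numbers above, not the repo's actual code):

```python
import torch
import torch.nn as nn

# Sketch of my reading of the default e2e-tgt setup (not the repo's code):
# a small custom vocab embedded into a 16-dimensional diffusion space.
vocab_size, latent_dim = 821, 16                 # the "821 -> 16" mentioned above

word_emb = nn.Embedding(vocab_size, latent_dim)  # randomly initialized embedding layer
tokens = torch.randint(0, vocab_size, (1, 64))   # (batch, seq_len)
x0 = word_emb(tokens)                            # (1, 64, 16): the variable that gets noised
```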

XiangLi1999 commented 2 years ago

Hi,

Thanks for the question! I think the default is outdated and not optimal. I would recommend following the README instructions and running with end-to-end training by adding the argument --app "--predict_xstart True --training_mode e2e".

For the randomly initialized embedding, I find that larger embedding dimensions hurt performance, especially if you don't set --predict_xstart True. My intuition is that the model would need to spend too much modeling effort memorizing the large embeddings.

In general, if you train with the command in the README (i.e., --app "--predict_xstart True --training_mode e2e"), the dimension size is more of a hyper-parameter. For harder datasets, it's important to set the dimension larger (e.g., 128 or 256), though expanding the embedding size further leads to diminishing returns. For easier datasets, it's fine to set the dimension even smaller.

The other factor you asked about is vocab size. I think a smaller vocab size is compatible with modeling via smaller embedding dimensions; enlarging the vocab size also requires making the embedding dimension larger. I have tried some preliminary experiments using the full BERT vocabulary, and it also seems to generate fluent text.
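
As a rough geometric illustration of that intuition (hypothetical numbers, not from our experiments): with randomly initialized embeddings, packing a larger vocab into the same small dimension pushes nearest-neighbour words much closer together, which makes rounding noisy vectors back to discrete words harder.

```python
import torch

# Hypothetical illustration (not from our experiments): average cosine similarity
# between a random word embedding and its nearest neighbour. More words in the
# same small space -> closer neighbours -> harder rounding back to discrete words.
def mean_nearest_neighbor_cosine(vocab_size, dim, chunk=1024, seed=0):
    torch.manual_seed(seed)
    emb = torch.nn.functional.normalize(torch.randn(vocab_size, dim), dim=-1)
    best = torch.empty(vocab_size)
    for s in range(0, vocab_size, chunk):
        sims = emb[s:s + chunk] @ emb.T                  # (chunk, vocab_size)
        rows = torch.arange(sims.size(0))
        sims[rows, rows + s] = -1.0                      # ignore self-similarity
        best[s:s + chunk] = sims.max(dim=-1).values
    return best.mean().item()

for vocab_size, dim in [(821, 16), (30522, 16), (30522, 128)]:
    print(vocab_size, dim, round(mean_nearest_neighbor_cosine(vocab_size, dim), 3))
```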

Hope it helps!

jwkirchenbauer commented 2 years ago

Thanks for the reply!

To clarify, are you suggesting removing the --vocab_size 821/11043 argument from the README commands? Does that keep the vocab larger? (I trained the two models for e2e and ROC with the commands in the README as given.) Also, since you used spaCy, you have whitespace tokenization (relevant to a later comment).

Cool. So I'm clearly not a reviewer, otherwise I wouldn't be chatting with you directly, but if I were, a core question/request I'd have would be a better understanding of the relationship between the diffusion random variable's dimensionality, say 16, and the transformer hidden-state dimensionality, say 128. It's not obvious to me what choices you'd need to make about the expressiveness of the diffusion space versus the expressiveness of the $P_{\theta}$ predictor.

Your Figure 4 does give an ablation of learned versus fixed embeddings, as well as the choice of predicting $x_0$ or $\epsilon$, but not of the up-projection size. Intuitively, since you're using a randomly initialized transformer rather than pretrained blocks, you're free to try to use ("get away with") a more compact transformer that is cheaper to evaluate. But if it turns out you need the strong upscaling to more than 2x the diffusion vector size (and a full BERT-scale stack of encoder blocks), that's important to understand.
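
To make the question concrete, the kind of wrapper I have in mind looks like this (a hypothetical sketch, not your actual model class; the 16/128 sizes are just the numbers from this discussion, and the timestep conditioning is omitted):

```python
import torch
import torch.nn as nn

# Hypothetical denoiser sketch (not the repo's model): the diffusion variable
# lives in `latent_dim`, while the transformer that predicts x_0 can use a
# different, larger width `hidden_dim`. Timestep conditioning is omitted.
class DenoiserSketch(nn.Module):
    def __init__(self, latent_dim=16, hidden_dim=128, n_layers=6, n_heads=8):
        super().__init__()
        self.up = nn.Linear(latent_dim, hidden_dim)      # the up-projection in question
        layer = nn.TransformerEncoderLayer(hidden_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.down = nn.Linear(hidden_dim, latent_dim)    # back to the diffusion space

    def forward(self, x_t):                              # x_t: (batch, seq_len, latent_dim)
        return self.down(self.encoder(self.up(x_t)))     # predicted x_0

model = DenoiserSketch()
x_t = torch.randn(2, 64, 16)
print(model(x_t).shape)                                  # torch.Size([2, 64, 16])
```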

Based on the dynamics of your ROC Stories experiment (I decoded checkpoints at 10k-step increments), it takes a while for the larger-vocabulary/dimensionality model to move away from repeating the same words and structures and start generating variety, grammar, and appropriate padding. I'm curious whether your Diffusion-LM method scales beyond these vocabularies to full-scale BPE/subword tokenization schemes, language-modeling dataset sizes, less "homogeneous" sentence structures than the restaurants and ROC data, and longer sequence lengths.

Inspired to maybe look into some of these myself 🙂

XiangLi1999 commented 2 years ago

re1: No, I am not saying you should remove --vocab_size xxx. If you remove it, you get a runtime error about a dimension-size mismatch. You would probably need to go into the code and change the tokenizer setup (i.e., the UNK thresholding) to adjust the vocab size.
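
For reference, the kind of setup I mean is roughly this (a simplified sketch, not the repo's exact code): the vocab comes from counting whitespace/spaCy tokens and collapsing rare words to UNK, so the frequency threshold is what ends up determining the --vocab_size.

```python
from collections import Counter

# Simplified sketch of a frequency-thresholded vocab (not the repo's exact code):
# tokens below `min_count` collapse to UNK, so this threshold is what ends up
# determining the --vocab_size you pass on the command line.
def build_vocab(sentences, min_count=2):
    counts = Counter(tok for sent in sentences for tok in sent.split())
    vocab = {"START": 0, "END": 1, "UNK": 2, "PAD": 3}   # assumed special tokens
    for word, freq in counts.items():
        if freq >= min_count:
            vocab[word] = len(vocab)
    return vocab

sents = ["the pub serves cheap food", "the restaurant serves expensive food"]
print(len(build_vocab(sents, min_count=2)))  # 7: four specials + {the, serves, food}
```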

re2: I didn't try adjusting the dimension of the Transformer block. You are right, one could try adjusting the output dimension of the Transformer model, and this could be an interesting ablation study. All my experiments use the same Transformer architecture, so I don't have much intuition about whether the strong upscaling to >2x the diffusion vector size is needed.

jwkirchenbauer commented 2 years ago
  1. 👍 yup that's what I thought

  2. Yea of course, there's always a long tail of things to try no worries

jwkirchenbauer commented 2 years ago

Hi again,

I was wondering whether you could provide any more insight into the parameter settings you used when you

> tried some preliminary experiments using the full BERT vocabulary, and it also seems to generate fluent text.

I'm working on a slight reimplementation of your setup, and what doesn't seem to work is a standard BERT tokenizer (30k vocab) and embedding/hidden dimension for the model. Basically, a standard HF model configured like so:

"hidden_size": 768, # model and embedding dim
"intermediate_size": 3072,
"max_position_embeddings": 512, # max seq len
"num_attention_heads": 12,
"num_hidden_layers": 12,

Without your up-projection and down-projection layers, and at the standard BERT max sequence length, this means the diffusion is occurring in a (512, 768)-dimensional space, i.e. this is the shape of the "image" being passed to whatever diffusion utility library you use to implement the noising and schedules.
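
In other words, the forward process I'm running looks roughly like this (a generic Gaussian diffusion sketch just for the shapes; I know your paper uses a sqrt schedule rather than the linear one here):

```python
import torch

# Generic Gaussian forward (noising) step, only to show the shapes involved;
# the schedule here is a plain linear beta schedule, not the paper's sqrt schedule.
batch, seq_len, dim, T = 4, 512, 768, 2000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)        # (T,)

x0 = torch.randn(batch, seq_len, dim)                # stands in for embedded text
t = torch.randint(0, T, (batch,))
noise = torch.randn_like(x0)
a = alpha_bar[t].view(batch, 1, 1)                   # broadcast over (seq_len, dim)
x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise       # the (512, 768) "image" being noised
print(x_t.shape)                                     # torch.Size([4, 512, 768])
```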

Did you ever experiment on this scale?

Also, the datasets you used have a somewhat more repetitive structure than standard LM data like WikiText or C4. Did you ever attempt this with data that has more structural diversity?

XiangLi1999 commented 2 years ago

Hi, I have tried using the BERT tokenizer (30k vocab) but with a smaller embedding dimension of 128, and it seems to be working. So I would recommend trying that first, before scaling to dim=768.
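
Concretely, the combination I tried looks roughly like this (assumed sizes, not the exact run):

```python
import torch.nn as nn

# Sketch of the recommended combination (assumed sizes, not the exact run):
# full BERT vocab, but a 128-d embedding/diffusion space, projected into and
# out of whatever transformer width you keep (e.g. 768).
bert_vocab, latent_dim, hidden_dim = 30522, 128, 768

word_emb = nn.Embedding(bert_vocab, latent_dim)   # ~3.9M params, vs ~23.4M at dim 768
up_proj = nn.Linear(latent_dim, hidden_dim)
down_proj = nn.Linear(hidden_dim, latent_dim)
```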