Hi @se4u,
Thanks for your interest in our repo! Regarding your questions:
Basically, given a starting word (x0=)x1, you first generate its equilibrium x1*=x2. Then, with x0=x1 as padding and x1*=x2 as input, you generate x2*=x3. After that, (x0,x1)=(x1,x2) will be the padding, x2*=x3 will be the input, and you shall generate x3*=x4, etc.
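Roughly, the loop looks like this in code (an illustrative sketch, not the exact code in this repo; `f_theta`, `readout` and `embed` stand in for the actual model components, and the plain fixed-point iteration stands in for the Broyden solver):

```python
import torch

@torch.no_grad()
def generate(f_theta, readout, embed, first_token, n_steps, n_iters=50):
    """Token-by-token generation: at each step, solve the equilibrium of the
    newest position only, re-using earlier equilibria as fixed history padding.
    (Illustrative sketch; the callables are assumed interfaces, not the repo's API.)
    """
    tokens = [first_token]
    z_hist = None                                   # equilibria found so far, held fixed
    for _ in range(n_steps):
        x_hist = embed(torch.tensor(tokens))        # (t, d): embeddings of history + current word
        z_new = torch.zeros_like(x_hist[-1:])       # (1, d): initial guess for the new position
        for _ in range(n_iters):                    # plain fixed-point iteration as a stand-in
            z_new = f_theta(z_new, z_hist, x_hist)  # solve only for the newest word
        z_hist = z_new if z_hist is None else torch.cat([z_hist, z_new], dim=0)
        tokens.append(readout(z_new).argmax(dim=-1).item())
    return tokens
```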
For DEQ, as the output is an equilibrium point, you can keep whatever has been generated so far as the "history padding". You thus only need to solve for the equilibrium of one word, not 4 words.
Let me know if these answered your questions :-)
My apologies, but I am still a little confused. Consider the equations for the trellis network: they suggest that at time t we have to recompute the equilibrium for all of the hidden states up to time t.
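In symbols, my reading of the fixed-point condition (I'm paraphrasing the paper's notation here, so the exact form may be slightly off) is:

$$
z^{*}_{1:t} \;=\; f_{\theta}\!\left(z^{*}_{1:t};\; x_{1:t}\right) \quad \text{for every } t,
$$

so the entire prefix z₁*, ..., z_{t-1}* appears inside the system of equations that defines the equilibrium at time t.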
So at time t we would need to recompute the hidden state sequence up to time t-1. Is this correct? Or did you mean that at time t=2 you will re-use the z₁* generated at time t=1? That seems wrong to me, because the z₁* generated at time t=1 need not be the same as the z₁* generated at time t=2; the two equilibrium sequences may be arbitrarily different.
Thank you again for taking the time to answer these questions.
No, that is incorrect. In sentence generation, at t=2 you only need to compute z2, while holding z1 fixed. You shouldn't need to find the equilibrium point of t=1 again.
The way to think about this is to analyze it in the Transformer's QKV form: only t=2 appears in the query (Q); z1* will be appended directly to the key/value (K/V) part.
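Concretely, in the Q/K/V picture (a rough sketch, not the repo's actual code; the module names here are mine):

```python
import torch

d_model = 8
W_q = torch.nn.Linear(d_model, d_model, bias=False)
W_k = torch.nn.Linear(d_model, d_model, bias=False)
W_v = torch.nn.Linear(d_model, d_model, bias=False)

def attend(z_new, z_history):
    """One attention read while solving for the newest position only.

    z_new:     (1, d_model)   the position currently being solved for
    z_history: (t-1, d_model) previously found equilibria z1*, ..., held fixed
    """
    q = W_q(z_new)                                  # query: only the new position
    kv_src = torch.cat([z_history, z_new], dim=0)   # keys/values: fixed history + new position
    k, v = W_k(kv_src), W_v(kv_src)
    attn = torch.softmax(q @ k.T / d_model ** 0.5, dim=-1)
    return attn @ v                                 # (1, d_model) update for the new position
```

Only `z_new` changes across solver iterations; the history rows of K/V can be cached.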
> No, that is incorrect. In sentence generation, at t=2 you only need to compute z2, while holding z1 fixed. You shouldn't need to find the equilibrium point of t=1 again.

Got it 👍 Thanks a lot for clarifying this point.
However, I just wanted to note -- and I think you'll agree -- that the equilibrium state found this way will be different from the equilibrium state found by the procedure I outlined above. That is, we could have a different procedure, say DEQ', which takes a sequence of tokens and produces a same-size sequence of hidden states in one shot.
It's hard to say; what I observed is that as long as the model is causal (e.g., a decoder Transformer or a temporal ConvNet), the equilibrium states obtained at the sequence level and at the (one-by-one) token level are usually the same. It's hard to show that the equilibrium is unique, but empirically this is usually what I observe. It may be easier to see this if we consider DEQ not as a Broyden unrolling but as an infinite-depth network.
> as long as the model is causal
Ah, got it. I think this was the key part: if the whole model is causal, then z_i will never use information from x_{j > i}.
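To convince myself of this, here is a small numerical check I put together (a toy causal update, not the actual TrellisNet cell): with a causal, contractive f, solving the equilibrium for the whole sequence at once and solving it token by token (holding earlier equilibria fixed) give the same answer.

```python
import torch

torch.manual_seed(0)
seq_len, d = 5, 4
x = torch.randn(seq_len, d)

# Causal mixing: position t only reads positions <= t (lower-triangular W),
# scaled so the update is a contraction and the fixed point is unique.
W = torch.tril(torch.randn(seq_len, seq_len))
W = 0.5 * W / torch.linalg.matrix_norm(W, ord=2)

def f(z):
    # One causal update step: row t of W @ z uses only z[:t+1] and x[t].
    return torch.tanh(W @ z + x)

# (1) Sequence-level equilibrium: iterate on all positions at once.
z_seq = torch.zeros(seq_len, d)
for _ in range(200):
    z_seq = f(z_seq)

# (2) Token-level equilibrium: solve one position at a time,
#     holding the previously solved positions fixed.
z_tok = torch.zeros(seq_len, d)
for t in range(seq_len):
    for _ in range(200):
        z_tok[t] = f(z_tok)[t]   # only position t is free; by causality, positions > t never matter

print((z_seq - z_tok).abs().max())  # agrees up to solver tolerance
```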
Dear Shaojie, I am trying to understand the time complexity of generating a sentence from the DEQ model and the complexity of finding the perplexity of a sentence.
The DEQ model takes a sequence of observations and generates a sequence of latent states of equal length. If I want to generate a sequence of length T, then I will need to first generate a sequence of length 1, then length 2, and so on up to length T. So the generation time will be quadratic in the length of the sentence. Is this correct?
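(Spelling out the arithmetic behind my "quadratic" guess, assuming each step's equilibrium solve costs time proportional to the current prefix length t:)

$$
\sum_{t=1}^{T} O(t) \;=\; O\!\left(\frac{T(T+1)}{2}\right) \;=\; O(T^{2}).
$$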
During training, we can generate a complete sequence of latent states for the full input sequence in one shot. But then the ith hidden state z_i can depend on x_{i+1}. Does that mean that even at training time we need to split a sentence into all of its prefixes? Also, what is the training objective? Is it the log-likelihood of predicting x_{i+1} given z_i through a linear+softmax layer? The paper does not go into these details, and I couldn't easily figure it out from the code.
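For concreteness, the objective I have in mind (purely my guess at the setup, not something I found in the code) would be the usual next-word cross-entropy on top of the equilibrium states:

```python
import torch.nn.functional as F

def lm_loss(z, x, proj):
    """Hypothetical next-word objective: z[i] predicts x[i+1] via a linear
    layer + softmax. (My guess, not necessarily what this repo implements.)

    z:    (T, d) equilibrium hidden states for the full sequence
    x:    (T,)   token ids of the same sequence
    proj: a torch.nn.Linear(d, vocab_size) readout
    """
    logits = proj(z[:-1])                  # position i predicts token i+1
    return F.cross_entropy(logits, x[1:])  # mean negative log-likelihood
```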