Infinite sequence length inference is currently supported by stepping through the sequence one token at a time; however, I don't see any support for training this way. The parallel scan applied per batch has no mechanism to save the final hidden state and feed it into the scan for the following batch. Are there any plans to support training Mamba on context lengths greater than 4096, or whatever the memory-imposed length cap is?
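For context, here's roughly the training loop I'd like to be able to write: a minimal truncated-BPTT-style sketch that scans a long sequence in chunks and carries the SSM hidden state across chunk boundaries. The `initial_state` argument and returned `final_state` are hypothetical, not part of the current API, which is exactly the gap I'm asking about:

```python
import torch
import torch.nn as nn

def train_long_sequence(model, optimizer, tokens, chunk_len=4096):
    """Train on a sequence longer than the memory cap by scanning it
    in fixed-size chunks and carrying the hidden state between them."""
    state = None  # SSM hidden state carried across chunk boundaries
    n_chunks = (tokens.size(1) - 1) // chunk_len
    for i in range(n_chunks):
        start = i * chunk_len
        x = tokens[:, start : start + chunk_len]
        y = tokens[:, start + 1 : start + chunk_len + 1]  # next-token targets

        # Hypothetical API: run the parallel scan starting from `state`
        # and also return the final hidden state of this chunk.
        logits, final_state = model(x, initial_state=state)

        loss = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), y.reshape(-1)
        )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Detach so gradients don't flow across the chunk boundary
        # (truncated backpropagation through time).
        state = final_state.detach()
```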