Open lorenzoh opened 2 years ago
No issues here with the proposed API.
Typically, in FastAI, we have a "batch of images" or a "batch of tabular entries." Similarly, here we have a "batch of sequences." Ultimately, the model will want a "sequence of batches" though, so this transformation needs to happen somewhere. After this transformation, it becomes very hard to access each sample individually, so it must only happen at the end. Even if we do this as a final encoding step, there's the question of how FastAI understands the encoded block. With other data, you can view the individual encoded samples or encoded batch. What will the view look like here?
Can you explain a bit more what you mean by "sequence of batches" so I can wrap my head around it?
> Can you explain a bit more what you mean by "sequence of batches" so I can wrap my head around it?
Yeah, even I didn't get it. Batches don't have to be in a "sequence" to be fed into the model, but a batch should have sequences.
Flux RNNs expect an input format of `(features x batch) x sequence_length`, but the data loader will generate `(features x sequence_length) x batch` by default. Ideally that transposition happens as late as possible, but it does need to happen at some point.
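A minimal sketch of the two layouts with plain Julia arrays (toy dimensions, no Flux required):

```julia
# Hypothetical toy dimensions: 2 features, sequence length 3, batch of 4.
features, seqlen, batchsize = 2, 3, 4

# What the data loader yields: a batch of sequences, i.e. for each sample
# a vector of per-time-step feature vectors.
samples = [[rand(Float32, features) for _ in 1:seqlen] for _ in 1:batchsize]

# What the RNN wants: a sequence of batches, i.e. a length-`seqlen` vector
# whose element t is the (features x batchsize) matrix of all samples at step t.
xbatches = [reduce(hcat, [s[t] for s in samples]) for t in 1:seqlen]

size(xbatches[1])  # (2, 4): features x batch for a single time step
length(xbatches)   # 3: one entry per time step
```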
> Batches don't have to be in a "sequence" to be fed into the model but a batch should have sequences.
Quite the opposite for Flux as Brian pointed out. Let me add to this in case there is uncertainty about how recurrence is handled in Flux.
If you have a recurrent model `m` (i.e. a cell wrapped in `Flux.Recur`) that accepts a vector of features, `x`, then `m(x)` will evaluate a single time step and update the internal state of `m`. Suppose a single sample is a sequence of features, `xs`; then we evaluate the full sequence as `[m(x) for x in xs]`.
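To make the recurrence pattern concrete, here is a dependency-free toy version of the `Recur` idea. The wrapper and the accumulator cell are stand-ins for illustration, not Flux's actual implementation:

```julia
# Toy stand-in for Flux.Recur: a cell plus mutable hidden state.
mutable struct ToyRecur{F,S}
    cell::F      # (state, x) -> (newstate, y)
    state::S
end

# Calling the wrapper evaluates one time step and updates the state.
function (m::ToyRecur)(x)
    m.state, y = m.cell(m.state, x)
    return y
end

# A trivial cell: the output and new state are the running sum of inputs.
cell(h, x) = (h .+ x, h .+ x)

m = ToyRecur(cell, zeros(3))
xs = [ones(3) for _ in 1:4]   # a single sample: a sequence of feature vectors
ys = [m(x) for x in xs]       # evaluate the full sequence, step by step
# ys[end] == [4.0, 4.0, 4.0]
```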
Batching serves many purposes in ML, but one of them is achieving higher utilization for hardware that supports parallelism. So, in the framework described above, we want `m(xbatch)` to evaluate `m` at a given time step for multiple samples concurrently. This means that `xbatch` should have dimensions `(features x batch)` to hit BLAS etc. Since `xbatch` is only a single time step, representing a sequence requires a vector where each element is a single time step like `xbatch`. This vector, `xbatches`, is evaluated as `[m(xbatch) for xbatch in xbatches]`, making `xbatches` have dimensions `(features x batch) x sequence_length`.
The relevant detail for this issue is that once you have the data in this format, accessing a single sample becomes cumbersome. You have to iterate over `xbatches` to access each time step, slice the batch of features to access the correct column, then merge the results together into a single sequence. That's why this operation can only happen at the end: if it is done too early, all the encodings that require random access to samples will be cumbersome and slow. This also means that the transformation should happen to a batch, because applying `MLUtils.batchseq` to the entire dataset is necessarily "too early."
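For illustration, here is what recovering a single sample from the sequence-of-batches layout looks like (toy numbers, not FastAI.jl code):

```julia
# Two time steps; each entry is a (features x batch) matrix: 2 features, batch of 2.
xbatches = [Float32[1 2; 3 4], Float32[5 6; 7 8]]

# To get sample i back, every time step must be visited and column-sliced,
# then the slices reassembled into one sequence.
i = 2
sample_i = [xb[:, i] for xb in xbatches]
# sample_i == [Float32[2, 4], Float32[6, 8]]
```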
Hm, I see the issue and how this doesn't solve it. Of course, putting the `batchseq` into the model is not desirable either. Instead of introducing a lot of new APIs to make this possible, it may be doable to stick with the simple `encode` and instead introduce a `Batch <: WrapperBlock` that has the default implementation above. The encoding that does the padding could then have a custom method for `encode` that takes in a `Batch` block and performs the `batchseq` operation, returning data for a `SequenceBatch <: WrapperBlock` block. This way we wouldn't have to introduce any new APIs while unifying observation- and batch-level transformations and not breaking existing `encode` implementations. What do you think?
Yeah, I like this approach better because of the unification. It addresses the concerns about tying `batchseq` into the data block visualization. Now it should be clear to the user that the encoded data is stored as a "sequence of batches."
Yeah, I think the approach Lorenz suggested should be "the way" to achieve this batch-wise encoding.
But, where do we encode this? Will this be a part of the initial transformations? Or just before passing the data to the model?
Adding this kind of first-class support for batches will entail a lot of changes to FastAI.jl internals, e.g. applying `encode` to batches and not individual samples, but should ultimately reduce the amount of code. We could then make it an encoding that transforms a `Batch{NumberVector}` into something like a `SequenceBatch{NumberVector}`.
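A rough sketch of how those blocks might be declared. The `WrapperBlock` supertype and field names here are assumptions for illustration; FastAI.jl's actual block API may differ:

```julia
abstract type WrapperBlock end  # stand-in for FastAI.jl's wrapper block type

# A block representing a whole batch of samples of the wrapped block.
struct Batch{B} <: WrapperBlock
    block::B
end

# A block whose data is already in "sequence of batches" layout.
struct SequenceBatch{B} <: WrapperBlock
    block::B
end

# A padding encoding could then specialize `encode` on `Batch`, apply
# `MLUtils.batchseq`, and return data for a `SequenceBatch` block.
```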
Until we find time to implement those changes, though, I would continue with the current method of doing the sequencing.
Sometimes encodings need to be able to take into account batch information, as in a sequence learning task where samples in a batch should be padded to the length of the longest sequence.
Currently, all `Encoding`s transform individual samples, which is great for simplicity and composability, but doesn't allow implementing these batch-level transformations.

A usage of encodings in basically every training loop is `taskdataloaders`, which will always give batches of encoded data. We could have this use a new function `encodebatch(encoding, context, block, samples)` that transforms multiple samples at a time. This would operate on vectors of samples, not a collated batch, since not all kinds of data can be collated (e.g. differently sized images).

By default, it would simply delegate to the single-sample `encode` function, but it could be overwritten by individual encodings.
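A sketch of what the proposed fallback might look like. The `encodebatch` signature is taken from the proposal above; the `encode` method here is a dummy standing in for an encoding's real single-sample method:

```julia
# Dummy single-sample encode, standing in for an Encoding's real method.
encode(encoding, context, block, sample) = 2 .* sample

# Proposed default: delegate batch encoding to the single-sample encode.
encodebatch(encoding, context, block, samples) =
    [encode(encoding, context, block, s) for s in samples]

encodebatch(nothing, nothing, nothing, [[1, 2], [3]])
# → [[2, 4], [6]]

# An individual encoding would overwrite this; e.g. a padding encoding could
# call MLUtils.batchseq on `samples` to pad all sequences to a common length.
```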
Tagging relevant parties @Chandu-4444 @darsnack @ToucheSir for discussion.