ziplab / LITv2

[NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "Fast Vision Transformers with HiLo Attention"
Apache License 2.0

How to use videos as input for HiLo? #17

Open q958287831 opened 4 months ago

q958287831 commented 4 months ago

Dear author, I would like to input data with dimensions [2, 64, 300, 64, 64] into HiLo, which represent the batch size, number of channels, video sequence length, height, and width, respectively. But HiLo accepts data of shape [batch_size, num_tokens, hidden_dimension], and I don't know how to convert my data into a suitable shape. I am a beginner in the field of artificial intelligence, and your answer is crucial to me!

HubHop commented 3 months ago

Hi @q958287831, thank you for your interest! A video clip can be thought of as a list of frames. Initially, for HiLo, we have an input tensor in the shape of [B, N, D], where:

- B is the batch size,
- N is the number of tokens (patches) per frame,
- D is the hidden dimension.

With a video clip, you will have one additional dimension:

- T, the number of frames in the clip.

So your video data might be represented as a tensor of shape [B, T, N, D]. Next, we can reshape this data into [B * T, N, D], which effectively folds all frames of all video clips into the batch dimension. This allows HiLo to process each frame independently; see the sketch below.
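As a minimal PyTorch sketch of this conversion, starting from your [2, 64, 300, 64, 64] tensor: the patch size, the embedding dimension, and the `patch_embed` layer here are illustrative assumptions rather than anything prescribed by the paper, and the commented-out call assumes the `HiLo` module from this repo's `hilo.py`, whose forward pass also takes the patch-grid height and width.

```python
import torch
import torch.nn as nn

B, C, T, H, W = 2, 64, 300, 64, 64
video = torch.randn(B, C, T, H, W)

# Fold the temporal axis into the batch: [B, C, T, H, W] -> [B*T, C, H, W]
frames = video.permute(0, 2, 1, 3, 4).reshape(B * T, C, H, W)

# Tokenize each frame with a patch embedding (illustrative choices below):
patch_size, D = 8, 384
patch_embed = nn.Conv2d(C, D, kernel_size=patch_size, stride=patch_size)
tokens = patch_embed(frames)                # [B*T, D, H/8, W/8]
Hp, Wp = tokens.shape[-2], tokens.shape[-1]
tokens = tokens.flatten(2).transpose(1, 2)  # [B*T, N, D] with N = Hp * Wp

# tokens now matches HiLo's expected [batch, num_tokens, hidden_dim] layout:
# from hilo import HiLo          # assuming the repo's hilo.py
# attn = HiLo(dim=D, num_heads=8)
# out = attn(tokens, Hp, Wp)     # HiLo also takes the feature-map size

# If you need the temporal axis back afterwards:
# out = out.reshape(B, T, Hp * Wp, D)
```

Each of the B * T = 600 frames is treated as an independent image in the batch, which is why no changes to HiLo itself are needed.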

However, note that with this reshape, HiLo does not model dependencies along the temporal dimension; handling them could be a promising extension in future work.