fani-lab / LADy

LADy 💃: A Benchmark Toolkit for Latent Aspect Detection Enriched with Backtranslation Augmentation

OCTIS.CTM throws a value error during the training phase #67

Open farinamhz opened 6 months ago

farinamhz commented 6 months ago

I tried to train the CTM baseline using the OCTIS library on our Twitter dataset but ran into a value error. With other benchmark datasets, however, training ran smoothly.

The error message was: `ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 25])`. It originates from the `_verify_batch_size` function in the torch library and occurs when the number of samples is not evenly divisible by the batch size, leaving a final batch of size 1.
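The remainder-batch situation can be illustrated with a small sketch (the sample count 101 and batch size 25 are hypothetical, chosen only to produce a remainder of 1):

```python
def batch_sizes(n_samples, batch_size, drop_last=False):
    """Return the sizes of the batches a dataloader would yield."""
    sizes = [batch_size] * (n_samples // batch_size)
    remainder = n_samples % batch_size
    if remainder and not drop_last:
        sizes.append(remainder)  # the stray size-1 batch that breaks BatchNorm
    return sizes

print(batch_sizes(101, 25))                  # [25, 25, 25, 25, 1]
print(batch_sizes(101, 25, drop_last=True))  # [25, 25, 25, 25]
```

The trailing batch of size 1 is what BatchNorm rejects in training mode, since it cannot compute per-channel statistics from a single sample.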

Upon investigation, I compared the OCTIS code for the CTM model with the official CTM code and noticed that the latter passes `drop_last=True` to the dataloader to avoid this issue, but this option is missing from the CTM code in the OCTIS library. We therefore need to update our OCTIS fork accordingly.

farinamhz commented 6 months ago

Update: the same issue occurs with NeuralLDA from OCTIS.

farinamhz commented 6 months ago

We need to pass `drop_last=True` to the dataloader in both OCTIS.NeuralLDA (`avitm_model.py`) and OCTIS.CTM (`ctm.py`) to resolve the batch-size issue.
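For reference, this is how the fix behaves on a torch `DataLoader` (a minimal sketch with a hypothetical 101-sample dataset, not the actual OCTIS code):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 101 samples with 25 features: 101 % 25 == 1, so without drop_last
# the final batch has size 1 and BatchNorm fails during training.
dataset = TensorDataset(torch.randn(101, 25))

loader = DataLoader(dataset, batch_size=25, drop_last=True)
sizes = [batch[0].shape[0] for batch in loader]
print(sizes)  # [25, 25, 25, 25] -- the size-1 remainder batch is dropped
```

Dropping the incomplete last batch loses at most `batch_size - 1` samples per epoch, which is the trade-off the official CTM code already accepts.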

farinamhz commented 6 months ago

The task is completed! (https://github.com/fani-lab/OCTIS/commit/65738092d512baa03725de04874ebba9e376c88d and https://github.com/fani-lab/OCTIS/commit/39d8b5bed5720bbb752dc045fab0ccc318893edc) We can now close this issue. @hosseinfani