decile-team / cords

Reduce end-to-end training time from days to hours (or hours to minutes), and energy requirements/costs by an order of magnitude using coresets and data selection.
https://cords.readthedocs.io/en/latest/
MIT License

[Bug] All weights have the same value when running the examples. #80

Closed: HaoKang-Timmy closed this issue 1 year ago

HaoKang-Timmy commented 1 year ago

Hi, I tested the supervised learning example with the GLISTER strategy: https://github.com/decile-team/cords/blob/main/examples/SL/image_classification/python_notebooks/CORDS_SL_CIFAR10_Custom_Train.ipynb But when I print the weights from the train loader, they are all 1.0. I expected that the GLISTER strategy would produce different weights.

```
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
        1., 1.], device='cuda:0')
```
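
For context, here is a minimal sketch of how I printed the weights. It assumes `dataloader` is the GLISTER-based CORDS dataloader built in the linked notebook and that it yields `(inputs, targets, weights)` batches; the variable names are mine, not copied from the notebook:

```python
# Sketch only: `dataloader` stands in for the GLISTER-based CORDS dataloader
# constructed in the linked notebook, assumed to yield (inputs, targets, weights).
for inputs, targets, weights in dataloader:
    print(weights)  # every batch prints tensor([1., 1., ..., 1.])
    break
```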

Is this a bug, or is it expected behavior? Thanks.

HaoKang-Timmy commented 1 year ago

Could you please have a look at this? @krishnatejakk, @noilreed

krishnatejakk commented 1 year ago

@HaoKang-Timmy GLISTER is not a weighted subset selection problem. Hence, it always returns equal weights. We include weights with all strategies to make the training framework consistent.
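
Concretely, the per-sample weights yielded by the dataloader are meant to be folded into the loss; with GLISTER's unit weights this reduces to ordinary mean cross-entropy over the selected subset, whereas weighted strategies (e.g. CRAIG, GradMatch) would return non-uniform weights. A minimal sketch of such a weighted step, illustrative only and not necessarily the exact loop in the CORDS training pipeline:

```python
import torch.nn.functional as F

def weighted_train_step(model, optimizer, inputs, targets, weights):
    # One step with per-sample weights as yielded by the CORDS dataloaders.
    # With GLISTER, all weights are 1.0, so the weighted mean below equals
    # plain mean cross-entropy over the selected subset.
    optimizer.zero_grad()
    outputs = model(inputs)
    per_sample = F.cross_entropy(outputs, targets, reduction="none")
    loss = (per_sample * weights).sum() / weights.sum()
    loss.backward()
    optimizer.step()
    return loss.item()
```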

HaoKang-Timmy commented 1 year ago

Thank you very much. It seems that I was wrong.
