fastai / course22p2

course.fast.ai 2022 part 2
https://course.fast.ai/Lessons/part2.html
Apache License 2.0

MetricsCB assumes n_inp=1 #15

Open johnrobinsn opened 1 year ago

johnrobinsn commented 1 year ago

I'm playing around with using miniai on different types of tasks. Currently it's a siamese network (same/not same) with two input images, and I'll probably try object detection next, which will require multiple y values.

I'm using TrainCB, which allows you to specify n_inp=2, but there is no support for n_inp > 1 in MetricsCB. I'd be happy to do a pull request, but I wanted to bounce a few ideas around first.

Option 1) Just push n_inp into the Learner, and MetricsCB could be modified to honor it.

Option 2) Add get_x and get_y methods to Learner; TrainCB could patch those (in before_fit) for n_inp > 1, and MetricsCB would just do *learn.get_y() to evaluate the loss.

Option 1 is probably a reasonable quick workaround for now; option 2 means a few more changes to the core Learner, but is probably better? A rough sketch of option 1 is below.
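To make option 1 concrete, here is a minimal sketch. It assumes miniai's Callback/MetricsCB API as it currently stands (learn.batch, learn.preds, learn.loss, to_cpu, and MetricsCB keeping its metrics in self.metrics and its running loss in self.loss), and it assumes learn.n_inp gets set somewhere (e.g. by the Learner or by TrainCB in before_fit). The subclass name and the import path are illustrative, not existing code:

```python
from miniai.learner import MetricsCB, to_cpu  # import path assumed

class MultiInpMetricsCB(MetricsCB):  # hypothetical subclass, not part of miniai
    def after_batch(self, learn):
        n_inp = getattr(learn, 'n_inp', 1)      # fall back to today's single-input behaviour
        batch = to_cpu(learn.batch)
        xs, ys = batch[:n_inp], batch[n_inp:]   # everything after the inputs is a target
        for m in self.metrics.values():
            m.update(to_cpu(learn.preds), *ys)  # pass all targets through to each metric
        self.loss.update(to_cpu(learn.loss), weight=len(xs[0]))
```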

Thoughts? Thanks John

johnrobinsn commented 1 year ago

Also let me know if there is a better spot to discuss topics like this...

PiotrCzapla commented 1 year ago

Pinging Jeremy on Discord probably makes sense. I like the idea of putting n_inp on learn next to the batch, as it can be used in other places as well; my mixup implementation currently assumes the batch has only 2 elements.
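For illustration, a batch-transform callback (the kind of thing a mixup implementation is) could use learn.n_inp the same way to find which batch elements are inputs. This is a rough sketch under the assumption that learn.n_inp exists; the callback is hypothetical and only shows the input-mixing half of mixup, leaving the target/loss mixing aside:

```python
import torch
from miniai.learner import Callback  # import path assumed

class MixInputsCB(Callback):  # hypothetical; only the input-mixing half of mixup
    def __init__(self, alpha=0.4):
        self.distrib = torch.distributions.Beta(alpha, alpha)
    def before_batch(self, learn):
        n_inp = getattr(learn, 'n_inp', 1)               # would come from the Learner, per the idea above
        xs, ys = learn.batch[:n_inp], learn.batch[n_inp:]
        lam = self.distrib.sample()
        perm = torch.randperm(xs[0].shape[0])            # shuffled indices within the batch
        xs = [lam * x + (1 - lam) * x[perm] for x in xs] # mix every input tensor with a shuffled copy
        learn.batch = (*xs, *ys)                         # targets untouched; loss mixing omitted
```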

sky1ove commented 2 months ago

I agree. Here's my solution: change it to `*x, y = to_cpu(learn.batch)` and `self.loss.update(to_cpu(learn.loss), weight=len(y))`.
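In context, that change would make MetricsCB.after_batch look roughly like this. The surrounding lines follow the current miniai implementation as I understand it, so treat them as an approximation rather than a verbatim quote:

```python
def after_batch(self, learn):
    *x, y = to_cpu(learn.batch)                  # all leading elements are inputs, the last is the target
    for m in self.metrics.values():
        m.update(to_cpu(learn.preds), y)
    self.loss.update(to_cpu(learn.loss), weight=len(y))  # weight by the target's batch size
```

Note that this assumes a single target as the last batch element, so it covers the siamese case (n_inp=2, one y) but not the multi-target case mentioned above for object detection.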