Closed: Zixiu99 closed this issue 1 week ago.
Additionally, is there a way to modify the code so that the x-axis of the TensorBoard graphs represents epochs instead of batches?
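For what it's worth, one common trick is to aggregate the per-batch losses and log a single scalar per epoch, passing the epoch index as `global_step`. A minimal sketch (the `writer`, `train_step`, and `dataloader` names are hypothetical; the TensorBoard call is shown as a comment so the sketch stays self-contained):

```python
# Sketch: collapse per-batch losses into one scalar per epoch, so the
# TensorBoard x-axis counts epochs rather than batches.
def epoch_average(batch_losses):
    return sum(batch_losses) / len(batch_losses)

# Hypothetical usage inside the training loop, with writer being a
# torch.utils.tensorboard.SummaryWriter:
# for epoch in range(num_epochs):
#     batch_losses = [train_step(batch) for batch in dataloader]
#     writer.add_scalar('train/loss', epoch_average(batch_losses),
#                       global_step=epoch)  # x-axis = epoch index
print(epoch_average([2.0, 4.0]))  # prints 3.0
```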
This issue is stale because it has been open for 30 days with no activity.
I would also like to ask if there's a simple solution to this.
train_utils.py/train_one_epoch() returns all of that. I'm new to object detection networks, so I'm not sure whether the loss is utilised the same way here as elsewhere. I want to use the validation loss for hyperparameter tuning, and both the training and validation losses to check for overfitting. Is it more common to use the accuracy metrics PCDet provides for this instead?
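Using the validation loss to check for overfitting usually boils down to watching whether it stops improving while the training loss keeps falling. A minimal, framework-agnostic sketch of such a check (the `patience` rule is illustrative, not anything PCDet provides):

```python
# Sketch of a simple early-stopping / overfitting check based on the
# per-epoch validation losses (patience value is an arbitrary example).
def should_stop(val_losses, patience=3):
    """Stop when the validation loss has not improved for `patience` epochs."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_before

# Validation loss bottomed out at 0.8 and then rose for 3 epochs:
print(should_stop([1.0, 0.8, 0.9, 0.95, 1.1], patience=3))  # prints True
```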
Okay, I think I figured out how to get the validation loss working without custom implementations and other silliness. This applies to the AnchorHead dense head, so your mileage may vary. An alternative that doesn't require modifying PCDet's code (which I'm not a fan of) would be to manually call assign_targets() and edit forward_ret_dict from outside, but I haven't looked into obtaining the data_dict parameter. Maybe it's simple. Either way, you call the get_loss() function after each prediction, for instance in this simplified bit from eval_utils.py (you still need to output the result somewhere: the console, TensorBoard, or a file):
import numpy as np
import torch

losses = []
for i, batch_dict in enumerate(dataloader):
    load_data_to_gpu(batch_dict)
    with torch.no_grad():
        pred_dicts, ret_dict = model(batch_dict)
        # stuff pertaining to pred_dicts and ret_dict omitted
        losses.append(model.dense_head.get_loss()[1]['rpn_loss'])
loss = np.average(losses)
If anyone with more knowledge can chime in whether this is the right way or I'm doing something wrong, I'd be grateful. However, it appears to work.
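One caveat worth flagging, as an assumption from how anchor-based heads typically behave rather than something verified against the PCDet source: target assignment often only happens while the model is in training mode, so get_loss() may need the mode toggled around the validation pass. A hedged sketch, with the known pitfall noted in the docstring:

```python
import torch

def val_pass_with_loss(model, batch_dict):
    """Run one validation batch so that get_loss() has targets available.

    Caution: calling model.train() updates BatchNorm running statistics
    even under torch.no_grad(); whether that matters for your setup is
    left to verify.
    """
    was_training = model.training
    model.train()                # so the dense head assigns loss targets
    with torch.no_grad():        # no gradients needed for validation
        model(batch_dict)
    loss, tb_dict = model.dense_head.get_loss()
    model.train(was_training)    # restore the original mode
    return tb_dict['rpn_loss']
```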
Hi, did it work for you? Were you able to compute the validation loss during training?
This issue was closed because it has been inactive for 14 days since being marked as stale.
The current project logs only the training loss and learning rate curves. How can I modify train_one_epoch() to also compute the validation loss during training?
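In outline, the change amounts to running a validation pass after each training epoch and recording both losses. A minimal, PCDet-agnostic sketch (`train_step` and `val_loss_fn` are hypothetical stand-ins for the real training step and the validation loop shown earlier in this thread):

```python
# Sketch of a training loop that also records a validation loss per epoch,
# which is roughly what a modified train_one_epoch() would need to log.
def train_with_validation(train_step, val_loss_fn, train_loader, val_loader,
                          num_epochs):
    history = []
    for epoch in range(num_epochs):
        # average the per-batch training losses for this epoch
        train_loss = sum(train_step(b) for b in train_loader) / len(train_loader)
        # one full validation pass, e.g. the eval loop shown earlier
        val_loss = val_loss_fn(val_loader)
        history.append((train_loss, val_loss))  # log both curves here
    return history

# Dummy demonstration: batches are plain numbers, "loss" is the number itself.
print(train_with_validation(lambda b: b, lambda vl: 5.0, [1.0, 3.0], [], 2))
# prints [(2.0, 5.0), (2.0, 5.0)]
```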