DrSleep / DenseTorch

An easy-to-use wrapper for working with dense per-pixel tasks in PyTorch (including multi-task learning)
MIT License

visualize and unsupervised loss #11

Open huangyuan2020 opened 4 years ago

huangyuan2020 commented 4 years ago

Thank you very much for your work. I see that your examples are supervised; I wonder whether I can train with a custom unsupervised loss function, or do you have any suggestions? In addition, since the training process is encapsulated in dt.engine, could TensorboardX be used to visualise intermediate results during training?

DrSleep commented 4 years ago
  1. You can specify which loss to use for each task. If you implement your own loss function, supervised or unsupervised, you can specify it as in this example here. By default, every loss function accepts a tensor of predictions and a tensor of ground truth during the forward pass (you can see some examples here). For the unsupervised case, you can provide some dummy ground truth and then implement the logic however you want.
  2. At the moment you cannot visualise intermediate results of the training (or validation) process. As you rightly pointed out, the training loop is hidden within dt.engine.train, and neither inputs nor predictions are exposed outside it. There is some work in progress to add an optional visualisation callback, but it is not a priority right now.
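
To make point 1 concrete, here is a minimal sketch (my own, not part of DenseTorch) of an unsupervised loss that keeps the usual (prediction, ground truth) signature but simply ignores the dummy target — in this case a total-variation smoothness penalty on the dense output:

```python
import torch
import torch.nn as nn

class SmoothnessLoss(nn.Module):
    """Hypothetical unsupervised loss: penalises spatial gradients of the
    prediction (total variation). The `target` argument exists only to match
    the (prediction, ground truth) call signature described above; it is
    ignored, so any dummy tensor of the right batch size can be passed."""

    def forward(self, prediction, target):
        # prediction: (N, C, H, W) tensor of dense per-pixel outputs
        dh = (prediction[:, :, 1:, :] - prediction[:, :, :-1, :]).abs().mean()
        dw = (prediction[:, :, :, 1:] - prediction[:, :, :, :-1]).abs().mean()
        return dh + dw
```

An instance of this module can then be passed wherever the examples specify a per-task loss; the dataloader just needs to yield some placeholder ground truth alongside the inputs.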
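
For point 2, until such a callback lands, a generic illustration of the pattern a visualisation hook would likely follow (the names `train_with_callback`, `batches`, and `model` are hypothetical, not DenseTorch API): the loop invokes a user-supplied hook with intermediate inputs and predictions, and the hook decides how to visualise them, e.g. by forwarding them to TensorboardX's SummaryWriter.

```python
def train_with_callback(batches, model, callback=None, log_every=2):
    """Toy training loop illustrating the callback pattern.

    `batches` is any iterable of inputs and `model` any callable producing
    predictions; every `log_every` steps the optional `callback` receives
    the step index, inputs and predictions for visualisation.
    """
    for step, inputs in enumerate(batches):
        preds = model(inputs)
        if callback is not None and step % log_every == 0:
            callback(step, inputs, preds)

# Minimal usage: collect what a visualiser would have received.
logged = []
train_with_callback(
    batches=[1, 2, 3, 4, 5],
    model=lambda x: x * 10,
    callback=lambda step, x, y: logged.append((step, x, y)),
)
```

A real callback would replace the `append` with calls such as `writer.add_image(...)` on a SummaryWriter, keeping all visualisation logic outside the engine itself.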