mitmul / deeppose

DeepPose implementation in Chainer
http://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/42237.pdf
GNU General Public License v2.0

Model and Parameters For LSP Dataset #23

Open kazunaritakeichi opened 8 years ago

kazunaritakeichi commented 8 years ago

I trained on the LSP dataset with the default parameters. The test error is large. Do you have an appropriate model or parameters for the LSP dataset?

Thanks!

lunzueta commented 8 years ago

Hi @ktak199 I just did the same thing today, and I also saw that the error was much higher than in the case of FLIC. I guess we should check in more detail which parameters are used in the original paper (https://arxiv.org/pdf/1312.4659v3.pdf). I'm now training with MPII using the same default parameters as with FLIC, to see what happens, but I'll return to training/testing with LSP afterwards. I'll tell you if I get better results after tuning the parameters. Please let me know if you have better luck after tuning the parameters too.

kazunaritakeichi commented 8 years ago

Hi @lunzueta. Thank you! OK, I'll also try it and let you know!

lunzueta commented 8 years ago

Hi @ktak199. I've tested this time with MPII and the default parameters. The tests have less error in general than in the case of LSP, but they are still quite bad compared to FLIC. So, I guess that specific parameters should be used in both cases. During training I observed that in both cases it tended to overfit quite quickly.

kazunaritakeichi commented 8 years ago

Hi @lunzueta. One way to combat overfitting may be tuning the dropout parameters. http://stats.stackexchange.com/questions/109976/in-convolutional-neural-networks-how-to-prevent-the-overfitting

lunzueta commented 8 years ago

@ktak199 Dropout is already included in the implementation, with the same value mentioned in the paper:

```python
h = F.dropout(F.relu(self.fc6(h)), train=self.train, ratio=0.6)
h = F.dropout(F.relu(self.fc7(h)), train=self.train, ratio=0.6)
```

Now, I'm training LSP with the following parameter changes:

kazunaritakeichi commented 8 years ago

@lunzueta In the paper, the σ parameter is set to 1.0 for FLIC and 2.0 for LSP, and the lr parameter is set to 0.0005 for both datasets, isn't it? I don't yet know whether lcn should be 0 or 1.

I'm testing with the following parameters:

- cropping 0: "For LSP we use the full image as initial bounding box since the humans are relatively tightly cropped by design."
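Just to make sure I read that quote right, for LSP the initial bounding box would simply be the whole image. A minimal sketch of that reading (my own illustration, not code from this repo):

```python
def initial_bbox(image):
    """Use the full image as the initial person bounding box (x, y, w, h)."""
    h, w = image.shape[:2]
    return (0, 0, w, h)
```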

lunzueta commented 8 years ago

Hi @ktak199. Yes, you are right about σ; I said it wrong. I've continued doing some more tests changing the parameters (crop vs. no-crop, local contrast vs. no local contrast, etc.), but I'm not getting... let's say... "normal" results with LSP. The result I normally get in the tests is a very small avatar (compared to the actual body size) in the middle of the image. I'm a bit stuck with this too. Now I'm trying to do the same training using the caffe branch instead of the master branch, to see if this could be something related to the deep learning framework. I'll let you know. Good luck with your tests too; I hope we can get something closer to the expected results.

yutuofish2 commented 8 years ago

Hi @lunzueta I am running on MPII with the dropout ratio set to 0.9. The other parameters are left at their defaults. The test loss has now started to converge, but it is still high.

[training and test loss plots]

kazunaritakeichi commented 8 years ago

@lunzueta This is log.png (cropping is 0). The test loss is increasing... [log plot]

lunzueta commented 8 years ago

Thanks for sharing this @yutuofish2. I see you are training for more than 600 epochs. I wonder how many would be a good number, but your training looks much better than what I was getting.

yutuofish2 commented 8 years ago

@ktak199 You would need to modify the function fliplr() in transform.py. The authors fixed this problem about 10 hours ago. However, it seems that there are still some bugs...
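For anyone hitting the same thing: a horizontal flip has to mirror the x coordinates and also swap the left/right joint labels. A minimal sketch of the idea (the symmetric pairs below are placeholders; the real indices depend on the dataset's joint ordering):

```python
import numpy as np

# Hypothetical left/right joint pairs; the actual indices depend on the dataset.
SYMMETRIC_PAIRS = [(0, 5), (1, 4), (2, 3)]

def fliplr(image, joints):
    """Flip the image horizontally and mirror its (x, y) joint coordinates."""
    flipped = image[:, ::-1, :].copy()
    joints = joints.copy()
    joints[:, 0] = image.shape[1] - 1 - joints[:, 0]  # mirror x coordinates
    for left, right in SYMMETRIC_PAIRS:
        joints[[left, right]] = joints[[right, left]]  # swap left/right labels
    return flipped, joints
```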

lunzueta commented 8 years ago

This time I trained a model with LSP, changing only the optimizer to 'MomentumSGD' and keeping the rest of the parameters the same. I got the following results, which still aren't good enough: [log plot] Good to know that there have been some new fixes in the code. I'll try them next. Thanks for that @mitmul!
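For reference, the optimizer change amounts to something like this in Chainer (a sketch; `model` stands for the network defined in this repo, and momentum=0.9 is a common default, not a value from the paper):

```python
from chainer import optimizers

optimizer = optimizers.MomentumSGD(lr=0.0005, momentum=0.9)
optimizer.setup(model)  # model: the AlexNet-style network from this repo
```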

kazunaritakeichi commented 8 years ago

I tried the newer version (shell/train_lsp.sh). Below is the result. [log plot]

lunzueta commented 8 years ago

@ktak199 I was doing the same thing, but I was still at epoch 200, and I'm getting a similar graph: [log plot] So, what do you think might be happening? Maybe it's too early and we should wait until epoch 1000? Just in case, in the meantime I'm going to train with FLIC again on another PC to see if it still trains as before.

mitmul commented 8 years ago

Sorry for the inconvenience; there seem to be some fatal bugs in the data processing part. I'm trying to find them now and will update the code once I've fixed them and confirmed that training runs correctly. So please wait, or try to find the bugs and send PRs. Thanks.

lunzueta commented 8 years ago

Thank you very much for taking care of this issue @mitmul. I'm learning a lot from all this :-)

kazunaritakeichi commented 8 years ago

Thank you so much @mitmul! I'll study the paper and the code so that I can contribute.

lunzueta commented 8 years ago

Could the problem, in the case of LSP, be that some joint positions have negative values (indicating that they are occluded) and these make the training go crazy? I say this because I've retrained with FLIC for a few epochs and it looked to be converging normally. The only difference I see is those negative values.
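If that's the cause, one workaround might be to exclude the occluded joints from the loss. A minimal sketch of the idea, assuming targets of shape (n_joints, 2) with negative coordinates marking occlusion (just an illustration, not the loss used in this repo):

```python
import numpy as np

def masked_mse(pred, target):
    """MSE over visible joints only; occluded joints have negative coords."""
    visible = np.all(target >= 0, axis=1)  # both x and y must be valid
    if not visible.any():
        return 0.0
    diff = pred[visible] - target[visible]
    return float(np.mean(diff ** 2))
```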

lunzueta commented 8 years ago

Well... I started a new training run with MPII, which has all the body joint positions marked on the image, and after about 130 epochs I got this graph, which has a strange outlier and doesn't seem to converge: [log plot] And it produces this kind of result, which is always the same pose: [test_130_tiled_pred] So, certainly, I guess we should review in detail how the data is processed.

kazunaritakeichi commented 8 years ago

I tried with the FLIC dataset. I got a similar result to MPII, @lunzueta.

lunzueta commented 8 years ago

Hi guys. Based on the code provided in the caffe branch, I've done some tests with MPII (I attach the Caffe net and solver files I used for that), and after training for a few hundred epochs it seems to give responses that make more sense (not always the same mean pose as shown above). To generate the LMDB-format data I used the same functions provided in this code (cropping, etc.), but without applying the local contrast normalization (because it wasn't possible to reproduce in Caffe), so I don't think the failure is there. The AlexNet architecture defined in Chainer format also seems to be correct. So, taking this into account, where could the failure be? (I still couldn't find it.)

deeppose.zip
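For completeness, the local contrast normalization I skipped in the Caffe pipeline is, as I understand it, a subtractive/divisive normalization over a local neighborhood, roughly like this (a generic sketch for a single-channel image, not the exact code used here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_normalize(img, sigma=3.0, eps=1e-4):
    """Subtract the local mean, then divide by the local std deviation."""
    img = img.astype(np.float32)
    local_mean = gaussian_filter(img, sigma)
    centered = img - local_mean
    local_std = np.sqrt(gaussian_filter(centered ** 2, sigma))
    return centered / np.maximum(local_std, eps)
```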

aspenlin commented 5 years ago

Hi @lunzueta @yutuofish2, may I ask which Python program you use to plot the images with the joint positions on them? The only one I can find is evaluate_flic.py, but it still doesn't seem right.