Hello Cesare and Teddy, good question and hard to say without knowing your background better. What kind of work do you like to do? Add a new dataset? Add a new base network? Try a different loss? Try a different head architecture? Add a new task? Run on new hardware? Improve training schedule/procedure? ...
Thank you, those are great starting points. We'll first read the paper carefully and get familiar with running it locally and then reply back here. Thanks again!
Hey Sven & team,
Teddy and I met today, and we thought it would be interesting to test PifPaf on the NightOwls dataset (given our limited experience with Machine Learning and Computer Vision). We're not sure if this is the best dataset/problem to work on, because the NightOwls annotations provide a bounding box for each person, whereas PifPaf returns a complete human pose. However, we considered it because PifPaf performs very well on low-resolution images and was built for urban mobility (i.e. detecting pedestrians at night). Perhaps we could compare the number of bounding boxes in the NightOwls dataset (ground truth) with the number of human poses found by PifPaf, and then see which types of images perform the worst (a rough sketch of this comparison is at the end of this message). Perhaps we could also compare PifPaf's performance with the results reported in the NightOwls dataset paper – they have evaluated multiple person-detection algorithms on the dataset.
We're not sure, but we'll probably need to resize each image from the NightOwls dataset to a format that works best with PifPaf (or maybe PifPaf does this automatically? I see `DEBUG:openpifpaf.transforms.pad [...]` when running the prediction with the `--debug` flag, so data augmentation is probably done automatically by the library).
What do you think? Do you foresee any problems? If this is a terrible idea, we're also happy to hear that, so that we can work on a different dataset or problem. Thanks again for your time.
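Here is a rough sketch of what that count comparison might look like, assuming the `openpifpaf.Predictor` API from recent versions of the library; the annotation-loading part is just a placeholder, since I believe NightOwls ships COCO-style JSON annotations that we'd parse properly in practice:

```python
import PIL.Image
import openpifpaf

# Assumes the openpifpaf.Predictor API; the checkpoint name is the
# standard COCO-trained model.
predictor = openpifpaf.Predictor(checkpoint='shufflenetv2k16')

# Placeholder: in practice these counts would be parsed from the
# NightOwls annotation files.
ground_truth_counts = {'example_frame.png': 3}

for image_path, n_gt_boxes in ground_truth_counts.items():
    pil_image = PIL.Image.open(image_path).convert('RGB')
    predictions, _, _ = predictor.pil_image(pil_image)
    print(f'{image_path}: {n_gt_boxes} ground-truth boxes, '
          f'{len(predictions)} predicted poses')
```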
NightOwls is a good idea. In fact, I did a quick NightOwls test a while ago just to see where we are. OpenPifPaf might be optimized for the use case (small instances, difficult lighting), but the metric people use is MR (miss rate), which is quite different from the AP used in pose estimation. Out of the box, the performance in MR won't be great (quite bad, actually). But there is an opportunity here to investigate, to see what the precise reason is, and to improve OpenPifPaf. I have never had the time to look further into NightOwls.
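For context, MR in the pedestrian-detection literature is usually the log-average miss rate (Dollár et al.): the miss-rate/FPPI curve is sampled at nine reference FPPI points spaced evenly in log space over [10⁻², 10⁰], and the logs of the sampled miss rates are averaged. A minimal sketch, assuming the curve has already been computed:

```python
import numpy as np

def log_average_miss_rate(miss_rates, fppi):
    """Sample the miss-rate/FPPI curve at 9 reference FPPI points
    spaced evenly in log space over [1e-2, 1e0] and average the logs.
    Expects fppi sorted ascending with strictly positive values."""
    reference_points = np.logspace(-2.0, 0.0, 9)
    sampled = np.interp(np.log(reference_points), np.log(fppi), miss_rates)
    sampled = np.maximum(sampled, 1e-10)  # guard against log(0)
    return np.exp(np.mean(np.log(sampled)))
```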
Please also refer to a NightOwls dataset implementation by @george-adaimi here: https://github.com/vita-epfl/openpifpaf/compare/multipledatasets He has experience using OpenPifPaf for detection. This hasn't been merged into master yet because we will have a new plugin architecture, but something like this will come.
How do I set up the environment? And how do I use it to train on another type of dataset?
Hi @csr and @teddykabg, are you moving forward with NightOwls? I would also like to point out the `dev` branch to you. As the name says, the branch is for development and is unstable, but it comes with a new plugin architecture that should be very helpful for you. To add a new dataset, you write a plugin instead of modifying code everywhere. The dev branch has its own version of the guide here: https://vita-epfl.github.io/openpifpaf/dev/plugins_overview.html
It also comes with an example plugin that adds Cifar10 to OpenPifPaf (explained in the guide). You can follow that or start from `CocoKp` or `CocoDet`.
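For orientation, the plugin pattern on the dev branch (modeled on the Cifar10 example in the guide) boils down to a module that defines a DataModule subclass and a `register()` function; the `NightOwls` name here is just illustrative:

```python
import openpifpaf

class NightOwls(openpifpaf.datasets.DataModule):
    # data loaders and metrics would be implemented here; see the
    # Cifar10 plugin in the guide for a complete example
    pass

def register():
    # makes the dataset available as --dataset=nightowls
    openpifpaf.DATAMODULES['nightowls'] = NightOwls
```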
Hi @svenkreiss, we are! (And thrilled to do so; we feel like we've already been learning a lot.) Thank you so much for following up with that valuable info about the dev branch and the plugin page. This is our project proposal so far for our research course, if you're curious. Right now, this is what's on our minds (mainly just me thinking out loud):
Sounds like a good plan.
Just to give you some quick pointers: I'd create a new DataModule for NightOwls. If you are not interested in training, the only things you need to implement are the `eval_loader()` function and a metric. When you then run the command `python -m openpifpaf.eval --dataset=nightowls ...`, it should use that data loader and compute your metric.
You are going to be one of the first to implement evaluation on a different dataset than what the model is trained on. This should be a standard use case but is completely untested right now. Feel free to file issues, start discussions here when something is not working and contribute fixes.
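To make those pointers concrete, here is a minimal sketch of such an eval-only DataModule, following the shape of the Cifar10 example. `NightOwlsDataset`, `NightOwlsMissRate`, and the file paths are hypothetical stand-ins, and the collate-function name is taken from the dev branch as I understand it:

```python
import torch
import openpifpaf

class NightOwls(openpifpaf.datasets.DataModule):
    # hypothetical default locations for the validation split
    eval_annotations = 'data-nightowls/nightowls_validation.json'
    eval_image_dir = 'data-nightowls/validation/'
    batch_size = 1

    def eval_loader(self):
        # NightOwlsDataset is a hypothetical torch.utils.data.Dataset
        # that yields (image, annotations, meta) tuples like the
        # built-in COCO datasets do.
        data = NightOwlsDataset(self.eval_image_dir, self.eval_annotations)
        return torch.utils.data.DataLoader(
            data, batch_size=self.batch_size,
            collate_fn=openpifpaf.datasets.collate_images_anns_meta)

    def metrics(self):
        # NightOwlsMissRate would be an openpifpaf.metric.Base subclass
        # that accumulates predictions and reports MR; hypothetical here.
        return [NightOwlsMissRate()]
```

With the plugin registered, `python -m openpifpaf.eval --dataset=nightowls --checkpoint=shufflenetv2k16` should then pick up that loader and compute the metric.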
Hey Sven & team! Thanks a lot again for helping us with this. We still haven't looked closely at the plugin architecture because we have other upcoming deadlines, but we'll work on it later in the year. We think it's best to close this issue for now to keep things tidy, and we'll ask more specific questions in separate issues. Thank you!
Thanks. See you soon
Hi Sven & team,
This looks like a great library to contribute to. One of our university courses asks us to find a good research project and make some contributions to it to develop new knowledge or understanding (it doesn't have to be a big contribution; even something small works). Is there anything in PifPaf that could use some help from two first-year Master's students in Computer Vision and ML? Even a simple outline of the steps you think we should take would work, and then we'll take it from there.
I've taken a look at the Contribute guide and at the related project monoloco (very cool). Thanks!
Cesare & Teddy (cc @teddykabg)