imatge-upc / egocentric-2016-saliency

Research on the prediction of visual saliency in egocentric vision.
http://imatge-upc.github.io/egocentric-2016-saliency/

Compute the saliency maps for a collection of images #5

Open · xavigiro opened this issue 8 years ago

xavigiro commented 8 years ago

Once a saliency map has been computed for a single image, the next step is to compute saliency maps for a collection of them. Please check this internal ticket and ask @marccarne for details and examples of how to do this.
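For reference, here is a minimal sketch of what the batch processing could look like with pycaffe. The file names, directories, and the assumption that the network's first output blob holds the saliency map are all placeholders to adapt to the actual SalNet release files:

```python
import os
import numpy as np
import caffe
from scipy.misc import imsave

# Placeholder paths; adapt to the actual SalNet deploy/weights files.
MODEL_DEF = 'deploy.prototxt'
MODEL_WEIGHTS = 'salnet.caffemodel'
IMAGE_DIR = 'images/'
OUTPUT_DIR = 'saliency/'

caffe.set_mode_gpu()  # or caffe.set_mode_cpu() if no GPU is free
net = caffe.Net(MODEL_DEF, MODEL_WEIGHTS, caffe.TEST)

# Standard pycaffe preprocessing: HxWxC RGB in [0,1] -> CxHxW BGR in [0,255].
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_channel_swap('data', (2, 1, 0))
transformer.set_raw_scale('data', 255)
# transformer.set_mean('data', mean)  # only if the model expects mean subtraction

if not os.path.exists(OUTPUT_DIR):
    os.makedirs(OUTPUT_DIR)

for fname in sorted(os.listdir(IMAGE_DIR)):
    im = caffe.io.load_image(os.path.join(IMAGE_DIR, fname))
    net.blobs['data'].data[...] = transformer.preprocess('data', im)
    out = net.forward()
    # Assumes the first output blob is the saliency map; check the deploy file.
    sal = np.squeeze(out[net.outputs[0]])
    imsave(os.path.join(OUTPUT_DIR, fname), sal)  # rescales float maps on save
```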

xavigiro commented 8 years ago

After our meeting today, we have agreed that Monica will try to run the SalNet model (CVPR 2016) on c8, as this one is supposed to run in Caffe. Please report any problems in this issue, @monicachs, as @agilmor is very familiar with the kinds of problems you may encounter.

xavigiro commented 8 years ago

Monica has lately tried to run processes on c8, but she has reported that the server is always busy.

As a solution, @monicachs: even if the GPUs in c8 are busy, you can launch your process and leave it in the queue. Log out and just check the results much later (hours or days). I am not sure if she also needs to guarantee the persistence of the session with tmux. Could you please answer this question, @agilmor?

@agilmor: Monica does not need to train a network now, but to use a pre-trained network to generate its response for an image. As far as I know, this could be done without a GPU, is that right? So maybe she could just run the command without specifying any GPU so that a CPU takes the task? Even if it takes longer than on a GPU, it is preferable to take "longer" than to wait in a queue. I also seem to remember that we had some "smaller" GPUs in other servers which are not usually used. Maybe she should use those? In any case, almost all my students are already reporting occupation problems with the GPUs, so the announced bottleneck is already here.

Thanks !

agilmor commented 8 years ago

Hi!

> Monica has lately tried to run processes on c8, but she has reported that the server is always busy.

The GPUs on c8 have been busy for the last 7 hours, yes. But there are still some other GPUs available: three split across c5, c7 and v1, and two on v2.

Why do you use only c8? Could you try the others?

> @monicachs: even if the GPUs in c8 are busy, you can launch your process and leave it in the queue.

Yes!

> Log out and just check the results much later (hours or days). I am not sure if she also needs to guarantee the persistence of the session with tmux. Could you please answer this question, @agilmor?

Yes, tmux is our best friend for long-running remote experiments. My recommendation is to follow this tutorial http://www.hamvocke.com/blog/a-quick-and-easy-guide-to-tmux/ to understand tmux and start using it on our servers.
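In short, the workflow on the server would be something like this (the session name is just an example):

```sh
tmux new -s salnet        # start a named session on the server
# ... launch the long-running job inside the session ...
# detach with Ctrl-b d, then log out; the job keeps running

tmux attach -t salnet     # later, reattach and check the results
```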

> @agilmor: Monica does not need to train a network now, but to use a pre-trained network to generate its response for an image.

Ok...

> As far as I know, this could be done without a GPU, is that right?

Not really sure...

I mean, I know that everything can be done on CPU, for sure! But you need to set up the CPU mode somehow... Sometimes it's just a run-time parameter, sometimes it's an automatic fallback mechanism, and sometimes you actually need to rebuild the sources.
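For Caffe specifically, if the build includes CPU support, switching is a run-time call in pycaffe (a sketch, not specific to the SalNet scripts):

```python
import caffe

# Run inference on CPU: no GPU (and no GPU queue) needed.
caffe.set_mode_cpu()

# ... load the net and call net.forward() as usual ...

# To use a GPU instead:
# caffe.set_mode_gpu()
# caffe.set_device(0)  # GPU id on the node
```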

Not sure in your case...

> So maybe she could just run the command without specifying any GPU so that a CPU takes the task? Even if it takes longer than on a GPU, it is preferable to take "longer" than to wait in a queue.

Well... it depends, right? If you wait a whole day in the queue but the GPU saves you 10 days of computation, waiting that day is still the better option.

> I also seem to remember that we had some "smaller" GPUs in other servers which are not usually used. Maybe she should use those? In any case, almost all my students are already reporting occupation problems with the GPUs, so the announced bottleneck is already here.

Yes! Clearly the share of GPU jobs is rising a lot! Now 90% of jobs are asking for a GPU... wow!

But yes, my recommendation is to ask for a GPU (--gres=gpu:1) but not to force any particular node (no -w c8). We recently made some fixes, so we may now have more of the not-so-small GPUs available!
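For example, a submission along these lines (the script name is a placeholder):

```sh
# Ask SLURM for any one GPU, on whichever node the scheduler picks.
srun --gres=gpu:1 python compute_saliency.py
```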

See you!

Albert

PS: By the way... are you sure you want to open platform-related issues like this on GitHub? Our Trac is still a good option for this kind of issue, right? ;-)