ZitongYu / CDCN

Central Difference Convolutional Networks (CVPR'20)

Depth Maps for OULU-NPU #14

Closed CuauSuarez closed 2 years ago

CuauSuarez commented 4 years ago

First of all, thank you very much for providing your code.

Reading through the code, I realized that you have some files with depth maps and others with bounding boxes. How did you obtain these? The OULU-NPU dataset does not include either of them (only the positions of the eyes).

Thank you

ZitongYu commented 4 years ago

The bounding boxes are extracted with MTCNN; the depth maps are generated with PRNet.
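For reference, a minimal sketch of bounding-box extraction with the facenet-pytorch MTCNN implementation (the one referenced later in this thread). The frame path and the `.dat` output format are assumptions for illustration, not necessarily what the authors used.

```python
# Sketch of bounding-box extraction with facenet-pytorch's MTCNN.
# Paths and the .dat output format are placeholders, not the authors' exact pipeline.
from PIL import Image
from facenet_pytorch import MTCNN

mtcnn = MTCNN(select_largest=True)  # keep only the largest detected face

img = Image.open('data/train_images/1_1_01_1/0001.jpg')  # placeholder frame path
boxes, probs = mtcnn.detect(img)    # boxes: N x 4 array of [x1, y1, x2, y2], or None

if boxes is not None:
    x1, y1, x2, y2 = boxes[0]
    # Write the box to a simple text file; the repo's *_scene.dat layout may differ.
    with open('data/train_images/1_1_01_1/0001_scene.dat', 'w') as f:
        f.write(f'{x1:.1f} {y1:.1f} {x2:.1f} {y2:.1f}\n')
```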

punitha-valli commented 4 years ago

Can you please tell me about map_dir, and also about the train_images directory?

I have the OULU-NPU dataset.

It would be a great help to my research work.

Thanks in advance

punitha-valli commented 4 years ago

@ZitongYu @CuauLuzbel Can you please tell me about map_dir and the train_images directory? (Same question as above.) I have the OULU-NPU dataset; it would be a great help to my research work.

Thanks in advance

luan1412167 commented 3 years ago

@punitha-valli @ZitongYu Can you share how to arrange the training data folder?

punitha-valli commented 3 years ago

You need to split the videos into frames and use the dataloader as per the source code. My best wishes.

Thank you

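Below is a minimal sketch of one way to split a video into frames with OpenCV. The paths, output names, and sampling stride are assumptions, not the authors' exact preprocessing.

```python
# Minimal sketch: split one OULU-NPU video into frames with OpenCV.
# Paths and the sampling stride are placeholders, not the authors' exact settings.
import os
import cv2

def video_to_frames(video_path, out_dir, every_nth=1):
    """Save every `every_nth` frame of `video_path` as a JPEG in `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            cv2.imwrite(os.path.join(out_dir, f'{saved:04d}.jpg'), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# e.g. frames of 1_1_01_1.avi go into data/train_images/1_1_01_1/
# video_to_frames('Train_files/1_1_01_1.avi', 'data/train_images/1_1_01_1', every_nth=3)
```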

luan1412167 commented 3 years ago

@punitha-valli I don't know what the 6_3_20_5_121_scene.dat and info_list files are. Can you help me clarify this?

punitha-valli commented 3 years ago

It is the bounding box.

You can obtain the bounding box from MTCNN; it is used to extract the face region from the image (a cropping sketch is shown below).

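As a concrete illustration, a minimal sketch of cropping the face region from a frame given such a bounding box. The one-line "x1 y1 x2 y2" `.dat` format is an assumption and may differ from the repo's actual *_scene.dat layout.

```python
# Sketch: crop the face region from a frame using a saved bounding box.
# The one-line "x1 y1 x2 y2" .dat format is an assumption; the repo's
# *_scene.dat files may store the box differently.
import cv2

def crop_face(frame_path, bbox_path, scale=1.2):
    """Return the face crop, optionally enlarging the box around its centre."""
    frame = cv2.imread(frame_path)
    with open(bbox_path) as f:
        x1, y1, x2, y2 = map(float, f.read().split()[:4])
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = (x2 - x1) * scale, (y2 - y1) * scale
    x1, y1 = int(max(cx - w / 2, 0)), int(max(cy - h / 2, 0))
    x2, y2 = int(min(cx + w / 2, frame.shape[1])), int(min(cy + h / 2, frame.shape[0]))
    return frame[y1:y2, x1:x2]

# face = crop_face('data/train_images/6_3_20_5/0121.jpg',
#                  'data/train_images/6_3_20_5/6_3_20_5_121_scene.dat')
```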

busrasi commented 3 years ago

I have used the OULU dataset to train the model. When I run train_CDCN.py I get the error "FileNotFoundError: [Errno 2] No such file or directory: '/home/busrasirin/KYC/TRAIN/data/train_map/6_2_19_2'".

I think this training script requires the map_dir files, but I don't have the depth-map images. How can I obtain or generate these images? I have also added the error trace below.

```
/home/busrasirin/KYC/TRAIN/cdcn-env/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:129: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "
Traceback (most recent call last):
  File "train_CDCN.py", line 461, in <module>
    train_test()
  File "train_CDCN.py", line 302, in train_test
    for i, sample_batched in enumerate(dataloader_train):
  File "/home/busrasirin/KYC/TRAIN/cdcn-env/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/busrasirin/KYC/TRAIN/cdcn-env/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/busrasirin/KYC/TRAIN/cdcn-env/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/busrasirin/KYC/TRAIN/cdcn-env/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/busrasirin/KYC/TRAIN/CDCN/CVPR2020_paper_codes/Load_OULUNPU_train.py", line 231, in __getitem__
    image_x, map_x = self.get_single_image_x(image_path, map_path, videoname)
  File "/home/busrasirin/KYC/TRAIN/CDCN/CVPR2020_paper_codes/Load_OULUNPU_train.py", line 249, in get_single_image_x
    frames_total = len([name for name in os.listdir(map_path) if os.path.isfile(os.path.join(map_path, name))])
FileNotFoundError: [Errno 2] No such file or directory: '/home/busrasirin/KYC/TRAIN/data/train_map/6_2_19_2'
```

By the way, I made some changes in the train_CDCN.py file.

Your feedback would be very much appreciated.
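For reference, the loader walks a per-video sub-folder under both the frames directory and the depth-map directory. One plausible layout that would avoid the error above (folder names inferred from the error message and Load_OULUNPU_train.py; the exact placement and naming of the *_scene.dat bounding-box files is an assumption):

```
data/
  train_images/
    6_2_19_2/        # RGB frames extracted from 6_2_19_2.avi
      ...            # per-frame images and *_scene.dat bounding boxes (placement assumed)
  train_map/
    6_2_19_2/        # per-frame depth maps generated with PRNet
      ...
```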

debasmitdas commented 2 years ago

According to https://github.com/timesler/facenet-pytorch/blob/master/models/mtcnn.py, the MTCNN network requires threshold values for face detection.

What threshold values did you choose for the network, and were they fixed or adapted?

Also, with your thresholds, did you encounter empty or wrong face detections, and how did you deal with those images?

punitha-valli commented 2 years ago

Hi,

In my experience, setting a nominal threshold (0.5, as I recall) captures all the faces in the database.

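For context, a sketch of how the stage thresholds can be set with facenet-pytorch and how empty detections might be handled. The values shown are the library defaults, not a confirmed choice of the CDCN authors.

```python
# Sketch: configuring MTCNN stage thresholds with facenet-pytorch and handling
# frames where no face is detected. The values shown are the library defaults,
# not a confirmed choice of the CDCN authors.
from PIL import Image
from facenet_pytorch import MTCNN

mtcnn = MTCNN(thresholds=[0.6, 0.7, 0.7],  # P-Net, R-Net, O-Net thresholds
              min_face_size=20,
              select_largest=True)

img = Image.open('frame.jpg')  # placeholder path
boxes, probs = mtcnn.detect(img)

if boxes is None:
    # No detection: common fallbacks are reusing the previous frame's box
    # or skipping the frame entirely.
    print('no face detected, skipping frame')
elif probs[0] < 0.9:
    print(f'low-confidence detection ({probs[0]:.2f}); consider manual checking')
```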

debasmitdas commented 2 years ago

For those who have tried extracting depth maps using PRNet, which GitHub repository did you find most reliable and useful?

punitha-valli commented 2 years ago

Hi, please refer to this repository:

https://github.com/YadiraF/PRNet

Thanks,

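As a rough illustration, a heavily hedged sketch of turning PRNet output into a depth image. It assumes the PRN class from YadiraF/PRNet's api.py and does not reproduce the CDCN authors' exact post-processing (face cropping, normalisation, resizing to the small label map), which may differ.

```python
# Heavily hedged sketch: rasterise PRNet's dense 3D vertices into a depth image.
# Assumes the PRN class from YadiraF/PRNet's api.py; the CDCN authors' exact
# post-processing may differ.
import numpy as np
import cv2
from api import PRN  # provided by the YadiraF/PRNet repository

prn = PRN(is_dlib=True)  # use dlib for face detection inside PRNet

image = cv2.cvtColor(cv2.imread('frame.jpg'), cv2.COLOR_BGR2RGB)  # placeholder path
pos = prn.process(image)                # UV position map, or None if no face found
if pos is not None:
    vertices = prn.get_vertices(pos)    # N x 3 array of (x, y, z) vertices

    depth = np.zeros(image.shape[:2], dtype=np.float32)
    x = np.clip(vertices[:, 0].astype(int), 0, image.shape[1] - 1)
    y = np.clip(vertices[:, 1].astype(int), 0, image.shape[0] - 1)
    z = vertices[:, 2]
    z = (z - z.min()) / (z.max() - z.min() + 1e-8)   # normalise depth to [0, 1]
    np.maximum.at(depth, (y, x), z)                  # keep the largest z value per pixel

    cv2.imwrite('frame_depth.jpg', (depth * 255).astype(np.uint8))
```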

Celinesiya commented 2 years ago

> Can you please tell me about map_dir and the train_images directory? I have the OULU-NPU dataset.

Hi! Have you solved this problem? What should I put into map_dir? Thank you a lot!

maywander commented 2 years ago

Can you share more about how to extract frames from the videos?

punitha-valli commented 2 years ago

Increase the frame sampling rate in your Python code so that you can get more frames from each video.


Deepthi2992 commented 5 days ago

Can you please tell me the naming convention of '1_1_01_1.avi', '1_1_01_2.avi', '1_1_01_3.avi', '1_1_01_4.avi', '1_1_01_5.avi', '1_1_02_1.avi', '1_1_02_2.avi'. Thanks in advance.

punitha-valli commented 5 days ago

Hi Deepthi, I closed this research in 2020. As per my memory, the last digit represents the FPS (I captured more frames within that FPS), and the remaining digits identify the video file with respect to the folder and sub-folders. There are also my own modifications for generating the depth images as a database.

Thanks, Punitha


Deepthi2992 commented 5 days ago

Okay, thank you.