VlSomers / bpbreid

[WACV23] A strong baseline for body part-based person re-identification

python scripts/get_labels.py --source ./ABSK #21

Open y1b2h3 opened 1 year ago

y1b2h3 commented 1 year ago

Hello, author. Can you help me solve this problem?

(bpbreid1) D:\downloads\bpbreid-main>python scripts/get_labels.py --source ./ABSK

y1b2h3 commented 1 year ago

Hello, author. Can I change the dataset from human to animal? What would I need to modify, in particular in get_labels.py?

VlSomers commented 1 year ago

Hi @y1b2h3, to use it on an animal dataset, you should first generate the animal parsing labels for your animal dataset using a PifPaf model trained on another dataset with similar animals (or the same dataset). You should use that animal PifPaf model within the script provided by @samihormi (have a look at the README). You can also create the parsing labels using any other strategy, for instance using SAM, or even doing it manually. Then you should create a Torchreid dataset class for your new dataset by replicating what was done for the other ReID datasets (you can have a look at how it's done for OccludedDuke in 'torchreid/data/datasets/image/occluded_dukemtmc.py', for instance).
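
As a rough illustration of that last step, here is a minimal sketch of such a dataset class. The class name, dataset key and folder layout are assumptions for illustration only, and it only covers the standard Torchreid (img_path, pid, camid) tuples; the mask-related attributes that bpbreid needs should be copied from occluded_dukemtmc.py rather than from this sketch:

    # Hypothetical minimal dataset class, modelled on the standard Torchreid pattern.
    # Intended to live next to the other image datasets, e.g.
    # torchreid/data/datasets/image/animal_dataset.py (assumed path).
    import glob
    import os.path as osp

    from ..dataset import ImageDataset


    class AnimalDataset(ImageDataset):
        dataset_dir = 'animal_dataset'  # assumed folder name under your data root

        def __init__(self, root='', **kwargs):
            self.dataset_dir = osp.join(osp.abspath(osp.expanduser(root)), self.dataset_dir)
            self.train_dir = osp.join(self.dataset_dir, 'bounding_box_train')
            self.query_dir = osp.join(self.dataset_dir, 'query')
            self.gallery_dir = osp.join(self.dataset_dir, 'bounding_box_test')

            train = self.process_dir(self.train_dir, relabel=True)
            query = self.process_dir(self.query_dir, relabel=False)
            gallery = self.process_dir(self.gallery_dir, relabel=False)

            super(AnimalDataset, self).__init__(train, query, gallery, **kwargs)

        def process_dir(self, dir_path, relabel=False):
            # Assumes filenames like '<pid>_<camid>_xxx.jpg'; adapt to your naming scheme.
            img_paths = glob.glob(osp.join(dir_path, '*.jpg'))
            pid_container = sorted({osp.basename(p).split('_')[0] for p in img_paths})
            pid2label = {pid: label for label, pid in enumerate(pid_container)}

            data = []
            for img_path in img_paths:
                pid_str, camid_str = osp.basename(img_path).split('_')[:2]
                pid = pid2label[pid_str] if relabel else int(pid_str)
                camid = int(camid_str)
                data.append((img_path, pid, camid))
            return data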

VlSomers commented 1 year ago

Hi @y1b2h3, you can try to run a training with an existing dataset to see how it works, and then:

  1. Create your own subclass of "torchreid/data/datasets/dataset.py" by mimicking what is done in 'torchreid/data/datasets/image/occluded_dukemtmc.py', for instance.
  2. Register your dataset in "torchreid/data/datasets/__init__.py".
  3. In the yaml config, choose your dataset as source and target:

    sources: ['your_dataset']
    targets: ['your_dataset']

You can also have a look at the official Torchreid documentation for more information: https://kaiyangzhou.github.io/deep-person-reid/user_guide.html#use-your-own-dataset

Finally, make sure that your images and masks are properly loaded when launching the training; this loading happens inside "torchreid.data.datasets.dataset.ImageDataset.__getitem__(...)".
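
As a sketch of the registration step, reusing the hypothetical AnimalDataset from the earlier example and assuming the upstream Torchreid layout where datasets/__init__.py keeps an __image_datasets dict mapping names to classes (check the file in this repo for the exact structure):

    # torchreid/data/datasets/__init__.py (sketch; module path and key are assumptions)
    from .image.animal_dataset import AnimalDataset

    # make the dataset selectable via sources/targets in the yaml config
    __image_datasets['animal_dataset'] = AnimalDataset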

ellzeycunha0 commented 1 year ago

@y1b2h3 I had the same issue of "ValueError: not enough values to unpack (expected 2, got 0)". It was fixed by replacing the version of OpenPifPaf. I remember that I also changed the output dim of the model in get_labels.py. Finally, I got the correct heat maps on my own custom dataset. Thanks to the author for the great work.
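
(For reference, a quick way to check which OpenPifPaf version is installed before re-running scripts/get_labels.py; the exact version that works is not stated in this thread, so compare it against the one referenced in the README / @samihormi's fork:)

    # Print the installed openpifpaf version; the unpacking error usually comes
    # from an API mismatch between openpifpaf releases.
    import importlib.metadata

    print(importlib.metadata.version("openpifpaf"))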

y1b2h3 commented 1 year ago

> @y1b2h3 I had the same issue of "ValueError: not enough values to unpack (expected 2, got 0)". It was fixed by replacing the version of OpenPifPaf. I remember that I also changed the output dim of the model in get_labels.py. Finally, I got the correct heat maps on my own custom dataset. Thanks to the author for the great work.

Hello, friend, thank you very much for your answer.

  1. When using a custom dataset, where exactly in get_labels.py did you make your modifications?
  2. I encountered the following error while training on a custom dataset. Can you help resolve it? The command was: python scripts/main.py --config-file configs/bpbreid/bpbreid_absk_train.yaml

    => Start training
    Traceback (most recent call last):
      File "D:\downloads\bpbreid-main\scripts\main.py", line 273, in <module>
        main()
      File "D:\downloads\bpbreid-main\scripts\main.py", line 184, in main
        engine.run(**engine_run_kwargs(cfg))
      File "d:\downloads\bpbreid-main\torchreid\engine\engine.py", line 204, in run
        self.train(
      File "d:\downloads\bpbreid-main\torchreid\engine\engine.py", line 264, in train
        for self.batch_idx, data in enumerate(self.train_loader):
      File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\dataloader.py", line 628, in __next__
        data = self._next_data()
      File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\dataloader.py", line 1333, in _next_data
        return self._process_data(data)
      File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\dataloader.py", line 1359, in _process_data
        data.reraise()
      File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\_utils.py", line 543, in reraise
        raise exception
    RuntimeError: Caught RuntimeError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\_utils\worker.py", line 302, in _worker_loop
        data = fetcher.fetch(index)
      File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 61, in fetch
        return self.collate_fn(data)
      File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\_utils\collate.py", line 265, in default_collate
        return collate(batch, collate_fn_map=default_collate_fn_map)
      File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\_utils\collate.py", line 128, in collate
        return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
      File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\_utils\collate.py", line 128, in <dictcomp>
        return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
      File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\_utils\collate.py", line 120, in collate
        return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
      File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\_utils\collate.py", line 172, in collate_numpy_array_fn
        return collate([torch.as_tensor(b) for b in batch], collate_fn_map=collate_fn_map)
      File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\_utils\collate.py", line 120, in collate
        return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
      File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\_utils\collate.py", line 162, in collate_tensor_fn
        out = elem.new(storage).resize_(len(batch), *list(elem.size()))
    RuntimeError: Trying to resize storage that is not resizable

VlSomers commented 1 year ago

Hi @y1b2h3, the purpose of collate.py (where you have the error) is to process the output of the dataloader: this error means there is something wrong with the data returned by the dataloader when building the training batch. The data returned by the dataloader and processed by the collate function comes from "torchreid.data.datasets.dataset.ImageDataset.__getitem__(...)". If you want to solve that error, you have to analyse all the data inside the "sample" object returned by this __getitem__ function and understand what is causing the error: maybe you have an empty array, a wrong data type, or something else.
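
A rough way to narrow that down, assuming the sample returned by __getitem__ is a dict of numpy arrays / tensors (field names, and the way you reach the dataset object, e.g. through your train loader's .dataset attribute, depend on your setup):

    # Hypothetical debugging helper: walk the training samples without a DataLoader
    # and record the type/shape/dtype seen for every field, so an empty or
    # inconsistently shaped field (which breaks default_collate) stands out.
    import numpy as np
    import torch


    def inspect_samples(dataset, num_samples=50):
        seen = {}
        for i in range(min(num_samples, len(dataset))):
            sample = dataset[i]  # calls ImageDataset.__getitem__
            for key, value in sample.items():
                if isinstance(value, (np.ndarray, torch.Tensor)):
                    desc = (tuple(value.shape), str(value.dtype))
                else:
                    desc = (type(value).__name__,)
                seen.setdefault(key, set()).add(desc)
        for key, descs in seen.items():
            flag = "  <-- inconsistent, likely breaks collation" if len(descs) > 1 else ""
            print(key, descs, flag)


    # e.g. inspect_samples(train_loader.dataset)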

gao1qiang commented 1 year ago

> @y1b2h3 I had the same issue of "ValueError: not enough values to unpack (expected 2, got 0)". It was fixed by replacing the version of OpenPifPaf. I remember that I also changed the output dim of the model in get_labels.py. Finally, I got the correct heat maps on my own custom dataset. Thanks to the author for the great work.

Hello friend, I am using a custom dataset (not of humans). Apart from swapping in the corresponding model, which specific parts of get_labels.py do I need to modify? My coding skills are weak, so thank you for your kind guidance.

ellzeycunha0 commented 1 year ago

@gao1qiang Give me your email; I would like to share my code and help solve your problem.

gao1qiang commented 1 year ago

After training (on a custom dataset that is not of people), I do get mAP and rank metrics, but the visualized heat maps contain no attention maps for the individual body parts.

[attached: screenshot of the heat map visualization]

gao1qiang commented 1 year ago

This is my "Query-gallery body part paired distance distribution.png". Why is this happening?

[attached: distance distribution plot]

ritaanthem commented 11 months ago

> @gao1qiang Give me your email; I would like to share my code and help solve your problem.

Hi @ellzeycunha0, I have the same issue of "ValueError: not enough values to unpack (expected 2, got 0)". Could you please share your code to help me solve the problem?