Ab-34 opened this issue 6 months ago
Hi, thank you for using our repo. Your implementation looks fine and is very similar to my own version. However, active learning is very prone to randomness, so you may get quite different results across runs; this tends to happen particularly for COCO because we are using very little data (< 10%).
Thank you for your response! I will try running it multiple times to verify this.
Also, a follow-up question: could you please provide the configuration files you used for the Faster R-CNN implementation? Currently only the RetinaNet ones are present.
Thank you, Abhijnya
Hi, thank you for adding random sampling. Could you please provide the package versions you used? I'm facing some issues installing the correct version of mmcv.
The commands I used to build my environment:
```bash
git clone https://github.com/ChenhongyiYang/PPAL.git
conda create -n "ppal" python=3.8.10
conda activate ppal
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install -U openmim
mim install mmengine
cd PPAL
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html
python setup.py install
pip install yapf==0.40.1
pip uninstall numpy
python -m pip install numpy==1.23.1
```
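Afterwards you can sanity-check that the pinned versions resolved correctly (a minimal check I'd suggest; the expected values are the ones pinned above):

```python
# Quick sanity check of the environment built with the commands above.
import torch, torchvision, mmcv, numpy

print(torch.__version__)        # expect 1.10.0+cu111
print(torchvision.__version__)  # expect 0.11.0+cu111
print(mmcv.__version__)         # expect 1.4.0
print(numpy.__version__)        # expect 1.23.1
print(torch.cuda.is_available())  # should be True on a CUDA 11.1 machine
```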
Thank you so much for your answer! What about the training parameters: did you train the model on the whole COCO dataset?
Yup, the full dataset, with the default parameters that were provided.
Hey, the unlabeled_inference_result.bbox.json file is being saved empty. How do I fix it?
Thank you for your code! I was able to exactly replicate the results of your approach (PPAL) on both the COCO and Pascal VOC datasets with RetinaNet, without modifying any configuration files.
As the implementation of Random Sampling was not provided, I coded it up myself by changing the al_round and al_acquisition functions to take the round_unlabelled_json as input, shuffle it, and pick the first [budget] image_ids.
But this is giving me much higher values than your random plot, and in the case of COCO the gap between PPAL and random sampling is very small. Perhaps this is due to my implementation. Could you please provide the code for your random implementation?
Plots: [comparison plots of PPAL vs. random sampling attached in the original issue]
Code:
In run_al_voc.py:
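A minimal sketch of that random acquisition, assuming round_unlabelled_json is a COCO-format annotation file with an "images" list; the function signature, output format, and fixed seed here are illustrative, not the repo's exact interface:

```python
import json
import random

def al_acquisition(round_unlabelled_json, budget, out_json, seed=0):
    # Load the current unlabelled pool (COCO-format JSON assumed).
    with open(round_unlabelled_json) as f:
        pool = json.load(f)["images"]

    # Random sampling: shuffle the pool and take the first `budget` images.
    random.Random(seed).shuffle(pool)
    selected_ids = [img["id"] for img in pool[:budget]]

    # Save the picked image ids so al_round can move them to the labelled set.
    with open(out_json, "w") as f:
        json.dump(selected_ids, f)
    return selected_ids
```

With a fixed seed this is reproducible; averaging over several seeds would also address the run-to-run randomness mentioned above.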