Open Heni-Loukil opened 1 year ago
@Heni-Loukil I reformatted your comment. For the future, please wrap code or tracebacks in triple backticks, e.g.

```
traceback or code here
```
Could you please also run the collection script you downloaded and post its output here, as prompted by the issue template?
```shell
wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py  # <-- you didn't do this
```
Finally, I'm seeing the following line in the traceback:
```
D:\Projects\venv\lib\site-packages\coco_eval\coco_eval.py
```
This looks like an installed package, but we don't provide a `coco_eval` module. Could you clarify what this is?
🐛 Describe the bug
Okay, so I am working with Roboflow and using the COCO-format `coco.json` annotations. I am working on a DETR model: I trained it on thermal images (no color) and then wanted to test it on some optical (RGB) images. When I visualize the detections from my trained model, the bounding boxes are there and the detections are good, even though the model was not trained on optical images. The problem is that when I call `evaluator.update(predictions)` I get an error.
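Since the model was trained on thermal (single-intensity) images, one common workaround when evaluating on RGB inputs is to convert them to grayscale and replicate the result across three channels, so the images match the training distribution while still fitting a three-channel input layer. A minimal sketch with NumPy (the function name and the BT.601 luma weights are my choices, not anything from the reporter's pipeline):

```python
import numpy as np

def rgb_to_gray3(image):
    """Convert an HxWx3 RGB array to grayscale, replicated to 3 channels.

    Uses the ITU-R BT.601 luma weights. Keeping three identical channels
    lets the image pass through a model whose first conv layer expects
    3 input channels, while removing the color information the thermal-
    trained model never saw.
    """
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    gray = image.astype(np.float32) @ weights        # HxW intensity map
    return np.repeat(gray[..., None], 3, axis=-1)    # HxWx3, channels equal

# Tiny example: a 2x2 image with one pure-red pixel
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = [255, 0, 0]
out = rgb_to_gray3(rgb)
print(out.shape)  # → (2, 2, 3)
```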
I assure you there is no problem with the JSON files. I tried working with preprocessed "Grayscale" images and it worked once, but other dataset versions, even "Grayscale" ones, produce the same problem as with the RGB model. I think there is a problem with the `coco_eval` function. This is the code:
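The reporter's code snippet was not captured above. As a debugging aid, a sketch of a validator for the predictions dict that `evaluator.update(predictions)` is typically given in DETR finetuning setups: a mapping from integer image id to a result dict with `boxes`, `scores`, and `labels` of equal length. The exact keys are an assumption based on torchvision-style COCO evaluators, not the reporter's actual `coco_eval` package; a malformed dict here is a frequent cause of `update` failures.

```python
def validate_predictions(predictions):
    """Raise if predictions don't look like {image_id: {"boxes", "scores", "labels"}}.

    Assumed layout: integer image ids mapping to per-image result dicts
    whose three lists have matching lengths.
    """
    for image_id, result in predictions.items():
        if not isinstance(image_id, int):
            raise TypeError(f"image_id {image_id!r} must be an int")
        missing = {"boxes", "scores", "labels"} - set(result)
        if missing:
            raise KeyError(f"image {image_id} is missing {sorted(missing)}")
        n = len(result["boxes"])
        if not (len(result["scores"]) == len(result["labels"]) == n):
            raise ValueError(f"image {image_id}: boxes/scores/labels lengths differ")
    return True

# Example: one image with two detections in [x, y, w, h] format
preds = {
    42: {
        "boxes": [[10.0, 20.0, 30.0, 40.0], [5.0, 5.0, 15.0, 25.0]],
        "scores": [0.9, 0.4],
        "labels": [1, 3],
    }
}
print(validate_predictions(preds))  # → True
```

Running this right before `evaluator.update(predictions)` narrows down whether the error comes from the inputs or from inside `coco_eval` itself.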
Versions