ultralytics / ultralytics

NEW - YOLOv8 🚀 in PyTorch > ONNX > OpenVINO > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

How to validate a model with the modified JSON file? #14393

Open yuebaiqinghui opened 2 months ago

yuebaiqinghui commented 2 months ago

Question

As we know, we can save validation results to a JSON file with save_json=True. Now I want to modify this JSON file (using other methods to replace some of the data). How can I use the modified JSON to validate again? I need the new F1 and mAP data for comparison.

Additional

No response

glenn-jocher commented 2 months ago

@yuebaiqinghui hello,

To validate a model using a modified JSON file, you can follow these steps:

  1. Modify the JSON File: Make the necessary changes to your JSON file using your preferred method.

  2. Load the Modified JSON: Use the modified JSON file as your validation data source. You can create a custom dataset configuration file that points to your modified JSON file.

Here's an example of how you can do this in Python:

from ultralytics import YOLO

# Load your model
model = YOLO("path/to/your/model.pt")

# Validate using the modified JSON file
results = model.val(data="path/to/your/custom_dataset.yaml")
print(results.box.map)  # mAP50-95
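Since you also need F1 for the comparison, the returned metrics object carries precision/recall-derived values as well; the attribute names in this short sketch are assumptions to verify against your installed version:

# Assumed attribute names -- may vary across Ultralytics versions
print(results.box.f1)                   # per-class F1 scores
print(results.box.mp, results.box.mr)   # mean precision and mean recall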

In your custom dataset configuration file (custom_dataset.yaml), make sure to specify the path to your modified JSON file under the val section.

Example custom_dataset.yaml:

path: ../datasets/your_dataset
train: images/train
val: path/to/your/modified.json
names:
  0: class_name
  1: another_class_name
  ...

This approach allows you to validate your model using the modified JSON file and obtain new F1 and mAP scores for comparison.

For more details on creating a custom dataset configuration file, you can refer to the Ultralytics documentation.

yuebaiqinghui commented 2 months ago

Thank you for your reply. The JSON file produced with save_json=True has a format like [{"image_id": 601, "category_id": 0, "bbox": [83.209, 448.881, 60.56, 60.558], "score": 0.92498}, ...]; it does not point to the val labels. I tried it, but it reports an error: FileNotFoundError: val: Error loading data from D:\PycharmProjects\runs\detect\val15\predictions.json

yuebaiqinghui commented 2 months ago

I looked through the docs, but I did not find anything about 'Load the Modified JSON'.

glenn-jocher commented 2 months ago

Hello @yuebaiqinghui,

To clarify, the JSON file generated with save_json=True is intended for evaluation purposes and not as a direct input for validation. If you want to validate using modified predictions, you need to convert your JSON predictions into a format that can be used as ground truth annotations.

Here's a step-by-step approach:

  1. Modify the JSON File: Make the necessary changes to your JSON file.
  2. Convert JSON to COCO Format: Ensure your modified JSON follows the COCO format for annotations.
  3. Create a Custom Dataset Configuration: Point to your modified JSON file in the val section.

Example custom_dataset.yaml:

path: ../datasets/your_dataset
train: images/train
val: path/to/your/modified_annotations.json
names:
  0: class_name
  1: another_class_name
  ...
  4. Validate the Model:

from ultralytics import YOLO

# Load your model
model = YOLO("path/to/your/model.pt")

# Validate using the modified annotations
results = model.val(data="path/to/your/custom_dataset.yaml")
print(results.box.map)  # mAP50-95


For more details on the COCO format, you can refer to the COCO dataset documentation: https://cocodataset.org/#format-data

If you encounter any issues, please ensure your dataset configuration and JSON format are correct. Feel free to share a reproducible example if the problem persists.
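As a concrete sketch of the conversion step, one practical route is to turn the detections JSON (the save_json output format quoted above) into YOLO-format label .txt files, which the validator can read as ground truth. The paths below are placeholders, and the image_id-to-filename/size mapping is assumed to come from your original COCO annotations file, since the predictions JSON does not carry it:

import json
from pathlib import Path

pred_json = "path/to/your/modified.json"              # COCO-style detections from save_json=True
anno_json = "path/to/your/original_annotations.json"  # hypothetical original COCO annotations
label_dir = Path("../datasets/your_dataset/labels/val")
label_dir.mkdir(parents=True, exist_ok=True)

# Map image_id -> image record; the detections carry no filenames or sizes
images = {im["id"]: im for im in json.load(open(anno_json))["images"]}

for det in json.load(open(pred_json)):
    im = images[det["image_id"]]
    x, y, w, h = det["bbox"]  # COCO bbox: top-left x, top-left y, width, height (pixels)
    # YOLO label line: class cx cy w h, normalized to [0, 1]
    # Note: on COCO itself, category_id is the COCO 91-class id, not the 0-based class index
    cx, cy = (x + w / 2) / im["width"], (y + h / 2) / im["height"]
    row = f"{det['category_id']} {cx:.6f} {cy:.6f} {w / im['width']:.6f} {h / im['height']:.6f}\n"
    with open(label_dir / f"{Path(im['file_name']).stem}.txt", "a") as f:
        f.write(row)

The dataset YAML's val entry would then point at the matching images directory (labels are resolved from the parallel labels folder by convention), which avoids the FileNotFoundError raised when val points directly at a JSON file.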

yuebaiqinghui commented 2 months ago

Thanks a lot, I will try it. If I fail again, can I use the 'coco_eval.evaluate()' method directly? I saw the eval_json(self, stats) function in ultralytics/models/yolo/detect/val.py, but the breakpoint I set in this method was never hit during validation.

glenn-jocher commented 2 months ago

Hello @yuebaiqinghui,

You're on the right track! If you encounter issues with the modified JSON approach, you can indeed use the coco_eval.evaluate() method directly for evaluation. This method is designed to handle COCO-style annotations and can provide you with the necessary metrics.

Here's a quick example of how you might use it:

from ultralytics.yolo.utils.metrics import coco_eval

# Assuming you have your predictions and ground truth in COCO format
predictions = 'path/to/your/predictions.json'
ground_truth = 'path/to/your/ground_truth.json'

# Evaluate
coco_eval.evaluate(predictions, ground_truth)

Make sure your JSON files are correctly formatted according to the COCO standard. If you need further assistance, please provide a reproducible example so we can help you more effectively. You can find guidance on creating a minimum reproducible example here.
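To make "correctly formatted" concrete, here is a minimal sketch that writes a COCO-style ground-truth file in Python; the field set is illustrative rather than exhaustive, and the values echo the predictions snippet quoted earlier:

import json

# Minimal COCO ground-truth skeleton (illustrative fields, not exhaustive)
ground_truth = {
    "images": [{"id": 601, "file_name": "000601.jpg", "width": 640, "height": 640}],
    "annotations": [{
        "id": 1, "image_id": 601, "category_id": 0,
        "bbox": [83.209, 448.881, 60.56, 60.558],  # x, y, width, height in pixels
        "area": 60.56 * 60.558, "iscrowd": 0,
    }],
    "categories": [{"id": 0, "name": "class_name"}],
}
with open("path/to/your/ground_truth.json", "w") as f:
    json.dump(ground_truth, f)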

Best of luck, and feel free to reach out if you have more questions! 😊

DWendou commented 1 month ago

In ultralytics==8.2.74, coco_eval cannot be imported, or do I have the wrong version?
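
For what it's worth: in Ultralytics 8.2.x the old ultralytics.yolo namespace is gone and no coco_eval module is exposed, so that import is expected to fail regardless of version. The COCO evaluation inside eval_json is performed with the separate pycocotools package (and eval_json is typically only triggered on COCO validation runs with save_json=True, which would also explain the breakpoint never being hit). A minimal sketch calling pycocotools directly, with placeholder paths, assuming both files follow the COCO format:

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths: ground truth must be a full COCO annotations file,
# predictions a COCO-style detections list (the save_json output format)
anno = COCO("path/to/your/ground_truth.json")
pred = anno.loadRes("path/to/your/predictions.json")

coco_eval = COCOeval(anno, pred, "bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP/AR metrics, including mAP50-95 and mAP50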