Closed vismayaps closed 1 year ago
Hello vismayaps, can you please tell me how you created a customized tf.data.Dataset for RetinaNet? tfds.load only works for ready-to-use datasets such as COCO 2017. I am also working with custom images, but while training I am getting loss = 0.0000e+00 for all epochs. It would be great if you could help me out here.
@vismayaps this should help you run inference with your own images
image_path = ...  # path to your image file
image = tf.image.decode_image(tf.io.read_file(image_path), channels=3)
image = tf.cast(image, dtype=tf.float32)
input_image, ratio = prepare_image(image)
detections = inference_model.predict(input_image)
num_detections = detections.valid_detections[0]
class_names = [
    int2str(int(x)) for x in detections.nmsed_classes[0][:num_detections]
]
visualize_detections(
    image,
    detections.nmsed_boxes[0][:num_detections] / ratio,
    class_names,
    detections.nmsed_scores[0][:num_detections],
)
Hello vismayaps, thank you for helping me out.
I don't understand the box_variance in the LabelEncoder and DecodePredictions classes. In LabelEncoder I saw `box_target = box_target / self._box_variance`, and in DecodePredictions I saw `boxes = box_predictions * self._box_variance`. Could you please explain why we need box_variance?
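(For anyone else wondering: box_variance is a fixed per-coordinate scaling, carried over from earlier SSD/Faster R-CNN implementations, that normalizes the regression targets to roughly unit magnitude during training; the decoder multiplies by the same constants to undo it. A NumPy sketch of the round trip, with made-up anchor and box numbers rather than the example's exact code:)

```python
import numpy as np

# Fixed scaling constants for [cx, cy, w, h] target components.
box_variance = np.array([0.1, 0.1, 0.2, 0.2])

# One anchor and one ground-truth box, both in [cx, cy, w, h] format.
anchor = np.array([50.0, 50.0, 20.0, 20.0])
gt = np.array([54.0, 48.0, 24.0, 18.0])

# Encoding (LabelEncoder side): offsets relative to the anchor,
# then divided by the variance so each component has similar scale.
target = np.concatenate([
    (gt[:2] - anchor[:2]) / anchor[2:],
    np.log(gt[2:] / anchor[2:]),
]) / box_variance

# Decoding (DecodePredictions side): multiply by the same variance,
# then undo the anchor-relative transform.
pred = target * box_variance
decoded = np.concatenate([
    pred[:2] * anchor[2:] + anchor[:2],
    np.exp(pred[2:]) * anchor[2:],
])
print(decoded)  # recovers the ground-truth box: [54. 48. 24. 18.]
```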
Hello @mj1156,
I am having the same issue as you. I could get it running with my custom dataset, but I am getting loss = 0.0000e+00. Did you solve this issue? Any ideas @vismayaps ?
Hello @gaburiero
I think I got loss = 0.0000e+00 for two reasons: 1. dataset labelling was missing in my input pipeline, and 2. the predefined anchor box values are tuned to the COCO dataset.
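(To illustrate reason 2: if none of the predefined anchors overlap your objects well enough, e.g. because your boxes are much smaller than the COCO-tuned 32–512 px anchor areas, no anchor is ever assigned a positive match and the loss can collapse toward zero. A rough NumPy illustration with made-up boxes, not the example's exact matching code:)

```python
import numpy as np

def iou(anchors, box):
    # IoU between each anchor [x1, y1, x2, y2] and one ground-truth box.
    x1 = np.maximum(anchors[:, 0], box[0])
    y1 = np.maximum(anchors[:, 1], box[1])
    x2 = np.minimum(anchors[:, 2], box[2])
    y2 = np.minimum(anchors[:, 3], box[3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    a = (anchors[:, 2] - anchors[:, 0]) * (anchors[:, 3] - anchors[:, 1])
    b = (box[2] - box[0]) * (box[3] - box[1])
    return inter / (a + b - inter)

# The smallest COCO-style anchors are around 32x32; a tiny 4x4 object
# never reaches the 0.5 positive-match threshold, so every anchor stays
# background and the box-regression loss never fires.
anchors = np.array([[0, 0, 32, 32], [0, 0, 64, 64]], dtype=float)
tiny_gt = np.array([2.0, 2.0, 6.0, 6.0])  # 4x4 object
print(iou(anchors, tiny_gt).max())  # far below the 0.5 threshold
```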
Hello @mj1156 ,
Thanks for the feedback. I could make it work after some effort. It turns out that my dataset was in VOC format and I had used a converter script to get COCO format; the conversion was not compatible with the script, so I had to modify the preprocess method for it to accept my dataset, and after some tries I could make it work.
My problem is single-class detection, and the confidence is pretty low (around 0.1). Is there any way to improve it? I only trained for 10 epochs.
Hi @mj1156, I also get loss: 0.0000e+00.
How did you manage to change the predefined anchor box values? I only changed num_classes from 80 to 1 (since I have only one class).
Did you or @gabrielpeixoto-cvai manage to train the model successfully for your custom data?
Hello @adavradou,
Yes, I could make it work. However, I am not sure if my anchor box change was successful (I think so, but I don't know how to prove it). For the anchor boxes, I changed the constructor of the AnchorBox class:
def __init__(self):
    self.aspect_ratios = [0.5, 1.0, 2.0]
    self.scales = [2 ** x for x in [0, 1 / 3, 2 / 3]]
    self._num_anchors = len(self.aspect_ratios) * len(self.scales)
    self._strides = [2 ** i for i in range(3, 8)]
    self._areas = [x ** 2 for x in [32.0, 64.0, 128.0, 256.0, 512.0]]
    self._anchor_dims = self._compute_dims()
and played with the areas, scales, and aspect ratios. You can print the resulting anchor boxes and sizes from the get_anchors method.
However, the confidence is still low, which is not a problem for my application, but I tried everything I read online (increasing the number of samples, changing the anchor boxes, etc.) and had no success. Let me know if you have any success in the future.
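(Playing with those values is easier if you can see the widths and heights they produce. A small standalone sketch, NumPy only, mirroring my understanding of the example's _compute_dims logic rather than its exact code:)

```python
import numpy as np

# Same knobs as the AnchorBox constructor above.
aspect_ratios = [0.5, 1.0, 2.0]
scales = [2 ** x for x in [0, 1 / 3, 2 / 3]]
areas = [x ** 2 for x in [32.0, 64.0, 128.0, 256.0, 512.0]]

anchor_dims = []
for area in areas:
    for ratio in aspect_ratios:
        h = np.sqrt(area / ratio)  # smaller ratio -> taller anchor
        w = area / h               # width chosen to keep the target area
        for scale in scales:
            anchor_dims.append((w * scale, h * scale))

# e.g. the first level (area 32x32) with ratio 1.0 and scale 1.0 gives
# a 32x32 anchor; printing these helps sanity-check custom settings.
print(len(anchor_dims), anchor_dims[3])
```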
@mj1156 can you provide details on how you set up a pipeline to load data for a custom dataset?
This issue is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.
This issue was closed because it has been inactive for 28 days. Please reopen if you'd like to work on this further.
Hi, first of all thank you for the awesome content. I was wondering how to test the model on custom images once training is done. I've completed the training part and it performs detection on the example images given. I would like to test the RetinaNet model on my own image dataset. Any help would be appreciated. Thanks in advance.