SMZCC opened this issue 7 years ago
Install opencv-python, then replace the loop with the following code:
import cv2  # requires opencv-python

try:
    for i in range(0, int(len(train_box) / BATCH_SIZE)):
        cur_batch = sess.run(batch_queue)
        start_time = time.time()
        [batch_loss, fc4] = sess.run([tracknet.loss, tracknet.fc4],
                                     feed_dict={tracknet.image: cur_batch[0],
                                                tracknet.target: cur_batch[1],
                                                tracknet.bbox: cur_batch[2]})
        img = cv2.imread(train_search[i])
        THeight, TWidth = img.shape[:2]
        point1 = (int(fc4[0][0] * TWidth / 10), int(fc4[0][1] * THeight / 10))
        point2 = (int(point1[0] + fc4[0][2] * TWidth / 10), int(point1[1] + fc4[0][3] * THeight / 10))
        gtpoint1 = (int(train_box[i][0] * TWidth / 10), int(train_box[i][1] * THeight / 10))
        gtpoint2 = (int(gtpoint1[0] + train_box[i][2] * TWidth / 10), int(gtpoint1[1] + train_box[i][3] * THeight / 10))
        cv2.rectangle(img, point1, point2, (255, 0, 0), 2)  # draw predicted rect.
        cv2.rectangle(img, gtpoint1, gtpoint2, (0, 255, 255), 2)  # draw gt rect.
        fileName = './result/output%d.jpg' % i
        cv2.imwrite(fileName, img)
        cv2.imshow("img", img)
        logging.info('batch box:\n %s' % (fc4))
        logging.info('gt batch box:\n %s' % (cur_batch[2]))
        logging.info('batch loss = %f' % (batch_loss))
        logging.debug('test: time elapsed: %.3fs.' % (time.time() - start_time))
except tf.errors.OutOfRangeError:
    # except clause not shown in the original snippet; queue exhaustion
    # is the usual stop condition for a TF input queue
    pass
@812015941 Thank you very much; however, the result is so bad (;´༎ຶД༎ຶ`). Is the ground truth really correct?
I believe so, dude LOL. Now I am trying to make it work like old-school trackers: give it the initial bounding box and image, and on every loop it returns an updated bounding box.
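That old-school loop can be sketched roughly as below. `predict_box` is a hypothetical stand-in for a session run of the network; the real code would crop the frame around the previous box and feed tracknet, which this toy version does not do.

```python
def track(frames, init_box, predict_box):
    """Simple tracking loop: seed with init_box, then let the
    model return an updated box on every frame."""
    box = init_box
    boxes = []
    for frame in frames:
        box = predict_box(frame, box)  # model updates the box
        boxes.append(box)
    return boxes

# Toy usage: a "model" that just shifts the box right by 1 pixel per frame.
drift = lambda frame, box: (box[0] + 1, box[1], box[2] + 1, box[3])
print(track(range(3), (0, 0, 10, 10), drift))
# → [(1, 0, 11, 10), (2, 0, 12, 10), (3, 0, 13, 10)]
```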
I think you made a little mistake in your code for visualizing the images with the bounding boxes.
For the second ground-truth point you added gtpoint1 to coordinates that are already the bottom-right vertex of the rectangle. It should look like this:
gtpoint2 = (int(train_box[i][2] * TWidth / 10), int(train_box[i][3]* THeight / 10))
Then the gtbox should look better.
Also you need to add:
cv2.waitKey(1)
after
cv2.imshow("img", img)
to actually view the image in the OpenCV window.
A similar change needs to be made to point2:
point2 = (int(fc4[0][2] * TWidth / 10), int(fc4[0][3] * THeight / 10))
This will fix the predicted bbox.
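Putting both fixes together: the network output and train_box are in (x1, y1, x2, y2) format scaled to [0, 10] units, so a small helper can convert either box to pixel corners. The helper name and the assumption about the box format come from the corrected lines above, not from the repo itself.

```python
def to_pixel_corners(box, width, height, scale=10.0):
    """Convert an (x1, y1, x2, y2) box in [0, scale] units
    to integer pixel corner points (top-left, bottom-right)."""
    x1 = int(box[0] * width / scale)
    y1 = int(box[1] * height / scale)
    x2 = int(box[2] * width / scale)
    y2 = int(box[3] * height / scale)
    return (x1, y1), (x2, y2)

# Example: a box covering the central area of a 640x480 search image.
p1, p2 = to_pixel_corners([2.5, 2.5, 7.5, 7.5], 640, 480)
# p1 == (160, 120), p2 == (480, 360)
```

Both corners can then be passed straight to cv2.rectangle for the predicted and the ground-truth box.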
I downloaded the model and ran load_and_test.py. Nothing is printed before the program ends, which makes me suspect I did something wrong. How can I see the progress of testing/training? For example, a window showing the video with the predicted bounding box on it. Thanks a lot!