huxian0402 closed this issue 4 years ago
Hi, @huxian0402 You can calibrate your camera using a checkerboard pattern and OpenCV functions as described in these articles. https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html https://www.learnopencv.com/camera-calibration-using-opencv/
Thanks a lot, @hysts. I have trained with resnet_preact_train_using_all_data.yaml for 40 epochs on MPIIGaze, then evaluated checkpoint_0040.pth with resnet_preact_eval.yaml. The result is that the mean angle error (deg) is less than 2.00 for each person ID, which is good. But why is the Mean Test Angle Error [degree] in your results 5.73?
@huxian0402
Ah, resnet_preact_train_using_all_data.yaml is prepared for actual gaze estimation with a webcam, not for training/evaluation experiments.
Using that YAML file, the model is trained on data from all 15 people. Since you then evaluated on data from those same people, the score is measured on the training data itself. That is why the test error seems so low.
In the case of MPIIGaze/MPIIFaceGaze, to measure generalization to new, unseen people, the model is evaluated with leave-one-person-out cross-validation: it is trained on data from 14 people and tested on data from the remaining person.
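Concretely, leave-one-person-out over the 15 MPIIGaze subjects produces 15 train/test splits. A quick sketch (the p00-p14 IDs are just the conventional subject naming; the repo's configs handle this for you):

```python
# Leave-one-person-out: 15 splits, each holding out one subject for testing.
person_ids = [f'p{i:02d}' for i in range(15)]

splits = []
for test_id in person_ids:
    train_ids = [pid for pid in person_ids if pid != test_id]
    splits.append((train_ids, test_id))

# Each split trains on 14 people and tests on the held-out one.
print(len(splits))        # 15 splits
print(len(splits[0][0]))  # 14 training subjects
print(splits[0][1])       # 'p00' held out in the first split
```

The reported score is then the mean test error over all 15 held-out subjects.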
@hysts Yes, I ran the wrong test, and now I understand. You really helped me a lot. Thanks again.
Glad to hear that. :)
Hi hysts, I have a question which I hope you can answer. If I have already obtained my own camera parameters, do I modify them directly in sample_params.yaml? Thank you very much. Yours, Kelly
Hi, @Kelly-ZH
That works, but basically the file is not supposed to be changed directly. You can specify your camera parameter file here.
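For reference, calibration files like this typically follow OpenCV's FileStorage YAML layout, roughly as below. The exact keys expected by the repository may differ, so check sample_params.yaml for the actual schema; the numbers here are placeholders, to be replaced with your own calibration results:

```yaml
%YAML:1.0
image_width: 640
image_height: 480
camera_matrix:
  rows: 3
  cols: 3
  data: [600.0, 0.0, 320.0,
         0.0, 600.0, 240.0,
         0.0, 0.0, 1.0]
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.0, 0.0, 0.0, 0.0, 0.0]
```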
Thank you for your quick reply. And I have two other questions, that is,
@Kelly-ZH
- Not sure what you mean. It's not strange that ResNet outperforms LeNet. Could you be more specific?
- You can use matplotlib to draw such figures:

import pathlib

import matplotlib.pyplot as plt
import numpy as np

# Collect the per-person angle errors written by each evaluation run.
exp_rootdir = pathlib.Path('experiments/mpiigaze/lenet/exp00')
exp_dirs = sorted(exp_rootdir.glob('*'))
angle_errors = []
for exp_dir in exp_dirs:
    with open(exp_dir / 'eval/error.txt') as f:
        angle_errors.append(float(f.read()))
angle_errors = np.array(angle_errors)

# Bar chart of per-person errors, with the mean drawn as a dashed line.
plt.bar(np.arange(len(angle_errors)),
        angle_errors,
        color=plt.rcParams['axes.prop_cycle'].by_key()['color'])
plt.hlines(angle_errors.mean(), -0.5, 14.5, ls='--', color='blue')
plt.grid(alpha=0.5)
plt.xticks(np.arange(15), np.arange(15))
plt.xlabel('Person ID')
plt.ylabel('Angle Error [deg]')
plt.title('LeNet\n' f'Mean Angle Error: {angle_errors.mean():.2f} [deg]')
plt.show()
About question 1, I specify it as below:
@Kelly-ZH
Hmm... I'm confused. You said your evaluation result was quite different from mine, but your result seems to be right in line with mine. As you may know, ResNet has much more capacity than LeNet, so it's natural for ResNet to outperform LeNet. In fact, the results are exactly what we expect.
Thank you for your reply. I found the mistake: the model didn't match the evaluation data. I really appreciate your kindness.
Good work, but how do I get my own camera parameters to calibrate my camera in practice? @hysts