hysts / pytorch_mpiigaze

An unofficial PyTorch implementation of MPIIGaze and MPIIFaceGaze
MIT License
346 stars 86 forks

how to get my own camera params? #11

Closed huxian0402 closed 4 years ago

huxian0402 commented 4 years ago

Good work! But how can I get my own camera parameters to calibrate my camera in practice? @hysts

hysts commented 4 years ago

Hi, @huxian0402. You can calibrate your camera using a checkerboard pattern and OpenCV functions, as described in these articles: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html https://www.learnopencv.com/camera-calibration-using-opencv/
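For reference, the calibration pipeline from those tutorials can be sketched roughly as below. The 9x6 inner-corner count and the image paths are assumptions; adjust them to your own printed pattern and photos.

```python
import numpy as np

# 3D reference points of the inner chessboard corners on the z = 0 plane,
# in "square" units (multiply by the square size in mm for metric output).
PATTERN = (9, 6)  # inner corners per row / column (assumed)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)


def calibrate(image_paths):
    """Return (camera_matrix, dist_coeffs) from checkerboard photos."""
    import cv2  # imported here so the grid setup above runs without OpenCV

    obj_points, img_points, size = [], [], None
    for path in image_paths:
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    _, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, size, None, None)
    return camera_matrix, dist_coeffs
```

Ten to twenty photos of the board at different angles and distances are usually enough for a stable estimate.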

huxian0402 commented 4 years ago

Thanks a lot, @hysts. I have trained with resnet_preact_train_using_all_data.yaml for 40 epochs on MPIIGaze, then evaluated checkpoint_0040.pth with resnet_preact_eval.yaml. The result is that the mean angle error (deg) is less than 2.00 for every person ID, which looks good. But then why is the Mean Test Angle Error [degree] in your results 5.73?

hysts commented 4 years ago

@huxian0402 Ah, resnet_preact_train_using_all_data.yaml is meant for actual gaze estimation with a webcam, not for training/evaluation experiments. With that YAML file, the model is trained on data from all 15 people, so your score is measured on the training data itself; that's why the test error looks so low. For MPIIGaze/MPIIFaceGaze, generalization to new, unseen people is measured with leave-one-person-out evaluation: the model is trained on data from 14 people and tested on the data of the remaining person.
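The leave-one-person-out protocol described above can be sketched as follows. The person IDs and the train/eval callables are placeholders for illustration, not part of the actual repository code.

```python
def leave_one_person_out(person_ids, train_fn, eval_fn):
    """Train on all but one person, test on the held-out person."""
    errors = {}
    for test_id in person_ids:
        train_ids = [p for p in person_ids if p != test_id]
        model = train_fn(train_ids)                 # fit on 14 people
        errors[test_id] = eval_fn(model, test_id)   # test on the 15th
    return errors  # the reported score is the mean over the 15 folds


# Toy usage with dummy train/eval functions:
ids = [f'p{i:02d}' for i in range(15)]
errs = leave_one_person_out(ids, lambda tr: len(tr), lambda m, t: float(m))
```

Averaging the 15 per-person errors gives the single "Mean Test Angle Error" figure reported in the README.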

huxian0402 commented 4 years ago

@hysts Yes, I ran the wrong test, and now I understand. You've really helped me a lot. Thanks again.

hysts commented 4 years ago

Glad to hear that. :)

Kelly-ZH commented 3 years ago


Hi @hysts, I have a question which I hope you can answer. If I have already obtained my own camera parameters, do I modify them directly in sample_params.yaml? Thank you very much. Yours, Kelly

hysts commented 3 years ago

Hi, @Kelly-ZH

That works, but the file is not really meant to be edited directly. You can specify your own camera parameter file here.
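If sample_params.yaml follows the usual OpenCV calibration YAML layout (an assumption worth checking against the actual file in the repo), your own parameter file would look something like this, with fx, fy, cx, cy from the camera matrix and the five distortion coefficients returned by cv2.calibrateCamera (the numeric values below are placeholders):

```yaml
image_width: 640
image_height: 480
camera_matrix:
  rows: 3
  cols: 3
  data: [640., 0., 320.,
         0., 640., 240.,
         0., 0., 1.]
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0., 0., 0., 0., 0.]
```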

Kelly-ZH commented 3 years ago


Thank you for your quick reply. I have two other questions:

  1. I have trained ResNet and LeNet on the MPIIGaze dataset, but my evaluation numbers are quite different from yours. My ResNet model works better than my LeNet model, which confuses me a little.
  2. How do I draw a figure like this one?

hysts commented 3 years ago

@Kelly-ZH

  1. Not sure what you mean. It's not strange that ResNet outperforms LeNet. Could you be more specific?
  2. You can use matplotlib to draw such figures:
import pathlib

import matplotlib.pyplot as plt
import numpy as np

# Collect the per-person angle errors written by the evaluation script.
exp_rootdir = pathlib.Path('experiments/mpiigaze/lenet/exp00')
exp_dirs = sorted(exp_rootdir.glob('*'))
angle_errors = []
for exp_dir in exp_dirs:
    with open(exp_dir / 'eval/error.txt') as f:
        angle_errors.append(float(f.read()))
angle_errors = np.array(angle_errors)

# One bar per person, with a dashed line at the mean error.
plt.bar(np.arange(len(angle_errors)),
        angle_errors,
        color=plt.rcParams['axes.prop_cycle'].by_key()['color'])
plt.hlines(angle_errors.mean(), -0.5, 14.5, ls='--', color='blue')
plt.grid(alpha=0.5)
plt.xticks(np.arange(15), np.arange(15))
plt.xlabel('Person ID')
plt.ylabel('Angle Error [deg]')
plt.title('LeNet\n' f'Mean Angle Error: {angle_errors.mean():.2f} [deg]')
plt.show()

Kelly-ZH commented 3 years ago


Regarding question 1, I specify it below:

hysts commented 3 years ago

@Kelly-ZH

Hmm... I'm confused. You said your evaluation results were quite different from mine, but your results seem to be right in line with mine. As you may know, ResNet has a lot more capacity than LeNet, so it's natural for ResNet to outperform LeNet. In fact, the results are exactly what we'd expect.

Kelly-ZH commented 3 years ago


Thank you for your reply. I found the mistake: the model didn't match the evaluation data. I really appreciate your kindness.