dimitris-christodoulou57 / Model-aware_3D_Eye_Gaze

MIT License
33 stars · 3 forks

Is there more recent code? #1

Open B-IVALD opened 5 months ago

B-IVALD commented 5 months ago

Hello @dimitris-christodoulou57, I'm trying to run this repo with the goal of inputting my own image for inference and outputting an overlaid image. Is this the most up-to-date package and documentation you have? The reason I ask is because:

  • I've run into numerous issues even trying to run a test session using a subset of TEyeD images.
  • I'm having trouble figuring out which args need to be specified for testing.
  • The visualization code seems to be separate from, and incomplete relative to, the rest of the repo.

Any updates/additional information would be amazing. Thanks!

Heena-S-Patel commented 2 months ago

Could you please provide updated code? The provided scripts don't work for inference or training.

dimitris-christodoulou57 commented 2 months ago

Hello @Heena-S-Patel, we're sorry to hear that.

Could you please provide more details about the errors or problems you've encountered?

I've added the script 'run_job.sh' to provide more insight into the arguments required. Could you check this?

dimitris-christodoulou57 commented 2 months ago

Hello @B-IVALD, this is the most current version of the package and documentation.

  1. For testing, you need to specify the arguments --only_test=1 and --path_model= (set to your model checkpoint), and adjust the model parameters appropriately based on your setup and the model configuration you're testing; see the sketch after this list. Please note that our code also performs validation checks during the testing phase!
  2. Regarding visualization, you can refer to the eye_model.ipynb notebook, which lets you explore the 3D eye model implementation and adjust its various parameters to visualize their effects directly. There is a section of commented-out code inside the 'render_semantics' function, and some functions in the gaze_estimation.py file, that were used for direct visualization of the model output during an interactive session. This code was commented out during training, but you can uncomment those lines and use the provided functions in your tests to see the visualization results directly. A standalone overlay sketch is also given below.
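
For item 1, a minimal, hypothetical invocation (wrapped in Python's subprocess for illustration): only --only_test and --path_model are confirmed above; the checkpoint filename and the --model flag name are placeholders to cross-check against run_job.sh.

```python
import subprocess

# Hypothetical example: the checkpoint name and the --model flag are
# illustrative placeholders; take the real argument names from run_job.sh.
cmd = [
    "python", "run.py",
    "--only_test=1",                            # run the test loop only
    "--path_model=weights/seg_gaze_center.pt",  # your checkpoint (example name)
    "--model=res_50_3",                         # must match the checkpoint's architecture
]
subprocess.run(cmd, check=True)
```

For item 2, if you only need an overlaid output image (the goal stated above) and don't want to wire up the repo's render_semantics path, a generic sketch that blends a segmentation mask over an eye image with matplotlib works independently of the repo (inputs here are stand-ins):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical inputs: a grayscale eye image and a per-pixel class mask
# (e.g. 0=background, 1=sclera, 2=iris, 3=pupil), both HxW numpy arrays.
image = np.random.rand(240, 320)             # stand-in for your eye image
mask = np.random.randint(0, 4, (240, 320))   # stand-in for the predicted mask

plt.imshow(image, cmap="gray")
plt.imshow(mask, cmap="jet", alpha=0.35)     # semi-transparent class overlay
plt.axis("off")
plt.savefig("overlay.png", bbox_inches="tight")
```
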
Heena-S-Patel commented 2 months ago

Thank you so much for your kind response. I'll check the eye_model.ipynb file and update you soon.


Heena-S-Patel commented 2 months ago

Sure, I'll check the run_job.sh script.

Heena-S-Patel commented 2 months ago

You've provided three pre-trained models, but it's unclear which architecture corresponds to each model. Could you please specify which architecture corresponds to each of the models you've provided?

Additionally, I want to confirm that I'm currently working with the Segmentation + Gaze + Center model and am considering the res_50_3 architecture. Clarifying which pre-trained model corresponds to which architecture will help ensure I select the appropriate one for each task. Thank you.

Heena-S-Patel commented 2 months ago

I've set the parameters according to your suggestions, but I'm getting this error while testing:

```
Traceback (most recent call last):
  File "run.py", line 187, in <module>
    train(args, path_dict, validation_mode=False, test_mode=True)
  File "/home/openx/Documents/Projects/3DEyeGaze/Model-aware_3D_Eye_Gaze/main.py", line 183, in train
    train_validation_loops(net,
  File "/home/openx/Documents/Projects/3DEyeGaze/Model-aware_3D_Eye_Gaze/main.py", line 487, in train_validation_loops
    test_result = forward(net,
  File "/home/openx/Documents/Projects/3DEyeGaze/Model-aware_3D_Eye_Gaze/scripts.py", line 188, in forward
    out_dict, out_dict_valid = net(data_dict, args)
  File "/home/openx/miniconda3/envs/3deyegaze/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/openx/Documents/Projects/3DEyeGaze/Model-aware_3D_Eye_Gaze/models/res_50_3/res_50_3.py", line 185, in forward
    elOut, elConf = self.elReg(enc_op[-1])
  File "/home/openx/miniconda3/envs/3deyegaze/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/openx/Documents/Projects/3DEyeGaze/Model-aware_3D_Eye_Gaze/models/regresion_module.py", line 62, in forward
    x = self.conv_ops(x)
  File "/home/openx/miniconda3/envs/3deyegaze/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/openx/miniconda3/envs/3deyegaze/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/home/openx/miniconda3/envs/3deyegaze/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/openx/Documents/Projects/3DEyeGaze/Model-aware_3D_Eye_Gaze/models/basic_blocks.py", line 101, in forward
    x = self.conv(x)
  File "/home/openx/miniconda3/envs/3deyegaze/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/openx/miniconda3/envs/3deyegaze/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 447, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/openx/miniconda3/envs/3deyegaze/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 443, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [153, 153, 3, 3], expected input[4, 2048, 8, 10] to have 153 channels, but got 2048 channels instead
```
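
For reference, this is PyTorch's generic channel-mismatch error: a Conv2d built for 153 input channels received the 2048-channel feature map that a ResNet-50 encoder's final stage emits, which typically means the model-parameter arguments at test time don't match those used to train the checkpoint. A minimal sketch reproducing the same error, with shapes taken from the message above:

```python
import torch
import torch.nn as nn

# The regression head's conv was built for 153-channel inputs
# (weight of size [153, 153, 3, 3], as in the error message)...
reg_conv = nn.Conv2d(in_channels=153, out_channels=153, kernel_size=3, padding=1)

# ...but it receives the encoder's final feature map of shape [4, 2048, 8, 10].
features = torch.randn(4, 2048, 8, 10)

reg_conv(features)  # RuntimeError: ... expected input to have 153 channels, but got 2048
```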

krishnaadithya commented 2 months ago

@B-IVALD Where did you get the TEyeD dataset?

B-IVALD commented 2 months ago

@krishnaadithya Instructions are in the README. You connect via FTP.
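
For anyone landing here later: once you have the FTP host and credentials from the README (the host, directory, and filename below are placeholders, not the real ones), a minimal download sketch with Python's ftplib looks like this:

```python
from ftplib import FTP

# Placeholders: substitute the host, credentials, and paths from the README.
HOST = "ftp.example-teyed-server.org"

ftp = FTP(HOST)
ftp.login()                      # anonymous login; use the README credentials if required
ftp.cwd("TEyeD")                 # hypothetical directory name
with open("sample_video.mp4", "wb") as f:
    ftp.retrbinary("RETR sample_video.mp4", f.write)
ftp.quit()
```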