-
Hi Sid, great work! I am trying to understand the codebase and have quite a few questions and points of confusion. Please reply:
1. Is the TCC model trained separately for each task of each dataset? (e.g. i…
-
I source both `rvm` and [`nvm`](https://github.com/creationix/nvm) so my `PATH` looks like:
```
/home/ajcrites/.nvm/v5.5.0/bin:/home/ajcrites/.npm/bin:/home/ajcrites/.rvm/gems/ruby-2.0.0-p353/bin
```…
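When several version managers prepend to `PATH` like this, a quick way to see the ordering and which binary actually resolves first is to ask the shell directly (the commands below are a generic sketch, not specific to this setup):

```shell
# Print each PATH entry on its own line to inspect ordering;
# earlier entries shadow later ones for commands with the same name.
echo "$PATH" | tr ':' '\n'

# Ask the shell which binary resolves first for a given command.
command -v node
command -v ruby
```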
-
Limitless Audio Format is a newer audio format intended to be a free, open, and, as the name implies, largely limitless alternative to the currently available object-based audio formats. Providing suppor…
-
I'm getting this error while using the EGL rendering backend:
`mujoco.FatalError: Offscreen framebuffer is not complete, error 0x8cdd`
The error is rare (but still fatal for a long job) so not s…
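Since the failure is intermittent, one workaround (a sketch of my own, not part of MuJoCo; the helper name `retry` and the retry counts are assumptions) is to wrap the offscreen-context creation in a small retry loop so a rare transient failure doesn't kill a long job:

```python
import time

def retry(fn, exceptions, attempts=3, delay=1.0):
    """Call fn(); on one of `exceptions`, wait `delay` seconds and retry.

    Re-raises the last error once `attempts` tries are exhausted.
    """
    for i in range(attempts):
        try:
            return fn()
        except exceptions:
            if i == attempts - 1:
                raise  # out of retries: surface the last error
            time.sleep(delay)

# In a MuJoCo job one could wrap the context creation, e.g.:
#   ctx = retry(lambda: mujoco.GLContext(640, 480), (mujoco.FatalError,))
```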
-
Hi~ Thanks for your great work.
I am playing with the locomotion soccer environment, but I don't understand the meaning of some of the observation variables.
They look like the following:
```
"…
-
Hi @moberweger ,
Thank you for the code. I am trying to run the V2V-PoseNet algorithm on the First-Person Hand Action Benchmark, but the authors mention DeepPrior++ for center extraction.
I am trying …
-
This issue is intended to collect related papers appearing in ICCV'23/NeurIPS'23 and recent papers missed out in the survey. Please feel free to post comments if you have any suggestions!
-
-
Hello, I have read your paper "InAViT: Interaction Region Visual Transformer for Egocentric Action Anticipation" carefully. I ran into difficulties in extracting hand and object box features during the pr…
-
Hi,
I see that you use version0 under the folder datasets/labels, but when I download the CharadesEgo dataset I get version1 labels. Which version did you use to produce the results in the paper?
Th…