-
Hi, thank you so much for the great contributions to this space. I was wondering if there is any way to download only the audio (no video), either take- or unprocessed-level via a CLI argument. I have…
-
You should identify and read a few (3-5) scientific papers or works that are similar to your research.
I think that in the egocentric vision field, the term "attention" is often used as a concept similar to …
-
Hi,
Sorry for bothering you again, but I have an issue with the visualisation of the egocentric alignment of my data. When I run
`vame.egocentric_alignment(config, pose_ref_index=[0,10], use_video…
-
Thanks for your great work!
I read the CHORE paper and found that you have completed several related works... respect!
I am very interested in scene information for human behavior, and have a …
-
I'd like to run visualization.py, but I was surprised to find that the pre-released dataset doesn't match the code.
1. tool_model should be in format join(object_model_root, target_name + "_cm.obj") and …
-
**Is your feature request related to a problem? Please describe.**
Visualisation of potential satellite flares (eg Starlink)
**Describe the solution you'd like**
When selecting a satellite, the p…
-
Hi, I find this paper very interesting. I'm part of a research program at the University of Barcelona, and we want to try this network for egocentric vision. We have our dataset in a folder struct…
-
https://arxiv.org/pdf/1712.04961.pdf
NHZlX updated 6 years ago
-
Hi, thanks for your interest in adding a new **survey** to the list.
You can either fork the repository and add it yourself (in the Data folder, there is a CSV file, "surveys.csv") or co…
-
Hello, thanks for your wonderful work! I'm interested in the egocentric action recognition task and am trying to do some further research based on this project. Could you please provide the script of fi…