GuangtaoLyu opened this issue 6 months ago
What does your result look like? It seems you attached two images from the paper and the demo.
I ran eval.py and got some .obj and .ply files. I also tried the vis script from the README and got many images, but they show no color and no scene?
You might need to use the original scene mesh to render the results you want, not the point cloud. For example, you can use ./data/GIMO/classroom0219/scene_obj/textured_output.obj, which has color information.
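For instance, a minimal way to view a prediction on the textured scene with Open3D (the body-mesh path below is a placeholder for whatever eval.py wrote out):

```python
import open3d as o3d

# Load the textured scene mesh (not the point cloud) so colors are rendered.
# enable_post_processing is needed for Open3D to pick up the .mtl/texture files.
scene = o3d.io.read_triangle_mesh(
    "./data/GIMO/classroom0219/scene_obj/textured_output.obj",
    enable_post_processing=True,
)
scene.compute_vertex_normals()

# Load one predicted body mesh produced by eval.py (placeholder path).
body = o3d.io.read_triangle_mesh("./results/pred_pose_0.obj")
body.compute_vertex_normals()
body.paint_uniform_color([0.8, 0.3, 0.3])  # tint the body so it stands out

o3d.visualization.draw_geometries([scene, body])
```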
Thank you, I will try it.
How can I visualize the results like this?
Hello, Zheng Yang:
[fig. 1]
As Fig. 1 in your paper shows, you say that you collected a diverse range of daily activities, but I can't find the text descriptions in the dataset; I only find some words in dataset.csv, the same as your reply in issue #3.
Does each motion sequence correspond to only one activity in Fig. 2?
> In our experiments, we predict the future motion in 5 seconds from 3 seconds input, where the first 3 seconds of a trajectory is just about to start an activity (i.e., beginning to move for fetching a book) in our dataset, and in the next 5 seconds the trajectory proceeds to finish the activity. We set the motion frame rate to 2 fps, i.e., 6 pose input and 10 pose output. Note that once the waypoints are predicted, a full motion sequence with high fps can be easily generated [51].
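In other words, at 2 fps the windows work out as below (a minimal sketch; the pose-array shape is hypothetical):

```python
import numpy as np

FPS = 2            # motion frame rate used in the paper
N_IN = 3 * FPS     # 3 s of input   -> 6 poses
N_OUT = 5 * FPS    # 5 s to predict -> 10 poses

# Hypothetical (T, D) array of per-frame pose parameters for one sequence.
poses = np.zeros((N_IN + N_OUT, 63))
inp, target = poses[:N_IN], poses[N_IN:N_IN + N_OUT]
print(inp.shape, target.shape)   # (6, 63) (10, 63)
```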
[fig. 2]
Each motion sequence corresponds to one activity defined in Tab. 2. Some sequences have text descriptions in dataset.csv.
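A quick way to see which sequences have text (a sketch assuming pandas; the actual column names in dataset.csv may differ, so inspect them first):

```python
import pandas as pd

df = pd.read_csv("./data/GIMO/dataset.csv")
print(df.columns.tolist())   # check the real column names first

# Hypothetical column name; replace with one from the printout above.
# described = df[df["description"].notna()]
# print(described)
```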
> How can I visualize the results like this?
For this, project the 3D gaze point onto the images to visualize the 2D gaze.
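A minimal sketch of that projection (the intrinsics/extrinsics below are placeholder values; use the per-frame camera parameters that come with each sequence):

```python
import numpy as np
import cv2

def project_point(p_world, K, R, t):
    """Project a 3D point in world coordinates to pixel coordinates."""
    p_cam = R @ p_world + t    # world -> camera frame
    uvw = K @ p_cam            # camera frame -> image plane
    return uvw[:2] / uvw[2]    # perspective divide

# Placeholder camera parameters.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)

gaze_3d = np.array([0.1, -0.2, 2.0])   # example gaze point in front of the camera
u, v = project_point(gaze_3d, K, R, t)

img = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(img, (int(round(u)), int(round(v))), 6, (0, 255, 0), -1)  # draw 2D gaze
cv2.imwrite("gaze_2d.png", img)
```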
Hello, thank you for your great work. I ran render_blender.py and got the result below. How can I get results like the demo or the pictures in the paper?