Proposal
Could the FrankaKitchen env have an RGB observation mode?
Motivation
I'm doing research on whether world models can be used to solve visual long-horizon tasks. Since Franka Kitchen is a great environment for robotic manipulation, I'd like to know whether it can support vision-based (image) observations.
Pitch
env.step() could return an obs that is an ndarray representing the rendered image.
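Something like this minimal sketch is the kind of usage I have in mind. The wrapper name and arguments are assumptions on my side: Gymnasium has shipped a pixel-observation wrapper under different names across versions (e.g. PixelObservationWrapper, later AddRenderObservation), and the env registration step may also differ by gymnasium-robotics version.

```python
import gymnasium as gym
import gymnasium_robotics  # registers FrankaKitchen-v1 (newer versions may also want gym.register_envs)

# Assumption: wrapper name/signature depends on the Gymnasium version
# (PixelObservationWrapper in older releases, AddRenderObservation in newer ones).
from gymnasium.wrappers import PixelObservationWrapper

env = gym.make("FrankaKitchen-v1", render_mode="rgb_array")
env = PixelObservationWrapper(env, pixels_only=True)

obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
# obs now holds the rendered frame as an (H, W, 3) uint8 ndarray
# (possibly nested under a "pixels" key, depending on the wrapper version).
env.close()
```

A native RGB observation mode in the env itself would avoid depending on a version-specific wrapper.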
Alternatives
The only alternative I can think of is calling the render function to get images back, but I'm not sure about its computational efficiency.
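This is roughly the workaround I mean, assuming the standard Gymnasium render_mode="rgb_array" API, with a crude timing loop to check the rendering overhead:

```python
import time
import numpy as np
import gymnasium as gym
import gymnasium_robotics  # registration step may differ by version

env = gym.make("FrankaKitchen-v1", render_mode="rgb_array")
obs, info = env.reset()

# Roughly measure the cost of grabbing a frame after every step.
n_steps = 100
start = time.perf_counter()
for _ in range(n_steps):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    frame = env.render()  # ndarray, typically (H, W, 3) uint8
    if terminated or truncated:
        obs, info = env.reset()
elapsed = time.perf_counter() - start

print(f"{n_steps / elapsed:.1f} env steps/sec including rendering")
print("frame shape:", np.asarray(frame).shape)
env.close()
```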
Checklist
[x] I have checked that there is no similar issue in the repo (required)