Closed AntonBock closed 2 years ago
Hi @AntonBock
To test the policy in simulation or in the real world, you only need to compose the observation/state space from the information provided by the environment (sensors, etc.) and apply the actions taken by the policy back to the environment (controllers)...
So, it all depends on the environment you have... Could you please provide more information about your simulated environment and the real one? Are you planning to use a middleware, for example ROS, to control the robot in your environment?
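In general, deployment reduces to a read-infer-act loop. A minimal sketch (the `policy`, `get_observation`, and `apply_action` callables are hypothetical placeholders for your trained agent, your sensor pipeline, and your controller interface, not skrl API):

```python
def run_policy(policy, get_observation, apply_action, num_steps=100):
    """Generic evaluation loop: read sensors, query the policy, actuate.

    policy          -- callable mapping an observation to an action
    get_observation -- callable composing the observation from the environment
    apply_action    -- callable forwarding the action to the controllers
    """
    obs = get_observation()
    for _ in range(num_steps):
        action = policy(obs)      # deterministic inference step
        apply_action(action)      # send the action to the robot/simulator
        obs = get_observation()   # read the next observation
```

The same loop works for simulation and the real robot; only the two environment-facing callables change.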
Hi again,
Thanks for the quick response!
We use ROS to control a Franka Panda arm. Using your ppo_Franka_Cabinet.py example, we have trained a Franka robot in Isaac Gym.
How could we modify the franka_cabinet example to use observations we get from ROS, instead of wrapping Gym and getting the observations from there?
Hi @AntonBock
I think the following questions are relevant for testing in the real world:

- Can you compose, from ROS, the same observation space the simulation uses (in the FrankaCabinet task it is `self.cfg["env"]["numObservations"] = 23`)?
- Can you control the robot at the same frequency as the simulation step (`dt: 0.0166 # 1/60 seconds`)?
- How do you intend to control the robot: setting the joints directly using a ROS topic, or with MoveIt?

Well... I think creating a separate environment for testing in the real world, one that uses ROS to build the observation space and control the robot, is the best solution... Currently, I am creating such a testing environment (in the real world) using ROS... I think we can discuss its implementation here the day after tomorrow...
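The separate real-world environment described above could take a gym-like shape. A sketch with the ROS calls stubbed out as comments (the class name, topic choices, and helper methods are assumptions for illustration, not skrl or franka_cabinet API; only the observation size, action size, and `dt` come from the task discussed here):

```python
class RealFrankaCabinetEnv:
    """Gym-like wrapper for real-world evaluation: builds the 23-dim
    observation from ROS data and forwards actions to the controllers."""

    NUM_OBS = 23      # must match self.cfg["env"]["numObservations"]
    NUM_ACTS = 9      # Franka Panda: 7 arm joints + 2 gripper joints
    DT = 1.0 / 60.0   # control period used in simulation (dt: 0.0166)

    def __init__(self):
        # e.g. rospy.init_node(...), subscribe to /joint_states,
        # create a publisher for the joint command topic
        self._latest_joint_state = [0.0] * self.NUM_ACTS

    def _build_observation(self):
        # compose the same quantities the simulated task observes
        # (joint positions/velocities, drawer pose, ...); zeros as stand-ins
        return [0.0] * self.NUM_OBS

    def reset(self):
        # move the robot to its initial pose (e.g. via MoveIt), then observe
        return self._build_observation()

    def step(self, action):
        assert len(action) == self.NUM_ACTS
        # publish `action` to the controller topic, then sleep DT
        obs = self._build_observation()
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info
```

Keeping the `reset`/`step` interface identical to the simulated task lets the trained policy run unchanged against either environment.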
Hi @Toni-SM
Those are some of the same considerations we have made, and we should be able to get all the required information and rewards at the correct frequency.
We look forward to hearing about your ROS environment tomorrow.
Hello,
We have trained a policy that we would like to test on a real-world setup. Does SKRL have any built-in support for this, or do you have any recommended method of doing this?
-Anton