After getting the means and stds of the real sensor, you just need to match the arrow direction between the sim and real sensors. This is to make sure the real sensor is installed on the correct side and in the correct orientation. Refer to examples/UnstableGraspExp/experiment/ug_real.gif for the correct left side.
For more detailed sim-and-real matching, we might have to change the tactile parameters in simulation, but that would be the last resort.
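For instance, a minimal sketch (assuming OpenCV and a per-sensor flow array of shape (2, H, W)) of one way to overlay the mean flow arrows from sim and real side by side and check that they point the same way; the array layout and file names below are assumptions for illustration, not the repo's exact format.

```python
import numpy as np
import cv2

def draw_mean_arrow(flow_uv, img_size=240, color=(0, 255, 0)):
    """Draw the mean marker displacement of one sensor as a single arrow.
    flow_uv: assumed shape (2, H, W) holding per-marker (u, v) displacement."""
    canvas = np.zeros((img_size, img_size, 3), np.uint8)
    du, dv = flow_uv[0].mean(), flow_uv[1].mean()
    center = (img_size // 2, img_size // 2)
    tip = (int(center[0] + du * 50), int(center[1] + dv * 50))  # scaled for visibility
    cv2.arrowedLine(canvas, center, tip, color, 2, tipLength=0.3)
    return canvas

# side-by-side comparison: under the same bar tilt, both arrows should point the same way
sim_flow = np.load('sim_flow.npy')    # hypothetical recording from simulation
real_flow = np.load('real_flow.npy')  # hypothetical recording from the real sensor
cv2.imshow('sim vs real', np.hstack([draw_mean_arrow(sim_flow),
                                     draw_mean_arrow(real_flow, color=(0, 0, 255))]))
cv2.waitKey(0)
```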
If I don't change the tactile parameters in gelslim_left.obj and gelslim_right.obj, how can I match the arrow direction between the sim and real sensors more accurately?
I used env.data_stat() to update the values of self.tactile_means and self.tactile_stds, but the arrow direction of the sim sensor seems very confusing. With the same parameters, the arrow direction of the sim sensor is sometimes the same as the real sensor and sometimes opposite.
Sometimes tactile images like this appear in the simulation environment.
How can I adjust the tactile image in the simulation environment to the result shown in the picture?
Hi Gao Yuan, can you take the same picture as above but with the real robot and sensor?
Simulation goes wild sometimes in extreme cases; we just need to focus on the normal cases. Please help investigate under what circumstances you observe the opposite arrow direction on the real sensor.
Thanks.
After I run "python train_sb3.py ./storage/ug_01-15_13-16 show". The arrow direction in the simulation environment seems to be opposite to the arrow direction in the paper. As shown in the picture, assuming that the gelsight sensor is on the side close to the person, the direction should be clockwise instead of counter clockwise. How can I change the arrow direction of the sensor in the simulation environment? Swapping the sensor locations in the real environment seems to have little effect, because if we assume that the gelsight sensor on the side close to the person will be used, the situation seems to be the same for left and right sensors.
The arrow direction of the sensor in the paper is the same as the arrow direction of the real sensor:
After I run "python train_sb3.py ./storage/ug_01-15_13-16", the arrow direction in the simulation environment seems to be correct. But when I run "python train_sb3.py ./storage/ug_01-15_13-16" , the arrow direction in the simulation environment seems to be opposite.
As shown in the picture below, is the left sensor (the blue one) used in the simulation environment?
May I ask if I can have the complete code for this project via e-mail? I don't seem to have downloaded the latest version of the open tactile simulator, which may have an impact on the simulation results.
After I run "python train_sb3.py ./storage/ug_01-15_13-16", the arrow direction in the simulation environment seems to be correct. But when I run "python train_sb3.py ./storage/ug_01-15_13-16 show" , the arrow direction in the simulation environment seems to be opposite.
In simulation left is the blue sensor: https://github.com/enyen/TactileSimulation/blob/02884291b42af636eae25f003357855002840c9a/envs/assets/unstable_grasp/unstable_grasp.xml#L35
On real robot left is the lit-up sensor: examples/UnstableGraspExp/experiment/ug_real.gif
It seems that you are not using the latest version; try git pull and refer to the README on how to do inference.
After I run "python train_sb3.py ./storage/ug_01-15_13-16", the arrow direction in the simulation environment seems to be correct. But when I run "python train_sb3.py ./storage/ug_01-15_13-16" , the arrow direction in the simulation environment seems to be opposite.
If you have not set up SSH for GitHub, you can download a zip via the green Code button at the top of the repo page.
I have downloaded the latest version and I use the correct left sensor. After running "python train_sb3.py ./storage/ug_01-15_13-16 show", is the arrow direction in the simulation environment correct? (In the picture, the side closer to the person is the blue sensor.) In the real environment, the arrow direction of the blue sensor seems to be counter-clockwise.
I used env.render_tactile() to draw the picture.
Yes, it is correct. You have to look at the sensor from the front, not from the back.
If I want to complete the simplest task related to center-of-mass movement, how many observations do I need to take and average?
30~50 with variable weight
Thank you for your response!
When obtaining the observation values, is the action taken each time random?
Should I only modify the weight of the container, or is it also necessary to adjust the center of mass of the object?
The random action is supposed to change the center of mass, but you could adjust it after every grasp as you like.
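For reference, a hedged sketch of what such a statistics-collection loop could look like; env.data_stat() in the repo already does something similar, and the gym-style env API and the info key below are assumptions for illustration only.

```python
import numpy as np

def collect_tactile_stats(env, n_grasps=40):
    """Collect raw tactile flows over 30~50 grasps with variable load weight,
    then compute their means and stds (roughly what env.data_stat() does).
    The env API (reset/step, raw flow returned via `info`) is an assumption."""
    flows = []
    for _ in range(n_grasps):
        env.reset()                            # assumed to re-randomize load weight / CoM
        action = env.action_space.sample()     # random action moves the grasp point
        _, _, _, info = env.step(action)
        flows.append(info['tactile_flow'])     # hypothetical key holding the raw flow
    flows = np.stack(flows)
    means, stds = flows.mean(axis=0), flows.std(axis=0)
    np.save('means.npy', means)
    np.save('stds.npy', stds)
    return means, stds
```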
Are there any files named "means.npy" and "stds.npy" that I can use as a reference? I took many groups of means and stds of the obs from the real sensors, but it seems difficult for the same RL policy to achieve the same results on the real sensors. There seems to be nothing wrong with the RL policy itself.
I guess the system is not very particular about the exact means and stds.
What do you mean by 'difficult'? Do you have any numbers or videos?
It seems that the means and stds of the obs in the real world influence the performance of the RL policy in the real world.
https://github.com/enyen/TactileSimulation/assets/148218659/dc3a0101-6cea-428a-b5c2-2dd059e30aa8
https://github.com/enyen/TactileSimulation/assets/148218659/1251ac94-7ff4-4dc3-bf40-b04a469c98bd
So I was wondering if you have any files about the obs in the real world that I can use as a reference.
There may be other problems that affect the performance of the RL policy in the real world.
Have you visualized the real sensor before?
cd examples/UnstableGraspExp/marker_flow
python marker_flow.py
We are not using the right sensor, so it is better to unplug it to prevent confusion.
After I run "python marker_flow.py", the tactile image of the sensor on the left will be displayed(GS2).
Before running "python test_ur5", I confirmed that the left sensor(GS2) was used. Did the data from the left and right sensors interfere with each other during the operation?
No, the left and right sensors are totally independent. But there is no reason to keep the right sensor plugged in. Have you tried to match the real sensor image with the simulated one? Are the arrows pointing in the same direction under the same situation?
Thank you for your response! Here are pictures of the real and simulated sensors; they seem to match well. Is that correct?
In the simulation environment: after running "python train_sb3.py ./storage/ug_01-15_13-16 show", the arrow direction of the blue sensor is counter-clockwise. (In the picture, the left sensor is the blue sensor, which is on the side away from the person.)
In the real environment: when we use the left sensor (which is closer to the person), the arrow direction of the left sensor is counter-clockwise.
Thank you for your response! 1. We discussed this problem last week; the conclusion was that the rotation direction of the blue sensor in the simulation environment is correct, and the blue sensor in the simulation environment corresponds to the left sensor in reality. Is that correct?
2. Do you mean the tactile sensors in the two pictures below don't match? In the simulation environment, the arrow direction of the blue sensor is counter-clockwise. In the real environment, the arrow direction of the left sensor is also counter-clockwise.
3. Are there any files named "means.npy" and "stds.npy" that I can use as a reference? The obs of the real sensor may affect the performance of the RL policy in the real world.
I have unplugged the right sensor; only the left sensor remains (the reference document is examples/UnstableGraspExp/experiment/ug_real.gif):
No: in the simulated image (right sensor nearer) the bar is tilted left, while in the real image (left sensor nearer) the bar is tilted right.
The sensor is assigned correctly, but based on your images the bars are tilted in opposite directions between sim and real. So you might need to test again with the same bar tilt setting.
The marker displacement is not very obvious; try to reduce the gripping force or increase the weight of the load.
You could also reduce the stds temporarily, only for better visualization, and change them back later: https://github.com/enyen/TactileSimulation/blob/8014df67a736d30bae45675d60c5c54afbbaee23/examples/UnstableGraspExp/marker_flow/marker_flow.py#L161
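# assumed intent: dividing stds by 3 lengthens the drawn arrows, purely for visualization; revert afterwards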
flow = (flow - self.tactile_norm[0]) / (self.tactile_norm[1] / 3)
I got the means and stds using the same method. Thank you for your response!
In the real environment, when the action[0] representing the "gripper position" is inverted, the gripper of the manipulator will move closer to the center of mass.
Best regards!
As shown in the picture below, I modified the program "unstable_grasp_env.py" and inverted the rotation direction of the blue sensor in the simulation environment. Is this modification correct?
After this modification, in the real environment, the gripper of the manipulator moves closer to the center of mass.
flows = self.normalize_flow(flows)
flows[:, :, 1, :, :] *= -1
return flows
Apply this flip so the real data matches with sim; do not change the correct sim env in unstable_grasp_env.py.
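A minimal sketch of where such a flip could live on the real-sensor side (for example inside a normalize_flow-style helper like the one in marker_flow.py); the axis index and array layout below follow the snippet above but remain assumptions, so verify them against your own flow shape before using it.

```python
import numpy as np

def normalize_flow(flows, means, stds, flip_vertical=True):
    """Normalize raw marker flows and optionally flip the vertical component
    so the real sensor matches the sim convention.
    Assumes the vertical displacement sits on axis index 2
    (as in flows[:, :, 1, :, :] above); adjust if your layout differs."""
    flows = (flows - means) / stds
    if flip_vertical:
        flows[:, :, 1, :, :] *= -1   # sign flip on the real side; sim env untouched
    return flows
```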
Dear Dr enyen
I encountered some new difficulties while attempting to modify the "unstable_grasp.xml" file.
I noticed that I can import the obj file into the xml file using the method shown in the picture below (adapted from a task named "dclaw_rotate"). How are the collision files in this code generated, such as contacts/base_link.txt, contacts/one0_link.txt, contacts/one1_link.txt...? (I generated the .obj files from SolidWorks and 3ds Max.)
If I want to insert an obj file into the xml file and generate an object, how can I define its collision parameters so that it can come into contact with other objects?
Could you please provide guidance on how to address these issues? Your assistance would be greatly appreciated.
Thank you kindly.
Hi,
You can study more about the xml modelling through the examples in the original repo this work was adapted from: https://github.com/enyen/TactileSimulation/blob/8014df67a736d30bae45675d60c5c54afbbaee23/README.md?plain=1#L2
As well as the one it was adapted from: https://github.com/eanswer/DiffHand
Dear enyen puang
How can I adjust the tactile image in the simulation environment to the result shown in the picture?
I referred to README.md and used env.data_stat() to update the values of self.tactile_means and self.tactile_stds, but it seems that I cannot achieve the result shown in the picture above.
Therefore, the tactile images in the simulation environment cannot perfectly match those in the real environment.
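One hedged way to sanity-check the statistics produced by env.data_stat() is to verify that a recorded real grasp, normalized with your means and stds, comes out at roughly unit scale; the file names below are hypothetical placeholders for your own recordings.

```python
import numpy as np

# hypothetical files; replace with your own recording and statistics
real_flow = np.load('real_flow.npy')              # one raw grasp from the real sensor
means, stds = np.load('means.npy'), np.load('stds.npy')

norm = (real_flow - means) / stds
# after normalization the flow should be roughly unit-scale; values far from that
# usually mean the stats were collected under a different gripping force or load
# weight than the grasp being tested
print('normalized flow range:', norm.min(), norm.max(),
      '| mean |flow|:', np.abs(norm).mean())
```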