Unity-Technologies / Robotics-Object-Pose-Estimation

A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.
Apache License 2.0

Do I have the possibility to run the training by google colab? #62

Closed RockStheff closed 2 years ago

RockStheff commented 2 years ago

Because I don't have access to a good machine to train the VGG-16 CNN, I would like to run part 3 of the tutorial on Google's graphics cards. Is it possible to run it in that environment? If so, could you explain how in as much detail as possible?

Thanks :)

JonathanLeban commented 2 years ago

Hello @RockStheff, thank you for your question! Yes, it is totally doable. I will put some code here that you might need to modify a bit, but it should give you a good start. The first step is to install all the required packages; do you know how to do that? Then, if you run the following code in a Google Colab cell, it should work.



```
import yaml
from easydict import EasyDict

from pose_estimation.pose_estimation_estimator import PoseEstimationEstimator

# Load the training configuration
config_file_path = "PATH_TO_config.yaml"
with open(config_file_path, "r") as f:
    config = yaml.load(f, Loader=yaml.FullLoader)
config = EasyDict(config)

# Create the estimator
estimator = PoseEstimationEstimator(config=config)

# Call the train method to start training
estimator.train()
```

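Before kicking off training, it can help to confirm that Colab actually attached a GPU to the runtime (Runtime > Change runtime type > GPU). This is not part of the repo, just a minimal stdlib-only sanity check that assumes the NVIDIA driver exposes `nvidia-smi`:

```
import shutil
import subprocess

# Look for the NVIDIA driver tool on the runtime; if it is missing,
# the notebook is running on CPU only, and VGG-16 training would be
# impractically slow.
gpu_tool = shutil.which("nvidia-smi")
if gpu_tool:
    print(subprocess.run([gpu_tool], capture_output=True, text=True).stdout)
else:
    print("No GPU detected -- enable a GPU runtime before training.")
```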
JonathanLeban commented 2 years ago

Also, I advise creating the Jupyter notebook inside the pose_estimation folder so the imports above resolve.
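If the notebook cannot live inside the pose_estimation folder, an alternative is to put the package's parent directory on `sys.path` before importing. A hypothetical Colab cell (the clone path below is an assumption; adjust it to wherever you cloned the repo) could look like:

```
import sys

# Assumed clone location in Colab -- change this to your actual path.
REPO_MODEL_DIR = "/content/Robotics-Object-Pose-Estimation/Model"

# Prepend it so `from pose_estimation... import ...` resolves.
if REPO_MODEL_DIR not in sys.path:
    sys.path.insert(0, REPO_MODEL_DIR)
```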

RockStheff commented 2 years ago

Thank you for responding so quickly. Since my college classes are online, I am starting my training studies in Google Colab. If it is possible for you, of course, I would like you to put together a tutorial for training your model with Colab.

I tried several times to run this part 3, but I had several problems installing Conda in Colab to run your model. I tried to run it with this approach:

[screenshot of the attempted Conda installation in Colab]

Again, I would very much appreciate it if you could put together a tutorial for training in Colab. Thank you very much for your attention.

Note: I failed to load the dependencies listed in environment.yml and environment-gpu.yml.

RockStheff commented 2 years ago

And another question: if I wanted to estimate the pose of the end effector instead of the cube, would that be possible? How would I do it?

JonathanLeban commented 2 years ago

Give me a bit of time and I will simulate the training on google colab and share it with you @RockStheff

JonathanLeban commented 2 years ago

> And another question: if I wanted to estimate the pose of the end effector instead of the cube, would that be possible? How would I do it?

You would need to label the end effector of the arm and add that label to the labeling script (as explained for the cube). To avoid changing the code in the dataset.py script, I would advise unchecking the labeling component of the cube. Then you collect the data, and you will not have to change anything in the code.

JonathanLeban commented 2 years ago

[screenshots of the training notebook running in Google Colab]
JonathanLeban commented 2 years ago

I hope this solves your problem @RockStheff, and if you have any other questions related to the project, feel free to open a new issue! Thank you for your interest!

RockStheff commented 2 years ago

> And another question: if I wanted to estimate the pose of the end effector instead of the cube, would that be possible? How would I do it?
>
> You would need to label the end effector of the arm and add that label to the labeling script (as explained for the cube). To avoid changing the code in the dataset.py script, I would advise unchecking the labeling component of the cube. Then you collect the data, and you will not have to change anything in the code.

I ask because I would like to make this change: instead of training the model to estimate the cube's pose, I would like it to estimate the end effector's pose. This is in part 3 of your tutorial. Is your solution valid for part 3, the training?

JonathanLeban commented 2 years ago

I am not sure I understood your question correctly. Do you want to estimate both the end effector and the cube, or only the end effector?

RockStheff commented 2 years ago

> I am not sure I understood your question correctly. Do you want to estimate both the end effector and the cube, or only the end effector?

No, only the end effector. But I wanted to specify the end effector already in the training part of the neural network (the third part of your tutorial).

JonathanLeban commented 2 years ago

I see. However, in the example shown by the project, the end effector is static, so there is no need to estimate it; but I don't know your use case, so you might need to do it. In terms of the network (part 3 of the tutorial), you don't have to change anything. However, you will have to change things in part 2: everything that has been done for the cube, you have to do for the end effector. The challenge you will face is in randomizing the position of the end effector: you can only rotate it. In order to have good domain randomization, I would advise randomizing the rotation of all the joints of the robot. But keep in mind that the rotation of each joint is bounded, so you will have to set up the boundaries. Steps to do:

  • replicate the labeling part done on the cube, apply it to the end effector, and undo it on the cube
  • customize the rotation randomizer so that you can set up boundaries from the user interface
  • add the rotation randomizer to the simulation scenario
  • add the rotation randomizer tag to all the joints of the robot you want to rotate
  • collect the data and train your network
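The bounded-joint randomization described above can be sketched in a few lines. This is an illustration of the idea only, not code from the repo; the joint names and limits below are hypothetical, so substitute the real limits of your robot (e.g. from its URDF):

```
import random

# Hypothetical joint limits in degrees -- replace with your robot's
# actual bounds.
JOINT_LIMITS_DEG = {
    "shoulder_pan": (-90.0, 90.0),
    "elbow": (-135.0, 135.0),
    "wrist": (-180.0, 180.0),
}

def sample_joint_rotations(limits):
    """Draw one random rotation (degrees) per joint, within its bounds."""
    return {joint: random.uniform(lo, hi) for joint, (lo, hi) in limits.items()}

# Each call yields one randomized arm configuration for data collection.
pose = sample_joint_rotations(JOINT_LIMITS_DEG)
```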

RockStheff commented 2 years ago

> I see. However, in the example shown by the project, the end effector is static, so there is no need to estimate it; but I don't know your use case, so you might need to do it. In terms of the network (part 3 of the tutorial), you don't have to change anything. However, you will have to change things in part 2: everything that has been done for the cube, you have to do for the end effector. The challenge you will face is in randomizing the position of the end effector: you can only rotate it. In order to have good domain randomization, I would advise randomizing the rotation of all the joints of the robot. But keep in mind that the rotation of each joint is bounded, so you will have to set up the boundaries. Steps to do:
>
>   • replicate the labeling part done on the cube, apply it to the end effector, and undo it on the cube
>   • customize the rotation randomizer so that you can set up boundaries from the user interface
>   • add the rotation randomizer to the simulation scenario
>   • add the rotation randomizer tag to all the joints of the robot you want to rotate
>   • collect the data and train your network

Oh yes, I get it. Thank you very much!