HankerSia closed this issue 3 years ago.
Hi @HankerSia, the issues in this repo focus on bug reports and questions about its functionality. We do not provide support for RL training. We encourage you to check out the robosuite-benchmarking repo to see our experimental setups with RL.
Can anyone give a short introduction to the observation and reward of a simple picking task? Then, what actions (such as joint torques of the robotic arm) should be input to this environment? And how can the objects and robots defined in robosuite be used in the picking task? In short, can anyone make a tutorial on a simple picking task using DDPG and robosuite?
The furniture repo is so complex. I just need a demo with the simplest implementation.
If this request is too hard, I can supply a DDPG control demo for a pendulum, and you could tell me how to modify it so that I can achieve a simple grasping task using robosuite and DDPG.
If it is possible, I will pay for this. However, as I am a PhD candidate, I can only afford $200 for this demo or tutorial. Thanks in advance!
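For the question about observations, actions, and rewards: robosuite environments follow a gym-style `reset()`/`step(action)` interface, where the observation describes robot and object state, the action is a joint-torque or controller command vector, and the reward is shaped around the task (e.g. reaching, grasping, lifting). As a minimal sketch of that interaction pattern, here is a hypothetical stand-in environment (`ToyLiftEnv` is NOT part of robosuite; its dimensions and reward are placeholders):

```python
import numpy as np

class ToyLiftEnv:
    """Hypothetical stand-in for a robosuite-style lifting task.

    The real environment exposes the same reset/step pattern, but its
    observation comes from robot proprioception and object poses, and its
    action is a joint-torque (or operational-space) command vector.
    """
    def __init__(self, action_dim=8, obs_dim=32):
        self.action_dim = action_dim   # e.g. 7 joints + 1 gripper command
        self.obs_dim = obs_dim         # flattened state vector length
        self.t = 0

    def reset(self):
        self.t = 0
        return np.zeros(self.obs_dim)  # initial observation

    def step(self, action):
        assert action.shape == (self.action_dim,)
        self.t += 1
        obs = np.random.randn(self.obs_dim)   # next observation (placeholder)
        reward = -np.linalg.norm(action)      # placeholder shaped reward
        done = self.t >= 200                  # fixed horizon of 200 steps
        return obs, reward, done, {}

# The DDPG training loop interacts with the environment like this:
env = ToyLiftEnv()
obs = env.reset()
for _ in range(3):
    action = np.clip(np.random.randn(env.action_dim), -1.0, 1.0)
    obs, reward, done, info = env.step(action)
```

With the real library, `env = robosuite.make(...)` for a lifting task would replace `ToyLiftEnv`, and the agent's actor network would replace the random action.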
```python
import os
import random
import gym
from collections import deque

import numpy as np
import tensorflow as tf

from keras.layers import Input, Dense, Lambda, concatenate
from keras.models import Model
from keras.optimizers import Adam
import keras.backend as K


class DDPG():
    """Deep Deterministic Policy Gradient algorithm."""

    def __init__(self):
        super(DDPG, self).__init__()
        # ... build actor, critic, and their target networks here ...

    # ... inside the training step, the critic is fit on a sampled batch:
    #     loss = self.critic.train_on_batch([X1, X2], y)


if __name__ == '__main__':
    model = DDPG()
    # ... run training, collecting per-episode statistics into `history` ...
    model.save_history(history, 'ddpg.csv')
```