weiyx16 / Active-Perception

Deep Reinforcement Learning for Robotic Pushing and Picking in Cluttered Environment
https://weiyx16.github.io/RobotGrasping
MIT License

No module named 'dqn' #4

Closed. ChenyangRan closed this issue 5 years ago.

ChenyangRan commented 5 years ago

Hi, I use the cmd:

python main.py --is_train=True

It shows No module named 'dqn', but I find there is a dqn folder in the repo. Maybe the error comes from

from dqn.agent import Agent

Could you please help me?
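(For context, this error usually means the repo root, the directory containing dqn/, is not on Python's module search path. A minimal, self-contained reproduction, where a throwaway temp directory stands in for the cloned repo:)

```python
# Minimal reproduction of "No module named 'dqn'": the package only resolves
# once the repo root (the directory containing dqn/) is on sys.path, which is
# what running "python main.py" from inside the repo gives you for free.
# The temp directory below is a stand-in for the cloned repo, not real repo code.
import importlib.util
import os
import sys
import tempfile

repo_root = tempfile.mkdtemp()
os.makedirs(os.path.join(repo_root, "dqn"))
open(os.path.join(repo_root, "dqn", "__init__.py"), "w").close()

before = importlib.util.find_spec("dqn")   # None: repo root not on sys.path yet
sys.path.insert(0, repo_root)              # equivalent to running from the repo root
after = importlib.util.find_spec("dqn")    # now the package is found

print(before is None, after is not None)   # True True
```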

weiyx16 commented 5 years ago

Have you git cloned the whole repo? I'm not sure, as I'm also confused by Python module imports myself. Maybe you can try renaming agent_8.py or agent_18.py in the dqn folder to agent.py; the main difference between these two files is a small modification of the U-Net structure.
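(If renaming files on disk feels fragile, an alternative sketch is a fallback import. This is my suggestion, not repo code; the demo fakes the repo layout in a temp directory and assumes agent_8.py defines the same Agent class, as the rename advice implies:)

```python
# Fallback import instead of renaming agent_8.py to agent.py on disk.
# The temp directory fakes the repo layout; the Agent body here is a dummy.
import os
import sys
import tempfile

repo_root = tempfile.mkdtemp()
os.makedirs(os.path.join(repo_root, "dqn"))
open(os.path.join(repo_root, "dqn", "__init__.py"), "w").close()
with open(os.path.join(repo_root, "dqn", "agent_8.py"), "w") as f:
    f.write("class Agent:\n    variant = 'agent_8'\n")
sys.path.insert(0, repo_root)

try:
    from dqn.agent import Agent      # the name main.py expects
except ImportError:
    from dqn.agent_8 import Agent    # fall back to the agent_8 variant

print(Agent.variant)  # agent_8
```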

ChenyangRan commented 5 years ago

> Have you git cloned the whole repo? I'm not sure, as I'm also confused by Python module imports myself. Maybe you can try renaming agent_8.py or agent_18.py in the dqn folder to agent.py; the main difference between these two files is a small modification of the U-Net structure.

Thanks for your reply. Now I have installed all the requirements and tested your package. However, when I type

python main.py --is_train=False --is_sim=True

[screenshot]

I think maybe it's because I wasn't running V-REP. So I launch it with ./vrep.sh, start the simulation, and type python main.py --is_train=False --is_sim=True, but it still doesn't work. [screenshot]

It seems that I'm expected to use the real hardware, but I just want to test the DQN in simulation. Could you please help me? I'm sorry, I don't know V-REP well. By the way, is Torch7 needed if I just want to test the DQN?

ChenyangRan commented 5 years ago

And should I write simExtRemoteApiStart(19999) in the Kinect and UR5 scripts under ./simulation/Simulation.ttt? Such as: [screenshot]

Looking forward to your reply.

weiyx16 commented 5 years ago

Use V-Rep

I use V-Rep as follows:

  • Type ./vrep.sh and the software will launch.
  • Open the scene file. In our repo, we use ./simulation/Simulation.ttt.
  • To start the connection between the python script and V-Rep (a kind of network connection, using a port and a client/server structure), type simRemoteApi.start(19999) in the command line at the bottom of V-Rep. Here is a screenshot from the website: [screenshot]
  • Start the V-Rep simulation; then you can use the python script.

I'm not very familiar with V-Rep, as I only need it for a simple scene simulation. If you need more than that, you can search the website directly :-)
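(A quick way to sanity-check the connection, my own helper rather than anything from the repo, is to test whether the remote API server is actually listening on port 19999 before launching main.py:)

```python
# Quick check (my own helper, not from the repo) that V-REP's remote API
# server is accepting TCP connections on its default port before main.py runs.
import socket

def remote_api_port_open(host="127.0.0.1", port=19999, timeout=1.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(remote_api_port_open())  # False until the V-REP simulation is started
```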

About the running

There's another mistake I noticed: the environment you run is actually ./experiment/environment.py, but it should be ./simulation/environment.py. I'm not sure why this error occurred. Maybe you can print whether the parameter is_sim is really set to True, at Line 72 in main.py:

        else:
            print(config.is_sim)   # New
            if config.is_sim:
                env = DQNEnvironment(config)
                agent = Agent(config, env, sess)
                agent.play()
                agent.randomplay()

If it is False, you can try setting it to True directly here; otherwise I will check where the bug is. Thank you.
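(One common reason a boolean flag like is_sim comes out wrong, a guess on my part since I haven't checked how this repo's config parses its arguments, is parsing it with bool(), where any non-empty string is truthy:)

```python
# Why "--is_sim=False" can still come out True: bool("False") is True, because
# any non-empty string is truthy. A str2bool converter fixes it. This is a
# generic argparse demo, not the repo's actual config code.
import argparse

def str2bool(value):
    """Parse 'True'/'False'-style command-line flags the way users expect."""
    return str(value).lower() in ("true", "1", "yes")

parser = argparse.ArgumentParser()
parser.add_argument("--is_sim", type=bool, default=False)          # buggy
parser.add_argument("--is_sim_fixed", type=str2bool, default=False)
args = parser.parse_args(["--is_sim=False", "--is_sim_fixed=False"])

print(args.is_sim, args.is_sim_fixed)  # True False
```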

Dependency

Torch7 is needed by the affordance model; the DQN itself only depends on TensorFlow.

ChenyangRan commented 5 years ago

> Use V-Rep
>
> I use V-Rep as follows:
>
>   • Type ./vrep.sh and the software will launch.
>   • Open the scene file. In our repo, we use ./simulation/Simulation.ttt.
>   • To start the connection between the python script and V-Rep (a kind of network connection, using a port and a client/server structure), type simRemoteApi.start(19999) in the command line at the bottom of V-Rep. Here is a screenshot from the website: [screenshot]
>   • Start the V-Rep simulation; then you can use the python script.
>
> I'm not very familiar with V-Rep, as I only need it for a simple scene simulation. If you need more than that, you can search the website directly :-)
>
> About the running
>
> There's another mistake I noticed: the environment you run is actually ./experiment/environment.py, but it should be ./simulation/environment.py. I'm not sure why this error occurred. Maybe you can print whether the parameter is_sim is really set to True, at Line 72 in main.py:
>
>     else:
>         print(config.is_sim)   # New
>         if config.is_sim:
>             env = DQNEnvironment(config)
>             agent = Agent(config, env, sess)
>             agent.play()
>             agent.randomplay()
>
> If it is False, you can try setting it to True directly here; otherwise I will check where the bug is. Thank you.
>
> Dependency
>
> Torch7 is needed by the affordance model; the DQN itself only depends on TensorFlow.

Oh, thank you very much. I think I have fixed it.

    from simulation.environment import DQNEnvironment
    # from experiment.environment import REALEnvironment   # New

Maybe it happens here:

"""
    Set Kinect:
"""
logger = createConsoleLogger(LoggerLevel.Debug)
setGlobalLogger(logger)

fn = Freenect2()
num_devices = fn.enumerateDevices()
if num_devices == 0:
    print(" [!] No kinect device connected!")
    sys.exit(1)

It seems that this code runs at import time and warns me whenever I import ./experiment/environment.py.
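(That is exactly how Python behaves: module-level statements execute at import time. A self-contained demo, with a throwaway module standing in for experiment/environment.py:)

```python
# Module-level code runs the moment a module is imported, which is why merely
# importing experiment/environment.py triggers the Kinect device scan.
# The module below is a throwaway stand-in, not the repo's file.
import importlib.util
import tempfile

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write('print("device scan runs on import")\nNUM_DEVICES = 0\n')
    module_path = f.name

spec = importlib.util.spec_from_file_location("env_demo", module_path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)            # the print above fires right here

print(module.NUM_DEVICES)  # 0
```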

Now I have problems loading the map, and I have posted my question in issue #3, Error occurred during creating affordance map. Looking forward to your reply.

weiyx16 commented 5 years ago

Glad to hear that. I have noticed the problem you mentioned in issue #3 . I will figure out what's wrong.