Hello,
I run into this error when launching the TurtleBot3 DQN node in every stage. Here is the log:
```
Model: "sequential_1"
Layer (type) Output Shape Param #
dense_1 (Dense) (None, 64) 1728
dense_2 (Dense) (None, 64) 4160
dropout_1 (Dropout) (None, 64) 0
dense_3 (Dense) (None, 5) 325
activation_1 (Activation) (None, 5) 0
Total params: 6,213
Trainable params: 6,213
Non-trainable params: 0
Model: "sequential_2"
Layer (type) Output Shape Param #
dense_4 (Dense) (None, 64) 1728
dense_5 (Dense) (None, 64) 4160
dropout_2 (Dropout) (None, 64) 0
dense_6 (Dense) (None, 5) 325
activation_2 (Activation) (None, 5) 0
Total params: 6,213
Trainable params: 6,213
Non-trainable params: 0
2019-10-22 17:55:18.469423: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: SSE4.1 SSE4.2
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
2019-10-22 17:55:18.521843: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2165800000 Hz
2019-10-22 17:55:18.522309: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x562b143f8c70 executing computations on platform Host. Devices:
2019-10-22 17:55:18.522380: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): ,
OMP: Info #212: KMP_AFFINITY: decoding x2APIC ids.
OMP: Info #210: KMP_AFFINITY: Affinity capable, using global cpuid leaf 11 info
OMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0,1
OMP: Info #156: KMP_AFFINITY: 2 available OS procs
OMP: Info #157: KMP_AFFINITY: Uniform topology
OMP: Info #179: KMP_AFFINITY: 1 packages x 2 cores/pkg x 1 threads/core (2 total cores)
OMP: Info #214: KMP_AFFINITY: OS proc to physical thread map:
OMP: Info #171: KMP_AFFINITY: OS proc 0 maps to package 0 core 0
OMP: Info #171: KMP_AFFINITY: OS proc 1 maps to package 0 core 1
OMP: Info #250: KMP_AFFINITY: pid 2208 tid 2208 thread 0 bound to OS proc set 0
2019-10-22 17:55:18.523169: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2019-10-22 17:55:18.736622: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
[INFO] [1571763321.147374, 0.590000]: Goal position : 0.6, 0.0
Traceback (most recent call last):
  File "/home/francesco/catkin_ws/src/turtlebot3_machine_learning/turtlebot3_dqn/nodes/turtlebot3_dqn_stage_1", line 174, in
    agent.trainModel()
  File "/home/francesco/catkin_ws/src/turtlebot3_machine_learning/turtlebot3_dqn/nodes/turtlebot3_dqn_stage_1", line 121, in trainModel
    q_value = self.model.predict(states.reshape(1, 26))
ValueError: cannot reshape array of size 362 into shape (1,26)
[turtlebot3_dqn_stage_1-1] process has died [pid 2208, exit code 1, cmd /home/francesco/catkin_ws/src/turtlebot3_machine_learning/turtlebot3_dqn/nodes/turtlebot3_dqn_stage_1 __name:=turtlebot3_dqn_stage_1 __log:=/home/francesco/.ros/log/7f08f056-f4ec-11e9-96b9-acb57dbdebcb/turtlebot3_dqn_stage_1-1.log].
log file: /home/francesco/.ros/log/7f08f056-f4ec-11e9-96b9-acb57dbdebcb/turtlebot3_dqn_stage_1-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done
```
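For what it's worth, the sizes in the traceback suggest where the mismatch comes from: the state array the node builds has 362 values, while the model expects 26. My guess (an assumption from the numbers, not confirmed against the node's source) is that the state is laser-scan samples plus 2 goal-related values, so the node is receiving the full 360-sample scan where the model was built for 24 samples. A minimal NumPy sketch of the failing reshape:

```python
import numpy as np

# Assumed state layout: scan samples + 2 goal-related values.
full_scan_state = np.zeros(360 + 2)  # 362 values, apparently what the node receives
expected_state = np.zeros(24 + 2)    # 26 values, what the model was built for

# Mirrors the call in trainModel() and fails the same way:
try:
    full_scan_state.reshape(1, 26)
except ValueError as e:
    print(e)  # cannot reshape array of size 362 into shape (1,26)

# With 24 scan samples the reshape succeeds:
print(expected_state.reshape(1, 26).shape)  # (1, 26)
```

If that reading is right, the fix would be to make the simulated LDS sample count match the state size the node expects (or vice versa), rather than changing the reshape itself.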