Open ogugugugugua opened 5 years ago
I had the same issue. Be sure that in the glob operations in model.py for train and test, you're referencing a path that actually exists. There is a datasets/ folder in the path that didn't exist in my download so when I fixed that, it started training fine.
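To verify this before launching a full run, a small sketch like the following reproduces the kind of pattern the training code globs for (the `root`/`name`/`phase` parameters are illustrative; model.py itself hard-codes the `./datasets/<name>/train/*.*` prefix):

```python
import os
from glob import glob

def matched_files(root, name, phase='train'):
    """Return the files matched by the same kind of pattern model.py uses.

    Diagnostic only: an empty result means training would start with no
    data and appear to stop silently, as described in this issue.
    """
    pattern = os.path.join(root, name, phase, '*.*')
    return sorted(glob(pattern))
```

If `matched_files('./datasets', 'JC_J')` comes back empty, create the missing datasets/ folder (or fix the path) before training.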
Hi, sumuzhao, I've tried to start training your network with your dataset following the instructions:

python main.py --dataset_A_dir='JC_J' --dataset_B_dir='JC_C' --type='cyclegan' --model='base' --sigma_d=0 --phase='train'
but things did not go right. Here is what comes out:

2019-04-11 17:31:09.960154: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-04-11 17:31:11.424925: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235 pciBusID: 0000:8b:00.0 totalMemory: 11.17GiB freeMemory: 11.10GiB
2019-04-11 17:31:11.425039: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-04-11 17:31:12.042054: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-04-11 17:31:12.042099: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-04-11 17:31:12.042112: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-04-11 17:31:12.043982: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10758 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:8b:00.0, compute capability: 3.7)
WARNING:tensorflow:From /home/lab-xie.yulin/graduateDesign/CycleGAN-Music-Style-Transfer/ops.py:110: calling reduce_max (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating: keep_dims is deprecated, use keepdims instead generatorA2B/g_e1_c/Conv/weights:0 generatorA2B/g_e1_bn/scale:0 generatorA2B/g_e1_bn/offset:0 generatorA2B/g_e2_c/Conv/weights:0 generatorA2B/g_e2_bn/scale:0 generatorA2B/g_e2_bn/offset:0 generatorA2B/g_e3_c/Conv/weights:0 generatorA2B/g_e3_bn/scale:0 generatorA2B/g_e3_bn/offset:0 generatorA2B/g_r1_c1/Conv/weights:0 generatorA2B/g_r1_bn1/scale:0 generatorA2B/g_r1_bn1/offset:0 generatorA2B/g_r1_c2/Conv/weights:0 generatorA2B/g_r1_bn2/scale:0 generatorA2B/g_r1_bn2/offset:0 generatorA2B/g_r2_c1/Conv/weights:0 generatorA2B/g_r2_bn1/scale:0 generatorA2B/g_r2_bn1/offset:0 generatorA2B/g_r2_c2/Conv/weights:0 generatorA2B/g_r2_bn2/scale:0 generatorA2B/g_r2_bn2/offset:0 generatorA2B/g_r3_c1/Conv/weights:0 generatorA2B/g_r3_bn1/scale:0 generatorA2B/g_r3_bn1/offset:0 generatorA2B/g_r3_c2/Conv/weights:0 generatorA2B/g_r3_bn2/scale:0 generatorA2B/g_r3_bn2/offset:0 generatorA2B/g_r4_c1/Conv/weights:0 generatorA2B/g_r4_bn1/scale:0 generatorA2B/g_r4_bn1/offset:0 generatorA2B/g_r4_c2/Conv/weights:0 generatorA2B/g_r4_bn2/scale:0 generatorA2B/g_r4_bn2/offset:0 generatorA2B/g_r5_c1/Conv/weights:0 generatorA2B/g_r5_bn1/scale:0 generatorA2B/g_r5_bn1/offset:0 generatorA2B/g_r5_c2/Conv/weights:0 generatorA2B/g_r5_bn2/scale:0 generatorA2B/g_r5_bn2/offset:0 generatorA2B/g_r6_c1/Conv/weights:0 generatorA2B/g_r6_bn1/scale:0 generatorA2B/g_r6_bn1/offset:0 generatorA2B/g_r6_c2/Conv/weights:0 generatorA2B/g_r6_bn2/scale:0 generatorA2B/g_r6_bn2/offset:0 generatorA2B/g_r7_c1/Conv/weights:0 generatorA2B/g_r7_bn1/scale:0 generatorA2B/g_r7_bn1/offset:0 generatorA2B/g_r7_c2/Conv/weights:0 generatorA2B/g_r7_bn2/scale:0 generatorA2B/g_r7_bn2/offset:0 generatorA2B/g_r8_c1/Conv/weights:0 generatorA2B/g_r8_bn1/scale:0 generatorA2B/g_r8_bn1/offset:0 generatorA2B/g_r8_c2/Conv/weights:0 generatorA2B/g_r8_bn2/scale:0 generatorA2B/g_r8_bn2/offset:0 generatorA2B/g_r9_c1/Conv/weights:0 generatorA2B/g_r9_bn1/scale:0 
generatorA2B/g_r9_bn1/offset:0 generatorA2B/g_r9_c2/Conv/weights:0 generatorA2B/g_r9_bn2/scale:0 generatorA2B/g_r9_bn2/offset:0 generatorA2B/g_r10_c1/Conv/weights:0 generatorA2B/g_r10_bn1/scale:0 generatorA2B/g_r10_bn1/offset:0 generatorA2B/g_r10_c2/Conv/weights:0 generatorA2B/g_r10_bn2/scale:0 generatorA2B/g_r10_bn2/offset:0 generatorA2B/g_d1_dc/Conv2d_transpose/weights:0 generatorA2B/g_d1_bn/scale:0 generatorA2B/g_d1_bn/offset:0 generatorA2B/g_d2_dc/Conv2d_transpose/weights:0 generatorA2B/g_d2_bn/scale:0 generatorA2B/g_d2_bn/offset:0 generatorA2B/g_pred_c/Conv/weights:0 generatorB2A/g_e1_c/Conv/weights:0 generatorB2A/g_e1_bn/scale:0 generatorB2A/g_e1_bn/offset:0 generatorB2A/g_e2_c/Conv/weights:0 generatorB2A/g_e2_bn/scale:0 generatorB2A/g_e2_bn/offset:0 generatorB2A/g_e3_c/Conv/weights:0 generatorB2A/g_e3_bn/scale:0 generatorB2A/g_e3_bn/offset:0 generatorB2A/g_r1_c1/Conv/weights:0 generatorB2A/g_r1_bn1/scale:0 generatorB2A/g_r1_bn1/offset:0 generatorB2A/g_r1_c2/Conv/weights:0 generatorB2A/g_r1_bn2/scale:0 generatorB2A/g_r1_bn2/offset:0 generatorB2A/g_r2_c1/Conv/weights:0 generatorB2A/g_r2_bn1/scale:0 generatorB2A/g_r2_bn1/offset:0 generatorB2A/g_r2_c2/Conv/weights:0 generatorB2A/g_r2_bn2/scale:0 generatorB2A/g_r2_bn2/offset:0 generatorB2A/g_r3_c1/Conv/weights:0 generatorB2A/g_r3_bn1/scale:0 generatorB2A/g_r3_bn1/offset:0 generatorB2A/g_r3_c2/Conv/weights:0 generatorB2A/g_r3_bn2/scale:0 generatorB2A/g_r3_bn2/offset:0 generatorB2A/g_r4_c1/Conv/weights:0 generatorB2A/g_r4_bn1/scale:0 generatorB2A/g_r4_bn1/offset:0 generatorB2A/g_r4_c2/Conv/weights:0 generatorB2A/g_r4_bn2/scale:0 generatorB2A/g_r4_bn2/offset:0 generatorB2A/g_r5_c1/Conv/weights:0 generatorB2A/g_r5_bn1/scale:0 generatorB2A/g_r5_bn1/offset:0 generatorB2A/g_r5_c2/Conv/weights:0 generatorB2A/g_r5_bn2/scale:0 generatorB2A/g_r5_bn2/offset:0 generatorB2A/g_r6_c1/Conv/weights:0 generatorB2A/g_r6_bn1/scale:0 generatorB2A/g_r6_bn1/offset:0 generatorB2A/g_r6_c2/Conv/weights:0 generatorB2A/g_r6_bn2/scale:0 
generatorB2A/g_r6_bn2/offset:0 generatorB2A/g_r7_c1/Conv/weights:0 generatorB2A/g_r7_bn1/scale:0 generatorB2A/g_r7_bn1/offset:0 generatorB2A/g_r7_c2/Conv/weights:0 generatorB2A/g_r7_bn2/scale:0 generatorB2A/g_r7_bn2/offset:0 generatorB2A/g_r8_c1/Conv/weights:0 generatorB2A/g_r8_bn1/scale:0 generatorB2A/g_r8_bn1/offset:0 generatorB2A/g_r8_c2/Conv/weights:0 generatorB2A/g_r8_bn2/scale:0 generatorB2A/g_r8_bn2/offset:0 generatorB2A/g_r9_c1/Conv/weights:0 generatorB2A/g_r9_bn1/scale:0 generatorB2A/g_r9_bn1/offset:0 generatorB2A/g_r9_c2/Conv/weights:0 generatorB2A/g_r9_bn2/scale:0 generatorB2A/g_r9_bn2/offset:0 generatorB2A/g_r10_c1/Conv/weights:0 generatorB2A/g_r10_bn1/scale:0 generatorB2A/g_r10_bn1/offset:0 generatorB2A/g_r10_c2/Conv/weights:0 generatorB2A/g_r10_bn2/scale:0 generatorB2A/g_r10_bn2/offset:0 generatorB2A/g_d1_dc/Conv2d_transpose/weights:0 generatorB2A/g_d1_bn/scale:0 generatorB2A/g_d1_bn/offset:0 generatorB2A/g_d2_dc/Conv2d_transpose/weights:0 generatorB2A/g_d2_bn/scale:0 generatorB2A/g_d2_bn/offset:0 generatorB2A/g_pred_c/Conv/weights:0 discriminatorB/d_h0_conv/Conv/weights:0 discriminatorB/d_h1_conv/Conv/weights:0 discriminatorB/d_bn1/scale:0 discriminatorB/d_bn1/offset:0 discriminatorB/d_h3_pred/Conv/weights:0 discriminatorA/d_h0_conv/Conv/weights:0 discriminatorA/d_h1_conv/Conv/weights:0 discriminatorA/d_bn1/scale:0 discriminatorA/d_bn1/offset:0 discriminatorA/d_h3_pred/Conv/weights:0
and when I take a look at the log, it shows that it is not UTF-8 encoded. Could you tell me what is going on? I am not sure why the training just stops without any real error. Thanks in advance for your help.
I had the same problem. You need to preprocess the provided .npy files according to the dataset section of the README. Use step 6 in testfile.py, and check your file path in the glob operation in model.py. Good luck.
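As a quick sanity check that the preprocessed files are at least readable before you start training (only a sketch, assuming the samples are plain NumPy arrays; the shape the network actually expects comes from the README's preprocessing steps):

```python
import numpy as np

def inspect_npy(path):
    """Load one preprocessed sample and report its shape and dtype.

    Purely diagnostic; it does not validate the array against the
    shape the network expects.
    """
    arr = np.load(path)
    return arr.shape, arr.dtype
```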
Could you give more details about this problem?
For future users:
If you pass --dataset_dir /path/to/dir to main.py, you have to modify the following lines in model.py
dataA = glob('./datasets/{}/train/*.*'.format(self.dataset_A_dir))
dataB = glob('./datasets/{}/train/*.*'.format(self.dataset_B_dir))
to
dataA = glob("{}/{}/train/*.*".format(self.dataset_dir, self.dataset_A_dir))
likewise for dataset B and for the test steps; e.g. just search for "glob" within the file and adapt each occurrence.
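Put together, the adapted loading could look like the sketch below. The helper name and the fail-fast error are my additions, not what the repo ships; the pattern follows the replacement shown above:

```python
from glob import glob

def load_split(dataset_dir, dataset_name, phase):
    """Glob a dataset split under the user-supplied --dataset_dir
    instead of the hard-coded './datasets' prefix."""
    pattern = "{}/{}/{}/*.*".format(dataset_dir, dataset_name, phase)
    files = sorted(glob(pattern))
    if not files:
        # Failing fast here avoids the silent stop described in this issue.
        raise FileNotFoundError("no files matched " + pattern)
    return files
```

Usage would then be something like dataA = load_split(self.dataset_dir, self.dataset_A_dir, 'train'), and likewise for dataB and the test phase.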