Closed — j99ca closed this issue 9 months ago
Validation metrics are printed next to the loss in the tqdm progress bar.
Regarding the logging, I'm not sure. I believe that for this you'd need to disable the progress bar, as stated here, by setting `enable_progress_bar`
to false, but I'm not sure how that would then interact with metric logging.
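As a rough sketch of the flag discussed above (collected as plain kwargs rather than a live `Trainer`, so nothing here is anomalib's own API; in PyTorch Lightning these would be passed as `Trainer(**trainer_kwargs)`, and `enable_progress_bar` is a real `Trainer` argument in recent versions):

```python
# Hedged sketch: Trainer configuration that silences the tqdm bar.
# Metrics logged via self.log(...) still reach the attached logger;
# only the console progress output is suppressed.
trainer_kwargs = {
    "enable_progress_bar": False,  # turn off the tqdm progress bar
    "max_epochs": 20,              # illustrative value, not from this thread
}

print(trainer_kwargs["enable_progress_bar"])
```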
Which metrics are the validation metrics in the progress bar during training? Here's a snippet from my cloud logs for an EfficientAd model training:
Epoch 2: 8%|▊ | 38/452 [00:35<06:28, 1.07it/s, loss=3.94, v_num=0, train_st_step=3.440, train_ae_step=0.256, train_stae_step=0.0508, train_loss_step=3.750, train_st_epoch=4.080, train_ae_epoch=0.336, train_stae_epoch=0.0553, train_loss_epoch=4.470]
They should be added at the end of validation.
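As an illustration of why the bar above shows only train_* entries (this is a simplified model of Lightning's behaviour, not its actual implementation): each `self.log(name, value, prog_bar=...)` call registers a metric, and only entries logged with `prog_bar=True` are appended to the tqdm postfix next to the loss.

```python
# Simplified model of how the progress-bar postfix is assembled:
# metrics map name -> (value, prog_bar flag); only prog_bar=True
# entries are displayed next to the loss.
def progress_bar_postfix(logged: dict) -> dict:
    """Return only the metrics flagged for the progress bar."""
    return {name: value for name, (value, prog_bar) in logged.items() if prog_bar}

logged = {
    "train_loss_step": (2.12, True),   # shows up in the bar
    "val_loss": (1.98, False),         # logged, but hidden from the bar
}
print(progress_bar_postfix(logged))  # {'train_loss_step': 2.12}
```

So a `validation_step` that logs its metrics without `prog_bar=True` would leave the bar looking exactly like the logs below: train_* values only.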
Hmm, perhaps I am misunderstanding the logs and the validation logging. Can you point me to which lines specify the validation loss in the following logs? Here's a snippet running from the end of one training epoch into the next, including the validation quantiles calculation and the validation dataloader lines.
2023-12-06T19:27:09.985-04:00 Epoch 13: 95%|█████████▍| 1188/1251 [17:13<00:54, 1.15it/s, loss=2.21, v_num=0, train_st_step=2.000, train_ae_step=0.102, train_stae_step=0.0189, train_loss_step=2.120, train_st_epoch=2.210, train_ae_epoch=0.117, train_stae_epoch=0.0233, train_loss_epoch=2.350]
2023-12-06T19:27:09.985-04:00 Validation: 0it [00:00, ?it/s]
2023-12-06T19:27:09.985-04:00 2023-12-06 23:27:09,184 - anomalib.models.efficient_ad.lightning_model - INFO - Calculate Validation Dataset Quantiles
2023-12-06T19:27:09.985-04:00 Calculate Validation Dataset Quantiles: 0%| | 0/63 [00:00<?, ?it/s]
[... "Calculate Validation Dataset Quantiles" progress ticks 1/63 through 62/63 elided ...]
2023-12-06T19:27:33.995-04:00 Calculate Validation Dataset Quantiles: 100%|██████████| 63/63 [00:24<00:00, 3.41it/s]
2023-12-06T19:27:34.995-04:00 Calculate Validation Dataset Quantiles: 100%|██████████| 63/63 [00:24<00:00, 2.53it/s]
2023-12-06T19:27:36.996-04:00 Validation: 0%| | 0/63 [00:00<?, ?it/s]
2023-12-06T19:27:36.996-04:00 Validation DataLoader 0: 0%| | 0/63 [00:00<?, ?it/s]
2023-12-06T19:27:37.996-04:00 Validation DataLoader 0: 2%|▏ | 1/63 [00:00<00:09, 6.64it/s]
2023-12-06T19:27:37.996-04:00 Epoch 13: 95%|█████████▌| 1189/1251 [17:41<00:55, 1.12it/s, loss=2.21, v_num=0, train_st_step=2.000, train_ae_step=0.102, train_stae_step=0.0189, train_loss_step=2.120, train_st_epoch=2.210, train_ae_epoch=0.117, train_stae_epoch=0.0233, train_loss_epoch=2.350]
[... alternating "Validation DataLoader 0" ticks (2/63 through 62/63) and "Epoch 13" ticks (1190/1251 through 1250/1251) elided; every Epoch 13 line repeats the same train_* metrics with no validation entries ...]
2023-12-06T19:27:51.006-04:00 Validation DataLoader 0: 100%|██████████| 63/63 [00:13<00:00, 4.68it/s]#033[A
2023-12-06T19:27:51.006-04:00 Epoch 13: 100%|██████████| 1251/1251 [17:55<00:00, 1.16it/s, loss=2.21, v_num=0, train_st_step=2.000, train_ae_step=0.102, train_stae_step=0.0189, train_loss_step=2.120, train_st_epoch=2.210, train_ae_epoch=0.117, train_stae_epoch=0.0233, train_loss_epoch=2.350]
2023-12-06T19:27:51.006-04:00 Epoch 13: 100%|██████████| 1251/1251 [17:55<00:00, 1.16it/s, loss=2.21, v_num=0, train_st_step=2.000, train_ae_step=0.102, train_stae_step=0.0189, train_loss_step=2.120, train_st_epoch=2.210, train_ae_epoch=0.117, train_stae_epoch=0.0233, train_loss_epoch=2.350]
2023-12-06T19:27:51.006-04:00 #033[A
2023-12-06T19:27:51.006-04:00 Epoch 13: 100%|██████████| 1251/1251 [17:55<00:00, 1.16it/s, loss=2.21, v_num=0, train_st_step=2.000, train_ae_step=0.102, train_stae_step=0.0189, train_loss_step=2.120, train_st_epoch=2.110, train_ae_epoch=0.117, train_stae_epoch=0.0231, train_loss_epoch=2.250]
2023-12-06T19:27:51.006-04:00 Epoch 13: 0%| | 0/1251 [00:00<?, ?it/s, loss=2.21, v_num=0, train_st_step=2.000, train_ae_step=0.102, train_stae_step=0.0189, train_loss_step=2.120, train_st_epoch=2.110, train_ae_epoch=0.117, train_stae_epoch=0.0231, train_loss_epoch=2.250]
2023-12-06T19:27:51.006-04:00 Epoch 14: 0%| | 0/1251 [00:00<?, ?it/s, loss=2.21, v_num=0, train_st_step=2.000, train_ae_step=0.102, train_stae_step=0.0189, train_loss_step=2.120, train_st_epoch=2.110, train_ae_epoch=0.117, train_stae_epoch=0.0231, train_loss_epoch=2.250]
I'm not sure. When I run EfficientAd with the default config I get the following:
Epoch 0: 97%|█████████▋| 209/215 [00:52<00:01, 3.95it/s, loss=8.46, train_st_step=7.580, train_ae_step=0.698, train_stae_step=0.0383, train_loss_step=8.320]
Validation: 0it [00:00, ?it/s]
Validation: 0%| | 0/6 [00:00<?, ?it/s]
Validation DataLoader 0: 0%| | 0/6 [00:00<?, ?it/s]
Epoch 0: 98%|█████████▊| 210/215 [00:53<00:01, 3.90it/s, loss=8.46, train_st_step=7.580, train_ae_step=0.698, train_stae_step=0.0383, train_loss_step=8.320]
Epoch 0: 98%|█████████▊| 211/215 [00:54<00:01, 3.87it/s, loss=8.46, train_st_step=7.580, train_ae_step=0.698, train_stae_step=0.0383, train_loss_step=8.320]
Epoch 0: 99%|█████████▊| 212/215 [00:55<00:00, 3.83it/s, loss=8.46, train_st_step=7.580, train_ae_step=0.698, train_stae_step=0.0383, train_loss_step=8.320]
Epoch 0: 99%|█████████▉| 213/215 [00:56<00:00, 3.80it/s, loss=8.46, train_st_step=7.580, train_ae_step=0.698, train_stae_step=0.0383, train_loss_step=8.320]
Epoch 0: 100%|█████████▉| 214/215 [00:56<00:00, 3.77it/s, loss=8.46, train_st_step=7.580, train_ae_step=0.698, train_stae_step=0.0383, train_loss_step=8.320]
Epoch 0: 100%|██████████| 215/215 [00:58<00:00, 3.68it/s, loss=8.46, train_st_step=7.580, train_ae_step=0.698, train_stae_step=0.0383, train_loss_step=8.320, pixel_F1Score=0.408, pixel_AUROC=0.753]
Epoch 1: 97%|█████████▋| 209/215 [00:53<00:01, 3.89it/s, loss=6.59, train_st_step=5.700, train_ae_step=0.542, train_stae_step=0.051, train_loss_step=6.290, pixel_F1Score=0.408, pixel_AUROC=0.753, train_st_epoch=9.900, train_ae_epoch=0.837, train_stae_epoch=0.0283, train_loss_epoch=10.80]
Validation: 0it [00:00, ?it/s]
Validation: 0%| | 0/6 [00:00<?, ?it/s]
Validation DataLoader 0: 0%| | 0/6 [00:00<?, ?it/s]
Epoch 1: 98%|█████████▊| 210/215 [00:54<00:01, 3.86it/s, loss=6.59, train_st_step=5.700, train_ae_step=0.542, train_stae_step=0.051, train_loss_step=6.290, pixel_F1Score=0.408, pixel_AUROC=0.753, train_st_epoch=9.900, train_ae_epoch=0.837, train_stae_epoch=0.0283, train_loss_epoch=10.80]
Epoch 1: 98%|█████████▊| 211/215 [00:55<00:01, 3.83it/s, loss=6.59, train_st_step=5.700, train_ae_step=0.542, train_stae_step=0.051, train_loss_step=6.290, pixel_F1Score=0.408, pixel_AUROC=0.753, train_st_epoch=9.900, train_ae_epoch=0.837, train_stae_epoch=0.0283, train_loss_epoch=10.80]
Epoch 1: 99%|█████████▊| 212/215 [00:55<00:00, 3.81it/s, loss=6.59, train_st_step=5.700, train_ae_step=0.542, train_stae_step=0.051, train_loss_step=6.290, pixel_F1Score=0.408, pixel_AUROC=0.753, train_st_epoch=9.900, train_ae_epoch=0.837, train_stae_epoch=0.0283, train_loss_epoch=10.80]
Epoch 1: 99%|█████████▉| 213/215 [00:56<00:00, 3.78it/s, loss=6.59, train_st_step=5.700, train_ae_step=0.542, train_stae_step=0.051, train_loss_step=6.290, pixel_F1Score=0.408, pixel_AUROC=0.753, train_st_epoch=9.900, train_ae_epoch=0.837, train_stae_epoch=0.0283, train_loss_epoch=10.80]
Epoch 1: 100%|█████████▉| 214/215 [00:56<00:00, 3.76it/s, loss=6.59, train_st_step=5.700, train_ae_step=0.542, train_stae_step=0.051, train_loss_step=6.290, pixel_F1Score=0.408, pixel_AUROC=0.753, train_st_epoch=9.900, train_ae_epoch=0.837, train_stae_epoch=0.0283, train_loss_epoch=10.80]
Epoch 1: 100%|██████████| 215/215 [00:58<00:00, 3.68it/s, loss=6.59, train_st_step=5.700, train_ae_step=0.542, train_stae_step=0.051, train_loss_step=6.290, pixel_F1Score=0.478, pixel_AUROC=0.831, train_st_epoch=9.900, train_ae_epoch=0.837, train_stae_epoch=0.0283, train_loss_epoch=10.80]
As you can see, pixel_F1Score and pixel_AUROC are logged at the end of validation inside the tqdm progress bar. I'm not sure why those are not displayed in your case. Can you share the config file?
Hmm, perhaps it's because I removed the image and pixel metrics from the config! I should probably add those back now that I have an end-to-end flow mostly working in AWS. Here's what my metrics looked like in my earlier logs:
metrics:
threshold:
method: adaptive #options: [adaptive, manual]
manual_image: null
manual_pixel: null
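If I add them back, I believe the section would look something like this (going by the default anomalib configs I started from; I haven't double-checked the exact metric names, so treat them as a guess):

```yaml
metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC
  threshold:
    method: adaptive #options: [adaptive, manual]
    manual_image: null
    manual_pixel: null
```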
I will share my entire config below. From what you are showing me above, are steps 210 to 215 validation? Are their step numbers part of the training steps? I still don't quite understand where the validation loss is. Is it just the "loss" being re-purposed for those specific steps? All the other metrics you have in your logs above, aside from the loss, pixel_AUROC, and pixel_F1Score, have the train prefix. Are the "metrics" in the config just for validation? Can I include a metric that is the loss for the EfficientAd model? I am used to Keras flows, which typically have an explicit val_loss metric during validation.
dataset:
name: data
format: folder
root: ./root/
normal_dir: raw_unlabelled # name of the folder containing normal images.
abnormal_dir: null # name of the folder containing abnormal images.
task: classification # classification or segmentation
mask_dir: null #optional
extensions: .png
normal_test_dir: null # optional
train_batch_size: 8
eval_batch_size: 8
num_workers: 8
image_size: 256 # dimensions to which images are resized (mandatory)
# image_size: 512 # dimensions to which images are resized (mandatory)
# image_size: [386, 516]
center_crop: null # dimensions to which images are center-cropped after resizing (optional)
normalization: none # data distribution to which the images will be normalized: [none, imagenet]
transform_config:
train: null
eval: null
test_split_mode: from_dir # options: [from_dir, synthetic]
test_split_ratio: 0.1 # fraction of train images held out for testing (usage depends on test_split_mode)
val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
# image normalization params
# min_val: 0.0
# max_val: 255.0
model:
name: efficient_ad
teacher_out_channels: 384
model_size: small # options: [small, medium]
lr: 0.0001
weight_decay: 0.00001
padding: false
pad_maps: true # relevant for "padding: false", see EfficientAd in lightning_model.py
# generic params
normalization_method: min_max # options: [null, min_max, cdf]
early_stopping:
patience: 5
metric: train_loss
mode: min
metrics:
threshold:
method: adaptive #options: [adaptive, manual]
manual_image: null
manual_pixel: null
visualization:
show_images: False # show images on the screen
save_images: True # save images to the file system
log_images: False # log images to the available loggers (if any)
image_save_path: null # path to which images will be saved
mode: full # options: ["full", "simple"]
project:
seed: 42
path: /opt/ml/model
logging:
logger: [csv] # options: [comet, tensorboard, wandb, csv] or combinations.
log_graph: true # Logs the model graph to respective logger.
optimization:
export_mode: openvino # options: torch, onnx, openvino
# PL Trainer Args. Don't add extra parameter here.
trainer:
enable_checkpointing: true
default_root_dir: null
gradient_clip_val: 0
gradient_clip_algorithm: norm
num_nodes: 1
devices: 1
enable_progress_bar: true
overfit_batches: 0.0
track_grad_norm: -1
check_val_every_n_epoch: 1
fast_dev_run: false
accumulate_grad_batches: 1
max_epochs: 60
min_epochs: 5
max_steps: 80000
min_steps: null
# max time in format "DD:HH:MM:SS"
max_time: null
limit_train_batches: 1.0
limit_val_batches: 1.0
limit_test_batches: 1.0
limit_predict_batches: 1.0
val_check_interval: 1.0
log_every_n_steps: 50
accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
strategy: null
sync_batchnorm: false
precision: 32
enable_model_summary: true
num_sanity_val_steps: 0
profiler: null
benchmark: false
deterministic: false
reload_dataloaders_every_n_epochs: 0
auto_lr_find: false
replace_sampler_ddp: true
detect_anomaly: false
auto_scale_batch_size: false
plugins: null
move_metrics_to_cpu: false
multiple_trainloader_mode: max_size_cycle
Hello. Yes, you will need to have the metrics in the config.
From what you are showing me above, are steps 210 to 215 validation?
yes
Are their step numbers part of the training steps?
Yes, although it might be specific to my system, the metrics are displayed in the training progress bar (the validation bar disappears at the end of validation).
I still don't quite understand where the validation loss is. Is it just the "loss" being re-purposed for those specific steps?
Validation doesn't have a loss in this case. At the start of validation, anomaly map quantiles are calculated: https://github.com/openvinotoolkit/anomalib/blob/1f50c952fc65a165884b2b94e178821dcebfbbef/src/anomalib/models/efficient_ad/lightning_model.py#L255-L260 then the validation step only does a forward pass: https://github.com/openvinotoolkit/anomalib/blob/1f50c952fc65a165884b2b94e178821dcebfbbef/src/anomalib/models/efficient_ad/lightning_model.py#L262-L275 and the metrics are calculated at the end of the validation epoch: https://github.com/openvinotoolkit/anomalib/blob/1f50c952fc65a165884b2b94e178821dcebfbbef/src/anomalib/models/components/base/anomaly_module.py#L139-L148
All the other metrics you have in your logs above, aside from the loss, pixel_AUROC, and pixel_F1Score, have the train prefix. Are the "metrics" in the config just for validation?
The loss is only calculated during training, and the metrics are calculated during validation and test.
Can I include a metric that is the loss for the EfficientAd model? I am used to Keras flows, which typically have an explicit val_loss metric during validation.
This way of doing validation is mostly anomalib-specific. You could also include a validation loss, but I think that would require rewriting the validation step to also calculate that loss.
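The general pattern would look something like the following. This is only a sketch of the override shape, shown with a minimal stand-in class rather than the real anomalib module (in practice you would subclass anomalib's EfficientAd Lightning module, and the loss computation would have to re-run the model's training loss terms, which is not shown here):

```python
# Sketch: subclass the Lightning module and extend validation_step to
# compute and log a loss. BaseModule is a stand-in for the anomalib
# Lightning module; the "errors" field and the loss formula are made up
# purely for illustration.

class BaseModule:
    """Stand-in for the upstream Lightning module."""

    def __init__(self):
        self.logged = {}

    def log(self, name, value, prog_bar=False):
        # Lightning's self.log() records the metric (and can show it in the
        # progress bar); here we just store it in a dict.
        self.logged[name] = value

    def validation_step(self, batch, batch_idx):
        # Upstream validation only does a forward pass and returns the batch.
        return batch


class ModuleWithValLoss(BaseModule):
    def validation_step(self, batch, batch_idx):
        batch = super().validation_step(batch, batch_idx)
        # Compute whatever loss makes sense for the model. For EfficientAd
        # this would mean evaluating the training loss terms on the batch.
        val_loss = sum(batch["errors"]) / len(batch["errors"])
        self.log("val_loss", val_loss, prog_bar=True)
        return batch


module = ModuleWithValLoss()
module.validation_step({"errors": [2.0, 4.0]}, batch_idx=0)
print(module.logged["val_loss"])  # 3.0
```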
What is the motivation for this task?
When running Anomalib/EfficientAD, it seems to only log the training dataset metrics. I am using an AWS SageMaker Estimator in my pipeline and I would like to connect its metrics_definitions, which uses regex to search the logs. I can parse out the loss during training, but anomalib seems not to print the metrics for the validation dataset during validation.
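For context, here is roughly what I had in mind for the Estimator's metric definitions. The regexes are my own guesses against the tqdm line format shown above, not anything official, and the metric names are whatever I choose to report to SageMaker:

```python
# Sketch: SageMaker-style metric definitions that scrape values out of the
# tqdm progress-bar lines. SageMaker applies each "Regex" to the CloudWatch
# log stream and extracts the first capture group; here we just run the same
# regexes over a sample line to show what they would pick up.
import re

metric_definitions = [
    {"Name": "train_loss_epoch", "Regex": r"train_loss_epoch=([0-9.]+)"},
    {"Name": "pixel_AUROC", "Regex": r"pixel_AUROC=([0-9.]+)"},
    {"Name": "pixel_F1Score", "Regex": r"pixel_F1Score=([0-9.]+)"},
]

# Sample line in the format the progress bar emits (abbreviated).
line = ("Epoch 1: 100%|..| 215/215 [00:58<00:00, pixel_F1Score=0.478, "
        "pixel_AUROC=0.831, train_loss_epoch=10.80]")

results = {}
for md in metric_definitions:
    match = re.search(md["Regex"], line)
    if match:
        results[md["Name"]] = float(match.group(1))

print(results)
# {'train_loss_epoch': 10.8, 'pixel_AUROC': 0.831, 'pixel_F1Score': 0.478}
```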
Additionally, is there a way to control the logging so that it only prints once per epoch, or once per configurable number of steps or percentage of progress per epoch? When training on a large dataset for many epochs over 24 hours, my cloud logs are very verbose because tqdm prints a new line for every step when used in the cloud, as opposed to local usage where the progress bar updates in place.
Describe the solution you'd like
An option to print validation metrics and losses
Additional context
No response