sprob97 opened 8 months ago
creating reference xml file : [output/webnlg_version_2/valid/ref_100_0.xml]
Epoch 0/0 ╸━━━━━━━━━━━━━━━━━━━━━ 5460/191666 2:13:42 • 3 days, 0.61it/s loss: 4.29 v_num: on_2
12:19:45
creating hypothesis xml file : [output/webnlg_version_2/valid/hyp_100_0.xml]
/usr/local/lib/python3.10/dist-packages/bs4/builder/__init__.py:545: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument features="xml" into the BeautifulSoup constructor.
  warnings.warn(
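As the warning itself suggests, the fix on the XML-handling side is to ask BeautifulSoup for its XML parser explicitly. A minimal sketch, assuming `bs4` and `lxml` are installed (the `<entry>`/`<lex>` element here is a toy example, not the project's actual reference/hypothesis schema):

```python
from bs4 import BeautifulSoup

# A toy XML fragment standing in for the generated ref/hyp files.
xml_text = "<entry eid='1'><lex>Hello</lex></entry>"

# features="xml" selects the lxml XML parser, which avoids
# XMLParsedAsHTMLWarning and parses tags case-sensitively as XML.
soup = BeautifulSoup(xml_text, features="xml")
print(soup.find("lex").text)  # prints: Hello
```

This only silences the warning cleanly; it does not affect the process being killed later in the log.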
Epoch 0/0 ╸━━━━━━━━━━━━━━━━━━━━━ 5460/191666 2:13:46 • 3 days, 0.61it/s loss: 4.29 v_num: on_2
12:19:45 train_gen.sh: line 29: 5466 Killed python main.py --version 2 --default_root_dir output --run train --max_epochs 1 --accelerator gpu --num_nodes 1 --num_data_workers 2 --lr 1e-4 --batch_size 1 --num_sanity_val_steps 0 --fast_dev_run 0 --overfit_batches 0 --limit_train_batches 1.0 --limit_val_batches 1.0 --limit_test_batches 1.0 --accumulate_grad_batches 10 --detect_anomaly True --data_path webnlg-dataset/release_v3.0/en --log_every_n_steps 100 --val_check_interval 1000 --checkpoint_step_frequency 1000 --focal_loss_gamma 3 --dropout_rate 0.5 --num_layers 2 --edges_as_classes 0 --checkpoint_model_id -1
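A bare `Killed` with no Python traceback, as above, is usually the kernel's OOM killer terminating the process rather than a crash in the training code. A minimal pre-flight sketch (assumption: a Linux host, where `/proc/meminfo` exists) for checking available RAM before launching a run:

```python
def available_ram_gib(meminfo_path="/proc/meminfo"):
    """Return MemAvailable from /proc/meminfo in GiB, or None if absent."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                # The value is reported in kiB; convert to GiB.
                return int(line.split()[1]) / (1024 ** 2)
    return None
```

If memory is the culprit, the usual levers in a command like the one above are lowering `--num_data_workers`, keeping `--batch_size 1` with gradient accumulation, and dropping `--detect_anomaly`, which adds overhead.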
I'm unable to train the model either way due to connection timeout errors.