sign-language-processing / transcription

Text to pose model for sign language pose generation from a text sequence

"make_data_iter" not available anymore in joeynmt.datasets. In pose_to_text.dataset.py #9

Closed AngelosDDG closed 1 year ago

AngelosDDG commented 1 year ago

In `pose_to_text/dataset.py` there is a faulty import statement: `from joeynmt.datasets import make_data_iter`.

There is no `make_data_iter` (or `make_iter`) function in the `joeynmt.datasets` module.
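One quick way to diagnose this kind of breakage is to probe the installed package for whichever of the candidate names it actually exposes, instead of hard-coding a single import. A minimal sketch (the helper name `resolve` and the candidate list are illustrative, not part of joeynmt's API; the demo uses the standard library since joeynmt may not be installed):

```python
import importlib


def resolve(module_name, candidates):
    """Import `module_name` and return the first attribute from
    `candidates` that exists on it, or None if none are found."""
    mod = importlib.import_module(module_name)
    for name in candidates:
        attr = getattr(mod, name, None)
        if attr is not None:
            return attr
    return None


# Demonstration against the standard library:
fn = resolve("math", ["no_such_fn", "sqrt"])
print(fn(9.0))  # 3.0

# Against joeynmt one would try, e.g.:
# resolve("joeynmt.datasets", ["make_data_iter", "make_iter"])
# and a None result confirms neither symbol exists in the installed version.
```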

AngelosDDG commented 1 year ago

I'm using Python 3.10. I tried joeynmt versions 2.1, 2.0, and 1.5.

AmitMY commented 1 year ago

Thank you for the issue, it is now resolved! Training runs as expected:

2023-02-23 12:08:32,742 - INFO - joeynmt.model - Total params: 49347840
2023-02-23 12:08:32,745 - INFO - joeynmt.training - PoseToTextModel(
    encoder=TransformerEncoder(num_layers=6, num_heads=8, alpha=1.417938140685523, layer_norm="post"),
    decoder=TransformerDecoder(num_layers=6, num_heads=8, alpha=2.0597671439071177, layer_norm="post"),
    src_embed=Embeddings(embedding_dim=64, vocab_size=4),
    trg_embed=Embeddings(embedding_dim=512, vocab_size=1392),
    loss_function=XentLoss(criterion=KLDivLoss(), smoothing=0.1))
2023-02-23 12:08:35,642 - INFO - joeynmt.builders - Adam(lr=0.001, weight_decay=0.0, betas=[0.9, 0.98])
2023-02-23 12:08:35,642 - INFO - joeynmt.builders - WarmupInverseSquareRootScheduler(warmup=4000, decay_rate=0.063246, peak_rate=0.001, min_rate=1e-08)
2023-02-23 12:08:35,642 - INFO - joeynmt.training - Train stats:
    device: cuda
    n_gpu: 2
    16-bits training: False
    gradient accumulation: 1
    batch size per device: 4096
    effective batch size (w. parallel & accumulation): 8192
2023-02-23 12:08:35,642 - INFO - joeynmt.training - EPOCH 1
/home/nlp/amit/libs/anaconda3/envs/sign/lib/python3.11/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
  warnings.warn('Was asked to gather along dimension 0, but all '
2023-02-23 12:09:03,350 - INFO - joeynmt.training - Epoch   1, Step:      100, Batch Loss:     2.258506, Batch Acc: 0.099649, Tokens per Sec:     2146, Lr: 0.000025
2023-02-23 12:09:12,333 - INFO - joeynmt.training - Epoch   1: total training loss 331.24
2023-02-23 12:09:12,333 - INFO - joeynmt.training - EPOCH 2
2023-02-23 12:09:27,033 - INFO - joeynmt.training - Epoch   2, Step:      200, Batch Loss:     1.596051, Batch Acc: 0.198963, Tokens per Sec:     2584, Lr: 0.000050
2023-02-23 12:09:44,464 - INFO - joeynmt.training - Epoch   2: total training loss 212.89
2023-02-23 12:09:44,465 - INFO - joeynmt.training - EPOCH 3
2023-02-23 12:09:50,802 - INFO - joeynmt.training - Epoch   3, Step:      300, Batch Loss:     1.192517, Batch Acc: 0.327386, Tokens per Sec:     2636, Lr: 0.000075