IBM / transition-amr-parser

SoTA Abstract Meaning Representation (AMR) parsing with word-node alignments in PyTorch. Includes checkpoints and other tools, such as Smatch with statistical significance testing.
Apache License 2.0

problem with installation #7

Closed PolKul closed 3 years ago

PolKul commented 3 years ago

Hello, I followed your setup instructions and installed the AMR parser from pip. I checked the installation with python tests/correctly_installed.py and it showed all OK.

Nevertheless, I have two issues:

  1. When I run bash preprocess/install_alignment_tools.sh I get this error: Error: Could not retrieve sbt 0.13.5

  2. When I run bash tests/minimal_test.sh it starts loading my GPU (Nvidia Titan RTX) and never finishes. Your instructions note that it should not take more than a minute...

Maybe I'm still missing some libraries, or my Python version is incompatible?

ramon-astudillo commented 3 years ago

Hi @PolKul, I would focus on troubleshooting issue 2 first, since the minimal test uses already-aligned data. The alignment tools depend on a number of external modules, and it's harder to find the cause there.

Regarding 2: if the code freezes as soon as you move anything to the GPU, it is likely an installation problem.
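
As a quick sanity check outside the repo (a generic PyTorch snippet, not part of this project's test suite), you could move a tensor to the GPU by hand and see whether that already hangs:

```bash
# Generic PyTorch/CUDA sanity check (not part of this repo): if this hangs or
# errors, the problem is in the PyTorch/CUDA setup rather than in the parser.
python -c "
import torch
print('CUDA available:', torch.cuda.is_available())
x = torch.randn(1000, 1000).cuda()
print('GPU matmul OK:', (x @ x).sum().item())
"
```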

PolKul commented 3 years ago

Hi @ramon-astudillo, thanks for the answer. For problem 2, let me post the full log here:

(venv1) pavel@pavel-TRX40-DESIGNARE:~/work/nlp/transition-amr-parser$ bash tests/minimal_test.sh
[normalize rules] months
[normalize rules] units
[normalize rules] cardinals
[normalize rules] ordinals
Read DATA/wiki25.jkaln
25 sentences
216/293 node types/tokens
35/285 edge types/tokens
241/383 word types/tokens
AMR contains 4 duplicate edges
{'ARG1': 4}
2021-05-04 12:04:04 [amr] Processing oracle
2021-05-04 12:04:04 [oracle] Parsing data
computing oracle: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:00<00:00, 211.38it/s]
2021-05-04 12:04:04 [oracle] Done
Not whitelisted actions used e.g. arcs for unconfirmed words
Counter({'LA': 2})
Blacklisted actions used e.g. duplicated edges
Counter({'RA': 1})
There were 18 disconnected nodes (:rel)
2021-05-04 12:04:04 [Totals:] 0.61
2021-05-04 12:04:04 [Totals:] Failed Entity Predictions:
Namespace(alignfile=None, batch_normalize_reward=False, bert_layers=None, bpe=None, cpu=False, criterion='cross_entropy', dataset_impl='mmap', destdir='DATA.tests/features/wiki25.jkaln/', entity_rules='DATA.tests/oracles/wiki25.jkaln//entity_rules.json', fp16=False, fp16_init_scale=128, fp16_scale_tolerance=0.0, fp16_scale_window=None, gold_annotations=None, gold_episode_ratio=None, joined_dictionary=False, log_format=None, log_interval=1000, lr_scheduler='fixed', machine_rules='DATA.tests/oracles/wiki25.jkaln//train.rules.json', machine_type='AMR', memory_efficient_fp16=False, min_loss_scale=0.0001, no_progress_bar=False, nwordssrc=-1, nwordstgt=-1, only_source=False, optimizer='nag', padding_factor=8, pretrained_embed='roberta.base', seed=1, source_lang='en', srcdict=None, target_lang='actions', task='translation', tbmf_wrapper=False, tensorboard_logdir='', testpref='DATA.tests/oracles/wiki25.jkaln//test', tgtdict=None, threshold_loss_scale=None, thresholdsrc=0, thresholdtgt=0, tokenize_by_whitespace=False, tokenizer=None, trainpref='DATA.tests/oracles/wiki25.jkaln//train', user_dir=None, validpref='DATA.tests/oracles/wiki25.jkaln//dev', workers=1)
| [en] Dictionary: 247 types
| [en] DATA.tests/oracles/wiki25.jkaln//train.en: 25 sents, 408 tokens, 0.0% replaced by <unk>
| [en] Dictionary: 247 types
| [en] DATA.tests/oracles/wiki25.jkaln//dev.en: 25 sents, 408 tokens, 0.0% replaced by <unk>
| [en] Dictionary: 247 types
| [en] DATA.tests/oracles/wiki25.jkaln//test.en: 25 sents, 408 tokens, 0.0% replaced by <unk>
| [actions] Dictionary: 127 types
| [actions] DATA.tests/oracles/wiki25.jkaln//train.actions: 25 sents, 1327 tokens, 0.0% replaced by <unk>
| [actions] Dictionary: 127 types
| [actions] DATA.tests/oracles/wiki25.jkaln//dev.actions: 25 sents, 1327 tokens, 0.0% replaced by <unk>
| [actions] Dictionary: 127 types
| [actions] DATA.tests/oracles/wiki25.jkaln//test.actions: 25 sents, 1327 tokens, 0.0% replaced by <unk>
Using cache found in /home/pavel/.cache/torch/hub/pytorch_fairseq_master
Unable to build Cython components. Please make sure Cython is installed if the torch.hub model you are loading depends on it.
loading archive file http://dl.fbaipublicfiles.com/fairseq/models/roberta.base.tar.gz from cache at /home/pavel/.cache/torch/pytorch_fairseq/37d2bc14cf6332d61ed5abeb579948e6054e46cc724c7d23426382d11a31b2d6.ae5852b4abc6bf762e0b6b30f19e741aa05562471e9eb8f4a6ae261f04f9b350
| dictionary: 50264 types
Using roberta.base extraction in GPU

25it [00:00, 199.25it/s]

There were missing actions
Counter({'LA(op1)': 1, 'LA(domain)': 1, 'RA(ARG1)': 1})
Using cache found in /home/pavel/.cache/torch/hub/pytorch_fairseq_master
Unable to build Cython components. Please make sure Cython is installed if the torch.hub model you are loading depends on it.
loading archive file http://dl.fbaipublicfiles.com/fairseq/models/roberta.base.tar.gz from cache at /home/pavel/.cache/torch/pytorch_fairseq/37d2bc14cf6332d61ed5abeb579948e6054e46cc724c7d23426382d11a31b2d6.ae5852b4abc6bf762e0b6b30f19e741aa05562471e9eb8f4a6ae261f04f9b350
| dictionary: 50264 types
Using roberta.base extraction in GPU

25it [00:00, 198.63it/s]

There were missing actions
Counter({'LA(op1)': 1, 'LA(domain)': 1, 'RA(ARG1)': 1})
Using cache found in /home/pavel/.cache/torch/hub/pytorch_fairseq_master
Unable to build Cython components. Please make sure Cython is installed if the torch.hub model you are loading depends on it.
loading archive file http://dl.fbaipublicfiles.com/fairseq/models/roberta.base.tar.gz from cache at /home/pavel/.cache/torch/pytorch_fairseq/37d2bc14cf6332d61ed5abeb579948e6054e46cc724c7d23426382d11a31b2d6.ae5852b4abc6bf762e0b6b30f19e741aa05562471e9eb8f4a6ae261f04f9b350
| dictionary: 50264 types
Using roberta.base extraction in GPU

25it [00:00, 197.31it/s]

There were missing actions
Counter({'LA(op1)': 1, 'LA(domain)': 1, 'RA(ARG1)': 1})
| Wrote preprocessed data to DATA.tests/features/wiki25.jkaln/
| distributed init (rank 1): tcp://localhost:18693
| distributed init (rank 0): tcp://localhost:18693
| initialized host pavel-TRX40-DESIGNARE as rank 1
| initialized host pavel-TRX40-DESIGNARE as rank 0
Namespace(activation_dropout=0.0, activation_fn='relu', adam_betas='(0.9,0.98)', adam_eps=1e-08, adaptive_input=False, adaptive_softmax_cutoff=None, adaptive_softmax_dropout=0, arch='stack_transformer_6x6_nopos', attention_dropout=0.0, bert_backprop=False, best_checkpoint_metric='loss', bpe=None, bucket_cap_mb=25, burnthrough=5, clip_norm=0.0, cpu=False, criterion='label_smoothed_cross_entropy', curriculum=0, data='DATA.tests/features/wiki25.jkaln/', dataset_impl=None, ddp_backend='c10d', decoder_attention_heads=4, decoder_embed_dim=256, decoder_embed_path=None, decoder_ffn_embed_dim=512, decoder_input_dim=256, decoder_layers=6, decoder_learned_pos=False, decoder_normalize_before=False, decoder_output_dim=256, device_id=0, disable_validation=False, distributed_backend='nccl', distributed_init_method='tcp://localhost:18693', distributed_no_spawn=False, distributed_port=-1, distributed_rank=0, distributed_world_size=2, dropout=0.0, encode_state_machine='all-layers_nopos', encoder_attention_heads=4, encoder_embed_dim=256, encoder_embed_path=None, encoder_ffn_embed_dim=512, encoder_layers=6, encoder_learned_pos=False, encoder_normalize_before=False, find_unused_parameters=False, fix_batches_to_gpus=False, fp16=False, fp16_init_scale=128, fp16_scale_tolerance=0.0, fp16_scale_window=None, keep_interval_updates=-1, keep_last_epochs=1, label_smoothing=0.01, lazy_load=False, left_pad_source='True', left_pad_target='False', log_format='json', log_interval=1000, lr=[0.025], lr_scheduler='inverse_sqrt', max_epoch=10, max_sentences=None, max_sentences_valid=None, max_source_positions=1024, max_target_positions=1024, max_tokens=3584, max_tokens_valid=3584, max_update=0, maximize_best_checkpoint_metric=False, memory_efficient_fp16=False, min_loss_scale=0.0001, min_lr=1e-09, no_bert_precompute=False, no_epoch_checkpoints=False, no_last_checkpoints=False, no_progress_bar=False, no_save=False, no_save_optimizer_state=False, no_token_positional_embeddings=False, num_workers=1, optimizer='adam', optimizer_overrides='{}', pretrained_embed_dim=768, raw_text=False, required_batch_size_multiple=8, reset_dataloader=False, reset_lr_scheduler=False, reset_meters=False, reset_optimizer=False, restore_file='checkpoint_last.pt', save_dir='DATA.tests/models/wiki25.jkaln/', save_interval=1, save_interval_updates=0, seed=42, sentence_avg=False, share_all_embeddings=False, share_decoder_input_output_embed=False, skip_invalid_size_inputs_valid_test=False, source_lang=None, target_lang=None, task='translation', tbmf_wrapper=False, tensorboard_logdir='', threshold_loss_scale=None, tokenizer=None, train_subset='train', update_freq=[1], upsample_primary=1, use_bmuf=False, user_dir=None, valid_subset='valid', validate_interval=1, warmup_init_lr=1e-07, warmup_updates=1, weight_decay=0.0)
| [en] dictionary: 248 types
| [actions] dictionary: 128 types
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/valid.en-actions.en
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/valid.en-actions.en.bert
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/valid.en-actions.en.wordpieces
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/valid.en-actions.en.wp2w
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/valid.en-actions.actions
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/valid.en-actions.memory
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/valid.en-actions.memory_pos
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/valid.en-actions.target_masks
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/valid.en-actions.active_logits
| DATA.tests/features/wiki25.jkaln/ valid en-actions 25 examples
TransformerModel(
  (encoder): TransformerEncoder(
    (subspace): Linear(in_features=768, out_features=256, bias=False)
    (embed_tokens): Embedding(248, 256, padding_idx=1)
    (embed_positions): SinusoidalPositionalEmbedding()
    (layers): ModuleList(
      (0): TransformerEncoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (1): TransformerEncoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (2): TransformerEncoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (3): TransformerEncoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (4): TransformerEncoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (5): TransformerEncoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
    )
  )
  (decoder): TransformerDecoder(
    (embed_tokens): Embedding(128, 256, padding_idx=1)
    (embed_stack_positions): SinusoidalPositionalEmbedding()
    (embed_positions): SinusoidalPositionalEmbedding()
    (layers): ModuleList(
      (0): TransformerDecoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (encoder_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (1): TransformerDecoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (encoder_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (2): TransformerDecoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (encoder_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (3): TransformerDecoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (encoder_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (4): TransformerDecoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (encoder_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (5): TransformerDecoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (encoder_attn_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): LayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
    )
  )
)
| model stack_transformer_6x6_nopos, criterion LabelSmoothedCrossEntropyCriterion
| num. model params: 8232960 (num. trained: 8232960)
| training on 2 GPUs
| max tokens per GPU = 3584 and max sentences per GPU = None
| no existing checkpoint found DATA.tests/models/wiki25.jkaln/checkpoint_last.pt
| loading train data for epoch 0
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/train.en-actions.en
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/train.en-actions.en.bert
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/train.en-actions.en.wordpieces
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/train.en-actions.en.wp2w
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/train.en-actions.actions
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/train.en-actions.memory
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/train.en-actions.memory_pos
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/train.en-actions.target_masks
| loaded 25 examples from: DATA.tests/features/wiki25.jkaln/train.en-actions.active_logits
| DATA.tests/features/wiki25.jkaln/ train en-actions 25 examples
| NOTICE: your device may support faster training with --fp16
/usr/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
  len(cache))

PolKul commented 3 years ago

I was able to run the test after modifying overfit.sh and adding --distributed-world-size 1 to the fairseq-train call. So it looks like a problem with multi-GPU training.
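
For reference, the change is roughly the following (a sketch, not the literal script contents; EXTRA_FLAGS is just a placeholder name, and all of the script's original fairseq-train flags stay unchanged):

```bash
# Sketch of the edit in overfit.sh: keep the existing fairseq-train invocation
# and only add --distributed-world-size 1 to force single-GPU, non-distributed training.
EXTRA_FLAGS="--distributed-world-size 1"
fairseq-train DATA.tests/features/wiki25.jkaln/ \
    $EXTRA_FLAGS \
    --task translation --arch stack_transformer_6x6_nopos   # plus the remaining original flags
```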

Anyway, now I'm trying to install the alignment tools with bash preprocess/install_alignment_tools.sh, but I get Error: Could not retrieve sbt 0.13.5. Can you please help me with that?

ramon-astudillo commented 3 years ago

Are you able to pinpoint which part of the installation process yields that error here?

https://github.com/IBM/transition-amr-parser/blob/master/preprocess/install_alignment_tools.sh#L10

Is it the JAMR aligner or the Kevin aligner? I advise you to run them separately to localize the error.
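
One generic way to see exactly which command fails (plain shell tracing, nothing specific to this script) is to run it with bash -x:

```bash
# Print every command the install script executes; the last command shown
# before the failure is the step that breaks (presumably JAMR's sbt bootstrap).
bash -x preprocess/install_alignment_tools.sh 2>&1 | tee install_trace.log
```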

PolKul commented 3 years ago

It stops at the JAMR aligner. Let me post the full log here:


pavel@pavel-TRX40-DESIGNARE:~/work/nlp/transition-amr-parser$ bash preprocess/install_alignment_tools.sh

Downloading JAMR

Cloning into 'jamr'...
remote: Enumerating objects: 6768, done.
remote: Total 6768 (delta 0), reused 0 (delta 0), pack-reused 6768
Receiving objects: 100% (6768/6768), 2.51 MiB | 1.37 MiB/s, done.
Resolving deltas: 100% (4357/4357), done.
Already on 'Semeval-2016'
Your branch is up to date with 'origin/Semeval-2016'.
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Getting org.scala-sbt sbt 0.13.5 ...

:: problems summary ::
:::: WARNINGS
        module not found: org.scala-sbt#sbt;0.13.5

    ==== local: tried

      /home/pavel/.ivy2/local/org.scala-sbt/sbt/0.13.5/ivys/ivy.xml

      -- artifact org.scala-sbt#sbt;0.13.5!sbt.jar:

      /home/pavel/.ivy2/local/org.scala-sbt/sbt/0.13.5/jars/sbt.jar

    ==== typesafe-ivy-releases: tried

      http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt/0.13.5/ivys/ivy.xml

    ==== Maven Central: tried

      http://repo1.maven.org/maven2/org/scala-sbt/sbt/0.13.5/sbt-0.13.5.pom

      -- artifact org.scala-sbt#sbt;0.13.5!sbt.jar:

      http://repo1.maven.org/maven2/org/scala-sbt/sbt/0.13.5/sbt-0.13.5.jar

        ::::::::::::::::::::::::::::::::::::::::::::::

        ::          UNRESOLVED DEPENDENCIES         ::

        ::::::::::::::::::::::::::::::::::::::::::::::

        :: org.scala-sbt#sbt;0.13.5: not found

        ::::::::::::::::::::::::::::::::::::::::::::::

:::: ERRORS
    Server access Error: Connection refused (Connection refused) url=http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt/0.13.5/ivys/ivy.xml

    SERVER ERROR: HTTPS Required url=http://repo1.maven.org/maven2/org/scala-sbt/sbt/0.13.5/sbt-0.13.5.pom

    SERVER ERROR: HTTPS Required url=http://repo1.maven.org/maven2/org/scala-sbt/sbt/0.13.5/sbt-0.13.5.jar

:: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
unresolved dependency: org.scala-sbt#sbt;0.13.5: not found
Error during sbt execution: Error retrieving required libraries
  (see /home/pavel/.sbt/boot/update.log for complete log)
Error: Could not retrieve sbt 0.13.5

ramon-astudillo commented 3 years ago

This seems related to the installation of Scala, which is a requirement of the JAMR aligner: https://github.com/jflanigan/jamr

I have not encountered that error, but it seems to complain about being unable to access a particular URL of the Scala repo (see "Server access Error: Connection refused" above). I found some indication on SO that it may be related to certificate issues on the machine where this is installed; see

https://stackoverflow.com/questions/27002423/sbt-build-failed-module-not-found-org-scala-sbtsbt0-13-5

PolKul commented 3 years ago

Well, I'm not experienced with Scala or sbt, but it looks like the JAMR project uses an old version of sbt (0.13.5). A similar issue is discussed on SO, for example https://stackoverflow.com/questions/59763435/access-maven-repo-over-https-in-sbt which says that "the issue was that I was using sbt version 0.13.5. And DefaultMavenRepository started pointing to https endpoint from v0.13.6".

And the log above does show sbt requesting plain http:// repository URLs, which the servers reject (HTTPS Required / Connection refused). As pointed out, sbt moved to HTTPS endpoints in version 0.13.6.
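
For reference, one workaround I have seen suggested (untested here; it uses sbt's standard repository-override mechanism, nothing JAMR-specific, and JAMR's bundled sbt wrapper may need the -D properties passed differently) is to point the launcher at HTTPS mirrors:

```bash
# Untested sketch: override sbt's repository list with HTTPS URLs.
mkdir -p ~/.sbt
cat > ~/.sbt/repositories <<'EOF'
[repositories]
  local
  maven-central: https://repo1.maven.org/maven2/
  typesafe-ivy-releases: https://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext]
EOF
# Standard sbt properties telling the launcher/build to use the file above
# instead of its built-in (http) repository list.
export SBT_OPTS="-Dsbt.override.build.repos=true -Dsbt.repository.config=$HOME/.sbt/repositories"
bash preprocess/install_alignment_tools.sh
```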

Is it possible to recompile the repository with the latest version of Scala/SBT?

PS: by the way, I have installed the latest version of sbt (1.5.1) manually and pointed to that version in jamr/project/build.properties, but then I get a bunch of other errors.

ramon-astudillo commented 3 years ago

Unfortunately this is a problem in JAMR and not in our model, and I know little about it. Maybe you can reach out to the JAMR authors in their repo (linked above); they may have encountered this problem and have a fix.

ramon-astudillo commented 3 years ago

Given other related open issues, I assume this is solved.

HoraceXIaoyiBao commented 2 years ago

> Hello, I followed your setup instructions and installed the AMR parser from pip. I checked the installation with python tests/correctly_installed.py and it showed all OK.
>
> Nevertheless, I have two issues:
>
>   1. When I run bash preprocess/install_alignment_tools.sh I get this error: Error: Could not retrieve sbt 0.13.5
>   2. When I run bash tests/minimal_test.sh it starts loading my GPU (Nvidia Titan RTX) and never finishes. Your instructions note that it should not take more than a minute...
>
> Maybe I'm still missing some libraries, or my Python version is incompatible?

Hi there, I ran into the same issue. Did you ever solve it?