I only changed model_path and tokenizer_path. When I ran python spert.py train --config configs/example_train.conf, it got stuck at the data loader (see the comments in the code):
# in spert_trainer.py, _train_epoch
for batch in tqdm(data_loader, total=total, desc='Train epoch %s' % epoch):  ##### stuck at this line
    model.train()  # never reached
    batch = util.to_device(batch, self._device)

    # forward step
    entity_logits, rel_logits = model(encodings=batch['encodings'], context_masks=batch['context_masks'],
                                      entity_masks=batch['entity_masks'], entity_sizes=batch['entity_sizes'],
                                      relations=batch['rels'], rel_masks=batch['rel_masks'])

    # compute loss and optimize parameters
    batch_loss = compute_loss.compute(entity_logits=entity_logits, rel_logits=rel_logits,
                                      rel_types=batch['rel_types'], entity_types=batch['entity_types'],
                                      entity_sample_masks=batch['entity_sample_masks'],
                                      rel_sample_masks=batch['rel_sample_masks'])
But when I changed sampling_processes to 0, it worked, though slowly.
Why does execution hang with sampling_processes=4?
I am training on a CPU, if that matters.
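For reference, here is a minimal sketch of the workaround I am using. My assumption (not verified against the SpERT source) is that sampling_processes is forwarded to the DataLoader's num_workers, so setting it to 0 keeps all sampling in the main process instead of spawning worker processes:

```python
# Minimal sketch: single-process data loading (the num_workers=0 workaround).
# The mapping sampling_processes -> num_workers is my assumption about SpERT;
# this only demonstrates that num_workers=0 iterates without worker processes.
from torch.utils.data import DataLoader, Dataset


class ToyDataset(Dataset):
    """Tiny stand-in dataset so the loader loop is runnable on its own."""

    def __init__(self, n=8):
        self.data = list(range(n))

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]


# num_workers=0 samples in the main process: slower, but it never hangs for me
loader = DataLoader(ToyDataset(), batch_size=4, num_workers=0)
batches = [b.tolist() for b in loader]
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

With sampling_processes=4 I would expect the equivalent of num_workers=4, i.e. four worker processes feeding the loop above.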
The config file I use: