I want to use the CPU, so I modified `hparams.py`:

```
device='cpu',
workers=8,
gpu_ids=[-1],
```

When I run `python main.py --mode train --save_path path_to_save_the_model`, I get the error below in the console. Help!

```
-------------------------------------------------------------------------
hparams.save_dirpath: ./save
self.hparams: HParams(attention_key_channels=0, attention_value_channels=0, batch_size=1, beam_size=12, blook_trigram=True, data_dir='data/', device='cpu', dropout=0.2, embedding_size_pos=12, embedding_size_role=20, embedding_size_word=300, filter_size=64, fintune_word_embedding=True, gen_max_length=400, gpu_ids=[-1], hidden_size=300, learning_rate=0.0005, load_pthpath='', max_gradient_norm=2, max_length=800, min_length=280, num_epochs=100, num_heads=2, num_hidden_layers=2, optimizer_adam_beta1=0.9, optimizer_adam_beta2=0.999, save_dirpath='./save', start_eval_epoch=20, use_pos=False, use_role=False, vocab_word_path='checkpoints/vocab_word', workers=8)
device cpu
gpu_ids [-1]
[train] 58 examples is loaded
[Dev] 47 examples is loaded
role_counter: Counter({'PM': 9755, 'ME': 7553, 'ID': 7513, 'UI': 7049})
pos_counter: Counter({'punct': 69582, 'nn': 59506, 'dt': 45995, 'in': 42797, 'prp': 39219, 'rb': 33180, 'jj': 30777, 'vb': 27703, 'vbp': 19965, 'cc': 18396, 'nns': 15380, 'nnp': 12966, 'to': 11025, 'vbz': 10999, 'md': 10026, 'vbd': 6752, 'cd': 6177, 'vbg': 5478, 'vbn': 4139, 'prp$': 4085, 'wp': 3143, 'wrb': 2636, 'wdt': 2450, 'rp': 1794, 'jjr': 1647, 'rbr': 1116, 'ex': 940, 'uh': 765, 'pdt': 604, 'jjs': 535, 'rbs': 281, 'fw': 80, 'nnps': 36, 'pos': 32, "''": 10, 'sym': 6, ':': 2})
===== Building [Word Vocab] =========
preset_vocab_size: 6
9654it [00:00, 2419732.93it/s]
100%|██████████| 9660/9660 [00:00<00:00, 2544396.93it/s]
Vocab size: 9660
===== Building [Role Vocab] =========
preset_vocab_size: 6
4it [00:00, ?it/s]
100%|██████████| 10/10 [00:00<?, ?it/s]
Vocab size: 10
===== Building [POS Vocab] =========
preset_vocab_size: 6
37it [00:00, ?it/s]
100%|██████████| 43/43 [00:00<?, ?it/s]
Vocab size: 43
[test] 20 examples is loaded
2021-05-26 20:21:46.877853: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
Loading spacy glove embedding:
Vocabulary size: 9660
Word vector size: 300
100%|██████████| 9660/9660 [00:00<00:00, 42924.75it/s]
Unknown word count: 1082
# -------------------------------------------------------------------------
# Setup Training Finished
# -------------------------------------------------------------------------
  0%|          | 0/58 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "E:\工作空间\数据\实验\HMNet-End-to-End-Abstractive-Summarization-for-Meetings-master\main.py", line 116, in <module>
    train_model(args)
  File "E:\工作空间\数据\实验\HMNet-End-to-End-Abstractive-Summarization-for-Meetings-master\main.py", line 58, in train_model
    summarization.train()
  File "E:\工作空间\数据\实验\HMNet-End-to-End-Abstractive-Summarization-for-Meetings-master\train.py", line 138, in train
    for batch_idx, batch in enumerate(tqdm_batch_iterator):
  File "C:\Python39\lib\site-packages\tqdm\std.py", line 1129, in __iter__
    for obj in iterable:
  File "C:\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 355, in __iter__
    return self._get_iterator()
  File "C:\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 301, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 914, in __init__
    w.start()
  File "C:\Python39\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Python39\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Python39\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Python39\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Python39\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class '__main__.HParams'>: attribute lookup HParams on __main__ failed
PS E:\工作空间\数据\实验\HMNet-End-to-End-Abstractive-Summarization-for-Meetings-master>
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python39\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Python39\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
```
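My guess (I'm not sure) is that the DataLoader workers trigger this: with `workers=8` on Windows, PyTorch starts worker processes with spawn, which has to pickle their arguments, and the `HParams` object can't be pickled because the class isn't reachable as a module-level attribute (it looks like it's built with `namedtuple` inside a function?). This little standalone script (names are mine, not from the repo) reproduces the same `PicklingError`:

```python
import pickle
from collections import namedtuple


def make_hparams():
    # Creating the namedtuple class inside a function means pickle cannot
    # find it again via "attribute lookup HParams on __main__" -- the same
    # failure mode as in the traceback above.
    HParams = namedtuple("HParams", ["device", "workers", "gpu_ids"])
    return HParams(device="cpu", workers=8, gpu_ids=[-1])


hp = make_hparams()
try:
    pickle.dumps(hp)  # what spawning a DataLoader worker does implicitly
except pickle.PicklingError as e:
    print("PicklingError:", e)
```

If that's really the cause, setting `workers=0` so the DataLoader loads data in-process should avoid the pickling step entirely, though I'd prefer to keep multiple workers.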