chenxie95 / deeplearning_course_sjtu


Error when running the image captioning demo #13

Open BravoFr0st opened 2 years ago

BravoFr0st commented 2 years ago
Traceback (most recent call last):
  File "/dssg/home/acct-stu/stu469/3611Proj/image_captioning/main.py", line 347, in <module>
    fire.Fire(Runner)
  File "/dssg/home/acct-stu/stu469/.conda/envs/pytorch-icc/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/dssg/home/acct-stu/stu469/.conda/envs/pytorch-icc/lib/python3.9/site-packages/fire/core.py", line 466, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/dssg/home/acct-stu/stu469/.conda/envs/pytorch-icc/lib/python3.9/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/dssg/home/acct-stu/stu469/3611Proj/image_captioning/main.py", line 342, in train_evaluate
    self.train(config_file, **kwargs)
  File "/dssg/home/acct-stu/stu469/3611Proj/image_captioning/main.py", line 173, in train
    dataloaders = self.get_dataloaders(args)
  File "/dssg/home/acct-stu/stu469/3611Proj/image_captioning/main.py", line 45, in get_dataloaders
    train_set = Flickr8kDataset(
  File "/dssg/home/acct-stu/stu469/3611Proj/image_captioning/dataset.py", line 61, in __init__
    self.vocab, self.word2idx, self.idx2word, self.max_len = self.__construct_vocab()
  File "/dssg/home/acct-stu/stu469/3611Proj/image_captioning/dataset.py", line 96, in __construct_vocab
    cap_words = nltk.word_tokenize(cap.lower())
  File "/dssg/home/acct-stu/stu469/.conda/envs/pytorch-icc/lib/python3.9/site-packages/nltk/tokenize/__init__.py", line 129, in word_tokenize
    sentences = [text] if preserve_line else sent_tokenize(text, language)
  File "/dssg/home/acct-stu/stu469/.conda/envs/pytorch-icc/lib/python3.9/site-packages/nltk/tokenize/__init__.py", line 106, in sent_tokenize
    tokenizer = load(f"tokenizers/punkt/{language}.pickle")
  File "/dssg/home/acct-stu/stu469/.conda/envs/pytorch-icc/lib/python3.9/site-packages/nltk/data.py", line 750, in load
    opened_resource = _open(resource_url)
  File "/dssg/home/acct-stu/stu469/.conda/envs/pytorch-icc/lib/python3.9/site-packages/nltk/data.py", line 876, in _open
    return find(path_, path + [""]).open()
  File "/dssg/home/acct-stu/stu469/.conda/envs/pytorch-icc/lib/python3.9/site-packages/nltk/data.py", line 583, in find
    raise LookupError(resource_not_found)
LookupError: 
**********************************************************************
  Resource punkt not found.
  Please use the NLTK Downloader to obtain the resource:

  >>> import nltk
  >>> nltk.download('punkt')
  
  For more information see: https://www.nltk.org/data.html

  Attempted to load tokenizers/punkt/PY3/english.pickle

  Searched in:
    - '/dssg/home/acct-stu/stu469/nltk_data'
    - '/dssg/home/acct-stu/stu469/.conda/envs/pytorch-icc/nltk_data'
    - '/dssg/home/acct-stu/stu469/.conda/envs/pytorch-icc/share/nltk_data'
    - '/dssg/home/acct-stu/stu469/.conda/envs/pytorch-icc/lib/nltk_data'
    - '/usr/share/nltk_data'
    - '/usr/local/share/nltk_data'
    - '/usr/lib/nltk_data'
    - '/usr/local/lib/nltk_data'
    - ''
**********************************************************************

Is this an encoding issue?

wsntxxn commented 2 years ago

Download the resource as the error message suggests: import nltk; nltk.download('punkt')
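
For reference, a minimal sketch of fetching the punkt data. On a cluster where compute nodes may not have internet access, the optional download_dir argument can place the data in one of the directories listed in the traceback (the path below is copied from the error message, not a new assumption):

    import nltk

    # Default: downloads to ~/nltk_data, the first directory NLTK searches.
    nltk.download('punkt')

    # Or put the data explicitly into one of the searched locations
    # shown in the LookupError above.
    nltk.download('punkt', download_dir='/dssg/home/acct-stu/stu469/nltk_data')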

BravoFr0st commented 2 years ago

After the NLTK data and resnet101-63fe2227.pth were both downloaded correctly, Slurm reports OOM. Roughly how much RAM should I request for this demo to run?

cantabile-kwok commented 2 years ago

I'd suggest setting load to memory (the load_img_to_memory option) to false in the conf.

wsntxxn commented 2 years ago

I've forgotten the exact memory requirement too. The easiest fix is, as the other student said, to set load_img_to_memory to False in the config file.
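
I haven't re-read the repo's dataset.py, but a flag like load_img_to_memory usually toggles between preloading every decoded image into RAM and decoding images lazily in __getitem__. A hypothetical sketch of that pattern (the class name, image directory, and file glob are placeholders, not the repo's actual code):

    from pathlib import Path

    from PIL import Image
    from torch.utils.data import Dataset


    class LazyOrEagerImageDataset(Dataset):
        """Hypothetical illustration of a load_img_to_memory-style switch."""

        def __init__(self, img_dir, load_img_to_memory=False):
            self.paths = sorted(Path(img_dir).glob("*.jpg"))
            self.load_img_to_memory = load_img_to_memory
            if load_img_to_memory:
                # Eager mode: decode every image up front. __getitem__ is fast,
                # but RAM usage grows with the whole dataset (the OOM case).
                self.images = [Image.open(p).convert("RGB") for p in self.paths]

        def __len__(self):
            return len(self.paths)

        def __getitem__(self, idx):
            if self.load_img_to_memory:
                return self.images[idx]
            # Lazy mode: decode one image per access, so only the current
            # batch lives in memory and peak RAM stays small.
            return Image.open(self.paths[idx]).convert("RGB")

With the flag off, peak memory is bounded by the batch size rather than the dataset size, at the cost of re-reading images from disk every epoch.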