dair-iitd / PoolingAnalysis

[EMNLP'20][Findings] Official Repository for the paper "Why and when should you pool? Analyzing Pooling in Recurrent Architectures."
https://pratyushmaini.github.io/Pooling-Analysis

FileNotFoundError #1

Open saladlin opened 3 years ago

saladlin commented 3 years ago

Thank you very much for this excellent work! Congratulations! When I run train.py, I get `FileNotFoundError: [Errno 2] No such file or directory: '../data/IMDB_LONG/dump_none.pkl'`. I cannot find 'IMDB_LONG/dump_none.pkl'. Could you provide it? Thank you for your help.

pratyushmaini commented 3 years ago

Thank you for your interest. Can you tell me the exact error message? The pickled files are only used to speed up data loading. When you run the program for the first time, it should automatically create a new pickle file, e.g. at https://github.com/dair-iitd/PoolingAnalysis/blob/753633fcc481ad4f792936a561f93e47750b2554/my_dataloader.py#L132, where we first check whether a pickle file is available and, if not, create a new one.

If you can tell me where exactly the error occurred, I can help debug this better. Thanks
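The check-then-create caching pattern described above can be sketched as follows. This is a minimal illustration, not the repository's actual code; the helper name `load_or_build` and its `build_fn` argument are hypothetical stand-ins for the logic in `my_dataloader.py`:

```python
import os
import pickle

def load_or_build(dump_path, build_fn):
    """Load cached objects from a pickle if present; otherwise build and cache them."""
    if os.path.exists(dump_path):
        # Fast path: reuse the objects pickled on a previous run.
        with open(dump_path, "rb") as f:
            return pickle.load(f)
    # Slow path: build from scratch, then pickle for the next run.
    obj = build_fn()
    with open(dump_path, "wb") as f:
        pickle.dump(obj, f)
    return obj
```

On the first run the slow path executes and writes the pickle; every later run takes the fast path, which is why the `.pkl` file is not shipped with the repository.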

saladlin commented 3 years ago

> Thank you for your interest. Can you tell me the exact error message? The pickled files are only used in order to speed up the data loading process. When you run the program for the first time, it should automatically create a new pickle file. Such as via line
>
> https://github.com/dair-iitd/PoolingAnalysis/blob/753633fcc481ad4f792936a561f93e47750b2554/my_dataloader.py#L132
>
> where we first check if a pickle file is available, and if not make a new one. If you can tell me where the exact error occurred, I can help better debug this. Thanks

Thank you for your enthusiastic answer. The error appears when I execute "Evaluate the initial gradient distribution":

python train.py --pool att_max --data_size 20K --gpu_id 0 --mode train --batch_size 1 --task IMDB_LONG --wiki none --epochs 5 --gradients 1 --initial 1 --log 1 --customlstm 1

Traceback (most recent call last):
  File "D:/project/pool/train.py", line 47, in <module>
    TEXT, LABEL, train_iterator, valid_iterator, test_iterator = get_data(args, MAX_VOCAB_SIZE, device)
  File "D:\project\pool\my_dataloader.py", line 137, in get_data
    dump_pickle(args, TEXT, LABEL)
  File "D:\project\pool\my_dataloader.py", line 116, in dump_pickle
    fields={'s': ('text', TEXT), 'class': ('label', LABEL)})
  File "C:\Users\WXL\anaconda3\envs\torch1.4\lib\site-packages\torchtext\data\dataset.py", line 78, in splits
    os.path.join(path, train), **kwargs)
  File "C:\Users\WXL\anaconda3\envs\torch1.4\lib\site-packages\torchtext\data\dataset.py", line 251, in __init__
    with io.open(os.path.expanduser(path), encoding="utf8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '../data/IMDB_LONG\train_m.json'

How can I solve it?

pratyushmaini commented 3 years ago

Hi! I am sorry, I totally missed the notification. Can you check whether the "../data/IMDB_LONG" folder was created automatically? I think this may be a permissions issue.
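One quick way to rule out the missing-directory case is to create the expected data folder up front. This is a hedged sketch, not part of the repository; the helper `ensure_data_dir` is hypothetical, and the path mirrors the one in the traceback (the missing `train_m.json` itself would still need to be produced by the repo's data preparation):

```python
import os

def ensure_data_dir(path):
    """Create the data directory (and any parents) if missing; return whether it now exists."""
    # exist_ok=True makes this idempotent: no error if the folder is already there.
    os.makedirs(path, exist_ok=True)
    return os.path.isdir(path)

# Example: the folder the loader expects relative to the working directory.
# ensure_data_dir(os.path.join("..", "data", "IMDB_LONG"))
```

If `os.makedirs` raises `PermissionError` here, that would confirm the permission hypothesis above.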