wiley-porg opened this issue 4 years ago
I ran into the same problem. The command I used is as follows:
python main.py --exp_name first_eval --eval_only true --reload_model "./fwd_bwd_ibp.pth" --reload_data "./prim_ibp.test" --beam_eval true --beam_size 10 --emb_dim 1024 --n_enc_layers 6 --n_dec_layers 6 --n_heads 8 --dump_path ./dump
The error trace is as follows:
Traceback (most recent call last):
File "main.py", line 232, in <module>
For the dump_path parameter (and the reload paths too), you need an absolute path, like d:/Users/myname/dumped. This should solve the issue with the pickle file. You can also avoid the random experiment names by specifying one with exp_id, e.g. --dump_path d:/dumped --exp_name bwd_gen --exp_id first should send all files (pickle, train.log and data.prefix files) into d:/dumped/bwd_gen/first/
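For what it's worth, the absolute-path advice can be sketched with Python's pathlib (the "dumped" directory name here is just illustrative):

```python
from pathlib import Path

# A relative dump path like "./dumped" gets combined with a random
# experiment ID; resolving it to an absolute path up front avoids
# surprises when the code later reopens params.pkl from inside it.
dump_path = Path("./dumped").resolve()
dump_path.mkdir(parents=True, exist_ok=True)
print(dump_path.is_absolute())
```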
File "/home/Partofspeechtaggingforbalochi/mysite/main.py", line 27, in result
pickled_model = pickle.load(open('modelcrf.pkl', 'rb'))
EOFError: Ran out of input
I am facing this problem while hosting a machine learning model on PythonAnywhere. Can anyone help me solve this issue?
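As a side note, `EOFError: Ran out of input` from `pickle.load` almost always means the .pkl file is empty or truncated, e.g. the model was never fully written before upload. A minimal reproduction plus a size guard (the file name and helper are hypothetical):

```python
import os
import pickle
import tempfile

# An empty file reproduces the exact error: pickle finds no bytes to read.
fd, empty_path = tempfile.mkstemp(suffix=".pkl")
os.close(fd)  # zero bytes were written

try:
    with open(empty_path, "rb") as fh:
        pickle.load(fh)
except EOFError as exc:
    print("EOFError:", exc)  # "Ran out of input"

def load_model(path):
    """Refuse to unpickle a missing or empty file, with a clearer error."""
    if not os.path.exists(path) or os.path.getsize(path) == 0:
        raise ValueError(f"{path} is missing or empty; re-export the model")
    with open(path, "rb") as fh:
        return pickle.load(fh)
```

Checking the size before unpickling turns a cryptic EOFError into a message that points at the real problem (a bad model file).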
Hi, I'm currently playing around with the symbolic math code and I keep running into this error during data generation.
Running the following code (the example generation code provided):
python main.py --export_data true --batch_size 32 --cpu true --exp_name prim_bwd_data --num_workers 20 --tasks prim_bwd --env_base_seed -1 --n_variables 1 --n_coefficients 0 --leaf_probs "0.75,0,0.25,0" --max_ops 15 --max_int 5 --positive true --max_len 512 --operators "add:10,sub:3,mul:10,div:5,sqrt:4,pow2:4,pow3:2,pow4:1,pow5:1,ln:4,exp:4,sin:4,cos:4,tan:4,asin:1,acos:1,atan:1,sinh:1,cosh:1,tanh:1,asinh:1,acosh:1,atanh:1"
This yields an error message telling me that './dumped/prim_bwd_data\48t7888vh8\params.pkl' doesn't exist. I can't create the file manually, since that random experiment ID changes on every run, so could someone explain why the code isn't correctly creating a directory for it?
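For reference, the mixed forward/back slashes in that path suggest the experiment directory is being created POSIX-shell style (`mkdir -p`), which Windows cmd doesn't understand; `os.makedirs` with `exist_ok=True` handles nested, mixed-separator paths on every platform. A sketch with a hypothetical experiment ID:

```python
import os
import tempfile

# os.makedirs creates every missing intermediate directory, and with
# exist_ok=True it is a no-op when the directory already exists --
# unlike a shell `mkdir -p`, which fails on Windows cmd.
base = tempfile.mkdtemp()
exp_dir = os.path.join(base, "dumped", "prim_bwd_data", "48t7888vh8")
os.makedirs(exp_dir, exist_ok=True)
os.makedirs(exp_dir, exist_ok=True)  # safe to call again
print(os.path.isdir(exp_dir))
```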
But that's not the real blocker - I can set the dump path explicitly, so the run command looks like this (all I did was add --dump_path):
python main.py --export_data true --dump_path C:\Users\Toby\PycharmProjects\PDE\venv\dumped --batch_size 32 --cpu true --exp_name prim_bwd_data --num_workers 20 --tasks prim_bwd --env_base_seed -1 --n_variables 1 --n_coefficients 0 --leaf_probs "0.75,0,0.25,0" --max_ops 15 --max_int 5 --positive true --max_len 512 --operators
"add:10,sub:3,mul:10,div:5,sqrt:4,pow2:4,pow3:2,pow4:1,pow5:1,ln:4,exp:4,sin:4,cos:4,tan:4,asin:1,acos:1,atan:1,sinh:1,cosh:1,tanh:1,asinh:1,acosh:1,atanh:1"

Doing this yields the following error message:

SLURM job: False
0 - Number of nodes: 1
0 - Node ID : 0
0 - Local rank : 0
0 - Global rank : 0
0 - World size : 1
0 - GPUs per node : 1
0 - Master : True
0 - Multi-node : False
0 - Multi-GPU : False
0 - Hostname : ChanPC
A subdirectory or file -p already exists.
Error occurred while processing: -p.
INFO - 06/25/20 14:02:53 - 0:00:00 - ============ Initialized logger ============
INFO - 06/25/20 14:02:53 - 0:00:00 - accumulate_gradients: 1
amp: -1
attention_dropout: 0
balanced: False
batch_size: 32
beam_early_stopping: True
beam_eval: False
beam_length_penalty: 1
beam_size: 1
clean_prefix_expr: True
clip_grad_norm: 5
command: python main.py --export_data true --dump_path 'C:\Users\Chan\PycharmProjects\PDE\venv\dumped' --batch_size 32 --cpu true --exp_name prim_bwd_data --num_workers 20 --tasks prim_bwd --env_base_seed '-1' --n_variables 1 --n_coefficients 0 --leaf_probs '0.75,0,0.25,0' --max_ops 15 --max_int 5 --positive true --max_len 512 --operators 'add:10,sub:3,mul:10,div:5,sqrt:4,pow2:4,pow3:2,pow4:1,pow5:1,ln:4,exp:4,sin:4,cos:4,tan:4,asin:1,acos:1,atan:1,sinh:1,cosh:1,tanh:1,asinh:1,acosh:1,atanh:1' --exp_id "9epgwherdq"
cpu: True
debug: False
debug_slurm: False
dropout: 0
dump_path: C:\Users\Chan\PycharmProjects\PDE\venv\dumped\prim_bwd_data\9epgwherdq
emb_dim: 256
env_base_seed: -1
env_name: char_sp
epoch_size: 300000
eval_only: False
eval_verbose: 0
eval_verbose_print: False
exp_id: 9epgwherdq
exp_name: prim_bwd_data
export_data: True
fp16: False
global_rank: 0
int_base: 10
is_master: True
is_slurm_job: False
leaf_probs: 0.75,0,0.25,0
local_rank: 0
master_port: -1
max_epoch: 100000
max_int: 5
max_len: 512
max_ops: 15
max_ops_G: 4
multi_gpu: False
multi_node: False
n_coefficients: 0
n_dec_layers: 4
n_enc_layers: 4
n_gpu_per_node: 1
n_heads: 4
n_nodes: 1
n_variables: 1
node_id: 0
num_workers: 20
operators: add:10,sub:3,mul:10,div:5,sqrt:4,pow2:4,pow3:2,pow4:1,pow5:1,ln:4,exp:4,sin:4,cos:4,tan:4,asin:1,acos:1,atan:1,sinh:1,cosh:1,tanh:1,asinh:1,acosh:1,atanh:1
optimizer: adam,lr=0.0001
positive: True
precision: 10
reload_checkpoint:
reload_data:
reload_model:
reload_size: -1
rewrite_functions:
same_nb_ops_per_batch: False
save_periodic: 0
share_inout_emb: True
sinusoidal_embeddings: False
stopping_criterion:
tasks: prim_bwd
validation_metrics:
world_size: 1
INFO - 06/25/20 14:02:53 - 0:00:00 - The experiment will be stored in C:\Users\Chan\PycharmProjects\PDE\venv\dumped\prim_bwd_data\9epgwherdq
INFO - 06/25/20 14:02:53 - 0:00:00 - Running command: python main.py --export_data true --dump_path 'C:\Users\Chan\PycharmProjects\PDE\venv\dumped' --batch_size 32 --cpu true --exp_name prim_bwd_data --num_workers 20 --tasks prim_bwd --env_base_seed '-1' --n_variables 1 --n_coefficients 0 --leaf_probs '0.75,0,0.25,0' --max_ops 15 --max_int 5 --positive true --max_len 512 --operators 'add:10,sub:3,mul:10,div:5,sqrt:4,pow2:4,pow3:2,pow4:1,pow5:1,ln:4,exp:4,sin:4,cos:4,tan:4,asin:1,acos:1,atan:1,sinh:1,cosh:1,tanh:1,asinh:1,acosh:1,atanh:1'
WARNING - 06/25/20 14:02:53 - 0:00:00 - Signal handler installed.
INFO - 06/25/20 14:02:53 - 0:00:00 - Unary operators: ['acos', 'acosh', 'asin', 'asinh', 'atan', 'atanh', 'cos', 'cosh', 'exp', 'ln', 'pow2', 'pow3', 'pow4', 'pow5', 'sin', 'sinh', 'sqrt', 'tan', 'tanh']
INFO - 06/25/20 14:02:53 - 0:00:00 - Binary operators: ['add', 'div', 'mul', 'sub']
INFO - 06/25/20 14:02:53 - 0:00:00 - words: {'': 2, '(': 3, ')': 4, '': 5, '': 6, '': 7, '': 8, '': 9, 'pi': 10, 'E': 11, 'x': 12, 'y': 13, 'z': 14, 't': 15, 'a0': 16, 'a1': 17, 'a2': 18, 'a3': 19, 'a4': 20, 'a5': 21, 'a6': 22, 'a7': 23, 'a8': 24, 'a9': 25, 'abs': 26, 'acos': 27, 'acosh': 28, 'acot': 29, 'acoth': 30, 'acsc': 31, 'acsch': 32, 'add': 33, 'asec': 34, 'asech': 35, 'asin': 36, 'asinh': 37, 'atan': 38, 'atanh': 39, 'cos': 40, 'cosh': 41, 'cot': 42, 'coth': 43, 'csc': 44, 'csch': 45, 'derivative': 46, 'div': 47, 'exp': 48, 'f': 49, 'g': 50, 'h': 51, 'inv': 52, 'ln': 53, 'mul': 54, 'pow': 55, 'pow2': 56, 'pow3': 57, 'pow4': 58, 'pow5': 59, 'rac': 60, 'sec': 61, 'sech': 62, 'sign': 63, 'sin': 64, 'sinh': 65, 'sqrt': 66, 'sub': 67, 'tan': 68, 'tanh': 69, 'I': 70, 'INT+': 71, 'INT-': 72, 'INT': 73, 'FLOAT': 74, '-': 75, '.': 76, '10^': 77, 'Y': 78, "Y'": 79, "Y''": 80, '0': 81, '1': 82, '2': 83, '3': 84, '4': 85, '5': 86, '6': 87, '7': 88, '8': 89, '9': 90}
INFO - 06/25/20 14:02:53 - 0:00:00 - 6 possible leaves.
INFO - 06/25/20 14:02:53 - 0:00:00 - Checking expressions in [0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 2.1, 3.1, -0.01, -0.1, -0.3, -0.5, -0.7, -0.9, -1.1, -2.1, -3.1]
INFO - 06/25/20 14:02:53 - 0:00:00 - Training tasks: prim_bwd
INFO - 06/25/20 14:02:53 - 0:00:00 - Number of parameters (encoder): 4231424
INFO - 06/25/20 14:02:53 - 0:00:00 - Number of parameters (decoder): 5286235
INFO - 06/25/20 14:02:53 - 0:00:00 - Found 177 parameters in model.
INFO - 06/25/20 14:02:53 - 0:00:00 - Optimizers: model
INFO - 06/25/20 14:02:53 - 0:00:00 - Data will be stored in prefix in: C:\Users\Chan\PycharmProjects\PDE\venv\dumped\prim_bwd_data\9epgwherdq\data.prefix ...
INFO - 06/25/20 14:02:53 - 0:00:00 - Data will be stored in infix in: C:\Users\Chan\PycharmProjects\PDE\venv\dumped\prim_bwd_data\9epgwherdq\data.infix ...
INFO - 06/25/20 14:02:53 - 0:00:00 - Creating train iterator for prim_bwd ...
Traceback (most recent call last):
File "main.py", line 225, in <module>
main(params)
File "main.py", line 162, in main
trainer = Trainer(modules, env, params)
File "C:\Users\Chan\PycharmProjects\PDE\venv\src\trainer.py", line 140, in __init__
self.dataloader = {
File "C:\Users\Chan\PycharmProjects\PDE\venv\src\trainer.py", line 141, in <dictcomp>
task: iter(self.env.create_train_iterator(task, params, self.data_path))
File "D:\Anaconda\envs\pde\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
return _MultiProcessingDataLoaderIter(self)
File "D:\Anaconda\envs\pde\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
w.start()
File "D:\Anaconda\envs\pde\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "D:\Anaconda\envs\pde\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\Anaconda\envs\pde\lib\multiprocessing\context.py", line 326, in _Popen
return Popen(process_obj)
File "D:\Anaconda\envs\pde\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
reduction.dump(process_obj, to_child)
File "D:\Anaconda\envs\pde\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle f: attribute lookup f on __main__ failed
(pde) C:\Users\Chan\PycharmProjects\PDE\venv>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "D:\Anaconda\envs\pde\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "D:\Anaconda\envs\pde\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
train.log
To me, it looks like it's trying to train at the same time or something, which is why it can't find an input (the pickle file doesn't exist yet). In the dump folder there are the data.infix and data.prefix files, but they're empty.
Have I entered the parameters wrong, or am I missing some kind of step? Any help would be much appreciated, as I'm relatively new to coding. Thanks so much in advance!
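For context on the traceback above: on Windows, multiprocessing uses the "spawn" start method, so everything handed to a DataLoader worker (including any callables the dataset holds) must be picklable by name at module level; a function defined inside another function fails that lookup. A minimal sketch of the same failure outside the project code (the names here are only illustrative):

```python
import pickle

def make_dataset_fn():
    # Defined at function scope: pickle can't resolve its qualified
    # name from the module, which is exactly what spawn-based worker
    # startup requires before it can send the object to a child process.
    def f(x):
        return x + 1
    return f

f = make_dataset_fn()

try:
    pickle.dumps(f)
except (pickle.PicklingError, AttributeError) as exc:
    print("cannot pickle:", exc)
```

With the project's DataLoader, the usual workarounds are passing --num_workers 0 (no worker processes, so nothing needs pickling) or moving the offending function to module top level; which one applies here depends on where f is defined in the repo.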