While attempting to execute the code, I encountered the following error message: "Process finished with exit code 137 (interrupted by signal 9: SIGKILL)". Prior to this error, the following log was observed:
"Starting step 0
Dataset is empty; generating initial samples
Processing samples: 0%| | 0/1 [00:00<?, ?it/s]
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Processing samples: 100%|██████████| 1/1 [00:13<00:00, 13.33s/it]
Special tokens have been added to the vocabulary; ensure the associated word embeddings are fine-tuned or trained."
The failure occurs at line 53 of the estimator_llm file:

self.chain = ChainWrapper(self.opt.llm, self.opt.prompt, chain_metadata['json_schema'], chain_metadata['parser_func'])

This code runs on an Ubuntu 20.04 system using HuggingFacePipeline, and I have tried several Large Language Models with the same result. From researching the error message online, exit code 137 (SIGKILL) appears to indicate that the process was killed for exhausting available memory, most likely by the Linux OOM killer. Could you please provide guidance on how to address this problem?

Thank you.
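To help confirm that memory exhaustion is the cause, here is a minimal standard-library sketch for logging the process's peak resident memory before and after the failing call; the helper name `peak_rss_mib` is my own, not part of any library:

```python
import resource
import sys

def peak_rss_mib() -> float:
    """Return this process's peak resident set size in MiB.

    ru_maxrss is reported in kilobytes on Linux and in bytes on macOS.
    """
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        peak //= 1024  # bytes -> KiB on macOS
    return peak / 1024  # KiB -> MiB

# Call this immediately before and after the ChainWrapper construction;
# a large jump (or a value near the machine's total RAM) points to the
# OOM killer as the source of the SIGKILL.
print(f"Peak RSS so far: {peak_rss_mib():.1f} MiB")
```

On Linux, a SIGKILL from the OOM killer also leaves an entry in the kernel log (`dmesg`), which can corroborate what this measurement shows.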