Closed AlexisDeschamps closed 7 months ago
I managed to get this far:
```
$ python LLM-as-a-Judge_Adaptation/Generate_Synthetic_Queries_and_Answers.py \
    --document_filepath example_files/document_filepath.tsv \
    --few_shot_prompt_filename example_files/few_shot_prompt_filename.tsv \
    --synthetic_queries_filename output/synthetic_queries_1.tsv \
    --documents_sampled 10000
------------------------------------------------------------
Document File: example_files/document_filepath.tsv
Synthetic File Path: output/synthetic_queries_1.tsv
number_of_negatives_added_ratio: 0.5
number_of_positives_added_ratio: 0.0
chosen_score_threshold: 0.01
number_of_contradictory_answers_added_ratio: 0.67
clean_documents: False
question_temperatures: [2.0, 1.5, 1.0, 0.5, 0.0]
percentiles: [0.05, 0.25, 0.5, 0.95]
lower_bound_for_negatives: 20
for_fever_dataset: False
for_wow_dataset: False
------------------------------------------------------------
Loading checkpoint shards: 100%|████████████████████████████████| 5/5 [03:26<00:00, 41.25s/it]
Traceback (most recent call last):
  File "/home/azureuser/ares/ARES/LLM-as-a-Judge_Adaptation/Generate_Synthetic_Queries_and_Answers.py", line 135, in <module>
    model.to(device)
  File "/home/azureuser/miniconda3/envs/llm_judge/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1896, in to
    return super().to(*args, **kwargs)
  File "/home/azureuser/miniconda3/envs/llm_judge/lib/python3.10/site-packages/torch/nn/modules/module.py", line 989, in to
    return self._apply(convert)
  File "/home/azureuser/miniconda3/envs/llm_judge/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/home/azureuser/miniconda3/envs/llm_judge/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/home/azureuser/miniconda3/envs/llm_judge/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  [Previous line repeated 4 more times]
  File "/home/azureuser/miniconda3/envs/llm_judge/lib/python3.10/site-packages/torch/nn/modules/module.py", line 664, in _apply
    param_applied = fn(param)
  File "/home/azureuser/miniconda3/envs/llm_judge/lib/python3.10/site-packages/torch/nn/modules/module.py", line 987, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 160.00 MiB (GPU 0; 15.77 GiB total capacity; 15.12 GiB already allocated; 95.44 MiB free; 15.12 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
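For context on why the load fails on a 15.77 GiB card: a back-of-envelope sketch of weight memory, assuming a model in the ~7B-parameter range (the 5 checkpoint shards suggest a large model; the actual parameter count is not stated in the log, so 7e9 here is a hypothetical figure):

```python
# Rough estimate of GPU memory needed just to hold model weights.
# N_PARAMS = 7e9 is an assumed model size, not taken from the log.

def weight_memory_gib(n_params: float, bytes_per_param: int) -> float:
    """Memory needed to hold the weights alone, in GiB."""
    return n_params * bytes_per_param / 2**30

N_PARAMS = 7e9  # hypothetical ~7B-parameter model

fp32 = weight_memory_gib(N_PARAMS, 4)  # float32: 4 bytes per parameter
fp16 = weight_memory_gib(N_PARAMS, 2)  # float16: 2 bytes per parameter

print(f"fp32 weights: {fp32:.1f} GiB")  # ~26 GiB, far over 15.77 GiB
print(f"fp16 weights: {fp16:.1f} GiB")  # ~13 GiB, a tight but plausible fit
```

Under that assumption, full-precision weights alone exceed the card, which matches the log: 15.12 GiB was already allocated before a 160 MiB request failed. Activations, optimizer state, and CUDA context overhead only make the gap larger.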
I'll try again after increasing memory.
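Before resizing the VM, it may be worth trying the allocator setting the error message itself suggests. A minimal sketch (the value 128 is an arbitrary starting point, not a recommendation from the ARES docs, and this only helps with fragmentation, not a genuine capacity shortfall):

```shell
# Reduce CUDA allocator fragmentation, as hinted by the OOM message.
# max_split_size_mb:128 is an assumed starting value to experiment with.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

python LLM-as-a-Judge_Adaptation/Generate_Synthetic_Queries_and_Answers.py \
    --document_filepath example_files/document_filepath.tsv \
    --few_shot_prompt_filename example_files/few_shot_prompt_filename.tsv \
    --synthetic_queries_filename output/synthetic_queries_1.tsv \
    --documents_sampled 10000
```

Since 15.12 of 15.77 GiB was already allocated, fragmentation is probably not the root cause here; a larger GPU or loading the model in half precision is the more likely fix.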
Changes

- dependency: `requests`
- dependency: `sklearn` replaced with `scikit-learn`
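The dependency swap above can be applied with pip. The `sklearn` name on PyPI is a deprecated alias, so projects should depend on `scikit-learn` directly:

```shell
# Replace the deprecated 'sklearn' PyPI alias with the real distribution.
pip uninstall -y sklearn
pip install scikit-learn
```

Note that only the distribution name changes; code still uses `import sklearn` as before.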
Testing
I managed to get as far as loading the checkpoint shards before hitting the CUDA out-of-memory error shown in the log above. I'll try again after increasing memory.