PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
Apache License 2.0

AssertionError: Torch not compiled with CUDA enabled #32

Open Anarjoy opened 1 year ago

Anarjoy commented 1 year ago

As per your request, here is the error I get after installing localGPT:

```
PS C:\localGPT> python ingest.py
C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy\_distributor_init.py:30: UserWarning: loaded more than 1 DLL from .libs:
C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy.libs\libopenblas.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.gfortran-win_amd64.dll
C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy.libs\libopenblas64__v0.3.21-gcc_10_3_0.dll
  warnings.warn("loaded more than 1 DLL from .libs:"
Loading documents from C:\localGPT/SOURCE_DOCUMENTS
Loaded 2 documents from C:\localGPT/SOURCE_DOCUMENTS
Split into 1536 chunks of text
load INSTRUCTOR_Transformer
max_seq_length  512
Using embedded DuckDB with persistence: data will be stored in: C:\localGPT
Traceback (most recent call last):
  File "C:\localGPT\ingest.py", line 52, in <module>
    main()
  File "C:\localGPT\ingest.py", line 46, in main
    db = Chroma.from_documents(texts, embeddings, persist_directory=PERSIST_DIRECTORY, c
  File "C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\vectorstores\chroma.py", line 422, in from_documents
    return cls.from_texts(
  File "C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\vectorstores\chroma.py", line 390, in from_texts
    chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
  File "C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\vectorstores\chroma.py", line 159, in add_texts
    embeddings = self._embedding_function.embed_documents(list(texts))
  File "C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\embeddings\huggingface.py", line 148, in embed_documents
    embeddings = self.client.encode(instruction_pairs)
  File "C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\InstructorEmbedding\instructor.py", line 521, in encode
    self.to(device)
  File "C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else No
  File "C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```
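The failing call is easy to spot: the traceback context shows `ingest.py` passing `model_kwargs={"device": "cuda"}` to the instructor embeddings, while the installed PyTorch wheel is a CPU-only build, so moving the model to the GPU fails inside `torch.cuda._lazy_init`. You can confirm which build you have with:

```
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

If that prints `False`, a minimal workaround is to stop hard-coding `"cuda"` and fall back to CPU. This is a sketch of the idea, not the project's official fix; the model name below is an assumption based on localGPT's defaults, so keep whatever `ingest.py` actually uses:

```python
import torch
from langchain.embeddings import HuggingFaceInstructEmbeddings

# Use CUDA only when the installed torch build actually supports it;
# otherwise run the embedder on CPU (slower, but ingest.py completes).
device = "cuda" if torch.cuda.is_available() else "cpu"

embeddings = HuggingFaceInstructEmbeddings(
    model_name="hkunlp/instructor-xl",  # assumed default; match your ingest.py
    model_kwargs={"device": device},
)
```

With this change ingestion runs on CPU; installing a CUDA-enabled torch build restores GPU speed.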

psegovias commented 1 year ago

I had the same problem; I fixed it with https://github.com/PromtEngineer/localGPT/issues/10#issuecomment-1567481140
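For anyone who lands here without following the link: the error means the installed `torch` wheel was built without CUDA support. A common resolution (which may or may not be exactly what the linked comment describes) is to reinstall PyTorch from the CUDA wheel index, matching the `cuXXX` tag to your installed CUDA version, for example:

```
pip uninstall torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

If the machine has no NVIDIA GPU at all, reinstalling will not help; use a CPU fallback like the one sketched above instead.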

pauldeden commented 1 year ago

> I had the same problem; I fixed it with #10 (comment)

That fixed it for me as well. Thank you!