Phrixus2023 opened 3 weeks ago
With multiple graphics cards installed, it is unclear which card the model will be loaded onto.
Same issue here, can anybody solve this?
I think I solved it: multiple graphics cards were active, so I disabled the extra cards to keep just one and then re-ran the program.
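If disabling the extra cards in Device Manager is not an option, a minimal sketch of an alternative (assuming an NVIDIA/CUDA setup) is to pin the process to a single GPU with the standard `CUDA_VISIBLE_DEVICES` environment variable, set before any CUDA-using library is imported. The index `"0"` here is just an example; pick the card you actually want.

```python
import os

# Expose only GPU 0 to this process. Must be set before torch (or any
# other CUDA library) initializes, or it has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Anything imported after this point sees exactly one GPU, so model
# loading cannot land on the wrong card.
```

The same thing can be done from the shell before launching the app, e.g. `set CUDA_VISIBLE_DEVICES=0` on Windows.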
```
D:\BaiduNetdiskDownload\Omost20240604\venv\lib\site-packages\transformers\utils\hub.py:124: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
D:\BaiduNetdiskDownload\Omost20240604\lib_omost\pipeline.py:64: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  alphas_cumprod = torch.tensor(np.cumprod(alphas, axis=0), dtype=torch.float32)
Unload to CPU: AutoencoderKL
Unload to CPU: CLIPTextModel
Unload to CPU: UNet2DConditionModel
Unload to CPU: CLIPTextModel
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
Downloading shards: 100%|██████████| 2/2 [00:00<00:00, 1830.37it/s]
Loading checkpoint shards: 100%|██████████| 2/2 [00:08<00:00, 4.16s/it]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
You shouldn't move a model that is dispatched using accelerate hooks.
Unload to CPU: LlamaForCausalLM
Running on local URL: http://0.0.0.0:7860
To create a public link, set `share=True` in `launch()`.
```