-
Exception in thread Thread-3:
Traceback (most recent call last):
File "F:\vsr_windows_gpu_v1.1.0\vsr\Python\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "F:\vsr_windows…
-
1. While running `afdust_adj` from the SMOKE training package for the 12LISTOS domain, which was windowed from the 12US1 domain, the `mult.x` program from the SMOKE package provided along with Emission Mod…
-
Name: diffusers
Version: 0.30.2
Name: transformers
Version: 4.44.2
Loading pipeline components...: 20% | 1/5 [00:00
 10 pipe = StableDiffusionInpaintPipeline.from_pretrained(
 11 "b…
-
File "/workdir/user_repository/inference/local_deploy_demo.py", line 41, in load_model
self.model = AutoModelForCausalLM.from_pretrained(path, device_map="auto", trust_remote_code=True,
File…
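A minimal sketch of the kind of loader `local_deploy_demo.py` appears to implement, assuming standard `transformers` usage; the trailing keyword arguments are cut off in the traceback, so `torch_dtype="auto"` below is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_model(path: str):
    # Mirrors the truncated call in the traceback; torch_dtype="auto" is an assumption.
    tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        path,
        device_map="auto",       # let accelerate place layers across available devices
        trust_remote_code=True,  # required when the checkpoint ships custom model code
        torch_dtype="auto",
    )
    return tokenizer, model
```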
-
Hello!
The `main` (`a441a3f`) branch of the AQLM repository does not support `flash attention 2`. The error occurs because `QuantizedWeight` does not have a `weight` attribute ([closed issue #31](https…
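As a hedged workaround when the model is loaded through `transformers`, requesting any attention backend other than Flash Attention 2 avoids the code path that reads the missing `weight` attribute; the checkpoint id below is a placeholder:

```python
from transformers import AutoModelForCausalLM

# Placeholder AQLM checkpoint id; substitute the model actually being loaded.
model = AutoModelForCausalLM.from_pretrained(
    "ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf",
    device_map="auto",
    attn_implementation="sdpa",  # anything except "flash_attention_2"
)
```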
-
### System Info
TensorRT Model Optimizer: 0.15.1
TensorRT-LLM version: 0.14.0.dev2024100100
Python version:
OS: Ubuntu 22.04
CPU Arch: x86_64
Driver version: 555.42.02
CUDA Version: 12.5
### Who can…
-
### System Info
While building TensorRT engines for the Mixtral model Mixtral-8x7B-Instruct-v0.1, I ran into this error.
Loading checkpoint shards: 21%|██████████████████████████████████▌ …
-
Are there any available tools that can convert the original .pth model files downloaded from Meta into a format usable by stack, or convert them to .safetensors format? I tried the tool from https://g…
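If no ready-made tool fits, a rough sketch of the manual route, assuming `torch` and `safetensors` are installed (paths and shard handling are placeholders):

```python
import torch
from safetensors.torch import save_file

# Rough single-shard sketch; Meta releases larger models as several
# consolidated.*.pth shards, which need merging or per-shard conversion.
state_dict = torch.load("consolidated.00.pth", map_location="cpu")

# safetensors rejects non-contiguous tensors and shared storage between entries.
tensors = {k: v.contiguous().clone() for k, v in state_dict.items() if isinstance(v, torch.Tensor)}

save_file(tensors, "model.safetensors")
```

For the Hugging Face layout specifically, `transformers` also ships a conversion script (`convert_llama_weights_to_hf.py`) that handles the sharded Meta checkpoints as part of the conversion.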
-
After preprocessing the PG19 data and starting training, I keep running into problems.
[WARNING|logging.py:329] 2024-05-14 16:24:22,784 >> LlamaModel is using LlamaSdpaAttention, but `torch.nn.functional.scaled_dot_product_attention` does not support `output_…
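In recent `transformers` versions this warning means SDPA cannot return attention weights when `output_attentions=True`, and it suggests loading the model with `attn_implementation="eager"`. A minimal sketch (the model id is a placeholder; the relevant part is the attention backend):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model id; the relevant part is the attention backend.
model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    attn_implementation="eager",  # eager attention can return per-head weights
)

inputs = tokenizer("PG19 is a long-document language modeling benchmark.", return_tensors="pt")
outputs = model(**inputs, output_attentions=True)
print(len(outputs.attentions))  # one attention tensor per layer
```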
-
/usr/local/lib/python3.10/dist-packages/gliner/modeling/base.py in extract_prompt_features_and_word_embeddings(config, token_embeds, input_ids, attention_mask, text_lengths, words_mask, **kwargs)
…
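For context, a minimal sketch of how a GLiNER model is typically driven (checkpoint id and labels are placeholders); the failing `extract_prompt_features_and_word_embeddings` sits on the forward path exercised by calls like this:

```python
from gliner import GLiNER

# Placeholder checkpoint and labels, only to illustrate the call path that
# runs through gliner/modeling/base.py.
model = GLiNER.from_pretrained("urchade/gliner_medium-v2.1")

text = "Barack Obama was born in Hawaii and later served as president."
labels = ["person", "location", "title"]

entities = model.predict_entities(text, labels, threshold=0.5)
for ent in entities:
    print(ent["text"], "->", ent["label"])
```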