-
Whenever I try to load 4-bit models I receive this message. I'm using the latest version of the code and can load normal models just fine. I'm using a 6600 XT.
```
DEVICE ID | LAYERS | DEVICE NAM…
```
-
On Ubuntu 16.04 with CUDA 9.0 there is a runtime error; I don't know how to fix it:
Traceback (most recent call last):
File "tools/train_net.py", line 173, in
main()
File "tools/train_n…
-
@HarrisDePerceptron
I ran the following code on Transformers 4.1.1 and got the error "ValueError: None values not supported.".
How should I solve it?
https://github.com/snapthat/TF-T5-text-to-t…
-
# High level description
There are currently some low-fidelity drag models. In the past few years, drag models have improved greatly. This ticket should add support for one of the high fidelity drag …
-
At line 265 of the code, after multi-GPU data synchronization, the way cross_targets is computed is wrong: it should take the current local rank into account.
https://github.com/FlagOpen/FlagEmbedding/blob/97f57a1b92dc68d56731a1e38a2d3aad4cd67e20/FlagEmbedding/BGE_M3/modeling.py#L265
The original is: cross_tar…
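A hypothetical sketch of the target-offset idea described above, not the repository's actual code (`batch_size`, `group_size`, and `local_rank` are illustrative names; modeling.py may compute this differently):

```python
# Hypothetical sketch: after per-GPU batches are gathered across ranks,
# in-batch cross-entropy targets must be offset by this rank's position
# in the gathered batch instead of always starting at 0.

def cross_targets_with_rank(batch_size: int, group_size: int, local_rank: int) -> list[int]:
    """Target indices into the gathered score matrix for this rank.

    Without the `offset` term every rank would label rows
    0..batch_size-1 (rank 0's examples) as the positives; each rank
    must instead point at its own slice of the gathered batch.
    """
    offset = local_rank * batch_size * group_size
    return [offset + i * group_size for i in range(batch_size)]

# With 2 queries per rank and 4 passages per query, rank 0 targets
# rows 0 and 4 of the gathered scores, while rank 1 targets rows 8 and 12.
```
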
-
I've recently moved to RunPod; obviously I'm a dummy, and well… I'm getting these errors:
This one appears right after I run the "Start Stable Diffusion" cell:
Warning: caught exception 'Unexpected…
-
Hi, I'm trying to install PaddleDetection in the Paddle Serving container. The steps are:
docker pull paddlepaddle/serving:0.7.0-devel
docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7…
-
My fine-tuning command is based on the example provided by this repository:
https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/unified_finetune
Fine-tuning command:
```
export CUDA_VISIBLE_DEVICES=0,1
torchrun --nproc_per_node 2 \
-m FlagEmbedding.BGE_M3…
```
-
When loading this model:
MODEL_ID = "TheBloke/WizardLM-7B-uncensored-GPTQ"
MODEL_BASENAME = "WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors"
I'm facing this issue; before its …
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When loading the model with quantization, I get the message:
`Failed to load cpm_kernels:[WinError 267] The directory name is invalid.: 'C:\\Users\\Hengj\\AppDat…