-
Hello,
I cannot get the LLM to use my GPU instead of my CPU. I have tried multiple models, but none of them work.
How do I fix this?
MODEL_ID = "TheBloke/Llama-2-13B-chat-GPTQ"
MODEL_BASENAME = "gpt…
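In case it helps narrow things down, this is roughly the check I would expect to isolate the problem (a sketch assuming PyTorch and auto-gptq; the loader arguments here are assumptions, not my exact code):

```python
import torch
from auto_gptq import AutoGPTQForCausalLM

# First confirm that PyTorch can see a CUDA device at all.
print(torch.cuda.is_available(), torch.cuda.device_count())

# Then ask the loader explicitly for the GPU instead of letting it fall back
# to CPU. MODEL_ID / MODEL_BASENAME are the constants defined above.
model = AutoGPTQForCausalLM.from_quantized(
    MODEL_ID,
    model_basename=MODEL_BASENAME,
    device="cuda:0",        # explicit GPU placement
    use_safetensors=True,
)
```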
-
root@container-715b4abffa-ae32ab74:/data/shared/Qwen/Qwen# export CUDA_VISIBLE_DEVICES=0,1,2
root@container-715b4abffa-ae32ab74:/data/shared/Qwen/Qwen#
root@container-715b4abffa-ae32ab74:/data/shar…
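A quick way to confirm what the process actually sees after that export (a sketch, assuming PyTorch is installed inside the container):

```python
import os
import torch

# CUDA_VISIBLE_DEVICES restricts which physical GPUs this process can use;
# after `export CUDA_VISIBLE_DEVICES=0,1,2` torch should report three devices.
print(os.environ.get("CUDA_VISIBLE_DEVICES"))
print(torch.cuda.is_available(), torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
```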
-
I set up the whole environment following the tutorial, but it throws an error as soon as I run it.
The model I converted offline afterwards fails with the same error.
(lmdeploy) root@intern-studio:~# lmdeploy chat turbomind /share/temp/model_repos/internlm-chat-7b/ --model-name internlm-chat-7b
model_source: hf_model
…
-
As the title states, do we need to set the model loader to ExLlamav2_HF or ExLlamav2?
The [documentation](https://github.com/oobabooga/text-generation-webui/wiki/04-%E2%80%90-Model-Tab) says:
`…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When fine-tuning the int4 ChatGLM model on a single machine, an error occurs while loading the model, with the message: Only Tensors of floating point and complex dtype can requi…
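That message is PyTorch refusing to set `requires_grad` on a non-floating-point tensor, so the int4-quantized weights have to stay frozen and only floating-point parameters can be trained. A rough sketch of that constraint (the function and the `prefix_encoder` name are illustrative, not taken from the actual fine-tuning script):

```python
import torch
from torch import nn

def freeze_non_float_params(model: nn.Module, trainable_substr: str = "prefix_encoder") -> None:
    """Keep int-quantized weights frozen; only float parameters whose name
    contains `trainable_substr` (e.g. a P-Tuning v2 prefix encoder) get
    gradients. Illustrative only; adapt to the real training script."""
    for name, param in model.named_parameters():
        if not torch.is_floating_point(param):
            # requires_grad is only legal on float/complex tensors
            param.requires_grad = False
        else:
            param.requires_grad = trainable_substr in name
```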
-
Is this a user error or a programming error?
FreeBSD 12.0-RELEASE FreeBSD 12.0-RELEASE r341666 GENERIC amd64
8 GB memory, 2 TB disk.
Salmon is installed as the Linux binary.
The command I issued was
…
-
Hi,
I'm analyzing single-cell data (from Split-seq) and was able to run kallisto pseudo on a batch file with all the FASTQ files from each of my cells.
I was hoping to convert the output of kallisto…
-
I have a pair of FastQ files from paired-end sequencing that had some adapter contamination, so I trimmed them. But since RUM doesn't like FastQ files where the reads have different lengths (a huge issue), I padded all the reads…
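For reference, the padding was roughly along these lines (a sketch, not my exact script; the target length and the '#' quality character are assumptions):

```python
# Pad every read in a FASTQ file to a fixed length with N bases and matching
# filler quality characters, so both mates end up the same length.
TARGET_LEN = 100  # assumed read length, not a RUM requirement

def pad_fastq(in_path: str, out_path: str, target_len: int = TARGET_LEN) -> None:
    with open(in_path) as fin, open(out_path, "w") as fout:
        while True:
            header = fin.readline()
            if not header:
                break
            seq = fin.readline().rstrip("\n")
            plus = fin.readline()
            qual = fin.readline().rstrip("\n")
            pad = max(0, target_len - len(seq))
            fout.write(header)
            fout.write(seq + "N" * pad + "\n")
            fout.write(plus)
            fout.write(qual + "#" * pad + "\n")
```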
-
Hi there,
I'm not sure if this functionality already exists, but I'd like to propose a function `aggregateSamples()`, a companion to `aggregateFeatures()`, which would do essentially the same t…
-
With valiDrops 0.1.0 in R 4.3.2 on x86_64-apple-darwin20, I get this error with one of 12 samples from my snRNA-seq experiment. The sample's data is of poor quality (~150 median features/cell, as aga…