-
I tried to instantiate a BERT model with the following code:
```rust
use candle_core::DType;
use candle_lora::LoraConfig;
use candle_lora_transformers::bert::{BertModel, Config};
use candle_nn::{…
```
-
The GUI is not working.
```
(gpt1) ashu@MSI:/mnt/c/Users/genco/Documents/gpt$ make run
poetry run python -m private_gpt
11:47:15.734 [INFO ] private_gpt.settings.settings_loader - Starting appli…
```
-
OS: Windows
I think my environment is ready.
I use Jupyter Notebook locally.
When I run these:
```python
from unsloth import FastLanguageModel
import torch
max_seq_length = 8192 # Choose any! We auto sup…
```
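For comparison, here is a minimal sketch of the usual `FastLanguageModel` loading pattern; the model id and the `load_in_4bit` setting below are illustrative assumptions, not taken from the truncated snippet above:
```python
# Illustrative loading sketch for unsloth's FastLanguageModel.
# The model id and load_in_4bit flag are assumptions for this example only.
from unsloth import FastLanguageModel

max_seq_length = 8192
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # illustrative model id
    max_seq_length=max_seq_length,
    dtype=None,          # autodetect (bf16 on newer GPUs, fp16 otherwise)
    load_in_4bit=True,   # assumption: 4-bit loading, as in the unsloth examples
)
```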
-
When using the source tarball and building without pandoc, the build fails after a `make clean` followed by re-running `configure`.
This is what happens when building multiple times in a Yocto workspace, …
-
Only q4_0_4_4 GGUF files run on my Poco X6 Pro phone. CPU-Z says it has Cortex-A510 and Cortex-A715 cores, which both support i8mm and SVE. When I try to run a GGUF that needs those features, this happens:
~/…
-
Some models (e.g. InternVideo2 multi-modality) depend on the flash-attention extensions. We would like to add additional outputs for:
- fused_dense_lib: csrc/fused_dense_lib
- layer_norm: csrc/layer_norm
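For reference, a minimal sketch of how these two extensions could be exposed as additional build outputs, assuming the standard `torch.utils.cpp_extension` build path; the source file names, module names, and package name below are illustrative, not taken from the repository:
```python
# Hypothetical build sketch: exposing fused_dense_lib and layer_norm as
# optional CUDA extension outputs. Source paths are illustrative only.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

ext_modules = [
    CUDAExtension(
        name="fused_dense_lib",
        sources=[
            "csrc/fused_dense_lib/fused_dense.cpp",      # illustrative path
            "csrc/fused_dense_lib/fused_dense_cuda.cu",  # illustrative path
        ],
    ),
    CUDAExtension(
        name="dropout_layer_norm",                       # hypothetical module name
        sources=["csrc/layer_norm/ln_api.cpp"],          # illustrative path
    ),
]

setup(
    name="flash-attn-extras",                            # hypothetical package name
    ext_modules=ext_modules,
    cmdclass={"build_ext": BuildExtension},
)
```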
-
**Kibana version:** 8.15.2
**Elasticsearch version:** 8.15.2
**Server OS version:** N/A
**Browser version:** N/A
**Browser OS version:** N/A
**Original install method (e.g. download page, yum, fro…
-
Running llama-3.1-70b with exo + mlx across multiple Macs, I found an error during quantization.
Where the error occurs:
the quantized.py file
Code:
```python
def __call__(self, x):
    s = x.shape
    x = x.flatten()
    out = mx.dequantize(
        self["weight"][x],
        scales=self["scales"][x],
        biases=self["…
```
-
Hi,
I'm trying to use the MetaspotsByGroups function with multiple variables from meta.data, including "case," "batch," "region," and "distance." However, when I include the "distance" variable, I …
-
### Describe the bug
For whatever reason, this method runs SO SLOWLY in WSL2. I followed all the instructions to run a Codestral model, but it just runs at the speed of smell. It seems to be using my C…