huggingface / candle — Minimalist ML framework for Rust
Apache License 2.0 · 13.48k stars · 724 forks
Issues (newest first)
#2140 · "Illegal Instruction" on Older CPUS · jett06 · opened 44 minutes ago · 0 comments
#2139 · Candle won't use half-gemm from cublas when doing fp16 matmul · lucasavila00 · opened 5 hours ago · 6 comments
#2138 · Add a forward_via_f16 method to the qmatmul op. · LaurentMazare · closed 7 hours ago · 0 comments
#2137 · Add the cuda dequantize f16 kernels. · LaurentMazare · closed 7 hours ago · 0 comments
#2136 · Adding direct-F16 quantization · EricLBuehler · closed 5 hours ago · 9 comments
#2135 · Apply the cast before the scaling. · LaurentMazare · closed 19 hours ago · 1 comment
#2134 · Add a sort function, similar to the PyToch one. · LaurentMazare · closed 19 hours ago · 0 comments
#2133 · Make the dtype configurable for phi. · LaurentMazare · closed 1 day ago · 0 comments
#2132 · Add argsort. · LaurentMazare · closed 1 day ago · 0 comments
#2131 · DriverError(CUDA_ERROR_ILLEGAL_ADDRESS, "an illegal memory access was encountered") · VakeDomen · opened 1 day ago · 0 comments
#2130 · A keyboard malfunction created this. · danielclough · closed 1 day ago · 1 comment
#2129 · `Msg("unknown magic 46554747")` when trying to load tensor from GGUF · shinmao · closed 29 minutes ago · 2 comments
#2128 · Phi-3 implementation seems to be buggy on metal devices · jorgeantonio21 · opened 2 days ago · 5 comments
#2127 · Add Olmo models · Isotr0py · closed 2 days ago · 1 comment
#2126 · error when import onnx of yolo8 · xuexl · opened 2 days ago · 0 comments
#2125 · quantized_llama ModelWeights path used in quantized Phi3 · AntBlo · closed 2 days ago · 2 comments
#2124 · Bug Fix: When converting a tensor to a variable, clone if the tensor is already a variable. · jsdt · opened 4 days ago · 2 comments
#2123 · Support for Microsoft Phi-3 128k context length · niutech · opened 4 days ago · 1 comment
#2122 · Mention phi-v3 in the readmes. · LaurentMazare · closed 4 days ago · 0 comments
#2121 · chore: fix some typos in comments · hardlydearly · closed 19 hours ago · 1 comment
#2120 · Add the phi-3 model. · LaurentMazare · closed 4 days ago · 0 comments
#2119 · Request: Please add support in examples for the new MS/Phi-3 family · a-agmon · closed 4 days ago · 2 comments
#2118 · Add the phi-v3 quantized model. · LaurentMazare · closed 4 days ago · 0 comments
#2117 · Fix for rustfmt. · LaurentMazare · closed 5 days ago · 0 comments
#2116 · candle-onnx: add operators RandomUniform and Exp · B1rtek · closed 5 days ago · 1 comment
#2115 · No compiler check for operation on different tensor type. · npuichigo · opened 5 days ago · 1 comment
#2114 · Fix sigmoid gradient calculation and move sigmoid into a specialized op · MilkFather · opened 5 days ago · 4 comments
#2113 · Add StorageRef. · LaurentMazare · closed 5 days ago · 0 comments
#2112 · error while loading shared libraries: libnvrtc.so.12 · wangjiawen2013 · opened 5 days ago · 3 comments
#2111 · Batch llama prompt · tbogdala · closed 5 days ago · 1 comment
#2110 · M2m100 model · jason-shen · opened 6 days ago · 0 comments
#2109 · Equivalent of torch.nonzero ? · sbucaille · opened 6 days ago · 1 comment
#2108 · Processing text prompts in batches for LLMs · tbogdala · closed 5 days ago · 4 comments
#2107 · Use the faster rms-norm kernel for llama. · LaurentMazare · closed 6 days ago · 0 comments
#2106 · Better time measurement for the llama example. · LaurentMazare · closed 6 days ago · 0 comments
#2105 · Build Failure with candle-kernels: nvidia-smi Not Found in Docker Environment, Even Though It Is Available · Pox-here · closed 6 days ago · 4 comments
#2104 · Update tokenizers requirement from 0.15.0 to 0.19.1 · dependabot[bot] · closed 6 days ago · 0 comments
#2103 · Update zip requirement from 0.6.6 to 1.1.1 · dependabot[bot] · closed 6 days ago · 0 comments
#2102 · Request: PixelShuffle · oovm · opened 1 week ago · 1 comment
#2101 · fix(onnx): No need to install protoc anymore · oovm · opened 1 week ago · 0 comments
#2100 · Derive clone and debug traits for Moondream model · santiagomed · closed 1 week ago · 1 comment
#2099 · Updated quantized phi model · LaurentMazare · closed 1 week ago · 1 comment
#2098 · Small cleanups to the llama multi-process example. · LaurentMazare · closed 1 week ago · 0 comments
#2097 · Handle multiple dimensions in metal QMM + two fixes. · LaurentMazare · closed 1 week ago · 0 comments
#2096 · Add missing onnx operations · gabotechs · closed 1 week ago · 1 comment
#2095 · Run metal and accelerate features on CI · tomsanbear · closed 1 week ago · 1 comment
#2094 · Use llama v3 by default + add to readme. · LaurentMazare · closed 1 week ago · 0 comments
#2093 · Only download the weights in the main process (and not in the child processes). · LaurentMazare · closed 1 week ago · 0 comments
#2092 · Multiprocess/multi-GPU support for llama 3. · LaurentMazare · closed 1 week ago · 0 comments
#2091 · Fix for gemma MQA. · LaurentMazare · closed 1 week ago · 0 comments