jankais3r / LLaMA_MPS
Run LLaMA (and Stanford-Alpaca) inference on Apple Silicon GPUs.
GNU General Public License v3.0 · 583 stars · 47 forks
Issues
| # | Title | Author | State | Opened | Comments |
|---|---|---|---|---|---|
| #20 | op Height/Width dimensions must be less than 16384 | itrcz | open | 1 year ago | 0 |
| #19 | What changes were made for MPS support? | vgoklani | open | 1 year ago | 0 |
| #18 | powermetrics | vgoklani | open | 1 year ago | 0 |
| #16 | RuntimeError: Cannot set version_counter for inference tensor | meyert11 | open | 1 year ago | 0 |
| #15 | Why fp16 MPS performance is worse than CPU? | FdyCN | open | 1 year ago | 2 |
| #14 | Corrupted output due to PyTorch MPS issue affecting torch.argmax() and torch.multinomial() | TheBloke | closed | 1 year ago | 3 |
| #13 | Support Apple Neural Engine (ANE) Transformers | LeiHao0 | open | 1 year ago | 1 |
| #12 | Add Alpaca 30B | LeiHao0 | open | 1 year ago | 0 |
| #11 | Huggingface weights repo doesn't have params.json | Maxhirez | closed | 1 year ago | 1 |
| #10 | Trial runs at Llama 7B (success) and 65B (fail) | kechan | open | 1 year ago | 2 |
| #9 | The magnet link of the model resource seem to be unusable | ShayTSC | closed | 1 year ago | 3 |
| #8 | Fix command | jooray | closed | 1 year ago | 1 |
| #7 | Fine tuning LLaMA on Apple Silicon GPUs | Gincioks | closed | 1 year ago | 2 |
| #6 | MPS device support | MZeydabadi | closed | 1 year ago | 1 |
| #5 | Optimize attention | Birch-san | closed | 1 year ago | 1 |
| #4 | Garbage output on 30B model? | drewcrawford | open | 1 year ago | 10 |
| #3 | AssertionError: ./models/tokenizer.model | MZeydabadi | closed | 1 year ago | 1 |
| #1 | RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 | fdstevex | closed | 1 year ago | 2 |
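Several of the issues above (#6, #14, #19) revolve around PyTorch's MPS backend, which this repo uses to run inference on Apple Silicon GPUs. A minimal sketch of the standard device-selection pattern, falling back to CPU when MPS is unavailable (this is a generic PyTorch idiom, not code taken from the repository):

```python
import torch

# Prefer the Metal Performance Shaders (MPS) backend on Apple Silicon;
# fall back to CPU on other platforms or older PyTorch builds.
if getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Any tensor created on this device runs on the selected backend.
x = torch.ones(2, 2, device=device)
print(x.device.type)
```

On a Mac with a supported PyTorch build this prints `mps`; elsewhere it prints `cpu`.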