juncongmoo/pyllama
LLaMA: Open and Efficient Foundation Language Models
GNU General Public License v3.0 · 2.81k stars · 311 forks
Issues
#114 · Question regarding EnergonAI repo · philipp-fischer · closed 8 months ago · 0 comments
#113 · torch.distributed.elastic.multiprocessing.errors.ChildFailedError · sido420 · opened 10 months ago · 1 comment
#112 · Quick Question · ArgusK17 · closed 10 months ago · 0 comments
#111 · How to run an interactive mode in Jupyter? · myrainbowandsky · opened 1 year ago · 0 comments
#109 · 12GB card · arthurwolf · opened 1 year ago · 2 comments
#108 · no module named llama · Cooper-Ji · opened 1 year ago · 1 comment
#107 · Added transformers to requirements.txt · HireTheHero · opened 1 year ago · 0 comments
#106 · NVMLError_NoPermission: Insufficient Permissions · sz2three · opened 1 year ago · 0 comments
#105 · evaluating has an extremely large value when quantize to 4bit. · JiachuanDENG · opened 1 year ago · 1 comment
#104 · Download 7B model seems stuck · guanlinz · opened 1 year ago · 9 comments
#103 · Download watchdog kicking in? (M1 mac) · kryt · opened 1 year ago · 0 comments
#102 · RuntimeError: Error(s) in loading state_dict for LLaMAForCausalLM: Unexpected key(s) in state_dict: · ZealHua · opened 1 year ago · 0 comments
#101 · RecursionError: maximum recursion depth exceeded while calling a Python object · Vaibhav11002 · opened 1 year ago · 0 comments
#100 · shape mismatch error · Celppu · opened 1 year ago · 0 comments
#99 · an operation was attempted on something that is not a socket · GameDevKitY · opened 1 year ago · 0 comments
#98 · parameter inncorrect when I run make command · GameDevKitY · opened 1 year ago · 0 comments
#97 · gptq github · austinmw · opened 1 year ago · 4 comments
#96 · Try Modular - Mojo · eznix86 · opened 1 year ago · 0 comments
#95 · Randomly get shape mismatch error · vedantroy · opened 1 year ago · 0 comments
#94 · Does this include the GPTQ quantization tricks? · vedantroy · opened 1 year ago · 0 comments
#93 · Why are params.json empty? · ItsCRC · closed 1 year ago · 5 comments
#92 · Quantize issue · ZenekZombie · opened 1 year ago · 0 comments
#91 · Is that possible to quantize a locally converted model, instead of downloading from hugging face? · chigkim · closed 1 year ago · 1 comment
#90 · RecursionError running llama.download · anyangpeng · opened 1 year ago · 4 comments
#89 · Adjust watchdog time interval from 30 seconds to 2 minutes. · Jack-Moo · closed 1 year ago · 1 comment
#88 · downloading file to pyllama_data/30B/consolidated.00.pth ...please wait for a few minutes ... · Nolyzlel · closed 1 year ago · 2 comments
#87 · aria2c 'magnet:?xt=urn:btih:ZXXDAUWYLRUXXBHUYEMS6Q5CE5WA3LVA&dn=LLaMA' not working · Nolyzlel · opened 1 year ago · 2 comments
#86 · Run 'inference.py' and 'model parallel group is not initialized' · ildartregulov · opened 1 year ago · 7 comments
#85 · Apply Delta failed · majidbhatti · opened 1 year ago · 1 comment
#84 · How to run 13B model in a single GPU just by inference.by? · statyui · opened 1 year ago · 0 comments
#83 · about rotary embedding in llama · irasin · closed 12 months ago · 2 comments
#82 · Strange characters · webpolis · opened 1 year ago · 1 comment
#81 · Cannot run on Mac with Python 3.11.3 · kornhill · opened 1 year ago · 6 comments
#80 · Inference Error :UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe7 in position 18: invalid continuation byte" · MaiziXiao · opened 1 year ago · 0 comments
#79 · docs: reduce bit misspell README · guspan-tanadi · closed 1 year ago · 0 comments
#78 · Quantized version link suspect · thistleknot · opened 1 year ago · 1 comment
#77 · Is there a way to skip evaluating after quantizing because it takes forever? · chigkim · opened 1 year ago · 0 comments
#76 · Gave written examples to run 7B model on GPUs · george-adams1 · closed 1 year ago · 0 comments
#75 · Can't Load Quantized Model with GPTQ-for-LLaMa · chigkim · opened 1 year ago · 2 comments
#74 · a questuon about the single GPU Inference · TitleZ99 · opened 1 year ago · 1 comment
#73 · quantify llama 7B, the md5 value and the model size does not equals to the value in README · balcklive · opened 1 year ago · 6 comments
#72 · Readme Should Have Inference Command to use for Quantization in Text · chigkim · opened 1 year ago · 1 comment
#71 · rewrite download_community.sh · llimllib · closed 1 year ago · 3 comments
#70 · add a shebang to all shell files · llimllib · closed 1 year ago · 0 comments
#69 · Document if it works with CPU / Macos · ikamensh · opened 1 year ago · 0 comments
#67 · ModuleNotFoundError: No module named 'transformers' · tasteitslight · opened 1 year ago · 6 comments
#66 · Can't see progress bar · rahulvigneswaran · opened 1 year ago · 1 comment
#65 · Has black formatting been considered? · tanitna · opened 1 year ago · 0 comments
#64 · How to run the gradio with 30B model? and what devices are needed? please · TobiasWYH · opened 1 year ago · 0 comments
#63 · make download work behind proxy · wanweilove · closed 1 year ago · 0 comments