randaller / llama-chat
Chat with Meta's LLaMA models at home made easy
GNU General Public License v3.0 · 832 stars · 118 forks
Issues
#36  Can anyone suggest the best prompt for the codellama13b model to convert SQL queries to PostgreSQL?  (PrasannaVnewtglobal, opened 8 months ago, 0 comments)
#35  Train model using GPU  (thefaizan, opened 10 months ago, 0 comments)
#34  [Feature Request] Support InternLM  (vansinhu, opened 10 months ago, 0 comments)
#33  GPU vs CPU: which one is best?  (masterchop, opened 11 months ago, 2 comments)
#32  Fine-tuning the last layer of the model  (masteryodaa, opened 12 months ago, 0 comments)
#31  CUDA error on training  (masteryodaa, opened 12 months ago, 0 comments)
#30  Chats are repeating  (hellorp1990, opened 1 year ago, 0 comments)
#29  Generation stops at the ":" keyword  (shenghanwu, opened 1 year ago, 1 comment)
#28  Hi, I need help  (elpewpew, opened 1 year ago, 0 comments)
#27  Found a way to suppress the annoying progress bar  (snow-wind-001, opened 1 year ago, 1 comment)
#26  Running llama-chat successfully, but with repeating progress bars; is this normal?  (meeeo, opened 1 year ago, 0 comments)
#25  Question - perform tasks  (kmichal, closed 1 year ago, 1 comment)
#24  Feature Request - Ways to add context to conversation from beginning  (bradgillap, closed 1 year ago, 1 comment)
#23  Is example-chat.py ready to use GPU?  (kaykyr, opened 1 year ago, 0 comments)
#22  Exception RuntimeError: at::cuda::blas::gemm: not implemented  (KartoniarzEssa, closed 1 year ago, 1 comment)
#21  Incomplete answer with GPTSQLStructStoreIndex compared to ChainLang  (k4ycer, closed 1 year ago, 2 comments)
#20  How to turn off model training at runtime?  (atul47B, opened 1 year ago, 0 comments)
#19  Do you have any plans to support GPTQ 4-bit models?  (forestsource, closed 6 days ago, 2 comments)
#18  How to generate Bible data for LLaMA?  (paulocoutinhox, opened 1 year ago, 4 comments)
#17  "model parallel group is not initialized" when loading model  (yaoing, closed 1 year ago, 2 comments)
#16  Goes nowhere  (chrisbward, closed 1 year ago, 3 comments)
#15  How should I use multiple GPUs?  (Chting, opened 1 year ago, 8 comments)
#14  test  (tkone2018, closed 1 year ago, 1 comment)
#13  Create LICENSE.md  (mucahitbz, closed 1 year ago, 0 comments)
#12  Clarify requirements  (vid, opened 1 year ago, 8 comments)
#11  How can I use CUDA tensors?  (cha0s-repo, closed 1 year ago, 1 comment)
#10  OOM with 64 GB RAM and the 13B model  (dunkean, closed 1 year ago, 2 comments)
#9   Error on run: size mismatch for ...  (MartinKlefas, closed 1 year ago, 4 comments)
#8   merge-weights.py fails merging the 13B model  (teknoraver, closed 1 year ago, 3 comments)
#7   Share your best prompts and generations (and model name) here  (randaller, opened 1 year ago, 22 comments)
#6   It's too slow; how to run 30B on 4 GPUs interactively?  (zhongtao93, closed 1 year ago, 3 comments)
#5   With 125 GB of memory, running merge-weights.py on the 30B model will OOM  (Chting, opened 1 year ago, 4 comments)
#4   Why does this happen?  (breadbrowser, closed 1 year ago, 7 comments)
#2   Hello, how to make the output trimmed?  (lucasjinreal, closed 1 year ago, 3 comments)
#1   I got no printout of the AI response  (lucasjinreal, closed 1 year ago, 4 comments)