MiuLab / Taiwan-LLM
Traditional Mandarin LLMs for Taiwan
https://twllm.com
Apache License 2.0 · 938 stars · 81 forks
Issues (newest first)
#59 · Support for AWQ quantization in TGI · nigue3025 · opened 1 month ago · 1 comment
#58 · Which training codebase was used? · wennycooper · opened 2 months ago · 6 comments
#57 · Question about dataset requirements for fine-tuning · davidho27941 · opened 3 months ago · 1 comment
#56 · Support Ollama to run Taiwan-LLM · WangRongsheng · closed 3 months ago · 1 comment
#55 · The process of RLHF and reward modeling · joshhu · opened 3 months ago · 1 comment
#54 · Regarding using HuggingFaceEmbeddings and loading it onto the GPU · WenTingTseng · closed 1 month ago · 1 comment
#53 · Bump transformers from 4.31.0 to 4.36.0 in /demo · dependabot[bot] · closed 5 months ago · 0 comments
#52 · Will an embedding model based on this model be open-sourced? · D3annyC · closed 5 months ago · 2 comments
#51 · About using this model in LM Studio… · zhihmeng · opened 5 months ago · 1 comment
#50 · Will there be a 70B model? · hiwudery · closed 7 months ago · 0 comments
#49 · Update README.md · adamlin120 · closed 7 months ago · 0 comments
#48 · Update README.md · eltociear · closed 6 months ago · 0 comments
#47 · Problem of quantisation · yihong1120 · closed 6 months ago · 1 comment
#46 · add tc-eval results · adamlin120 · closed 7 months ago · 0 comments
#45 · About the base model of Taiwan-LLM-7B-v2.1-chat · larry0220 · closed 6 months ago · 2 comments
#44 · Bump langchain from 0.0.325 to 0.0.329 · dependabot[bot] · closed 6 months ago · 0 comments
#43 · Update v2 in README.md · adamlin120 · closed 7 months ago · 0 comments
#42 · Bump langchain from 0.0.312 to 0.0.325 · dependabot[bot] · closed 7 months ago · 0 comments
#41 · [Question] Text generation by transformers pipeline is not working properly · HCTsai · closed 6 months ago · 2 comments
#40 · Bump langchain from 0.0.260 to 0.0.312 · dependabot[bot] · closed 8 months ago · 0 comments
#39 · About building the code behind https://chat.twllm.com · wrsnice · closed 8 months ago · 2 comments
#38 · config for Taiwan-LLaMa-v1.0 demo on Hugging Face Spaces · Syax19 · closed 8 months ago · 3 comments
#37 · Pre-training link cannot be found · wastu01 · closed 8 months ago · 1 comment
#36 · Fix typo & broken links · penut85420 · closed 8 months ago · 0 comments
#35 · About the spec of the instruction-tuning dataset · HuangChiEn · closed 8 months ago · 2 comments
#34 · Prompt template with contexts · wennycooper · closed 9 months ago · 1 comment
#33 · Can I disable Flash Attention 2? · bensonbs · closed 8 months ago · 2 comments
#32 · Inference with Taiwan-LLaMa-13b-1.0.Q8_0.gguf produces blank answers · gymeee0715 · closed 9 months ago · 6 comments
#31 · yentinglin/Taiwan-LLaMa-v1.0 outputs garbled text · 0781532 · closed 8 months ago · 25 comments
#30 · Inference with the ggmlv3 q6_K model drops characters · wennycooper · closed 9 months ago · 3 comments
#29 · Chinese output is garbled when using Node.js spawn; has anyone encountered this? · edenlin-uj · closed 9 months ago · 4 comments
#28 · Web demo output differs from running the code locally · RosieYC · closed 8 months ago · 1 comment
#27 · Training datasets are not available · Lifulifu · closed 8 months ago · 1 comment
#26 · Question about training details · ysf888app · closed 8 months ago · 2 comments
#25 · [Feature Request] Support InternLM · JimmyMa99 · closed 8 months ago · 0 comments
#24 · Minimum GPU device requirement for inference (with OOM issue) · nigue3025 · closed 10 months ago · 2 comments
#23 · fix issue #22 · bonzo-ntu · closed 10 months ago · 0 comments
#22 · README.md example typo · bonzo-ntu · closed 10 months ago · 0 comments
#21 · Location of downloaded bin file to deploy · nigue3025 · closed 10 months ago · 2 comments
#20 · zh_TW_c4 404 error · linpan · closed 8 months ago · 2 comments
#19 · Model quantize · bensonbs · closed 10 months ago · 10 comments
#18 · docker: Error response from daemon: failed to create task for container · geoxpert0001 · closed 8 months ago · 9 comments
#17 · Add support for Llama2, Palm, Cohere, Anthropic, Replicate, Azure Models - using litellm · ishaan-jaff · closed 8 months ago · 3 comments
#16 · Machine specs used to train this model · hsiaoyun0 · closed 8 months ago · 2 comments
#15 · Using 5 billion tokens to pretrain Llama 2 · joshhu · closed 9 months ago · 1 comment
#14 · Can it be used commercially? · geoxpert0001 · closed 8 months ago · 7 comments
#13 · Support quantized model (int8, int4) and deployment? · ykhorzon · closed 8 months ago · 3 comments
#12 · Industry collaborations (7B/13B/70B) model pretraining · hiwudery · closed 8 months ago · 1 comment
#11 · Will the pretrained model be released? · cyc00518 · closed 8 months ago · 2 comments
#10 · Poor quality of the dataset · LuneZ99 · closed 8 months ago · 3 comments