BlinkDL / ChatRWKV
ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
Apache License 2.0 · 9.39k stars · 689 forks
Issues
#203 B · jewc20009 · opened 3 weeks ago · 0 comments
#202 Speed up model loading · shouldsee · opened 2 months ago · 2 comments
#201 [requires_grad] AttributeError: 'str' object has no attribute 'requires_grad' when deploying ChatRWKV locally · masterandmiku · opened 5 months ago · 1 comment
#200 How to choose a base model? · xunyao4dev · opened 6 months ago · 1 comment
#199 [pip package] feature request: pipeline.generate: add ability to get the state, if it was not provided · Maykeye · opened 6 months ago · 1 comment
#198 [pip package] Make loading aware that os.environ can change · Maykeye · opened 6 months ago · 2 comments
#197 add text condition for gen music · jiaqianjing · opened 7 months ago · 1 comment
#196 model path list · EnricoBeltramo · opened 7 months ago · 1 comment
#195 mps slower than cpu · fakerybakery · opened 7 months ago · 1 comment
#194 How to run new v5-Eagle-7B · arun-samespace · opened 8 months ago · 2 comments
#193 "cpu fp32i8" strategy not working in RWKV v6 through Python rwkv module · polkovnikov · closed 7 months ago · 2 comments
#192 Inference doesn't work on Apple Macbook even when using CPU fp32 as strategy · fredsco · opened 8 months ago · 1 comment
#191 eagle-7B · l0d0v1c · closed 8 months ago · 1 comment
#189 Replies are always truncated; how can I make them end naturally? · micronetboy · opened 9 months ago · 1 comment
#188 feat: add torch parallel mode for v5.2 model · AsakusaRinne · opened 10 months ago · 0 comments
#187 Bro, the output is garbled · humanpp · closed 10 months ago · 1 comment
#186 NameError: name 'PIPELINE' is not defined · Siu-Ming · opened 10 months ago · 1 comment
#185 Update rwkv5.cu for multiprocessing · harrisonvanderbyl · opened 11 months ago · 0 comments
#184 fix bf16 · daquexian · closed 11 months ago · 0 comments
#183 fix multi gpus · daquexian · closed 11 months ago · 0 comments
#182 fix jit · daquexian · closed 11 months ago · 0 comments
#181 fix dml sampling · daquexian · closed 11 months ago · 0 comments
#180 add v5 int8 by unifying float and int8 att · daquexian · closed 11 months ago · 0 comments
#179 How to write RWKV in autoregressive style like an RNN · HaiFengZeng · opened 11 months ago · 2 comments
#178 remove @MyFunction on cuda_att_seq_v5_2 · daquexian · closed 12 months ago · 0 comments
#177 optimize att_seq_v5_2 by replacing torch.cat with torch.empty · daquexian · closed 12 months ago · 0 comments
#176 pure pytorch att_seq_v5_2 · daquexian · closed 12 months ago · 0 comments
#175 fix kernel compilation on windows · josStorer · closed 12 months ago · 0 comments
#174 Support DirectML with minimized code change · pengan1987 · closed 12 months ago · 0 comments
#173 RuntimeError: Error building extension 'wkv_cuda_v1' · scooter99boston · closed 1 year ago · 2 comments
#172 [Feature Request] text2music · rayrayraykk · opened 1 year ago · 2 comments
#171 Prompt for RAG with RWKV-4-World-7B-v1-20230626-ctx4096 · Matthieu-Tinycoaching · opened 1 year ago · 1 comment
#170 fix top_p sampling when cumsum exactly equals top_p · daquexian · closed 1 year ago · 0 comments
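Issue #170 above concerns a boundary case in nucleus (top-p) sampling: what happens when the cumulative probability lands exactly on top_p. A minimal sketch of the technique, assuming a NumPy probability vector; this is an illustration of the edge case, not the repository's actual pipeline code, and the function name is made up here:

```python
import numpy as np

def sample_top_p(probs, top_p):
    """Nucleus sampling: keep the smallest set of highest-probability
    tokens whose cumulative mass reaches top_p, renormalize, and sample.

    Comparing the cumulative sum with >= (rather than >) keeps the
    candidate set correct when the running total equals top_p exactly:
    with >, no position tests True in that case and argmax of an
    all-False array silently falls back to index 0.
    """
    order = np.argsort(probs)[::-1]           # token indices, highest prob first
    cumulative = np.cumsum(probs[order])
    cut = int(np.argmax(cumulative >= top_p)) # first position reaching top_p
    keep = order[:cut + 1]
    kept = np.zeros_like(probs)
    kept[keep] = probs[keep]
    kept /= kept.sum()
    return int(np.random.choice(len(probs), p=kept))
```

For example, with probabilities [0.5, 0.3, 0.2] and top_p = 0.5, the cumulative sum hits 0.5 exactly at the first token, so only that token survives the cutoff.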
#169 huggingface is unreachable, models cannot be downloaded · xiaokai-lyk · closed 1 year ago · 4 comments
#168 replace benchmark.py time with tokens/s · BBuf · closed 12 months ago · 0 comments
#167 'No CUDA GPUs are available' in google colab with V100 GPU and high RAM · uxff · closed 6 months ago · 2 comments
#166 demo true error ? · malv-c · opened 1 year ago · 1 comment
#164 demo ? · malv-c · closed 1 year ago · 2 comments
#163 Open-source Chinese NSFW fine-tuned model · ZhengJun-AI · closed 1 year ago · 0 comments
#162 fix dense_dim error on pytorch 1.13 · daquexian · closed 1 year ago · 0 comments
#161 add "stop words" support · yynil · opened 1 year ago · 0 comments
#160 Add support for "stop_words" in PIPELINE · yynil · opened 1 year ago · 0 comments
#159 Feature Request: an option to use Positional Interpolation to extend CTX length at inference time · ytfh44 · closed 1 year ago · 2 comments
#158 support fp16 on v5 models, fix cublas link errors on some platforms · daquexian · closed 1 year ago · 0 comments
#157 fast fp16 seq and one mode · daquexian · opened 1 year ago · 0 comments
#156 fix fp32 run · daquexian · closed 1 year ago · 0 comments
#155 fix small model fp16 nan · daquexian · closed 1 year ago · 0 comments
#154 [WIP] Add fast cuda kernels for one mode · daquexian · closed 1 year ago · 1 comment
#153 [Help] Explanation of the numerical-overflow handling code in the time-mixing part · Ellen7ions · closed 1 year ago · 0 comments
#152 feat: prefetching · daquexian · opened 1 year ago · 0 comments