LHRLAB / ChatKBQA
[ACL 2024] Official resources of "ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models".
https://arxiv.org/abs/2310.08975
MIT License · 222 stars · 21 forks
Issues
#15 oracle entity linking annotations (opened by quqxui, 5 days ago, 3 comments)
#14 Loss stays at zero after running the code (opened by zhangapeng, 1 week ago, 1 comment)
#13 ValueError: FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation (`--fp16_full_eval`) can only be used on CUDA devices. (closed by JokieVN, 2 months ago, 0 comments)
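The ValueError in #13 typically appears when `--fp16` is requested on a machine with no CUDA GPU. A minimal sketch of a workaround, gating half precision on device availability (the helper name `pick_precision_flags` is illustrative; the dict keys mirror Hugging Face `TrainingArguments` field names):

```python
# Sketch: only request fp16 when a CUDA device is actually present.
def pick_precision_flags(cuda_available: bool) -> dict:
    """Return Trainer-style precision flags that are safe for the current device."""
    if cuda_available:
        # GPU present: mixed-precision training and fp16 evaluation are allowed.
        return {"fp16": True, "fp16_full_eval": True}
    # CPU-only machine: fall back to full fp32 to avoid the ValueError.
    return {"fp16": False, "fp16_full_eval": False}
```

In practice the boolean would come from `torch.cuda.is_available()` before building the training arguments.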
#12 Hello author! I ran into insufficient CPU memory while reproducing the code. Could you give me a hint on why this happens and how to fix it? Thank you very much! (opened by DW934, 2 months ago, 0 comments)
#11 Uploading Processed Files (closed by YaooXu, 2 months ago, 2 comments)
#10 There are some abnormalities in the test results (opened by gongchuanyang, 3 months ago, 0 comments)
#9 Suspected missing files (closed by 52566rz, 3 months ago, 2 comments)
#8 Generalization ability (opened by ccp123456789, 5 months ago, 1 comment)
#7 The problem of generating SExpr expressions (opened by ganlinganlin, 5 months ago, 2 comments)
#6 The LoRA hyper-parameters? (opened by JBoRu, 6 months ago, 1 comment)
#5 Clarification on Metrics in ChatKBQA Results Reproduction (opened by FUTUREEEEEE, 7 months ago, 5 comments)
#4 Checkpoint seems not to contain LoRA weights (closed by FUTUREEEEEE, 6 months ago, 4 comments)
#3 TypeError: sdp_kernel() got an unexpected keyword argument 'enable_mem_efficient' (closed by ganlinganlin, 7 months ago, 1 comment)
#2 torch.cuda.OutOfMemoryError: CUDA out of memory. (opened by ganlinganlin, 7 months ago, 1 comment)
#1 Error: missing files (closed by Lucency0331, 7 months ago, 17 comments)