4AI / LS-LLaMA
A Simple but Powerful SOTA NER Model | Official Code For Label Supervised LLaMA Finetuning
https://arxiv.org/abs/2310.01208
MIT License · 122 stars · 19 forks
Issues (newest first)
#22 Do you train only linear classification head or finetune the whole Llama model? (ngthanhtin, closed 3 weeks ago, 2 comments)
#21 process doubt for dataset (misitetong, opened 1 month ago, 2 comments)
#20 Can you provide multi GPU training script (XYaoooo, opened 2 months ago, 0 comments)
#19 Bitsandbytes quantization extension (ferrazzipietro, opened 2 months ago, 3 comments)
#18 Hi, what's the prompt for NER task? (wmkai, closed 3 months ago, 5 comments)
#17 RuntimeError with dtypes (biirving, opened 3 months ago, 1 comment)
#16 Ask pretrained weights (nguyenhoanganh2002, closed 4 months ago, 1 comment)
#15 Added CI data and made changes to unllama_token_clf (neeraja1504, closed 4 months ago, 0 comments)
#14 Padding Strategy (upjabir, opened 5 months ago, 1 comment)
#13 Assertion failed (catalwaysright, opened 5 months ago, 2 comments)
#12 downgraded to transformers 4.32.1 still got error (qiuhaolu, closed 5 months ago, 4 comments)
#11 TypeError: LlamaDecoderLayer.__init__() missing 1 required positional argument: 'layer_idx' (frankdarkluo, closed 5 months ago, 4 comments)
#10 Training on the custom dataset? (25icecreamflavors, opened 6 months ago, 0 comments)
#9 the inference function (1259927114, opened 6 months ago, 0 comments)
#8 Not LlamaForSequenceClassification in modeling_llama.py (thanhsang298, opened 6 months ago, 1 comment)
#7 Error while inferencing (drmayu7, opened 7 months ago, 1 comment)
#6 Update README.md (csroyli, closed 7 months ago, 0 comments)
#5 Update modeling_llama.py (csroyli, closed 7 months ago, 0 comments)
#4 Cannot reproduce the results of this paper... (coder4nlp, closed 7 months ago, 16 comments)
#3 LoRA seems not training the linear head for classification (mutetea, closed 7 months ago, 1 comment)
#2 How to train with multiple GPUs (xiaohei1001, closed 7 months ago, 1 comment)
#1 I'm stuck in a place, but I'm a beginner and I don't understand why, can you help me (ckw-6, closed 9 months ago, 4 comments)