kamalkraj / e5-mistral-7b-instruct
Finetune mistral-7b-instruct for sentence embeddings
Apache License 2.0 · 61 stars · 13 forks
Issues
#14 OVERFLOW! Rank 0 Skipping step (liujiqiang999, opened 3 weeks ago, 0 comments)
#13 Best Practices for Fine-Tuning Models on Multi-Hop Datasets? (Leon-Sander, opened 4 weeks ago, 0 comments)
#12 Exception when "--checkpointing_steps" is set (Hypothesis-Z, opened 2 months ago, 2 comments)
#11 ValueError: expected sequence of length 595 at dim 1 (got 589) (Hypothesis-Z, closed 3 months ago, 1 comment)
#10 OOM with 2 GPUs (48GB in total) (yurinoviello, opened 3 months ago, 0 comments)
#9 fix batch_size > 1 bug using collate_fn (sangzisen, closed 2 months ago, 0 comments)
#8 fix batch_size > 1 bug using custom_collate_fn (sangzisen, closed 4 months ago, 0 comments)
#7 Inference OOM with 80G GPU 1000 input sequence length (charliedream1, opened 5 months ago, 6 comments)
#6 Am I using the code incorrectly? help me (Pang-dachu, opened 5 months ago, 14 comments)
#5 Is there any way to use multiple GPUs? (Pang-dachu, closed 5 months ago, 1 comment)
#4 How much GPU should it need minimum to run this? (anonymousz97, closed 5 months ago, 2 comments)
#3 when I modify batch_size larger than 1 (sangzisen, closed 5 months ago, 8 comments)
#2 out of memory on A100-80G. (Joris-Fu, closed 5 months ago, 7 comments)
#1 Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (Tostino, closed 5 months ago, 9 comments)