redotvideo/haven
LLM fine-tuning and eval. https://haven.run
Apache License 2.0 · 341 stars · 11 forks
Issues
| # | Title | Author | State | When | Comments |
|---|-------|--------|-------|------|----------|
| #91 | Add Support for Self-Deployment & Add Feature for Dataset Visualization and Collection | uniAIDevs | open | 7 months ago | 0 |
| #90 | fix linebreak in readme | justusmattern27 | closed | 10 months ago | 0 |
| #89 | Add new readme | justusmattern27 | closed | 10 months ago | 0 |
| #88 | Open source new project | hkonsti | closed | 10 months ago | 1 |
| #87 | OUTPUT_POSTFIX in preprocess function can cause infinitively generation. | binhmed2lab | open | 11 months ago | 0 |
| #86 | is there an error in the way the prompt is builded ? | tomad02 | open | 1 year ago | 1 |
| #85 | Update Readme | justusmattern27 | closed | 1 year ago | 0 |
| #84 | Is this maintained? | osilverstein | open | 1 year ago | 3 |
| #83 | TypeError: from_pretrained() missing 1 required positional argument: 'model_id' | jayantkhannadocplix1 | open | 1 year ago | 1 |
| #82 | Llamatune fails with your example code from its home page | IridiumMaster | open | 1 year ago | 2 |
| #81 | how many gpus needs when full training 70B Llama2 | alphanlp | open | 1 year ago | 1 |
| #80 | can you provide you chat.json file | alphanlp | open | 1 year ago | 0 |
| #79 | Finetuning not affecting output | fefitin | closed | 1 year ago | 1 |
| #78 | Running Local Server | VeitIsopp | closed | 1 year ago | 1 |
| #77 | Add llamatune | justusmattern27 | closed | 1 year ago | 0 |
| #76 | Query about LLM Inference Acceleration Support | eshoyuan | closed | 1 year ago | 2 |
| #75 | Add support for access-restricted models | hkonsti | closed | 10 months ago | 2 |
| #74 | Make logging readable | hkonsti | closed | 1 year ago | 0 |
| #73 | Update sdk version in setup.py | hkonsti | closed | 1 year ago | 0 |
| #72 | AWS support | hkonsti | closed | 10 months ago | 1 |
| #71 | Port python sdk to other codegen tool | hkonsti | closed | 10 months ago | 1 |
| #70 | Request - spin up AWS instances | sqpollen | closed | 1 year ago | 2 |
| #69 | v0.2.0 | hkonsti | closed | 1 year ago | 0 |
| #68 | Small fixes in README | justusmattern27 | closed | 1 year ago | 0 |
| #67 | Return id as string when creating worker | hkonsti | closed | 1 year ago | 0 |
| #66 | Google Colab Demo in README | justusmattern27 | closed | 1 year ago | 0 |
| #65 | Improve setup error message | hkonsti | closed | 1 year ago | 0 |
| #64 | Small Roadmap Update | justusmattern27 | closed | 1 year ago | 0 |
| #63 | Add Getting Started Section in README | justusmattern27 | closed | 1 year ago | 0 |
| #62 | Turn list workers response into native python | hkonsti | closed | 1 year ago | 0 |
| #61 | Verify deployment is created, disable stop-tokens option for now | hkonsti | closed | 1 year ago | 0 |
| #60 | Fix 0 being falsey in if | hkonsti | closed | 1 year ago | 0 |
| #59 | Remove T4 GPU option for large models | justusmattern27 | closed | 1 year ago | 0 |
| #58 | Update versioning | hkonsti | closed | 1 year ago | 0 |
| #57 | Higher CPU Memory for T4 instances | justusmattern27 | closed | 1 year ago | 0 |
| #56 | Add stop tokens on worker completion endpoint | justusmattern27 | closed | 1 year ago | 0 |
| #55 | removed model download before vllm engine initialization | justusmattern27 | closed | 1 year ago | 0 |
| #54 | Allow custom worker images | hkonsti | closed | 1 year ago | 0 |
| #53 | Add stop tokens to complete call, enable complete-only model-workers | hkonsti | closed | 1 year ago | 1 |
| #52 | Rename UNREACHABLE to LOADING | hkonsti | closed | 1 year ago | 0 |
| #51 | Minor telemetry changes | hkonsti | closed | 1 year ago | 0 |
| #50 | Add T4 Support, Add MPT-30B Support, Enable Models with remote code to run | justusmattern27 | closed | 1 year ago | 0 |
| #49 | Api endpoints for adding and removing fine-tuned models | hkonsti | closed | 1 year ago | 0 |
| #48 | Add option to disable admin endpoints | hkonsti | closed | 1 year ago | 0 |
| #47 | Small telemetry update | hkonsti | closed | 1 year ago | 0 |
| #46 | Return instead of throwing when worker name is already taken | hkonsti | closed | 1 year ago | 0 |
| #45 | Completion endpoint | hkonsti | closed | 1 year ago | 1 |
| #44 | Justus/completion api | hkonsti | closed | 1 year ago | 0 |
| #43 | Improve Readme | hkonsti | closed | 1 year ago | 0 |
| #42 | Add diagram to Readme | hkonsti | closed | 1 year ago | 3 |