-
Hi, I use the following command to run the code:
CUDA_VISIBLE_DEVICES=3,5,6 ACCELERATE_LOG_LEVEL=info accelerate launch --config_file accelerate_configs/deepspeed_zero3.yaml scripts/run.py training_…
-
#!/bin/bash
CUDA_VISIBLE_DEVICES=0,1 python ../evaluation/run_evaluation_llm4decompile_vllm.py \
--model_path ../../LLM/llm4decompile-6.7b-v1.5 \
--testset_path ../decompile-eval/decompile-ev…
-
Hello,
Wonderful work on Voyager.
Please consider adding local model support (instead of the openai package - using something like the Python requests package against a localhost model with an openai comp…
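The request above can be sketched as follows. This is a minimal, hypothetical example assuming a local server (e.g. llama.cpp's server or vLLM) is running on localhost and exposes an OpenAI-compatible `/v1/chat/completions` endpoint; the URL, port, and model name are placeholders, not part of Voyager.

```python
import requests  # pip install requests

# Assumed endpoint of a local OpenAI-compatible server (placeholder URL/port).
LOCAL_URL = "http://localhost:8000/v1/chat/completions"


def build_payload(prompt: str, model: str = "local-model",
                  temperature: float = 0.0) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def chat(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    resp = requests.post(LOCAL_URL, json=build_payload(prompt), timeout=60)
    resp.raise_for_status()
    # OpenAI-compatible servers return choices[0].message.content.
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(chat("Say hello in one word."))
```

Because the wire format matches the OpenAI API, such a swap could also be done by pointing the openai client's `base_url` at the local server instead of using requests directly.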
-
# Pronunciation scoring for non-native English
This task is to perform pronunciation scoring for non-native speakers of English. Pronunciation scoring is important in computer-assisted language le…
-
The current evaluation metrics supported by `llm-eval` are robust. However, upon reviewing the documentation, I found that the current repo doesn't account for evaluating model toxicity. Assessing LLM…
-
Hi, I noticed in the tech report of LLama3-8B-80K that the authors evaluate the vanilla LLama-8K-Instruct on the LongBench dataset with an 8K context length and obtain the following results:
![image](…
-
Thanks for your great work!
I want to view the results after evaluation. Where can I find the WandB project "llm-driver"?
-
Thanks for this refreshing take on LLM prompt generation and evaluation, it's very promising.
I was wondering if few-shot examples should have their own first-class support in BAML due to their pow…
-
Must-Know LLM Learning Series (Part 1): Large Model Fundamentals https://xie.infoq.cn/article/4a3cc4bb786ad63e31414c466?utm_campaign=geektime_search&utm_content=geektime_search&utm_medium=geektime_search&utm_source=geektime_search&utm_t…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I am using the question generator from LlamaIndex; it uses an OpenAI LLM, and I want to u…