-
In the current implementation, files are written to various locations by default. For consistency as a library, behavior that writes to locations the user has not specified should be eliminated as much as possible. In particular, the following points need improvement:
* When no location is specified, do not use the current directory or the home directory
* When no location is specified, do not synthesize a default path
Basically, when a required path is missing, …
-
Running inference with /TensorRT-LLM/examples/run.py works fine:
mpirun -n 4 -allow-run-as-root python3 /load/trt_llm/TensorRT-LLM/examples/run.py \
--input_text "hello,who are you?" \
…
-
We could use an LLM agent to fetch the story and polish it; at the least, it would be interesting to see how well it performs compared with our current approach.
-
Hi, @LechengKong. Thanks for open-sourcing your excellent work. After reading your paper, I still have a few questions, and I would appreciate your help.
1. During the joint training, how do y…
-
### Problem & Motivation
There is a huge wave of interest around high-accuracy Q&A, such as via Retrieval-Augmented Generation (RAG). RAG accuracy is largely driven by how well vector search is abl…
-
We need a model evaluation method.
-
**PROBLEM**
In the blogger demo, while the insights extraction (transcript2insights) seems effective, the create_blueprint function, which uses an LLM call, appears to lose information. There are two poten…
-
### When running the llama-2-7b-chat-hf model with the OpenAI API on gsm8k (a mathematical-ability test), temperature=0.0 must be set
But I get an unexpected error like
> lm_eval --model local-chat-com…
-
Faithfulness is not callable
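For context, a "not callable" TypeError in Python typically arises when an object is invoked with `(...)` even though it defines no `__call__` method, for example when the intended entry point is a method on an instance. A generic sketch (the class below is a stand-in, not the library's actual implementation):

```python
class Faithfulness:  # stand-in metric class; not the real implementation
    def score(self, answer: str) -> float:
        # trivial placeholder scoring logic
        return 1.0 if answer else 0.0


metric = Faithfulness()

# Invoking the instance directly raises:
#   TypeError: 'Faithfulness' object is not callable
try:
    metric("some answer")
except TypeError as e:
    print(e)

# Calling the intended method works
print(metric.score("some answer"))
```

Checking whether the metric is meant to be called directly or through a method such as the one above is usually the first step in debugging this class of error.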
-
Hi, I appreciate the effort to develop a benchmark that evaluates ML agent systems in every possible state. But I am most curious about the annotation process that was used to create these benchmarks. …