-
## [LangChain Development](https://app.pluralsight.com/library/courses/langchain-development/table-of-contents)
by [Tom Taulli](https://app.pluralsight.com/profile/author/tom-taulli)
founder: H…
-
Hi,
Thanks for sharing this work.
1) What's the best way to perform a quick analysis on a custom domain without fine-tuning (zero-shot or few-shot)?
How should I prepare my input dataset and prompts…
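For context, a minimal sketch of the kind of few-shot prompt I have in mind (the task, labels, and examples here are made up for illustration):

```python
# Build a few-shot classification prompt from labeled examples.
# All example texts and labels below are hypothetical.
def build_few_shot_prompt(examples, query):
    """Format (text, label) demonstration pairs followed by the new query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes from here
    return "\n".join(lines)

examples = [
    ("The battery lasts all day.", "positive"),
    ("It broke after a week.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Setup was painless.")
print(prompt)
```

For zero-shot, the same template with an empty `examples` list would leave only the instruction and the query.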
-
Your work is excellent, and the open-source efforts are truly commendable. I have recently been trying out your code, but I encountered an issue during training. The llara_train.sh script mentions a c…
-
Hi Mike,
Of the three fine-tuning runs using only Pensvm_A questions, I have finished the first two (training with just the first 5 problems in Ch5 Pensvm A, and training with the first five …
-
Hello,
Thank you for putting out this amazing set of models, datasets and evals!
Is it possible to release the code and details for the VideoChat2-text baseline from your paper? I am studying some…
-
Hi, you have done an excellent job, and it would be perfect if you could include highly relevant references.
The following two papers demonstrate the effectiveness of pre-training visual prompts in …
-
### Description
1. Commits are currently split into individual messages for each file, but sometimes a single change affects multiple files. I hope there is only a single line of commit me…
-
Hi,
For fine-tuning the current model to other languages, is it better to start from the existing trained model and prompt tokenizer "parler-tts/parler_tts_mini_v0.1", or is it better to train from scratch…
-
I am currently working on reranking tools (retrieving the appropriate tool for an LLM), but the cross-encoder models are not converging.
Here is an example:
query: give me btc price
tool: ge…
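For reference, this is roughly how I build the training pairs: one positive (query, tool) pair plus the remaining tools as negatives. The tool names and descriptions here are made up, and the helper is my own:

```python
# Sketch of cross-encoder training-pair construction for tool retrieval.
# Tool registry is hypothetical; descriptions stand in for real tool docs.
tools = {
    "get_crypto_price": "Fetch the current price of a cryptocurrency.",
    "get_weather": "Return the weather forecast for a city.",
    "send_email": "Send an email to a recipient.",
}

def make_pairs(query, positive_tool):
    """Label the matching tool 1.0 and every other tool 0.0 for this query."""
    pairs = []
    for name, desc in tools.items():
        label = 1.0 if name == positive_tool else 0.0
        pairs.append((query, f"{name}: {desc}", label))
    return pairs

pairs = make_pairs("give me btc price", "get_crypto_price")
```

Each `(query, tool_text, label)` triple is then fed to the cross-encoder as a scored sentence pair.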
-
Prompt weighting is essential for fine-tuning prompts. Is there a plan to add it to this framework, or is it out of scope for the project?
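To be concrete, by prompt weighting I mean something along these lines: scaling individual token embeddings before pooling so that some tokens are emphasized. This is a toy sketch with a made-up vocabulary, not this framework's API:

```python
import numpy as np

# Toy prompt weighting: scale each token's embedding by a per-token weight
# before mean-pooling. The embedding table is random, for illustration only.
rng = np.random.default_rng(0)
embed = {t: rng.normal(size=4) for t in ["photo", "of", "a", "cat"]}

def weighted_prompt(tokens, weights):
    """Mean-pool token embeddings after scaling each by its weight."""
    vecs = np.stack([embed[t] * w for t, w in zip(tokens, weights)])
    return vecs.mean(axis=0)

plain = weighted_prompt(["photo", "of", "a", "cat"], [1, 1, 1, 1])
emphasized = weighted_prompt(["photo", "of", "a", "cat"], [1, 1, 1, 2])  # "cat" up-weighted
```

A weight of 1 leaves a token unchanged; weights above 1 pull the pooled vector toward that token's direction.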