-
Tuned model works with VertexAI but not ChatVertexAI:
This works:
```
from langchain_google_vertexai import VertexAI
llm = VertexAI(model_name="gemini-1.0-pro-002",
               tuned_model_name="projec…
```
-
Hi, can you also write an example tool for fine-tuning LLMs, such as Gigabyte AI, as a utility?
https://www.gigabyte.com/Press/News/2201
-
### Describe the bug
This is my configuration file.
```
### OPEN INTERPRETER CONFIGURATION FILE
#{}
# Be sure to remove the "#" before the following settings to use them.
# custom_instr…
```
-
Hello!
I noticed that in your paper you mentioned "To improve coverage, we include additional test cases from AlphaCode (Li et al., 2022) generated with a fine-tuned LLM."
I have a question about …
-
### Describe the issue
No matter which local model I use, the calculator example from the autogen Tool Use tutorial fails in various ways:
1. the agent often fails to produce inputs that match the …
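For context, a calculator tool of the kind the tutorial registers has roughly this shape. The sketch below is illustrative, not the tutorial's exact code; it also coerces string arguments, since local models often emit numbers as strings that don't match the declared schema:

```python
from typing import Literal

Operator = Literal["+", "-", "*", "/"]

def calculator(a: int, b: int, operator: Operator) -> int:
    # Coerce string numbers that weakly aligned local models often send.
    a, b = int(a), int(b)
    if operator == "+":
        return a + b
    if operator == "-":
        return a - b
    if operator == "*":
        return a * b
    if operator == "/":
        return a // b
    raise ValueError(f"unsupported operator: {operator}")

print(calculator("44", 13, "*"))  # coerced string input → 572
```

Validating and coercing inputs inside the tool makes the failure mode visible as a normal Python error instead of a silent schema mismatch.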
-
Hi,
It would be great to have an example of finetuning Phi without LoRA or QLoRA.
Thanks!
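As a rough illustration of what "without LoRA" means here, the sketch below (plain Python, illustrative names only) compares trainable-parameter counts for fully fine-tuning one weight matrix versus training a rank-8 LoRA adapter in its place:

```python
# Illustrative sketch only: contrasts full fine-tuning (every weight
# trainable) with LoRA (only small low-rank factors trainable).

def full_finetune_trainables(d_in, d_out):
    # Full fine-tuning updates the entire weight matrix plus bias.
    return d_in * d_out + d_out

def lora_trainables(d_in, d_out, rank):
    # LoRA freezes the base weight and trains two low-rank factors:
    # A (d_in x rank) and B (rank x d_out).
    return d_in * rank + rank * d_out

full = full_finetune_trainables(4096, 4096)  # one hypothetical projection
lora = lora_trainables(4096, 4096, 8)
print(full, lora)  # full fine-tuning trains ~256x more parameters at rank 8
```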
-
I plan to PR today, though it depends on final progress.
The computation speed is slow because we don't yet have a mulmat kernel with interleaved broadcast support, so the tests are time-consuming…
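For readers unfamiliar with the term, here is a minimal pure-Python sketch (illustrative only, not the actual kernel) of what broadcasting means for a batched matmul: the smaller batch is cycled across the larger one instead of being materialized:

```python
def matmul(a, b):
    # a: m x k, b: k x n, as plain lists of lists.
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def batched_matmul_broadcast(a_batch, b_batch):
    # Broadcast the shorter batch by cycling its index (0, 1, 0, 1, ...),
    # reusing the same matrices instead of copying them -- the pattern a
    # kernel with broadcast support would implement without the loop.
    n = max(len(a_batch), len(b_batch))
    return [matmul(a_batch[i % len(a_batch)], b_batch[i % len(b_batch)])
            for i in range(n)]

a = [[[1, 0], [0, 1]]]                     # batch of 1 (identity matrix)
b = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]   # batch of 2
print(batched_matmul_broadcast(a, b))      # identity broadcast: b unchanged
```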
-
I am working on a task to classify a sentence into multiple topics (multi-label classification).
Initially, I trained a BERT base model on the entire 1.5M unlabeled examples using domain adaptation.
Then…
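A minimal sketch of the multi-label decision rule (illustrative, not the poster's code): each topic is scored independently with a sigmoid and kept if it clears a threshold, rather than picking a single softmax argmax:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_topics(logits, labels, threshold=0.5):
    # logits: raw per-label scores from the classifier head.
    # Each label is decided independently, so a sentence can get
    # zero, one, or several topics.
    return [lab for lab, z in zip(labels, logits) if sigmoid(z) >= threshold]

topics = ["sports", "finance", "politics"]
print(predict_topics([2.0, -1.5, 0.3], topics))  # → ['sports', 'politics']
```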
-
This may not be strictly necessary for active learning, but it makes the data more meaningful and accessible on its own. In a structured format, it can be read by scripts without needing to go to the …
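As a small sketch of the point above (field names are made up for illustration), JSON Lines is one structured format that scripts can read directly:

```python
import io
import json

# Hypothetical annotation records -- one JSON object per line (JSONL).
records = [
    {"text": "sentence one", "labels": ["sports"]},
    {"text": "sentence two", "labels": ["finance", "politics"]},
]

buf = io.StringIO()
for rec in records:
    buf.write(json.dumps(rec) + "\n")

# Any script can now load the data without the original tool:
loaded = [json.loads(line) for line in buf.getvalue().splitlines()]
print(loaded[1]["labels"])  # → ['finance', 'politics']
```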
nwagu updated 2 months ago