-
### It would be nice if we could see what has been updated for a model on ollama.com
![](https://github.com/user-attachments/assets/f6c08a45-e58e-443b-b4e0-2e763239aa2a)
-
I've been trying out Talemate as a regular user of SillyTavern, and I'm trying to work out how to get Talemate to produce the same quality of output ST can produce when working with locally hosted models.
…
-
Please add the ability to see which model generated an image in the history panel.
Currently it shows only the prompt and strength, but I tend to queue a lot of different models and strengths.
-
```python
def qw_chat(messages) -> str:
    if not messages:
        return ''
    template = get_template(template_type, tokenizer)
    seed_everything(42)
    cur_his…
```
-
## Steps to reproduce:
```shell
$ jinja2 -D service_account=spark-service-account -D namespace=spark -D storage_account= -D container=spark-dev-storage bundle-azure-storage.yaml.j2 > bundle-azure-storage…
```
-
I'm using Keras to train a model in Google Colab. During training, I achieve high performance with a small mean absolute error (MAE) as shown by the training metrics. However, when I use model.predict…
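Not the poster's code, but one quick way to rule out a metrics mismatch is to recompute MAE by hand from the `model.predict` outputs and compare it with the value Keras logs during training; `model`, `x_val`, and `y_val` below are placeholders, and only the plain-NumPy helper is actually run here:

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """Plain-NumPy MAE, equivalent to Keras's 'mae' metric."""
    y_true = np.asarray(y_true, dtype=float).ravel()
    y_pred = np.asarray(y_pred, dtype=float).ravel()
    return float(np.mean(np.abs(y_true - y_pred)))

# In Colab you would compare this against the training log, e.g.:
#   preds = model.predict(x_val)              # placeholder model/data
#   print(mean_absolute_error(y_val, preds))
print(mean_absolute_error([1.0, 2.0, 3.0], [1.5, 2.0, 2.5]))  # → 0.333...
```

If the hand-computed value on held-out data is much larger than the training metric, the gap is usually preprocessing applied during training but not at predict time, or simple overfitting.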
-
### Motivation
I noticed that internval_chat/eval/evaluate_vqa.py has parameters for few-shot learning, but they do not appear to be implemented correctly.
My question is:
How can we do few-shot learning …
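For illustration only (this is not the repo's implementation): few-shot evaluation typically just prepends k exemplar question/answer pairs to the query before it is sent to the model. The `build_few_shot_prompt` helper and its inputs below are hypothetical:

```python
def build_few_shot_prompt(exemplars, question, k=2):
    """Hypothetical sketch: prepend k (question, answer) exemplar pairs
    to the target question to form a few-shot prompt."""
    shots = exemplars[:k]
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in shots]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

demo = [
    ("What color is the sky?", "blue"),
    ("How many legs does a dog have?", "4"),
]
print(build_few_shot_prompt(demo, "What is 2+2?"))
```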
-
Hi TextGrad developers,
First of all, thanks a lot for this great work!
I wonder whether you have any plans to allow specifying chat history before the LLM calls? Below is some context explaining why…
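For context, by "chat history" I mean the standard OpenAI-style list of role/content messages. The helper below is only a hypothetical sketch of prepending prior turns before a new call, not TextGrad's actual API:

```python
def with_history(history, user_message):
    """Hypothetical helper: return prior turns plus a new user message,
    without mutating the original history list."""
    return history + [{"role": "user", "content": user_message}]

history = [
    {"role": "user", "content": "My name is Ada."},
    {"role": "assistant", "content": "Nice to meet you, Ada!"},
]
messages = with_history(history, "What is my name?")
print(len(messages))  # → 3
```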
-
Dear,
I am using this exact [example](https://github.com/langchain-ai/langchainjs/blob/main/langchain/src/memory/summary_buffer.ts).
```typescript
// Initialize the memory with a specific model…
```
-
### What features would you like to see added?
One of the benefits of Bedrock is that I can source a large number of very diverse models from a single vendor I am already working with and trust. Un…