Open sheneman opened 2 weeks ago
Tried on the 1.2.3 Docker version with Pixtral 12B and it worked. <--- Correction: no, in fact I was lucky, see the next comment... At least on the Docker version it looks right.
I just tried on the 1.2.4 desktop version with Pixtral 12B and it fails:
Looks like if the data is not in the first description, it hallucinates colors... (on the desktop version)
Correction: I have seen the exact same behaviour on Docker :) (1.2.3, Pixtral 12B).
How are you running AnythingLLM?
AnythingLLM desktop app
What happened?
When I upload a file, I can use a vision model like llama3.2-vision:11b to describe it, but subsequent prompts have no memory of the image.
I would expect to be able to ask repeated questions about the image, and for it to remain in my current context until the context window is exhausted.
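One plausible cause (an assumption on my part, not confirmed against AnythingLLM's source) is that the image attachment is only sent with the first request and is dropped when the chat history is replayed on later turns. A minimal sketch of the expected behaviour, using a hypothetical `build_messages` helper and a message shape where attachments travel with their turn:

```python
import base64

def build_messages(history, user_text, image_b64=None):
    """Append a new user turn; earlier turns (and their attachments) are kept."""
    msg = {"role": "user", "content": user_text}
    if image_b64 is not None:
        # The attachment is stored on the turn itself, so replaying the full
        # history on later requests keeps the image visible to the model.
        msg["images"] = [image_b64]
    return history + [msg]

# First turn: attach the image (dummy bytes for illustration).
fake_image = base64.b64encode(b"\x89PNG...").decode()
history = build_messages([], "Describe this image.", image_b64=fake_image)

# Later turn: no new attachment, but the earlier image is still in history,
# so a vision model that receives the full history can still answer about it.
history = build_messages(history, "What colors appear in it?")

has_image = any("images" in m for m in history)
```

If instead the client rebuilds each request with text-only history, the model never sees the image again after the first turn, which would match the behaviour reported above.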
Are there known steps to reproduce?
No response