Gemma integration (Issue #4) · Open
Isaina opened 4 months ago

Hi,
can you also implement the Gemma model so it can be compared with Llama?
Best regards
@Isaina Open query_vdb.py and replace line 36:

```diff
- model, tokenizer = load("mlx-community/NeuralBeagle14-7B-4bit-mlx")
+ model, tokenizer = load("mlx-community/quantized-gemma-2b-it")
```
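For context, a minimal sketch of what that loading step might look like after the swap. This assumes `load` comes from `mlx_lm` (the repo may wrap it differently), and the prompt below is purely illustrative:

```python
# Sketch of the model-loading step in query_vdb.py after the change.
# Assumption: `load` is mlx_lm.load; surrounding retrieval code is omitted.
from mlx_lm import load, generate

# Quantized Gemma 2B instruct checkpoint from the mlx-community hub.
model, tokenizer = load("mlx-community/quantized-gemma-2b-it")

# Illustrative usage: answer a question given retrieved context.
prompt = "Context: ...\n\nQuestion: What does MLX target?\nAnswer:"
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```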
Can you also support MLX on the Intel macOS platform?
@Isaina No, MLX was created for Apple silicon, so it is not available on Intel Macs.

OK, thanks for the Gemma information.
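As an aside for anyone reading this on Intel hardware, here is a small guard one could add before importing mlx, reflecting the Apple-silicon-only constraint; the exit message wording is illustrative:

```python
# Hypothetical guard: verify the machine is an Apple silicon Mac before
# importing mlx, since MLX does not support Intel Macs.
import platform
import sys

if platform.system() != "Darwin" or platform.machine() != "arm64":
    sys.exit("MLX requires an Apple silicon Mac; Intel Macs are not supported.")

import mlx.core as mx
print(mx.default_device())  # e.g. Device(gpu, 0)
```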