Apparently a Llama embedding function needs some run context set up first, and one cannot simply call `vector = embedding("sentence")`; that call seems to hang (at least in Jupyter). Figure out what context is required and use it to replace the hardcoded 1536 dimension in the llama vector quickstart demo.
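A minimal sketch of the dimension-probing pattern the note asks for: embed one short probe sentence and take the vector's length instead of hardcoding 1536. The `embed` callable here is a hypothetical stand-in for the actual Llama embedding function (which, per the note above, may first need its run context initialized); the stand-in's 4096 width is only an example value.

```python
from typing import Callable, List

def infer_embedding_dim(embed: Callable[[str], List[float]],
                        probe: str = "dimension probe") -> int:
    """Embed a short probe sentence once and return the vector length.

    Replaces a hardcoded dimension (e.g. 1536) so the demo keeps
    working when the embedding model changes.
    """
    return len(embed(probe))

# Hypothetical stand-in for the real Llama embedding function.
# A real setup would initialize the model's run context first.
def fake_llama_embed(sentence: str) -> List[float]:
    return [0.0] * 4096  # e.g. the hidden size of a 7B Llama model

if __name__ == "__main__":
    dim = infer_embedding_dim(fake_llama_embed)
    print(dim)
```

The same probe call can double as the "does the embedding function actually run in this context" check the note describes, since a hang would surface immediately on the first call.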