To address the bug you're experiencing when using Bedrock Cohere with Pinecone serverless in node mode with llama-index, here are some steps and potential solutions:
Initialization of VectorStoreIndex:

index = VectorStoreIndex(nodes, storage_context=storage_context, service_context=service_context)
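For reference, here is a minimal sketch of how the storage_context and service_context in this call might be wired up; the index name, API key, region, and the BedrockEmbedding model_name parameter are assumptions to adapt to your environment, not llama-index internals:

from llama_index.core import ServiceContext, StorageContext
from llama_index.embeddings.bedrock import BedrockEmbedding
from llama_index.vector_stores.pinecone import PineconeVectorStore
from pinecone import Pinecone

# Connect to an existing serverless index (the name is a placeholder).
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
vector_store = PineconeVectorStore(pinecone_index=pc.Index("quickstart"))
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Bedrock Cohere embedding; the model_name parameter is an assumption,
# check the signature in your installed llama-index version.
embed_model = BedrockEmbedding(model_name="cohere.embed-english-v3")
service_context = ServiceContext.from_defaults(embed_model=embed_model)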
Inserting Nodes:
index.insert_nodes(subnodes)
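A hedged sketch of one way such subnodes could be produced before insertion; the splitter settings and document text are illustrative only:

from llama_index.core import Document
from llama_index.core.node_parser import SentenceSplitter

# Split a document into smaller nodes, then insert them into the index.
parser = SentenceSplitter(chunk_size=256, chunk_overlap=20)
subnodes = parser.get_nodes_from_documents([Document(text="example text ...")])
index.insert_nodes(subnodes)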
Ensure that the request and response formats are correctly handled for the "cohere.command-text-v14" model. Refer to the test cases provided in the test_bedrock.py file to verify the correct formats.
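As a point of comparison outside llama-index, here is a hedged sketch of invoking the model with plain boto3; the prompt and parameter values are illustrative, and the response shape noted in the comment is our expectation for Cohere Command models:

import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
response = client.invoke_model(
    modelId="cohere.command-text-v14",
    body=json.dumps({"prompt": "Hello", "max_tokens": 64, "temperature": 0.5}),
    accept="application/json",
    contentType="application/json",
)
payload = json.loads(response["body"].read())
# Expected response shape (assumption): {"generations": [{"text": "..."}], ...}
print(payload["generations"][0]["text"])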
Ensure that your implementation of the Pinecone vector store in llama-index correctly handles the query and response formats. Here is an example of how the Pinecone vector store is implemented:
class PineconeVectorStore(BasePydanticVectorStore):
    # ... other methods ...

    def query(self, query: VectorStoreQuery, **kwargs: Any) -> VectorStoreQueryResult:
        # Implementation details...
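A hedged usage sketch of that query path, assuming vector_store is a PineconeVectorStore connected to your index; the embedding values and dimension are placeholders:

from llama_index.core.vector_stores.types import VectorStoreQuery

# The query embedding must have the same dimension as the Pinecone index.
query = VectorStoreQuery(
    query_embedding=[0.1] * 1024,  # placeholder vector
    similarity_top_k=2,
)
result = vector_store.query(query)
print(result.ids, result.similarities)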
Ensure you have the correct version of the Pinecone client installed. You can check the installed Pinecone client version as follows:
from typing import Any

from packaging import version


def _import_pinecone() -> Any:
    try:
        import pinecone
    except ImportError as e:
        raise ImportError(
            "Could not import pinecone python package. "
            "Please install it with `pip install pinecone-client`."
        ) from e
    return pinecone


def _is_pinecone_v3() -> bool:
    pinecone = _import_pinecone()
    pinecone_client_version = pinecone.__version__
    return version.parse(pinecone_client_version) >= version.parse("3.0.0")
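For example, the helper can be used to guard serverless usage; the client construction and error message here are illustrative assumptions, not llama-index internals:

if _is_pinecone_v3():
    pinecone = _import_pinecone()
    # The v3+ client exposes a Pinecone class instead of pinecone.init().
    pc = pinecone.Pinecone(api_key="YOUR_PINECONE_API_KEY")
else:
    raise RuntimeError("Pinecone serverless requires pinecone-client >= 3.0.0")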
By ensuring the correct request and response formats, verifying the Pinecone vector store implementation, and checking the Pinecone client version, you should be able to resolve the bug. If the issue persists, please provide more specific details or logs to further diagnose the problem.
Bug Description
I can still reproduce the bug from https://github.com/run-llama/llama_index/issues/13376, even with the newest llama-index version.
The bug does not occur if I just call the embeddings directly, as in this tutorial: https://docs.llamaindex.ai/en/stable/examples/embeddings/bedrock/?h=bedrock+emb
However, following this tutorial fails: https://docs.llamaindex.ai/en/stable/examples/vector_stores/PineconeIndexDemo/ (naturally, when using the Bedrock Cohere embedding).
It occurs at least when using node mode in combination with serverless AWS Pinecone:
index = VectorStoreIndex(nodes, storage_context=storage_context, service_context=service_context)
or, in certain examples, in this mode too:
index.insert_nodes(subnodes)
I would be thankful if this could be solved.
Version
0.10.38
Steps to Reproduce
Use Bedrock Cohere with Pinecone serverless, as in the demo linked above. Reproduction should work at least when building the index from nodes.
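For completeness, a hedged sketch of creating the serverless index used in the reproduction; the index name, region, and dimension are placeholders (1024 matches cohere.embed-english-v3, adjust for your embedding model):

from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
pc.create_index(
    name="quickstart",
    dimension=1024,
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)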
Relevant Logs/Tracebacks