Closed — JulianChenlol closed this issue 3 months ago
Hi,
You could refer to this doc: https://docs.llamaindex.ai/en/stable/examples/customization/llms/AzureOpenAI/
```python
from llama_index.core import (
    Settings,
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
)
from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from llama_index.vector_stores.pgvecto_rs import PGVectoRsStore

api_key = "<api-key>"
azure_endpoint = "https://<your-resource-name>.openai.azure.com/"
api_version = "2023-07-01-preview"

llm = AzureOpenAI(
    model="gpt-35-turbo-16k",
    deployment_name="my-custom-llm",
    api_key=api_key,
    azure_endpoint=azure_endpoint,
    api_version=api_version,
)

# You need to deploy your own embedding model as well as your own chat completion model
embed_model = AzureOpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name="my-custom-embedding",
    api_key=api_key,
    azure_endpoint=azure_endpoint,
    api_version=api_version,
)

Settings.llm = llm
Settings.embed_model = embed_model

# load documents (use input_files for a single file; a bare positional
# argument is treated as a directory to scan)
documents = SimpleDirectoryReader(
    input_files=["./src/getting-started/overview.md"]
).load_data()

# initialize without metadata filter; `client` is your pgvecto.rs client
vector_store = PGVectoRsStore(client=client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
```
Thank you! It works! This is a good template.
I read the LlamaIndex part in the docs, but I found that the default way is to set OS environment variables.
I want to use an Azure OpenAI key instead of the default OpenAI key. Is there a way? Thank you!
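If you want the environment-variable style, a minimal sketch: the openai Python SDK (which LlamaIndex's `AzureOpenAI` wraps) reads Azure-specific variables, so you can export those instead of passing `api_key` explicitly. The variable names below come from the openai SDK; whether the LlamaIndex constructor falls back to them in your version is an assumption worth verifying.

```python
import os

# Azure-specific environment variables read by the openai Python SDK
# (assumed to be picked up when the corresponding constructor arguments
# are omitted) -- set these in your shell or before constructing the LLM
os.environ["AZURE_OPENAI_API_KEY"] = "<api-key>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-resource-name>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"

# With the variables set, the constructors can be called without credentials:
# llm = AzureOpenAI(model="gpt-35-turbo-16k", deployment_name="my-custom-llm")
# embed_model = AzureOpenAIEmbedding(
#     model="text-embedding-ada-002", deployment_name="my-custom-embedding"
# )
```

Note these are distinct from the plain `OPENAI_API_KEY` variable, so setting them does not collide with a regular OpenAI key.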