aws-samples / amazon-bedrock-samples

This repository contains examples to help customers get started with Amazon Bedrock. It includes examples for all available foundation models.
https://aws.amazon.com/bedrock/
MIT No Attribution

Feature/prompt academy sess 4 #120

Closed rsgrewal-aws closed 6 months ago

rsgrewal-aws commented 6 months ago

Issue #, if available:

Description of changes:

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

chpecora commented 6 months ago

Rupinder, everything looks good except for a few things I noticed:

  1. In Chat_application.ipynb, in the section "Using Llamaindex for orchestration for RAG", when running this code cell:

    from IPython.display import Markdown, display
    from langchain.embeddings.bedrock import BedrockEmbeddings
    from langchain.llms.bedrock import Bedrock
    from llama_index import ServiceContext

    ImportError: cannot import name 'version_short' from 'pydantic.version' (/opt/conda/lib/python3.10/site-packages/pydantic/version.cpython-310-x86_64-linux-gnu.so)

which looks to be coming from:

    ImportError                               Traceback (most recent call last)
    Cell In[41], line 5
          2 from langchain.embeddings.bedrock import BedrockEmbeddings
          3 from langchain.llms.bedrock import Bedrock
    ----> 5 from llama_index import ServiceContext
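
A possible fix, though I haven't verified it in the notebook's environment: the trace suggests the installed pydantic predates 'version_short', which the installed llama_index expects, so pinning the packages to mutually compatible versions and restarting the kernel might resolve it. A rough sketch (the version bounds are assumptions, adjust to whatever the repo's requirements actually pin):

```python
# Hypothetical pins: pydantic.version.version_short only exists in pydantic 2.x, and
# ServiceContext is importable from the llama_index top level only before 0.10.
# langchain may also need a release that tolerates pydantic 2.
%pip install --upgrade "pydantic>=2.1" "llama-index>=0.9,<0.10"

# Restart the kernel after installing, then re-run the original imports:
from langchain.embeddings.bedrock import BedrockEmbeddings
from langchain.llms.bedrock import Bedrock
from llama_index import ServiceContext
```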

  2. Getting an IndexError at "4.3 Ingest the image embeddings" in mm_search.ipynb:

    IndexError                                Traceback (most recent call last)
    Cell In[30], line 22
         18 text_embedding_pairs = zip(dataset['item_name_in_en_us'].to_list(), multimodal_embeddings_img)
         19 #metadata_dict = dict ( [(key, value) for i, (key, value) in enumerate(zip(dataset['item_name_in_en_us'].to_list(), dataset['img_full_path'].to_list()))] )
    ---> 22 db = FAISS.from_embeddings(text_embedding_pairs, embedding_model, metadatas=metadata_dict)
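
One possible cause, and this is just an assumption since I haven't re-run the cell: FAISS.from_embeddings expects metadatas to be an iterable of dicts with one entry per (text, embedding) pair, whereas metadata_dict (built as in the commented-out line above) is a single name-to-path dict, and zip would also hide any length mismatch. A sketch along these lines, reusing the notebook's existing variables (the "img_full_path" metadata key is just illustrative), might avoid the error:

```python
# Sketch only: assumes the IndexError comes from metadatas not lining up one-to-one
# with the (text, embedding) pairs. dataset, multimodal_embeddings_img, embedding_model
# and FAISS come from the notebook's earlier cells.
texts = dataset['item_name_in_en_us'].to_list()
image_paths = dataset['img_full_path'].to_list()

# zip() silently truncates to the shorter input, so check the lengths explicitly first.
assert len(texts) == len(multimodal_embeddings_img) == len(image_paths)

text_embedding_pairs = list(zip(texts, multimodal_embeddings_img))

# One metadata dict per pair, in the same order as the pairs.
metadatas = [{"img_full_path": path} for path in image_paths]

db = FAISS.from_embeddings(text_embedding_pairs, embedding_model, metadatas=metadatas)
```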

  3. Also, in the "Next steps" section at the bottom of each notebook, could there be a reference to the name of the next notebook, so it is clear which one comes next in the sequence?

For example:

    Next steps
    Now that we have a working RAG application with vector search retrieval, we will explore a new type of retrieval. In the next notebook we will see how to use LLM agents to automatically retrieve information from APIs. <name/description of the next notebook>

rsgrewal-aws commented 6 months ago

fixed the builds

chpecora commented 6 months ago

LGTM