akozd opened this issue 1 year ago
@akozd did you notice that the table creation steps are not available in the notebook https://github.com/aws-samples/amazon-sagemaker-genai-datamesh/blob/main/blogs/Simple-text-to-sql/mda_with_llm_langchain_byo_model_without_cloudformation.ipynb ?
The database creation step is there, but when I run the code, the tables are not generated. As a result, when we run:
```python
def parse_catalog():
    # Connect to the Glue catalog
    # get metadata of Redshift Serverless tables
    columns_str = ''
    # define Glue client
    glue_client = boto3.client('glue')
    for db in gdc:
        response = glue_client.get_tables(DatabaseName=db)
        print(response)
        for tables in response['TableList']:
            print(tables)
```
the table list comes back empty: `'TableList': []`.
I am following the same code and steps as in the notebook.
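For what it's worth, `get_tables` returns an empty `TableList` when the database exists but no crawler or DDL has created tables in it yet. Separating the boto3 call from the response parsing also makes the parsing checkable without AWS access. A minimal sketch, assuming the notebook's metadata-string format; the helper names here are mine, not from the notebook:

```python
def format_table_metadata(response, columns_str=''):
    """Append 'database|table|column' rows from one Glue get_tables
    response. (Hypothetical helper; the notebook builds a similar
    string inside parse_catalog().)"""
    for table in response.get('TableList', []):
        db_name = table.get('DatabaseName', '')
        for col in table.get('StorageDescriptor', {}).get('Columns', []):
            columns_str += f"{db_name}|{table['Name']}|{col['Name']}\n"
    return columns_str


def parse_catalog(database_names):
    """Collect metadata for every table in the given Glue databases."""
    import boto3  # imported here so the parser above stays testable offline
    glue_client = boto3.client('glue')
    columns_str = ''
    for db in database_names:
        # Note: get_tables paginates via NextToken for large catalogs.
        response = glue_client.get_tables(DatabaseName=db)
        if not response['TableList']:
            # Empty TableList: the database exists but no crawler/DDL
            # has created tables in it yet.
            print(f"Warning: no tables found in Glue database {db!r}")
        columns_str = format_table_metadata(response, columns_str)
    return columns_str
```

Printing a warning per empty database makes it obvious which source never got its tables created, instead of silently producing an empty catalog string.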
- pip installs: commented out? Why are you individually installing each library? You should `pip install` the libraries all at once.
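For example, a single install cell could replace the per-library ones. The exact library list below is illustrative only; take it from the notebook's actual imports:

```shell
pip install --quiet boto3 sqlalchemy snowflake-sqlalchemy langchain
```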
- `engine_snowflake=create_engine('snowflake://'+sf_username+':'+sf_password+'@'+sf_account_id+'/'+db+'/'+schema+'?warehouse='+dwh)` <-- use f-strings instead.
- I'd like the `parse_catalog()` function to comprise multiple helper functions, one for each type of metadata source.
- `parse_catalog()` also doesn't really work for customer use cases: a bunch of things have been hardcoded, such as `stockmarket_schema`. Really, you want this solution to be generic enough that the customer updates their creds in the first section, and then in `parse_catalog()` the catalog is automatically constructed from the specified sources.
- `else: print("error")` followed by `print("Step complete. Channel is: ", channel)` -> you are just printing an error statement, and the subsequent line prints as if nothing happened. Consider raising an actual `Exception`.
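To illustrate the f-string and exception suggestions together, a small sketch; the helper names and the `valid_channels` values are mine, not taken from the notebook:

```python
def make_snowflake_url(user, password, account, db, schema, warehouse):
    """f-string version of the concatenated Snowflake URL
    (hypothetical helper name)."""
    return f"snowflake://{user}:{password}@{account}/{db}/{schema}?warehouse={warehouse}"


def set_channel(channel, valid_channels=('glue', 'api')):
    """Raise instead of print-and-continue when the channel is invalid.
    (The valid_channels values are illustrative.)"""
    if channel not in valid_channels:
        raise ValueError(f"Unsupported channel: {channel!r}")
    print("Step complete. Channel is:", channel)
```

Raising `ValueError` stops execution at the bad configuration instead of letting "Step complete" print as if nothing went wrong.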
- `print(resolve_query(query="Which stock performed the best and the worst in May of 2013?"))`. In general, the code should be as easy to use and as generic as possible for the customer.