uakbr / whispe2.0

Apache License 2.0

Transcribe audio #1

Closed. uakbr closed this issue 2 years ago

uakbr commented 2 years ago

URL

https://www.youtube.com/watch?v=LFrlMQbPehA

github-actions[bot] commented 2 years ago

Language: english

Transcription: Hello everyone, welcome to AWS Tutorials. In AWS Tutorials, we provide workshops and exercises to learn about AWS services. These workshops and exercises are published to our website aws-dojo.com, and you can use them to implement specific scenarios and learn about AWS services. Today, I am going to talk about the Data API in Amazon Redshift, which is one of the recently released features in Redshift. So let's get some introduction to this API. Using the Data API in Redshift, you can run your SQL commands, but you invoke these SQL commands like HTTP endpoints. It provides you an API endpoint, and through that endpoint you can run SQL commands to perform the kind of operation you want. Now, one of the beautiful parts of this is that your command could be quite intensive and take quite long to execute. The Data API is asynchronous. So when you fire off a SQL command, it does not wait for the command to finish the job and then respond; it does not work in a request-and-response model, it works in an asynchronous model. You fire off your command and it comes back and tells you, hey, your command has been submitted and here is your query ID. There is no result from the command immediately. What you have to do is come back later with this query ID and request the result of the command. So the whole execution of the command is asynchronous: you fire off the command, it gives you a query ID, and with that query ID you can go and fetch your result at some later point. This is very useful, because Redshift is a data warehouse solution and you might want to run a query which takes a couple of hours to execute. In that case request-response is not going to work, and it makes sense to have some kind of query ID reference so you can go back to the Data API and say, hey, I fired off a request some time back, do you mind giving me the result of that? The Data API also manages its own connections, and storing your results and returning the data are all managed for you. All you have to do is fire off your command and then come back later and say, hey, give me the result. So it is pretty straightforward and simple, and a lot of heavy lifting has been done in the Data API to make sure that your query is executed, your results are securely stored, and they are returned to you when you want them. Now, the Data API also provides integration with EventBridge, which again helps with it being asynchronous, because as your query executes, the Data API can publish events to EventBridge about the status of the query. Based on that, you can find out whether your query has finished or is still in progress. As the query execution changes status, those events are published to EventBridge, and through those notifications you know the status of the query. When you see that your query is finished and your results are ready, you can go back and say, hey, give me the result back.
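To make the asynchronous model just described concrete, here is a minimal sketch of submitting a statement through the Data API with boto3. The call returns only a query ID; the cluster name, secret ARN, and SQL are placeholders, not values from the exercise.

```python
import boto3

# The Data API has its own boto3 client; no JDBC connection or VPC access is needed.
client = boto3.client("redshift-data")

# Submit the SQL statement. The call returns immediately with an Id;
# it does not wait for the query to finish.
response = client.execute_statement(
    ClusterIdentifier="dojo-cluster",            # placeholder cluster name
    Database="dev",
    SecretArn="arn:aws:secretsmanager:...",      # placeholder secret ARN
    Sql="select 1",                              # any SQL statement
    WithEvent=True,  # optionally publish status-change events to EventBridge
)

query_id = response["Id"]  # keep this; you come back later with it to fetch the result
print("Submitted, query ID:", query_id)
```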
Now, another very interesting fact about the Data API: suppose you are calling the Redshift cluster through a Lambda function. Traditionally, since the Redshift cluster is running inside a VPC, you had to attach the Lambda function to that VPC and work with that. Now those overheads are not there anymore. You can use the Data API inside a Lambda function, and you don't need to launch the Lambda function inside a VPC. So that's another beautiful thing; a lot of overhead has been taken away by introducing the Data API. All you have to worry about is writing the lines of code which run the query and which fetch the results. OK, so now a few important factors about the Data API. As I said, when you run the query, it is executed in the background and the results are stored for you to be fetched later. The maximum retention time for which your query result is stored is 24 hours. Yeah, that's one very important factor. The maximum duration of your query is also 24 hours. So if you're running a data warehouse query, it can take a couple of minutes to a couple of hours, but the maximum duration your query can run is 24 hours, not more than that. That's another very important factor. Then, your maximum query result is 100 MB. Now, this could be a little limiting, in my opinion, but in the future this can change. So if you're making a query, your query result should not be more than 100 MB. And your query statement should not be more than 100 KB, which is fine; 100 KB is quite a decent size. But having a query result limit of 100 MB can make it limiting in certain situations, because if you have a query which returns more than 100 MB, you cannot use the Data API. These are the considerations at this point in time, but I'm sure many of these things will change and be enhanced in coming releases. I'm pretty confident about that, but let's see. So how does authentication work? There are two ways your Data API call can authenticate. One very simple way is that you keep your login and password in Secrets Manager, and when calling the API you simply pass the Secrets Manager reference, the secret ARN, and the Data API picks up the credentials for authentication from there. Simple, job done; it can authenticate the call. The second way is that you can also use temporary credentials. As you know, Redshift supports temporary credentials, and in that case you can use one of the Redshift user accounts with temporary credentials to authenticate as well. So those are the two methods available for the Data API to authenticate your call to the Redshift cluster. Now, when you are working with the Data API, you have to do some mapping, because you are writing code in languages like Python and Node.js, and the other end is the Redshift API. So the JDBC data types have to be mapped to the API data types, your programming language data types. There is a table showing how the JDBC data types map to the API data types: for instance, DECIMAL becomes string, BOOLEAN and BIT become Boolean, CLOB becomes string, and the integer types such as INTEGER, SMALLINT and BIGINT become long. Things like that. So, depending on what kind of data type you have in the database, if you are working with it through the Data API, you map your data types using this table. Simple.
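Going back to the two authentication options mentioned above: they show up as different parameters on the same Data API call. A minimal sketch, where every name, ARN, and user is a placeholder rather than a value from the exercise:

```python
import boto3

client = boto3.client("redshift-data")

# Option 1: authenticate with a Secrets Manager secret that holds the cluster credentials.
with_secret = client.execute_statement(
    ClusterIdentifier="dojo-cluster",   # placeholder
    Database="dev",
    SecretArn="arn:aws:secretsmanager:us-east-1:111122223333:secret:dojosecret",  # placeholder
    Sql="select 1",
)

# Option 2: authenticate with temporary credentials for an existing Redshift database user.
with_temp_creds = client.execute_statement(
    ClusterIdentifier="dojo-cluster",   # placeholder
    Database="dev",
    DbUser="awsuser",                   # placeholder database user
    Sql="select 1",
)
```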
So, there are three methods which are key. There are many more methods beyond these three, but these three, I believe, are the key methods for understanding the fundamentals of the Data API. The first method is ExecuteStatement. When you call ExecuteStatement, this is where you say: OK, this is my Redshift cluster, this is my SQL command, this is my authentication method, go and run my command. You submit your SQL statement, and what happens is that you get back a query ID. This is the query ID which you have to keep as a reference for any future calls; whether you want to know the status of your execution or you want to get the result of the execution, you need this query ID. The second method which is important is DescribeStatement. In the DescribeStatement method, you pass this query ID, and DescribeStatement tells you the status of your call, whether it is still in progress, still running, or finished. It can tell you the status of your query execution. OK. And the third one is GetStatementResult, where you again pass the query ID as one of the parameters, and it gives you back the result of the query you have executed. So these three methods take care of your whole query execution. Having understood these fundamentals, this is what we are going to do. We are going to launch a Redshift cluster. Then we will configure the credentials of the Redshift cluster into a Secrets Manager secret. Then we will write a small client application in Cloud9 using Python, which will use the Data API to talk to Redshift. That is what we are going to do. OK. So, in order to implement this, we have built an exercise; this is the URL of the exercise, and we have also provided the URL of the exercise in the description box below. The exercise provides you step-by-step instructions to implement this scenario end to end. I'm going to talk through these exercise steps in detail; go through it to understand how the whole exercise works. And then, of course, this URL is available to you, and you can run the exercise on your own, at your own pace, whenever you are free. OK, so let's jump to the exercise now. Here is the exercise published to our website aws-dojo.com. There are multiple steps involved, which you have to follow in sequence to get the job done. The first step is that you need to have an AWS account, and if you don't have one, you can use this link to create a free trial account. The next step is launching a Redshift cluster. We quickly go and launch the Redshift cluster using mostly the default configurations. So we say, OK, let's go and create a cluster: this is my cluster name, I want to use the free trial version, my database name is dev, my database port is 5439, here is my master user name, and then I give a password. Then we simply launch the cluster. I'm using the default configurations to launch the cluster, because my purpose is simply to have some data in this cluster and use the Data API to call it. A few things to note: this is your cluster name, this is your user name, this is your password, and this is your database. These four configurations you will need later in different steps of the exercise.
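The exercise launches the cluster from the console with the defaults; if you would rather script that step, a rough boto3 equivalent might look like the sketch below. The cluster name, node type, user name, and password are assumptions, not values taken from the exercise.

```python
import boto3

redshift = boto3.client("redshift")

# Launch a small single-node cluster with the same settings called out above:
# database "dev", port 5439, and a master username/password of your choice.
redshift.create_cluster(
    ClusterIdentifier="dojo-cluster",    # placeholder cluster name
    NodeType="dc2.large",                # assumed node type for a small test cluster
    ClusterType="single-node",
    DBName="dev",
    Port=5439,
    MasterUsername="awsuser",            # placeholder
    MasterUserPassword="ChangeMe1234",   # placeholder; use a strong password
)

# The console shows the cluster status; with boto3 you can poll describe_clusters
# until ClusterStatus becomes "available" before moving on.
```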
Now, the next step is to go and insert some data into the Redshift cluster. Since we're trying to run a select query using the Data API in this case, we need some data: we have a dev database, but we don't have any tables. So let's create a table and insert some records into it. In order to do so, we use the built-in query editor for the Redshift cluster. We simply connect to the database using the login name and password, and then we run this simple create table command to create a table called country business index. Again, this is a sample table, guys; it has no real meaning. You can create a table of your own if you want; I simply use this simple structure. After that, we run the first statement in the query editor so that it creates the table, and then we run another statement which simply inserts data into the table. This inserts four rows. I'm creating a very small table just to demonstrate it, but feel free to create a larger table and play with larger volumes here. So this step is very straightforward: simply create the table and create records in your database. Once that is done, we are going to configure Secrets Manager. We need Secrets Manager so that we can store the credentials of the Redshift cluster there, and then use that in the Data API for authentication purposes. In order to do so, we go to Secrets Manager and say, hey, I want to create credentials for a Redshift cluster. We give a username and password, and then we attach it to the cluster which it is going to access. Then we simply go and save it, and we give it a name, dojo secret, in fact. When you save it, make a note of the secret ARN, because this secret ARN is what we are going to pass as one of the parameters to the Data API when authenticating to the Redshift cluster. After that, we have to write the client code. In this case, I have used a SageMaker notebook to write the client, instead of Cloud9. So I simply go and create a SageMaker notebook. But before that, the SageMaker notebook needs a role in order to call the various other services, so we create a role which will be used by the SageMaker notebook. For that, we go to the IAM management console, we say create a role, and this is a role for the SageMaker service. Then we give this role a name, Dojo SageMaker role. By default it gets a permission called AmazonSageMakerFullAccess, which gives it permission to the SageMaker service. But since this notebook is going to make calls to Secrets Manager, and it could also make calls to services like EventBridge if you're using EventBridge, you have to add permissions beyond SageMaker full access. In this case, in order to simplify, I added PowerUserAccess. Just to be very clear, I removed the AmazonSageMakerFullAccess permission and instead gave the role PowerUserAccess, because this notebook is going to call many other services. So I said, OK, let's give it blanket access for now.
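As a side note, the same role could be created with a few lines of boto3 instead of the console. A rough sketch of the same "blanket access" shortcut; the role name is an assumption based on the exercise's "Dojo SageMaker role" naming.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy so the SageMaker service can assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sagemaker.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="dojo-sagemaker-role",  # assumed name, mirroring the exercise
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Blanket access, as in the walkthrough; fine for a throwaway exercise,
# but in production you would attach narrowly scoped policies instead.
iam.attach_role_policy(
    RoleName="dojo-sagemaker-role",
    PolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",
)
```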
But of course, in a production environment you don't want to do it that way; you want to give very specific permissions to the SageMaker notebook if you're using a SageMaker notebook for your development. So, having done that, the role is ready. Now we go and create a SageMaker notebook. We give it a nice name, Dojo Data API Notebook, and we assign it the Dojo SageMaker role, the same role we created earlier, and we launch the notebook. Then we wait till the notebook turns into InService status. It will take some time to create, but we wait till it reaches InService status. Once that happens, we open Jupyter. OK, so we click on this link to open the Jupyter notebook, we open a new notebook file, and we use the conda_python3 kernel. I could use any of the kernels that have Python 3, but in this case we use conda_python3 because I want Python 3 as the runtime. Once the notebook is created, it already has Python 3 and boto3 there. But since I'm using a very new service at the time of creating this exercise, I'm not sure if botocore and boto3 have both been upgraded. So it's probably always a good idea to go and upgrade your botocore and boto3 when you launch a notebook, or even a Cloud9 environment. In this case, I'm running pip install botocore with the upgrade option so that it updates botocore, and I'm also running pip install boto3 with the upgrade option, which upgrades boto3. So I'm upgrading both botocore and boto3. Once you upgrade any package, you have to restart the kernel so that the updates are picked up. So I simply go to the kernel menu, and you can see I'm restarting my kernel so that my runtime environment picks up all the new updates. And that's it, guys; my environment is up and running, ready to write the code. So we simply go and write the code, and here are the lines of code I'm writing to call the Data API. It's pretty straightforward, guys; it's surprisingly simple. What we're doing is creating a Redshift Data client, and then we simply call the execute statement method. In this execute statement call, the parameters are your Redshift cluster name, the database you will run the statement on, and your secret ARN. Remember, we created a secret earlier in Secrets Manager, and this is the ARN for that secret. And then there is your SQL statement, the statement you want to run to fetch the result. You simply run it, and it returns immediately: it submits the statement and gives you back a query ID. So from the response you fetch the Id parameter, the Id key in fact, and that is your query ID. Now, this is the query ID which you have to use to get the status of your SQL statement, or to get the results. There are a lot of different inquiries you can do with this query ID. Next, we call the describe statement method, where we pass this query ID, and it gives back the status of my SQL statement execution. This is a pretty small query, so it hardly takes any time to execute, but I'm running a small loop here, printing the status and waiting till I get the status back as finished. If I get the status back as finished, that means the execution is finished and I'm good to go and fetch the results.
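Putting those notebook steps together, a sketch of the execute-and-poll pattern being described; the cluster name, secret ARN, and SQL statement are placeholders standing in for the exercise's values.

```python
import time
import boto3

client = boto3.client("redshift-data")

# Fire off the statement; the call returns immediately with a query ID.
response = client.execute_statement(
    ClusterIdentifier="dojo-cluster",   # placeholder
    Database="dev",
    SecretArn="arn:aws:secretsmanager:us-east-1:111122223333:secret:dojosecret",  # placeholder
    Sql="select * from countrybusinessindex",  # placeholder table name
)
query_id = response["Id"]
print("Query ID:", query_id)

# Poll DescribeStatement until the query finishes (or fails).
while True:
    desc = client.describe_statement(Id=query_id)
    status = desc["Status"]
    print("Status:", status)
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if status == "FINISHED":
    result = client.get_statement_result(Id=query_id)
    print(result)  # raw result dictionary, as printed in the walkthrough
else:
    print("Query did not finish:", desc.get("Error"))
```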
So I'm simply looping until I get the finished state, and then I'm calling the get statement result method, which simply goes and fetches the records, gets the results. So it's that simple: I go and get the results from the finished statement execution. Now, if you run it, you will see that it starts running and shows the query status first as unknown, then started, then finished. If you have a more complex query, you will see some more statuses here; this was a pretty straightforward and simple query, so I think it had already finished before my next call in the loop. But if you have a long and complex query, you will probably see a few more statuses. What is important here is that once you have the finished status back, that means your query has been executed, and at that point you can start printing your results. I just printed the JSON format of the result, which you can format and, of course, use the way you want. You can see that my data comes back as long values and string values, that kind of thing. And that's where it ends; now we are good, and we have seen how to call the Data API. The last step is to simply go and clean up the resources so that you don't incur any cost. So that was all about the exercise, guys, and I hope you like it. If you do, please click on the like button and please subscribe to my channel to learn about these kinds of exercises and workshops in the coming days. There are many other workshops and exercises which are similar to this, and all of them are published to our website, aws-dojo.com. You can use these workshops and exercises to learn about various AWS services. If you have any feedback or comments, please provide them either in our YouTube channel comment section, or you can click on the contact us button and provide feedback there. I look forward to your feedback. That was all for today. I promise to come back again with some new exercise in the coming days. Meanwhile, have a nice day and thank you very much for watching. Bye bye.
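One closing note on the result handling above: GetStatementResult returns each cell as a typed value object (longValue, stringValue, and so on), which lines up with the data type mapping discussed earlier. A minimal parsing sketch, continuing from the previous code and therefore assuming the same client and query_id variables:

```python
# Continues from the previous sketch: `client` is the redshift-data client
# and `query_id` is the Id returned by execute_statement.
result = client.get_statement_result(Id=query_id)

columns = [col["name"] for col in result["ColumnMetadata"]]

rows = []
for record in result["Records"]:
    row = {}
    for name, cell in zip(columns, record):
        # Each cell is a dict with a single key such as "longValue",
        # "stringValue", "doubleValue", "booleanValue", or "isNull".
        row[name] = None if cell.get("isNull") else next(iter(cell.values()))
    rows.append(row)

print(rows)
```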
