Gaurav-Karu opened this issue 3 years ago
@Gaurav-Karu hi, cpplite is going to be deprecated in favor of the Track2 storage SDK.
@Jinming-Hu: Yes, I am aware of that, as it was conveyed to us by Swadi.Shraddha@microsoft.com. But is the Track2 storage SDK in production?
Note: we have already given dates to the client, so what can I do? We cannot experiment unless the new SDK is in production.
We are switching from azure-iot-sdk-c to azure-cpplite because cpplite has support for a proxy.
blob_client_wrapper doesn't throw exceptions; you'll need to check errno for error information. And blob_client does have a concurrent upload function: https://github.com/Azure/azure-storage-cpplite/blob/be490edaf413dc113d8182cbc7a29140d3a63481/include/blob/blob_client.h#L154
> @Jinming-Hu: Yes, I am aware of that, as it was conveyed to us by Swadi.Shraddha@microsoft.com. But is the Track2 storage SDK in production?
Not yet, but the latest beta8 is considered an RC release, so there won't be many breaking changes between this version and the production (GA) release.
@Jinming-Hu: Is there a way I can recompile cpplite and return errno in another wrapper class, which I can then use in my code to check errno? Using assert is dangerous in production code.
Can you please suggest?
You don't have to use asserts. Just check errno like this:
```cpp
client.create_container(your_container_name);
int ret = errno;
if (ret == 0) {
    // succeeded
} else if (ret == container_already_exists) {
    // the container already exists
}
```
there's a list of error numbers here
Or, if you want to wrap the code above into a function:
```cpp
int create_container_wrapper(const std::string& container_name) {
    client.create_container(container_name);
    return errno;
}
```
Hi, thanks for the update. I have one more query: if the files are of larger size (say 800MB), it will take some time to upload to blob storage. Now, if the user suspends the upload and resumes later, is there such handling?
Why is the blob_client_wrapper object not made accessible directly? And do we need to use only one blob_client_wrapper object to handle upload, download, and delete, or can we create different blob_client_wrapper objects for these? How do we make use of it?
> I have one more query: if the files are of larger size (say 800MB), it will take some time to upload to blob storage. Now, if the user suspends the upload and resumes later, is there such handling?
Nope, there's no way to suspend or resume for now. But there is a workaround: we have the Put Block and Put Block List APIs. You can split that 800MB of data into small chunks and upload one or a few chunks at a time.
> Why is the blob_client_wrapper object not made accessible directly? And do we need to use only one blob_client_wrapper object to handle upload, download, and delete, or can we create different blob_client_wrapper objects? How to make use of it?
blob_client_wrapper is not for public use; it was designed for a specific customer, so the API wasn't well designed. That's why I've been suggesting you take a look at our new Track2 SDK. There's already an RC release, and the API interface is quite stable.
You can use a single client_wrapper for multiple operations, and even perform these operations from multiple threads.
How do I get the size of the committed and uncommitted blocks if I am using the blob_client_wrapper function upload_file_to_blob?
@Gaurav-Karu take a look at this API
What is the alternative for this API in cpplite?
```cpp
/// <summary>
/// Uploads a list of blocks to a new or existing blob.
/// </summary>
/// <param name="block_list">An enumerable collection of block IDs, as Base64-encoded strings.</param>
/// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
/// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
/// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
void upload_block_list(const std::vector<block_list_item>& block_list, const access_condition& condition, const blob_request_options& options, operation_context context)
{
    upload_block_list_async(block_list, condition, options, context).wait();
}
```
Do I need to call upload again, or is there a specific API in cpplite that does something similar?
> Do I need to call upload again, or is there a specific API in cpplite that does something similar?
Sorry, I didn't get what you mean.
What is the alternative for the below in cpplite?
`void upload_block_list(const std::vector`
Yeah, put_block_list.
@Jinming-Hu can I use cpprestsdk with azure-cpplite? I remember the old azure-sdk library supports cpprestsdk (Casablanca).
So can I use cpprest SDK feature APIs like...
I think so, you can use both SDKs side by side.
@Jinming-Hu I need a small confirmation from you. I am using the below APIs for download and upload.
Note: please suggest some points to improve the performance.
@Jinming-Hu Observed crashes when trying to download 8000 packages. After downloading 1000 packages the gateway became slow, and multiple crashes were observed. I used `void blob_client_wrapper::download_blob_to_file(const std::string &container, const std::string &blob, const std::string &destPath, time_t &returned_last_modified, size_t parallel)`.
BT TRACE
```
from /home/gwa/GWA/lib/libazure-storage-lite.so
() from /home/gwa/GWA/lib/libazure-storage-lite.so
at /home/gwauser/opt/jenkins/jobs/GWA-test/258/LinuxEnv/RSPBackendConnectivity/FileTransfer/src/FTCLiteDownload.cpp:181
```
blob_client_wrapper returns void, and if I go with blob_client, for that volume of downloads I cannot achieve concurrency (like parallel upload).
Please suggest; we need to integrate cpplite, as azure-iot-sdk does not have support for a proxy.
Please share some sample code, as sample2.cpp doesn't seem to be enough.