ndebello opened 1 year ago
The process in the DB results in a success status.
We have tried to check other processes through the API service, but we could not replicate your situation.
To run a specific test we need to impersonate your user, so we will change your user's password to test the API.
Can we run this test on Monday morning? You will then need to reset your user's password after our tests.
Monday morning is fine.
Thanks
Nicola
We have checked your "process_id": "97a12cb1-7f69-4a23-91d3-b4c90ef63a22" through the API, impersonating your user, but we could not replicate your situation.
Checking the API:
curl -X GET 'https://dev-api.commercio.app/v1/sharedoc/process/97a12cb1-7f69-4a23-91d3-b4c90ef63a22' \
--header 'Accept: */*' \
--header 'Authorization: Bearer <AUTH_TOKEN>'
We get this reply
{
"process_id": "97a12cb1-7f69-4a23-91d3-b4c90ef63a22",
"sender": "did:com:1kjsr53jmt52x57g5pd5ma4fsmex43vtrknmrv6",
"receivers": [
"did:com:1qu88tsglfxqsjd9l6em9vwvmsasm4tp2pxxwrv"
],
"document_id": "f6169c38-2f81-4001-bd17-16a7addc97a3",
"doc_hash": "3cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
"doc_hash_alg": "sha-256",
"doc_tx_hash": "5882337EBB875B295852494A5571C6E1EF72AA80EB478B5DD4651317CF317693",
"tx_timestamp": "2023-02-24T15:57:06Z",
"tx_type": "commercionetwork.commercionetwork.documents.MsgShareDocument",
"doc_storage_uri": "ccc",
"doc_metadata": {
"content_uri": "http://example.com/metadata",
"schema": {
"uri": "http://example.com/schema.xml",
"version": "1.0.0"
}
},
"chain_id": "commercio-testnet11k",
"timestamp": "2023-02-24T15:57:00Z",
"status": "success",
"back_url": "http://example.com/callback"
}
I tried again a few minutes ago and it moved from status=processing to status=success at the 27th GET retry, even though the doc_tx_hash field had already appeared in the first retries. HASH C594CE067B651F44B2BE45AE1F4737022F2895C645972C3600F8302B9A845DA1 ProcessID 06fb1c4a-5525-4f05-bf63-0e1bb2b80a6a
Premise: the hash of the transaction is available before it is broadcast and elaborated, and it is not certain that it will be correctly inserted in a block.
Thus you receive the hash first, and the status of the process is set to "success" only when the block has been elaborated by the chain and the platform has verified its presence in the chain store.
It is therefore correct to have the hash while the process is still in status = "processing".
The time interval of your 27 GET requests is not clear. Take into consideration that the chain elaborates a block every 6 seconds.
Broadcasting the message to the chain and verifying its presence after the block has been elaborated takes time, so it is important to space out the GET requests for a specific process id and avoid overloading the API endpoint with repetitive requests for the same process.
Please increase the interval as you requeue the check for the process, e.g. 1 s, 1.6 s, 2.5 s, 4 s, 8 s, etc.
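The suggested intervals are roughly geometric. A minimal sketch (Python, names and the growth factor of ~1.6 with an 8 s cap are illustrative assumptions, not part of the platform) for generating such a schedule:

```python
def backoff_schedule(retries, base=1.0, factor=1.6, cap=8.0):
    """Geometric backoff delays in seconds, capped at `cap`."""
    delays = []
    d = base
    for _ in range(retries):
        delays.append(round(min(d, cap), 1))
        d *= factor  # grow the delay for the next retry
    return delays

print(backoff_schedule(6))  # → [1.0, 1.6, 2.6, 4.1, 6.6, 8.0]
```

Sleeping for each returned value between GET requests spreads 6 retries over ~24 s, which comfortably covers several 6-second block intervals.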
> Is not clear the time interval for your 27 Get requests
I performed a GET every 3 seconds, on a fixed tick.
Now (following your suggestion) I use something like:
import 'dart:math';

int maxRetries = 50;
int myTick = 1;
for (int i = 0; i < maxRetries; i++) {
  try {
    myTick = myTick + (log(i + 1)).floor();
    await Future.delayed(Duration(seconds: myTick));
    // . . . . .
  }
  // . . . . .
}
So far it has worked perfectly. I will run more tests to be sure.
I performed many tests, the last of which had processID=97a12cb1-7f69-4a23-91d3-b4c90ef63a22.
The very same piece of software doesn't have this problem on the mainnet.
Full log available if needed.
UPDATE: it seems that it does not move from "processing" to "success" (or from "enqueued" to "success") even when the "doc_tx_hash" field appears in the response.
Here is the log of two consecutive API call responses (after which it never moves out of "processing"):
enqueued 200 {"process_id":"68dff409-0905-4af4-bfc8-dd22b8a1dd5a","sender":"did:com:1kjsr53jmt52x57g5pd5ma4fsmex43vtrknmrv6","receivers":["did:com:1qu88tsglfxqsjd9l6em9vwvmsasm4tp2pxxwrv"],"document_id":"f2cfec2b-57a8-4a39-8dfc-e4e100a7be0c","doc_hash":"3cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824","doc_hash_alg":"sha-256","tx_type":"commercionetwork.commercionetwork.documents.MsgShareDocument","doc_storage_uri":"ccc","doc_metadata":{"content_uri":"http://example.com/metadata","schema":{"uri":"http://example.com/schema.xml","version":"1.0.0"}},"timestamp":"2023-02-24T16:22:43Z","status":"enqueued","back_url":"http://example.com/callback"}
processing 200 {"process_id":"68dff409-0905-4af4-bfc8-dd22b8a1dd5a","sender":"did:com:1kjsr53jmt52x57g5pd5ma4fsmex43vtrknmrv6","receivers":["did:com:1qu88tsglfxqsjd9l6em9vwvmsasm4tp2pxxwrv"],"document_id":"f2cfec2b-57a8-4a39-8dfc-e4e100a7be0c","doc_hash":"3cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824","doc_hash_alg":"sha-256","doc_tx_hash":"A69ADE15FC758D071D1B218C6E7FAA2E28B6E61BF6E4BC91A7F10AFA9BEB53DE","tx_type":"commercionetwork.commercionetwork.documents.MsgShareDocument","doc_storage_uri":"ccc","doc_metadata":{"content_uri":"http://example.com/metadata","schema":{"uri":"http://example.com/schema.xml","version":"1.0.0"}},"timestamp":"2023-02-24T16:22:43Z","status":"processing","back_url":"http://example.com/callback"}
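Given the behaviour described earlier in the thread, the presence of doc_tx_hash alone does not mean the document is on chain; only the status field signals completion. A hedged client-side sketch (Python; the field names are taken from the JSON responses above, the function name is illustrative):

```python
def process_state(resp: dict) -> str:
    """Classify a /v1/sharedoc/process response body.

    doc_tx_hash can appear while the block is still being elaborated,
    so only status == "success" counts as completion.
    """
    status = resp.get("status")
    if status == "success":
        return "done"
    if status in ("enqueued", "processing"):
        return "retry"  # keep polling, with increasing delays
    return "error"

# The second logged response: doc_tx_hash is present, status is "processing".
resp = {"doc_tx_hash": "A69ADE15FC758D07...", "status": "processing"}
print(process_state(resp))  # → retry
```

A loop built on this would keep retrying here rather than stopping as soon as the hash shows up.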