Kailai-Wang closed this issue 1 year ago
ToDo:
- [ ] change the single query to a multi query each time
- [ ] check with TDF about their maximum vector length for one input parameter
I wrote some test cases about create_identity and request_vc, and the problems I found so far are as follows:
2023-03-22T02:36:01Z ERROR lc_assertion_build::a4 [BuildAssertion] A4, Request, RequestError("HttpReqError(IO(Os { code: 11, kind: WouldBlock, message: \"Resource deadlock avoided\" }))")
thread '<unnamed>' panicked at 'not yet implemented', /home/ztgx/codespace/litentry/litentry-parachain/tee-worker/litentry/core/data-providers/src/graphql.rs:68:42
fatal runtime error: failed to initiate panic, error 5
The Rococo network should be added.
Thanks! I'm adding this in my latest PR already - but we still need support from TDF colleagues. See #1469
The addresses currently have no limit.
Context
We are currently doing it sequentially. Imagine a user who has linked 20 identities and requests a VC over a very poor network - this could easily become a problem and block subsequent requests.
A simple solution is to send all the identities in one vector in a single request. Should we go for it? Please bear in mind that we need to "measure before we optimise" to avoid optimising prematurely: we should be able to answer these questions: how long does it take to generate a VC with 20+ identities? What about 50 identities? What about under a throttled network?
:heavy_check_mark: Please set appropriate labels and assignees if applicable.