bcgov / aries-vcr

Hyperledger Aries Verifiable Credential Registry (VCR) is a set of application level software components designed to accelerate the adoption of trustworthy entity to entity communications.
Apache License 2.0

The application can get into a state where it is not able to receive credentials #426

Closed WadeBarnes closed 4 years ago

WadeBarnes commented 4 years ago

Found in ICOB test environment. Replicated in dev and test.

While posting credentials for a data load, after a few hundred thousand credentials have been loaded, searching for and attempting to verify a credential for a corp causes the agents and/or API (controller) to encounter some sort of error (possibly a timeout) that leaves them in a bad state. The verification eventually fails (Gateway Timeout), and the vast majority of subsequent credential postings fail as well. The only way to recover is to recycle all the agents.

Recycling nothing other than the ICOB agents resolves the issue, even though the API instances are also reporting errors and the ICOB "search" database seems to be involved.

A sample of the problem reports received by the ICIA agents:

Callback: topic= problem_report , message= {'@type': 'did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/notification/1.0/problem-report', '@id': 'fc672eed-ff93-4bae-a1e2-faf84c212c8c', '~thread': {'thid': 'a6adfaf0-0ad7-446e-8545-424279b47bf3'}, 'explain-ltxt': "('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))"}
Callback: topic= problem_report , message= {'@type': 'did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/notification/1.0/problem-report', '@id': '8b9d0314-4308-4aa5-bc51-05e42e9e604f', '~thread': {'thid': 'db8c0514-04a3-413e-9c8e-2d581485c18b'}, 'explain-ltxt': 'could not connect to server: Connection timed out\n\tIs the server running on host "172.50.192.211" and accepting\n\tTCP/IP connections on port 5432?\n'}

The affected database 172.50.192.211 is db-indy-cat (the ICOB "search" database).

Callback: topic= problem_report , message= {'@type': 'did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/notification/1.0/problem-report', '@id': 'b42def54-65a2-4a7e-8da6-350bbcb07276', '~thread': {'thid': '01bb18c6-9ec4-4149-96eb-fbf6b1f8f367'}, 'explain-ltxt': 'FATAL:  remaining connection slots are reserved for non-replication superuser connections\n'}

A sample of the errors reported by the ICOB API (controller) instances:

ERROR 2020-01-10 19:48:05,580 log 1 140058313078528 Internal Server Error: /api/credential/795827/verify
Traceback (most recent call last):
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
    chunked=chunked,
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/site-packages/urllib3/connectionpool.py", line 421, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/site-packages/urllib3/connectionpool.py", line 416, in _make_request
    httplib_response = conn.getresponse()
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/http/client.py", line 1331, in getresponse
    response.begin()
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/http/client.py", line 297, in begin
    version, status, reason = self._read_status()
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/http/client.py", line 266, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/site-packages/urllib3/connectionpool.py", line 720, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/site-packages/urllib3/util/retry.py", line 400, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/site-packages/urllib3/packages/six.py", line 734, in reraise
    raise value.with_traceback(tb)
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
    chunked=chunked,
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/site-packages/urllib3/connectionpool.py", line 421, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/site-packages/urllib3/connectionpool.py", line 416, in _make_request
    httplib_response = conn.getresponse()
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/http/client.py", line 1331, in getresponse
    response.begin()
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/http/client.py", line 297, in begin
    version, status, reason = self._read_status()
  File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/http/client.py", line 266, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
requests.exceptions.ConnectionError: HTTPConnectionPool(host='agent-indy-cat', port=8024): Max retries exceeded with url: /credential_exchange/6a22db5b-84b5-4195-915f-730f6ce6dde0/problem_report (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6280473128>: Failed to establish a new connection: [Errno 110] Connection timed out',))
WadeBarnes commented 4 years ago

Under normal data load conditions, the number of connections the ICOB API (controller) instances have open on the database is around 100, give or take:

==============================================================
Database connection details for db-indy-cat:
--------------------------------------------------------------
 max_conn | used | res_for_super | res_for_normal
----------+------+---------------+----------------
      200 |  113 |             3 |             84

When the application gets into the reported state, the number of connections slowly but steadily grows until the maximum number of connections is exceeded. This is not immediate; it takes several minutes:

==============================================================
Database connection details for db-indy-cat:
--------------------------------------------------------------
 max_conn | used | res_for_super | res_for_normal
----------+------+---------------+----------------
      200 |  203 |             3 |             -6

On the surface, at least, this appears to be more of a symptom rather than a cause.
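
The headroom numbers in the two snapshots above can be sanity-checked with a trivial calculation (a sketch; the function name is made up here, but the arithmetic mirrors the monitoring output's columns):

```python
def normal_connection_headroom(max_conn: int, used: int, reserved_super: int) -> int:
    """Slots still available to normal (non-superuser) clients.

    Mirrors the res_for_normal column in the monitoring output above.
    Once this goes negative, only the superuser-reserved slots remain
    and ordinary clients see "remaining connection slots are reserved"
    errors or outright connection timeouts.
    """
    return max_conn - used - reserved_super

# Healthy snapshot: 200 max, 113 used, 3 reserved for superusers
print(normal_connection_headroom(200, 113, 3))  # 84
# Degraded snapshot: connections have leaked past the limit
print(normal_connection_headroom(200, 203, 3))  # -6
```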

Once the number of database connections is exceeded, the database pod's health checks fail and the pod is taken offline. This in turn causes the following errors affecting the API (controller) instances;

ERROR 2020-01-10 20:54:17,568 views 1 139881615439616 could not connect to server: Connection timed out
	Is the server running on host "172.50.192.211" and accepting
	TCP/IP connections on port 5432?

The database being offline affects the API (controller) pod instances' health checks, which causes them to start being taken offline too.

WadeBarnes commented 4 years ago

Simply recycling all of the ICOB agent instances (and only the ICOB agent instances), even while ICIA (the credential issuer) is still posting credentials at full speed, resolves the issue and ICOB starts receiving credentials successfully again.

WadeBarnes commented 4 years ago

The initial request to verify a credential seems to trigger the state.

If no credentials are being posted, and an attempt to verify a credential through the ICOB UI fails (times out), any subsequent credential posting operations will also fail. ICOB appears to receive the credentials but does not appear to respond to the issuer.

WadeBarnes commented 4 years ago

Here is the interaction between a single instance of the api (controller) and agent during a credential verification request:

API:


DEBUG 2020-01-10 21:36:19,245 routing 1 140017880004352 Processing request headers to detect requested API version.
DEBUG 2020-01-10 21:36:19,245 routing 1 140017880004352 Processing URL to detect requested API version.
DEBUG 2020-01-10 21:36:19,245 routing 1 140017880004352 No version override detected, will be using 'default
DEBUG 2020-01-10 21:36:19,245 routing 1 140017880004352 Resolved API url is /api/v2/credential/1609524/formatted
INFO 2020-01-10 21:36:19,552 helpers 1 140019881105152 172.51.92.1 [10/Jan/2020:21:36:19 +0000] "GET /api/credential/1609524/formatted HTTP/1.0" 200 8437 "https://orgbook-test.pathfinder.gov.bc.ca/en/organization/registration.registries.caFM0225294/cred/1609524/verify " "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36"
DEBUG 2020-01-10 21:36:19,696 routing 1 140018920175360 Processing request headers to detect requested API version.
DEBUG 2020-01-10 21:36:19,696 routing 1 140018920175360 Processing URL to detect requested API version.
DEBUG 2020-01-10 21:36:19,696 routing 1 140018920175360 No version override detected, will be using 'default
DEBUG 2020-01-10 21:36:19,696 routing 1 140018920175360 Resolved API url is /api/v2/credential/1609524/verify
DEBUG 2020-01-10 21:36:19,751 connectionpool 1 140018920175360 Starting new HTTP connection (1): agent-indy-cat:8024

Agent:

2020-01-10 21:36:19,758 indy.non_secrets DEBUG open_wallet_search: >>> wallet_handle: 2, type_: 'connection', query_json: '{}', options_json: '{"retrieveRecords": true, "retrieveTotalCount": false, "retrieveType": false, "retrieveValue": true, "retrieveTags": false}'
2020-01-10 21:36:19,759 indy.libindy DEBUG do_call: >>> name: indy_open_wallet_search, args: (c_int(2), c_char_p(140536837584032), c_char_p(140536132206832), c_char_p(140536133785712), <CFunctionType object at 0x7fd11e58ea70>)
2020-01-10 21:36:19,759 indy.libindy DEBUG do_call: Function indy_open_wallet_search returned err: 0
2020-01-10 21:36:19,759 indy.libindy DEBUG do_call: <<< <Future pending>

After a period of time the Agent encounters the following error:


2020-01-10 21:40:29,100 aries_cloudagent.transport.outbound.manager ERROR Outbound message could not be delivered
Traceback (most recent call last):
  File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/aries_cloudagent/transport/outbound/http.py", line 58, in handle_message
    endpoint, data=payload, headers=headers
  File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/aiohttp/client.py", line 1005, in __aenter__
    self._resp = await self._coro
  File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/aiohttp/client.py", line 497, in _request
    await resp.start(conn)
  File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/aiohttp/client_reqrep.py", line 857, in start
    self._continue = None
  File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/aiohttp/helpers.py", line 585, in __exit__
    raise asyncio.TimeoutError from None
concurrent.futures._base.TimeoutError
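
The TimeoutError at the bottom of that trace is the agent's outbound HTTP client giving up while the far end holds the connection open without answering. The mechanism can be sketched with just the standard library (the names here are hypothetical stand-ins; the real code path is aiohttp inside aries_cloudagent, and on Python 3.6 asyncio.TimeoutError is an alias of concurrent.futures._base.TimeoutError, which is why that name appears in the log):

```python
import asyncio

async def stalled_endpoint():
    # Stand-in for a controller that accepts the request but never responds.
    await asyncio.sleep(3600)

async def deliver_outbound():
    try:
        # aiohttp raises asyncio.TimeoutError in the same way when its
        # client timeout elapses before a response arrives.
        await asyncio.wait_for(stalled_endpoint(), timeout=0.05)
        return "delivered"
    except asyncio.TimeoutError:
        return "Outbound message could not be delivered"

print(asyncio.run(deliver_outbound()))  # Outbound message could not be delivered
```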
WadeBarnes commented 4 years ago

Here is the same interaction in dev where the verification request completes successfully. dev and test are running the same version of code and are using the same configurations (with appropriate settings for each environment). Data in dev was last loaded 2019-08-01, data in test was last loaded over the last few days.

API:

2020-01-10 22:12:36,378 indy.non_secrets DEBUG open_wallet_search: >>> wallet_handle: 2, type_: 'connection', query_json: '{}', options_json: '{"retrieveRecords": true, "retrieveTotalCount": false, "retrieveType": false, "retrieveValue": true, "retrieveTags": false}'
2020-01-10 22:12:36,378 indy.libindy DEBUG do_call: >>> name: indy_open_wallet_search, args: (c_int(2), c_char_p(139903594278064), c_char_p(139902853942840), c_char_p(139902854978128), <CFunctionType object at 0x7f3dabfd1a70>)
2020-01-10 22:12:36,378 indy.libindy DEBUG do_call: Function indy_open_wallet_search returned err: 0
2020-01-10 22:12:36,378 indy.libindy.native.indy.commands DEBUG     src/commands/mod.rs:158 | NonSecretCommand command received
2020-01-10 22:12:36,379 indy.libindy DEBUG do_call: <<< <Future pending>
2020-01-10 22:12:36,379 indy.libindy.native.non_secrets_command_executor DEBUG  src/commands/non_secrets.rs:105 | OpenSearch command received
2020-01-10 22:12:36,399 indy.libindy.native.postgres DEBUG  /home/indy/.cargo/registry/src/github.com-1ecc6299db9ec823/postgres-0.15.2/src/lib.rs:901 | executing query: 
2020-01-10 22:12:36,412 indy.libindy.native.postgres DEBUG  /home/indy/.cargo/registry/src/github.com-1ecc6299db9ec823/postgres-0.15.2/src/lib.rs:901 | executing query: 
2020-01-10 22:12:36,412 indy.libindy.native.postgres DEBUG  /home/indy/.cargo/registry/src/github.com-1ecc6299db9ec823/postgres-0.15.2/src/lib.rs:489 | preparing query with name `s0`: SELECT i.id, i.name, i.value, i.key, i.type FROM items as i WHERE i.type = $1
2020-01-10 22:12:36,414 indy.libindy.native.postgres DEBUG  /home/indy/.cargo/registry/src/github.com-1ecc6299db9ec823/postgres-0.15.2/src/lib.rs:608 | executing statement s0 with parameters: [[67, 120, 121, 103, 80, 65, 78, 65, 119, 43, 112, 56, 67, 49, 100, 84, 109, 48, 84, 66, 77, 50, 85, 111, 75, 55, 97, 70, 53, 54, 116, 119, 55, 104, 110, 53, 78, 65, 119, 50, 57, 82, 81, 77, 99, 70, 115, 72, 85, 88, 119, 61]]
2020-01-10 22:12:36,419 indy.libindy DEBUG _indy_callback: >>> command_handle: 101, err , args: (26,)
2020-01-10 22:12:36,419 indy.libindy DEBUG _indy_callback: <<<
2020-01-10 22:12:36,419 indy.libindy DEBUG _indy_loop_callback: >>> command_handle: 101, err , args: (26,)
2020-01-10 22:12:36,419 indy.libindy DEBUG _indy_loop_callback: Function returned 26
2020-01-10 22:12:36,419 indy.libindy DEBUG _indy_loop_callback <<<
2020-01-10 22:12:36,419 indy.non_secrets DEBUG open_wallet_search: <<< res: 26
...

FullAgentLog.txt

WadeBarnes commented 4 years ago

Verified the dev and test environments are running the same images. Reset the test environments. Reissued a few credentials. Tested credential verification on a handful of corps; all succeeded. Going to let the credential posting process run for a while and test again later.

WadeBarnes commented 4 years ago

Renamed, since testing indicated the issue had nothing to do with load.

WadeBarnes commented 4 years ago

After resetting the dev environment, this issue is now occurring in dev after posting 337,164 credentials.

WadeBarnes commented 4 years ago

After resetting and loading 550,377 credentials into the test environment it is now having the same issue.

WadeBarnes commented 4 years ago

The issue appears to be related to the number of credentials in the wallet; it surfaces once there are more than a few hundred thousand credentials.

ianco commented 4 years ago

I suspect it's related to query performance. The indy-sdk/wallet generates a very complicated query in response to the TOB proof request, for example:

SELECT i.id, i.name, i.value, i.key, i.type 
FROM items as i 
WHERE i.type = $1 
  AND ((i.id in (SELECT item_id FROM tags_encrypted WHERE name = $2 AND value = $3)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $4 AND value = $5)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $6 AND value = $7)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $8 AND value = $9)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $10 AND value = $11)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $12 AND value = $13)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $14 AND value = $15)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $16 AND value = $17)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $18 AND value = $19)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $20 AND value = $21)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $22 AND value = $23)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $24 AND value = $25)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $26 AND value = $27)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $28 AND value = $29)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $30 AND value = $31)) 
        AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $32 AND value = $33)) 
        AND (((i.id in (SELECT item_id FROM tags_encrypted WHERE name = $34 AND value = $35)) 
               AND (i.id in (SELECT item_id FROM tags_encrypted WHERE name = $36 AND value = $37)))
            )
        )

(There is a sub-clause for each claim in the credential)
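
The shape of that query is mechanical: one nested IN subquery per encrypted-tag constraint, all ANDed together. A rough Python reconstruction (illustrative only; the real generator is the Rust WQL-to-SQL translation in indy-sdk, and `build_tag_query` is a made-up name) shows how the placeholder numbers climb two at a time:

```python
def build_tag_query(constraint_count: int) -> str:
    """Illustrative reconstruction of the wallet's generated SQL.

    $1 is reserved for the item type; each tag constraint then
    consumes two placeholders (name, value) in its own IN subquery.
    """
    clauses = []
    param = 2
    for _ in range(constraint_count):
        clauses.append(
            "(i.id in (SELECT item_id FROM tags_encrypted"
            f" WHERE name = ${param} AND value = ${param + 1}))"
        )
        param += 2
    return (
        "SELECT i.id, i.name, i.value, i.key, i.type FROM items as i"
        " WHERE i.type = $1 AND (" + " AND ".join(clauses) + ")"
    )

# The query above has 18 tag sub-clauses, consuming $2 through $37:
sql = build_tag_query(18)
print(sql.count("SELECT item_id"))  # 18
print("name = $36 AND value = $37" in sql)  # True
```

Each added claim therefore costs another correlated subquery against tags_encrypted, which is why the query grows so quickly with credential schema size.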

ianco commented 4 years ago

I found updating the wallet indexes improved the query on my local:

drop index ix_tags_encrypted_name;
drop index ix_tags_encrypted_value;
CREATE INDEX ix_tags_encrypted_name_value ON public.tags_encrypted USING btree (name, value);
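
One quick way to confirm the combined (name, value) index is actually picked up for the `name = ? AND value = ?` sub-clauses is to inspect the query plan. This sketch uses an in-memory SQLite database with a simplified stand-in schema (the wallet here runs on Postgres, where EXPLAIN output differs, but the principle is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript(
    """
    -- Simplified stand-in for the wallet's tags_encrypted table.
    CREATE TABLE tags_encrypted (item_id INTEGER, name BLOB, value BLOB);
    CREATE INDEX ix_tags_encrypted_name_value
        ON tags_encrypted (name, value);
    """
)
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT item_id FROM tags_encrypted WHERE name = ? AND value = ?",
    (b"tag", b"val"),
).fetchall()
plan_text = " ".join(row[-1] for row in plan)
print(plan_text)  # the plan should mention ix_tags_encrypted_name_value
```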
swcurran commented 4 years ago

Which makes sense based on the proof request we're doing. I'm guessing our proof includes the restriction that all claims in the credential match a specific value.

Didn't we agree that we only need to check the cardinality values and get the latest of those values?


swcurran commented 4 years ago

That's cool. Did those indices exist before?


ianco commented 4 years ago

@swcurran:

  1. I don't recall specifically; however, that would be a good optimization. I'd need to check the code to see what changes this would require in VCR (@nrempel FYI).

  2. The name and value indexes existed individually. I don't think the value index is ever used (do you ever query a tag just by value?), and the name_value index covers both name alone and the combination of name and value.

ianco commented 4 years ago

With about 60,000 credentials, the change to the database index makes no difference in query performance. However, I can see that the query gets slower linearly as I add more credentials.

I'll try the update that @swcurran suggested: modifying the proof request to include only the cardinality credentials and then verifying the values in the returned proof. This will simplify the query.

ianco commented 4 years ago

Bad news: the proof request is already optimized as @swcurran described above; the only restrictions are based on source_id, cardinality and issue_date. For example:

proof_request
{'version': '1.0',
 'name': 'self-verify',
 'requested_predicates': {},
 'requested_attributes': {
     'self-verify-proof': {
         'names': ['entity_status_effective', 'entity_name_effective', 'registered_jurisdiction',
                   'entity_name_assumed', 'effective_date', 'entity_name_trans', 'entity_status',
                   'reason_description', 'home_jurisdiction', 'entity_type', 'entity_name',
                   'entity_name_trans_effective', 'entity_name_assumed_effective',
                   'extra_jurisdictional_registration', 'registration_expiry_date', 'registration_id',
                   'expiry_date', 'registration_date', 'registration_renewal_effective'],
         'restrictions': [{'attr::effective_date::value': '2002-05-10T07:00:00+00:00',
                           'attr::registration_id::value': 'FM0322987'}]}}}

I'll take a look in the sdk to see how we're getting such a complex query from this proof request.

ianco commented 4 years ago

... it's doing an "exists" on each requested attribute (which is a tag sub-clause) PLUS a query against each constraint (which is another tag sub-clause).

The above proof request results in a query with 21 sub-clauses against the tags_encrypted table.
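
The count can be checked directly against the proof request above: 19 requested attribute names (one "exists" sub-clause each) plus 2 restriction values:

```python
# Copied from the proof request above.
requested_names = [
    'entity_status_effective', 'entity_name_effective', 'registered_jurisdiction',
    'entity_name_assumed', 'effective_date', 'entity_name_trans', 'entity_status',
    'reason_description', 'home_jurisdiction', 'entity_type', 'entity_name',
    'entity_name_trans_effective', 'entity_name_assumed_effective',
    'extra_jurisdictional_registration', 'registration_expiry_date', 'registration_id',
    'expiry_date', 'registration_date', 'registration_renewal_effective',
]
restrictions = {
    'attr::effective_date::value': '2002-05-10T07:00:00+00:00',
    'attr::registration_id::value': 'FM0322987',
}
# One tag sub-clause per requested attribute plus one per restriction.
print(len(requested_names) + len(restrictions))  # 21
```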

swcurran commented 4 years ago

Which is what I would expect it to do. I don't think it can do anything else. Perhaps an "in" clause?


ianco commented 4 years ago

I can re-organize the query to make it more efficient (move the value constraints first, and the "exists" constraints last). It's going to involve some tweaking in the sdk, because the whole thing is built as a WQL query which is then converted to SQL.

swcurran commented 4 years ago

I'm guessing you are running the query in the environment to see what works and what doesn't? I wouldn't want you messing with the SDK without seeing what makes postgres happy.

Is that a Dilbert's boss question?


ianco commented 4 years ago

Yes I have some queries that I'm playing with.

If I move the "value" constraints up to the beginning of the query, it runs in about 3 msec on my local, vs about 60 msec for the sdk-generated query.

I'll do a bit of testing to see how it scales before I start to muck with the sdk.

ianco commented 4 years ago

Indy-sdk PR is: https://github.com/hyperledger/indy-sdk/pull/2039