nsupegemini opened this issue 8 months ago
After investigating, I found the issue was elsewhere. PGSync is passing the timeout properly. Please feel free to close this.
@NiraliSupe Sorry to dig this back up, but I figured I would ask how you resolved this before posting another issue. I appear to be having a very similar problem.
In one of my plugins, I was going directly to Elasticsearch to fetch existing records, and the timeout was missing in that plugin. I did go through the pgsync code, and the timeout field is passed to the bulk update function. I needed to set the environment variable ELASTICSEARCH_TIMEOUT. Hope this helps.
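The fix described above (making a plugin's direct Elasticsearch calls honor the same timeout pgsync uses) could be sketched like this. This is a hypothetical helper, not pgsync code; the default value is an assumption, and `get_es_timeout` is an illustrative name:

```python
import os

# Assumed fallback when ELASTICSEARCH_TIMEOUT is not set; pgsync's actual
# default may differ.
DEFAULT_TIMEOUT = 10.0  # seconds


def get_es_timeout():
    """Read the timeout (in seconds) from ELASTICSEARCH_TIMEOUT, or fall
    back to a default, so a plugin's direct Elasticsearch calls stay
    consistent with pgsync's own configuration."""
    raw = os.environ.get("ELASTICSEARCH_TIMEOUT")
    return float(raw) if raw else DEFAULT_TIMEOUT


os.environ["ELASTICSEARCH_TIMEOUT"] = "30"
print(get_es_timeout())  # prints 30.0
```

A plugin would then pass this value as the request timeout on its own Elasticsearch client calls instead of relying on the client's built-in default.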
PGSync version: 2.5.0
Postgres version: 12.10
Elasticsearch version: 7.17.6
Redis version: Redis server v=7.0.11
Python version: Python 3.9.5
Problem Description: It looks like ELASTICSEARCH_TIMEOUT is not used in the bulk update: https://github.com/toluaina/pgsync/blob/95116702c4b314d8b97696ef857cfe116241e236/pgsync/search_client.py#L188
Error Message (if any):
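The pattern the report points at could be sketched as follows: if a bulk call does not explicitly forward the value of ELASTICSEARCH_TIMEOUT as its request timeout, the Elasticsearch client silently falls back to its own default. This is an illustrative sketch, not pgsync's implementation; `build_bulk_kwargs` and the default value are hypothetical:

```python
import os

# Assumed client-side default that applies when no timeout is forwarded.
CLIENT_DEFAULT_TIMEOUT = 10.0  # seconds


def build_bulk_kwargs(extra=None):
    """Build keyword arguments for a hypothetical bulk call, forwarding
    ELASTICSEARCH_TIMEOUT as request_timeout unless the caller already
    supplied one explicitly."""
    kwargs = dict(extra or {})
    raw = os.environ.get("ELASTICSEARCH_TIMEOUT")
    if raw is not None and "request_timeout" not in kwargs:
        kwargs["request_timeout"] = float(raw)
    return kwargs


os.environ["ELASTICSEARCH_TIMEOUT"] = "60"
print(build_bulk_kwargs())  # prints {'request_timeout': 60.0}
```

If the bulk helper omits this forwarding step, setting ELASTICSEARCH_TIMEOUT has no effect on that code path, which matches the symptom described in the issue.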