Problem Description:
Hi @toluaina, I have a large dataset of about 600,000 entries. With the SQL joins it takes a long time to sync, and even with QUERY_SIZE set to 10,000 the sync fails after roughly 100,000 entries, at which point the system goes down completely.
Looking at `htop`, the Python process is using the full 32 GB of memory. Is there a way to sync in chunks, e.g. the first 100,000 entries, then the next 100,000, and so on?
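Just to make the chunking idea concrete, what I'm imagining is something along these lines (a hypothetical plain-Python sketch with example table and index names, not PGSync's actual sync code), where each keyset-paginated batch is fetched, bulk-indexed, and released before the next one is loaded, so memory stays bounded at one chunk:

```python
# Hypothetical sketch: keyset-paginated sync loop that bounds memory
# by holding only one chunk of rows at a time. The table name
# "entries" and index name "entries" are examples, not PGSync internals.
import psycopg2
from elasticsearch import Elasticsearch, helpers

CHUNK_SIZE = 10_000

def sync_in_chunks(dsn: str, es: Elasticsearch) -> None:
    conn = psycopg2.connect(dsn)
    last_id = 0
    try:
        with conn.cursor() as cur:
            while True:
                # Keyset pagination: at most CHUNK_SIZE rows are ever
                # in memory, and unlike OFFSET-based paging each query
                # stays cheap no matter how far into the table we are.
                cur.execute(
                    """
                    SELECT id, payload
                    FROM entries          -- example table name
                    WHERE id > %s
                    ORDER BY id
                    LIMIT %s
                    """,
                    (last_id, CHUNK_SIZE),
                )
                rows = cur.fetchall()
                if not rows:
                    break
                # Bulk-index this chunk, then let it go out of scope
                # before fetching the next one.
                helpers.bulk(
                    es,
                    (
                        {
                            "_index": "entries",  # example index name
                            "_id": row_id,
                            "_source": {"payload": payload},
                        }
                        for row_id, payload in rows
                    ),
                )
                last_id = rows[-1][0]  # resume after the last synced id
    finally:
        conn.close()
```

Even a config option that achieves something like this internally would solve my problem.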
PGSync version: Latest
Postgres version: 14
Elasticsearch version: 8+
Redis version: 7+
Python version: 3.11.4
Error Message (if any):