While testing in this specific environment, I encountered an unexpected error that does not occur with the Docker version created by @laceysanderson:
Tripal Job Launcher
Running as user 'admin'
-------------------
2021-05-17 16:50:35: Job ID 44.
2021-05-17 16:50:35: Calling: tripal_chado_drop_schema(chado)
[error] Message: Job execution failed: SQLSTATE[53200]: Out of memory: 7 ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.: drop schema chado cascade; Array
(
)
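For context, a hedged explanation (my inference, not stated in the original report): `drop schema chado cascade` must take a lock on every object in the schema, and the Chado schema contains several hundred tables, so the shared lock table can overflow. The number of objects involved can be checked with psql; the catalogs queried are standard PostgreSQL, but the `sudo -u postgres` invocation is an assumption about the local setup:

```shell
# Count the objects that DROP SCHEMA chado CASCADE would need to lock.
# (pg_class and pg_namespace are standard PostgreSQL system catalogs.)
sudo -u postgres psql -c \
  "SELECT count(*) FROM pg_class c
     JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = 'chado';"

# The shared lock table holds roughly
# max_locks_per_transaction * (max_connections + max_prepared_transactions)
# slots, so a large schema can exhaust it in a single transaction.
```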
Resolution:
Edit the postgresql.conf file within the PostgreSQL data directory and adjust max_locks_per_transaction:
- With the setting commented out (the default in this setup), max_locks_per_transaction was 10.
- Uncommenting the line and setting max_locks_per_transaction = 64 produced the same error.
- Setting max_locks_per_transaction = 128 resolved the issue.
NOTE: You must restart PostgreSQL after making this change.
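As a sketch, the steps above might look like the following on a CentOS 7 / PostgreSQL 10 install; the data-directory path and service name are assumptions that vary by platform:

```shell
# Path is typical for PostgreSQL 10 on CentOS 7 -- confirm yours with:
#   sudo -u postgres psql -c "SHOW data_directory;"
sudo sed -i \
  's/^#\?max_locks_per_transaction.*/max_locks_per_transaction = 128/' \
  /var/lib/pgsql/10/data/postgresql.conf

# The setting only takes effect after a full restart (a reload is not enough):
sudo systemctl restart postgresql-10

# Verify the new value is active:
sudo -u postgres psql -c "SHOW max_locks_per_transaction;"
```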
It would definitely be good to have a troubleshooting section in the docs with tips like these! This tip applies to all environments, not just the specific one mentioned by @risharde.
Originally contributed by @risharde:
Environmental details:
Virtualization: Oracle VirtualBox
OS: CentOS 7 (64-bit)
Memory allocation: 2 GB
Drupal: 9.1.8
Tripal: 4.x
Branch: 4-tv4-tripal_importer
PostgreSQL: 10
PHP: 7.2