tlocke / pg8000

A Pure-Python PostgreSQL Driver

Error: Could not resize shared memory segment, no space left on device #87

Closed · bapcyk closed this issue 3 years ago

bapcyk commented 3 years ago
  File "./app.py", line 101, in <module>
    mig.fixing_data()
  File "./app.py", line 76, in fixing_data
    res = pg_run(self.pg, '''
  File "/home/bapcyk/prj/make.py", line 350, in pg_run
    rows = pg.run(sql, **kwargs) or []
  File "/home/bapcyk/prj/.venv/lib64/python3.8/site-packages/pg8000/native.py", line 201, in run
    self._context = self.execute_unnamed(
  File "/home/bapcyk/prj/.venv/lib64/python3.8/site-packages/pg8000/core.py", line 651, in execute_unnamed
    self.handle_messages(context)
  File "/home/bapcyk/prj/.venv/lib64/python3.8/site-packages/pg8000/core.py", line 769, in handle_messages
    raise self.error
pg8000.exceptions.DatabaseError: {'S': 'ERROR', 'V': 'ERROR', 'C': '53100', 'M': 'could not resize shared memory segment "/PostgreSQL.131527914" to 16777216 bytes: No space left on device', 'F': 'dsm_impl.c', 'L': '312', 'R': 'dsm_impl_posix'}
My PostgreSQL configuration (postgresql.conf) is:

    track_io_timing = on
    client_min_messages = log
    log_filename = 'dbtest.log'
    log_destination = 'stderr'
    logging_collector = on
    log_rotation_size = 100MB
    log_line_prefix = '%m [%d %u %r] '
    listen_addresses = '*'
    max_connections = 224
    shared_buffers = 4GB
    effective_cache_size = 12GB
    maintenance_work_mem = 2GB
    checkpoint_completion_target = 0.9
    wal_buffers = 32MB
    default_statistics_target = 500
    random_page_cost = 1.1
    effective_io_concurrency = 300
    work_mem = 32MB
    min_wal_size = 4GB
    max_wal_size = 16GB
    max_worker_processes = 10
    max_parallel_workers_per_gather = 5
    max_parallel_workers = 10
    max_parallel_maintenance_workers = 4

So, to be honest, I don't know whether it's correct to treat this as a pg8000 error (it may be better treated as a request for help). I suspect DBeaver sets some session parameters that avoid the problem, but that's only a guess and I don't know what they are. I tried setting max_parallel_workers to 0, but it did not help. I don't really know the scope of the error.
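
For anyone hitting this, the SQLSTATE in the error dict shows where the failure comes from: class 53 means a server-side resource problem, not a driver problem. Below is a minimal sketch of inspecting it with pg8000, assuming (as the traceback above suggests) that pg8000's DatabaseError carries the server's error fields as a dict in e.args[0], and using hypothetical connection parameters:

    import pg8000.exceptions
    import pg8000.native

    # Hypothetical connection parameters, for illustration only.
    con = pg8000.native.Connection("postgres", password="secret", database="dbtest")

    try:
        con.run("SELECT 1")  # stand-in for the failing query
    except pg8000.exceptions.DatabaseError as e:
        fields = e.args[0]  # server error fields, as shown in the traceback above
        if fields.get("C", "").startswith("53"):  # SQLSTATE class 53: insufficient resources
            print("Server-side resource error:", fields.get("M"))
        else:
            raise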

tlocke commented 3 years ago

Hi @bapcyk, this is an error encountered by PostgreSQL itself, and it looks like the container ran out of shared memory. As far as I can see it's not a bug with pg8000.
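
A commonly suggested remedy is to give the container a larger /dev/shm (with Docker the default is only 64 MB; docker run --shm-size=1g raises it). As a session-level stopgap, disabling parallel query keeps PostgreSQL from allocating the POSIX dynamic shared memory segments named in the error (dsm_impl_posix). A sketch with pg8000, under the same hypothetical connection parameters as above:

    import pg8000.native

    con = pg8000.native.Connection("postgres", password="secret", database="dbtest")

    # Parallel workers exchange tuples through dynamic shared memory segments
    # (POSIX shm under /dev/shm); with no parallel workers, none are created.
    con.run("SET max_parallel_workers_per_gather = 0")
    rows = con.run("SELECT 1")  # stand-in for the failing query, now run serially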

bapcyk commented 3 years ago

@tlocke I checked, and yes, you are right: it's not related to pg8000. Thanks!

bapcyk commented 3 years ago

Not a bug in pg8000: I hit the same error everywhere, including DBeaver (when the row-fetch limit is set to 0).
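
One way to confirm this diagnosis is to check how large /dev/shm actually is inside the database container; the failed resize to 16777216 bytes (16 MiB) in the original error suggests a very small or nearly full mount. A quick check, runnable inside the container:

    import shutil

    # Report capacity and free space of the POSIX shared memory mount.
    usage = shutil.disk_usage("/dev/shm")
    print(f"/dev/shm: total {usage.total / 2**20:.0f} MiB, free {usage.free / 2**20:.0f} MiB")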