agilord / postgres_pool

Postgresql connection pool with a wide range of automated reconnect parameters in Dart.
https://pub.dev/packages/postgres_pool
BSD 3-Clause "New" or "Revised" License

PostgreSQLSeverity.unknown 53200: out of memory Detail: Failed on request of size 360 in memory context "CacheMemoryContext". #6

Open bubnenkoff opened 3 years ago

bubnenkoff commented 3 years ago
Unhandled exception:
PostgreSQLSeverity.unknown 53200: out of memory Detail: Failed on request of size 360 in memory context "CacheMemoryContext".
===== asynchronous gap ===========================
package:postgres_pool/postgres_pool.dart 322  PgPool.run.<fn>
package:retry/retry.dart 131                  RetryOptions.retry
package:postgres_pool/postgres_pool.dart 320  PgPool.run
bin\parser_middleware.dart 53                 main.<fn>
unparsed                                      <asynchronous suspension>
void main() async {
  pg = PgPool(
    PgEndpoint(
      host: 'localhost',
      port: 5432,
      database: 'db',
      username: 'postgres',
      password: '123',
    ),
    settings: PgPoolSettings()
      ..maxConnectionAge = Duration(hours: 1)
      ..concurrency = 4,
  );

  //...
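Side note: the "CacheMemoryContext" named in the error holds per-backend caches (catalog entries, cached plans), so it tends to grow with a connection's lifetime. A rough sketch of recycling connections more aggressively, assuming PgPoolSettings also exposes maxSessionUse and maxQueryCount as listed in the package README:

// Hypothetical tuning: shorter-lived connections bound per-backend cache growth.
final recyclingPool = PgPool(
  PgEndpoint(
    host: 'localhost',
    port: 5432,
    database: 'db',
    username: 'postgres',
    password: '123',
  ),
  settings: PgPoolSettings()
    ..maxConnectionAge = Duration(minutes: 15) // recycle sooner than 1 hour
    ..maxSessionUse = Duration(minutes: 5) // assumed setting from the README
    ..maxQueryCount = 1000 // assumed setting: retire a connection after N queries
    ..concurrency = 4,
);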

The possible error location:

Future<dynamic> getListOfFilesForProcessing(Map body) async {
  try {
    final f = pg.run((c) async {
      final rs = await c.query(body['sql']);
      // print("getListOfFilesForProcessing: ${rs} ");
      return rs;
    });
    return f;
  } on PostgreSQLException catch (e) {
    writeLog('getListOfFilesForProcessing', e.message);
  }
}
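One thing worth noting about the snippet above: the future from pg.run is returned without await, so a PostgreSQLException thrown while the query runs does not pass through this try/catch and instead surfaces as the unhandled exception in the stack trace. A minimal awaited variant:

Future<dynamic> getListOfFilesForProcessing(Map body) async {
  try {
    // `return await` keeps the failure inside this try/catch.
    return await pg.run((c) => c.query(body['sql']));
  } on PostgreSQLException catch (e) {
    writeLog('getListOfFilesForProcessing', e.message);
  }
}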
isoos commented 3 years ago

@bubnenkoff: this seems to be an out-of-memory error on the PostgreSQL server, or in the process running the query. Could you please check the server logs for it?

bubnenkoff commented 3 years ago

I am seeing too many PG instances: [screenshot]

But there is nothing special in PostgreSQL\13\data\log.

isoos commented 3 years ago

What's the query you are running? Does it take a long time to complete? What if you run it through psql?

bubnenkoff commented 3 years ago

50% are update-by-ID queries. The most complex part is:

  Future<dynamic> sqlInsert(Map body) async {
    dbEnums isDataWasInserted = dbEnums.insertFailed; // success if at least one attempt succeeds

    try {
      await connection.transaction((ctx) async {
        // print(body['sql']);
        for (var s in jsonDecode(body['sql'])) {
          // print("sql insert: $s");
          await ctx.query(s);
        }
        isDataWasInserted = dbEnums.success;
        // print("INSERT SUCCESS\n");
      }).timeout(Duration(seconds: 120));
    } on PostgreSQLException catch (e) {
      for (var s in jsonDecode(body['sql'])) {
        print('sql insert: $s');
      }

      print('FIRST INSERT FAILED: ${e.message}');
      connection.cancelTransaction();

      try {
        print('There are some duplicates. Removing them');
        await connection.transaction((ctx) async {
          for (var s in jsonDecode(body['sql-remove'])) {
            await ctx.query(s);
          }
        }).timeout(Duration(seconds: 90));

        // new insert attempt
        try {
          await connection.transaction((ctx) async {
            for (var s in jsonDecode(body['sql'])) {
              await ctx.query(s);
            }
            // print("INSERT2 SUCCESS");
            isDataWasInserted = dbEnums.success;
          }).timeout(Duration(seconds: 90));
        } on TimeoutException catch (e) {
          print('second attempt timeout error: ${e.message}');
        } on PostgreSQLException catch (e) {
          print('SECOND INSERT FAILED: ${e.message}');
          connection.cancelTransaction();
          writeLog('SECOND INSERT FAILED', e.message); // so we can't insert the data
        }
      } on TimeoutException catch (e) {
        print('removing dups timeout error: ${e.message}');
      } on PostgreSQLException catch (e) {
        print('Removing duplicates FAILED: ${e.message}');
      }
    } finally {
      return isDataWasInserted;
    }
  }
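As an aside, the same flow could go through the pool instead of a single long-lived connection, so a broken connection gets replaced between attempts. A rough sketch, assuming PgPool.runTx as shown in the package README:

import 'dart:async';
import 'dart:convert';
import 'package:postgres/postgres.dart' show PostgreSQLException;
import 'package:postgres_pool/postgres_pool.dart';

/// Hypothetical helper: runs one batch of statements in a pooled transaction.
/// [sqlJson] is the JSON-encoded list of statements from the request body.
Future<bool> insertBatch(PgPool pg, String sqlJson) async {
  try {
    await pg.runTx((c) async {
      for (final s in jsonDecode(sqlJson)) {
        await c.query(s as String);
      }
    }).timeout(Duration(seconds: 120));
    return true;
  } on TimeoutException catch (e) {
    print('insert timeout: ${e.message}');
    return false;
  } on PostgreSQLException catch (e) {
    print('insert failed: ${e.message}');
    return false;
  }
}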

btw, the transaction exception looks like it is marked as private, so I only handle the common PostgreSQLException.

1% are SELECT xml_files ... queries that take from 0.5 to 30 seconds if I run them in a SQL manager.

From psql everything works. The service can run for many hours before it crashes.

It seems that the number of PG instances grows every hour: [screenshot]
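For what it's worth, the backend count can also be watched from Dart by polling pg_stat_activity; a small sketch using the same pool:

/// Prints how many server backends are in each state (active, idle, ...).
Future<void> printBackendCounts(PgPool pg) async {
  final rows = await pg.run((c) => c.query(
      "SELECT coalesce(state, 'unknown'), count(*) "
      'FROM pg_stat_activity GROUP BY 1 ORDER BY 2 DESC'));
  for (final row in rows) {
    print('${row[0]}: ${row[1]}');
  }
}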

isoos commented 3 years ago

Are you working with large fields in your DB rows? A 9 GB postgres process seems to indicate that some of the queries are expensive (in time, I/O, or memory). You may configure the PostgreSQL server to log queries, and also use EXPLAIN to see where the bottleneck may be.
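For illustration, a suspect statement can be explained straight through the pool; a rough sketch (note that EXPLAIN ANALYZE actually executes the statement, so only use it on SELECTs):

/// Prints the execution plan of [sql] with timing and buffer statistics.
Future<void> explainQuery(PgPool pg, String sql) async {
  final rows =
      await pg.run((c) => c.query('EXPLAIN (ANALYZE, BUFFERS) $sql'));
  for (final row in rows) {
    print(row[0]); // one line of the plan per row
  }
}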

There is a small, unlikely chance that the client library sends something that triggers an excessive response. You'd need to debug the server-side logs to get to the bottom of this.