vapor / postgres-nio

🐘 Non-blocking, event-driven Swift client for PostgreSQL.
https://api.vapor.codes/postgresnio/documentation/postgresnio/
MIT License

Follow up to fixing crash in queries that timeout #352

Closed · gwynne closed 1 year ago

gwynne commented 1 year ago

This hopefully solves the force-unwrap crash introduced by the fix for #347 (provided in #351).

cc @fabianfett @trasch
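
For readers following along, here is a minimal sketch of the kind of defensive change this describes, assuming the crash came from force-unwrapping per-query state that a timeout had already torn down. The types and names below are hypothetical illustrations, not the actual PostgresNIO internals:

```swift
// Hypothetical illustration: a server response arrives for a query whose
// state was already removed by a timeout. Force-unwrapping the lookup
// crashes; a guard downgrades the race to a harmless no-op.
struct QueryContext {
    func deliverResponse() { /* forward rows to the caller */ }
}

final class QueryStateMachine {
    private var activeQueries: [Int: QueryContext] = [:]

    func timedOut(queryID: Int) {
        // The timeout path removes the context...
        self.activeQueries.removeValue(forKey: queryID)
    }

    func responseReceived(queryID: Int) {
        // ...so the response path must not assume it still exists.
        // Before: self.activeQueries[queryID]!.deliverResponse()  // crash
        guard let context = self.activeQueries[queryID] else { return }
        context.deliverResponse()
    }
}
```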

trasch commented 1 year ago

I can confirm that this fixes the remaining crash for me 👍

Thanks @gwynne and @fabianfett !

fabianfett commented 1 year ago

@trasch Do you have a repro for the crash?

codecov-commenter commented 1 year ago

Codecov Report

Merging #352 (7807f7a) into main (6e993d5) will decrease coverage by 0.01%. The diff coverage is 0.00%.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #352      +/-   ##
==========================================
- Coverage   41.19%   41.19%   -0.01%
==========================================
  Files         117      117
  Lines        9658     9660       +2
==========================================
  Hits         3979     3979
- Misses       5679     5681       +2
```

| Impacted Files | Coverage Δ |
|---|---|
| [...urces/PostgresNIO/New/PostgresChannelHandler.swift](https://codecov.io/gh/vapor/postgres-nio/pull/352?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=vapor#diff-U291cmNlcy9Qb3N0Z3Jlc05JTy9OZXcvUG9zdGdyZXNDaGFubmVsSGFuZGxlci5zd2lmdA==) | `61.67% <0.00%> (-0.27%)` ⬇️ |
gwynne commented 1 year ago

It fixes the proximate crash, but it doesn't address the actual underlying problem; I defer to @fabianfett for the real solution.

trasch commented 1 year ago

> Do you have a repro for the crash?

@fabianfett Nothing publicly available, unfortunately; the code uses internal databases that aren't reachable externally.

What I'm doing is running a few million PostGIS queries through a connection pool of size 50, so lots of parallel queries with short timeouts. I'll try to create a test case that mimics this, but I'll need some time. (And I hope I didn't simply botch the connection pool...)
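
For reference, a rough sketch of what such a stress test might look like with PostgresNIO's async query API: many queries kept in flight at a pool-like width, each raced against a short timeout. The pool size, query text, and timeout value are placeholders, not the actual workload:

```swift
import Logging
import PostgresNIO

/// Keeps roughly `connections.count` short-timeout queries in flight until
/// `totalQueries` have been attempted. All values here are illustrative.
func stressTest(connections: [PostgresConnection], logger: Logger) async {
    let totalQueries = 1_000_000
    await withTaskGroup(of: Void.self) { group in
        var started = 0
        for _ in 0..<min(connections.count, totalQueries) {
            let conn = connections[started % connections.count]
            // Timeouts are expected; swallow per-query errors so one
            // failure doesn't end the whole run.
            group.addTask { try? await runOneQuery(on: conn, logger: logger) }
            started += 1
        }
        // Each time a query finishes, start another until the budget is spent.
        for await _ in group {
            guard started < totalQueries else { continue }
            let conn = connections[started % connections.count]
            group.addTask { try? await runOneQuery(on: conn, logger: logger) }
            started += 1
        }
    }
}

/// Races one query against a short timeout, mimicking the cancellation
/// pattern that exposed the original crash.
func runOneQuery(on connection: PostgresConnection, logger: Logger) async throws {
    try await withThrowingTaskGroup(of: Void.self) { group in
        group.addTask {
            // Trivial stand-in for the real PostGIS query.
            let rows = try await connection.query("SELECT 1", logger: logger)
            for try await _ in rows {} // drain the result
        }
        group.addTask {
            try await Task.sleep(nanoseconds: 50_000_000) // 50 ms "timeout"
            throw CancellationError()
        }
        // Whichever child finishes first wins; cancel the loser.
        defer { group.cancelAll() }
        _ = try await group.next()
    }
}
```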