Closed daurnimator closed 8 years ago
Can you check if c5254c0d3bb279c327ba6e08e34a5da8e01028dc solves this issue?
The fix in c5254c0d3bb279c327ba6e08e34a5da8e01028dc potentially accesses invalid memory.
If the connection has been closed and the page it was on reclaimed by the OS, PQstatus will end up segfaulting. Alternatively, some other object may now be occupying that space in memory, and you end up corrupting it!
I think the fix should be to keep the PGconn as a uservalue on the largeObject userdata.
This will actually simplify things, so that you only need to keep the fd in the largeObject userdata itself (which in turn simplifies #32).
Actually this might be a bug in PostgreSQL. I need to check that.
Not a bug in PostgreSQL, but should be fixed in a19b4d711aedb87a3c23d9c3df8fc29e232b0729
Huh? That commit doesn't fix this at all.
The root cause is keeping a reference to the PGconn that isn't invalidated when the PGconn is finished.
It fixes the crash because the connection object is checked.
Checking the connection object can itself cause a segfault.
So a largeObject should contain a PGconn * instead of a PGconn ?
No, that won't help.
You need to invalidate the largeObject when the PGconn is collected. There are two potential solutions here:
- In the PGconn you store a list of things to invalidate on cleanup
- You tie the lifetime of the PGconn to the largeObject.
Option 2 is preferable IMO, and it can be done by adding a uservalue (lua_setuservalue) to the largeObject that holds the lua userdata with the PGconn in it. This is very similar to what we already did for PQtrace
Wouldn't it be sufficient lua_ref() the connection in an additional field in largeObject, and unref when the large object is closed?
Yes, that should work (that is another way to implement option 2).
I wonder what will happen if I explicitly finish the connection. It is documented that a connection object must not be used after PQfinish() has been called.
Maybe large objects should not be proper objects, but should instead be accessed through conn methods:
local fd = conn:lo_open(oid, mode)
conn:lo_read(fd, 10)
conn:lo_close(fd)
That would be consistent with how libpq handles large objects (there is no PGlargeObject type...)
Not using a "pseudo" object would make error handling easier and prevent the issues we currently see.
That is also an adequate solution. Seems simpler too. Go for it :)
Fixed by a reworked API, see 059a79040f67b96ac783f7111120e9d05ba7a03b
This probably happens because the connection object is invalid: the connection has already been garbage collected.