pgjdbc / r2dbc-postgresql

Postgresql R2DBC Driver
https://r2dbc.io
Apache License 2.0

LargeObject support via OID #255

Status: Open. fzoli opened this issue 4 years ago.

fzoli commented 4 years ago

Feature Request

PostgreSQL supports two types of large objects: BYTEA and OID-based large objects.

Here are the differences:

| Characteristic | BYTEA | OID |
| --- | --- | --- |
| Max. allowed space | 1 GB | 2 GB |
| Data access | As a whole | Stream-style |
| Storage | In the user-defined table | In the pg_largeobject system table |
| Data manipulation | Using SQL and escape sequences | Only within a transaction block, via special functions |
| Loading | Preloaded | On demand |

Stream-style makes it possible to implement a server that supports HTTP range requests. Is it possible to create an extension for r2dbc-postgresql to support LargeObject with OID?

Describe the solution you'd like

Blob from the SPI cannot be used because data manipulation of a LargeObject is not possible with INSERT/UPDATE statements.

io.r2dbc.postgresql.api.PostgresqlConnection could provide something like:

The Range can be described with two attributes (a possible shape is sketched below).
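For illustration, here is a minimal sketch of what such a driver-specific API could look like, assuming the Range is expressed as an offset plus a length; the interface name, method set and Range shape are hypothetical, not part of the actual proposal:

```java
import java.nio.ByteBuffer;

import org.reactivestreams.Publisher;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Hypothetical API shape, for illustration only.
public interface PostgresqlLargeObjects {

    // Create a new, empty large object and emit its OID.
    Mono<Long> create();

    // Stream the content of the large object identified by oid,
    // optionally restricted to a byte range.
    Flux<ByteBuffer> read(long oid);

    Flux<ByteBuffer> read(long oid, Range range);

    // Write the given chunks into the large object, emitting the number of bytes written.
    Mono<Long> write(long oid, Publisher<ByteBuffer> content);

    // Remove the large object.
    Mono<Void> delete(long oid);

    // A byte range described by two attributes: the starting offset and the number of bytes.
    final class Range {
        public final long offset;
        public final long length;

        public Range(long offset, long length) {
            this.offset = offset;
            this.length = length;
        }
    }
}
```

Such an entry point could, for example, be exposed from PostgresqlConnection via a method like largeObjects() (again, a made-up name).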

mp911de commented 4 years ago

Thanks for your proposal. How does this work right now with PGJDBC? Ideally, we can reuse Blob as the streaming interface is already available there.

fzoli commented 4 years ago

Here is the API of PGJDBC

A PGJDBC Connection provides a LargeObjectManager via connection.unwrap(PGConnection.class).getLargeObjectAPI().

Methods of LargeObjectManager:

The mode argument can be READ, WRITE or READWRITE. LargeObject is a stream-like object with seek support (see the usage sketch below).
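For reference, obtaining and using the manager with PGJDBC looks roughly like this; the connection URL and the written payload are made up, the signatures are as I recall them from org.postgresql.largeobject, and large object operations have to run inside a transaction:

```java
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public final class LargeObjectManagerExample {

    public static void main(String[] args) throws Exception {
        try (Connection connection =
                DriverManager.getConnection("jdbc:postgresql://localhost/test", "user", "secret")) {

            // Large object operations only work inside a transaction.
            connection.setAutoCommit(false);

            LargeObjectManager manager =
                    connection.unwrap(PGConnection.class).getLargeObjectAPI();

            // Create a new large object and open it for reading and writing.
            long oid = manager.createLO(LargeObjectManager.READWRITE);
            LargeObject largeObject = manager.open(oid, LargeObjectManager.READWRITE);
            try {
                largeObject.write("hello".getBytes(StandardCharsets.UTF_8));
            } finally {
                largeObject.close();
            }

            connection.commit();
            System.out.println("created large object with OID " + oid);
        }
    }
}
```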

Methods of LargeObject:

The reference argument can be SEEK_SET, SEEK_CUR or SEEK_END (see the range-read sketch below).
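As an illustration of the seek support (the part relevant to HTTP range requests), reading a byte range from a large object with PGJDBC looks roughly like this; the helper name and parameters are made up:

```java
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public final class RangeReadExample {

    // Reads `length` bytes starting at `offset` from the large object identified by `oid`.
    static byte[] readRange(LargeObjectManager manager, long oid, int offset, int length)
            throws java.sql.SQLException {
        LargeObject largeObject = manager.open(oid, LargeObjectManager.READ);
        try {
            // SEEK_SET positions the cursor relative to the start of the object.
            largeObject.seek(offset, LargeObject.SEEK_SET);
            return largeObject.read(length);
        } finally {
            largeObject.close();
        }
    }
}
```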

Internally, LargeObject uses the Fastpath API.

fzoli commented 4 years ago

I reconsidered my request. LargeObject is memory efficient because the client code uses the same byte array in a loop. With reactive streams, data is immutable, so we can only consume ByteBuffer chunks. With a large file, maybe the overhead of the chunks is worse than the memory requirement of a new thread that can use the blocking API. Or not?

mp911de commented 4 years ago

Thanks for the details. The Large Object API is pretty different from a plain Blob, so it seems using io.r2dbc.Blob isn't necessarily a suitable approach.

> is memory efficient because the client code uses the same byte array in a loop

With this API being a driver-specific API, we can use Netty's ByteBuf, which can be pooled buffers, so we have GC pressure covered. Looking at PGJDBC's LargeObject we should be able to mirror the same API using reactive types (e.g. Mono<ByteBuf> read(int length) and ByteBufFlux read() corresponding with getInputStream(), Mono<Long> write(Flux<ByteBuf>) corresponding with getOutputStream()).
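A rough sketch of how such a reactive mirror of the LargeObject API could look with Reactor and Netty types; the interface name and exact method set below are illustrative only, reflecting the mapping suggested above rather than an agreed design:

```java
import io.netty.buffer.ByteBuf;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Hypothetical reactive counterpart of PGJDBC's LargeObject.
public interface ReactiveLargeObject {

    // Corresponds to LargeObject.read(int): emit up to `length` bytes as a single pooled buffer.
    Mono<ByteBuf> read(int length);

    // Corresponds to getInputStream(): stream the remaining content as buffer chunks.
    Flux<ByteBuf> read();

    // Corresponds to getOutputStream(): consume the given buffers, emitting the bytes written.
    Mono<Long> write(Publisher<ByteBuf> data);

    // Correspond to seek(int, int), tell() and size().
    Mono<Void> seek(long position);
    Mono<Long> tell();
    Mono<Long> size();

    // Corresponds to close(): release the server-side descriptor.
    Mono<Void> close();
}
```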

The FastPath API is largely deprecated in favor of parameterized statements. Maybe @davecramer can shed a bit of light here.

davecramer commented 4 years ago

@mp911de what is the question? See https://www.postgresql.org/docs/current/libpq-fastpath.html for the reasoning behind the deprecation.

mp911de commented 4 years ago

The actual question is what the replacement should look like. I wasn't able to find an example of how to call the functionality required for the Large Object API, such as lo_close, loread, lo_tell and others, via SQL.

davecramer commented 4 years ago

Interesting, looks like the driver still uses FastPath and we don't have any tests for it. Honestly, I don't use it but could probably figure it out when I have some time. If they happen to figure it out first, that would be awesome.

mp911de commented 4 years ago

Thanks, Dave. I marked this ticket as ideal for contribution.

It makes sense to add such a feature to our library, and we can work out the actual API when we receive a design proposal in the form of a PR.

Squiry commented 4 years ago

@fzoli You can call any of those functions without the driver's explicit support. Take a look here. But there's one drawback on the read side: the backend will send data in 65536-byte chunks, but our driver won't decode it until it has received the full packet. So we either have to rework our receiving process to support lazy decoding, or call lo_get multiple times with a reasonable length parameter until the whole file has arrived.
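To illustrate that point, calling the server-side large object functions through plain SQL with the existing statement API could look roughly like the sketch below; the method name and the mode value (assumed to be INV_WRITE | INV_READ) are illustrative, and descriptors opened with lo_open are closed automatically when the transaction ends:

```java
import io.r2dbc.postgresql.api.PostgresqlConnection;
import reactor.core.publisher.Mono;

public final class ServerSideLargeObjectExample {

    // Illustrative only: reads the first 65536-byte chunk of the large object
    // identified by `oid`, using nothing but server-side functions and plain SQL.
    static Mono<byte[]> readFirstChunk(PostgresqlConnection connection, long oid) {
        int readWrite = 0x20000 | 0x40000; // INV_WRITE | INV_READ (assumed flag values)

        return connection.beginTransaction()
                .thenMany(connection.createStatement("SELECT lo_open($1::oid, $2) AS fd")
                        .bind("$1", oid)
                        .bind("$2", readWrite)
                        .execute())
                .flatMap(result -> result.map((row, meta) -> row.get("fd", Integer.class)))
                .next()
                .flatMap(fd -> connection.createStatement("SELECT loread($1, 65536) AS chunk")
                        .bind("$1", fd)
                        .execute()
                        .flatMap(result -> result.map((row, meta) -> row.get("chunk", byte[].class)))
                        .next())
                // The descriptor opened by lo_open is released at the end of the transaction.
                .flatMap(chunk -> connection.commitTransaction().thenReturn(chunk));
    }
}
```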

OrangeDog commented 4 years ago

In JDBC, the OID-based large objects are exposed as java.sql.Blob. I've not really looked at R2DBC, but I don't see why the io.r2dbc.Blob interface wouldn't be suitable for the common case (read/write a stream).
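For comparison, this is roughly what JDBC-side access to an oid column via java.sql.Blob looks like; the table, column and connection details are made up, and autocommit has to be off for large object access:

```java
import java.io.InputStream;
import java.sql.Blob;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public final class JdbcBlobExample {

    public static void main(String[] args) throws Exception {
        try (Connection connection =
                DriverManager.getConnection("jdbc:postgresql://localhost/test", "user", "secret")) {

            // Blob access to large objects requires a transaction.
            connection.setAutoCommit(false);

            // `images.data` is assumed to be an oid column referencing a large object.
            try (PreparedStatement statement =
                         connection.prepareStatement("SELECT data FROM images WHERE id = ?")) {
                statement.setLong(1, 42L);
                try (ResultSet resultSet = statement.executeQuery()) {
                    if (resultSet.next()) {
                        Blob blob = resultSet.getBlob("data");
                        try (InputStream stream = blob.getBinaryStream()) {
                            System.out.println("first byte: " + stream.read());
                        }
                    }
                }
            }

            connection.commit();
        }
    }
}
```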

mp911de commented 4 years ago

That was also my first thought. The large object API allows for several additional methods such as seeking, truncation, and length introspection. We need a bit of code to see how this enhancement goes and how the API shapes up.

davecramer commented 4 years ago

So is LO support a necessity? I don't really see many people using this in JDBC?

mp911de commented 4 years ago

I don't know, as I'm not that involved in how folks use Postgres with large objects. I don't mind making sure our driver is able to work with server-side functions so that this feature could also live outside of the driver.

davecramer commented 4 years ago

Makes sense I guess. I probably wouldn't prioritize it.

OrangeDog commented 4 years ago

As an actual Postgres user, blob support is crucial. A bytea is not a blob.

This is like arguing you don’t need to support char because you don’t see many people using it.

davecramer commented 4 years ago

Speaking with some authority as the primary maintainer of the PostgreSQL JDBC driver, a major contributor to Postgres, and a long-time user (since 1999), I honestly don't see a lot of use of large objects. Now perhaps I am myopic, I don't know. That said, I didn't say don't support it; I said I wouldn't prioritize it.

OrangeDog commented 4 years ago

If by "large objects" you mean LargeObjects then perhaps that's fair, but most people will use large objects in Java via Blob.

davecramer commented 4 years ago

Yes I was referring to LargeObjects.