hvlad opened 1 week ago
Why a new packet instead of sending them in the response message itself? IIRC response packets contain their own format, so inline BLOBs could be described individually as strings and then transformed into cached BLOBs on the client.
Vlad, I suppose the content of op_inline_blob is cached by the Remote provider in order to serve requests for data in those blobs without network access. If yes - how long is the data in that cache kept?
Vlad, I suppose the content of op_inline_blob is cached by the Remote provider in order to serve requests for data in those blobs without network access. If yes - how long is the data in that cache kept?
Yes, sure. A cached blob is bound to the transaction object and will be released by whichever of the following happens first:
Note: if the user opens the blob with a non-empty BPB, the cached blob is discarded.
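To make the client-side view concrete, here is a minimal sketch using only the public OO API (att, tra, status and blobId are assumed to already exist; whether the data comes from the client cache or over the wire is transparent to the application):

#include <firebird/Interface.h>

// Minimal sketch: reading one blob field of a fetched row.
// 'att', 'tra', 'status' and 'blobId' stand for an existing IAttachment,
// ITransaction, ThrowStatusWrapper and the blob id taken from the row buffer.
static void readBlobField(Firebird::ThrowStatusWrapper& status,
    Firebird::IAttachment* att, Firebird::ITransaction* tra, ISC_QUAD* blobId)
{
    // Empty BPB: if this blob arrived inline (op_inline_blob), the Remote
    // provider can serve it from the per-transaction cache without a round trip.
    Firebird::IBlob* blob = att->openBlob(&status, tra, blobId, 0, nullptr);

    char buf[16384];
    for (;;)
    {
        unsigned len = 0;
        if (blob->getSegment(&status, sizeof(buf), buf, &len) == Firebird::IStatus::RESULT_NO_DATA)
            break;
        // ... consume buf[0 .. len) ...
    }

    blob->close(&status);

    // Passing a non-empty BPB to openBlob() discards the cached copy and
    // the blob is read from the server in the usual way.
}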
Imagine an RO-RC transaction which lasts VERY long (nothing prevents keeping it open for the client application's lifetime). Wouldn't such a long cache lifetime be an overhead?
Imagine an RO-RC transaction which lasts VERY long (nothing prevents keeping it open for the client application's lifetime). Wouldn't such a long cache lifetime be an overhead?
It is supposed that cached blobs will be read by the application. Anyway, it would be good to have a way to set a limit on the blob cache size - is that your point?
To tell the truth, my first thought was that the cache would be very tiny - just the blobs from the last fetched row - but this appears inefficient when we try to support various grids.
First of all, let's think about binding the cache not to the transaction but to the request/statement. It's hardly typical to close a statement and then read blobs from it. Moreover, in the worst case that will still work anyway - the old way, over the wire.
Limiting the cache size brings one more tunable parameter, and I'm afraid there are already too many of them: blob size limit per attachment or per statement, maybe on a per-field basis (at least on/off), default BPB, maybe on a per-field basis too. (Hmm - are there many cases when more than one blob per row is returned?)
Last but not least - is blob inlining enabled by default? To my mind yes, but very reasonable (i.e. not too big) defaults should be used.
There should be cache size limits in any case. If you load 1,000,000 records (1 blob per record) at 16K each, that's already 16G. But if I understand correctly, that only happens if the user does not read these cached blobs as the records are fetched. Maybe it's worth limiting the blob cache to some number of entries, for example 1000 (configurable), and when the number of blobs exceeds this value, the oldest of them are removed from the cache - see the sketch below.
And of course, this should be enabled/disabled at the statement level. And perhaps some dpb tag to set the default parameter.
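Just to make the eviction idea concrete, a rough sketch of such a bounded cache (purely illustrative; the names and structure are hypothetical and not taken from the PR code):

#include <cstddef>
#include <cstdint>
#include <list>
#include <map>
#include <vector>

// Hypothetical client-side cache bounded by the number of blobs: when the
// limit is exceeded, the oldest cached blob is dropped, and a later open of
// that blob simply falls back to the usual wire protocol.
class InlineBlobCache
{
public:
    explicit InlineBlobCache(size_t maxBlobs) : m_maxBlobs(maxBlobs) {}

    void put(uint64_t blobId, std::vector<uint8_t> data)
    {
        if (m_maxBlobs == 0)
            return;                                  // caching disabled

        if (m_byId.count(blobId))
        {
            m_byId[blobId] = std::move(data);        // refresh an existing entry
            return;
        }

        while (m_byId.size() >= m_maxBlobs)
        {
            m_byId.erase(m_order.front());           // evict the oldest entry
            m_order.pop_front();
        }

        m_order.push_back(blobId);
        m_byId.emplace(blobId, std::move(data));
    }

    // Returns the cached content, or nullptr if the blob is not cached.
    const std::vector<uint8_t>* find(uint64_t blobId) const
    {
        const auto it = m_byId.find(blobId);
        return it == m_byId.end() ? nullptr : &it->second;
    }

private:
    size_t m_maxBlobs;                               // configurable limit, e.g. 1000
    std::list<uint64_t> m_order;                     // insertion order, oldest first
    std::map<uint64_t, std::vector<uint8_t>> m_byId; // blob id -> content
};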
To tell the truth, my first thought was that the cache would be very tiny - just the blobs from the last fetched row - but this appears inefficient when we try to support various grids.
Yes, those were my thoughts too. Also, consider batch fetching, when a whole batch of rows is read from the wire - it will cache all the corresponding blobs anyway.
First of all, let's think about binding the cache not to the transaction but to the request/statement. It's hardly typical to close a statement and then read blobs from it. Moreover, in the worst case that will still work anyway - the old way, over the wire.
It was in my very first version of the code, until I started to handle op_exec_immediate2 - it has no statement :)
It is possible to mark blobs by statement id (when possible) and remove such blobs from the transaction cache on statement close. But I prefer to avoid that, so far. It gives the "not typical" apps a chance to access cached blobs after the statement is closed - and I guess it is not so atypical when there is no cursor, i.e. for 'EXECUTE PROCEDURE', etc.
Limiting the cache size brings one more tunable parameter, and I'm afraid there are already too many of them: blob size limit per attachment or per statement, maybe on a per-field basis (at least on/off), default BPB, maybe on a per-field basis too. (Hmm - are there many cases when more than one blob per row is returned?)
If there are too many parameters, we can put them into a separate dedicated interface, say IClientBlobCache, that would be implemented by the Remote provider only.
And I'm sure there are applications that have many blobs in their result sets. Look at the monitoring tables, for example: MON$STATEMENTS has two blobs, and there are others.
Last but not least - is blob inlining enabled by default? To my mind yes, but very reasonable (i.e. not too big) defaults should be used.
Currently it is enabled - else nobody would be able to test the feature ;)
One of the goals of this PR is to discuss and then implement the necessary set of parameters and the corresponding API to customize the blob cache.
So far, I see two really required parameters: 'maximum blob size for inline sending' (per-statement or per-attachment - to be decided; it must be known to the server) and 'size of blob cache' (per-attachment, client-only). Others are 'good to have' but not strictly required: BPB, per-field inlining.
The builds for testing can be found here: https://github.com/FirebirdSQL/firebird/actions/runs/11836803458 Scroll the page down to the 'Artifacts' section.
I tried to conduct experiments on a local network. There are no problems with latency there; however, I will provide some results of the experiment.
I ran the following query in different variants:
select
remark
from horse
where remark is not null
It contains 66794 small BLOBs.
I ran IBExpert with this query and did a FetchAll.
Results for Firebird-5.0.2.1567-0-9fbd574-windows-x64 (server + client): 640 ms, memory consumption 38 MB (IBExpert)
Results for Firebird-6.0.0.526-0-Initial-windows-x64 (server + client): 1 s 187 ms, memory consumption 385 MB (IBExpert)
There will probably be a gain in networks with high latency; I will try to check that in the near future. In the meantime, the experiment shows that default blob prefetching is not always useful and, at the very least, consumes more memory.
PS
select
sum(octet_length(remark)) as len
from horse
where remark is not null
LEN
=========
6 558 101
Overhead seems quite large to save 6 MB.
Am I right in understanding that 16K of memory is always allocated for each BLOB? I also don't know how exactly BLOBs are handled in IBExpert; perhaps it doesn't close a fully read BLOB until the end of the query/transaction. What about limiting the cache to the last N BLOBs?
On 11/18/24 16:44, Simonov Denis wrote:
Results for Firebird-5.0.2.1567-0-9fbd574-windows-x64 (server + client): 640 ms, memory consumption 38 MB (IBExpert)
Results for Firebird-6.0.0.526-0-Initial-windows-x64 (server + client): 1 s 187 ms, memory consumption 385 MB (IBExpert)
Probably you should compare not 6-inline_blobs vs 5, but 6-inline_blobs vs 6-std? FB6 is in a pre-alpha state; performance & memory consumption may be affected by a lot of other things.
Next, are you sure that in IBE 'FetchAll' loads blob data from the server?
Am I right in understanding that 16K of memory is always allocated for each BLOB?
Yes, and it was not introduced by this PR.
BTW, 66794 blobs should consume nearly 1GB, while you see about 350MB - which memory counter do you look at? I tried with 67000 blobs of 1024 bytes each and saw about a 1.4GB increase in 'Private Bytes' and about a 1.1GB increase in 'Virtual Memory' (it was with a DEBUG build).
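(For reference: 66794 blobs at one 16 KB buffer each is 66794 × 16384 bytes ≈ 1.09 GB, which is where the 'nearly 1GB' figure comes from.)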
I also don't know how exactly BLOBs are handled in IBExpert; perhaps it doesn't close a fully read BLOB until the end of the query/transaction.
I doubt IBE reads any blob contents when it shows data in a grid - until the user explicitly asks for it by moving the mouse cursor over a grid cell or by pressing the '...' button in the cell. And the debugger confirms it.
What about limiting the cache to the last N BLOBs?
It was proposed, but we still have not defined which settings we need and what API should manage them.
Thanks for testing!
@AlexPeshkoff: I think the time overhead is related to memory allocations.
I just looked at Task Manager. It is clear that it does not display memory quite correctly, but here the difference is visible to the naked eye. And I have no complaints about performance; I understand that somewhat different conditions need to be tested (primarily networks with high latency). Nevertheless, I consider this test useful for showing that, without proper settings, we can at the very least get excessive memory consumption.
As there are no better ideas, I offer the following API changes:
interface Statement : ReferenceCounted
{
...
version: // 6.0
// Inline blob transfer
uint getMaxInlineBlobSize(Status status);
void setMaxInlineBlobSize(Status status, uint size);
}
interface Attachment : ReferenceCounted
{
...
version: // 6.0
// Blob caching by client
uint getBlobCacheSize(Status status);
void setBlobCacheSize(Status status, uint size);
// Inline blob transfer
uint getMaxInlineBlobSize(Status status);
void setMaxInlineBlobSize(Status status, uint size);
}
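For illustration, usage from the application side could look like this (just a sketch; the exact meaning of the values - bytes vs. count, and whether 0 disables the feature - is not decided yet; att, tra, status and sql are assumed to exist):

// Attachment-wide defaults: inline blobs up to 4 KB, client blob cache up to
// 4 MB (the units are an assumption, see above).
att->setMaxInlineBlobSize(&status, 4 * 1024);
att->setBlobCacheSize(&status, 4 * 1024 * 1024);

// Per-statement override; here we assume 0 means "do not inline blobs
// for this particular statement".
Firebird::IStatement* stmt = att->prepare(&status, tra, 0, sql, 3, 0); // dialect 3
stmt->setMaxInlineBlobSize(&status, 0);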
On 11/19/24 18:47, Vlad Khorsun wrote:
As there are no better ideas, I offer the following API changes:
interface Statement : ReferenceCounted
{
...
version: // 6.0
// Inline blob transfer
uint getMaxInlineBlobSize(Status status);
void setMaxInlineBlobSize(Status status, uint size);
}
interface Attachment : ReferenceCounted
{
...
version: // 6.0
// Blob caching by client
uint getBlobCacheSize(Status status);
void setBlobCacheSize(Status status, uint size);
// Inline blob transfer
uint getMaxInlineBlobSize(Status status);
void setMaxInlineBlobSize(Status status, uint size);
}
ok for me
I see no need for new methods in IAttachment; it can be handled in a backward-compatible way using DPB and info items, unless someone wants to make such adjustments dynamically during the attachment's lifetime.
I see no need for new methods in IAttachment; it can be handled in a backward-compatible way using DPB and info items, unless someone wants to make such adjustments dynamically during the attachment's lifetime.
The presence of methods in IAttachment does not cancel the need for dpb tags to initially set these parameters when connecting. And yes, since the cache itself is per transaction, it makes sense to be able to change these parameters during the connection. If I understand correctly, the value from setBlobCacheSize is passed to the transaction at startup, and IAttachment::setMaxInlineBlobSize is used during IAttachment::execute and IAttachment::openCursor, and passes the default value to IStatement when calling IAttachment::prepare.
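For the DPB side, a sketch of what it could look like with IXpbBuilder; the two blob-related tag codes below are placeholders invented for illustration, the real isc_dpb_* names and values would have to be defined by this PR:

#include <ibase.h>
#include <firebird/Interface.h>

using namespace Firebird;

// Placeholder tag codes, NOT real isc_dpb_* constants - for illustration only.
const unsigned char dpb_tag_max_inline_blob_size = 200;
const unsigned char dpb_tag_blob_cache_size      = 201;

int main()
{
    IMaster* master = fb_get_master_interface();
    IUtil* util = master->getUtilInterface();
    IProvider* prov = master->getDispatcher();
    ThrowStatusWrapper status(master->getStatus());

    // Build a DPB with the usual items plus the (hypothetical) blob-related tags.
    IXpbBuilder* dpb = util->getXpbBuilder(&status, IXpbBuilder::DPB, nullptr, 0);
    dpb->insertString(&status, isc_dpb_user_name, "SYSDBA");
    dpb->insertString(&status, isc_dpb_password, "masterkey");
    dpb->insertInt(&status, dpb_tag_max_inline_blob_size, 4 * 1024);    // placeholder
    dpb->insertInt(&status, dpb_tag_blob_cache_size, 4 * 1024 * 1024);  // placeholder

    IAttachment* att = prov->attachDatabase(&status, "server:employee",
        dpb->getBufferLength(&status), dpb->getBuffer(&status));

    // ... work with the attachment; the defaults could later be changed via the
    // proposed IAttachment::setMaxInlineBlobSize() / setBlobCacheSize() ...

    att->detach(&status);
    dpb->dispose();
    prov->release();
    return 0;
}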
The feature allows sending small blob contents in the same data stream as the main result set. This lowers the number of round trips required to get blob data and significantly improves performance on high-latency networks.
The blob metadata and data are sent using a new packet type, op_inline_blob, with a new structure, P_INLINE_BLOB. The op_inline_blob packet is sent before the corresponding op_sql_response (in the case of an answer to op_execute2 or op_exec_immediate2) or op_fetch_response (answer to op_fetch). There could be as many op_inline_blob packets as there are blob fields in the output format. NULL blobs and too-big blobs are not sent. The blob is sent as a whole, i.e. the current implementation doesn't support sending a part of a blob. The reasons are a wish not to over-complicate the code and the fact that seek is not implemented for segmented blobs. The current, initial implementation sends all blobs whose total size is not greater than 16KB.
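For readers less familiar with the remote protocol internals, here is a toy model of the flow described above (the structures and names are illustrative only and do not match the real remote protocol definitions):

#include <cstdint>
#include <map>
#include <vector>

// Toy model: op_inline_blob packets carrying whole blob contents arrive before
// the response packet; the client absorbs them into a transaction-bound cache,
// and the response rows still carry ordinary blob ids.

enum ToyOp { op_inline_blob, op_sql_response, op_fetch_response };

struct ToyPacket
{
    ToyOp operation;
    uint64_t blobId;                                 // meaningful for op_inline_blob only
    std::vector<uint8_t> blobData;                   // whole blob, never a part of it
};

struct ToyTransactionCache
{
    std::map<uint64_t, std::vector<uint8_t>> blobs;  // blob id -> content
};

// Returns true when the packet is the response itself (row data), false when it
// was an inline blob that has been cached and more packets should be read.
inline bool handleToyPacket(const ToyPacket& p, ToyTransactionCache& cache)
{
    switch (p.operation)
    {
    case op_inline_blob:
        cache.blobs[p.blobId] = p.blobData;          // NULL / oversized blobs never arrive here
        return false;

    case op_sql_response:
    case op_fetch_response:
    default:
        return true;                                 // openBlob() will check cache.blobs first
    }
}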
The open question is what API changes are required to allow the user to customize this process:
Also, it would be good to have, but is not required:
This PR is currently in a draft state and is published for early testers and commenters.