Open fearfate opened 1 month ago
The `top` output; ysdb-worker is the Connect binary with some custom components and custom Bloblang functions:
```
[root@prod-pve-worklink-yops-ysdb ~]# top
top - 01:10:18 up 5 days, 12:54, 2 users, load average: 0.24, 0.09, 0.06
Tasks: 191 total, 1 running, 190 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 32778176 total, 12321100 free, 11014104 used, 9442972 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 21336116 avail Mem

  PID USER  PR NI  VIRT  RES   SHR   S %CPU %MEM  TIME+     COMMAND
17910 root  20  0  14.4g 10.0g 38748 S  0.3 31.9  247:00.85 ysdb-worker
```
Hey @fearfate, I had a quick look over the config, but it would be difficult to reproduce the issue you're seeing on my end. Unless the Google library used by `gcp_bigquery_select` does something silly, the issue should still pop up if you use a simpler input. Any chance you could try reproducing it using a `file` or `generate` input (or even a custom input which doesn't rely on 3rd party services)?
I changed the usage and it can no longer be reproduced, thanks for your reply!
OK, does that mean the issue has been resolved?
I used Connect to sync a BigQuery table with about 50 GB of storage usage, trying to do a batch synchronization. But I found that after the task completed, the memory usage remained unchanged. I am confused about this; could someone tell me how to deal with it? Thanks!
I ran this in stream mode.
My stream config:
And the metrics:
By the way, how can I make a pagination loop for the `gcp_bigquery_select` input, with `LIMIT` and `OFFSET`, to scan all rows of the table?
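One possible shape for such a loop, sketched as a Connect stream config: a `generate` input emits one message per page, and a `gcp_bigquery_select` processor fetches each page using `?` placeholders in its `where` clause filled from `args_mapping`. This is a sketch only, not a confirmed recipe: the project ID, table, numeric `id` column, page size, and page count below are all hypothetical, and the field names should be checked against the current `gcp_bigquery_select` documentation for your Connect version.

```yaml
input:
  generate:
    count: 500      # number of pages to issue (hypothetical: rows / page size)
    interval: ""    # emit as fast as possible, then stop after `count`
    # count("pages") increments per message starting at 1, so the first
    # page starts at row id 0.
    mapping: 'root.start = (count("pages") - 1) * 10000'

pipeline:
  processors:
    - gcp_bigquery_select:
        project: my-project           # hypothetical project ID
        table: my_dataset.my_table    # hypothetical dataset.table
        columns: [ "*" ]
        # Page by id range instead of LIMIT/OFFSET; the two `?` args are
        # supplied per message by args_mapping.
        where: 'id >= ? AND id < ?'
        args_mapping: 'root = [ this.start, this.start + 10000 ]'
```

Paging by a range predicate on an indexed/clustered column is also generally cheaper than large `OFFSET` values, since an `OFFSET n` query still has to scan past the first `n` rows on every page.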