heterodb / pg-strom

PG-Strom - Master development repository
http://heterodb.github.io/pg-strom/

[VTJ-JP]PG-Strom crashed on "pgstromScanNextTuple" #778

Open sakaik opened 1 month ago

sakaik commented 1 month ago

SUMMARY

The following query crashed the server. (The example here uses EXPLAIN ANALYZE, but the plain SELECT without EXPLAIN crashes the same way.)

sakaitest_nvme5=# EXPLAIN ANALYZE
sakaitest_nvme5-# SELECT l1.* 
sakaitest_nvme5-#   FROM moj_curves_data l1, moj_curves_data l2
sakaitest_nvme5-#   WHERE l1.ver=l2.ver AND l1.filename=l2.filename AND l1.curve_id=l2.curve_id AND  l1.xml_pid<>l2.xml_pid AND l1.x IS NULL 
sakaitest_nvme5-#   LIMIT 10;

server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
The connection to the server was lost. Attempting reset: Failed.
!?> 

Tables and data volume

Sample data:

  ver   |     filename     |  curve_id  | x | y |  xml_pid   | num 
--------+------------------+------------+---+---+------------+-----
 202404 | 03201-4000-1.zip | C000000107 |   |   | P000000319 |   1
 202404 | 03201-4000-1.zip | C000000107 |   |   | P000000320 |   2
 202404 | 03201-4000-1.zip | C000000108 |   |   | P000000320 |   1
:

Row count:

sakaitest_nvme5=# SELECT COUNT(*) FROM moj_curves_data;
   count    
------------
 6455800708

PostgreSQL Log

The following output appeared in PostgreSQL-Fri.log:

2024-05-31 08:31:08.181 UTC [1002046] LOG:  GPU0: CUDA stack size expanded 0 -> 5120 bytes

2024-05-31 08:31:53.600 UTC [1000488] LOG:  background worker "parallel worker" (PID 1002093) was terminated by signal 11: Segmentation fault
2024-05-31 08:31:53.600 UTC [1000488] DETAIL:  Failed process was running: SELECT l1.* 
          FROM moj_curves_data l1, moj_curves_data l2
          WHERE l1.ver=l2.ver AND l1.filename=l2.filename AND l1.curve_id=l2.curve_id AND  l1.xml_pid<>l2.xml_pid AND l1.x IS NULL 
          LIMIT 10;
2024-05-31 08:31:53.600 UTC [1000488] LOG:  terminating any other active server processes
2024-05-31 08:31:55.816 UTC [1000488] LOG:  all server processes terminated; reinitializing

2024-05-31 08:31:57.923 UTC [1002106] FATAL:  the database system is in recovery mode
2024-05-31 08:31:57.925 UTC [1002105] LOG:  database system was interrupted; last known up at 2024-05-31 08:29:57 UTC
2024-05-31 08:31:58.021 UTC [1002105] LOG:  database system was not properly shut down; automatic recovery in progress
2024-05-31 08:31:58.032 UTC [1002105] LOG:  redo starts at 414/70BEFCB8
2024-05-31 08:31:58.032 UTC [1002105] LOG:  invalid record length at 414/70BEFCF0: expected at least 24, got 0
2024-05-31 08:31:58.032 UTC [1002105] LOG:  redo done at 414/70BEFCB8 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
2024-05-31 08:31:58.052 UTC [1002107] LOG:  checkpoint starting: end-of-recovery immediate wait
2024-05-31 08:31:58.141 UTC [1002107] LOG:  checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.044 s, sync=0.006 s, total=0.099 s; sync files=2, longest=0.003 s, average=0.003 s; distance=0 kB, estimate=0 kB; lsn=414/70BEFCF0, redo lsn=414/70BEFCF0
2024-05-31 08:31:58.149 UTC [1000488] LOG:  database system is ready to accept connections
2024-05-31 08:31:58.170 UTC [1002111] LOG:  PG-Strom fatbin image is ready: pgstrom-gpucode-V012030-544157df02e31b45cceac10780d49938.fatbin
2024-05-31 08:31:59.437 UTC [1002111] ERROR:  failed on bind('.pg_strom.1000488.gpu0.sock'): Address already in use
2024-05-31 08:31:59.869 UTC [1000488] LOG:  background worker "PG-Strom GPU Service" (PID 1002111) exited with exit code 1

2024-05-31 08:32:04.893 UTC [1002129] LOG:  PG-Strom fatbin image is ready: pgstrom-gpucode-V012030-544157df02e31b45cceac10780d49938.fatbin
2024-05-31 08:32:06.612 UTC [1002129] LOG:  GPU0 workers - 13 startup (with GpuCacheManager), 0 terminate

core dumps

A single execution of the query produced the following two core files.

-rw-------. 1 postgres postgres 36129558528 May 31 09:55 core.1003460
-rw-------. 1 postgres postgres 36221059072 May 31 09:55 core.1003461

The backtrace from each is shown below.

gdb -f /usr/pgsql-16/bin/postgres -c /opt/nvme/core/core.1003460
(gdb) bt
#0  pgstromScanNextTuple (pts=0x25de7f8) at executor.c:1143
#1  0x00007f568b6ffb66 in pgstromExecScanAccess (pts=0x25de7f8) at executor.c:1709
#2  0x00007f568b701ace in pgstromExecScanAccess (pts=0x25de7f8) at executor.c:1680
#3  pgstromExecTaskState (node=0x25de7f8) at executor.c:1906
#4  0x00000000006ae26c in ExecProcNodeInstr (node=0x25de7f8) at execProcnode.c:480
#5  0x00007f568b7187ae in ExecProcNode (node=0x25de7f8) at /usr/pgsql-16/include/server/executor/executor.h:273
#6  execInnerPreloadOneDepth (p_shared_inner_usage=0x7f5693d04388, p_shared_inner_nitems=0x7f5693d04380, istate=<optimized out>, 
    pts=<optimized out>, memcxt=0x25fba00) at gpu_join.c:1869
#7  GpuJoinInnerPreload (pts=pts@entry=0x25d9fa0) at gpu_join.c:2299
#8  0x00007f568b701a18 in __pgstromExecTaskOpenConnection (pts=0x25d9fa0) at executor.c:1814
#9  pgstromExecTaskState (node=0x25d9fa0) at executor.c:1882
#10 0x00000000006ae26c in ExecProcNodeInstr (node=0x25d9fa0) at execProcnode.c:480
#11 0x00000000006a79d3 in ExecProcNode (node=0x25d9fa0) at ../../../src/include/executor/executor.h:273
#12 ExecutePlan (execute_once=<optimized out>, dest=0x256e120, direction=<optimized out>, numberTuples=10, 
    sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x25d9fa0, estate=0x25d9b20)
    at execMain.c:1670
#13 standard_ExecutorRun (queryDesc=0x25c7e40, direction=<optimized out>, count=10, execute_once=<optimized out>)
    at execMain.c:365
#14 0x00000000006abc46 in ParallelQueryMain (seg=0x25011e8, toc=0x7f5693d04000) at execParallel.c:1464
#15 0x000000000058b5dc in ParallelWorkerMain (main_arg=<optimized out>) at parallel.c:1520
#16 0x00000000007b6c8d in StartBackgroundWorker () at bgworker.c:861
#17 0x00000000007bbfe3 in do_start_bgworker (rw=<optimized out>) at postmaster.c:5765
#18 maybe_start_bgworkers () at postmaster.c:5989
#19 0x00000000007bc85f in process_pm_pmsignal () at postmaster.c:5152
#20 ServerLoop () at postmaster.c:1773
#21 0x00000000007bee2d in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x24aa7e0) at postmaster.c:1466
#22 0x0000000000506ddf in main (argc=3, argv=0x24aa7e0) at main.c:198
gdb -f /usr/pgsql-16/bin/postgres -c /opt/nvme/core/core.1003461
(gdb) bt
#0  pgstromScanNextTuple (pts=0x25df508) at executor.c:1143
#1  0x00007f568b6ffb66 in pgstromExecScanAccess (pts=0x25df508) at executor.c:1709
#2  0x00007f568b701ace in pgstromExecScanAccess (pts=0x25df508) at executor.c:1680
#3  pgstromExecTaskState (node=0x25df508) at executor.c:1906
#4  0x00000000006ae26c in ExecProcNodeInstr (node=0x25df508) at execProcnode.c:480
#5  0x00007f568b7187ae in ExecProcNode (node=0x25df508) at /usr/pgsql-16/include/server/executor/executor.h:273
#6  execInnerPreloadOneDepth (p_shared_inner_usage=0x7f5693d04388, p_shared_inner_nitems=0x7f5693d04380, istate=<optimized out>, 
    pts=<optimized out>, memcxt=0x25fc710) at gpu_join.c:1869
#7  GpuJoinInnerPreload (pts=pts@entry=0x25dacb0) at gpu_join.c:2299
#8  0x00007f568b701a18 in __pgstromExecTaskOpenConnection (pts=0x25dacb0) at executor.c:1814
#9  pgstromExecTaskState (node=0x25dacb0) at executor.c:1882
#10 0x00000000006ae26c in ExecProcNodeInstr (node=0x25dacb0) at execProcnode.c:480
#11 0x00000000006a79d3 in ExecProcNode (node=0x25dacb0) at ../../../src/include/executor/executor.h:273
#12 ExecutePlan (execute_once=<optimized out>, dest=0x256e120, direction=<optimized out>, numberTuples=10, 
    sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x25dacb0, estate=0x25da830)
    at execMain.c:1670
#13 standard_ExecutorRun (queryDesc=0x25c8b50, direction=<optimized out>, count=10, execute_once=<optimized out>)
    at execMain.c:365
#14 0x00000000006abc46 in ParallelQueryMain (seg=0x25011e8, toc=0x7f5693d04000) at execParallel.c:1464
#15 0x000000000058b5dc in ParallelWorkerMain (main_arg=<optimized out>) at parallel.c:1520
#16 0x00000000007b6c8d in StartBackgroundWorker () at bgworker.c:861
#17 0x00000000007bbfe3 in do_start_bgworker (rw=<optimized out>) at postmaster.c:5765
#18 maybe_start_bgworkers () at postmaster.c:5989
#19 0x00000000007bc85f in process_pm_pmsignal () at postmaster.c:5152
#20 ServerLoop () at postmaster.c:1773
#21 0x00000000007bee2d in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x24aa7e0) at postmaster.c:1466
#22 0x0000000000506ddf in main (argc=3, argv=0x24aa7e0) at main.c:198
kaigai commented 1 month ago

Does this still reproduce after rebuilding from a make clean?

Some header definitions changed recently, so stale *.o files could cause abnormal behavior.

sakaik commented 1 month ago

Yes. The build script runs make clean, so this result is also from a build done after make clean.

sudo make uninstall PG_CONFIG=/usr/pgsql-16/bin/pg_config
make clean PG_CONFIG=/usr/pgsql-16/bin/pg_config
kaigai commented 1 month ago

Fixed in 65b8aaaa7d5d0d87af4eeb93f9f2942bd1d000cf. It was a mistake in the length calculation when allocating a buffer.

sakaik commented 1 month ago

Unfortunately, it still crashes.

Version

sakaitest_nvme5=# select pgstrom.githash();
                 githash                  
------------------------------------------
 65b8aaaa7d5d0d87af4eeb93f9f2942bd1d000cf

Query execution result

sakaitest_nvme5=# SELECT l1.* 
  FROM moj_curves_data l1, moj_curves_data l2
  WHERE l1.ver=l2.ver AND l1.filename=l2.filename AND l1.curve_id=l2.curve_id AND  l1.xml_pid<>l2.xml_pid AND l1.x IS NULL 
  LIMIT 10;
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
The connection to the server was lost. Attempting reset: Failed.
!?> 

Core dumps

Files

This time, three core files were generated at once. Their contents appear slightly different from last time. Also, for the first one, gdb itself crashed(?), so its output differs from the others.

-rw-------. 1 postgres postgres 37755420672 Jun  1 07:53 core.1016550
-rw-------. 1 postgres postgres 38549897216 Jun  1 07:53 core.1016553
-rw-------. 1 postgres postgres 36691718144 Jun  1 07:53 core.1016554

First core

(gdb) bt
#0  pgstromScanNextTuple (pts=0x245a9c8) at executor.c:1143
#1  0x00007fd328161b66 in pgstromExecScanAccess (pts=0x245a9c8) at executor.c:1709
#2  0x00007fd328163ace in pgstromExecScanAccess (pts=0x245a9c8) at executor.c:1680
#3  pgstromExecTaskState (node=0x245a9c8) at executor.c:1906
#4  0x00007fd32817a7ae in ExecProcNode (node=0x245a9c8) at /usr/pgsql-16/include/server/executor/executor.h:273
#5  execInnerPreloadOneDepth (p_shared_inner_usage=0x7fcad59ab368, p_shared_inner_nitems=0x7fcad59ab360, istate=<optimized out>, 
    pts=<optimized out>, memcxt=0x24adae0) at gpu_join.c:1869
#6  GpuJoinInnerPreload (pts=pts@entry=0x24426e0) at gpu_join.c:2299
#7  0x00007fd328163a18 in __pgstromExecTaskOpenConnection (pts=0x24426e0) at executor.c:1814
#8  pgstromExecTaskState (node=0x24426e0) at executor.c:1882
#9  0x00000000006c174a in ExecProcNode (node=0x24426e0) at ../../../src/include/executor/executor.h:273
#10 gather_getnext (gatherstate=0x2441f30) at nodeGather.c:295
#11 ExecGather (pstate=0x2441f30) at nodeGather.c:227
#12 0x00000000006cc203 in ExecProcNode (node=0x2441f30) at ../../../src/include/executor/executor.h:273
#13 ExecLimit (pstate=0x2441c58) at nodeLimit.c:96
#14 0x00000000006a79d3 in ExecProcNode (node=0x2441c58) at ../../../src/include/executor/executor.h:273
#15 ExecutePlan (execute_once=<optimized out>, dest=0x24544a0, direction=<optimized out>, numberTuples=0, 
    sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x2441c58, estate=0x2441a10)
    at execMain.c:1670
#16 standard_ExecutorRun (queryDesc=0x23577e0, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:365
#17 0x000000000084469e in PortalRunSelect (portal=0x23c17b0, forward=<optimized out>, count=0, dest=<optimized out>)
    at pquery.c:924
#18 0x00000000008459d0 in PortalRun (
../../gdb/dwarf2read.c:5272: internal-error: compunit_symtab* dw2_find_pc_sect_compunit_symtab(objfile*, bound_minimal_symbol, CORE_ADDR, obj_section*, int): Assertion `result != NULL' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
Quit this debugging session? (y or n) y

This is a bug, please report it.  For instructions, see:
<http://www.gnu.org/software/gdb/bugs/>.

../../gdb/dwarf2read.c:5272: internal-error: compunit_symtab* dw2_find_pc_sect_compunit_symtab(objfile*, bound_minimal_symbol, CORE_ADDR, obj_section*, int): Assertion `result != NULL' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
Create a core file of GDB? (y or n) y
Aborted (core dumped)

Second core

(gdb) bt
#0  pgstromScanNextTuple (pts=0x241a848) at executor.c:1143
#1  0x00007fd328161b66 in pgstromExecScanAccess (pts=0x241a848) at executor.c:1709
#2  0x00007fd328163ace in pgstromExecScanAccess (pts=0x241a848) at executor.c:1680
#3  pgstromExecTaskState (node=0x241a848) at executor.c:1906
#4  0x00007fd32817a7ae in ExecProcNode (node=0x241a848) at /usr/pgsql-16/include/server/executor/executor.h:273
#5  execInnerPreloadOneDepth (p_shared_inner_usage=0x7fd330767368, p_shared_inner_nitems=0x7fd330767360, istate=<optimized out>, 
    pts=<optimized out>, memcxt=0x2437a50) at gpu_join.c:1869
#6  GpuJoinInnerPreload (pts=pts@entry=0x2415ff0) at gpu_join.c:2299
#7  0x00007fd328163a18 in __pgstromExecTaskOpenConnection (pts=0x2415ff0) at executor.c:1814
#8  pgstromExecTaskState (node=0x2415ff0) at executor.c:1882
#9  0x00000000006a79d3 in ExecProcNode (node=0x2415ff0) at ../../../src/include/executor/executor.h:273
#10 ExecutePlan (execute_once=<optimized out>, dest=0x23aacc0, direction=<optimized out>, numberTuples=10, 
    sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x2415ff0, estate=0x2415b70)
    at execMain.c:1670
#11 standard_ExecutorRun (queryDesc=0x2403e90, direction=<optimized out>, count=10, execute_once=<optimized out>)
    at execMain.c:365
#12 0x00000000006abc46 in ParallelQueryMain (seg=0x23431e8, toc=0x7fd330767000) at execParallel.c:1464
#13 0x000000000058b5dc in ParallelWorkerMain (main_arg=<optimized out>) at parallel.c:1520
#14 0x00000000007b6c8d in StartBackgroundWorker () at bgworker.c:861
#15 0x00000000007bbfe3 in do_start_bgworker (rw=<optimized out>) at postmaster.c:5765
#16 maybe_start_bgworkers () at postmaster.c:5989
#17 0x00000000007bc85f in process_pm_pmsignal () at postmaster.c:5152
#18 ServerLoop () at postmaster.c:1773
#19 0x00000000007bee2d in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x22ec7e0) at postmaster.c:1466
#20 0x0000000000506ddf in main (argc=3, argv=0x22ec7e0) at main.c:198

Third core

(gdb) bt
#0  pgstromScanNextTuple (pts=0x241a848) at executor.c:1143
#1  0x00007fd328161b66 in pgstromExecScanAccess (pts=0x241a848) at executor.c:1709
#2  0x00007fd328163ace in pgstromExecScanAccess (pts=0x241a848) at executor.c:1680
#3  pgstromExecTaskState (node=0x241a848) at executor.c:1906
#4  0x00007fd32817a7ae in ExecProcNode (node=0x241a848) at /usr/pgsql-16/include/server/executor/executor.h:273
#5  execInnerPreloadOneDepth (p_shared_inner_usage=0x7fd330767368, p_shared_inner_nitems=0x7fd330767360, istate=<optimized out>, 
    pts=<optimized out>, memcxt=0x2437a50) at gpu_join.c:1869
#6  GpuJoinInnerPreload (pts=pts@entry=0x2415ff0) at gpu_join.c:2299
#7  0x00007fd328163a18 in __pgstromExecTaskOpenConnection (pts=0x2415ff0) at executor.c:1814
#8  pgstromExecTaskState (node=0x2415ff0) at executor.c:1882
#9  0x00000000006a79d3 in ExecProcNode (node=0x2415ff0) at ../../../src/include/executor/executor.h:273
#10 ExecutePlan (execute_once=<optimized out>, dest=0x23aacc0, direction=<optimized out>, numberTuples=10, 
    sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x2415ff0, estate=0x2415b70)
    at execMain.c:1670
#11 standard_ExecutorRun (queryDesc=0x2403e90, direction=<optimized out>, count=10, execute_once=<optimized out>)
    at execMain.c:365
#12 0x00000000006abc46 in ParallelQueryMain (seg=0x23431e8, toc=0x7fd330767000) at execParallel.c:1464
#13 0x000000000058b5dc in ParallelWorkerMain (main_arg=<optimized out>) at parallel.c:1520
#14 0x00000000007b6c8d in StartBackgroundWorker () at bgworker.c:861
#15 0x00000000007bbfe3 in do_start_bgworker (rw=<optimized out>) at postmaster.c:5765
#16 maybe_start_bgworkers () at postmaster.c:5989
#17 0x00000000007bc85f in process_pm_pmsignal () at postmaster.c:5152
#18 ServerLoop () at postmaster.c:1773
#19 0x00000000007bee2d in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x22ec7e0) at postmaster.c:1466
#20 0x0000000000506ddf in main (argc=3, argv=0x22ec7e0) at main.c:198
(gdb) 
sakaik commented 1 month ago

Comparing them side by side, the only difference is that the frames which appeared last time,

#4  0x00000000006ae26c in ExecProcNodeInstr (node=0x25df508) at execProcnode.c:480
#10 0x00000000006ae26c in ExecProcNodeInstr (node=0x25dacb0) at execProcnode.c:480

are not present this time.

kaigai commented 4 weeks ago

The result-buffer overuse problem is addressed by the patch above.

The problem of holding too many intermediate results is still to be worked on.

kaigai commented 1 week ago

I have attempted a fix for the oversized result buffer problem in de1450b3f5d305eed61fa497abbb5a05c72aee7f. Does the problem of being killed by the OOM Killer still occur with the latest version?

sakaik commented 1 week ago

Unfortunately, even with the latest version (dae3722065b275294990e5dde16133c38d16327e), the process was killed by the OOM Killer. I will send you the OOM Killer log that appeared in /var/log/messages separately.

sakaitest_nvme5=# SELECT l1.*                             
  FROM moj_curves_data l1, moj_curves_data l2
  WHERE l1.ver=l2.ver AND l1.filename=l2.filename AND l1.curve_id=l2.curve_id AND  l1.xml_pid<>l2.xml_pid AND l1.x IS NULL 
LIMIT 10;
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
The connection to the server was lost. Attempting reset: Failed.
!?> 
kaigai commented 2 hours ago

I looked into it. Unlike before, it is not dying from repeated retries inside the GPU-Service. The GPU-Service now behaves such that, whenever its buffer is about to overflow, it promptly returns data to the PostgreSQL backend. However, for this JOIN, building GpuJoin's inner buffer requires reading the entire 508 GB moj_curves_data table into memory. As a result, the PostgreSQL backend process receiving these frequent responses from the GPU-Service runs out of memory and is eventually killed by the OOM Killer.
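A quick back-of-the-envelope check (a sketch using only the numbers already reported in this thread: the COUNT(*) result above and the width=69 from the EXPLAIN output below) confirms that the inner side cannot plausibly fit in memory:

```python
# Rough size estimate of GpuJoin's inner buffer for this self-join.
# Numbers come from this thread; actual PG-Strom buffer layout adds
# per-tuple overhead on top of this, so this is a lower bound.
rows = 6_455_800_708          # SELECT COUNT(*) FROM moj_curves_data
avg_width = 69                # "width=69" in the EXPLAIN output
payload_gb = rows * avg_width / 10**9
print(f"inner-side payload: ~{payload_gb:.0f} GB")  # ~445 GB before overhead
```

Even the bare 445 GB payload, before tuple headers and hash-table overhead, is in the same ballpark as the 508 GB on-disk table size and far beyond typical host RAM.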

This is unavoidable, so if anything,

with this query it would be a better strategy to build a B-tree index on moj_curves_data.ver and let the planner use a Nested Loop instead.

sakaitest_nvme5=# explain SELECT l1.*
  FROM moj_curves_data l1, moj_curves_data l2
  WHERE l1.ver=l2.ver AND l1.filename=l2.filename AND l1.curve_id=l2.curve_id AND  l1.xml_pid<>l2.xml_pid AND l1.x IS NULL
  LIMIT 10;
                                                                                                                 QUERY PLAN

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=192818108.57..192818108.67 rows=10 width=69)
   ->  Custom Scan (GpuJoin) on moj_curves_data l2  (cost=192818108.57..689100244.74 rows=47256781053 width=69)
         GPU Projection: l1.ver, l1.filename, l1.curve_id, l1.x, l1.y, l1.xml_pid, l1.num
         GPU Join Quals [1]: ((l1.xml_pid)::text <> (l2.xml_pid)::text), ((l1.ver)::text = (l2.ver)::text), ((l1.filename)::text = (l2.filename)::text), ((l1.curve_id)::text = (l2.curve_id)::text) ... [nrows: 6456368000 -> 47256780000]
         GPU Outer Hash [1]: (l2.ver)::text, (l2.filename)::text, (l2.curve_id)::text
         GPU Inner Hash [1]: (l1.ver)::text, (l1.filename)::text, (l1.curve_id)::text
         GPU-Direct SQL: enabled (GPU-0)
         ->  Custom Scan (GpuScan) on moj_curves_data l1  (cost=100.00..80712858.66 rows=6406008566 width=69)
               GPU Projection: ver, filename, curve_id, x, y, xml_pid, num
               GPU Scan Quals: (x IS NULL) [rows: 6456368000 -> 6406009000]
               GPU-Direct SQL: enabled (GPU-0)
(11 rows)
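A minimal sketch of the suggested workaround (the index name is illustrative, and disabling hash joins for the session is just one way to nudge the planner toward a Nested Loop; it is not part of the fix above):

```sql
-- B-tree index on the join key suggested in the comment above
CREATE INDEX moj_curves_data_ver_idx ON moj_curves_data (ver);

-- Optionally discourage the hash-join plan for this session only
SET enable_hashjoin = off;
```

After this, re-running EXPLAIN on the query should show whether the planner picks a Nested Loop with an index scan on the inner side instead of building the huge GpuJoin inner hash buffer.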