jieguolove opened this issue 1 year ago
I can only query normally after restarting heavydb. Is there a heavydb parameter to avoid this problem?
```
[root@test ~]# systemctl stop heavydb
[root@test ~]# systemctl start heavydb
[root@test ~]# su - heavyai
Last login: Sun Oct  8 15:05:08 CST 2023 on pts/0
[heavyai@test ~]$ heavysql -p admin -db hblt
Thrift: Sun Oct 8 15:26:43 2023 TSocket::open() connect() <Host: localhost Port: 6274>: Connection refused
User admin connected to database hblt
heavysql> \memory_summary
HeavyDB Server CPU Memory Summary:
            MAX            USE      ALLOCATED           FREE
    12837.02 MB        0.00 MB        0.00 MB        0.00 MB

HeavyDB Server GPU Memory Summary:
[GPU]            MAX            USE      ALLOCATED           FREE

heavysql> select count(*) from DAILYSESSION_TABLE;
EXPR$0
251343812
heavysql> \timing
heavysql> \memory_summary
HeavyDB Server CPU Memory Summary:
            MAX            USE      ALLOCATED           FREE
    12837.02 MB        0.00 MB        0.00 MB        0.00 MB

HeavyDB Server GPU Memory Summary:
[GPU]            MAX            USE      ALLOCATED           FREE

heavysql> select * from DAILYSESSION_TABLE limit 5;
DAYS|ACCTSESSIONID|LOGINNAME|BEGINTIME|ENDTIME|RASCLIENT|RECORDTYPE|NASPORT|PHONE|ACCTSTATUSTYPE|ACCTINPUTOCTETS|ACCTOUTPUTOCTETS|ACCTSESSIONTIME|ACCTAUTHENTIC|FRAMEDIP|PUBLICIP|TEMP1|TEMP2|MAC|STARTPORT|ENDPORT|FRAMED_IPV6_ADDRESS|DELEGATED_IPV6_START|DELEGATED_IPV6_END|ACCT_IPV6_INPUT_OCTETS|ACCT_IPV6_OUTPUT_OCTETS
......
5 rows returned.
Execution time: 22699 ms, Total time: 22745 ms
heavysql> select * from DAILYSESSION_TABLE limit 10;
DAYS|ACCTSESSIONID|LOGINNAME|BEGINTIME|ENDTIME|RASCLIENT|RECORDTYPE|NASPORT|PHONE|ACCTSTATUSTYPE|ACCTINPUTOCTETS|ACCTOUTPUTOCTETS|ACCTSESSIONTIME|ACCTAUTHENTIC|FRAMEDIP|PUBLICIP|TEMP1|TEMP2|MAC|STARTPORT|ENDPORT|FRAMED_IPV6_ADDRESS|DELEGATED_IPV6_START|DELEGATED_IPV6_END|ACCT_IPV6_INPUT_OCTETS|ACCT_IPV6_OUTPUT_OCTETS
......
10 rows returned.
Execution time: 42 ms, Total time: 141 ms
heavysql> \memory_summary
HeavyDB Server CPU Memory Summary:
            MAX            USE      ALLOCATED           FREE
    12837.02 MB     3723.14 MB     4096.00 MB      372.86 MB

HeavyDB Server GPU Memory Summary:
[GPU]            MAX            USE      ALLOCATED           FREE

heavysql> \t
CUSTOMER_TABLE
RAD_DETAIL
DAILYSESSION_TABLE
heavysql> select * from RAD_DETAIL limit 5;
USERID|SERVICE_TYPE|ACCOUNT_INFO|FRAMED_PROTOCOL|FRAMED_IP_ADDRESS|FRAMED_IP_NETMASK|IDLE_TIMEOUT|CLASS|SESSION_TIMEOUT|CISCO_AVPAIR|IP_TYPE|ASSIGN_IP_POOL|START_TIME|END_TIME
......
5 rows returned.
Execution time: 2460 ms, Total time: 2480 ms
heavysql> select * from CUSTOMER_TABLE limit 5;
......
5 rows returned.
Execution time: 9208 ms, Total time: 9209 ms
heavysql> select * from CUSTOMER_TABLE limit 10;
USERID|USERNAME|USERNAME_E|LOGINNAME|PASSWD|PIDTYPE|PIDNO|RATIOTYPE|PAYAPPROACH|PAYINFO|BANK_NAME|CUSTOMER_COUNTS_OWNER|BANK_ACCOUNTS|CREDIT|USERTYPE|PROFESSION|COMPANY|AREAID|CONTACT|ADDRESS|POSTCODE|PHONE|MOBILE|FAX|BRANCH|OPERATORNAME|SERIALNO|CREATETIME|BEGINTIME|VALIDTIME|MAXCHARGE|COMMENTTEXT|STATE|MAXCONNECTION|BINDINGPHONE|EMAIL|DUETIME|LOCKTIME|PROFILE|CONTEXTID|MARKETMAN|EQUIPSOURCE|MAINTAIN|ALIAS|DISCNTID|ACCOUNTTYPE|CONNECTTYPE|CITYCODE|OPERATORBRANCH|BIGBRANCH|LOCKTYPE|DISCNTTIME|CARDID|ARID|NID|OPER97
......
10 rows returned.
Execution time: 21 ms, Total time: 22 ms
heavysql> \memory_summary
HeavyDB Server CPU Memory Summary:
            MAX            USE      ALLOCATED           FREE
    12837.02 MB     7375.76 MB     8192.00 MB      816.24 MB

HeavyDB Server GPU Memory Summary:
[GPU]            MAX            USE      ALLOCATED           FREE

[heavyai@test storage]$ free -g
              total        used        free      shared  buff/cache   available
Mem:             15          11           0           1           4           2
Swap:             7           5           1
```
Hi,
You can free up the memory used by the CPU cache with the SQL command `ALTER SESSION CLEAR CPU MEMORY;`
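As a ready-to-paste statement (after clearing, the USE figure reported by `\memory_summary` should drop back toward zero):

```sql
-- Free the chunks currently cached in the CPU buffer pool (command suggested above),
-- then re-check usage with \memory_summary from the heavysql prompt.
ALTER SESSION CLEAR CPU MEMORY;
```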
That said, on a system with such limited memory it's essential to review the memory parameters. By default, 80% of system memory is allocated to the CPU cache, which can exhaust the heap memory of the process and result in Out-of-Memory (OOM) errors.
On systems with so few resources, I recommend limiting the memory available to the CPU cache with the `cpu-buffer-mem-bytes` parameter, setting it to 6GB (6442450944 bytes). This leaves the rest to the operating system, the OS-managed filesystem cache, and the heap memory of the process.
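A sketch of what that looks like in the server configuration, assuming the usual heavy.conf location under /var/lib/heavyai (adjust the path to your installation); the change takes effect after restarting the service with `systemctl restart heavydb`:

```
# /var/lib/heavyai/heavy.conf  (path assumed; adjust to your install)
# Cap the CPU buffer pool at 6 GB instead of the default ~80% of system RAM
cpu-buffer-mem-bytes = 6442450944
```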
Let me provide some insight into how data is organized in our tables and how much of it needs to be loaded into memory to handle your simple queries.
In our tables, each column is subdivided into chunks of N elements, a size referred to as the fragment size.
The response time depends on several factors, including the number of columns, the fragment size (or the total number of records in the table if it's less than the fragment size), and the speed of the disks.
For example, if you have a very fast disk subsystem with a throughput of 10GB/sec, the disk transfer of the 14 chunks totaling 1.9GB takes roughly 190 ms.
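As a rough back-of-the-envelope check (assuming HeavyDB's default fragment size of 32 million rows and an average of about 4 bytes per value, both assumptions that may not match your schema exactly), each projected column contributes one chunk for the fragment being read:

```
14 columns x 32,000,000 rows/fragment x 4 bytes/value ≈ 1.8 GB to read from disk
1.9 GB / 10 GB/s                                      ≈ 0.19 s of pure disk transfer
```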
```
heavysql> select L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,
L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,
L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE
from lineitem limit 10;
[CUT]
10 rows returned.
Execution time: 680 ms, Total time: 718 ms
heavysql> \memory_summary
HeavyDB Server CPU Memory Summary:
            MAX            USE      ALLOCATED           FREE
    51316.35 MB     1922.61 MB     4096.00 MB     2173.39 MB
```
The same query on the same table with a fragment size of 2 million takes just 71ms to load the data and uses less memory.
```
HeavyDB Server CPU Memory Summary:
            MAX            USE      ALLOCATED           FREE
    51316.35 MB      110.63 MB     4096.00 MB     3985.37 MB
```
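For reference, the fragment size is set per table in the WITH clause at creation time; an existing table has to be recreated and reloaded to change it. A minimal sketch with a hypothetical, cut-down schema (not the real lineitem definition):

```sql
-- Hypothetical shortened schema, only to show where fragment_size goes;
-- the real lineitem table has more columns.
CREATE TABLE lineitem_frag2m (
  L_ORDERKEY  BIGINT,
  L_QUANTITY  INTEGER,
  L_SHIPDATE  DATE,
  L_SHIPMODE  TEXT ENCODING DICT(32)
) WITH (FRAGMENT_SIZE = 2000000);
```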
It's important to note that, unlike Oracle and other row-oriented databases, HeavyDB is columnar: when only specific columns are needed, the query runs faster if only those columns appear in the SELECT clause. So if you ask the server to return only 4 of the available 14 columns, the response time is shorter and less memory is consumed.
```
heavysql> select L_ORDERKEY,L_SHIPDATE,L_SHIPINSTRUCT,L_SHIPMODE from lineitem limit 10;
[CUT]
Execution time: 71 ms, Total time: 110 ms
heavysql> \memory_summary
HeavyDB Server CPU Memory Summary:
            MAX            USE      ALLOCATED           FREE
    51316.35 MB      244.14 MB     4096.00 MB     3851.86 MB
```
Hope this helps, but I encourage you to test aggregate queries rather than simple projection queries.
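For example, a low-cardinality aggregate over your session table only needs to load the grouped column rather than every column of the table, so it should fit comfortably in the CPU cache (RASCLIENT is taken from the column header in your output above; substitute any column you care about):

```sql
-- Example aggregate: only the RASCLIENT chunks are loaded, not the whole table
SELECT RASCLIENT, COUNT(*) AS sessions
FROM DAILYSESSION_TABLE
GROUP BY RASCLIENT
ORDER BY sessions DESC
LIMIT 10;
```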
Candido
When there isn't enough memory, how can queries still be executed normally? After this happens, every query on every table in the database fails like this: ERR_OUT_OF_CPU_MEM: Not enough host memory to execute the query
thanks!

```
[heavyai@test storage]$ free -g
              total        used        free      shared  buff/cache   available
Mem:             15           9           0           1           5           3
Swap:             7           7           0

heavysql> \version
HeavyDB Server Version: 7.0.0-20230926-d315676d23
heavysql> \t
CUSTOMER_TABLE
RAD_DETAIL
DAILYSESSION_TABLE
heavysql> COPY DAILYSESSION_TABLE FROM '/var/lib/heavyai/storage/import/sample_datasets/dailysession_table.csv';
Result
Loaded: 251343812 recs, Rejected: 0 recs in 1352.280000 secs
1 rows returned.
Execution time: 1352442 ms, Total time: 1352484 ms
heavysql>
heavysql> \t
CUSTOMER_TABLE
RAD_DETAIL
DAILYSESSION_TABLE
heavysql> select * from RAD_DETAIL limit 10;
USERID|SERVICE_TYPE|ACCOUNT_INFO|FRAMED_PROTOCOL|FRAMED_IP_ADDRESS|FRAMED_IP_NETMASK|IDLE_TIMEOUT|CLASS|SESSION_TIMEOUT|CISCO_AVPAIR|IP_TYPE|ASSIGN_IP_POOL|START_TIME|END_TIME
......
10 rows returned.
Execution time: 707 ms, Total time: 1080 ms
heavysql> select count(*) from DAILYSESSION_TABLE;
EXPR$0
251343812
1 rows returned.
Execution time: 1080 ms, Total time: 1313 ms
heavysql> select * from DAILYSESSION_TABLE limit 3;
ERR_OUT_OF_CPU_MEM: Not enough host memory to execute the query
heavysql> \memory_summary
HeavyDB Server CPU Memory Summary:
            MAX            USE      ALLOCATED           FREE
    12837.02 MB     7375.76 MB     8192.00 MB      816.24 MB

HeavyDB Server GPU Memory Summary:
[GPU]            MAX            USE      ALLOCATED           FREE

heavysql> select * from DAILYSESSION_TABLE limit 3;
ERR_OUT_OF_CPU_MEM: Not enough host memory to execute the query
```