dataegret / pgcompacttable

BSD 3-Clause "New" or "Revised" License

Create index takes a long time!! #18

Open MathCharton opened 6 years ago

MathCharton commented 6 years ago

Hi! I'm testing pgcompacttable with this command: `perl pgcompacttable -v -f -d mydb_db -U toto -W password -t offer -h 127.0.0.1`

```
[Mon Apr 23 11:42:27 2018] (mydb_db) Connecting to database
[Mon Apr 23 11:42:27 2018] (mydb_db) Postgres backend pid: 11553
[Mon Apr 23 11:42:27 2018] (mydb_db) Handling tables. Attempt 1
[Mon Apr 23 11:42:27 2018] (mydb_db:public.offer) Start handling table public.offer
[Mon Apr 23 11:42:46 2018] (mydb_db:public.offer) Vacuum initial: 75312 pages left, duration 18.658 seconds.
[Mon Apr 23 11:42:47 2018] (mydb_db:public.offer) Bloat statistics with pgstattuple: duration 0.843 seconds.
[Mon Apr 23 11:42:47 2018] (mydb_db:public.offer) Statistics: 75312 pages (470320 pages including toasts and indexes), it is expected that ~19.610% (14771 pages) can be compacted with the estimated space saving being 115.399MB.
[Mon Apr 23 11:42:47 2018] (mydb_db:public.offer) Processing forced.
[Mon Apr 23 11:42:47 2018] (mydb_db:public.offer) Update by column: allow_quote_requests.
[Mon Apr 23 11:42:47 2018] (mydb_db:public.offer) Set pages/round: 5.
[Mon Apr 23 11:42:47 2018] (mydb_db:public.offer) Set pages/vacuum: 1507.
[Mon Apr 23 11:43:49 2018] (mydb_db:public.offer) Progress: 72%, 10770 pages completed.
[Mon Apr 23 11:45:13 2018] (mydb_db:public.offer) Vacuum final: 61112 pages left, duration 35.200 seconds.
[Mon Apr 23 11:45:13 2018] (mydb_db:public.offer) Analyze final: duration 0.448 second.
[Mon Apr 23 11:45:13 2018] (mydb_db:public.offer) Bloat statistics with pgstattuple: duration 0.430 seconds.
[] waiting....
```

And when I check pg_stat_activity, I see that it is running a CREATE INDEX:

```
query | CREATE INDEX CONCURRENTLY pgcompact_index_11552 ON public.offer USING gin (additional_fields_values)
```

My table is 500 MB in size, so is it normal for a CREATE INDEX to take this long (> 3 h so far)?

Thanks

Melkij commented 6 years ago

Hello. pgcompacttable is just waiting for the query to complete... Can you show the output of

```
select state, wait_event, wait_event_type, query from pg_stat_activity where pid = 11553;
```

I think the CREATE INDEX is waiting on some lock or on another, older transaction.

MathCharton commented 6 years ago

Hello !

```
datid            | 16401
datname          | mydb_db
pid              | 11553
usesysid         | 193136225
usename          | toto
application_name |
client_addr      | 127.0.0.1
client_hostname  |
client_port      | 59417
backend_start    | 2018-04-23 11:42:27.904933+02
xact_start       | 2018-04-23 11:45:13.936807+02
query_start      | 2018-04-23 11:45:13.936807+02
state_change     | 2018-04-23 11:45:13.936809+02
waiting          | f
state            | active
backend_xid      | 983705956
backend_xmin     | 983705956
query            | CREATE INDEX CONCURRENTLY pgcompact_index_11552 ON public.offer USING gin (additional_fields_values)
```

Melkij commented 6 years ago

Hmm, that's strange. In any case, this is not a pgcompacttable issue; we are just waiting for PostgreSQL to complete the query. But I have no idea what your PostgreSQL is doing. What does disk utilization look like? CPU? Any abnormal records in the logs?
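For context, CREATE INDEX CONCURRENTLY cannot finish while any transaction that started before it is still running, so an idle-in-transaction session elsewhere can stall the build indefinitely. A minimal diagnostic sketch (assuming PostgreSQL 9.4+ for `backend_xmin`; the pid 11553 is taken from the log above):

```sql
-- Older transactions still holding a snapshot, which the
-- concurrent index build must wait out before it can finish:
SELECT pid, state, xact_start, backend_xmin, query
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
  AND pid <> 11553
ORDER BY xact_start;

-- Heavyweight locks held or awaited by the index-building backend:
SELECT locktype, relation::regclass, mode, granted
FROM pg_locks
WHERE pid = 11553;
```

If the first query returns long-lived transactions, ending them (or waiting for them) usually unblocks the index build; an ungranted row in the second query points to a plain lock conflict instead.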

MathCharton commented 6 years ago

OK, thanks for answering me! One last question: is it possible to run pgcompacttable on the pg_largeobject table?

Melkij commented 6 years ago

I have never tested that before. On my local database it works and pg_largeobject was compacted, but the index was not rebuilt.

MathCharton commented 6 years ago

```
[Apr 26 03:23:59 ] (my_db:pg_catalog.pg_largeobject) Progress: 100%, 13983750 pages completed.
[Apr 26 10:56:06 ] (my_db:pg_catalog.pg_largeobject) Vacuum final: cannot clean 9438448 pages, 9806464 pages left, duration 27113.641 seconds.
[Apr 26 11:03:53 ] (my_db:pg_catalog.pg_largeobject) Analyze final: duration 466.157 second.
[Apr 26 11:30:02 ] (my_db:pg_catalog.pg_largeobject) Bloat statistics with pgstattuple: duration 1569.736 seconds.
[Apr 26 11:30:02 ] (my_db:pg_catalog.pg_largeobject) Skipping reindex: pg_catalog.pg_largeobject_loid_pn_index, can not reindex without heavy locks because of its dependencies, reindexing is up to you.
Use of uninitialized value $message_args[5] in sprintf at pgcompacttable line 193.
[Apr 26 11:30:02 ] (my_db:pg_catalog.pg_largeobject) Reindex queries forced: pg_catalog.pg_largeobject_loid_pn_index, initial size 464739 pages (3.546GB), will be reduced by 0% (0.000B)
[Apr 26 11:30:02 ] (my_db:pg_catalog.pg_largeobject) CREATE UNIQUE INDEX CONCURRENTLY pgcompact_index_32743 ON pg_catalog.pg_largeobject USING btree (loid, pageno); --my_db
[Apr 26 11:30:02 ] (my_db:pg_catalog.pg_largeobject) BEGIN; SET LOCAL statement_timeout TO 1000; ALTER INDEX "pg_catalog"."pg_largeobject_loid_pn_index" RENAME TO "tmp_92786847"; ALTER INDEX "pg_catalog".pgcompact_index_32743 RENAME TO "pg_largeobject_loid_pn_index"; END; DROP INDEX CONCURRENTLY "pg_catalog"."tmp_92786847";; --my_db
[Apr 26 11:30:02 ] (my_db:pg_catalog.pg_largeobject) Processing incomplete.
[Apr 26 11:30:02 ] (my_db:pg_catalog.pg_largeobject) Processing results: 9806464 pages (17886311 pages including toasts and indexes), size has been reduced by 34.696GB (34.705GB including toasts and indexes) in total. This attempt has been initially expected to compact ~95% more space (9343930 pages, 71.289GB)
[Apr 26 11:30:02 ] (my_db:pg_catalog.pg_largeobject) Finish handling table pg_catalog.pg_largeobject
[Apr 26 11:30:02 ] (my_db:pg_catalog.pg_largeobject) Handling tables. Attempt 2
[Apr 26 11:30:02 ] (my_db:pg_catalog.pg_largeobject) Start handling table pg_catalog.pg_largeobject
[Apr 27 07:45:05 ] (my_db:pg_catalog.pg_largeobject) Vacuum initial: 368011 pages left, duration 72902.961 seconds.
[Apr 27 07:45:36 ] (my_db:pg_catalog.pg_largeobject) Bloat statistics with pgstattuple: duration 31.168 seconds.
[Apr 27 07:45:36 ] (my_db:pg_catalog.pg_largeobject) Statistics: 368011 pages (8445393 pages including toasts and indexes), it is expected that ~4.160% (15326 pages) can be compacted with the estimated space saving being 119.738MB.
[Apr 27 07:45:36 ] (my_db:pg_catalog.pg_largeobject) Processing forced.
[Apr 27 07:45:36 ] (my_db:pg_catalog.pg_largeobject) Update by column: loid.
[Apr 27 07:45:36 ] (my_db:pg_catalog.pg_largeobject) Set pages/round: 5.
[Apr 27 07:45:36 ] (my_db:pg_catalog.pg_largeobject) Set pages/vacuum: 7361.
[Apr 27 07:46:18 ] (my_db:pg_catalog.pg_largeobject) Vacuum final: cannot clean 193 pages, 368013 pages left, duration 41.453 seconds.
[Apr 27 07:46:18 ] (my_db:pg_catalog.pg_largeobject) Analyze final: duration 0.204 second.
[Apr 27 07:46:22 ] (my_db:pg_catalog.pg_largeobject) Bloat statistics with pgstattuple: duration 3.719 seconds.
[Apr 27 07:46:22 ] (my_db:pg_catalog.pg_largeobject) Skipping reindex: pg_catalog.pg_largeobject_loid_pn_index, can not reindex without heavy locks because of its dependencies, reindexing is up to you.
Use of uninitialized value $message_args[5] in sprintf at pgcompacttable line 193.
[Apr 27 07:46:22 ] (my_db:pg_catalog.pg_largeobject) Reindex queries forced: pg_catalog.pg_largeobject_loid_pn_index, initial size 464739 pages (3.546GB), will be reduced by 0% (0.000B)
[Apr 27 07:46:22 ] (my_db:pg_catalog.pg_largeobject) CREATE UNIQUE INDEX CONCURRENTLY pgcompact_index_32743 ON pg_catalog.pg_largeobject USING btree (loid, pageno); --my_db
[Apr 27 07:46:22 ] (my_db:pg_catalog.pg_largeobject) BEGIN; SET LOCAL statement_timeout TO 1000; ALTER INDEX "pg_catalog"."pg_largeobject_loid_pn_index" RENAME TO "tmp_328920406"; ALTER INDEX "pg_catalog".pgcompact_index_32743 RENAME TO "pg_largeobject_loid_pn_index"; END; DROP INDEX CONCURRENTLY "pg_catalog"."tmp_328920406";; --my_db
[Apr 27 07:46:22 ] (my_db:pg_catalog.pg_largeobject) Processing complete.
[Apr 27 07:46:22 ] (my_db:pg_catalog.pg_largeobject) Processing results: 368013 pages left (8445395 pages including toasts and indexes), size reduced by 106.706GB (106.734GB including toasts and indexes) in total.
[Apr 27 07:46:22 ] (my_db:pg_catalog.pg_largeobject) Finish handling table pg_catalog.pg_largeobject
[Apr 27 07:46:22 ] (my_db) Processing complete.
[Apr 27 07:46:22 ] (my_db) Processing results: size reduced by 106.706GB (106.734GB including toasts and indexes) in total.
[Apr 27 07:46:22 ] (my_db) Disconnecting from database
[Apr 27 07:46:22 ] Processing complete: 2 retries to process has been done
[Apr 27 07:46:22 ] Processing results: size reduced by 106.706GB (106.734GB including toasts and indexes) in total, 106.706GB (106.734GB) my_db.
```
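The "Reindex queries forced" lines above are the non-blocking index swap that pgcompacttable would normally run itself; when it skips the reindex, it prints them for you to run manually. A commented sketch of that sequence (index names taken from the log above; `tmp_old_index` is a placeholder for the random temporary name the tool generates):

```sql
-- Build a fresh copy of the bloated index without blocking writes.
-- CREATE INDEX CONCURRENTLY cannot run inside a transaction block,
-- so it is issued on its own.
CREATE UNIQUE INDEX CONCURRENTLY pgcompact_index_32743
    ON pg_catalog.pg_largeobject USING btree (loid, pageno);

-- Swap the names atomically. The renames need a short exclusive
-- lock, so a 1-second statement_timeout bounds how long we can
-- stall other sessions if something blocks us.
BEGIN;
SET LOCAL statement_timeout TO 1000;
ALTER INDEX "pg_catalog"."pg_largeobject_loid_pn_index" RENAME TO "tmp_old_index";
ALTER INDEX "pg_catalog".pgcompact_index_32743 RENAME TO "pg_largeobject_loid_pn_index";
END;

-- Drop the old (renamed) index without blocking concurrent reads.
DROP INDEX CONCURRENTLY "pg_catalog"."tmp_old_index";
```

Note that pg_largeobject is a system catalog, so altering its index may additionally require `allow_system_table_mods` depending on your PostgreSQL version; that appears to be why the tool refuses to do the swap itself and leaves it to the operator.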

Do you know why it is skipping the reindex of pg_catalog.pg_largeobject_loid_pn_index at [Fri Apr 27 07:46:22 2018], but at [Thu Apr 26 11:30:02 2018] it is doing: `ALTER INDEX "pg_catalog".pgcompact_index_32743 RENAME TO "pg_largeobject_loid_pn_index";`?