keithf4 / pg_bloat_check

Bloat check script for PostgreSQL

Unexpected object type encountered in stats table #14

Closed: henriavelabarbe closed this issue 4 years ago

henriavelabarbe commented 5 years ago

Hi, I encountered the following error on a PostgreSQL 9.6.9 database.

Traceback (most recent call last):
  File "./pg_bloat_check.py", line 671, in <module>
    print("Unexpected object type encountered in stats table. Please report this bug to author with value found: " + str(r['objectttype']))
  File "/usr/lib64/python2.7/site-packages/psycopg2/extras.py", line 169, in __getitem__
    x = self._index[x]
KeyError: 'objectttype'

Running it again with debug:

sql: SELECT table_len, tuple_count, tuple_len, tuple_percent, dead_tuple_count, dead_tuple_len, dead_tuple_percent, free_space, free_percent FROM pgstattuple(2694::regclass)  WHERE table_len > 1 AND ( (dead_tuple_len + free_space) > 1 OR (dead_tuple_percent + free_percent) > 0.1 )
[[16384L, 3L, 48L, 0.29, 0L, 0L, 0.0, 8088L, 49.37]]
insert sql: INSERT INTO bloat_indexes (oid
                        , schemaname
                        , objectname
                        , objecttype
                        , size_bytes
                        , live_tuple_count
                        , live_tuple_percent
                        , dead_tuple_count
                        , dead_tuple_size_bytes
                        , dead_tuple_percent
                        , free_space_bytes
                        , free_percent
                        , approximate
                        , relpages
                        , fillfactor)
                    VALUES (2694, 'pg_catalog', 'pg_auth_members_role_member_index', 'index', 16384, 3, 0.29, 0, 0, 0.0, 8088, 49.37, false, 2, 90.0)
Traceback (most recent call last):
  File "./pg_bloat_check.py", line 671, in <module>
    print("Unexpected object type encountered in stats table. Please report this bug to author with value found: " + str(r['objectttype']))
  File "/usr/lib64/python2.7/site-packages/psycopg2/extras.py", line 169, in __getitem__
    x = self._index[x]
KeyError: 'objectttype'
keithf4 commented 5 years ago

Well, the error message itself has a typo: it looks up the wrong key name, so the KeyError masks the real diagnostic. That needs to be fixed first so we can see what the script was actually trying to report. Can you apply this patch and run again?

diff --git a/pg_bloat_check.py b/pg_bloat_check.py
index e212b77..90c6138 100755
--- a/pg_bloat_check.py
+++ b/pg_bloat_check.py
@@ -458,15 +458,6 @@ def print_version():

 def rebuild_index(conn, index_list):
-#    if args.bloat_schema != None:
-#        index_table = args.bloat_schema + "bloat_indexes"
-#    else:
-#        index_table = "bloat_indexes"
-
-#    sql = "SELECT oid, schemaname, objectname, objecttype FROM " + index_table + " ORDER BY 2,3,4"
-#    cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
-#    cur.execute(sql)
-#    result = cur.fetchall()

     if index_list == []:
         print("Bloat statistics table contains no indexes for conditions given.")
@@ -668,7 +659,7 @@ if __name__ == "__main__":
                 elif r['objecttype'] == 'index_pk':
                     type_label = 'p'
                 else:
-                    print("Unexpected object type encountered in stats table. Please report this bug to author with value found: " + str(r['objectttype']))
+                    print("Unexpected object type encountered in stats table. Please report this bug to author with value found: " + str(r['objecttype']))
                     sys.exit(2)

                 justify_space = 100 - len(str(counter) + ". " + r['schemaname'] + "." + r['objectname'] + " (" + type_label + ") " + "(" + str(r['total_waste_percent']) + "%)" + r['total_wasted_size'] + " wasted")
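The masking behavior described above can be reproduced in plain Python. This is just a sketch: a plain dict stands in for psycopg2's DictRow (both raise KeyError on an unknown column name), and the objecttype value is a placeholder, not a real pg_bloat_check type.

```python
# A plain dict standing in for psycopg2's DictRow; the value is a
# hypothetical placeholder for whatever unexpected type was in the table.
r = {'objecttype': 'some_unexpected_value'}

masked = None
try:
    # The buggy line misspelled the key as 'objectttype' (three t's),
    # so the diagnostic print raised before it could show the value:
    print("Unexpected object type encountered: " + str(r['objectttype']))
except KeyError as exc:
    masked = str(exc)  # "'objectttype'" -- the typo, not the real problem

# With the corrected key, the diagnostic prints as intended:
msg = "Unexpected object type encountered: " + str(r['objecttype'])
print(msg)
```

So before v2.6.0, any row hitting the unexpected-type branch crashed with `KeyError: 'objectttype'` instead of reporting which object type triggered it.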
keithf4 commented 5 years ago

Have you been able to run this again since I released v2.6.0 that fixed the typo in the error?

keithf4 commented 4 years ago

Will be closing this issue by the end of the month if there is no further response.

keithf4 commented 4 years ago

Closing this issue for now. If you are still encountering it, please feel free to respond here or open a new issue at any time and I will revisit it.