sunix closed this issue 10 years ago
A new version (10.09.01) of SchemaCrawler was released, which allows more options for table type selection. I tested it and it works. Some issues:
```
Schema metadata obtained - setup 36ms ; crawl 706ms ; dto conversion 3ms ; json conversion 24ms - total 769ms
Schema metadata obtained - setup 35ms ; crawl 5756ms ; dto conversion 12ms ; json conversion 75ms - total 5878ms
Schema metadata obtained - setup 36ms ; crawl 342ms ; dto conversion 0ms ; json conversion 2ms - total 380ms
Schema metadata obtained - setup 5ms ; crawl 5294ms ; dto conversion 8ms ; json conversion 10ms - total 5317ms
Schema metadata obtained - setup 123ms ; crawl 722ms ; dto conversion 5ms ; json conversion 17ms - total 867ms
Schema metadata obtained - setup 121ms ; crawl 883ms ; dto conversion 4ms ; json conversion 5ms - total 1013ms
```
If I understood everything correctly, SchemaCrawler and the PostgreSQL JDBC driver can't be used together correctly here. The tables under pg_catalog and information_schema are system tables and system views, with types "SYSTEM TABLE" and "SYSTEM VIEW". SchemaCrawler by default only returns plain tables (types "TABLE" and "VIEW"), but it can be asked to return other types (from the enum schemacrawler.schema.TableType), for example "SYSTEM_TABLE" (actually "system_table" converted to uppercase).
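The core of the problem is a plain string mismatch: the name derived from the enum constant uses an underscore, while the PostgreSQL driver reports the type with a space. A minimal sketch of that mismatch (the two strings are taken from the description above, not from either codebase):

```java
public class TypeNames {
    public static void main(String[] args) {
        // Name derived from the enum constant "system_table", uppercased
        String enumDerived = "system_table".toUpperCase();
        // Type string the PostgreSQL JDBC driver actually reports
        String driverReported = "SYSTEM TABLE";

        System.out.println(enumDerived);                         // SYSTEM_TABLE
        System.out.println(enumDerived.equals(driverReported));  // false
    }
}
```

Since the driver filters by exact string comparison, "SYSTEM_TABLE" never matches anything it reports.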
Note: so we can't ask the PG JDBC driver to return "SYSTEM TABLE" typed objects this way. One way to ask the JDBC driver to return all tables is to pass a null types parameter to DatabaseMetaData.getTables; in that case the driver doesn't filter by type and returns everything.
But SchemaCrawler doesn't allow that. The SchemaCrawlerOptions.setTableTypes methods (both overloads) replace a null value with an empty HashSet. That empty set is then converted to an empty String array and passed to getTables (not null...)
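To see why that conversion matters, here is a hypothetical mirror of the behavior described above (toGetTablesTypes is my own name, not SchemaCrawler's): the null is silently turned into an empty array, and the JDBC contract treats those two very differently.

```java
import java.util.Collection;
import java.util.HashSet;

public class NullTypes {
    // Hypothetical mirror of what SchemaCrawlerOptions.setTableTypes
    // effectively does: null becomes an empty set, which later becomes
    // an empty String array handed to getTables.
    static String[] toGetTablesTypes(Collection<String> tableTypes) {
        Collection<String> types =
                tableTypes == null ? new HashSet<String>() : tableTypes;
        return types.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] arg = toGetTablesTypes(null);
        // JDBC treats a null types argument as "return all types",
        // but an empty array matches no type at all.
        System.out.println(arg != null);   // true
        System.out.println(arg.length);    // 0
    }
}
```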
This should probably be considered a SchemaCrawler bug, as the PG JDBC driver does exactly what the JDBC javadoc says (I won't check the spec :-) )