schemacrawler / SchemaCrawler

Free database schema discovery and comprehension tool
http://www.schemacrawler.com/

Is possible to connect with Teiid JDV? #12

Closed luizsoliveira closed 9 years ago

luizsoliveira commented 9 years ago

Dear,

I do not know if this is the ideal channel to ask this question, but I thank you for your understanding.

I have used SchemaCrawler to discover the structure of Postgres and MySQL databases.

I am interested in doing the same with Teiid (http://teiid.jboss.org/) through its JDBC driver.

Has anyone ever done this, and what steps would need to be taken?

On the page http://sualeh.github.io/SchemaCrawler/plugins.html, it was not clear whether I would have to customize generated code for the specifics of Teiid. At the moment, I do not have deep knowledge of how Teiid works.

schemacrawler commented 9 years ago

SchemaCrawler can connect to any database system that provides a JDBC driver. I have not specifically tested with Teiid, but theoretically, it should work out of the box. There is no code generation needed, and no additional code that you need to write. You will have to make sure that you use the -url command-line argument, and not the -server, -host, and -port arguments.
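A hedged sketch of what such an invocation might look like, assembled but not executed here (the Teiid JDBC URL format, VDB name, host, and port are placeholder assumptions, not tested values):

```shell
# Sketch only: build a SchemaCrawler command line that passes an explicit
# JDBC -url instead of -server/-host/-port. All values are placeholders.
URL="jdbc:teiid:VDB_NAME@mms://HOST:31000"
CMD="java -cp lib/* schemacrawler.Main -url=$URL -u=USER -password=PASSWORD -command=schema -infolevel=standard"
echo "$CMD"
```

The command is only echoed here; actually running it would additionally require the Teiid JDBC driver jar to be on the classpath.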

luizsoliveira commented 9 years ago

Thanks for the quick response.

Yes, I get it; I will use the -url parameter instead of the others. But what do I need to do so that SchemaCrawler can crawl a schema using the Teiid JDBC driver?

Thanks, and have a good evening!


schemacrawler commented 9 years ago

Here is how you get the URL: https://goo.gl/GUKISv. Download the JDBC driver jar file, and put it in the SchemaCrawler lib folder.
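As a hedged sketch of that step (all directory and file names below are placeholders for illustration, not the real distribution layout):

```shell
# Simulate dropping a downloaded Teiid JDBC driver jar into
# SchemaCrawler's lib folder, where the launch scripts pick it up.
# Paths and the driver file name are placeholder assumptions.
mkdir -p schemacrawler/lib
touch teiid-jdbc.jar                  # stand-in for the downloaded driver
cp teiid-jdbc.jar schemacrawler/lib/
ls schemacrawler/lib
```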


luizsoliveira commented 9 years ago

Thank you!

The connection worked with the following parameters:

java -Djavax.net.ssl.trustStore=PATH-TO-TRUSTSTORE-PRO.jks \
  -cp $(echo lib/*.jar | tr ' ' ':') schemacrawler.Main \
  -command=schema \
  -url=jdbc:teiid:DATABASE_NAME@mms://HOST:31000 \
  -u=USER \
  -password=PASSWORD \
  -infolevel=detailed \
  -outputformat=json \
  -loglevel=CONFIG \
  -outputfile=PATH/schema.json

The Teiid JDBC driver jar was placed at: schemacrawler-14.03.03-main/_schemacrawler/lib/teiid-8.4.3-redhat-1-jdbc.jar
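The `-cp $(echo lib/*.jar | tr ' ' ':')` part of the command above builds a colon-separated Java classpath from every jar in `lib/`. A small self-contained demonstration of just that trick, with made-up jar names:

```shell
# Demonstrate the classpath trick used above: expand lib/*.jar and
# replace the spaces between matches with ':' to form a Java classpath.
# The jar files here are empty placeholders for illustration.
mkdir -p lib
touch lib/a.jar lib/b.jar
CP=$(echo lib/*.jar | tr ' ' ':')
echo "$CP"     # lib/a.jar:lib/b.jar
```

Note this breaks if any jar name contains a space; on Java 6 and later, the wildcard classpath `-cp "lib/*"` achieves the same thing without the substitution.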

luizsoliveira commented 9 years ago

But I still cannot generate the full schema, because the process stalls (for over three hours) at the step of retrieving weak associations.

Changing the infolevel to minimum, standard, or detailed did not return information about the table columns. For instance:

{
  "foreignKeys": [],
  "indexes": [],
  "columns": [],
  "name": "XX_XX_XX_XXXXXX_XXXXXX",
  "fullName": "XX_XX_XX_XXXXXX_XXXXXX",
  "type": "table",
  "triggers": [],
  "tableConstraints": [],
  "remarks": "Rotated MUPE Table",
  "primaryKey": {
    "unique": true,
    "name": "XX_XX_XX_XXXXXX_XXXXXX"
  }
},

For all 281 tables in the model, SchemaCrawler returns an empty set of columns.

schemacrawler commented 9 years ago

Please use -infolevel=standard for now, and turn on logging with -loglevel=CONFIG. Then, send me the log file at sualeh@hotmail.com.