Closed by draeger 5 years ago
I am in the process of figuring this out, as it is impossible to move ahead without building this; once it is clear to me, I will add more details here.
The easiest way currently is to clone the repository and just run "gradle". This now automatically runs "configureSQLiteDB.sh" and the default Gradle task, producing the fatJar with dependencies and the SQLite DB packaged.
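For reference, the build described above amounts to the following commands. This is a sketch: the repository location is assumed to be already cloned, and the jar output path is Gradle's default convention, not something stated here.

```shell
# From inside a clone of the ModelPolisher repository:
gradle                # runs configureSQLiteDB.sh if the DB is missing, then the default task
ls build/libs/        # the fatJar should land in Gradle's default output directory (assumed)
```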
Nevertheless, there are currently three possible ways to get a connection to the BiGG database:
The original way: Load the database.dump into a local instance of PostgreSQL. This works fine, but the description at https://github.com/SBRG/bigg_models under "Dumping and restoring the database" is missing one important step: if you don't already have BiGG running, you first have to create an empty bigg database before you can run pg_restore -c -d bigg bigg_database.dump. Additionally, adding -O to this command skips the set/change-ownership lines, reducing the 87 "errors" produced to none. We can either add this to our README or open a pull request with the corrections against bigg_models and link to that.
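Put together, the corrected restore sequence looks like this. It assumes a running local PostgreSQL server and a user with sufficient privileges; the database and dump file names are the ones used above.

```shell
createdb bigg                                # the missing step: create the empty database first
pg_restore -O -c -d bigg bigg_database.dump  # -O skips the ownership changes, silencing the 87 "errors"
```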
Our easy-to-use, but not very scalable version: Build the fatJar, which includes the SQLite version and needs no additional steps. With the code from the last pull request merged, running "configureSQLiteDB.sh" manually is no longer necessary, as Gradle runs it if the DB is not present.
The experimental way: Use a modified version of https://github.com/psalvy/bigg-docker to run BiGG in a Docker container, as the original runs an outdated version. I have experimented with this a bit and have a fork at https://github.com/mephenor/bigg-docker that I have somewhat tested for correctness with regard to ModelPolisher queries. There you have to place the current database.dump into the web directory and run docker-compose up, which exposes the database on port 5432 and additionally runs a copy of the BiGG website on localhost:8910.
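The Docker-based setup described above can be sketched as follows. The fork URL, dump filename, directory, and ports come from the description; the database user in the connectivity check is an assumption and may differ in the compose file.

```shell
git clone https://github.com/mephenor/bigg-docker
cd bigg-docker
cp /path/to/database.dump web/   # place the current dump into the web directory
docker-compose up -d             # database on localhost:5432, website on localhost:8910

# Optional connectivity check (the user name "postgres" is an assumption):
psql -h localhost -p 5432 -U postgres -d bigg -c 'SELECT 1;'
```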
As there seems to be some confusion about which version one should use, let me clarify: you only need one of these set up. The easiest is the integrated one, but I would recommend the first variant, as it should run a bit faster than the second one.
For the future it might be nice to have a variant of bigg-docker reduced to serving only the database, which fetches and verifies the current version automatically. This could live in its own repository and be integrated into ModelPolisher as a submodule, which would allow changing the startup scripts to start the container. However, this should probably be discussed in a separate issue.
More details should be provided on how to build the project after cloning.