After using these APIs in a notebook, I found it simpler and more intuitive to store customized ontology data in a separate database that can be shared and referenced in code, as opposed to referencing the Python objects containing them. Therefore I'd like to accept the database name as an optional parameter, letting users load and work with different databases. Here's an example of how push_valuesets would be used:
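A minimal sketch of the idea, with an in-memory dict standing in for the Spark/Hive database so it runs anywhere. The function names, the `database` keyword, and the default database name `'ontologies'` are illustrative assumptions about the proposed API, not the library's confirmed signatures.

```python
# Sketch of the proposed optional-database parameter. An in-memory
# dict stands in for the real Spark/Hive backend; names here
# (push_valuesets, get_valuesets, 'ontologies') are assumptions.
DEFAULT_DATABASE = 'ontologies'
_stores = {}

def push_valuesets(valuesets, database=DEFAULT_DATABASE):
    """Write the given value sets to the named database.

    Callers that omit `database` get today's behavior unchanged.
    """
    _stores.setdefault(database, {}).update(valuesets)

def get_valuesets(database=DEFAULT_DATABASE):
    """Read all value sets from the named database."""
    return _stores.get(database, {})

# Writing to a custom database leaves the default database untouched,
# so experiments don't impact other users:
push_valuesets({'ldl': ['loinc:18262-6']}, database='my_experiments')
assert 'ldl' not in get_valuesets()
assert 'ldl' in get_valuesets(database='my_experiments')
```

The key point is that the parameter defaults to the current database name, so existing callers are unaffected and a consumer only needs to pass a string to target a shared custom database.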
This also reduces the burden on users who just consume a custom ontology database, since pointing to a different database eliminates intermediate steps and the need to be aware of our classes.
Of course, I'd expect custom ontologies to be written to the default database in many workflows; this just makes it easy to experiment with a custom database without impacting others.