snowplow / iglu-server

A RESTful schema registry

Improve docker-compose example to bypass setup command #33

Open oguzhanunlu opened 5 years ago

oguzhanunlu commented 5 years ago

We need to run the setup command and insert a super API key before we can start using the server. We could improve the UX of Iglu Server in the docker-compose world by replicating the setup command within init.sql.
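For concreteness, a replicated init.sql could look roughly like the sketch below. The permissions table and its columns here are assumptions for illustration only; the actual structure is whatever the setup command creates.

```sql
-- Hypothetical sketch: the table and column names are illustrative
-- assumptions, not the real schema produced by the setup command.
CREATE TABLE IF NOT EXISTS permissions (
  apikey   UUID PRIMARY KEY,
  vendor   TEXT NOT NULL DEFAULT '',
  is_super BOOLEAN NOT NULL DEFAULT FALSE
);

-- Seed a super API key so the server is usable straight after
-- docker-compose up (replace the placeholder UUID with your own).
INSERT INTO permissions (apikey, vendor, is_super)
VALUES ('00000000-0000-0000-0000-000000000000', '', TRUE)
ON CONFLICT (apikey) DO NOTHING;
```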

chuwy commented 5 years ago

Why do we need to duplicate it instead of using setup itself? setup exists to create a structure that is compatible with the DB access layer, and it serves as a single source of truth. If we change it and forget to reflect the change in init.sql, the user will get a corrupted DB, which is much worse than not having one at all.

oguzhanunlu commented 5 years ago

I wanted to reduce the number of steps for users, but I see the corruption point; I'll revert my last change.

Speaking of setup, let me question the existence of the setup command itself. Why does a user need to run one more step when we could automate it by creating the tables at server launch, provided Postgres is configured correctly? Isn't that an extra burden on the user?

chuwy commented 5 years ago

I think these are two semantically different tasks: launching something and deploying infrastructure for it. I personally would not expect a service to deploy entities when I intend to launch it. There are other questions we'd want to raise: how many things do we want to auto-deploy (API keys, meta schemas, an empty draft, etc.)? At what point does it become completely opaque to the user what they have just deployed during the first launch? What happens if at some point we decide to add a non-idempotent entity? And how much manual labor does executing one command really add?

There is, however, a middle ground in this ticket: https://github.com/snowplow-incubator/iglu-server/issues/10

Independent of that, I think it makes sense to turn the error message for a non-existing table into a guide on what to do.

oguzhanunlu commented 5 years ago

I agree that these tasks have different semantics, though I personally would like to see auto-deployment of the tables, since that isn't usage- or user-specific and is the same for every installation. I would not auto-deploy the super API key, for security reasons.

Even though this step isn't much manual labor, this approach can accumulate and result in a future version of the project where each semantically different task requires yet another command to run. I'm not sure how many different semantics we could possibly have here, but it's a point to think about.

I liked #10. It really looks like a middle ground for this discussion.

From the original ticket:

--with-setup will make server not just running, but also setup the DB tables if they don't exist yet

I think there is another option here: drop the tables if they exist and create them from scratch. I don't strongly advocate any of the options, but I'm curious why you preferred the other way.
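To make the contrast concrete, here's a minimal SQL sketch of the two behaviours (the table and columns are illustrative assumptions, not the server's real schema):

```sql
-- Option from #10: idempotent, safe to run on every launch,
-- leaves existing data untouched.
CREATE TABLE IF NOT EXISTS schemas (
  vendor TEXT NOT NULL,
  name   TEXT NOT NULL
);

-- The alternative raised above: always start from a blank slate,
-- at the cost of wiping every schema stored so far.
DROP TABLE IF EXISTS schemas;
CREATE TABLE schemas (
  vendor TEXT NOT NULL,
  name   TEXT NOT NULL
);
```

The destructive variant would erase the registry's contents on every restart, which I assume is why #10 went with the idempotent form.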

istreeter commented 3 years ago

I did some work improving the docker-compose setup. See this branch.

Instead of fixing init.sql (which was @oguzhanunlu's suggestion), I added a setup step to docker-compose which runs the migrations; I think this satisfies @chuwy's comments above.
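The shape of it is roughly the sketch below; the image names, config paths, and exact setup invocation are assumptions here, and the linked branch is the authoritative version.

```yaml
# Hypothetical sketch of the approach -- see the linked branch for the
# real compose file.
version: "3"
services:
  postgres:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: iglusecret
      POSTGRES_DB: igludb

  # One-shot service: runs the migrations via the setup command, then exits.
  iglu-server-setup:
    image: snowplow/iglu-server:latest
    command: setup --config /iglu-server-config.hocon
    volumes:
      - ./config.hocon:/iglu-server-config.hocon
    depends_on:
      - postgres

  iglu-server:
    image: snowplow/iglu-server:latest
    command: --config /iglu-server-config.hocon
    volumes:
      - ./config.hocon:/iglu-server-config.hocon
    depends_on:
      - iglu-server-setup
```

Note that plain depends_on only orders start-up and doesn't wait for the setup container to finish, so in practice the server may also need a restart policy or a healthcheck.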

The remaining problem is that the user needs to manually add an API key by executing some SQL in the postgres container. I added instructions to the readme for how to do this, but it's a bit hacky.
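The manual step looks roughly like this (the container name, database credentials, and the permissions table/columns are assumptions for illustration; the readme has the real commands):

```bash
# Hypothetical sketch -- replace the placeholder UUID, credentials, and
# column names with the values your deployment actually uses.
docker-compose exec postgres psql -U postgres -d igludb -c \
  "INSERT INTO permissions (apikey, vendor, is_super)
   VALUES ('00000000-0000-0000-0000-000000000000', '', TRUE);"
```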

A nicer option is to extend the setup command so it takes an optional --master-api-key command-line argument. It would then run the migrations and insert the key into the database. This would help the docker-compose setup, but it would also be useful for initialising production setups.
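Usage would then look something like this (this is the proposal, not an existing flag):

```bash
# Proposed invocation, not yet implemented: run the migrations and
# seed the super API key in a single step.
docker run snowplow/iglu-server setup \
  --config /iglu-server-config.hocon \
  --master-api-key 00000000-0000-0000-0000-000000000000
```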

An alternative solution is to add a new command, add-api-key, so we don't overload the setup command with more functionality. But I slightly prefer my first suggestion.

wdyt?

chuwy commented 3 years ago

I like what you did, @istreeter! And I think --master-api-key would improve it even further.