niryariv / opentaba-client


some thoughts on multi-city setup #66

Closed: niryariv closed this issue 9 years ago

niryariv commented 10 years ago

we can put all the core code in a submodule and include it in the city repos. this means that instead of the current situation, where the core repo keeps a list of all city repo remotes, each city repo will have a remote for the core repo.

we'll still need to pull & commit whenever the core submodule updates, but that could be a relatively simple script.
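something like this, for example (just a sketch - the repo paths, remote names and the core/ submodule location are made up for illustration):

```python
# hypothetical sync script: bump the core submodule in every city repo
import subprocess

CITY_REPOS = ['../opentaba-client-jerusalem', '../opentaba-client-telaviv']  # example checkouts

def git(repo, *args):
    # run a git command inside the given repo checkout
    subprocess.check_call(['git'] + list(args), cwd=repo)

for repo in CITY_REPOS:
    git(repo, 'submodule', 'update', '--remote', 'core')  # pull the latest core commit
    git(repo, 'add', 'core')                              # stage the new submodule pointer
    git(repo, 'commit', '-m', 'update core submodule')
    git(repo, 'push', 'origin', 'master')
```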

for example, a client page could pull in the core code with something like:

<script src="http://core.opentaba.info/app.js"></script>

and

$("#content").load("core/index.html")

so a client repo will only contain the following files:

what do you think?

alonisser commented 10 years ago

git submodules are usually a bad idea for deployment (or for mostly anything else) - they're broken in many surprising ways (try to remove a submodule, or update it. not fun), see this, this and this. Automation is a one-time, solvable task, and can then be handled with Travis/Jenkins (via CloudBees - free for open source projects) etc. I would be happy to help with this, but I find it hard to understand the context without sitting together, especially when @florpor advances very fast (which is great). Maybe we can schedule a work session sometime soon at the HaSadna Monday meeting in Tel Aviv and pair on this.

alonisser commented 10 years ago

BTW, I'm not sure we actually need the repo to keep the whole dataset. We could easily set up an opentaba-config server that delivers a dataset on demand as part of a site setup script.
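something along these lines, for example (a rough sketch only - the route, file layout and the server itself are placeholders, nothing like this exists yet):

```python
# hypothetical opentaba-config sketch: serve a municipality's dataset on demand
import json
import os

from flask import Flask, abort, jsonify

app = Flask(__name__)
DATA_DIR = 'datasets'  # e.g. datasets/jerusalem.json - illustrative layout

@app.route('/dataset/<muni>')
def dataset(muni):
    # return the requested municipality's dataset, 404 if we don't have it
    if not muni.isalnum():
        abort(400)
    path = os.path.join(DATA_DIR, '%s.json' % muni)
    if not os.path.exists(path):
        abort(404)
    with open(path) as f:
        return jsonify(json.load(f))

if __name__ == '__main__':
    app.run()
```

a site setup script could then just fetch /dataset/<muni> during bootstrap.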

niryariv commented 10 years ago

Meeting in person is hard right now: although I'm in Tel Aviv 3 days a week, Monday isn't one of them. Hopefully this changes soon, but meanwhile we can try to work it out online, as most open source projects do.

Some context: a while back we discussed how to support multiple municipalities. Generally there are two choices:

  1. Everything on one server/client/db - simplifies maintenance
  2. Each muni gets its own instance - uses fewer resources

We've now found that even a few munis will push Heroku into the ~$35/mo realm. We can do various things like move to AWS etc, but at some point we'll always hit a scalability limit. I think it's important to keep this project financially independent and sustainable, so option 2 is at least worth exploring.

This means each muni gets its own server and client repo: a server to stay below the Heroku/MongoDB quota limits, and a client repo to enable subdomains like jerusalem.opentaba.info (GitHub Pages doesn't support wildcards or more than one domain in a CNAME file - otherwise we could probably keep one client repo for all munis).

That poses the challenge of maintaining all these repos and keeping the code in sync.

Basically each repo is just a copy of the main codebase; the only difference is the config data. My thinking was that putting the core code in a submodule would let us write a simple script that updates all the dummy repos whenever the code changes.

After reading the links you posted, it looks like it's better to use a tool like repo or Gitslave instead of submodules. But perhaps the submodule issues aren't critical for us, since all the repos are read-only?

In any case, we're still testing out the whole repo-per-muni approach. We have to find a good solution for maintenance in order for it to be viable.

florpor commented 10 years ago

@niryariv @alonisser we could skype-date if you guys want, or i can see you separately (i can come to both tlv and jerusalem sessions).

we decided to try a multi-server solution because: for servers, to keep things free and not exceed the quotas we are given; and for clients, because github only allows one main/sub domain per repository - no wildcards or anything - and we didn't like the SPA approach that can be seen in the spa branch (https://github.com/niryariv/opentaba-client/tree/spa)

nothing is different between the repositories and apps - the config files are the same (assuming updates were deployed to everyone), except for the CNAME file in each client repository. (the client repositories only have gh-pages branches, no master.) we wanted a script that would make it easier to deliver code updates to all the servers/clients, so that maintenance does not turn into hell, and i started writing a fabric (http://www.fabfile.org/) script with a few tasks.

about the submodules idea - even if submodules were easy to update and maintain, updating 10 or more repos would still be miserable (clone, update submodule, commit, push, delete), and essentially the same process if done automatically by a script (the current flow is push to remote, clone, update CNAME, commit, push, delete). the creation and deletion parts of the scripts are not the core - they started out as just helper tasks to keep the remotes' lists updated, and we ended up automating the whole process with them. they can still be done manually, no problem.
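for example, the client deploy task could look roughly like this (just a sketch of the flow described above - the repo names and urls are made up, and the real fabfile may differ):

```python
# fabfile.py - hypothetical sketch of a per-client deploy task (Fabric 1.x API)
from fabric.api import local, task

# example remotes - the real list would hold every client repo
CLIENT_REMOTES = {
    'jerusalem': 'git@github.com:niryariv/opentaba-client-jerusalem.git',
}

@task
def deploy_client(muni):
    remote = CLIENT_REMOTES[muni]
    # overwrite the client repo's gh-pages branch with the shared code
    local('git push --force %s gh-pages' % remote)
    # re-create the muni-specific CNAME in a temporary clone, then clean up
    local('git clone --branch gh-pages %s tmp_clone' % remote)
    local('echo %s.opentaba.info > tmp_clone/CNAME' % muni)
    local('cd tmp_clone && git add CNAME && git commit -m "update CNAME" && git push')
    local('rm -rf tmp_clone')
```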

alonisser commented 10 years ago

@niryariv I know you can't come to the TLV meeting; I meant @florpor. I would be happy if we could schedule a meeting at one of HaSadna's development meetings.

I agree the multi server approach is better.

I don't know the other tools, so I need to explore them further. But why not have a joint upstream remote with the common code for all the projects? Then every project can pull from upstream (automated or not) to get the updated code, while pulling from and pushing to an origin that holds its specific config.
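Something like this, roughly (a sketch - the remote and branch names are illustrative):

```python
# hypothetical sketch of the pull-from-upstream, push-to-origin cycle
import subprocess

def sync_project(path):
    def git(*cmd):
        subprocess.check_call(['git'] + list(cmd), cwd=path)
    # merge the latest shared code from the joint upstream remote
    git('pull', 'upstream', 'master')
    # publish the merged result, which keeps the project-specific config, to origin
    git('push', 'origin', 'master')
```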


florpor commented 10 years ago

@alonisser can do tomorrow if you want

that is (almost) our current approach - we create repositories or heroku apps and define them as remotes for the base opentaba-[server/client], and also keep a list of all of them, and then we push to all the remotes in that list. the only problem is that each opentaba-client clone needs its own CNAME, and after creating or editing one we can't push to it without a merge. so right now each client deploy is a push --force, then clone, edit CNAME, commit, push, and remove the temporary local clone.

alonisser commented 10 years ago

@florpor we'll continue over email and schedule a meeting