balintbrews opened this issue 6 years ago (status: Open)
For the record, we decided to hold off on this for now.
@isholgueras In the meantime, could you please document your concerns with the current architecture? What exactly would we gain by changing Spark to be a global dependency? What are the disadvantages of the current architecture where we include Spark as a local dependency for each project?
One problem I see with the global approach is that we'd need to solve how to version the schema of the .spark.yml file, but you may have a good answer to this, @isholgueras.
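To make that concern concrete: a globally installed Spark would need some way to refuse projects whose .spark.yml it doesn't understand. A minimal sketch, assuming a hypothetical schema_version key that does not exist in .spark.yml today:

```sh
# Hypothetical check a globally installed spark could run before touching a
# project. The schema_version key is an assumption made up for illustration.
schema=$(grep -E '^schema_version:' .spark.yml | awk '{print $2}')
if [ "$schema" != "1" ]; then
  echo "Unsupported .spark.yml schema version: '$schema'" >&2
  exit 1
fi
```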
As we discussed in a PM, I see two advantages of having this tool as an external package. It is very similar to the tool that Matt Glaman has, platform-docker.
The steps to initialize a project are (summed up in the terminal sketch below):

1. platform-docker init. It creates the initial .platform/local.yml (or similar, I don't remember the exact name). If the file already exists, it doesn't perform any action.
2. platform-docker start. Starts all of the containers defined in the local.yml file.
3. In a second project, run platform-docker init && platform-docker start, and thanks to the shared proxy container you have these 2 projects up and running (on different ports by default, but you can configure the reverse proxy to serve the different domains on port 80).

That's basically what we spoke about in the PM, but a bit more organized and described :)
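As a terminal sketch (the project paths are made up; the commands are the ones above):

```sh
# First project
cd ~/projects/site-a            # hypothetical path
platform-docker init            # writes .platform/local.yml; no-op if it already exists
platform-docker start           # starts the containers defined in local.yml

# Second project: same two commands; the shared proxy container serves both sites
cd ~/projects/site-b            # hypothetical path
platform-docker init && platform-docker start
```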
The main issue that @isholgueras brought up, and which is indeed a significant limitation, is this one:
you are forced to use compatible versions of the project libraries
This is precisely a problem for things like the Drush functionality, and for other dependencies (even Robo itself). For Drush there are workarounds: we could write our code in a way that supports both Drush 8 and Drush 9 command execution. But something like differing Robo versions would be much harder to support across two versions.
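To illustrate that workaround, here is a rough sketch in shell rather than our actual Robo code; the drush path and the version detection are assumptions:

```sh
# Hypothetical sketch: detect the Drush major version at runtime and pick the
# matching command name, so the same task works against Drush 8 and Drush 9.
DRUSH=vendor/bin/drush
major=$("$DRUSH" --version 2>/dev/null | grep -oE '[0-9]+' | head -n1)

if [ "${major:-0}" -ge 9 ]; then
  "$DRUSH" cache:rebuild        # Drush 9 command name
else
  "$DRUSH" cache-rebuild        # Drush 8 command name
fi
```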
create shared containers between projects
I'm on the fence about this. Technically, this is not a limitation, because we could have a task that checks for a shared haproxy inside a separate folder, e.g. ~/.spark/, and starts it if it already exists and is not running, or creates it from some scaffolding if it doesn't exist yet. This doesn't require Spark to be a global dependency. On the other hand, you do start to run into issues as Spark evolves and different projects use different Spark versions: trying to run a shared service that those versions expect different things from could be problematic. I consider this a far-future possibility and a pretty big "what if".
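A rough sketch of what such a task could boil down to (the container name, the ~/.spark/ layout, and the compose file are all assumptions, not existing Spark code):

```sh
# Start the shared proxy if it isn't running; create it from scaffolding if it
# doesn't exist yet. This works from a per-project spark install as well.
PROXY_NAME=spark-proxy
SCAFFOLD_DIR="$HOME/.spark"

if [ -z "$(docker ps -q --filter "name=$PROXY_NAME")" ]; then
  if [ -n "$(docker ps -aq --filter "name=$PROXY_NAME")" ]; then
    docker start "$PROXY_NAME"                  # container exists but is stopped
  else
    mkdir -p "$SCAFFOLD_DIR"
    # Assumes a docker-compose.yml for the haproxy was scaffolded into ~/.spark/.
    docker-compose -f "$SCAFFOLD_DIR/docker-compose.yml" up -d
  fi
fi
```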
@isholgueras has a great proposal, which probably needs to be broken down into smaller parts. Let's start here.