EugenMayer closed this issue 8 years ago.
I use it daily in my development workflow to run composer, a PHP dependency manager. I don't want to run composer on the host system, as I want it to run in the real environment it will run in on the other platforms (staging / production...).
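In practice that can be as simple as running composer through a throwaway container (a sketch only; the image name and mount paths are examples, not necessarily the poster's exact setup):

```sh
# Run composer inside a container instead of on the host; the project
# directory is bind-mounted, so vendor/ still lands on the host.
docker run --rm -it -v "$PWD":/app -w /app composer/composer install
```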
And I guess you need those files on the host for code completion? (PHP files) That's a good point. We actually run all generation and asset compiling on the host, which has its benefits: you can use fsevents to regenerate assets on change during development (you cannot do this in the container, there is no event propagation). Also, your IDE gains a lot when it has composer locally to analyze your dependencies. You will need to get it running locally for proper autoloader IDE support and other things.
So for us it has proven over time that composer/gulp should run locally. You do not pollute the system too much; all you need is gulp and composer, and all the other tools are installed inline and removed when the project is dumped.
What do you need the css/js assets for on the host? I guess you generate them out of less/sass/coffeescript, so you already have the sources you would edit.
Do we have the same use case? We also do (huge) Drupal development, and it still works out very well one-way synced.
I add this as a reason for me not to use 2-way sync. Just to add this to the discussion:
We have never missed any two-way sync behavior; on the contrary, it would have been an issue with Drupal:
Indeed, that's already my setup for Drupal. I use a dedicated volume container outside of the sites folder and the code base. In fact, it's even another server without PHP that serves those static files.
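That pattern might look roughly like this (a sketch only; container names, images, and paths are placeholders, not the actual setup):

```sh
# Data-only container holding the generated static files:
docker create -v /var/www/html/sites/default/files \
  --name drupal_files alpine /bin/true

# The Drupal/PHP container writes into that volume:
docker run -d --volumes-from drupal_files --name app my-drupal-fpm

# A separate static server (no PHP) reads from the same volume
# (the nginx config mapping that path to a location is omitted here):
docker run -d --volumes-from drupal_files -p 8080:80 nginx
```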
But Drupal is not the only technology I work on, and others don't have such a clean separation.
More generally, I like the idea of seeing exactly what happens on the server, and I don't like the idea that I could miss some files that only exist in the container.
Composer is surely the tool that generates the most files. But others like drush (mainly config export), drupal console, phing, behat, and phpunit also generate files / reports that I want synced back. I use a simple bash wrapper for all these tools that lets me use them as if they were locally installed.
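Such a wrapper might look something like this (a minimal sketch under assumptions; the container name, project path, and quoting are simplified, not the poster's actual script):

```sh
#!/usr/bin/env bash
# Hypothetical wrapper: run the given tool inside the running app
# container, so "./in-container drush cex" feels like a local command.
set -e
CONTAINER="app"            # assumed container name
WORKDIR="/var/www/html"    # assumed project path inside the container
docker exec -it "$CONTAINER" sh -c "cd $WORKDIR && exec $*"
```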
However, your experience is welcome and will maybe make me think about simplifying my stack by installing more tools locally and using one-way sync.
I'd like to add my 2 cents here RE Composer.
I want to move to a microservice architecture, and one of the advantages of this is not having to stick to a particular technology stack. I have started developing one microservice, and it differs from our main 'monolith' in that it uses some new libraries that require PHP 7, and Composer enforces this. Our development machines do not have PHP 7 installed, and we also have our monolith, which does not require PHP 7. This means that running composer install locally can only ever work for either our microservice or the original monolith codebase, never both. I don't want to be bound by what's installed on our systems for development, as Docker can (in theory) allow us to develop in a completely self-contained environment.
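One way to get that self-containment for Composer specifically (a sketch, not necessarily this poster's approach; the image tags are examples, and composer.phar is assumed to be present in each project) is to pin the PHP version per project via containers:

```sh
# Monolith, on an older PHP:
docker run --rm -v "$PWD":/app -w /app php:5.6-cli php composer.phar install

# Microservice, on PHP 7:
docker run --rm -v "$PWD":/app -w /app php:7.0-cli php composer.phar install
```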
@JakeWorrell @mickaelperrin great insight and arguments! Thank you for this non-"religious" discussion.
I try to summarize your reasons:
First of all, we basically cover / have the same needs as requested here. We use about 8 containers for our app stack, including rails on jruby, fpm with drupal, httpd, elastic/solr, percona, mongodb, memcached, a java-stack server, consul... and so on. We also use more or less all of those: drush, rake, thor, maven, phpunit, rspec, either as build tools (asset pipelines), tests, or management tools (installation of instances / management).
We also run different language versions in our stack: php5.5 and php7; ruby 1.9, 2.2 and jruby/jruby9000; plus some additional compiled stuff like typescript, coffeescript, scss.
So I really get and understand your pain regarding the host getting polluted, with a lot of deps installed just to cover the combined needs of all projects. That's really painful.
Still, the real world forces your hand on some things, IMHO, which also influences our topic here.
Points 1 / 2 / 3: I would love to really remove all those dependencies from the host, especially since under OSX things can get hairy getting all of those to run (in parallel). The issue with that is that most IDEs (all?), IntelliJ for example, do a much better job when you have the interpreter available locally. Sure, you can set up remote interpreters, but often you will not be able to use important features of your IDE without a local php/ruby/java:
So right now, my feeling is that you cannot get rid of the local interpreter completely, because the world of IDE-based development tooling does not (yet) support that very well, or at all.
I guess we all have at least managed to remove the service dependencies from the host: dbs, webservers, app containers (fpm) and other services. And we would all admit that it is a great relief to have them run "production like" in a production-like Linux environment.
Point 4: Well, of course this has benefits, but for me, I do not really care too much what CSS/JS is generated out of scss/coffeescript/typescript or what the batching looks like. If I want to see those, I use the browser/inspector anyway, rather than looking locally into aggregated, minified somethings. More or less anything generated at runtime is nothing I can work with in general, and I only need to see it in edge cases. Usually I can stick with the things that generate those files, be it assets or anything else.
Bottom line: I guess we all agree on basically the same goals, and that's why I understand, even better now, why there might be a need for a transitional technology like unison while the tools get better at working with "remote interpreters" only. For now, I think rsync already does a solid job, avoiding some of the shortcomings of unison, with effectively (for now) fewer downsides.
I understand that one of rsync's indirect dependencies, namely generating assets/something on the host and then transferring those to the container, means you have all the interpreters locally. And that's just a plain negative point. In the real world, though, you probably need those interpreters anyway due to the incompleteness of the toolchain support (those are our experiences).
I would love to extract the core points of our discussion into a wiki page like "why unison / why 2-way sync", including the arguments for and against it. Also, if you have comments on, e.g., how remote interpreters can be used and can be enough (configuration), we could stick those in there to ease people's work with these kinds of stacks. It would be a little knowledge base of solutions for development in an "as clean as possible" host environment.
On my side, I sometimes want to see what Drupal or other tools generate, to understand how something works or to debug some core function. I can of course launch a bash inside the docker machine, but I'd rather stay inside my IDE.
But I'm pretty new to docker, so perhaps I still haven't found the right way to work with it.
For example, with Drupal 8 I used drupal console to generate the Drupal configuration (https://www.drupal.org/documentation/administer/config). I export this config outside www (I change the Drupal config) and commit it. I still have to figure out the best way to do that with docker; certainly installing drupal console inside docker.
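One way that could look in practice (a sketch under assumptions; the container name, the export path, and using docker cp to pull the export back to the host are all illustrative, not a confirmed recipe):

```sh
# Run the config export inside the app container with drupal console:
docker exec -it app drupal config:export --directory=/tmp/config-export

# With one-way sync, copy the result back out to commit it:
docker cp app:/tmp/config-export ./config
```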
I guess we can close this for now, since with 0.0.14 we have real 2-way sync.
Please explain your use case and why you need 2-way sync, so I can understand what use case we are working towards. As things stand, I cannot imagine a single use case for it, but I already have several use cases against using 2-way sync. I can share those later on.