steelbrain / linter

A Base Linter with Cow Powers http://steelbrain.me/linter/
MIT License

Usage with cached modules in docker container #1262

Closed slavab89 closed 8 years ago

slavab89 commented 8 years ago

I hope this issue/question is in the right place; I'm not sure if what I'm asking is even possible.

When using Docker, a common way to build the container is the approach described in http://bitjudo.com/blog/2014/03/13/building-efficient-dockerfiles-node-dot-js/, i.e. caching the node_modules layer. Once those modules are cached they live inside the container, while the user's project directory stays clean, with no node_modules directory. This is nice and sweet, but it creates a problem when linting code through Atom: because no modules are installed locally, "special" linting rules defined in the project may fail to run, since the corresponding module cannot be loaded.
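For reference, the layer-caching pattern from that post looks roughly like this (just a sketch; the base image and paths are illustrative):

```dockerfile
FROM node:4

# Copy only the manifest first, so the "npm install" layer is
# cached and only invalidated when dependencies change.
COPY package.json /app/package.json
WORKDIR /app
RUN npm install

# The source is copied afterwards, so day-to-day edits reuse the
# cached node_modules layer above.
COPY . /app
CMD ["npm", "start"]
```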

Is there a solution to this? Some way to tell the Atom linter to fetch its results from a running container?

steelbrain commented 8 years ago

Hi @slavab89! While the question is not directly related to linter, I'll try to answer it anyway, since we're just a bunch of developers helping developers on this website :)

If I understood the question properly, the problem is that you keep your node_modules in the Docker container only and NOT in your local checkout, and some linters fail because no modules are installed locally.

There are actually several best practices for developing and deploying Node.js apps in Docker containers. I myself maintain a handful of Node.js applications that are deployed to a Kubernetes cluster as Docker containers. The way we keep both the dev and production environments clean is by using docker-compose locally and mounting node_modules into the container while running locally. In production, that mount is removed and the files are copied in with a Dockerfile.

Therefore, even though you want to deploy your apps as Docker containers, you should still keep a local node_modules for when you are running your application. Otherwise you'd have to rebuild the image each time you want to try a change, which is extremely painful and cripples productivity.
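To make that concrete, a minimal docker-compose sketch of the dev side could look like this (the service name and paths are illustrative, not our actual config):

```yaml
# docker-compose.yml (development only)
version: '2'
services:
  app:
    build: .
    # Mount the project, including the locally installed node_modules,
    # over the image's /app so changes are picked up without a rebuild.
    volumes:
      - .:/app
    command: npm start
```

In production the volumes entry goes away and the container runs whatever the Dockerfile copied in at build time.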

Closing as off-topic, but feel free to continue discussing.

slavab89 commented 8 years ago

First of all thanks for answering and helping 😄

Are you by any chance using several Dockerfiles, or do you have only one and handle the different mounts in the docker-compose file? Moreover, if you install your modules locally and then mount them into the container as a shared volume in dev (so the ones you have locally are the same ones in the container), can't that cause issues, given that the container might be running some Linux OS while you're working on a Mac or even Windows?

If possible, could you share an example of the Dockerfile and docker-compose file that you use?

steelbrain commented 8 years ago

I've posted a small isolated setup in a configurations gist. We use one Dockerfile per service.

One relatively important note: using the native Docker on both OSX and Windows can lead to a LOT of issues, most notably FS events not being triggered (fs watchers fail). We work around that by putting everything in Vagrant.

So our team only has to run brew cask install vagrant virtualbox and then vagrant up, and everything, including Docker, is installed automatically. Combining Vagrant with vagrant-docker-compose installs docker-compose on the guest VM automatically, further simplifying the process.
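A minimal Vagrantfile using that plugin could look like this (the box and paths are illustrative, not our actual setup, and it assumes the vagrant-docker-compose plugin is installed):

```ruby
# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # Install Docker in the guest, then use the vagrant-docker-compose
  # plugin's provisioner to install docker-compose and bring the
  # services up on every "vagrant up".
  config.vm.provision :docker
  config.vm.provision :docker_compose,
    yml: "/vagrant/docker-compose.yml",
    run: "always"
end
```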

Installing the node modules locally and then building by copying them into the container saves you from a lot of "works on my machine" scenarios, because the modules in your dev and production environments end up exactly the same.
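As a sketch of what that copy step can look like in the production Dockerfile (again illustrative, not our exact file):

```dockerfile
FROM node:4
WORKDIR /app

# Copy everything, including the locally installed node_modules,
# so the dependency tree in the image is identical to the one
# you developed and linted against.
COPY . .
CMD ["npm", "start"]
```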

slavab89 commented 8 years ago

I think I understand how it's working for you guys. I'll see if it's possible to do something similar with my team 😄