GoogleCodeExporter opened 8 years ago
This is caused by a sad Docker limitation.
see also:
- http://stackoverflow.com/questions/26883460
- http://superuser.com/questions/842642
- https://github.com/docker/docker/issues/1676
- https://github.com/docker/docker/issues/6094
Original comment by gzoe...@gmail.com
on 21 Nov 2014 at 5:08
Simple Dart apps can be deployed to custom runtimes on Managed VMs following
the examples at https://www.dartlang.org/cloud/. However, many serious,
larger-scale projects depend on local packages and private source repositories.
The Dart pub package manager supports referencing both local packages and
GitHub repos in pubspec.yaml, and such apps run just fine locally; however,
running or deploying them with the gcloud utility of the Google Cloud SDK is
not possible.
This particular issue is about referencing private GitHub repos from Dart
projects. Perhaps another issue should be created for supporting local packages.
Specifically, the error message references a problem with executing ssh. If it
is possible to reference packages from https://pub.dartlang.org/ that are
downloaded and deployed when the command runs, it should also be possible to
reference Git repositories; there seems to be no difference beyond the
transfer protocol used.
Original comment by cristian...@gmail.com
on 21 Nov 2014 at 6:30
Here is a nice workaround for Linux (I don't know whether other OSes support
similar workarounds):
http://superuser.com/questions/842642
I also think that "gcloud preview app run" should just mount the context
directory as a volume instead of building a Docker image each time a source
file changes.
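As a sketch, the suggested volume mount could look roughly like the following. The image name and container paths are assumptions, not something gcloud supports today, and the command is echoed rather than executed so it can be inspected without a Docker daemon:

```shell
# Sketch only: mount the app source as a volume instead of baking it
# into the image on every change. Image name and paths are hypothetical.
APP_DIR="$(pwd)"
CMD="docker run --rm -v $APP_DIR:/app -w /app google/dart-runtime"
# Echo instead of executing, so no Docker daemon is required here.
echo "$CMD"
```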
Original comment by gzoe...@gmail.com
on 23 Nov 2014 at 11:16
@cristian
One possible way to work around the ssh login issue is to instead check out
the git repository as a local package directory and deploy it as part of your
application.
To do this you would use the dart-runtime-base dockerfile as your parent docker
image and refer to the package as a local package using a path in the
pubspec.yaml file.
You can see how to do this at
https://github.com/dart-lang/dart_docker/tree/master/runtime-base under the
Usage section.
This way you avoid having to ship your git credentials with your docker image
and instead ship the (admittedly larger) repository as part of the application
docker image.
To avoid shipping more than necessary you can also add a .dockerignore file to
your docker app directory to exclude the directories and files you don't want
to package up as part of your image. See
http://docs.docker.com/reference/builder/#the-dockerignore-file for more
details.
Original comment by wibl...@google.com
on 12 Jan 2015 at 9:02
@cristian
To elaborate a bit, I see the following (currently working) ways of solving
your problem, given how gcloud preview app run/deploy currently works:
a) Having (maybe private) git dependencies in pubspec.yaml *and* letting "pub
get" fetch these dependencies inside the docker container:
(Possible) Advantages:
* Simple directory layout and everything is specified in pubspec.yaml
* No additional steps outside the container.
* Smaller docker context
Disadvantages:
* Requires installing ssh client inside docker container (via `RUN apt-get ...`)
* Requires adding private key to docker container (via `ADD ...`)
* Requires potentially re-cloning on every build (=> very slow)
[To make it slightly faster, one could e.g. clone the git repositories via
separate `RUN git clone` steps and use path dependencies in pubspec.yaml.]
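A minimal sketch of what option a) could look like as a Dockerfile. The key file name, the repository URL, and the base image tag are assumptions for illustration; this is not a recommended setup, precisely because of the credential-shipping drawback listed above:

```dockerfile
# Sketch of option a): fetch private git dependencies inside the container.
# id_rsa and the github.com host below are placeholder assumptions.
FROM google/dart-runtime-base

# Install an ssh client and git so pub can clone over ssh.
RUN apt-get update && apt-get install -y --no-install-recommends \
    openssh-client git && rm -rf /var/lib/apt/lists/*

# Ship the private key with the image (the main drawback of this approach).
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa && \
    ssh-keyscan github.com >> /root/.ssh/known_hosts

WORKDIR /project/app
ADD . /project/app
# pub can now clone the git dependencies declared in pubspec.yaml.
RUN pub get
```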
b) Cloning (maybe private) git dependencies *manually* and use path
dependencies in pubspec.yaml.
Advantages:
* No need for the private key to be added to the container
* No need to install tools inside the docker container (e.g. ssh)
* Source can come from anywhere (git or some other place)
Disadvantages:
* More steps outside the container
* Larger docker context & more complicated way of building the docker image
[If one does not want to keep several clones of the same git repositories, one
can, at least on Linux, use `mount --bind ~/repositories/<name>
appengine_app/pkg/<name>`.]
Option b) is IMHO the better approach. To keep restart times low one needs to
do two things:
1) Add all files/directories which are not needed for the app to run into
`.dockerignore`: e.g. `.git`, `.pub` directories
2) Make a Dockerfile which allows efficient caching.
Here's an example of an application using several packages:
The layout of the application is:
root/
- app.yaml
- .dockerignore
- app/
  - pubspec.yaml
  - bin/
    - server.dart
- pkg/
  - <private-dep1>/
    - pubspec.yaml
    - lib/
  - <private-dep2>/
    - pubspec.yaml
    - lib/
The .dockerignore contains e.g.:
app/.pub
.git
The Dockerfile could look like:
FROM google/dart-runtime-base
WORKDIR /project/app
ADD app.yaml /project/
ADD app/pubspec.* /project/app/
ADD pkg/<private-dep1>/pubspec.* /project/pkg/<private-dep1>/
ADD pkg/<private-dep2>/pubspec.* /project/pkg/<private-dep2>/
RUN pub get
ADD pkg /project/pkg
ADD app /project/app
RUN pub get --offline
ENV DART_VM_OPTIONS --enable-async
To make deployments faster, one should add a section to `app.yaml` which
excludes files not used by App Engine itself (e.g. Dart source files):
app.yaml:
....
skip_files:
- ^.*/packages.*$
- ^.*\.dart$
- ^\.git/.*$
Original comment by kustermann@google.com
on 12 Jan 2015 at 12:27
I have seen this in the Dockerfile.template
(https://github.com/dart-lang/dart_docker/blob/master/runtime/Dockerfile.template)
as well, but can't find an explanation of why you need to do:
RUN pub get
*then* add the rest of the files, and run
RUN pub get --offline
Why is this necessary?
Original comment by mate...@gmail.com
on 8 Jul 2015 at 3:55
The reason for having two steps is to decrease docker build times
*significantly* when changing code and rebuilding docker images.
The first "RUN pub get" step will take a long time, since it's hitting the
internet and downloading potentially many packages from pub.dartlang.org (and
maybe from other sources). It is therefore advisable to cache this step, so we
don't have to do it very often. In fact the only reason to re-run this step is
if the pubspec.{yaml,lock} files have changed, or one wants to use a new base
docker image (e.g. newer version of Dart).
The docker layer caching will ensure it caches "pub get" and only re-runs it
if the base docker image or the pubspec.{yaml,lock} files have changed.
The next step is adding all the remaining dart files and running "RUN pub get
--offline". So once any of the dart files have changed, they get re-added and
"pub get --offline" gets re-run. Since it's running in offline mode and all
external dependencies are cached, this step is basically just creating
package/ directories and symlinks (or a .packages file, once pub has support
for the new DEP-proposed package spec file).
There have been user reports that the "pub get --offline" step sometimes tries
to download something from the internet and therefore fails. This issue is
tracked via https://github.com/dart-lang/pub/issues/1293
Original comment by kusterma...@gmail.com
on 8 Jul 2015 at 5:10
Is this still an issue? Can you provide a stacktrace for the failing command
and the output of `gcloud info`?
Original comment by che...@google.com
on 8 Sep 2015 at 3:51
Yes, this issue still persists, as it is the result of the way pub and Docker
work. IMO it won't be resolved unless one of the two fundamentally changes how
it works. I also don't see how a stacktrace or gcloud info would add any new
information, as the problem and its cause are well known.
Just to recap:
- Docker uses a `context` when building a container which only contains the
directory where the `Dockerfile` is located plus its subdirectories. No
symlinks are followed.
- pub uses symlinks into the PUB_CACHE for a project's dependencies => packages
won't be present in the docker context => `pub get` has to be called inside the
container again
- private git repos can't be accessed inside docker containers unless access is
explicitly configured
The current (somewhat official) workaround is to check out the private repo
locally and put the files inside the docker context.
This is the situation and I don't see it changing any time soon.
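Concretely, that workaround amounts to something like the following. The repository name and layout are placeholders based on the example earlier in this thread, and a local repository stands in for the private remote so the steps are reproducible without credentials:

```shell
# Check out a "private" dependency into the docker context so it ships
# with the image instead of being cloned inside the container.
set -e
WORK="$(mktemp -d)"
cd "$WORK"
mkdir -p root/app root/pkg

# Stand-in for the private remote; in reality this would be e.g.:
#   git clone git@github.com:me/private-dep1.git root/pkg/private-dep1
git init -q origin-repo
git -C origin-repo -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "init"
git clone -q origin-repo root/pkg/private-dep1

ls root/pkg
```

app/pubspec.yaml would then reference the package with a path dependency such as `path: ../pkg/private-dep1`.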
While I'm at it, let me propose another workaround. In our project we use a
directory structure that looks like this:
- docker_context/
  - module1/
    - app.yaml
    - Dockerfile
    - pubspec.yaml
    - ...
  - module2/
    - app.yaml
    - Dockerfile
    - pubspec.yaml
    - ...
  - private_dependency1/
  - private_dependency2/
We deploy every module separately. When deploying a module we copy its app.yaml
and Dockerfile out into the docker_context directory and use the whole
directory, containing all module and private dependency directories, as the
context (we use .dockerignore to keep unneeded modules out of the container).
We do this to avoid overwriting/changing dependencies in pubspec.yaml, as that
is more cumbersome than copying only a few files around.
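A minimal sketch of that per-module copy step. The file contents below are placeholders; the real app.yaml and Dockerfile would of course be your own:

```shell
# Sketch of the per-module deploy step: copy the module's app.yaml and
# Dockerfile into docker_context/ so the whole directory can serve as
# the build context. All names and file contents are placeholders.
set -e
WORK="$(mktemp -d)"
cd "$WORK"
mkdir -p docker_context/module1 docker_context/private_dependency1
printf 'runtime: custom\nvm: true\n' > docker_context/module1/app.yaml
printf 'FROM google/dart-runtime-base\n' > docker_context/module1/Dockerfile

cp docker_context/module1/app.yaml docker_context/module1/Dockerfile \
   docker_context/
ls docker_context
```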
The downsides to this are:
- we have to deploy every module separately
- we can't run the project locally that way using docker (we use the
dockerless gcloud mode for running the project)
Those downsides could be avoided if it were possible to specify the context
and the Dockerfile independently, e.g. via app.yaml. Newer docker versions
already have an option (the -f parameter) to specify a path to the Dockerfile,
so context and Dockerfile are decoupled in docker itself. So please, could you
add such an option to provide the context/Dockerfile explicitly to
gcloud/app.yaml?
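For reference, with plain docker the decoupling already looks like this. The paths are placeholders from the layout above, and the command is echoed rather than executed since running it would require a Docker daemon:

```shell
# Hypothetical invocation if gcloud passed docker's -f flag through:
# the Dockerfile lives in the module directory, while the whole
# docker_context directory serves as the build context.
CMD="docker build -f docker_context/module1/Dockerfile docker_context"
# Echo only; no Docker daemon required here.
echo "$CMD"
```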
Thanks and regards
daniel
Original comment by daniel.d...@exit-live.com
on 9 Sep 2015 at 1:55
Original issue reported on code.google.com by
cristian...@gmail.com
on 20 Nov 2014 at 10:01