Closed SashaWeinstein closed 2 years ago
I was getting an error when trying to run the export from the devcontainer: the .gdb file already existed, even though it wasn't in my local filesystem. Maybe the file is persisting in the volume set up by the run?
This isn't an issue per se; it's more a thought I want to write down to discuss with @td928 when he gets back from El Salvador.
It turns out that volumes are necessary to export geodatabase files via the `webmapp/gdal-docker` container. My previous understanding was that volumes are how Docker manages shared filespaces between a host machine and one or more containers. After working through issue #332, I don't think that understanding is correct: Docker containers can write to local filesystems without specifying a volume.
For the geodatabase export, the new container doesn't need to read from the "host" (I put host in quotes because the host is a Docker container too). It needs to read from the database, and that setup is controlled by the `--network` parameter. It does need to write to the host's filesystem, though, and it can do that without a volume specified. I figured this out by running a small test command, which creates a text file in the working directory of the "host."
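To make the `--network` point concrete, here is a hedged sketch of what such an export invocation could look like. The network name, database credentials, table name, and output path are all placeholders I'm assuming for illustration, not the project's actual values, and the GDAL output driver available depends on the image's GDAL version:

```shell
# Hypothetical sketch of the geodatabase export. The gdal container joins
# the devcontainer's Docker network to reach the database (--network), and
# a bind mount (-v) gives it somewhere on the "host" to write the .gdb
# output. Network, credentials, and table names are made-up placeholders.
docker run --rm \
  --network devcontainer_default \
  -v "$(pwd):/data" \
  webmapp/gdal-docker \
  ogr2ogr -f "OpenFileGDB" /data/mappluto.gdb \
  "PG:host=db port=5432 user=postgres dbname=pluto" mappluto
```

Note that the database read needs only `--network`, while the `.gdb` write is the part that seems to require the mount.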
So why does the gdal container need a volume specified? I really can't tell. Will watch some YouTube videos and discuss with Te.
### Shortcoming of the `LOCAL_WORKSPACE_FOLDER` solution

When the export is run with the

```
-v "${LOCAL_WORKSPACE_FOLDER//\\/\/}$(pwd):/data" \
```

solution that Te found, it returns an error. My solution was to add `/workspace/pluto_build/output/mappluto_unclipped_gdb` and `/workspace/pluto_build/output/mappluto_gdb` as shared paths via the Docker GUI. I guess each engineer who wants to export via the devcontainer will have to do this setup. The error message is pretty straightforward.
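For anyone puzzling over the `${LOCAL_WORKSPACE_FOLDER//\\/\/}` part of that flag, it's a bash pattern substitution that replaces every backslash with a forward slash, so a Windows-style workspace path becomes one Docker can accept as a mount source. A minimal demonstration, with a made-up example path:

```shell
# ${VAR//\\/\/} replaces all backslashes in VAR with forward slashes.
# The pattern \\ is a literal backslash; the replacement \/ is a slash.
# The path below is a made-up example, not a real workspace path.
LOCAL_WORKSPACE_FOLDER='C:\Users\engineer\product'
echo "${LOCAL_WORKSPACE_FOLDER//\\/\/}"
# prints: C:/Users/engineer/product
```

With a Linux host the variable contains no backslashes, so the substitution is a no-op and the path passes through unchanged.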