Closed: fekad closed this issue 4 years ago
If the -q option now works before the subcommand, can I close this issue too?
Can the HTTP-based "public" access that we talked about already be part of the Docker service stack that we provide?
Are the Docker daemons user-specific? I am running Docker as my regular user "gabor" on my machine, and when I tried to connect via SSH and have the docker command executed by the "abcd" user, I got the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/containers/quip/json: dial unix /var/run/docker.sock: connect: permission denied
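The daemon itself is system-wide, but the socket /var/run/docker.sock is typically owned by root:docker, so only root and members of the docker group can talk to it. A quick membership check, sketched in Python (the user and group names here are just examples; this assumes a standard Docker install):

```python
import grp

def user_in_group(user: str, group: str) -> bool:
    """Check whether `user` is listed as a supplementary member of `group`."""
    try:
        entry = grp.getgrnam(group)
    except KeyError:
        # The group does not exist on this machine.
        return False
    return user in entry.gr_mem

# If this prints False for the "abcd" user, the usual fix is
# `sudo usermod -aG docker abcd` followed by a re-login.
print(user_in_group("abcd", "docker"))
```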
The other nice thing about HTTP-based public access is that we can (and should) make it read-only, which gives us a good mechanism for providing read-only access. However, that reminds me that we also wanted to be able to control read-only access separately via the command line. I'll create a new issue for that.
Can we close this? I don't think it needs any new programming, just setup.
Or we can repurpose this issue to be about the HTTP pretty view, but then relabel it as major.
I think the "permission denied" problem is not related to abcd. I wouldn't close this issue yet, because we still have a few open questions about remote access, e.g.:
No upload is needed for remote access. For download, we need a STDOUT option, and then you need to specify the file type.
Let's make this issue about the STDOUT download option that is still missing, plus the API for reaching the database over HTTPS. A separate issue will be opened for the GUI: a webpage to generate queries and display results in a browser.
I got remote access to work; see the README.md.
I think the STDOUT switch needs to be a global option that I can put before the subcommand, and it would instruct abcd to never create any files but to dump everything to stdout. In the case of the download subcommand, it should produce output that is just the data, without any other messages, so that it can be redirected straight into a data file that is then valid.
Can we do upload via STDIN in the same way?
This needs a --format option that is simply passed through to ASE.
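The key detail for a clean stdout mode is to keep diagnostics off stdout entirely. A minimal sketch of the idea (the function name and message are hypothetical, not abcd's actual code):

```python
import sys

def download_to_stdout(data: str, fmt: str) -> None:
    # Progress and log messages go to stderr, so that something like
    #   abcd download --stdout --format=extxyz ... > data.xyz
    # produces a file containing nothing but the data itself.
    print(f"writing output in {fmt} format", file=sys.stderr)
    sys.stdout.write(data)
```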
The download subcommand should have an option --stdout that, in conjunction with --format, dumps the data to stdout.
There should be a global option --remote, which implies --readonly, and also implies --stdout for the download subcommand (but the --format still needs to be specified).
There is now a global option --remote, which prevents modifying the database and makes the download command use standard output. The --read-only option became redundant, so it has been removed.
And we have --format as well? (Because no output file is specified, you can't guess the format from the file extension.)
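The implied-options behaviour described above can be wired up with argparse roughly as follows. This is a sketch, not abcd's actual parser; the subcommand and option names beyond --remote, --stdout, and --format are assumptions:

```python
import argparse

parser = argparse.ArgumentParser(prog="abcd")
# Global option: parsed before the subcommand, as in "abcd --remote download ..."
parser.add_argument("--remote", action="store_true",
                    help="read-only access; download writes to stdout")
sub = parser.add_subparsers(dest="command")
dl = sub.add_parser("download")
dl.add_argument("--format")           # still required in remote mode,
dl.add_argument("--stdout",           # since there is no file extension
                action="store_true")  # to guess the format from

args = parser.parse_args(["--remote", "download", "--format", "extxyz"])
# --remote implies read-only and, for the download subcommand, stdout output:
readonly = args.remote
use_stdout = args.stdout or (args.remote and args.command == "download")
print(readonly, use_stdout, args.format)
```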
So the first idea, over SSH, is to use the .ssh/authorized_keys file to execute a script that eventually calls abcd with a given access query tag. This tag can be made dependent on the SSH key being used; this is similar to how gitolite does authentication.
The trick is that the line in the authorized_keys file will look something like:
ssh-key /path/to/abcd -q access=some-key
This doesn't quite work for us at the moment because the subsequent arguments will be appended to it, including the subcommand, so the -q inserted between the abcd command and the subcommand will not get parsed properly. So either we make this work by having -q be an option of the abcd command itself that just gets passed down to the subcommand, or we use a kludge where the authorized_keys line is:
ssh-key /path/to/abcd-launcher -q access=some-key
and the abcd-launcher script simply takes the "-q access=some-key" arguments and moves them to the end of the command line before launching the actual abcd command.
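A sketch of such an abcd-launcher in Python (the abcd path and the -q option are taken from the example above; everything else, including the assumption that exactly two forced arguments come first, is hypothetical):

```python
import os
import sys

ABCD = "/path/to/abcd"  # path from the authorized_keys example above

def reorder(argv):
    # argv is what the launcher receives: the forced "-q access=..." pair
    # from authorized_keys first, then the user's subcommand and arguments.
    forced, user_args = argv[:2], argv[2:]
    # Move the forced pair to the end so abcd's subcommand parser sees it.
    return [ABCD] + user_args + forced

def main():
    cmd = reorder(sys.argv[1:])
    os.execv(cmd[0], cmd)  # replace the launcher process with abcd itself
```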