prioux closed this 4 months ago
Did not work with UserkeyFlatDirSshDataProvider: "Error copying: Public file for SSH Key 'u11' does not exist."
That's normal, you need to push your key first. Go to 'my account' page and find the push key feature.
An error with the data vault: "No such file or directory @ dir_s_mkdir - /home/users/sboroday/cbrain_data_providers/vault0/admin". The DataProvider was working with the old GUI copy command.
will try again after break
If you look at my commit, you'll see there is nothing in there that depends on the types of the DataProvider. I use the method userfile.provider_move_to_otherprovider(), which completely delegates the operation to the DP implementation. If an error is occurring in your setup, it has to be a pre-existing configuration problem, not a consequence of a bug in this PR.
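The delegation pattern being described can be sketched with toy classes. This is an illustrative standalone example, not CBRAIN code: every class and method name here except provider_move_to_otherprovider is invented.

```ruby
# Toy stand-in for a data provider: it only tracks which files it holds.
class ToyDataProvider
  def files
    @files ||= []
  end

  # A real DP subclass would transfer bytes here; this toy version
  # just moves the record from one provider's list to the other's.
  def provider_move(userfile, destination)
    destination.files << userfile
    files.delete(userfile)
    userfile.data_provider = destination
    true
  end
end

class ToyUserfile
  attr_accessor :data_provider, :name

  def initialize(name, dp)
    @name = name
    @data_provider = dp
    dp.files << self
  end

  # Mirrors the delegation described above: the userfile knows nothing
  # about DP types; the whole operation is forwarded to the current DP.
  def provider_move_to_otherprovider(other_dp)
    data_provider.provider_move(self, other_dp)
  end
end

src  = ToyDataProvider.new
dest = ToyDataProvider.new
f    = ToyUserfile.new("hello", src)
f.provider_move_to_otherprovider(dest)
puts f.data_provider.equal?(dest)  # => true
```

The point of the pattern is that the caller never branches on the provider's class; each DP subclass supplies its own move/copy behavior.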
For some reason I did not encounter any issues with more typical data provider types, like EnCbrainSmartDataProvider. Of course, there could be some issues in my setup.
Works fine on my side with various DP classes.
I first failed to copy the file from one DP to a UserkeyFlatDirSshDataProvider (I forgot to push my key!). I was surprised to see that when the copy failed, the entry for the Userfile stayed in the DP. Maybe it could be removed when the copy fails; that would prevent some issues when retrying the copy.
Yes, if a low-level transport error occurs, it's possible the copy operation will abort while leaving a spurious entry in the database for the copied file. I will look into that, but it's unrelated to this PR; it's an issue with the DP copy file method.
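One way to implement the suggested cleanup, sketched here with invented names: wrap the transfer so that a failure destroys the freshly created record before re-raising the error.

```ruby
# Minimal stand-in for a database record that can be destroyed.
ToyRecord = Struct.new(:destroyed) do
  def destroy
    self.destroyed = true
  end
end

# Hypothetical helper: run the transfer in the block; if it raises,
# remove the spurious entry, then propagate the original error.
def copy_with_cleanup(new_record)
  yield
rescue => e
  new_record.destroy
  raise e
end

rec = ToyRecord.new(false)
begin
  copy_with_cleanup(rec) { raise "transport error" }
rescue RuntimeError
  # the caller still sees the failure
end
puts rec.destroyed  # => true
```

The method-level rescue keeps the error visible to the caller while guaranteeing the half-created record does not linger.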
The DP copy file method might not be intended to be invoked on a Bourreau; to rule out an error in the copy itself, I also tried the copy in the GUI.
What does 'not be intended to be invoked on a bourreau' mean? Why do you say that? I implemented the entire DataProvider framework in CBRAIN, from the core abstract libraries to most of the subclass implementations. Tell me why.
Sure, I cannot know the intent; that was a total wild guess. Still, it might be that at the time, the reliable functioning of the method in the Bourreau app was of limited importance, and some less important data provider classes were not exhaustively tested?
BTW, the 'low level' provider method cache_prepare, which causes the error, was written 15 years ago and not touched since, so few people would remember the original requirements in full detail.
So, with respect to the Bourreau that was asked to copy the files:
1- was the VaultLocal DP the SOURCE of the files?
2- was the VaultLocal DP the DESTINATION for the files?
3- was the VaultLocal's configuration an actual local filesystem path where the Bourreau runs?
Because I looked at that method, and it's totally fine:
  def cache_prepare(userfile) #:nodoc:
    SyncStatus.ready_to_modify_cache(userfile) do
      username = userfile.user.login
      userdir  = Pathname.new(remote_dir) + username
      Dir.mkdir(userdir) unless File.directory?(userdir)
      true
    end
  end
It does exactly what it's supposed to do for a DP that is LOCAL only.
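For what it's worth, the "mkdir unless it already exists" idiom in cache_prepare can be exercised on its own against a throwaway directory. This is a standalone sketch, not CBRAIN code; the tmpdir stands in for the provider's remote_dir.

```ruby
require "pathname"
require "tmpdir"

created = nil
Dir.mktmpdir do |remote_dir|
  # Same idiom as cache_prepare: build <remote_dir>/<login> and
  # create it only if it is not already a directory.
  username = "myuser"
  userdir  = Pathname.new(remote_dir) + username
  Dir.mkdir(userdir) unless File.directory?(userdir)
  created = File.directory?(userdir)
end
puts created  # => true
```

Note that Dir.mkdir raises Errno::ENOENT if the parent (here, remote_dir) does not exist on the machine running the code, which is exactly the failure reported above when a Bourreau runs it against a Portal-local path.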
I was assuming copy should work for all DPs, including ones local to the Portal.
1- was the VaultLocal DP the SOURCE of the files? No
2- was the VaultLocal DP the DESTINATION for the files? Yes
3- was the VaultLocal's configuration an actual local filesystem path where the Bourreau runs? No, it was a Portal-local DP
Then it's normal it doesn't work: you're trying to copy files to a directory that doesn't exist, as far as the Bourreau is concerned.
The problem is not with any code, it's with your configuration.
If you want to copy files between machines, the DPs must be network-enabled. VaultLocal has no networking capabilities.
I would mention that limitation in the method docs; it is not obvious (at least to me).
It says so right in the file:
# This class provides an implementation for a data provider
# where the remote files are not even remote, they are local
# to the currently running rails application. The provider's
# files are stored in a flat directory, two levels
# deep, directly specified by the object's +remote_dir+
# attribute and the user's login name. The file "hello"
# of user "myuser" is thus stored into a path like this:
#
# /root_dir/myuser/hello
#
# where +root_dir+ is the data provider's +remote_dir+ (a local
# directory).
#
# This data provider does not cache anything! The 'remote' files
# are in fact all local, and accessing the 'cached' files mean
# accessing the real provider's files. All methods are adjusted
# so that their behavior is sensible.
#
# For the list of API methods, see the DataProvider superclass.
class VaultLocalDataProvider < LocalDataProvider
And also in the header for the class LocalDataProvider
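For illustration, the two-level path layout described in that comment can be computed directly with Pathname. The remote_dir value below is hypothetical.

```ruby
require "pathname"

# Layout from the class comment: /root_dir/<login>/<filename>,
# where root_dir is the provider's +remote_dir+ (a local directory).
remote_dir = "/data/vault0"  # hypothetical provider remote_dir
login      = "myuser"
filename   = "hello"

path = Pathname.new(remote_dir) + login + filename
puts path  # => /data/vault0/myuser/hello
```

Since the path is resolved on whatever host runs the code, it only makes sense on the machine where remote_dir actually exists, which is why the class is "local only".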
I meant the send_command_copy_files docs.
I tried to read the above documentation, but stumbled on the first sentence. It says "This class provides an implementation for a data provider where the remote files are not even remote, they are local to the currently running rails application." What is 'the currently running rails app'? CBRAIN consists of several rails apps: one portal and several execution servers. That is very confusing. I find nothing about the absence of networking or the impossibility of using those files elsewhere. IMHO a comment in send_command_copy_files could clarify its intended area of applicability.
The Rails app running the code. The process that loaded the ruby files.
This is all code in the app/models directory, so whatever Ruby process has loaded the models is able to run the code in there.
As for send_command_copy_file, it's a utility method, a wrapper that sends a message from one rails application to another (but normally, only from portal to bourreau).
Ok on my side for now.
A RemoteResource (Bourreau or Portal) within the CBRAIN system can now be commanded to perform a file copying operation. This is useful as an optimization mechanism when the remote resource has direct access to the source files or the destination data provider, or both.
There is no interface mechanism that currently uses the feature. But a console user can try it with statements like: