Closed timcadman closed 5 months ago
@timcadman is this still the case?
No, I've worked this one out - it was an error triggered by there not being sufficient disk space to save the workspace. It would be helpful if the error message could state this (though I'm not sure whether you have control over that).
demo_url <- "https://armadillo-demo.molgenis.net/"
demo_token <- armadillo.get_token(demo_url)
builder <- DSI::newDSLoginBuilder()
builder$append(
server = "armadillo",
url = demo_url,
profile = "xenon",
driver = "ArmadilloDriver",
token = demo_token
)
logindata <- builder$build()
conns <- DSI::datashield.login(logins = logindata, assign = FALSE)
ds.rep(x1 = 4,
times = 10000,
length.out = NA,
each = 1,
source.x1 = "clientside",
source.times = "c",
source.length.out = NULL,
source.each = "c",
x1.includes.characters = FALSE,
newobj = "rep.seq",
datasources = conns)
ds.ls()
$armadillo
$armadillo$environment.searched
[1] "R_GlobalEnv"

$armadillo$objects.found
[1] "rep.seq"
datashield.workspace_save(conns, "test_save")
datashield.logout(conns)
conns <- datashield.login(logindata, restore = "test_save")
ds.ls()
$armadillo
$armadillo$environment.searched
[1] "R_GlobalEnv"

$armadillo$objects.found
[1] "rep.seq"
Fill the remaining disk space on the Armadillo server:
fallocate -l 10g /usr/share/armadillo/data/test.img
datashield.workspace_save(conns, "test_save")
Observe that the command executes with no error returned.
datashield.logout(conns)
Remove the file to free the disk space again:
rm /usr/share/armadillo/data/test.img
conns <- datashield.login(logindata, restore = "test_save")
Logging into the collaborating servers
Login armadillo [==================================>-----------------------------------] 50% / 0s
Error: Internal server error: org.molgenis.r.exceptions.RExecutionException: org.springframework.web.client.HttpClientErrorException$BadRequest: 400 Bad Request: "{"status":"400","key":"REvaluationRuntimeException","args":[],"message":"Error in base::load(file = \".RData\", envir = .GlobalEnv) : \n empty (zero-byte) input file\n"}"
We need to check that there is enough storage available before saving a workspace.
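As a rough illustration of such a pre-save check, here is a minimal server-side sketch. All names here are hypothetical (none come from the Armadillo codebase), and the 10% safety margin is an arbitrary assumption:

```java
import java.io.File;

// Hypothetical sketch: refuse to start a workspace save unless the data
// volume has room for the estimated workspace size plus a safety margin.
public class WorkspaceSpaceCheck {

    // Pure check so the policy is easy to test: require the estimated
    // size plus a 10% margin to fit in the usable space.
    static boolean hasEnoughSpace(long usableBytes, long estimatedWorkspaceBytes) {
        long margin = estimatedWorkspaceBytes / 10;
        return usableBytes >= estimatedWorkspaceBytes + margin;
    }

    public static void main(String[] args) {
        // getUsableSpace() reports bytes available to this JVM on that volume
        // (it returns 0 if the path does not exist, so no exception here).
        long usable = new File("/usr/share/armadillo/data").getUsableSpace();
        long estimated = 1_300_000_000L; // ~1.3 GB, the largest workspace seen so far
        System.out.println("enough space: " + hasEnoughSpace(usable, estimated));
    }
}
```

The hard part, as noted below, is producing `estimatedWorkspaceBytes` at all; a crude fallback would be to use the size of the largest workspace saved so far.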
Dick has checked, and the largest workspace currently saved on the BiB server is 1.3 GB.
I don't think there is any way to know how large the workspace will be prior to saving it. I think this leaves two options:
You could try the algorithm:
This will require running a crash-recovery process on machine restart, to ensure all shadow workspaces have been renamed.
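A minimal sketch of the shadow-workspace idea, under my own assumptions (all names hypothetical; the recovery policy shown is the conservative one of discarding incomplete shadows, so a truncated file can never replace the last good workspace):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

// Hypothetical sketch: write the workspace to a ".shadow" file first, then
// atomically rename it over the real file. A full disk or a crash mid-write
// then leaves the previous workspace intact instead of a zero-byte file.
public class ShadowWorkspaceWriter {

    static final String SHADOW_SUFFIX = ".shadow";

    static void saveWorkspace(Path workspace, byte[] data) throws IOException {
        Path shadow = workspace.resolveSibling(workspace.getFileName() + SHADOW_SUFFIX);
        Files.write(shadow, data); // may fail on a full disk; real file untouched
        // On POSIX filesystems rename() replaces the target atomically.
        Files.move(shadow, workspace,
                StandardCopyOption.REPLACE_EXISTING, StandardCopyOption.ATOMIC_MOVE);
    }

    // Crash recovery at startup: any shadow file still present is an
    // incomplete save, so it is safe (and simplest) to delete it.
    static void recoverOnStartup(Path dataDir) throws IOException {
        try (Stream<Path> files = Files.list(dataDir)) {
            files.filter(p -> p.toString().endsWith(SHADOW_SUFFIX))
                 .forEach(p -> p.toFile().delete());
        }
    }
}
```

This is one reading of the algorithm; if the intended scheme instead promotes completed shadows on restart, the recovery step would rename rather than delete, but it must then be able to tell a complete shadow from a truncated one.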