GeorgeEfstathiadis closed this issue 1 year ago
Is this error from Mano?
Can you find the user in their study and check if they have any data? My initial guess is that Mano doesn't handle the case of no data, e.g. a zero byte zip file.
Can you reproduce this error?
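If the zero-byte-zip guess above is right, a defensive check on the downloaded bytes would avoid the crash. A minimal sketch, not Mano's actual code; `safe_extract` is a hypothetical helper:

```python
import io
import zipfile

def safe_extract(content: bytes, dest: str) -> bool:
    """Extract a downloaded zip payload, returning False instead of
    crashing when the response body is empty or not a valid archive."""
    if not content:
        # zero-byte download, e.g. an empty study export
        return False
    try:
        with zipfile.ZipFile(io.BytesIO(content)) as zf:
            zf.extractall(dest)
        return True
    except zipfile.BadZipFile:
        # truncated or non-zip response body
        return False
```

With a check like this the caller can log a warning for empty downloads rather than raising an unhandled exception.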
So the error is indeed in Mano, during data download: when I tried to download the data for the users the study team provided, I got the same error.
The study id is tOARXhwT2Ptx3TjD3bn8XVa3 on the eu server, with beiwe ids ["2gghdw7p", "fiksfhc2"], although the study team told me they found the error in other users as well. Both users have data in their dashboards.
To reproduce this error:
import os
import sys

study_id = "tOARXhwT2Ptx3TjD3bn8XVa3"
direc = os.getcwd()  # current working directory
dest_folder_name = "raw_data"
server = "eu"
time_start = "2023-05-05"
time_end = None
data_streams = ["gps", "accelerometer"]
beiwe_ids = ["2gghdw7p", "fiksfhc2"]
dest_dir = os.path.join(direc, dest_folder_name)

# import .py files located in another directory if needed
sys.path.insert(0, 'C:/Users/gioef/Desktop/onnela_lab/forest_mano/forest_mano')
import keyring_studies  # local module on the path added above
import mano  # needed for mano.keyring below

kr = mano.keyring(None)

from helper_functions import download_data
download_data(kr, study_id, dest_dir, beiwe_ids, time_start, time_end, data_streams)
(I just opened an issue on Mano, https://github.com/harvard-nrg/mano/issues/17 , about us taking it over. This has been a background discussion, we have our own issue about it somewhere, and I've emailed with Tim before. I'm getting the codebase installed locally so I can get into it, but I'm not an expert in it, yet.)
I can't run the script yet; I need to set environment variables first, since `mano.keyring(None)` fails with a missing URL. I checked the servers and can confirm there is data in the appropriate locations for these participants. I also recreated the data download parameters in the UI at https://eu.beiwe.org/data_access_web_form, and it ran, with an output of 1.6 GB.
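For reference, here is how I'd expect the credentials to be supplied, a sketch assuming the `BEIWE_*` environment variable names from mano's documentation (confirm against your installed version; every value below is a placeholder):

```python
import os

# mano.keyring(None) reads credentials from the environment rather
# than a keyring file; variable names are assumptions from mano's docs.
os.environ["BEIWE_URL"] = "https://eu.beiwe.org"
os.environ["BEIWE_USERNAME"] = "your-username"
os.environ["BEIWE_PASSWORD"] = "your-password"
os.environ["BEIWE_ACCESS_KEY"] = "your-access-key"
os.environ["BEIWE_SECRET_KEY"] = "your-secret-key"

# import mano
# kr = mano.keyring(None)  # should no longer fail with a missing URL
```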
When I open the downloaded zip file, I get an error in my desktop environment stating it is not a zip file.
I have now checked the estimated total data size for both users: 17,237,914,457 bytes (fiksfhc2) and 5,814,177,107 bytes (2gghdw7p), so far more than 1.6 GB.
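For the record, converting those byte counts to decimal gigabytes:

```python
sizes_bytes = {"fiksfhc2": 17_237_914_457, "2gghdw7p": 5_814_177_107}
# decimal GB (1 GB = 1e9 bytes), rounded to one decimal place
sizes_gb = {user: round(b / 1e9, 1) for user, b in sizes_bytes.items()}
print(sizes_gb)  # roughly 17.2 and 5.8 GB, dwarfing the 1.6 GB zip
```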
I'm now running the download again for only 2gghdw7p; it downloaded much more data but finished with a network error, and the browser didn't save the file.
I've confirmed it's not a memory error on the server, so at least there's that...
This is definitely a backend bug that I will need to take over, but there is a workaround: download the data in smaller chunks (daily is probably fine). Can you put together directions for that and forward them on?
** Or rather, that *should* work. I need you to test what happens when running a time-delimited download through Mano, and if that works, pass it on to the client.
Thanks Eli. I tried downloading incrementally, day by day, and that worked, so I sent them a script to do that.
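For anyone hitting the same problem, the incremental approach can be sketched like this, assuming `download_data` takes ISO date strings for `time_start`/`time_end` as in the reproducer above (the `daily_windows` helper is mine, not part of Mano):

```python
from datetime import date, timedelta

def daily_windows(start: str, end: str):
    """Yield (time_start, time_end) ISO date pairs covering one day each,
    from `start` through `end` inclusive."""
    day = date.fromisoformat(start)
    last = date.fromisoformat(end)
    while day <= last:
        yield day.isoformat(), (day + timedelta(days=1)).isoformat()
        day += timedelta(days=1)

# One download request per day instead of one huge request:
# for t0, t1 in daily_windows("2023-05-05", "2023-06-05"):
#     download_data(kr, study_id, dest_dir, beiwe_ids, t0, t1, data_streams)
```

Each per-day request stays small enough that the server finishes before any scaling or timeout issue kicks in.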
Ah - this was a server scaling issue, which explains why it was out of the blue. This server sees very little utilization, and the autoscaling rules were not well-tuned to that. I've changed the scaling rules and updated the server type.
The proximate cause of the failure was Elastic Beanstalk scaling down the server count... and picking the wrong server.
I sat on a long download and monitored the servers; the new trigger configuration no longer causes a scaling event from only one active data download.
@GeorgeEfstathiadis I'm satisfied from my end, they should be able to do the large download now. Assigning back to you, please close the issue when you have confirmation from the reporter that they can now access their data.
Closing, as they can download the data in small batches.
Original issue description: error from Beiwe users trying to download data for multiple users in their study.