Closed guillermoturiel closed 10 months ago
Hello Guillermo,
Thank you for your kind words and your interest in the software. I apologize for the inconvenience you're facing with the dataset download. I appreciate your patience.
I've identified the issue and am working on resolving it. In the meantime, here are the new links to download the human and mouse datasets:
Human Dataset: https://cloud.uni-hamburg.de/s/YCJkWqebaP5poLH
Mouse Dataset: https://cloud.uni-hamburg.de/s/8ndzG5fm77e6dPG
I will also include these updated links in the upcoming version. If you encounter any further issues or have additional questions, please don't hesitate to reach out.
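Until the updated version ships, the new archives can also be fetched manually and placed where `download_db.py` expects them. Below is a minimal sketch: the helper names (`nextcloud_direct_url`, `fetch_dataset`) are hypothetical, appending `/download` to a share link is the usual Nextcloud direct-download convention, and the destination directory must be chosen by the user (the traceback shows the package writing into a `databases` folder under its own `MAIN_PATH`).

```python
import os
from urllib import request


def nextcloud_direct_url(share_url: str) -> str:
    """Turn a Nextcloud share link into a direct-download URL by
    appending the conventional /download suffix (assumption: the
    share is a single public file)."""
    return share_url.rstrip("/") + "/download"


def fetch_dataset(share_url: str, dest_dir: str, filename: str) -> str:
    """Download a dataset archive into dest_dir and return its path."""
    os.makedirs(dest_dir, exist_ok=True)
    target = os.path.join(dest_dir, filename)
    request.urlretrieve(nextcloud_direct_url(share_url), target)
    return target


# Example (human dataset; adjust dest_dir to your scanet install):
# fetch_dataset("https://cloud.uni-hamburg.de/s/YCJkWqebaP5poLH",
#               "databases", "human.zip")
```

After downloading, the zip files can be unpacked (or left) wherever the installed `download_db.py` looks for them.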
Hello,
Thank you very much for developing this software. It seems quite interesting, and I am looking forward to trying it on my data.
I have been following the full-example.ipynb document but I have encountered an error that blocks my progress.
When running `sn.download_db()` I get the following error:
```
In [113]: sn.download_db()
Do you want to install datasets for (human/mouse/both)? human
human.zip: 0.00B [00:03, ?B/s]
---------------------------------------------------------------------------
HTTPError                                 Traceback (most recent call last)
Cell In[113], line 1
----> 1 sn.download_db()

File ~/anaconda3/envs/scanet/lib/python3.9/site-packages/scanet/download_db.py:41, in download_db()
     39 filename = os.path.join(MAIN_PATH, "databases", os.path.basename(dataset_human))
     40 with DownloadProgressBar(unit='B', unit_scale=True, miniters=1, desc=dataset_human.split('/')[-1]) as t:
---> 41     request.urlretrieve(dataset_human, filename=filename,
     42                         reporthook=t.update_to)
     45 if answer == "mouse" or answer == "both":
     46     if db_exists("mouse"):

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:239, in urlretrieve(url, filename, reporthook, data)
    222 """
    223 Retrieve a URL into a temporary location on disk.
    224 (...)
    235 data file as well as the resulting HTTPMessage object.
    236 """
    237 url_type, path = _splittype(url)
--> 239 with contextlib.closing(urlopen(url, data)) as fp:
    240     headers = fp.info()
    242 # Just return the local path and the "headers" for file://
    243 # URLs. No sense in performing a copy unless requested.

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:214, in urlopen(url, data, timeout, cafile, capath, cadefault, context)
    212 else:
    213     opener = _opener
--> 214 return opener.open(url, data, timeout)

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:523, in OpenerDirector.open(self, fullurl, data, timeout)
    521 for processor in self.process_response.get(protocol, []):
    522     meth = getattr(processor, meth_name)
--> 523     response = meth(req, response)
    525 return response

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:632, in HTTPErrorProcessor.http_response(self, request, response)
    629 # According to RFC 2616, "2xx" code indicates that the client's
    630 # request was successfully received, understood, and accepted.
    631 if not (200 <= code < 300):
--> 632     response = self.parent.error(
    633         'http', request, response, code, msg, hdrs)
    635 return response

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:561, in OpenerDirector.error(self, proto, *args)
    559 if http_err:
    560     args = (dict, 'default', 'http_error_default') + orig_args
--> 561 return self._call_chain(*args)

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:494, in OpenerDirector._call_chain(self, chain, kind, meth_name, *args)
    492 for handler in handlers:
    493     func = getattr(handler, meth_name)
--> 494     result = func(*args)
    495     if result is not None:
    496         return result

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:641, in HTTPDefaultErrorHandler.http_error_default(self, req, fp, code, msg, hdrs)
    640 def http_error_default(self, req, fp, code, msg, hdrs):
--> 641     raise HTTPError(req.full_url, code, msg, hdrs, fp)

HTTPError: HTTP Error 404: Not Found
```
When looking into download_db.py I see that there are two links for the human and mouse datasets:
```python
dataset_human = "https://wolken.zbh.uni-hamburg.de/index.php/s/DZ3XT76Z7xYAT64/download/human.zip"
dataset_mouse = "https://wolken.zbh.uni-hamburg.de/index.php/s/6syqpRTgDLaQs4t/download/mouse.zip"
```
I suspect that the problem lies here, in downloading the data from these links. I think there is some kind of access problem, as this is what I see when trying to open the link in a normal browser:
Could you please provide some help to solve this? Or maybe an alternative way of downloading the dataset?
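One quick way to check the suspicion above is to probe the URLs directly and distinguish a removed share (HTTP 404, as in the traceback) from an access restriction (HTTP 401/403). A minimal sketch using only the standard library; the helper names `classify` and `probe` are hypothetical:

```python
from urllib import error, request


def classify(code: int) -> str:
    """Map an HTTP status code from a failed download to a likely cause."""
    if code == 404:
        return "stale link: the shared file was moved or removed"
    if code in (401, 403):
        return "access problem: the share is restricted"
    return f"unexpected HTTP error {code}"


def probe(url: str) -> str:
    """Try to open url and report why it fails, if it does."""
    try:
        request.urlopen(url).close()
        return "reachable"
    except error.HTTPError as exc:
        return classify(exc.code)
```

A 404 would mean the old share links were deleted rather than access-restricted, which matches the error in the traceback.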
Thank you very much.
Best,
Guillermo