Hello,
I never tried to run it on Colab, so it might be problematic to run it out of the box there. I would recommend installing it locally instead. Download the dataset using `python -m mice_dfi.dataset.download` and then place the downloaded files into `mice_dfi/src/mice_dfi/dataset/raw` or any other place. Then modify the `mice_dfi.dataset._GLOBAl_DIR_PATH` variable so that it points to the directory with the data.
In any case, I recommend running everything on a local machine. The neural network is very light and simple, so it can be trained using only a CPU.
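Roughly, overriding the path would look like the sketch below; the directory is just an example, and it is worth double-checking where `_GLOBAl_DIR_PATH` is defined in the source before relying on it:

```python
# Minimal sketch: point mice_dfi at a custom data directory before any code
# that loads the raw files runs. The path below is only an example.
import mice_dfi.dataset as ds

ds._GLOBAl_DIR_PATH = "/home/user/mice_dfi_data"  # directory containing the raw/ files
```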
I tried to do it locally. To begin, I created a new conda environment with `conda create -n project python=3.9 pip`. Then I activated it and ran `~/miniconda3/envs/project/bin/pip install .`. If line 158 in `setup.py` is commented out, this succeeds.
However, `python -m mice_dfi.dataset.download` fails with the following log:
Traceback (most recent call last):
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/urllib/request.py", line 1346, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/http/client.py", line 1285, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/http/client.py", line 1331, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/http/client.py", line 1280, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/http/client.py", line 1040, in _send_output
self.send(msg)
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/http/client.py", line 980, in send
self.connect()
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/http/client.py", line 1447, in connect
super().connect()
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/http/client.py", line 946, in connect
self.sock = self._create_connection(
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/socket.py", line 823, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/socket.py", line 954, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/site-packages/mice_dfi/dataset/download.py", line 45, in <module>
filename = wget.download(url, out='raw/{:s}/{:s}.csv'.format(path, dataset))
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/site-packages/wget.py", line 526, in download
(tmpfile, headers) = ulib.urlretrieve(binurl, tmpfile, callback)
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/urllib/request.py", line 239, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/urllib/request.py", line 214, in urlopen
return opener.open(url, data, timeout)
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/urllib/request.py", line 517, in open
response = self._open(req, data)
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/urllib/request.py", line 534, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/urllib/request.py", line 494, in _call_chain
result = func(*args)
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/urllib/request.py", line 1389, in https_open
return self.do_open(http.client.HTTPSConnection, req,
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/urllib/request.py", line 1349, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno -2] Name or service not known>
However, it does create the `raw` directory, and it contains `bone`, `CBC`, `gait`, `lifespan`, and `serum`. So I ran the training command `python -m mice_dfi.model.train -o dump -c ./src/mice_dfi/model/config/model_resnet.yaml --tb` and got:
2023-03-21 11:17:59.551759: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/home/exxxplainer/.mujoco/mujoco200/bin:/home/exxxplainer/.mujoco/mujoco200/bin
2023-03-21 11:17:59.551810: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xd . Check the section C-API incompatibility at the Troubleshooting ImportError section at https://numpy.org/devdocs/user/troubleshooting-importerror.html#c-api-incompatibility for indications on how to solve this problem .
Traceback (most recent call last):
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/site-packages/mice_dfi/model/train.py", line 9, in <module>
import mice_dfi.plots.utils as mutils
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/site-packages/mice_dfi/plots/__init__.py", line 1, in <module>
from .style import *
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/site-packages/mice_dfi/plots/style.py", line 5, in <module>
import matplotlib
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/site-packages/matplotlib/__init__.py", line 174, in <module>
_check_versions()
File "/home/exxxplainer/miniconda3/envs/cba_project/lib/python3.9/site-packages/matplotlib/__init__.py", line 159, in _check_versions
from . import ft2font
ImportError: numpy.core.multiarray failed to import
It says something about the GPU, but that can be ignored (I use WSL, and configuring a GPU there is troublesome). The important error seems to be the one about numpy.
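For what it's worth, the mismatch itself is just a version problem; the traceback says the compiled extensions expect C-API 0x10 while the installed numpy provides 0xd, i.e. an older numpy than the one they were built against. A trivial check (nothing repo-specific) is simply:

```python
# Print the numpy that the environment actually resolved to, to compare it
# against whatever version the compiled packages (matplotlib etc.) were built for.
import numpy
print(numpy.__version__)
```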
Thanks for the feedback. I will check the `conda create -n project python=3.9 pip` setup and come back soon.
By the way, could you connect to MPD from your country? Just open the Peters4 link in a browser.
Hi again! Good news -- I found a working configuration with an older version of Python. First, I do:
git clone https://github.com/gero-science/mice_dfi
cd mice_dfi
conda create -n project python=3.8 pip
conda activate project
Now, if I run `~/miniconda3/envs/project/bin/pip install .`, it displays `Encountered error while trying to install package.`, which I mentioned in the very first message. And again, commenting out line 158 in `setup.py` allows `mice-dfi` to be installed.
Then I set `mice_dfi.dataset._GLOBAl_DIR_PATH` to point to a convenient place and ran `python -m mice_dfi.dataset.download`, and hit this error:
urllib.error.URLError: <urlopen error [Errno -2] Name or service not known>
It occurs when trying to download the `Yuan2_strainmeans` dataset. Apparently this is the MPD issue that you mentioned. However, before your reply I googled a bit and found this link, which seems to be the dataset, and downloaded it manually. (Although I am not sure it is the version of the dataset you used.)
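For anyone hitting the same thing, the manual workaround amounts to roughly the sketch below; the URL is only a placeholder (not the actual link I used), and the subfolder is my guess based on the `wget.download` call in the traceback above:

```python
# Sketch of the manual download: save the CSV where download.py would have put
# it, i.e. raw/<group>/<dataset>.csv. URL and group folder below are assumptions.
import os
import urllib.request

url = "https://example.org/Yuan2_strainmeans.csv"                # placeholder URL
dest = os.path.join("raw", "lifespan", "Yuan2_strainmeans.csv")  # assumed group folder
os.makedirs(os.path.dirname(dest), exist_ok=True)
urllib.request.urlretrieve(url, dest)
```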
After that, I finally ran `python -m mice_dfi.model.train -o dump -c ./src/mice_dfi/model/config/model_resnet.yaml --tb`, and it does train the neural network.
So, formally I can call the original issue resolved. However, there are some questions left:
1. Maybe line 158 in `setup.py` is actually important for some hidden purpose, and commenting it out is not a great solution.
2. There are datasets in `notebooks/generated`; however, I do not see the code used to generate these datasets.
3. How can I obtain the dFI values and the predicted inputs used for the MSE metrics from the trained model?

I would be grateful to hear something about these (although I'm not in a hurry anymore).
Thanks for the feedback. I will take a look when I have free time.
Concerning question 3, use the method `predict_z` to obtain the dFI values and `predict_decoded` to obtain the predicted input for the MSE metrics.
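In rough terms, usage would look like the sketch below; `model` and `X` are placeholders for the trained network and an input feature matrix, and the array shapes are my assumption, so adapt it to the actual training code:

```python
# Hypothetical usage sketch: `model` is the trained network, `X` is a
# (n_samples, n_features) array. Only the method names come from the comment above.
import numpy as np

z = model.predict_z(X)             # latent dFI value per sample
x_hat = model.predict_decoded(X)   # reconstructed (predicted) input
mse = np.mean((X - x_hat) ** 2)    # reconstruction error for the MSE metric
```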
Hello! I was trying to reproduce the results of this paper as part of a course project, and ran into an error while running the installation commands from the README.
Here is a colab notebook with these three commands. I did not install or upgrade any packages prior to launching your code.
Part of the error log:
Then I found out that commenting out line 158 in `setup.py` allows `mice-dfi` to be installed. However, it then fails when downloading the datasets. Ultimately, I am not able to launch the training script and reproduce the results.