Kawue / imaging_communitydetection_demo

Preconfigured pipeline to find communities within a network built from lateral imaging data, reproducing the results of the following publication:
GNU General Public License v3.0

How to apply GRINE on data (imzML) collected by a high mass resolution MALDI-MSI instrument #1

Open anani-a-missinou opened 4 years ago

anani-a-missinou commented 4 years ago

Dear Karsten Wüllems,

I’m a Ph.D. student in France working on Brassica napus (rapeseed) specialized metabolites against pathogens. I want to apply your pipeline “Analysis of GRaph mapped Image data NEtworks” to my data. We have disease-resistant and disease-susceptible plants, and we want to find feature communities within a network built from imaging of cross-sectioned tissue (infected/non-infected).

Your pipeline can identify groups of molecules whose distributions correlate with plant anatomical structures and help us better understand the metabolic resistance of rapeseed to pathogens.

We have: imzML files collected by MALDI imaging on an FTICR SolariX 7T ParaCell CombiSource ESI/MALDI instrument (Bruker Daltonics).

Have you shared a compiled version of the pipeline, a step-by-step guide, or the command lines you used? I would be grateful if you would let me know.

I sincerely thank you for your answer. Respectfully yours,

Kawue commented 4 years ago

OK, in general you can ignore the conda errors, as they do not interfere with the code. However, Error processing tar file (exit status 1): write /opt/conda/envs/grine/lib/libmkl_avx512_mic.so: no space left on device suggests that your volume was full.
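As a side note (not from the original thread): a quick way to confirm free space from Python is the sketch below. Bear in mind that with Docker Toolbox the relevant volume is the one inside the boot2docker VM, not the Windows host drive, so this only checks the host side.

```python
import shutil

# Minimal sketch: report free space on the current drive before a Docker build.
# On Windows, pass a drive root such as "C:\\" instead of "/".
total, used, free = shutil.disk_usage("/")
print(f"free: {free / 1e9:.1f} GB of {total / 1e9:.1f} GB")
```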

Which Windows 10 version is on your PC? Is there a reason you used Docker Toolbox instead of Docker Desktop?

Do you execute the code from cmd or PowerShell, or are you using the Docker Toolbox command line?

I tried the example call on Windows 10 Pro with Docker Desktop.

anani-a-missinou commented 4 years ago

I execute the code in the Docker QuickStart shell.

I use Windows 10 Professional, version 1909, and I think I have enough space available.

I used Docker Toolbox on Windows because you recommend it on GitHub.

I am now trying Docker Desktop and cloning the git repository again.

But problems appear while it runs, and I now get the same error on Docker Toolbox.


$ cd D:/Parts/DATA/Metabolomics/rawData/METAPHOR/MALDI-MSI/Image_File/GRINEmsi-community-detection
$ docker login
Authenticating with existing credentials...
Login Succeeded
$ git clone https://github.com/Kawue/msi-community-detection.git
$ cd msi-community-detection
$ docker build -t grine/msicommunitydetection .
Sending build context to Docker daemon  355.8kB
Step 1/8 : FROM continuumio/anaconda3
 ---> bdb4a7e92a49
Step 2/8 : COPY environment.yml .
 ---> Using cache
 ---> 9348cce04bb6
Step 3/8 : COPY main.py .
 ---> Using cache
 ---> 5e12dbcf4f34
Step 4/8 : COPY /kode ./kode
 ---> Using cache
 ---> 69fc7b34b2b7
Step 5/8 : RUN conda env create -f environment.yml
 ---> Running in 15ddd360b543
Collecting package metadata (repodata.json): ...working... failed

<<< ERROR REPORT >>>

Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/urllib3/response.py", line 388, in _decode
    data = self._decoder.decompress(data)
  File "/opt/conda/lib/python3.7/site-packages/urllib3/response.py", line 85, in decompress
    ret += self._obj.decompress(data)
zlib.error: Error -3 while decompressing data: incorrect data check

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/requests/models.py", line 750, in generate
    for chunk in self.raw.stream(chunk_size, decode_content=True):
  File "/opt/conda/lib/python3.7/site-packages/urllib3/response.py", line 560, in stream
    for line in self.read_chunked(amt, decode_content=decode_content):
  File "/opt/conda/lib/python3.7/site-packages/urllib3/response.py", line 757, in read_chunked
    chunk, decode_content=decode_content, flush_decoder=False
  File "/opt/conda/lib/python3.7/site-packages/urllib3/response.py", line 394, in _decode
    e,
urllib3.exceptions.DecodeError: ('Received response with content-encoding: gzip, but failed to decode it.', error('Error -3 while decompressing data: incorrect data check'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/conda/exceptions.py", line 1079, in __call__
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/conda_env/cli/main.py", line 80, in do_call
    exit_code = getattr(module, func_name)(args, parser)
  File "/opt/conda/lib/python3.7/site-packages/conda_env/cli/main_create.py", line 111, in execute
    result[installer_type] = installer.install(prefix, pkg_specs, args, env)
  File "/opt/conda/lib/python3.7/site-packages/conda_env/installers/conda.py", line 32, in install
    prune=getattr(args, 'prune', False), update_modifier=UpdateModifier.FREEZE_INSTALLED)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 117, in solve_for_transaction
    should_retry_solve)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 158, in solve_for_diff
    force_remove, should_retry_solve)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 262, in solve_final_state
    ssc = self._collect_all_metadata(ssc)
  File "/opt/conda/lib/python3.7/site-packages/conda/common/io.py", line 88, in decorated
    return f(*args, **kwds)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 415, in _collect_all_metadata
    index, r = self._prepare(prepared_specs)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 1011, in _prepare
    self.subdirs, prepared_specs, self._repodata_fn)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/index.py", line 228, in get_reduced_index
    repodata_fn=repodata_fn)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 105, in query_all
    result = tuple(concat(executor.map(subdir_query, channel_urls)))
  File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 598, in result_iterator
    yield fs.pop().result()
  File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 435, in result
    return self.__get_result()
  File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/opt/conda/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 98, in <lambda>
    package_ref_or_match_spec))
  File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 110, in query
    self.load()
  File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 174, in load
    _internal_state = self._load()
  File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 248, in _load
    repodata_fn=self.repodata_fn)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 485, in fetch_repodata_remote_request
    timeout=timeout)
  File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 546, in get
    return self.request('GET', url, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 686, in send
    r.content
  File "/opt/conda/lib/python3.7/site-packages/requests/models.py", line 828, in content
    self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b''
  File "/opt/conda/lib/python3.7/site-packages/requests/models.py", line 755, in generate
    raise ContentDecodingError(e)
requests.exceptions.ContentDecodingError: ('Received response with content-encoding: gzip, but failed to decode it.', error('Error -3 while decompressing data: incorrect data check'))

$ /opt/conda/bin/conda-env create -f environment.yml

environment variables:
    CIO_TEST=
    CONDA_AUTO_UPDATE_CONDA=false
    CONDA_ROOT=/opt/conda
    PATH=/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    REQUESTS_CA_BUNDLE=
    SSL_CERT_FILE=

 active environment : None
   user config file : /root/.condarc

populated config files :
          conda version : 4.8.2
    conda-build version : 3.18.11
         python version : 3.7.6.final.0
       virtual packages : __glibc=2.28
       base environment : /opt/conda  (writable)
           channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/r/linux-64
                          https://repo.anaconda.com/pkgs/r/noarch
          package cache : /opt/conda/pkgs
                          /root/.conda/pkgs
       envs directories : /opt/conda/envs
                          /root/.conda/envs
               platform : linux-64
             user-agent : conda/4.8.2 requests/2.22.0 CPython/3.7.6 Linux/4.19.130-boot2docker debian/10 glibc/2.28
                UID:GID : 0:0
             netrc file : None
           offline mode : False

An unexpected error has occurred. Conda has prepared the above report.

The command '/bin/sh -c conda env create -f environment.yml' returned a non-zero code: 1
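For context on the traceback above: zlib's "Error -3 ... incorrect data check" means the decompressed bytes failed the stream's checksum, i.e. the repodata download arrived corrupted (often a flaky connection or an interfering proxy). The sketch below (not from the thread) reproduces the same message by corrupting a zlib stream's checksum trailer.

```python
import zlib

# A zlib stream ends with an Adler-32 checksum of the uncompressed data.
# Clobbering that trailer triggers the same "incorrect data check" error
# that conda hit while decoding the gzip-encoded repodata response.
payload = zlib.compress(b"repodata.json contents")
corrupted = payload[:-4] + b"\x00\x00\x00\x00"  # overwrite the checksum

try:
    zlib.decompress(corrupted)
except zlib.error as exc:
    print(exc)  # Error -3 while decompressing data: incorrect data check
```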



Kawue commented 4 years ago

I think my GitHub description was misleading. You have to use the Toolbox on Windows 10 Home, because Home is not supported by Docker Desktop.

Anyway, this is confusing and hard to track. I just ran the same steps you posted above and it worked. For the calls above, did you use the PowerShell?

Please call docker ps and docker volume ls and show me the results.

I am also not using a docker login, but this should not have any effect.

anani-a-missinou commented 4 years ago

For the calls above I used, on the one hand, the Docker Desktop shell and, on the other hand, the Docker QuickStart shell. Both give the same error as above.

To clarify, should I keep using Docker QuickStart or Docker Desktop?


$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

$ docker volume ls
DRIVER              VOLUME NAME
local               03c51ee3e3aa26b401c128719c202d9f4801d73f97829602fcf29b966568e709
local               6b323d316fac9c58265e8cac2c811eb2e3d465d0dd462d110c335757e76e0692
local               1420ebbd54110f5f4fa25775ac04eca03a9f07303503512316d93592c744f03a
local               68490184dcb40b15fb4f1cf6422c75b2e75f6f344ecd874c0022c563e246dd25
local               8131355471c707c350a97ebeec806cab6de380a6925bacbee4c7b8aa869b469e
local               a191668d10065c68ac47b57e32fe123b552a2395f1de76f3f6bc7fb9f2960e99
local               aa6b0314fd8d750ebb854293bf6d84e07994e0eeb4649db4842d4c2bd98dc4c1
local               b1937e522d501654ce6c5caa8868771ea7e7c881f1b77db4c95f060a83992a63
local               d1160ea2fa75a94dd733aa0beeb39ea7799eace8e48129675da01758c45ce629
local               d26748726152cee2203d465464fdfb2007636e6f7526442615727face42af599
local               e39128cab32a7a0be1c70f4c4903970920f37681915be5ed0d5cf1a8cd09d35e
local               ffa67544e4a5a5675e9458b977406595b83fc3fdfaa57f85542969758e006ab0
local               path-to-data


Kawue commented 4 years ago

OK, can you repeat the docker build command from the PowerShell?

Also, do you use Docker for anything else? If not, the volume list should be empty, and you might want to call docker volume prune to remove all unused volumes.

Kawue commented 4 years ago

Well, I just got the no space left error on my own system, on a completely unrelated project. Can you please show me the information provided by docker system df?

anani-a-missinou commented 4 years ago

I cannot run the Docker build, and I cannot connect to Docker using PowerShell. It seems the docker daemon is not running.

Normally no, I don't use Docker for other applications.

docker build command from the powershell


PS C:\Users\amissinou> cd D:/Parts/DATA/Metabolomics/rawData/METAPHOR/MALDI-MSI/Image_File/GRINE/msi-community-detection
PS D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection> docker build -t grine/msicommunitydetection .
error during connect: Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.40/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&session=jrd9he5sbnmljfunooolfwuba&shmsize=0&t=grine%2Fmsicommunitydetection&target=&ulimits=null&version=1: open //./pipe/docker_engine: Access is denied. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
PS D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection> docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: 2amissinou
Password:
error during connect: Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.40/auth: open //./pipe/docker_engine: Access is denied. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.


docker volume prune


$ docker volume prune
WARNING! This will remove all local volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B

$ docker volume ls
DRIVER              VOLUME NAME
local               03c51ee3e3aa26b401c128719c202d9f4801d73f97829602fcf29b966568e709
local               6b323d316fac9c58265e8cac2c811eb2e3d465d0dd462d110c335757e76e0692
local               1420ebbd54110f5f4fa25775ac04eca03a9f07303503512316d93592c744f03a
local               68490184dcb40b15fb4f1cf6422c75b2e75f6f344ecd874c0022c563e246dd25
local               8131355471c707c350a97ebeec806cab6de380a6925bacbee4c7b8aa869b469e
local               a191668d10065c68ac47b57e32fe123b552a2395f1de76f3f6bc7fb9f2960e99
local               aa6b0314fd8d750ebb854293bf6d84e07994e0eeb4649db4842d4c2bd98dc4c1
local               b1937e522d501654ce6c5caa8868771ea7e7c881f1b77db4c95f060a83992a63
local               d1160ea2fa75a94dd733aa0beeb39ea7799eace8e48129675da01758c45ce629
local               d26748726152cee2203d465464fdfb2007636e6f7526442615727face42af599
local               e39128cab32a7a0be1c70f4c4903970920f37681915be5ed0d5cf1a8cd09d35e
local               ffa67544e4a5a5675e9458b977406595b83fc3fdfaa57f85542969758e006ab0


docker system df


$ docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              16                  7                   14.08GB             9.316GB (66%)
Containers          22                  0                   3.037GB             3.037GB (100%)
Local Volumes       12                  12                  0B                  0B
Build Cache         0                   0                   0B                  0B


Kawue commented 4 years ago

OK, please call docker system prune; this should reclaim a lot of space in your Docker cache.

If you start the Docker Desktop application, it should appear as a small icon in your task bar. Once it becomes ready it notifies you, and you should then be able to use the PowerShell.

anani-a-missinou commented 4 years ago

The small icon is present in the task bar, but I get the same error with PowerShell.

docker system prune


$ docker system prune
WARNING! This will remove:

Are you sure you want to continue? [y/N] y
Deleted Containers:
b1a75be2117a96717197d39d3dd2ac6e168a02c8daea45bd396b47400cc78844
15ddd360b54399ef362d47a836ab986a4c90112308d2e564503867194c90b879
c2d705466eae9d7648bf899547e425c804c75af45b80676a8320e4d68298479f
7843c4fc6f36271594b5bdb69558b91ff3265d18d4db1482ab0cd792cc5317ac
602617f6b6268478f971a7f6f5d7f043a054b3ee27212e21e9182e2ed231d1b2
73f2f79c715e5f52d7a8fbc2eaf42048f9805c4167ed7cc0e12729eb0a241835
cf786423b0c13b946fccf97df7a7321d2ed1b64ecf04b38e39089f8106aec704
5887326bd7b1a8a21e567fb73dc208faaf5ba17e9c4e518f8540340573a3cd60
69d60814fd3b001af2a2ffa996dbfff2e2a39685edc8fe3ed46e1ab833827197
fdf0ab9d7411c8213c3318d065e8185a490bb5f5be98f2179747b0cf8884b79e
6529c126eeabf5acc45968771f84d06f5a776c947fbe213a70455f4c31736b08
9cf3076a06bd239d71778f8532d3eb2c9828ae863fed9115dc0103faebcad05c
1399abd840666379e781a034fe6241e6c4809ba16e1e0d9e4df97ab0012f507c
c055e2cc91128502842d0299d6d8a5134e341420a659a6f80456bddc0f3275e0
3dd18d4fb2d6fcb1e185ee005d52d4c11c0fabbb0892cda89d167f394c4e2d3e
84fabc787ed6d65e36bc1982d99489a5ea78402f994c62a5c78c246c819a94ba
929fd9532a183999ee4babc41ba56e2d51c1a5695968daf58a1206c607940963
f71a4c57a1d5d03c366e84cf09609febc4cb1e71873712eba1f8838ad0901392
1d794b6a7ce5d96c3e73675306bd7687425447309a8a285a5f8a0d8c8d94ec51
27862ff19931ee62306a5910da43efaa018402fa4bb33d021cbd431650c835f6
a6981cadbd26b2b82b1a23e833b880da3826384aeb533a7838e3322f9844b219
6647ca66e9902640922bfeb9ae7327bd93e0f99bbc5216a12813f3afcb711a90

Deleted Networks:
grine-v2-dev_apiNetwork

Deleted Images:
deleted: sha256:b0d3b0e6f51790a640b46859aa277aac89a8b3af44e9ee3738573d5ec450a11a
deleted: sha256:44f2fc6c484c9b460bab529aa1f9feba2a3470c23506206a253f00c614850420
deleted: sha256:1355e4523557fe3659a6058d13c7847eb85d248c1016401a902b6a9f7a9e1286
deleted: sha256:7219ff52d5b56b17acc178986a1a0954fc9722cd97d5e0b4b36d4e7123cff8dc
deleted: sha256:a0625089aa8e6333c4d93a40970e46a8e11542e1e911c40535c1640b980e3df6
deleted: sha256:e51d21427054b387f81453db833889995df380bb8a99e6d140af2312bb3a6c77
deleted: sha256:aa73e9af30cbe4b6357e8ac4c3487dbc6c583735d43bfe5cf50cfccff475cf4d
deleted: sha256:7eab1cae673c3d94d34323107e4cd0fdd5f8c256cd5e71ab0fdce09424fb002a
deleted: sha256:887c5502cbb9f02951bca8d7e68830793b9472114c64c082e8d8f6a6f4d68a6e
deleted: sha256:08db0086247070a2e555370f02874fe21c6b387af4826f21c7919899858315bd
deleted: sha256:71cd28ecd0ce7ca6cbd625ecb62ef098a3358b7e1194683e66ce810b2adb3df9
deleted: sha256:bd0bdf77428517e270006650a1d6b2ef03aa857d83e5bfcd36088797c0c27ae8
deleted: sha256:54c8c0368945ae68b9cbb8ad119847316969d1533392a43b68023832878c2002
deleted: sha256:63ebcab08d69796dba845e7521ebae3a02f22b7f66a59f6b077f6bc560d58175
deleted: sha256:c25cf5dc3ca8001e115ab8ccb1cd5c4891fa91a223587e6e20372a129c23b87f
deleted: sha256:f0ab2a7868c8a772bb1497916f322717c3dcdc586f94686c751f1e203b7fa6b6
deleted: sha256:71424517156e9460df1818697063d6ec719dc5f77aef6d2a59546e41d6f3a2d0
deleted: sha256:6d1a2de1646cc22938881a7af261581b698c9cb21de1a307d25c9125bb78bdfe
deleted: sha256:4d6defa24fbccf586bcb52aa2238bc9f57a2a3185059c0f41d81a29e99389cd3
deleted: sha256:8b0b928b7faf226984c77574686163c3b8780695d449e64d0c8c4611305b2a64
deleted: sha256:297d10eee2d07b2273017bc7bc99ff14b9ef2f87ad3320fb4c4d978ca2b38bfe
deleted: sha256:436e4ddc6764d3b0e492cff485098aa626c53a3b3d3f21f890fd985e6e5ca890
deleted: sha256:69fc7b34b2b7faced444213a68a752704ae0b8655f5413f0f01dd636727e0b34
deleted: sha256:d6c273beca9602df8cc77ddd3fba12db5ef6ec9dbf6c7399a7bfc5a45a2746d4
deleted: sha256:5e12dbcf4f34ac5cb7922c88475aa75cf88caaad5730ca02b354ef617617466d
deleted: sha256:96afcfe2d744fa9710a752303d8e918d9101e86a9db95f312e1256c6548a878d
deleted: sha256:9348cce04bb66843b23c8651348eded025d6684f2f059337d0abd802dcba8d3d
deleted: sha256:d82274228d77c9af1a974dcd04ee0ec33a98db7099170024a11cb61d39ff30bb

Total reclaimed space: 3.555GB


docker system df


$ docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              11                  0                   13.56GB             13.56GB (100%)
Containers          0                   0                   0B                  0B
Local Volumes       12                  0                   0B                  0B
Build Cache         0                   0                   0B                  0B


anani-a-missinou commented 4 years ago

The docker build still does not work, with the same error.

docker build -t grine/msicommunitydetection .


docker build -t grine/msicommunitydetection .
Sending build context to Docker daemon  360.4kB
Step 1/8 : FROM continuumio/anaconda3
 ---> bdb4a7e92a49
Step 2/8 : COPY environment.yml .
 ---> 693114ba6714
Step 3/8 : COPY main.py .
 ---> 9001bf717c78
Step 4/8 : COPY /kode ./kode
 ---> 4581f4d06bec
Step 5/8 : RUN conda env create -f environment.yml
 ---> Running in cf9e628cc1b9
Collecting package metadata (repodata.json): ...working... failed

<<< ERROR REPORT >>>

Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/urllib3/response.py", line 388, in _decode
    data = self._decoder.decompress(data)
  File "/opt/conda/lib/python3.7/site-packages/urllib3/response.py", line 85, in decompress
    ret += self._obj.decompress(data)
zlib.error: Error -3 while decompressing data: incorrect data check

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/requests/models.py", line 750, in generate
    for chunk in self.raw.stream(chunk_size, decode_content=True):
  File "/opt/conda/lib/python3.7/site-packages/urllib3/response.py", line 560, in stream
    for line in self.read_chunked(amt, decode_content=decode_content):
  File "/opt/conda/lib/python3.7/site-packages/urllib3/response.py", line 757, in read_chunked
    chunk, decode_content=decode_content, flush_decoder=False
  File "/opt/conda/lib/python3.7/site-packages/urllib3/response.py", line 394, in _decode
    e,
urllib3.exceptions.DecodeError: ('Received response with content-encoding: gzip, but failed to decode it.', error('Error -3 while decompressing data: incorrect data check'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/conda/exceptions.py", line 1079, in __call__
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/conda_env/cli/main.py", line 80, in do_call
    exit_code = getattr(module, func_name)(args, parser)
  File "/opt/conda/lib/python3.7/site-packages/conda_env/cli/main_create.py", line 111, in execute
    result[installer_type] = installer.install(prefix, pkg_specs, args, env)
  File "/opt/conda/lib/python3.7/site-packages/conda_env/installers/conda.py", line 32, in install
    prune=getattr(args, 'prune', False), update_modifier=UpdateModifier.FREEZE_INSTALLED)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 117, in solve_for_transaction
    should_retry_solve)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 158, in solve_for_diff
    force_remove, should_retry_solve)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 262, in solve_final_state
    ssc = self._collect_all_metadata(ssc)
  File "/opt/conda/lib/python3.7/site-packages/conda/common/io.py", line 88, in decorated
    return f(*args, **kwds)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 415, in _collect_all_metadata
    index, r = self._prepare(prepared_specs)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 1011, in _prepare
    self.subdirs, prepared_specs, self._repodata_fn)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/index.py", line 228, in get_reduced_index
    repodata_fn=repodata_fn)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 105, in query_all
    result = tuple(concat(executor.map(subdir_query, channel_urls)))
  File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 598, in result_iterator
    yield fs.pop().result()
  File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 435, in result
    return self.__get_result()
  File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/opt/conda/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 98, in <lambda>
    package_ref_or_match_spec))
  File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 110, in query
    self.load()
  File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 174, in load
    _internal_state = self._load()
  File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 248, in _load
    repodata_fn=self.repodata_fn)
  File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 485, in fetch_repodata_remote_request
    timeout=timeout)
  File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 546, in get
    return self.request('GET', url, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 686, in send
    r.content
  File "/opt/conda/lib/python3.7/site-packages/requests/models.py", line 828, in content
    self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b''
  File "/opt/conda/lib/python3.7/site-packages/requests/models.py", line 755, in generate
    raise ContentDecodingError(e)
requests.exceptions.ContentDecodingError: ('Received response with content-encoding: gzip, but failed to decode it.', error('Error -3 while decompressing data: incorrect data check'))

$ /opt/conda/bin/conda-env create -f environment.yml

environment variables:
    CIO_TEST=
    CONDA_AUTO_UPDATE_CONDA=false
    CONDA_ROOT=/opt/conda
    PATH=/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    REQUESTS_CA_BUNDLE=
    SSL_CERT_FILE=

 active environment : None
   user config file : /root/.condarc

populated config files :
          conda version : 4.8.2
    conda-build version : 3.18.11
         python version : 3.7.6.final.0
       virtual packages : __glibc=2.28
       base environment : /opt/conda  (writable)
           channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/r/linux-64
                          https://repo.anaconda.com/pkgs/r/noarch
          package cache : /opt/conda/pkgs
                          /root/.conda/pkgs
       envs directories : /opt/conda/envs
                          /root/.conda/envs
               platform : linux-64
             user-agent : conda/4.8.2 requests/2.22.0 CPython/3.7.6 Linux/4.19.130-boot2docker debian/10 glibc/2.28
                UID:GID : 0:0
             netrc file : None
           offline mode : False

An unexpected error has occurred. Conda has prepared the above report.

The command '/bin/sh -c conda env create -f environment.yml' returned a non-zero code: 1


docker system df


docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              12                  1                   13.56GB             13.56GB (99%)
Containers          1                   0                   61.64MB             61.64MB (100%)
Local Volumes       12                  0                   0B                  0B
Build Cache


docker system df -v


Kawue commented 4 years ago

OK, can you try to close Docker Desktop again, start it as admin, and start the PowerShell as admin as well? Otherwise, you can try the Docker shell again after the system prune. If none of this works, this link gives plenty of ideas on how to resolve the PowerShell error: https://stackoverflow.com/questions/40459280/docker-cannot-start-on-windows

Kawue commented 4 years ago

OK, currently I have no idea why this happens. Could you open a post on Stack Overflow? This seems to be a very unusual problem.

Kawue commented 4 years ago

I have another idea. I do not know how to fix this Docker issue, but you could try to run the pipeline without Docker.

  1. Install Anaconda.
  2. Navigate into the msi-community-detection folder.
  3. conda env create -f .\environment.yml
  4. conda activate grine
  5. python main.py -d D:/Parts/DATA/Metabolomics/rawData/METAPHOR/MALDI-MSI/Image_File/GRINE/output/peakpicking/dni-14jai2_pos100-700_181009-root_mean_square_autopicked_deisotroping_treshold_0.04.h5 -p D:/Parts/DATA/Metabolomics/rawData/METAPHOR/MALDI-MSI/Image_File/GRINE/output/peakpicking/msi-community-detection/someName.json -cm louvain -sm pearson -tm statistics -tp mean std 1 -dr umap
anani-a-missinou commented 4 years ago

Hi dear Karsten,

The installation of grine on Windows 10 returns an error with python-igraph==0.7.1.post6.

Following https://anaconda.org/marufr/python-igraph, I added "marufr" as a channel in a new *.yml file named "environment_windows.yml".
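The actual contents of environment_windows.yml are truncated below, so as a hypothetical sketch only, adding such a channel to a conda environment file looks like this (the dependency entries here are placeholders, not the real environment.yml):

```yaml
name: grine
channels:
  - marufr      # provides python-igraph builds for Windows
  - defaults
dependencies:
  - python=3.7
  # ...remaining entries copied unchanged from the original environment.yml...
```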

PS D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection> conda env create -f .\environment.yml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

$ cp environment.yml environment_windows.yml
$ vi environment_windows.yml

$ more environment.yml
name: grine
channels:

$ more environment_windows.yml
name: grine
channels:


After this, the installation is currently running, but it is extremely slow. I'll keep you informed; at first glance, I think this is because conda found incompatible package conflicts.

PS D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection> conda env create -f .\environment_windows.yml
Collecting package metadata (repodata.json): done
Solving environment: Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.
Examining vc:  2%| | 3/142
Comparing specs that have this dependency:  7%| | 4/61 [00:03<00:53]
Comparing specs that have this dependency: 67%| | 41/61 [02:58<01:26]


good lunch

Kawue commented 4 years ago

I forgot about that. Can you please delete the igraph line from the environment.yml and install it from here: https://www.lfd.uci.edu/~gohlke/pythonlibs/#python-igraph

Windows has some problems compiling this package.

So:

  1. Remove the igraph line
  2. Install the environment
  3. Activate the environment
  4. Install igraph
anani-a-missinou commented 4 years ago

The grine environment is now installed. I tried to install python_igraph-0.7.1.post6 using pip and conda, but it doesn't work.


PS D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection> conda activate
PS D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection> pip install python_igraph-0.7.1.post6-cp38-cp38-win_amd64.whl
ERROR: python_igraph-0.7.1.post6-cp38-cp38-win_amd64.whl is not a supported wheel on this platform.
PS D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection> conda install python_igraph-0.7.1.post6-cp38-cp38-win_amd64.whl
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.

PackagesNotFoundError: The following packages are not available from current channels:

Current channels:

To search for alternate channels that may provide the conda package you're looking for, navigate to

    https://anaconda.org

and use the search bar at the top of the page.

PS D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection>


Kawue commented 4 years ago

I think you have to write conda activate grine. Furthermore, this environment uses Python 3.7.3, which means you have to download python_igraph-0.7.1.post6-cp37-cp37m-win_amd64.whl and install that whl via pip install whl-file.
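The earlier "not a supported wheel on this platform" error is exactly this mismatch: a cp38 wheel on a cp37 interpreter. As a side note (not from the thread), you can print the tag your active interpreter expects:

```python
import sys

# A "cpXY" wheel only installs on CPython X.Y, so the grine environment
# (Python 3.7) needs a cp37 wheel, not the cp38 one tried earlier.
tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(tag)  # e.g. cp37 on Python 3.7
```

Running pip debug --verbose in the activated environment also lists all tags that pip will accept.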

anani-a-missinou commented 4 years ago

OK, it installed successfully, but there is another error with the umap package.


PS D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection> python main.py -d D:/Parts/DATA/Metabolomics/rawData/METAPHOR/MALDI-MSI/Image_File/GRINE/output/peakpicking/dni-14jai2_pos100-700_181009-root_mean_square_autopicked_deisotroping_treshold_0.04.h5 -p D:/Parts/DATA/Metabolomics/rawData/METAPHOR/MALDI-MSI/Image_File/GRINE/output/peakpicking/msi-community-detection/someName.json -cm louvain -sm pearson -tm statistics -tp mean std 1 -dr umap
Traceback (most recent call last):
  File "main.py", line 1, in <module>
    from kode.workflow import workflow_exec
  File "D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection\kode\workflow.py", line 12, in <module>
    from kode.msi_dimension_reducer import *
  File "D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection\kode\msi_dimension_reducer.py", line 5, in <module>
    import umap as uumap
ModuleNotFoundError: No module named 'umap'


Kawue commented 4 years ago

That's weird, umap is within the environment file. What happens when you call `pip install umap-learn` with the grine environment activated?

Kawue commented 4 years ago

Ohhh, I just read "PS" — are you running all of this within the PowerShell?

anani-a-missinou commented 4 years ago

```
PS D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection> pip install umap-learn
Collecting umap-learn
Requirement already satisfied: numpy>=1.17 in c:\users\amissinou\appdata\local\continuum\anaconda\lib\site-packages (from umap-learn) (1.19.1)
Requirement already satisfied: scipy>=1.3.1 in c:\users\amissinou\appdata\local\continuum\anaconda\lib\site-packages (from umap-learn) (1.3.1)
Collecting numba!=0.47,>=0.46 (from umap-learn)
  Using cached https://files.pythonhosted.org/packages/3c/92/1b19ce51458aab0bf9e5306b05b91c6aee71fef29ac338b85864e2e0e54e/numba-0.50.1-cp37-cp37m-win_amd64.whl
Requirement already satisfied: scikit-learn>=0.20 in c:\users\amissinou\appdata\local\continuum\anaconda\lib\site-packages (from umap-learn) (0.21.3)
Collecting llvmlite<0.34,>=0.33.0.dev0 (from numba!=0.47,>=0.46->umap-learn)
  Using cached https://files.pythonhosted.org/packages/5c/1e/0844f91499004b3510878f10e9c7cafc7d34a89bd74a3667974d91745bf0/llvmlite-0.33.0-cp37-cp37m-win_amd64.whl
Requirement already satisfied: setuptools in c:\users\amissinou\appdata\local\continuum\anaconda\lib\site-packages (from numba!=0.47,>=0.46->umap-learn) (42.0.2)
Requirement already satisfied: joblib>=0.11 in c:\users\amissinou\appdata\local\continuum\anaconda\lib\site-packages (from scikit-learn>=0.20->umap-learn) (0.13.2)
Installing collected packages: llvmlite, numba, umap-learn
  Found existing installation: llvmlite 0.29.0
ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
```

Kawue commented 4 years ago

Environment activation seems to be buggy with the PowerShell. Either use the Anaconda Prompt or the CMD to call `conda activate grine`, otherwise you will automatically use the Python outside of the environment.
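To confirm whether the activation actually took effect, you can inspect the interpreter from within Python (a generic check; the environment name is whatever `conda activate` set):

```python
# After a successful `conda activate grine`, CONDA_DEFAULT_ENV should be
# "grine" and sys.prefix should point into ...\envs\grine.
import os
import sys

print(os.environ.get("CONDA_DEFAULT_ENV"))  # None if no env is active
print(sys.prefix)                           # root of the running interpreter
```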

Kawue commented 4 years ago

You may also have to reinstall the whl. It's likely that you installed it outside of the environment if you used the PowerShell.

anani-a-missinou commented 4 years ago

I closed the Anaconda PowerShell and opened the CMD as admin, but it can't even activate the grine environment.


```
C:\WINDOWS\system32>conda activate grine
Could not find conda environment: grine
You can list all discoverable environments with `conda info --envs`.

C:\WINDOWS\system32>conda info --envs
# conda environments:
#
base          *  C:\ProgramData\Anaconda3
                 C:\Users\amissinou\AppData\Local\Continuum\anaconda
                 C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\DeepBGC
                 C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine
                 C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\msiparse
                 C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\oetoolkits
                 C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\provim

C:\WINDOWS\system32>activate grine
Could not find conda environment: grine
You can list all discoverable environments with `conda info --envs`.
```


anani-a-missinou commented 4 years ago

Activating the grine environment in CMD:

```
C:\WINDOWS\system32>conda activate C:/Users/amissinou/AppData/Local/Continuum/anaconda/envs/grine
```

Installing python_igraph-0.7.1.post6:

```
(grine) C:\WINDOWS\system32>cd D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection
(grine) D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection>pip install python_igraph-0.7.1.post6-cp37-cp37m-win_amd64.whl
Processing d:\parts\data\metabolomics\rawdata\metaphor\maldi-msi\image_file\grine\msi-community-detection\python_igraph-0.7.1.post6-cp37-cp37m-win_amd64.whl
Installing collected packages: python-igraph
Successfully installed python-igraph-0.7.1.post6
```

Installing umap-learn:

```
(grine) D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection>pip install umap-learn
Requirement already satisfied: umap-learn in c:\users\amissinou\appdata\local\continuum\anaconda\envs\grine\lib\site-packages (0.3.9)
Requirement already satisfied: scipy>=0.19 in c:\users\amissinou\appdata\local\continuum\anaconda\envs\grine\lib\site-packages (from umap-learn) (1.3.1)
Requirement already satisfied: numpy>=1.13 in c:\users\amissinou\appdata\local\continuum\anaconda\envs\grine\lib\site-packages (from umap-learn) (1.17.3)
Requirement already satisfied: numba>=0.37 in c:\users\amissinou\appdata\local\continuum\anaconda\envs\grine\lib\site-packages (from umap-learn) (0.50.1)
Requirement already satisfied: scikit-learn>=0.16 in c:\users\amissinou\appdata\local\continuum\anaconda\envs\grine\lib\site-packages (from umap-learn) (0.21.3)
Requirement already satisfied: setuptools in c:\users\amissinou\appdata\local\continuum\anaconda\envs\grine\lib\site-packages (from numba>=0.37->umap-learn) (41.6.0.post20191101)
Requirement already satisfied: llvmlite<0.34,>=0.33.0.dev0 in c:\users\amissinou\appdata\local\continuum\anaconda\envs\grine\lib\site-packages (from numba>=0.37->umap-learn) (0.33.0)
Requirement already satisfied: joblib>=0.11 in c:\users\amissinou\appdata\local\continuum\anaconda\envs\grine\lib\site-packages (from scikit-learn>=0.16->umap-learn) (0.14.0)
```

Running

```
(grine) D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection>python main.py -d D:/Parts/DATA/Metabolomics/rawData/M ETAPHOR/MALDI-MSI/Image_File/GRINE/output/peakpicking/dni-14jai2_pos100-700_181009-root_mean_square_autopicked_deisotroping_treshold_0.04.h5 -p D:/Parts/DATA/Metabolomics/rawData/METAPHOR/MALDI-MSI/Image_File/GRINE/output/peakpicking/msi-community-detection/someName.json -cm louvain -sm pearson -tm statistics -tp mean std 1 -dr umap
usage: main.py [-h] -d DATAPATH -p SAVEPATH
               -sm {pearson,cosine,euclidean,euclidean2}
               -cm {eigenvector,louvain}
               -tm {pca,statistics,modularity_weighted,modularity_unweighted}
               [-tp TRANSFORMATIONPARAMS [TRANSFORMATIONPARAMS ...]]
               [-dr {pca,nmf,umap,tsne,lsa,ica,kpca,lda,lle,mds,isomap,spectralembedding}]
main.py: error: unrecognized arguments: ETAPHOR/MALDI-MSI/Image_File/GRINE/output/peakpicking/dni-14jai2_pos100-700_181009-root_mean_square_autopicked_deisotroping_treshold_0.04.h5
```
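The `unrecognized arguments` error here comes from the stray space inside the pasted path (`M ETAPHOR`): the shell splits unquoted arguments on whitespace, so `argparse` receives the path as two separate tokens. A small illustration with hypothetical paths:

```python
# Demonstrate why a space in an unquoted path breaks argument parsing:
# the shell hands the program two tokens instead of one.
import shlex

cmd = "main.py -d D:/data/M ETAPHOR/file.h5"
print(shlex.split(cmd))
# -> ['main.py', '-d', 'D:/data/M', 'ETAPHOR/file.h5']  (path split in two)

quoted = 'main.py -d "D:/data/M ETAPHOR/file.h5"'
print(shlex.split(quoted))
# -> ['main.py', '-d', 'D:/data/M ETAPHOR/file.h5']  (quoting keeps it whole)
```

Retyping the command on one line, or quoting the path, avoids the problem.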

anani-a-missinou commented 4 years ago

msi-community-detection ran successfully.

It generated 4 files:


```
(grine) D:\Parts\DATA\Metabolomics\rawData\METAPHOR\MALDI-MSI\Image_File\GRINE\msi-community-detection>python main.py -d ..\output\peakpicking\dni-14jai2_pos100-700_181009-root_mean_square_autopicked_deisotroping_treshold_0.04.h5 -p ..\output\msi-community-detection\someName.json -cm louvain -sm pearson -tm statistics -tp mean std 1 -dr umap
Pearson Matrix Minimum: -0.06543602410500905
Pearson Matrix Maximum: 0.6850798026815155
Similarity Matrix Calculation Done!
Chosen threshold: 0.434909
Adjecency Matrix Calculation Done!
```

Base Graph: IGRAPH U-W- 66 332 --

Number of Communities: 9

```
Modularity: 0.39925515314269117
Community Calculation Done!
The given Level is too deep. The maximum is: 0
list index out of range
list index out of range
list index out of range
list index out of range
list index out of range
list index out of range
list index out of range
list index out of range
list index out of range
C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\umap\rp_tree.py:450: NumbaWarning: Compilation is falling back to object mode WITH looplifting enabled because Function "make_euclidean_tree" failed type inference due to: Invalid use of recursive(type(CPUDispatcher(<function make_euclidean_tree at 0x0000027321344840>))) with parameters (array(float32, 2d, C), array(int64, 1d, C), array(int64, 1d, C), int64)
No type info available for recursive(type(CPUDispatcher(<function make_euclidean_tree at 0x0000027321344840>))) as a callable.
During: resolving callee type: recursive(type(CPUDispatcher(<function make_euclidean_tree at 0x0000027321344840>)))
During: typing of call at C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\umap\rp_tree.py (457)

File "C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\umap\rp_tree.py", line 457:
def make_euclidean_tree(data, indices, rng_state, leaf_size=30):

        left_node = make_euclidean_tree(data, left_indices, rng_state, leaf_size)
        ^

  @numba.jit()
C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\numba\core\object_mode_passes.py:178: NumbaWarning: Function "make_euclidean_tree" was compiled in object mode without forceobj=True.

File "C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\umap\rp_tree.py", line 451:
@numba.jit()
def make_euclidean_tree(data, indices, rng_state, leaf_size=30):
^

  state.func_ir.loc))
C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\numba\core\object_mode_passes.py:188: NumbaDeprecationWarning: Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.

For more information visit http://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit

File "C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\umap\rp_tree.py", line 451:
@numba.jit()
def make_euclidean_tree(data, indices, rng_state, leaf_size=30):
^

  state.func_ir.loc))
C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\umap\nndescent.py:92: NumbaPerformanceWarning: The keyword argument 'parallel=True' was specified but no transformation for parallel execution was possible.

To find out why, try turning on parallel diagnostics, see http://numba.pydata.org/numba-doc/latest/user/parallel.html#diagnostics for help.

File "C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\umap\utils.py", line 409:
@numba.njit(parallel=True)
def build_candidates(current_graph, n_vertices, n_neighbors, max_candidates, rng_state):
^

  current_graph, n_vertices, n_neighbors, max_candidates, rng_state
C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\numba\core\typed_passes.py:314: NumbaPerformanceWarning: The keyword argument 'parallel=True' was specified but no transformation for parallel execution was possible.

To find out why, try turning on parallel diagnostics, see http://numba.pydata.org/numba-doc/latest/user/parallel.html#diagnostics for help.

File "C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\umap\nndescent.py", line 47:
@numba.njit(parallel=True)
def nn_descent(
^

  state.func_ir.loc))
C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\umap\umap_.py:349: NumbaWarning: Compilation is falling back to object mode WITH looplifting enabled because Function "fuzzy_simplicial_set" failed type inference due to: Untyped global name 'nearest_neighbors': cannot determine Numba type of <class 'function'>

File "C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\umap\umap_.py", line 467:
def fuzzy_simplicial_set(

    if knn_indices is None or knn_dists is None:
        knn_indices, knn_dists, _ = nearest_neighbors(
        ^

  @numba.jit()
C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\numba\core\object_mode_passes.py:178: NumbaWarning: Function "fuzzy_simplicial_set" was compiled in object mode without forceobj=True.

File "C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\umap\umap_.py", line 350:
@numba.jit()
def fuzzy_simplicial_set(
^

  state.func_ir.loc))
C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\numba\core\object_mode_passes.py:188: NumbaDeprecationWarning: Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.

For more information visit http://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit

File "C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\umap\umap_.py", line 350:
@numba.jit()
def fuzzy_simplicial_set(
^

  state.func_ir.loc))
C:\Users\amissinou\AppData\Local\Continuum\anaconda\envs\grine\lib\site-packages\tables\path.py:155: NaturalNameWarning: object name is not a valid Python identifier: 'dimreduce-dni-14jai2_pos100-700_181009-root_mean_square-umap'; it does not match the pattern ``^[a-zA-Z][a-zA-Z0-9_]*$``; you will not be able to use natural naming to access this object; using ``getattr()`` will still work, though
  check_attribute_name(name)
```
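The `Modularity: 0.399...` line above reports the quality of the Louvain partition. As a self-contained illustration of the quantity being optimized (a toy graph of two triangles joined by a bridge edge — not the thread's data), Newman modularity can be computed directly:

```python
# Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * delta(c_i, c_j)
# for an unweighted, undirected graph without self-loops.
def modularity(edges, communities):
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    comm = {n: c for c, nodes in enumerate(communities) for n in nodes}
    edge_set = set(edges)
    q = 0.0
    for u in deg:          # sum over ordered node pairs in the same community
        for v in deg:
            if comm[u] != comm[v]:
                continue
            a = 1.0 if (u, v) in edge_set or (v, u) in edge_set else 0.0
            q += a - deg[u] * deg[v] / (2.0 * m)
    return q / (2.0 * m)

# Two triangles connected by one bridge edge, partitioned into the triangles.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))  # -> 0.35714285714285715
```

Louvain greedily merges communities to increase exactly this score, which is why a larger reported modularity indicates a more pronounced community structure.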


Kawue commented 4 years ago

ok, good. Please rename the json to `dni-14jai2_pos100-700_181009-root_mean_square.json`. Thereafter, copy the files into their respective folders in grine-v2/backend/data:

Then navigate into grine-v2 and call `docker-compose build` and `docker-compose up -d` according to the repository readme. Access the tool in your browser at the URL: localhost:8080

To stop Docker, call: `docker-compose down`

anani-a-missinou commented 4 years ago

Hi dear Karsten,

sorry for the silence. I had a technical breakdown with my computer.

I followed your instructions, but I am unable to explore my data. I only see the data from your article at http://localhost:8080/#/ after `docker-compose build` and `docker-compose up -d`.

To confirm: should it be the original, large file `dni-14jai2_pos100-700_181009-root_mean_square.h5`, or the size-reduced file from the peak picking and deisotoping options, `dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping.h5`?

Also, I notice that the file `adjacency-matrix-dni-14jai2_pos100-700_181009-root_mean_square.npy` is not taken as input. Is this normal?

*(screenshot: Patho-COBI-GRINE_test)*

Kawue commented 4 years ago

Oh, sorry I forgot a step. Ok, so:

  1. You have to use the smaller file.

  2. Please rename either the similarity-matrix, json, and dimreduce files, or the h5 file. They require the same name; I didn't have the time to fix this. So either:

    similarity-matrix-dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping.npy
    dimreduce-dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping-umap.h5
    dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping.json

    or you rename dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping.h5 into dni-14jai2_pos100-700_181009-root_mean_square.h5

  3. The adjacency-matrix is currently not required, that's correct.

  4. You have to navigate into grine-v2/backend/, open uwsgi.ini, and change the line `pyargv = -j real_data.json` into `pyargv = -j dni-14jai2_pos100-700_181009-root_mean_square.json` (or the other name, depending on what you chose in step 2).

There are currently some flaws because these tools (GRINE, ProViM), as well as my others (VAIDRA, WHIDE, QUIMBI), were all developed independently. So the interplay between all of them is currently not the most fluid one.
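The uwsgi.ini change from step 4 would look roughly like this (a sketch based on the file shown later in this thread; only the `pyargv` line changes, everything else stays as shipped):

```ini
[uwsgi]
module = api:app
; ... other settings unchanged ...
; point the backend at the renamed dataset JSON
pyargv = -j dni-14jai2_pos100-700_181009-root_mean_square.json
```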

anani-a-missinou commented 4 years ago

I have an "Error while loading graph data from api."

```
$ pwd
/d/Parts/DATA/Metabolomics/rawData/METAPHOR/MALDI-MSI/Image_File/GRINE/grine-v2-dev/backend/data

$ ls -l dataset/
total 13M
-rw-r--r-- 1 amissinou 1049089 2,3M mars  21 14:06 barley101GrineV2.h5
-rw-r--r-- 1 amissinou 1049089 261K mars  21 14:06 cluster-nodetrix.h5
-rw-r--r-- 1 amissinou 1049089 9,7M juil. 20 17:49 dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping.h5
-rw-r--r-- 1 amissinou 1049089 261K mars  21 14:06 hetero-nodetrix.h5
-rw-r--r-- 1 amissinou 1049089 261K mars  21 14:06 homo-nodetrix.h5

$ ls -l dimreduce/
total 384
-rw-r--r-- 1 amissinou 1049089 111174 mars  21 14:06 dimreduce-barley101GrineV2.h5
-rw-r--r-- 1 amissinou 1049089 275558 août   5 14:53 dimreduce-dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping-umap.h5

$ ls -l json/
total 276
-rw-r--r-- 1 amissinou 1049089  50186 août   5 14:53 dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping.json
-rw-r--r-- 1 amissinou 1049089  14520 mars  21 14:06 nodetrix-example.json
-rw-r--r-- 1 amissinou 1049089 211963 mars  21 14:06 real_data.json

$ ls -l matrix/
total 128
-rw-r--r-- 1 amissinou 1049089  81736 mars  21 14:06 similarity-matrix-barley101GrineV2.npy
-rw-r--r-- 1 amissinou 1049089    776 mars  21 14:06 similarity-matrix-cluster-nodetrix.npy
-rw-r--r-- 1 amissinou 1049089  34976 août   5 14:52 similarity-matrix-dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping.npy
-rw-r--r-- 1 amissinou 1049089    776 mars  21 14:06 similarity-matrix-hetero-nodetrix.npy
-rw-r--r-- 1 amissinou 1049089    776 mars  21 14:06 similarity-matrix-homo-nodetrix.npy

$ more ../uwsgi.ini
[uwsgi]
module = api:app
uid = www-data
gid = www-data
master = true
processes = 2
socket = /tmp/uwsgi.socket
chmod-sock = 664
vacuum = true
die-on-term = true
; uncomment following line to load the barley dataset
pyargv = -j dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping.json
; uncomment following line to load the different datasets
; pyargv = -j MK_202_3.json
; pyargv = -j vibrissae_163_6.json
```

*(screenshots: localhost_8080_error, Patho-COBI-GRINE_error)*

Kawue commented 4 years ago

Could you mail the json file to me?

anani-a-missinou commented 4 years ago

Hi dear Karsten,

I have sent you the JSON file. Thanks.

Kawue commented 4 years ago

ok, could you rename all the files from `dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping` to `dni-14jai2_pos100-700_181009-root_mean_square`? I know this is far from optimal, but it is due to an error in ProViM which needs to be resolved.

To explain the problem: You renamed the dataset from dni-14jai2_pos100-700_181009-root_mean_square to dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping. However, there is a name encoded within the data set, which is not adjusted properly. So inside of the data set, the name is still dni-14jai2_pos100-700_181009-root_mean_square. This causes the problems.

So for every file used in the backend of GRINE, `dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping` should be exchanged with `dni-14jai2_pos100-700_181009-root_mean_square`. So:

similarity-matrix-dni-14jai2_pos100-700_181009-root_mean_square.npy
dimreduce-dni-14jai2_pos100-700_181009-root_mean_square-umap.h5
dni-14jai2_pos100-700_181009-root_mean_square.json
dni-14jai2_pos100-700_181009-root_mean_square.h5

Just be careful not to overwrite your original data set with copy paste actions. However, if you only use these files in the backend of GRINE this should be fine.

Then it should work. I have to adjust this problem in ProViM in the future.
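The renaming above can also be scripted instead of done by hand; a minimal sketch (demonstrated on a throwaway temp directory so nothing real is touched — point `root` at a *copy* of the backend data folder if you adapt it):

```python
# Sketch: rename every GRINE backend file containing the long peak-picked
# name so that it matches the base name encoded inside the dataset.
import pathlib
import tempfile

OLD = "dni-14jai2_pos100-700_181009-root_mean_square0.04.04ing_deisotroping"
NEW = "dni-14jai2_pos100-700_181009-root_mean_square"

def rename_dataset(root, old, new):
    # Materialize the listing first, then rename file by file.
    for path in sorted(root.rglob("*")):
        if old in path.name:
            path.rename(path.with_name(path.name.replace(old, new)))
    return sorted(p.name for p in root.rglob("*"))

with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    for name in (f"similarity-matrix-{OLD}.npy", f"dimreduce-{OLD}-umap.h5",
                 f"{OLD}.json", f"{OLD}.h5"):
        (root / name).touch()          # stand-ins for the real backend files
    print(rename_dataset(root, OLD, NEW))
```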

anani-a-missinou commented 4 years ago

It worked!

I am hopeful about the results and will get back to you. A first observation: there are fewer molecular communities than I expected. There are certainly options/parameters to test in order to improve these results.

But this may be normal, because it is a sample of the non-infected stem. I will run it on an infected stem sample, in which we have localized accumulations of metabolites and the metabolic response to infection is spatialized relative to the pathogen's point of attack.

Thanks! *(screenshot: Patho-CIBO-GRINE_test1)*

Kawue commented 4 years ago

Finally! Remember that Louvain is a hierarchical method. You will be able to unfold each vertex by Ctrl+left click until you reach the lowest hierarchy, where each vertex is a single mass channel image. The graph itself on the lowest hierarchy may reveal clustered structures that were not detected by the algorithm. To check that, you can use the lasso mode to make a manual selection of vertices and show their aggregated image (activated by the button on the top left).
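The "aggregated image" of a lasso selection is, in essence, a pixel-wise mean over the selected mass channel images. A toy version of that aggregation (purely illustrative, not GRINE's actual code):

```python
# Pixel-wise mean over a set of selected m/z channel images,
# each represented here as a 2-D list of intensities.
def aggregate(images):
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / len(images)
             for c in range(cols)] for r in range(rows)]

selected = [
    [[0.0, 2.0], [4.0, 6.0]],   # channel A
    [[2.0, 2.0], [0.0, 2.0]],   # channel B
]
print(aggregate(selected))  # -> [[1.0, 2.0], [2.0, 4.0]]
```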

Also, you are totally correct that the graph might be suboptimal, so threshold adjustments might be necessary.
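The graph's edges come from thresholding a similarity matrix (Pearson correlation between channel images, per the `-sm pearson` option, with the "Chosen threshold" reported in the log above). A pure-Python sketch of that step, with toy vectors and an arbitrary threshold:

```python
import math

# Pearson correlation between two flattened channel images.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Keep an edge only where similarity exceeds the threshold.
def adjacency(channels, threshold):
    n = len(channels)
    return [[1 if i != j and pearson(channels[i], channels[j]) > threshold else 0
             for j in range(n)] for i in range(n)]

channels = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]]
print(adjacency(channels, 0.9))  # -> [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
```

Raising the threshold prunes weaker correlations and tends to break the graph into more, smaller communities; lowering it does the opposite.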

anani-a-missinou commented 3 years ago

Hi dear Karsten,

the first analyses seem interesting. I had already spotted certain discriminating molecules in my infected/control plants and resistant/susceptible genotypes. My central hypothesis was that individual metabolites would assemble into functional modules (molecular communities) and networks dedicated to specific biological activities (response to infection and/or resistance response). But this is not clear enough yet. Indeed, I do not find a large part of my molecules of interest.

Did you try to implement Probabilistic Latent Semantic Analysis or Independent Component Analysis as options in whide-v2?

Would there be a way to recover the intensity of the ions before and after the filter?

Does the computation of the correlative spatial distribution take into account the intensity?

Can you add the masses on the full-scan mean spectrum graph when applying deisotoping?

Is it possible to predict adducts in order to remove them in the correlative map of the native molecules?

Thanks for everything!

*(screenshots: di_jn2_14dpi_peakSelected, di_jn2_14dpi_deisotroping)*

Kawue commented 3 years ago

Hey Anani, those are some complex questions, which are partly off-topic. I think we should continue this conversation per mail.