Sa753 opened this issue 1 year ago
Can you try pySCENIC 0.12.1?
There are also pySCENIC docker images: https://pyscenic.readthedocs.io/en/latest/installation.html#docker-podman
Hello! Have you used Docker to solve this problem? Looking forward to your reply. Thanks!
Dear team,
I can see that this issue was opened twice before #64 and #442 but there was no solution provided. I am using pyscenic version 0.11.2 via CLI . python version 3.8.5 on a dataset of 3000 cells. I am running it on HPC. But I keep getting the error below after the command starts working. I use the following code. the pout of pip freeze
This is my command: `pyscenic grn file.loom allTFs_hg38.txt -o adj.csv --num_workers 20 --sparse`
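Not a fix for the dask errors themselves, but when driving the CLI on an HPC node it can help to wrap the call so a transient worker crash simply retries the job. A minimal stdlib sketch (the `retries` and `exe` parameters are conveniences assumed here, not pyscenic options; the flags mirror the command above):

```python
import subprocess

def run_grn(loom, tfs, out, workers=20, retries=2, exe="pyscenic"):
    """Run `pyscenic grn` as a subprocess, retrying on a non-zero exit.

    `retries` and `exe` are conveniences of this wrapper, not pyscenic flags.
    """
    cmd = [exe, "grn", loom, tfs, "-o", out,
           "--num_workers", str(workers), "--sparse"]
    for _attempt in range(retries):
        if subprocess.run(cmd).returncode == 0:
            return True
    return False

# e.g. run_grn("file.loom", "allTFs_hg38.txt", "adj.csv", workers=20)
```

Swapping `exe` for a stub command also makes the wrapper easy to dry-run before submitting a real HPC job.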
This is the `pip freeze` output:
aiohttp==3.8.1
aiosignal==1.2.0
anndata==0.8.0
arboreto==0.1.6
async-timeout==4.0.2
attrs==21.4.0
bokeh==2.3.0
boltons==21.0.0
Bottleneck @ file:///tmp/build/80754af9/bottleneck_1648028895253/work
brotlipy==0.7.0
certifi @ file:///croot/certifi_1665076670883/work/certifi
cffi @ file:///tmp/abs_98z5h56wf8/croots/recipe/cffi_1659598650955/work
charset-normalizer==2.0.12
click==6.7
cloudpickle==1.6.0
cryptography @ file:///croot/cryptography_1665612644927/work
ctxcore==0.1.1
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
cytoolz==0.11.0
dask==2021.3.0
dill==0.3.4
distributed==2021.3.0
fonttools==4.25.0
frozendict==2.3.1
frozenlist==1.3.0
fsspec==0.8.7
h5py==3.2.1
HeapDict==1.0.1
idna @ file:///croot/idna_1666125576474/work
igraph @ file:///home/conda/feedstock_root/build_artifacts/python-igraph_1641852712661/work
interlap==0.2.7
Jinja2==2.11.3
joblib==1.0.1
kiwisolver @ file:///opt/conda/conda-bld/kiwisolver_1638569886207/work
llvmlite==0.35.0
locket==0.2.1
loompy==3.0.6
louvain==0.7.1
lz4 @ file:///tmp/build/80754af9/lz4_1619516501300/work
MarkupSafe==1.1.1
matplotlib @ file:///tmp/build/80754af9/matplotlib-suite_1647441664166/work
mkl-fft==1.3.1
mkl-random @ file:///tmp/build/80754af9/mkl_random_1626186064646/work
mkl-service==2.4.0
mock @ file:///tmp/build/80754af9/mock_1607622725907/work
msgpack==1.0.2
multidict==6.0.2
multiprocessing-on-dill==3.5.0a4
munkres==1.1.4
natsort==8.1.0
networkx==2.8
numba==0.52.0
numexpr @ file:///tmp/build/80754af9/numexpr_1640704208950/work
numpy==1.19.4
numpy-groupies==0.9.13
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pandas==1.4.2
partd==1.1.0
patsy==0.5.2
Pillow==8.1.2
psutil==5.8.0
pyarrow==0.16.0
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pynndescent==0.5.6
pyOpenSSL @ file:///opt/conda/conda-bld/pyopenssl_1643788558760/work
pyparsing==2.4.7
pyscenic==0.11.2
PySocks @ file:///tmp/build/80754af9/pysocks_1605305779399/work
python-dateutil==2.8.1
pytz==2020.4
PyYAML==5.4.1
requests==2.27.1
scanpy==1.9.1
scikit-learn==0.24.1
scipy==1.6.1
seaborn @ file:///tmp/build/80754af9/seaborn_1629307859561/work
session-info==1.0.0
sip==4.19.13
six @ file:///tmp/build/80754af9/six_1644875935023/work
sortedcontainers==2.3.0
statsmodels @ file:///tmp/build/80754af9/statsmodels_1648033297787/work
stdlib-list==0.8.0
tables @ file:///tmp/build/80754af9/pytables_1647579781405/work
tblib==1.7.0
texttable @ file:///home/conda/feedstock_root/build_artifacts/texttable_1626204417032/work
threadpoolctl==2.1.0
toolz==0.11.1
tornado==6.1
tqdm==4.64.0
typing-extensions==3.7.4.3
umap-learn==0.5.2
urllib3 @ file:///croot/urllib3_1666298941550/work
yarl==1.7.2
zict==2.0.0
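Since mismatched dask/distributed versions are a frequent cause of these crashes, it is worth confirming which versions the HPC job actually sees at runtime (a `~/.local` install can shadow a conda environment). A small stdlib check, available on Python 3.8+; the package names queried are just the ones relevant to this thread:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_versions(*packages):
    """Map each distribution name to its installed version, or None if absent."""
    found = {}
    for pkg in packages:
        try:
            found[pkg] = version(pkg)
        except PackageNotFoundError:
            found[pkg] = None
    return found

print(installed_versions("pyscenic", "dask", "distributed", "arboreto"))
```

Running this inside the actual job script (rather than an interactive login shell) shows exactly which installs the workers import.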
This is the error after the command starts running:
preparing dask client
parsing input
creating dask graph
10 partitions
computing dask graph
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x2b61524b0700>
Traceback (most recent call last):
File "/data/home/hfx505/.local/lib/python3.8/site-packages/tornado/ioloop.py", line 905, in _run
return self.callback()
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/worker.py", line 708, in <lambda>
lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/batched.py", line 136, in send
raise CommClosedError()
distributed.comm.core.CommClosedError
distributed.core - ERROR - 'tcp://127.0.0.1:44971'
Traceback (most recent call last):
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/core.py", line 572, in handle_stream
handler(merge(extra, msg))
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/scheduler.py", line 4826, in handle_release_data
ws: WorkerState = parent._workers_dv[worker]
KeyError: 'tcp://127.0.0.1:44971'
distributed.utils - ERROR - 'tcp://127.0.0.1:44971'
Traceback (most recent call last):
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/utils.py", line 666, in log_errors
yield
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/scheduler.py", line 3840, in add_worker
await self.handle_worker(comm=comm, worker=address)
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/scheduler.py", line 4926, in handle_worker
await self.handle_stream(comm=comm, extra={"worker": worker})
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/core.py", line 572, in handle_stream
handler(merge(extra, msg))
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/scheduler.py", line 4826, in handle_release_data
ws: WorkerState = parent._workers_dv[worker]
KeyError: 'tcp://127.0.0.1:44971'
distributed.core - ERROR - Exception while handling op register-worker
Traceback (most recent call last):
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/core.py", line 500, in handle_comm
result = await result
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/scheduler.py", line 3840, in add_worker
await self.handle_worker(comm=comm, worker=address)
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/scheduler.py", line 4926, in handle_worker
await self.handle_stream(comm=comm, extra={"worker": worker})
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/core.py", line 572, in handle_stream
handler(merge(extra, msg))
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/scheduler.py", line 4826, in handle_release_data
ws: WorkerState = parent._workers_dv[worker]
KeyError: 'tcp://127.0.0.1:44971'
tornado.application - ERROR - Exception in callback functools.partial(<function TCPServer._handle_connection.<locals>.<lambda> at 0x2ab979863ee0>, <Task finished name='Task-728287' coro=<BaseTCPListener._handle_stream() done, defined at /data/home/hfx505/.local/lib/python3.8/site-packages/distributed/comm/tcp.py:473> exception=KeyError('tcp://127.0.0.1:44971')>)
Traceback (most recent call last):
File "/data/home/hfx505/.local/lib/python3.8/site-packages/tornado/ioloop.py", line 741, in _run_callback
ret = callback()
File "/data/home/hfx505/.local/lib/python3.8/site-packages/tornado/tcpserver.py", line 331, in <lambda>
gen.convert_yielded(future), lambda f: f.result()
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/comm/tcp.py", line 490, in _handle_stream
await self.comm_handler(comm)
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/core.py", line 500, in handle_comm
result = await result
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/scheduler.py", line 3840, in add_worker
await self.handle_worker(comm=comm, worker=address)
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/scheduler.py", line 4926, in handle_worker
await self.handle_stream(comm=comm, extra={"worker": worker})
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/core.py", line 572, in handle_stream
handler(merge(extra, msg))
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/scheduler.py", line 4826, in handle_release_data
ws: WorkerState = parent._workers_dv[worker]
KeyError: 'tcp://127.0.0.1:44971'
tornado.application - ERROR - Exception in callback <function Worker.__init__.<locals>.<lambda> at 0x2b6f6f80a670>
Traceback (most recent call last):
File "/data/home/hfx505/.local/lib/python3.8/site-packages/tornado/ioloop.py", line 905, in _run
return self.callback()
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/worker.py", line 708, in <lambda>
lambda: self.batched_stream.send({"op": "keep-alive"}), 60000
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/batched.py", line 136, in send
raise CommClosedError()
distributed.comm.core.CommClosedError
distributed.utils - ERROR - "('infer_partial_network-from-delayed-0bf9924a965aed3e8d48b9483a39dc68', 2184)"
Traceback (most recent call last):
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/utils.py", line 666, in log_errors
yield
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/scheduler.py", line 3794, in add_worker
typename=types[key],
KeyError: "('infer_partial_network-from-delayed-0bf9924a965aed3e8d48b9483a39dc68', 2184)"
distributed.core - ERROR - Exception while handling op register-worker
Traceback (most recent call last):
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/core.py", line 500, in handle_comm
result = await result
File "/data/home/hfx505/.local/lib/python3.8/site-packages/distributed/scheduler.py", line 3794, in add_worker
typename=types[key],
KeyError: "('infer_partial_network-from-delayed-0bf9924a965aed3e8d48b9483a39dc68', 2184)"
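For what it's worth, the repeated `KeyError: 'tcp://127.0.0.1:44971'` means the scheduler is still receiving messages for a worker it has already dropped from its registry, which usually happens because the worker process died (often under memory pressure) while tasks were in flight. In miniature, and purely as an illustration of the failure mode (none of these names come from dask itself):

```python
# Toy model of the scheduler's worker registry; not dask code.
workers = {"tcp://127.0.0.1:44971": "worker-state"}

def handle_release_data(worker):
    # The real scheduler does the moral equivalent of this bare lookup,
    # which raises KeyError once the worker's entry is gone.
    return workers[worker]

# The worker dies and the scheduler removes it...
workers.pop("tcp://127.0.0.1:44971")

# ...but a message queued for that worker still arrives afterwards.
try:
    handle_release_data("tcp://127.0.0.1:44971")
except KeyError as exc:
    print("scheduler no longer knows", exc)
```

If that reading is right, the fix is to keep the workers alive rather than to silence the KeyError: lowering `--num_workers` or giving each worker more memory is the usual first step.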