nextflow run main.nf -profile docker --reads ../../../../../../../home/group_gaurav01/Akash_tab/Nanopore/FAST_P/SO_10645_A_GS_Con1_barcode13_fastp.fastq --db ../../db/16S_ribosomal_RNA --tax ../../db/taxdb/
N E X T F L O W ~ version 22.04.0
Launching main.nf [disturbed_heisenberg] DSL1 - revision: 2a51687d92
NanoCLUST v1.0dev
Run Name : disturbed_heisenberg
Reads : ../../../../../../../home/group_gaurav01/Akash_tab/Nanopore/FAST_P/SO_10645_A_GS_Con1_barcode13_fastp.fastq
Max Resources : 128 GB memory, 16 cpus, 10d time per job
Container : docker - [:]
Output dir : ./results
Launch dir : /mnt/USB2/group_gs/Akash/Database/NanoClust/NanoCLUST-master
Working dir : /mnt/USB2/group_gs/Akash/Database/NanoClust/NanoCLUST-master/work
Script dir : /mnt/USB2/group_gs/Akash/Database/NanoClust/NanoCLUST-master
User : group_gaurav01
Config Profile : docker
executor > local (5)
[25/8115a7] process > QC (1) [100%] 1 of 1 ✔
[59/d688ab] process > fastqc (1) [100%] 1 of 1 ✔
[97/590a24] process > kmer_freqs (1) [100%] 1 of 1 ✔
[61/ebd5ef] process > read_clustering (1) [ 0%] 0 of 1
[- ] process > split_by_cluster -
[- ] process > read_correction -
[- ] process > draft_selection -
[- ] process > racon_pass -
[- ] process > medaka_pass -
[- ] process > consensus_classification -
[- ] process > join_results -
[- ] process > get_abundances -
[- ] process > plot_abundances -
[05/bd73e4] process > output_documentation [100%] 1 of 1 ✔
Error executing process > 'read_clustering (1)'
Caused by:
Process read_clustering (1) terminated with an error exit status (1)
Command executed [/mnt/USB2/group_gs/Akash/Database/NanoClust/NanoCLUST-master/templates/umap_hdbscan.py]:

#!/usr/bin/env python

import numpy as np
import umap
import matplotlib.pyplot as plt
from sklearn import decomposition
import random
import pandas as pd
import hdbscan

df = pd.read_csv("freqs.txt", delimiter=" ")

# UMAP
motifs = [x for x in df.columns.values if x not in ["read", "length"]]
X = df.loc[:, motifs]
X_embedded = umap.UMAP(n_neighbors=15, min_dist=0.1, verbose=2).fit_transform(X)

df_umap = pd.DataFrame(X_embedded, columns=["D1", "D2"])
umap_out = pd.concat([df["read"], df["length"], df_umap], axis=1)

# HDBSCAN
X = umap_out.loc[:, ["D1", "D2"]]
umap_out["bin_id"] = hdbscan.HDBSCAN(min_cluster_size=int(50), cluster_selection_epsilon=int(0.5)).fit_predict(X)

# PLOT
plt.figure(figsize=(20, 20))
plt.scatter(X_embedded[:, 0], X_embedded[:, 1], c=umap_out["bin_id"], cmap='Spectral', s=1)
plt.xlabel("UMAP1", fontsize=18)
plt.ylabel("UMAP2", fontsize=18)
plt.gca().set_aspect('equal', 'datalim')
plt.title("Projecting " + str(len(umap_out['bin_id'])) + " reads. " + str(len(umap_out['bin_id'].unique())) + " clusters generated by HDBSCAN", fontsize=18)

for cluster in np.sort(umap_out['bin_id'].unique()):
    read = umap_out.loc[umap_out['bin_id'] == cluster].iloc[0]
    plt.annotate(str(cluster), (read['D1'], read['D2']), weight='bold', size=14)

plt.savefig('hdbscan.output.png')
umap_out.to_csv("hdbscan.output.tsv", sep=" ", index=False)
Command exit status: 1

Command output: (empty)

Command error:
Status: Downloaded newer image for hecrp/nanoclust-read_clustering:latest
Traceback (most recent call last):
File "/opt/conda/envs/read_clustering/lib/python3.8/site-packages/numba/core/caching.py", line 487, in save
data_name = overloads[key]
KeyError: ('f4(f4[::1],f4[::1])', ('x86_64-unknown-linux-gnu', 'sandybridge', ...))

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File ".command.sh", line 4, in <module>
import umap
File "/opt/conda/envs/read_clustering/lib/python3.8/site-packages/umap/__init__.py", line 1, in <module>
from .umap_ import UMAP
File "/opt/conda/envs/read_clustering/lib/python3.8/site-packages/umap/umap_.py", line 54, in <module>
from umap.layouts import (
File "/opt/conda/envs/read_clustering/lib/python3.8/site-packages/umap/layouts.py", line 39, in <module>
def rdist(x, y):
File "/opt/conda/envs/read_clustering/lib/python3.8/site-packages/numba/core/decorators.py", line 218, in wrapper
disp.compile(sig)
File "/opt/conda/envs/read_clustering/lib/python3.8/site-packages/numba/core/compiler_lock.py", line 32, in _acquire_compile_lock
return func(*args, **kwargs)
File "/opt/conda/envs/read_clustering/lib/python3.8/site-packages/numba/core/dispatcher.py", line 825, in compile
self._cache.save_overload(sig, cres)
File "/opt/conda/envs/read_clustering/lib/python3.8/site-packages/numba/core/caching.py", line 671, in save_overload
self._save_overload(sig, data)
File "/opt/conda/envs/read_clustering/lib/python3.8/site-packages/numba/core/caching.py", line 681, in _save_overload
self._cache_file.save(key, data)
File "/opt/conda/envs/read_clustering/lib/python3.8/site-packages/numba/core/caching.py", line 496, in save
self._save_index(overloads)
File "/opt/conda/envs/read_clustering/lib/python3.8/site-packages/numba/core/caching.py", line 545, in _save_index
f.write(data)
File "/opt/conda/envs/read_clustering/lib/python3.8/contextlib.py", line 120, in __exit__
next(self.gen)
File "/opt/conda/envs/read_clustering/lib/python3.8/site-packages/numba/core/caching.py", line 581, in _open_for_write
yield f
OSError: [Errno 28] No space left on device
Work dir:
/mnt/USB2/group_gs/Akash/Database/NanoClust/NanoCLUST-master/work/61/ebd5ef17b84d0833cf0e048da0c72e
Tip: you can replicate the issue by changing to the process work dir and entering the command bash .command.run
[nf-core/nanoclust] Pipeline completed with errors
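The failure is not in the pipeline logic itself: `OSError: [Errno 28] No space left on device` is raised while numba tries to write its JIT compilation cache during `import umap`, so some filesystem the container writes to is full. Before re-running, it is worth checking free space on the filesystems involved — typically Docker's data-root (default `/var/lib/docker`) and the drive holding the Nextflow work directory (`/mnt/USB2` here). A minimal stdlib-only sketch; the paths listed are examples and should be replaced with the mount points on your machine:

```python
import shutil

# Example mount points to inspect; substitute your Docker data-root
# (default /var/lib/docker) and the drive holding the Nextflow work dir.
paths = ["/", "/tmp"]

for path in paths:
    usage = shutil.disk_usage(path)  # named tuple: (total, used, free), in bytes
    print(f"{path}: {usage.free / 1e9:.1f} GB free of {usage.total / 1e9:.1f} GB")
```

If the Docker data-root is the full filesystem, `docker system df` and `docker system prune` can reclaim space from unused images and containers.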
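If freeing disk space is not an option, numba's documented `NUMBA_CACHE_DIR` environment variable can redirect the JIT cache to a writable location. It must be set before numba (or anything that imports it, such as umap) is loaded. A sketch under that assumption; the cache path below is an example, not part of NanoCLUST:

```python
import os
import tempfile

# Hypothetical relocation of numba's JIT cache to a writable directory.
# This must run before numba is imported anywhere in the process.
cache_dir = os.path.join(tempfile.gettempdir(), "numba_cache")
os.makedirs(cache_dir, exist_ok=True)
os.environ["NUMBA_CACHE_DIR"] = cache_dir

# import umap  # numba would now write its compiled-function cache under cache_dir
print(os.environ["NUMBA_CACHE_DIR"])
```

In a containerized pipeline run, the variable would more naturally be exported into the process environment (for example via Nextflow's `env` config scope or `docker run -e NUMBA_CACHE_DIR=...`) rather than edited into the template script.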