pangenome / odgi

Optimized Dynamic Genome/Graph Implementation: understanding pangenome graphs
https://doi.org/10.1093/bioinformatics/btac308
MIT License

odgi build error with GFA graph #282

Closed. brettChapman closed this issue 3 years ago.

brettChapman commented 3 years ago

Hi

I'm getting an error with one of my graphs. I'm trying to build from GFA and am getting the following error:

odgi build -t 32 -P -g barley_pangenome_graph_2H.gfa -o barley_pangenome_graph_2H.gfa.og
[odgi::gfa_to_handle] building nodes: 73.86% @ 1.99e+06/s elapsed: 00:00:00:53 remain: 00:00:00:19
odgi: /smoothxg/deps/odgi/src/odgi.cpp:507: virtual handlegraph::handle_t odgi::graph_t::create_handle(const string&, const nid_t&): Assertion `!has_node(id)' failed.
srun: error: node-9: task 0: Aborted (core dumped)

The original graph has consensus paths in it, so I removed them. I then converted from VG to GFA.

Is there a way I can process the graph, either from the VG or the GFA, to clean it up and remove what appears to be a node ID causing the error? I don't think it's a memory issue, as I checked for any killed processes on the compute node and there were none. I also have 126GB RAM on the node, and the GFA file is only 25GB in size.

subwaystation commented 3 years ago

Hi @brettChapman

The error message indicates that the same node identifier appears twice in your GFA. One way to verify this is to compare the total number of node identifiers with the number of unique identifiers:

grep "^S" barley_pangenome_graph_2H.gfa | cut -f 2 | wc -l
grep "^S" barley_pangenome_graph_2H.gfa | cut -f 2 | sort -u | wc -l

If both numbers are the same, then this needs further investigation on the odgi side.
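
If the counts do differ, a small awk pass can list the offending IDs directly instead of just counting them. This is only a sketch, assuming a GFA v1 file with the node ID in column 2 of each S-line; the filename is a placeholder:

```shell
# Print each node ID that appears more than once among the S-lines
awk -F'\t' '$1 == "S" { if (seen[$2]++) print $2 }' graph.gfa | sort -u
```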

Or you can also validate your GFA with vg validate. I don't think GFAv1 allows duplicated node identifiers, so the file may have become malformed at some point. vg also has an ids subcommand, but I don't think it will fix your problem.

How did you build your GFA?

brettChapman commented 3 years ago

Hi @subwaystation

I first generated the GFA through PGGB. I then removed consensus paths and generated VG graphs.

I then ran vg ids to make the node IDs unique across all graphs:

vg ids -j $(for i in $(seq 1 7); do echo barley_pangenome_graph_${i}H.vg; done)

Now with unique node IDs, I indexed the graphs to generate one large index, so I could run vg gbwt.

vg index -x barley_pangenome_graph.xg $(for i in $(seq 1 7); do echo barley_pangenome_graph_${i}H.vg; done)
vg gbwt -x barley_pangenome_graph.xg --buffer-size 1000 --index-paths -o barley_pangenome_graph.gbwt

I want to further query the graphs using odgi and visualise them (my original graphs visualised with PGGB had consensus paths in them).

My vg gbwt run is still running.

I'll try as you suggested and count the unique node IDs and also try vg validate, and get back to you. Thanks.

brettChapman commented 3 years ago

To get the GFA from the VG graph I ran:

vg view --threads 32 -Vg barley_pangenome_graph_2H.vg > barley_pangenome_graph_2H.gfa
brettChapman commented 3 years ago

Hi @subwaystation

I just ran a count on the node IDs. Both give the same number, whether filtering for unique IDs or not:

srun -n 1 grep '^S' barley_pangenome_graph_2H.gfa | cut -f -2 | wc -l
145086038
srun -n 1 grep '^S' barley_pangenome_graph_2H.gfa | cut -f -2 | uniq | wc -l
145086038

I'll now try vg validate and see what happens.

brettChapman commented 3 years ago

Hi @subwaystation

I just ran vg validate:

srun -n 1 singularity exec --bind /data/pangenome_snp_calling/problem_gfa:/data/pangenome_snp_calling/problem_gfa /data/vg_builds/vg.sif vg validate barley_pangenome_graph_2H.vg
graph: valid

It appears to be valid. Perhaps the problem is introduced when I convert to GFA:

vg view --threads 32 -Vg barley_pangenome_graph_2H.vg > barley_pangenome_graph_2H.gfa

Would it be worth running vg ids -j again on just that single 2H graph?

subwaystation commented 3 years ago

Hi @brettChapman As you showed the node identifiers are unique, I doubt vg ids -j will help us. But you can validate the GFA directly after the conversion from the vg format; vg validate is flexible here.

Maybe it is odgi's fault then. It could be a concurrency problem. What version or commit of odgi did you use? Would it be possible for you to share the GFA, or a subset of it that reproduces the error? Thanks!

brettChapman commented 3 years ago

Hi @subwaystation

Thanks. I'm now trying to validate the GFA.

I think you're right, it sounds like a concurrency problem. If the GFA fails to validate, I'll try converting to GFA again using only 1 thread. If that also fails, I'll try getting the VG and GFA to you.

I'm using odgi version v0.5.1-331-g29da972 "Phoenix"

subwaystation commented 3 years ago

Could you please try out v0.6 or the current master? There were lots of changes since the end of April; maybe the problem won't occur anymore. Are you using the pggb docker? Maybe it is time to update it there, too.

brettChapman commented 3 years ago

Yes, I'm using PGGB, the docker version. I'll update PGGB as well and see if it runs ok. I'll let you know how it goes.

subwaystation commented 3 years ago

Sorry, I meant I need to update odgi in PGGB. Will let you know when it is there.

subwaystation commented 3 years ago

Alright, I just merged https://github.com/pangenome/pggb/pull/109. As soon as the CI has built the docker image, could you please try again? By the way, I just realized that Bioconda also provides a tiny docker image for odgi: https://quay.io/repository/biocontainers/odgi?tab=tags.

brettChapman commented 3 years ago

Hi @subwaystation

I tried with the updated ODGI (version 0.6-39-g993dc1b "Domani") from the latest PGGB docker, but my 2H graph still fails with the same error, even though it passes vg validate as both GFA and VG graphs.

Is there an SFTP site I could transfer it to, for you to take a look at the graph? I could transfer directly from my cluster. The graph is about 25G in size. Thanks.

subwaystation commented 3 years ago

Hi @brettChapman

We took another look at your error, and @ekg pointed out that the -mcx16 flag used when building the docker image and compiling smoothxg and odgi could be the culprit. I am modifying odgi now so we can leave this flag out, ensuring cross-machine compatibility.

However, it would help a lot if you could build odgi from source and try your command line again. This will tell us whether the issue lies with the docker image or with odgi itself. Building instructions can be found in our brand new documentation: https://odgi.readthedocs.io/en/latest/rst/installation.html#building-from-source. Thanks!

subwaystation commented 3 years ago

@brettChapman Could you please try again? I just merged https://github.com/pangenome/pggb/pull/110.

brettChapman commented 3 years ago

Hi @subwaystation

I just tried the new PGGB build with ODGI version 0.6-48-g94969da "Domani". I'm still getting the same error message.

I then tried building from the ODGI source. I built ODGI version 0.6-69-g922230f "Domani" within a local docker image, and it appears to be running now. Is there a reason you can't update ODGI in the PGGB docker image to 0.6-69-g922230f "Domani"? Given that it now works with a later version in its own docker image, I assume the issue is with the older version of ODGI that PGGB is using? Thanks.

brettChapman commented 3 years ago

Hi @subwaystation and @ekg

Something I thought worth pointing out: while running odgi I came across errors about my graphs being unsorted, so I've added odgi sort with the -O flag prior to odgi viz in my script. I notice odgi sort is left out of the PGGB script, which may cause issues with some graphs, as I found with mine after removing the consensus paths. It might be worth adding it to the PGGB script to avoid possible issues with graphs.

subwaystation commented 3 years ago

Hi @brettChapman

Damn, I was hoping it was fixed. Maybe we are overlooking some compiling instructions. Anyhow, could you please drop a mail to simon.heumos@qbic.uni-tuebingen.de? We don't have an SFTP, but I could give you access to a VM where you can drop your data.

When I finished my work yesterday, 0.6-48-g94969da "Domani" was still the latest version. @ekg merged his untangle branch in the meantime; that's why you were already able to check out 0.6-69-g922230f "Domani". His code is not related to your problem, though. We think it's a problem with how the CI compiles ODGI when producing the docker image. We have often observed issues here, because the CI machine compiling the code uses certain instruction sets which might not be compatible with the machine you are using. The fact that you can build and run your docker image locally supports this theory.

Your PGGB question is tracked here: https://github.com/pangenome/pggb/issues/111

subwaystation commented 3 years ago

Hi @brettChapman

Indeed, you were right: 0.6-48-g94969da "Domani" did not work. I then tried with 0.6-75-g2fc8504 "Domani" and was no longer able to reproduce the error. I am puzzled by this, because I am not aware of changes that would affect your problem.

I am updating smoothxg to the working version here: https://github.com/pangenome/smoothxg/pull/129. Will let you know when it's present in PGGB.

subwaystation commented 3 years ago

@brettChapman I just tried out the most recent PGGB docker image with ODGI 0.6-75-g2fc8504 "Domani" and was able to execute your command above! :) As this is fixed from my point of view, I am closing this issue. If not, please re-open it.

brettChapman commented 3 years ago

Hi @subwaystation

Thanks for updating PGGB with the latest ODGI. While you were updating PGGB, I had been running ODGI version 0.6-69-g922230f "Domani": odgi build followed by odgi sort. I got an error complaining that my OG graph is not sorted. However, it's asking me to sort the graph while I'm trying to sort the graph, which is a chicken-and-egg problem: I can't sort it before I sort it.

I ran this code:

singularity exec --bind /data/pangenome_snp_calling:/data/pangenome_snp_calling /data/odgi_builds/odgi.sif odgi build -t 32 -P -g barley_pangenome_graph_2H.gfa -o barley_pangenome_graph_2H.gfa.unsorted.og
[odgi::gfa_to_handle] building nodes: 100.00% @ 1.42e+06/s elapsed: 00:00:01:42 remain: 00:00:00:00
[odgi::gfa_to_handle] building edges: 100.00% @ 1.07e+06/s elapsed: 00:00:03:08 remain: 00:00:00:00
[odgi::gfa_to_handle] building paths: 100.00% @ 6.53e-02/s elapsed: 00:00:05:06 remain: 00:00:00:00
+ srun -n 1 singularity exec --bind /data/pangenome_snp_calling:/data/pangenome_snp_calling /data/odgi_builds/odgi.sif odgi sort -i barley_pangenome_graph_2H.gfa.unsorted.og --threads 32 -P -Y -O -o barley_pangenome_graph_2H.gfa.og
error [xp]: Graph to index is not optimized. Please run 'odgi sort' using -O, --optimize
brettChapman commented 3 years ago

I'll try and run it again with the updated PGGB instead of my locally built ODGI and see if that fixes it, but I thought it worth mentioning.

brettChapman commented 3 years ago

Hi @subwaystation

I just tested my 2H GFA with ODGI from the latest docker PGGB image and I'm still getting the error with odgi sort:

srun -n 1 singularity exec --bind /data/pangenome_snp_calling/problem_gfa:/data/pangenome_snp_calling/problem_gfa /data/pggb_builds/pggb.sif odgi build -t 32 -P -g barley_pangenome_graph_2H.gfa -o barley_pangenome_graph_2H.gfa.unsorted.og
[odgi::gfa_to_handle] building nodes: 100.00% @ 1.52e+06/s elapsed: 00:00:01:35 remain: 00:00:00:00
[odgi::gfa_to_handle] building edges: 100.00% @ 9.17e+05/s elapsed: 00:00:03:40 remain: 00:00:00:00
[odgi::gfa_to_handle] building paths: 100.00% @ 6.99e-02/s elapsed: 00:00:04:46 remain: 00:00:00:00
+ srun -n 1 singularity exec --bind /data/pangenome_snp_calling/problem_gfa:/data/pangenome_snp_calling/problem_gfa /data/pggb_builds/pggb.sif odgi sort -i barley_pangenome_graph_2H.gfa.unsorted.og --threads 32 -P -Y -O -o barley_pangenome_graph_2H.gfa.og
error [xp]: Graph to index is not optimized. Please run 'odgi sort' using -O, --optimize
srun: error: node-5: task 0: Exited with exit code 1

Should I be dropping the other parameters, such as -P and -Y? I'm using -O as the error suggests, but I still get the error. Could you please try with the graphs I sent you and see if you can reproduce it? Thanks.

ekg commented 3 years ago

On the first sort, just use -O. Then you can sort again. Probably, it should be adjusted to let you optimize before sorting with path-guided SGD, or maybe to do so automatically.


brettChapman commented 3 years ago

Hi @ekg, thanks, I'll try again by running sort twice, first with just -O. I'll update again if there are any issues.

subwaystation commented 3 years ago

@brettChapman Sorry for the confusion. This error message is not a bug; it is intended. Some sorting algorithms, like the PG-SGD, require the graph to be optimized. This means that all nodes have strictly contiguous node identifiers. If not, odgi sort bails out. But I understand that from a user's point of view this is inconvenient, because you did specify -O -Y.
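
Concretely, "optimized" here means the node identifiers form one contiguous range. Since the IDs in a valid GFA are unique, a rough shell check is to compare the S-line count against the maximum ID. This is a sketch only; it assumes positive numeric node IDs, that the compacted range starts at 1, and a placeholder filename:

```shell
# With unique positive IDs, count == max implies the IDs are exactly 1..N
awk -F'\t' '$1 == "S" { n++; if ($2 + 0 > max) max = $2 + 0 }
    END { print ((n == max) ? "optimized" : "needs odgi sort -O") }' graph.gfa
```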

ekg commented 3 years ago

We should probably always optimize before sorting. Or at least check whether it needs to be done, and do it if so.


subwaystation commented 3 years ago

I agree. It is planned in #293 ;)

brettChapman commented 3 years ago

Hi @ekg and @subwaystation

I tried optimizing the graph before sorting, but I'm still getting the same error message:

srun -n 1 singularity exec --bind /data/pangenome_snp_calling:/data/pangenome_snp_calling /data/pggb_builds/pggb.sif odgi build -t 32 -P -g barley_pangenome_graph_2H.gfa -o barley_pangenome_graph_2H.gfa.unsorted.og
[odgi::gfa_to_handle] building nodes: 100.00% @ 1.58e+06/s elapsed: 00:00:01:31 remain: 00:00:00:00
[odgi::gfa_to_handle] building edges: 100.00% @ 8.98e+05/s elapsed: 00:00:03:45 remain: 00:00:00:00
[odgi::gfa_to_handle] building paths: 100.00% @ 6.56e-02/s elapsed: 00:00:05:05 remain: 00:00:00:00
+ srun -n 1 singularity exec --bind /data/pangenome_snp_calling:/data/pangenome_snp_calling /data/pggb_builds/pggb.sif odgi sort -i barley_pangenome_graph_2H.gfa.unsorted.og --threads 32 -O -o barley_pangenome_graph_2H.gfa.optimized.og
+ srun -n 1 singularity exec --bind /data/pangenome_snp_calling:/data/pangenome_snp_calling /data/pggb_builds/pggb.sif odgi sort -i barley_pangenome_graph_2H.gfa.optimized.og --threads 32 -P -Y -o barley_pangenome_graph_2H.gfa.og
error [xp]: Graph to index is not optimized. Please run 'odgi sort' using -O, --optimize
srun: error: node-9: task 0: Exited with exit code 1

I ran odgi sort with only -O first, then ran it again on the optimized graph with only -P and -Y.

brettChapman commented 3 years ago

@subwaystation this is with the 2H graph I sent you, so there may be something peculiar about it that is causing these problems. You would think that if there were something wrong with the node IDs (like we found when using older versions of ODGI), it would get picked up during the optimization step.

subwaystation commented 3 years ago

Hi @brettChapman I can reproduce the error. I am now finding out what the problem is.

subwaystation commented 3 years ago


@brettChapman Would it be possible to send us the whole, full graph? Just like last time ;) We want to figure out whether the initial splitting by chromosome could be the problem. Thanks!

brettChapman commented 3 years ago

Hi @subwaystation

By full graph, do you mean the entire genome with all 7 chromosomes? I didn't run PGGB on the entire genome, only on each chromosome separately. That was mainly a decision driven by our limited memory of 126GB per node. I could send you each of the chromosome graphs 1H to 7H as GFA/VG graphs, if that would help.

brettChapman commented 3 years ago

I could also send the graphs with the consensus paths embedded (the original PGGB output), if that would help too.

brettChapman commented 3 years ago

Hi @subwaystation

I'm currently uploading all graphs to the VM. This includes every chromosome graph (GFA and VG), and also the original smoothed graphs (.gfa.gz) output directly from PGGB, which include the consensus paths that I later removed from the GFA and VG graphs we're currently looking at. I'm concerned the issue may relate to the removal of the consensus paths, or perhaps to when I reindexed the node IDs with vg ids -j.

For reference, I removed the consensus paths via this method:

vg convert -g test_graph.gfa -p > test_graph.pg
vg index test_graph.pg -x test_graph.xg
vg paths -x test_graph.xg -L | grep Consensus > consensus_paths.txt
vg paths -v test_graph.pg -p consensus_paths.txt -d > test_graph.noConsensus.vg
vg index test_graph.noConsensus.vg -x test_graph.noConsensus.xg
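
As a quick sanity check after the drop, the remaining path names can be tested for leftover consensus paths. This is a sketch, assuming (as in the commands above) that only consensus paths contain the string "Consensus"; the file name is hypothetical:

```shell
# remaining_paths.txt would come from: vg paths -x test_graph.noConsensus.xg -L
if grep -q 'Consensus' remaining_paths.txt; then
    echo "consensus paths still present"
else
    echo "no consensus paths remain"
fi
```
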
subwaystation commented 3 years ago

Hi @brettChapman thanks for the CLIs and data!

As you can see in https://github.com/pangenome/odgi/pull/296 we are still struggling to identify the bug. Will keep you updated.

subwaystation commented 3 years ago


@brettChapman With the current master, this ran through for me :)

brettChapman commented 3 years ago

Thanks @subwaystation I'll retry with the new PGGB docker build.

brettChapman commented 3 years ago

Hi @subwaystation

I can confirm ODGI in the PGGB docker build now works fine. The problem GFA is now being sorted correctly, and I'm working through each of the other GFA files, with no problems so far. Thanks.

subwaystation commented 3 years ago

Glad to hear @brettChapman !

cwatt commented 1 month ago

Hello, I'm also experiencing this error. I installed odgi via bioconda. Unfortunately, odgi version doesn't print anything, but I assume the version is 0.8.6-2 according to the current bioconda recipe.

What I did: I constructed a graph of a single chromosome using minigraph-cactus and converted the resulting v1.1 .gfa file to v1.0 via:

vg convert -gfW graph.gfa -t 30 > graph_v1.0.gfa

I then attempted to build the .og file:

odgi build -g graph_v1.0.gfa -o graph.og -s -t 30

... but received this error:

odgi: /opt/conda/conda-bld/odgi_1623932587958/work/src/odgi.cpp:507: virtual handlegraph::handle_t odgi::graph_t::create_handle(const string&, const nid_t&): Assertion '!has_node(id)' failed.

The node IDs are unique:

grep "^S" graph_v1.0.gfa | cut -f 2 | wc -l
5745440
grep "^S" graph_v1.0.gfa | cut -f 2 | uniq | wc -l
5745440

And the .gfa file is valid:

vg validate graph_v1.0.gfa
graph: valid

I also tried constructing the .og using the v1.1 .gfa file, in case the conversion was the issue, with the same result. Lastly, I tried running odgi build with only 1 thread, because I saw that suggestion in the thread, but no luck.

I'm not sure what the problem is or how else to troubleshoot. Any help would be appreciated!

AndreaGuarracino commented 4 weeks ago

Can you share the GFA? or a subset of it that triggers the error?



cwatt commented 2 weeks ago

Hi sorry for the late reply, unfortunately I can't share any of the GFA because it contains proprietary data, and I'm not sure what aspect is causing the issue. Strangely, I was able to create .og versions of some graphs using the same methods but not others. The only difference I could detect is that the graphs that could be converted contained 4 genomes and the graphs that could not contained 8+ genomes. Hopefully the next time this error pops up its from someone who has shareable data!