Open · DevangThakkar opened 1 year ago
Thanks for this. I'm worried about the performance hit, though. Typically, string splitting is what takes the most time when creating a db, and Python regular expressions are notoriously slow.
This benchmarking code:

```python
import re
import timeit

# Split on semicolons only when the rest of the string contains an even
# number of quotes, i.e. skip semicolons that sit inside quoted values.
r = re.compile(r''';(?=(?:[^"]|"[^"]*")*$)''')

a = '''gene_id "BSU_00010"; transcript_id "unassigned_transcript_1"; db_xref "EnsemblGenomes-Gn:BSU00010"; db_xref "EnsemblGenomes-Tr:CAB11777"; db_xref "GOA:P05648"; db_xref "InterPro:IPR001957"; db_xref "InterPro:IPR003593"; db_xref "InterPro:IPR010921"; db_xref "InterPro:IPR013159"; db_xref "InterPro:IPR013317"; db_xref "InterPro:IPR018312"; '''

def split(a):
    a.split()

def regex(a):
    r.split(a)

# The attribute string is interpolated into the timed statement; it
# contains no single quotes, so the quoting below is safe.
print("split: ", timeit.Timer(f"split('{a}')", "from __main__ import split").timeit())
print("regex: ", timeit.Timer(f"regex('{a}')", "from __main__ import regex").timeit())
```
gives this on my laptop:

```
split: 0.7770832080277614
regex: 23.40327387099387
```
So for this example, the regex is 30x slower! That means a human GTF, instead of taking ~15 min to build a db, would take over 7 hrs. And based on some initial testing, the gap between the methods grows with increasing attribute string length.
Unfortunately, unless the performance can be brought much closer to that of `str.split`, handling a corner case like this is not worth slowing everything else down.
Maybe there could be a fallback option, where the regex only happens under certain circumstances? I think the issue in #212 is not hit until after parsing, but maybe there would be some way of detecting, at parse time, cases where it should try harder with the regex.
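Something along these lines is the kind of fallback I mean. This is only a rough sketch, not gffutils code; the function name and the even-quote heuristic are illustrative:

```python
import re

# Same quote-aware pattern as in the benchmark above.
QUOTED_SEMI_RE = re.compile(r';(?=(?:[^"]|"[^"]*")*$)')

def split_fields(attr_str):
    """Fast split by default; fall back to the regex only when a quoted
    value appears to have been broken by the plain split."""
    fields = attr_str.split(';')
    # If every field contains an even number of quotes, no quoted value
    # straddles a split point, so the fast result is safe to use.
    if all(f.count('"') % 2 == 0 for f in fields):
        return fields
    # A field with an odd quote count means a semicolon sat inside
    # quotes; only now pay for the quote-aware regex.
    return QUOTED_SEMI_RE.split(attr_str)
```

On well-formed lines like the one in the benchmark, the regex never runs, so the common case keeps `str.split` speed. For something like `gene_id "g1"; note "has; a semicolon";`, the odd quote count in the broken field triggers the regex and the quoted value stays intact.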
That makes sense; I had not thought about the speed implications of the regex. An alternative solution is to allow for this via an additional term in the dialect. Again, given that this is an edge case, I didn't want to slow everything down for it, so I followed the logic of the `wormbase_gff2` example, where the user needs to infer the dialect using `helpers.infer_dialect()` and pass that dialect to the created db or to `gffutils.create_db(..., dialect=dialect, ...)`.
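For reference, usage would look roughly like this (a sketch: the file names and attribute strings are placeholders, and I'm assuming `infer_dialect()` is given the raw attribute-column string(s) from the file):

```python
import gffutils
from gffutils import helpers

# Attribute strings taken from the file's 9th column (made-up example
# with a semicolon inside a quoted value).
attributes = [
    'gene_id "g1"; note "contains; a semicolon";',
]

# With this change, the inferred dialect gets semicolon_in_quotes=True
# whenever the regex split and the default split disagree.
dialect = helpers.infer_dialect(attributes)

# Pass the inferred dialect in explicitly; parsing then uses the
# quote-aware split only because the dialect asks for it.
db = gffutils.create_db("annotation.gtf", dbfn="annotation.db", dialect=dialect)
```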
The changes I have made are:

- `constants.py`: added a new dialect key `semicolon_in_quotes`, which defaults to False.
- `parser.py`: added a parameter `infer_dialect_call` to the function definition `_split_keyvals()`, with a default value of False. The regex split is only used if `semicolon_in_quotes` is explicitly set to True, or if the call comes from `helpers.infer_dialect()` with `infer_dialect_call` explicitly set to True. If the call was made from `helpers.infer_dialect()`, we see if there is a difference between the regex split and the default split. The two would be different if there is a semicolon inside quotes, in which case `semicolon_in_quotes` is set to True. (A stripped-down sketch of this flow follows the list.)
- `helpers.py`: added `infer_dialect_call` to the function call `_split_keyvals()`, with the value set to True.
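To make the control flow concrete, here is a stripped-down sketch of the behavior described above (the real `_split_keyvals()` in parser.py does much more; `dialect` is a plain dict here for illustration):

```python
import re

QUOTED_SEMI_RE = re.compile(r';(?=(?:[^"]|"[^"]*")*$)')

def _split_fields(attr_str, dialect, infer_dialect_call=False):
    # Users who opted in via the dialect pay the regex cost knowingly.
    if dialect.get("semicolon_in_quotes", False):
        return QUOTED_SEMI_RE.split(attr_str)

    # Everyone else keeps the fast default split.
    fields = attr_str.split(';')

    if infer_dialect_call:
        # Only helpers.infer_dialect() passes infer_dialect_call=True.
        # The two splits differ exactly when a semicolon hides inside
        # quotes, and that is when the flag gets recorded.
        if QUOTED_SEMI_RE.split(attr_str) != fields:
            dialect["semicolon_in_quotes"] = True

    return fields
```

So db creation with a default dialect never touches the regex; the one-time comparison happens only during dialect inference.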
Let me know what you think.
This PR fixes https://github.com/daler/gffutils/issues/212, where semicolons inside quoted string values were causing issues. I was not able to replicate the `AttributeStringError` mentioned in the issue, since the exact usage was not specified, but this is what the outputs look like before and after the change. I used the file mentioned in the issue for this test.

BEFORE:
AFTER:
Let me know what you think!