Our score method takes in pydantic models, converts them to dicts, operates on those dicts, and then converts them back into models to return them.

These operations are very expensive for large messages. For a one-hop coalesced answer to (small molecules)-[treat]-(hyperlipidemia), the decoding step calls `__instancecheck__` an enormous number of times, accounting for about 40% of the total runtime.

Presumably there's no reason to spend 40% of our time re-validating data that we just produced ourselves.
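A minimal sketch of the round-trip and one possible fix. The `Edge`/`Message` models and the lowercasing "operation" here are hypothetical stand-ins for our real message types and scoring logic; the point is the contrast between `model_validate`, which re-validates (and drives the `__instancecheck__` calls), and `model_construct`, which builds a model from already-trusted data without validation (pydantic v2 API):

```python
from pydantic import BaseModel


class Edge(BaseModel):
    subject: str
    object: str
    predicate: str


class Message(BaseModel):
    edges: list[Edge]


def score_roundtrip(message: Message) -> Message:
    # Current pattern: dump to dicts, mutate, then fully re-validate on the way back.
    data = message.model_dump()
    for edge in data["edges"]:
        edge["predicate"] = edge["predicate"].lower()  # placeholder for real scoring work
    return Message.model_validate(data)  # re-validates every field of every edge


def score_no_revalidate(message: Message) -> Message:
    # Possible fix: same operations, but skip validation since we trust our own output.
    data = message.model_dump()
    for edge in data["edges"]:
        edge["predicate"] = edge["predicate"].lower()
    return Message.model_construct(
        edges=[Edge.model_construct(**e) for e in data["edges"]]
    )
```

`model_construct` trades safety for speed: it will happily build a model from malformed data, so it only makes sense on the return path where the dicts came straight out of a valid model.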