At present, the "taxon_annotations" corpus can unfortunately take quite a while (0.5-1 hour or more) to compute term frequencies, because they are not pre-computed in the database; it does work, however. For entity terms, many pre-generated post-compositions in the subsumer list incorrectly receive a count of zero from the API. In practice this may not matter much: if terms with higher IC are also shared, they will mask the problem.
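To illustrate why the zero counts can be masked, here is a minimal sketch (not the package's actual implementation) of a Resnik-style score taken as the maximum IC over shared subsumers. The corpus size, term names, counts, and the choice to skip zero-count terms (rather than let them produce an infinite IC) are all assumptions for illustration only.

```python
import math

# Assumed corpus size: 10,000 taxon annotations (hypothetical figure).
CORPUS_SIZE = 10_000

def information_content(count, corpus_size=CORPUS_SIZE):
    """IC = -log2(term frequency in the corpus)."""
    if count == 0:
        # Assumption: zero-count terms are skipped instead of yielding infinite IC.
        return None
    return -math.log2(count / corpus_size)

# Hypothetical counts returned for the subsumers shared by two profiles.
# The post-composition stands in for a pre-generated term that the API
# erroneously reports with a count of zero.
shared_subsumer_counts = {
    "limb (broad term)": 4000,                    # low IC
    "pectoral fin (narrower term)": 250,          # higher IC
    "limb and part_of some ... (post-comp)": 0,   # bogus zero count
}

# Resnik-style score: maximum IC among the shared subsumers.
ics = [information_content(c) for c in shared_subsumer_counts.values()]
max_ic = max(ic for ic in ics if ic is not None)
print(round(max_ic, 2))  # ~5.32 bits, set by the correctly counted narrower term
```

As long as at least one correctly counted, higher-IC subsumer is shared, the zero-count post-compositions drop out of the maximum and do not change the score.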