Closed saracarl closed 1 month ago
Q: Where should the @context live in the manifest, now that we've embedded the annotation pages within the manifest?
A: At the top of the manifest, along with the IIIF presentation API context. NB: the IIIF presentation API @context statement should be last, since it overrides other values.
Q: Should these plaintext representations of a page of text be in a rendering element within the canvas, or in an annotation targeting the full canvas?
A: Philosophically speaking, rendering is probably better. However, from a UI perspective, viewers are likely to present it to the user as a downloadable link (as one would with a PDF file). That behavior is probably not desired for some OCR -- for example, when a user cannot read Fraktur typefaces and wants to read the text of the page alongside the facsimile.
Current plan is to implement it in one direction and test in viewers.
Note from 9/7/2023: We should place the text granularity context after the IIIF presentation context in the v2 manifest, but before it in the v3 manifest.
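To make the 9/7 note concrete, here is a sketch of the two `@context` orderings it describes. The extension context URI is the one published with the IIIF Text Granularity extension; treat the ordering as the note's recommendation, not a tested result. For v2:

```json
{
  "@context": [
    "http://iiif.io/api/presentation/2/context.json",
    "http://iiif.io/api/extension/text-granularity/context.json"
  ]
}
```

and for v3:

```json
{
  "@context": [
    "http://iiif.io/api/extension/text-granularity/context.json",
    "http://iiif.io/api/presentation/3/context.json"
  ]
}
```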
This is Johannes' OCR viewer (to test with): https://mirador-textoverlay.netlify.app/
When we drop this manifest: https://gist.githubusercontent.com/benwbrum/e7e2fb9962a6aaba2cc0d6ae8f1b6d98/raw/df2598dd8a67ef941c2e03fa07dbe9485f736a9c/ia_ocr_annotation_mockup_v2.json into Mirador, the annotations don't show up.
Here's how the OCR text annotation is modeled in the manifest:
"otherContent": [
{
"@id": "https://iiif.archivelab.org/iiif/rbmsbk_ap2-v4_2001_V55N4$9/ocr",
"@type": "sc:AnnotationList",
"label": "OCR Text",
"resources": [
{
"@type": "oa:Annotation",
"motivation": "sc:painting",
"textGranularity": "page",
"on": "https://iiif.archivelab.org/iiif/books/rbmsbk_ap2-v4_2001_V55N4$9/canvas",
"resource": {
"@id": "https://api.archivelab.org/books/rbmsbk_ap2-v4_2001_V55N4/pages/9/plaintext",
"@type": "dctypes:Text",
"format": "text/plain"
}
}
]
}
]
Any idea why not?
The gist is a v2 manifest; therefore the annotations need to be seeAlso or rendering (rendering is more correct, but seeAlso is likely better supported).
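For the v2 rendering route, the canvas might carry the plaintext link like this (a sketch reusing the mock-up's URLs; not tested in viewers):

```json
{
  "@id": "https://iiif.archivelab.org/iiif/books/rbmsbk_ap2-v4_2001_V55N4$9/canvas",
  "@type": "sc:Canvas",
  "rendering": {
    "@id": "https://api.archivelab.org/books/rbmsbk_ap2-v4_2001_V55N4/pages/9/plaintext",
    "format": "text/plain",
    "label": "OCR Text"
  }
}
```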
The annotation seems right for v3; at least it matches the recipe. Where to test?? Maybe Johannes' mirador that takes hOCR or Alto would show it? Or perhaps it won't because it's just text?
As mentioned, I think this is syntactically correct and matches the recipe:
https://iiif.io/api/cookbook/recipe/0068-newspaper/
but in v2 it's probably seeAlso or rendering.
Ben to look at creating a mock up for v3 manifest.
The problem @benwbrum ran into is a limitation of the viewers: the OCR text can be in AnnotationPages which are external to the manifest, linked in the id property of the Annotation within the AnnotationPage. The motivation should be supplementing, following the second use case in https://iiif.io/api/cookbook/recipe/0231-transcript-meta-recipe/
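Following that recipe, a v3 supplementing annotation for the same page might look like this (a sketch reusing the mock-up's URLs; the annotation id ending in /ocr/anno/1 is invented for illustration):

```json
{
  "id": "https://iiif.archivelab.org/iiif/rbmsbk_ap2-v4_2001_V55N4$9/ocr/anno/1",
  "type": "Annotation",
  "motivation": "supplementing",
  "textGranularity": "page",
  "body": {
    "id": "https://api.archivelab.org/books/rbmsbk_ap2-v4_2001_V55N4/pages/9/plaintext",
    "type": "Text",
    "format": "text/plain"
  },
  "target": "https://iiif.archivelab.org/iiif/books/rbmsbk_ap2-v4_2001_V55N4$9/canvas"
}
```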
To get over viewer problems making two hops (Manifest -> AnnotationPage -> OCR URI), we will try these strategies (Glen is adding them below).
Two options:
We may be able to use the service mentioned in: https://github.com/internetarchive/iiif/issues/21
Can also just copy and paste the code from: which uses the archive infrastructure.
Example with djvu: https://archive.org/details/journalofexpedit00ford
Fulltext search is not so great, as it requires a search term.
Action: find out what parameters are available for the BookReaderGetTextWrapper.php service.
Since all of our handy helper functions rely on the archivelabs services, it looks like the best option is to produce annotations from the DjVu XML file itself. This can be done (probably most easily) at word-level granularity.
Next step is to pseudocode the conversion from DjVu XML file representing multiple canvases into a set of annotations per canvas.
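A rough Python sketch of that conversion, under assumptions: the element and attribute names follow IA DjVu XML (`OBJECT`, `usemap`, `WORD`, `coords`), the leaf-to-canvas offset is inferred from the journalofexpedit00ford example in this thread (page file `_0005` maps to canvas `$4`), and the canvas URI template is hypothetical.

```python
# Sketch: DjVu XML (multiple pages) -> per-canvas word-level annotations.
# Element names and the leaf-numbering offset are assumptions, not tested
# against real IA output.
import xml.etree.ElementTree as ET

DJVU_SAMPLE = """<DjVuXML>
  <BODY>
    <OBJECT usemap="journalofexpedit00ford_0005.djvu">
      <HIDDENTEXT>
        <PAGECOLUMN><REGION><PARAGRAPH>
          <LINE>
            <WORD coords="444,1353,635,1294">[David </WORD>
            <WORD coords="635,1336,782,1294">Ford </WORD>
          </LINE>
        </PARAGRAPH></REGION></PAGECOLUMN>
      </HIDDENTEXT>
    </OBJECT>
  </BODY>
</DjVuXML>"""

def annotations_per_canvas(djvu_xml, canvas_uri_template):
    """Yield (canvas_uri, [annotation, ...]) for each OBJECT/page,
    with one word-level annotation per WORD element."""
    root = ET.fromstring(djvu_xml)
    for obj in root.iter("OBJECT"):
        # "journalofexpedit00ford_0005.djvu" -> identifier + 0-based leaf 4
        page_name = obj.get("usemap").rsplit(".", 1)[0]
        identifier, leaf = page_name.rsplit("_", 1)
        canvas = canvas_uri_template.format(id=identifier, leaf=int(leaf) - 1)
        annos = []
        for word in obj.iter("WORD"):
            # DjVu coords are lower-left x/y then upper-right x/y
            lx, by, rx, ty = map(int, word.get("coords").split(","))
            xywh = f"{lx},{ty},{rx - lx},{by - ty}"
            annos.append({
                "type": "Annotation",
                "motivation": "supplementing",
                "textGranularity": "word",
                "body": {"type": "TextualBody",
                         "value": (word.text or "").strip(),
                         "format": "text/plain"},
                "target": f"{canvas}#xywh={xywh}",
            })
        yield canvas, annos
```

For example, `dict(annotations_per_canvas(DJVU_SAMPLE, "https://iiif.archive.org/iiif/{id}${leaf}/canvas"))` yields one entry keyed by the `$4` canvas URI, with two word annotations.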
FromThePage code that converts IA DjVu into canvas-specific text:
To produce the leaf-level annotations for canvas https://iiif.archive.org/iiif/journalofexpedit00ford$4/canvas of https://iiif.archive.org/iiif/3/journalofexpedit00ford/manifest.json (with the page number/canvas label of 5):

- Find the map name value `journalofexpedit00ford_0005.djvu`, which corresponds to the string we use for the `journalofexpedit00ford_0005` painting annotation id and image service id.
- Find the `OBJECT` element with the `usemap` attribute value matching the map name value.
- For each `WORD` child element of the `OBJECT`, convert `coords` into a fragment, transforming upper-left/lower-right into xywh. It is not clear to me that the `coords` values are correctly generated, since the values do not seem to match the expected min(x),min(y),max(x),max(y).

To produce page-level annotations without coordinate values:

- Find the `OBJECT` element corresponding to the canvas as above.
- Walk its `PARAGRAPH`, `LINE`, and `WORD` elements: join word text (no whitespace padding should be needed, as spaces are in the DjVu elements), join line text using a newline, and join paragraph text using two newlines.

To produce paragraph-level or line-level annotations, follow the page-level annotation strategy for the appropriate `PARAGRAPH` or `LINE` element, but find the minimum/maximum coordinates from the `WORD` elements to generate a line/paragraph region fragment.
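The page-level joining strategy can be sketched in Python like this (a sketch; element names assumed per IA DjVu XML, and each line is right-stripped since the trailing space of its last word is not needed before a newline):

```python
# Sketch: assemble page-level plain text from one DjVu OBJECT element by
# joining WORDs with nothing (spaces live in the WORD text), LINEs with a
# newline, and PARAGRAPHs with two newlines.
import xml.etree.ElementTree as ET

def page_text(obj_element):
    """Build the plain text for one DjVu OBJECT (one page/canvas)."""
    paragraphs = []
    for para in obj_element.iter("PARAGRAPH"):
        lines = []
        for line in para.iter("LINE"):
            # No padding needed; drop the trailing space of the final word.
            lines.append("".join(w.text or "" for w in line.iter("WORD")).rstrip())
        paragraphs.append("\n".join(lines))
    return "\n\n".join(paragraphs)
```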
It looks like DjVu coordinates are lower-left x,y; upper-right x,y!
```xml
<LINE>
  <WORD coords="444,1353,635,1294" x-confidence="10">[David </WORD>
  <WORD coords="635,1336,782,1294" x-confidence="7">Ford </WORD>
  <WORD coords="782,1335,894,1305" x-confidence="2">was </WORD>
  <WORD coords="894,1335,941,1305" x-confidence="10">a </WORD>
  <WORD coords="941,1335,1112,1292" x-confidence="31">native </WORD>
```
Converting these into IIIF-style upper-left x,y; w,h will take some calculations:
<WORD coords="444,1353,635,1294" x-confidence="10">[David </WORD>
<WORD coords="lx,by,rx,ty" x-confidence="10">[David </WORD>
x = lx y = ty w = rx - lx h = by - ty
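That arithmetic as a tiny helper (a sketch; assumes coords are always four comma-separated integers in lower-left/upper-right order):

```python
# Convert a DjVu coords string ("lx,by,rx,ty": lower-left x/y then
# upper-right x/y) into an IIIF xywh region string.
def djvu_coords_to_xywh(coords: str) -> str:
    lx, by, rx, ty = map(int, coords.split(","))
    return f"{lx},{ty},{rx - lx},{by - ty}"  # x=lx, y=ty, w=rx-lx, h=by-ty
```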
<moved from https://github.com/ArchiveLabs/iiif.archivelab.org/issues/80 I'd recommend reading that one for the whole discussion, but I pulled most of the comments in here.>
OCR text created by the derivation process can be exposed as annotations for books and image-based media, enabling presentation and consumption of the text by IIIF clients.
v2 mock-up (see the second page) Manifest Annotation List
Notes from 2023-08-17: @mekarpeles proposal: may want to port the OCR archive lab code over into this manifest service / app. The code is contained here: https://github.com/ArchiveLabs/api.archivelab.org/blob/master/server/api/archive.py#L240-L268
In the annotation list mockup, I think the type should be dctypes:Text rather than cnt:ContentAsText
A context is needed for the text granularity extension's use of "page".
Decision to embed annotation lists to reduce potential number of requests on clients
Mek adds these resources to the AL APIs which should probably be incorporated into production: https://github.com/archiveLabs/api.archivelab.org https://github.com/ArchiveLabs/api.archivelab.org/blob/master/server/api/archive.py#L240-L268