In our meeting today @WhatTheDickens mentioned he'd like us to talk to the makers of TextLab, which appeals to us because it applies the principles of "fluid-text editing" developed by John Bryant, director of the Melville Electronic Library. Here's an explanation of the fluid-text workflow, from transcription through human-directed/machine-assisted collation of revision stages:
https://mel.hofstra.edu/textlab.html
The principles guiding the workflow seem smart to me, but I'm worried about the constraints of TextLab and Juxta in handling manuscript materials. The documentation of TextLab suggests that it's an already-existing customization of the TEI, designed for page-by-page rather than semantic organization (e.g. chapter-by-chapter, or unit-by-unit). See
https://mel.hofstra.edu/pdf/textlab_user_manual.pdf
Questions to ask the TextLab people:
Can we develop our own TEI ODD customization for the Walden MS project, so that transcription and editing can proceed on our own terms?
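For reference on what we'd be asking for: a project-specific ODD customization can be quite small. The sketch below is a hypothetical starting point, not anything TextLab documents — the identifier and the module selection are my assumptions about what a Walden transcription schema would need:

```xml
<!-- Hypothetical ODD sketch for a Walden MS customization; module choices are assumptions -->
<schemaSpec ident="walden_ms" start="TEI">
  <!-- Required infrastructure modules -->
  <moduleRef key="tei"/>
  <moduleRef key="header"/>
  <moduleRef key="core"/>
  <moduleRef key="textstructure"/>
  <!-- Transcription of primary sources: add, del, subst, etc. -->
  <moduleRef key="transcr"/>
  <!-- Critical apparatus: app, lem, rdg -->
  <moduleRef key="textcrit"/>
</schemaSpec>
```

From a schemaSpec like this, Roma or the TEI stylesheets can generate a RELAX NG schema, which is what makes "our own terms" enforceable in any editor.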
In TEI encoding, revisions are handled with a critical apparatus (the app element and its readings). TextLab's integration with Juxta may be generating this automatically, but we need to see what that code looks like to make sure we can read and process it cleanly later. Can we take a look at the code generated at each stage of the workflow? (I'm wondering about the representation of revision stages via Juxta: what does that code look like?) What happens with the integration of the following:
diplomatic MS encoding in the TextLab tool
Juxta comparison with the print editions
(Currently this work doesn't appear to be complete on the Melville Electronic Library, but we should find out more from the developers and see where they are.)
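If we do get to inspect the generated code, the standard TEI form to look for is the critical apparatus. A minimal sketch of what stage-by-stage readings could look like — the witness IDs, the sample text, and the parallel-segmentation style are my assumptions, not TextLab's documented output:

```xml
<!-- Hypothetical parallel-segmentation apparatus; witness IDs and text are invented -->
<app>
  <lem wit="#print1854">the shore of Walden Pond</lem>
  <rdg wit="#msStageA">the shore of the pond</rdg>
  <rdg wit="#msStageB">the shore of Walden</rdg>
</app>
```

Whether Juxta emits anything like this, or a proprietary intermediate format, is exactly what we need the developers to show us.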
The fluid-text workflow doesn't engage the editor with the code when describing moments of comparison. Instead, the editor is invited to add comments in a box to highlighted passages in Juxta. So the editor appears to implicitly trust the Juxta highlighting of compared passages in this articulation of a "fluid text" workflow. Is Juxta reliable enough for such trust? Does it take a developer or programmer to intervene in the software to correct computational misalignments?
Review the TextLab documentation on "text encoding strategies" or the Art of Fluid Text Editing, and look at page 9 of the PDF on transpositions, which describes a specific editing problem that we're likely to encounter in the Walden MS. This is a moment when the editors discuss (frankly and intelligently) how they've had to deal with a problem of diplomatic editing to handle movement of passages between pages. Does this encoding of transpositions pose a problem for Juxta's visualization of a revised passage?
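For comparison with TextLab's approach, the TEI's own genetic-editing mechanism for transpositions marks each passage where it physically appears and then declares the intended reading order separately. A minimal sketch, with invented IDs and placeholder content:

```xml
<!-- Two passages marked where they appear on their respective pages (IDs invented) -->
<seg xml:id="pA_passage1"><!-- passage as written on page A --></seg>
<seg xml:id="pB_passage2"><!-- passage as written on page B --></seg>

<!-- A separate declaration of the order in which they are meant to be read -->
<listTranspose>
  <transpose>
    <ptr target="#pB_passage2"/>
    <ptr target="#pA_passage1"/>
  </transpose>
</listTranspose>
```

The open question for Juxta remains: a collation tool that walks document order would read these passages in page order, not in the declared transposed order, unless something resolves the listTranspose first.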
Questions to ask ourselves:
Are we willing to work with TextLab's document model, if it doesn't let us refine it according to our own project customization? Is it sufficient for our purposes? What are its advantages and what are its limitations?
What are the main aspects of "fluid text editing" that we want to apply to our project? What aspects of Bryant's theory matter to our project on Walden's manuscripts?
Is text encoding "difficult"? Why? What kinds of "difficulty" do we expect ourselves (as project team members) to tolerate? What kinds of "difficulty" do we expect our students to experience in approaching:
transcription
description
editing
as a coordinated process with a project team?
Alternative possibilities
oXygen's Editor view is great for syntax checking (like proper closing of tags) and offers guidance directly from a schema. oXygen's Author view permits editing texts that follow a customized schema without having to enter angle-bracketed tags, and might reassure those who fear that tagging is too difficult for students.
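For context on how that schema guidance works: associating a RELAX NG schema via an xml-model processing instruction at the top of each transcription file is enough for oXygen to validate and offer tag completion. The schema filename here is a placeholder:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Schema association; "walden.rng" is a placeholder filename -->
<?xml-model href="walden.rng" type="application/xml"
            schematypens="http://relaxng.org/ns/structure/1.0"?>
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <!-- transcription content -->
</TEI>
```

This is also how a custom ODD-derived schema would travel with the files, so students get the same guidance regardless of which machine they work on.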
TextLab was created for the Melville Electronic Library some years ago. I suspect a good programmer working alongside us as we develop our own ODD schema could build us a transcription tool suited to the Walden project's own parameters, perhaps advancing the cause of fluid-text editing in our own way.