
Discussion of "Traversing the Tree": Overlapping Hierarchies #192

Closed · ebeshero closed this issue 7 years ago

ebeshero commented 8 years ago

Here is our first Discussion assignment for the semester:

The reading:

Read Gabrielle Kirilloff's “<Traversing_the_Tree/>” and the short article “Frankenstein novel analyzed,” and scroll through Wendell Piez's conference talk and images for the Balisage Markup Conference 2014. If you like, you can take a close look at his LMNL code of Frankenstein on GitHub. Special note of interest: Gabi Kirilloff was a student in a digital humanities course at Pitt like the one you are taking, and she originally wrote “<Traversing_the_Tree/>” for a seminar paper assignment in another class.

The discussion prompts:

The discussion is worth points to you as a homework exercise. For full credit, your posts should make specific reference to passages in Kirilloff's essay, and reflect on those passages. Note: You (as an individual) do not have to respond to every one of the discussion prompts, but the class as a whole should cover them all. A good strategy would be to try to respond in some way to at least two of the prompts in the list above. Raising questions is encouraged, and so is responding to each other, but responding should do more than simply say, "yes, I agree." A good response will add something new to the conversation, or help promote more discussion. You should make at least two substantial posts to fully contribute to the discussion.

As you're drafting your comments, try applying Markdown formatting if you'd like to use bold or italics, make a list, create a link, add an image, etc. Follow the link to "Markdown supported" at the top right of the comment "Write" screen for an orientation to GitHub's Markdown.

etj27 commented 8 years ago

Kirilloff's perspective? Somewhere between objective and biased. The shortened version, not the elaborately drawn-out one, would be that OHCO is useful but not all-encompassing, and that within the OHCO system it's harder to argue against the concept that text is suited to hierarchy. This can especially be seen in her views in the "Alternatives and Conclusions" section as well as "Why OHCO." She seemed particularly negative about how people were treating the history of XML and its predecessor, and it seemed to me that she was saying that everyone should adopt a philosopher's mindset when thinking about XML and its uses.

All in all, the way she addressed its potential issues sounded something like this: "Yeah, there are issues, some of which critics view wrongly, but at the moment there is nothing better; in the future there could be, if more people thought about it."

Both treatments of overlapping hierarchies accomplish roughly the same thing, which is providing different ways to look at text. Both come to the same conclusion as well: with encoding, different understandings of a text emerge when people break it down.

If I had to say which of the two proved more interesting to me, I would say Piez's idea of overlapping hierarchies within the text of Frankenstein, though that could be mainly because I have also read the novel, so I may simply be finding it more relatable. Concept-wise, the Piez approach seemed more appealing to me. The ability to change and clarify certain aspects of the text based upon who is actually doing the digital coding is an interesting aspect, one that must withstand the judgment of those who would criticize it by old standards of the argument "What is text, really?" Having a free and open mind is essential, especially when analyzing text of any kind, because the idea of perspective is so critically important to the entire field of literature. In my opinion, anyone who claims to be a literary caretaker of sorts should know that the best part about looking at documents is that there is more than one answer, and that sometimes the only way to uncover the truth behind a text is to look at it in every way, shape, and form possible, which is why looking at texts digitally by encoding them makes overlapping hierarchies such a reliable structure.

Something I think is overlooked, but which I noticed right away, is that neither Piez's nor Kirilloff's argument suggests that their way of doing things digitally is the only way to look at text, yet those who aren't educated in coding would more or less see this argument as something meant to replace old-fashioned texts as they know them. It feels only natural that this entire topic is misunderstood: while it is true that the world of digital encoding is expanding in the modern day, that does not make it widely known. I feel this is essentially a topic that only those interested in the field are really concerned with.

Kirilloff also mentions that keeping an eye out for better methods is always preferable, which is critical for people with the aforementioned mindset.

Edit: probably final version

ebeshero commented 8 years ago

Great start, Evan--and yes, feel free to edit and keep going later. Thanks for highlighting that you can edit your posts here! 👍

setriplette commented 8 years ago

I think that Kirilloff and Piez have in common a serious meditation on form and the way it interacts with meaning in any kind of text, traditional or digital. For me, the key thing to keep in mind for doing Digital Humanities work is Kirilloff’s point that

the advent of digital texts and technologies in the humanities must be met with a critical interrogation of the procedures being used and the ideologies behind them.

I like that she recognizes that editors and translators (good ones) have always done this. Kirilloff states that

[t]he encoding process demands that the scholar makes a decision in terms of classifying elements of the text.

This means that when we produce digital texts, we are limiting the things they can mean to readers. That’s a huge responsibility. What I want to add to Kirilloff’s point is that the forms in which texts have circulated—oral storytelling, hieroglyphs, scrolls of papyrus, codices on animal skin, letters on folded paper, print books—have always determined what texts can mean. What’s great about Digital Humanities coding is that we get to be part of the process of publishing text in real time without having to operate something as unwieldy as a Gutenberg press (though if someone buys me a Gutenberg replica, I will learn how to operate it, gleefully).

Kirilloff brings up the idea that we are in a new era, and I agree—the technology of literature is changing fast. But this is not the first time it has done so. Most of us grew up with a single idea of a book: the codex, the collection of pages bound on the left-hand side, held between two sturdy covers. We flip through, letting our eyes catch one detail or another. My fingers will start to remember about where in the thickness of the book a certain passage might be. The whole thing is organized by a table of contents at the front and an index at the back (unless you’re in Spain, and then the table of contents is in the back). Chapters have numbers and sometimes titles, and we take for granted that Chapter 4 follows Chapter 3 in a predictable, beautiful way. The text printed on the back of page 5 will be page 6, and both of them will be marked with a number, probably in the upper corner.

When you have only one technology that delivers your text, it seems like something you can just take for granted. I will point out that none of the things I mentioned were a guarantee with early print books. The codex is only one of the formats in which manuscript texts were produced, and it won out over the scroll or the folded page. Well guess what? Scrolling is back in the digital era, and all our assumptions are being renegotiated. Literary form is cultural and also arbitrary, and we have to be careful about applying rules that “make sense”—they might make sense only to us, here and now, in the DH class at Pitt-Greensburg. Anything that “just makes sense” is probably an illusion.

Both Piez and Kirilloff engage with one of the big illusions of literary texts rendered in code: that they have hierarchies that do not or should not overlap. Piez points out quite correctly that some literary forms have quite a lot of overlap in their hierarchies, poetry in particular, but also formally complex novels like Frankenstein. I was lucky enough to be at one of Piez’s talks last year, and his structural analysis of the novel made me see it in an entirely new way. For me, that’s what’s best about reconsidering hierarchies. If we just take them for granted, new readings don’t become possible.

Thus far it probably sounds like I’m against the notion of non-overlapping hierarchies. In practice, while knowing any set of rules to be an illusion, I’m all for imposing rules on coding literature. This is why I joined the TEI and why I try to use it. I belong in Kirilloff’s group of people who “support the [OHCO] model for practical reasons.” I do feel at times that I have to push against the boundaries of TEI in order to be accurate in my representation of the text. However, as a new coder, I have to start somewhere, and for me that somewhere is the ultra-rule-bound, ultra-hierarchical TEI.

Even with a set of rules like TEI, there is still room for human interpretation. Paraphrasing Thomas Rommel, Kirilloff notes that

the strength of using computing tools in literary scholarship is that they provide speed, accuracy, unlimited memory, and instantaneous access to virtually all textual features, but are still completely reliant on the scholar.

Human brains are chaotic, associative, and prone to being sidetracked. Computers are quite different, and the collision of the two ways of thinking yields results we might not have come up with otherwise.

PPH3 commented 8 years ago

I find the historical references here fascinating. I am decidedly older than my fellow classmates, so I have seen the evolution of technology over the past couple of decades. We didn't have proper computers in our school "labs" until I was a senior in high school, and I didn't start using computers seriously until my early 20s. Students now begin working with these machines from a very early age. My current phone has technological capabilities that far surpass a decent desktop from just a few years ago.

The coding evolution is a curious one, as we have moved from SGML to XML. In the article's conclusion, a new process is proposed using CONCUR. People working in the field of computer science are content with solving current problems while acknowledging future ones, in the hope of solving them within a few years. The article addresses potential improvements via CONCUR, which allows a document to be marked up with multiple tag sets simultaneously. However, as improvements are made and better programs are written, the central ontological question remains: "what is text, really?"

Piez's work with overlapping hierarchies is staggering. I attended his presentation earlier this year, and the work he did on Frankenstein is essential to the burgeoning field of digital humanities. Upon analyzing these hierarchies, a literary mind will stagger at the possibilities they offer academics and literary critics. The sprawl of letters among chapters inspires fresh appreciation of the work. It raises the question of what other epistolary novels, such as Dracula or Melmoth the Wanderer, would look like in digital format.
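
To make the overlap concrete, here is a quick sketch of my own (invented tags and invented sample text, not from the article or from Piez's edition) showing why an epistolary novel strains XML: a letter that opens in one chapter and closes in the next cannot be expressed as properly nested elements.

```xml
<!-- Ill-formed XML (no parser will accept it): the invented <letter>
     element opens inside one <chapter> and closes inside the next,
     so the two hierarchies cannot nest. -->
<novel>
  <chapter n="3">
    <p>... <letter who="Walton">My dear sister, ...</p>
  </chapter>
  <chapter n="4">
    <p>... I remain yours, R.W.</letter> ...</p>
  </chapter>
</novel>
```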

Edit: I'll come back to this if I have time and tighten it up.

jonhoranic commented 8 years ago

I had a little bit of an epiphany while interacting with this page, so I am going to try to bring something new to the discussion table (before someone else does, at least) in the second half of my observations. First of all, to be honest, I had only skimmed this article at first, but I went back and listened to it after putting it through a text-to-speech application I use on the web, so I could multitask while doing some file management.

  1. I would like to note that the word "interpretations" appears within almost every section of this article save for one near the end. The main point I take from this article is that the machine cannot distinguish any differences between text inputs unless specifically told to by the person "interpreting" (coding) the text. This means the coders' ideas of the "hows and whats" of individual markup should be viewed as a form of text manipulation, not an act of changing the text. When we as humans view the output, the text printed on screen, we can only see what another human told the machine to display via the markup (any highlights, footnotes, annotations, fonts, sizes, etc.). The machine is just a tool for us to use in order to convey and establish new interpretations of textual information on screen. Since humans need the invested emotion and emphasis of words via sound, structure, placement, punctuation, visualization, style, and other subtle pieces to draw out meaning, the coding process becomes imposing. Do we then lose context if we as coders view markup as just "language interpreted by the machine"? Are we "removing the human" from the experience? I have a feeling that if we as early coders and scholars get stuck between these ideas when looking at a wall of code, it will be a detriment to our understanding not only of this course but of the possibilities of what digital transcription could do overall. We do not have to think like the machine because the machine does not have to think like we do; it's that simple. This is where the "what is text, really?" question is addressed in the article. I would like to think that instead of arguing the separations between the human aspect and the computer aspect of markup, we should experiment and try to close, or altogether disprove, any separations we believe to exist by showing what code can do as an interpreter's tool.
  2. This is the thing that intrigued me the most: at the top of the article it states, "Please right click and select 'view source' to see what this text actually looks like." The article discusses the relationships between text and markup throughout, but as a reader, following that step "changes" the text. In truth there is no "real" change to the text, nothing added, nothing removed. All that is happening is that the computer is showing the reader the "hidden" markup. This "change" is a change of understanding: viewing the markup actually gives more context to the article than the conventional text alone. It becomes the meta-level, the evidence supporting the arguments within. There are added explanations inside the code that are the markup form of annotation, and the visualization of the text's hierarchy occurs right in front of the reader. It shows a real dedication to the medium of XML markup: the medium is used to illustrate the points within the article, and the article lends credence to the medium by being properly nested inside code. I find it amazing that the article can support its own arguments constructively by telling the reader to view, or "interpret," the text the way a coder would. A toy illustration of what I mean appears below.
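
What the reader sees rendered is just a plain sentence; what "view source" reveals is the interpretive layer wrapped around it. (These tags are my own invention for illustration, not Kirilloff's actual source.)

```xml
<!-- Rendered on screen:  What is text, really? -->
<!-- Revealed by "view source" (invented tags for illustration): -->
<question>
  What is text, <emphasis>really</emphasis>?
  <note resp="author">This question frames the whole debate.</note>
</question>
```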

etj27 commented 8 years ago

@PPH3 Having a viewpoint from someone who is "decidedly older than my fellow classmates" is, I feel, incredibly important to both perspectives mentioned in the prompts above. One of the main points of the first prompt was about neglecting the history of these technologies, and for someone who has been around for the evolution of technology to acknowledge the importance of digital encoding is a good indicator that we are on the right path, in my opinion.

Also, it is quite fascinating that you actually attended Piez's presentation. I share your wonder at what other classical pieces of literature would look like under the digital format. Would our interpretation of texts as a whole change if some of the world's greatest literary pieces underwent this same process? The Divine Comedy, The Odyssey, Don Quixote, War and Peace, The Brothers Karamazov: would all of these texts change in the eyes of the public? I feel that is a very good question, and it is only strengthened by your perspective on these texts.

jonhoranic commented 8 years ago

@etj27 Hate to be a bother, but I assume your comment was directed at the post above mine, belonging to Pat (PPH3). I believe you can remedy that with the "edit comment" pencil; hopefully that clears up any future commenter/reader confusion on this board.

etj27 commented 8 years ago

@jonhoranic indeed

ebeshero commented 8 years ago

[Terrific discussion so far--let's keep it going! I'm reposting this on behalf of @msb81, who posted in another thread. Maddie, you may want to add more once you take a look at this thread!] Before this class, I really had no idea what digital humanities even was or what our class would entail. In her essay, Kirilloff says that digital humanities is increasingly a subject of academic study, which I never knew. She also describes how the relationships among reader, scholar, and the text itself have changed. I never thought of these three things as having a relationship, but after she explained it I can now clearly see one.

msb81 commented 8 years ago

@setriplette, I really agree with your statement. Since we are both into languages, it's nice to know someone else thinks like me! I like how Kirilloff mentions the importance of text: with DH coding, you can give the language, as well as the text, a whole new meaning, which she illustrates well. It's awesome to see how that happens. While I don't know much about DH or coding, I think all of these people executed their ideas very professionally and well. They had a lot to say, and I have a lot to learn!

ahunker commented 8 years ago

As someone who is new to digital humanities, I found Kirilloff’s essay to be informative and interesting. At first, the language and information provided were rather confusing, but a couple of read-throughs later I began to understand a little more about what it is we’re actually learning, where it has come from, and how it is being used. I like how Kirilloff took the time to outline how XML markup works:

"Encoding a text with XML involves wrapping the text in tags, markup that describes components of the text. Tags divide the text into elements. Elements include tags and everything in between them. For example, in the following sample the text, “This is a sample,” is wrapped between a sentence start tag and a sentence end tag. End tags always contain a slash. The entire example constitutes a sentence element.”

While there is a lot of information provided, the author takes the time to go over the basics, which as a beginner I fully appreciated. It really reiterated what we’ve been learning. I find she takes a simple, accessible approach to the history of coding. SGML is new information for me, and it’s presented fairly well. I found it rather remarkable that we've advanced so much in the past 20-30 years. If we've come so far in just a few decades, the potential for the future is quite vast. We are limited, however; as Kirilloff mentions, "computers do not have the power to analyze or interpret data." With this limitation, human coders are still required to categorize the information in the markup. I see this as an issue because it could lead to bias, since how people interpret and name different parts of a text varies depending on personal experience. While my addition to the discussion isn't very technical (sorry, I have no experience or knowledge in this area except three classes and reading the provided links), I've attempted to organize some thoughts and opinions on these things as best I can.

JaredKramer40 commented 8 years ago

These articles have given me good insight into Digital Humanities as a whole. I wasn't really sure what Digital Humanities was coming into this class, but after reading these I feel I may have a decent understanding of it, for what little knowledge I have. Kirilloff's perspective helped connect some dots in my head; for example, Digital Humanities is all about relationships. Whether it be humans to computers or scholars to readers, there are numerous relationships to be found. The two sub-groups of OHCO supporters, those who back the model for practical reasons and those who back it for ontological reasons, are both defined by relationships. I also like the "building" language Kirilloff uses: "...the scholar and markup language as that of builder and tool." This analogy supports my initial thoughts before reading her article.

I didn't realize how overlapping hierarchies worked, let alone what they were, until reading the article. Overlapping hierarchies do present issues in texts, causing confusion about where the relationships lie and weakening the structure. Although there are possible issues, there are also solutions, ways of "tricking the system," as Kirilloff explained; a rough sketch of one such trick appears below.
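
One common workaround (my own sketch of the general idea, not Kirilloff's exact example) keeps one hierarchy as real elements and flattens the other into empty "milestone" elements, which can sit anywhere because they contain nothing:

```xml
<!-- Sketch: the page hierarchy is reduced to empty <pb/> (page-break)
     milestones, so it no longer needs to nest inside the paragraphs. -->
<chapter n="1">
  <p>This paragraph starts on one page <pb n="6"/> and ends on the next,
     yet the XML stays well-formed because the milestone is empty.</p>
</chapter>
```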

Hopefully my thoughts were accurate as I am just beginning to learn about coding and XML.

ebeshero commented 8 years ago

Thanks @JaredKramer40 and @ahunker! We'll look at some examples tomorrow of how XML coders dodge around overlapping-hierarchy issues, and keep talking about how big a problem it is (or isn't)! @wendellpiez came to visit our coding class last spring and showed us how he works with his LMNL markup: it's great for interpreting a text and letting hierarchies overlap, but to do web processing over it, he also wrote some amazing code to transform LMNL back into XML! Coding in LMNL is fun because you can see aspects of the markup you're getting to know, with element names and attributes, but the syntax uses different kinds of brackets to help handle the overlap. If you're curious, you can see some of it on his GitHub, including the poem we'll look at tomorrow in class, "Kubla Khan": https://github.com/wendellpiez/Luminescent/blob/master/lmnl/KublaKhan.lmnl
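
For a quick taste of those brackets before class, here is a loose sketch of the syntax (my simplified illustration, not copied from his Kubla Khan file): a range opens with [name} and closes with {name], and because the two halves of a tag use different bracket shapes, ranges are free to overlap.

```
[sentence}This sentence begins in [line}one verse line{line]
[line}and ends in the next,{sentence] where a new one starts.{line]
```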