Archive for the ‘HUCO 500 (2009)’ Category

Are Digital Humanists Relevant?

On October 7, Distinguished Visitor Dr. Howard White presented “Defining Information Science” as part of the SLIS colloquia.

He began his presentation by offering several traditional definitions of information science (Rubin, 2004; Hawkins, 2001; Borko, 1968), as well as Wikipedia’s definition, as an illustration of how difficult the field is to pin down, before offering his own much simpler definition:

[Information Science is] The study of literature-based answering.

Given that he was speaking to a room full of future librarians, White elaborated on what that meant in the context of the reference librarian: the reference librarian should be able to provide relevant answers to “relevance-seekers” (library users) by giving truthful, novel, on-topic, specific, understandable, and timely answers (in that order).  Librarians should be better equipped than Google to filter relevance for a given question; their “equipment” is “literatures”, that is, the library collection.  It’s possible to shorten White’s definition even further: information science is the study of relevant answers, or simply relevance, given that relevance implies (a) a system (“literatures”) and (b) requirements for answers (truthfulness, novelty, on-topic-ness, specificity, understandability, and timeliness).

What struck me as most interesting, however, were the parallels between White’s librarian/information scientist and the digital humanist.  A digital humanist is, after all, essentially interested in seeking and supplying relevant answers by searching ‘literatures’ with the use of computational methods (Hockey, 2004). Does that make the digital humanist an information scientist?  And does that make the information scientist a digital humanist?

Works cited

Borko, H. (1968). “Information science: what is it?” American Documentation, 19(1).

Hawkins, D.T. (2001). “Information science abstracts: tracking the literature of information science.  Part 1: definition and map.” Journal of the American Society for Information Science and Technology, 52.

Hockey, S. (2004).  “History of Humanities Computing.”  A Companion to Digital Humanities, ed. Susan Schreibman, Ray Siemens, John Unsworth.  Oxford: Blackwell, 2004.

Rubin, R. E. (2004).  Foundations of Library and Information Science. 2nd ed.  New York: Neal-Schuman Publishers Inc.

Designing Visual Differentiators

On Friday, November 20, as a guest of the Humanities Computing colloquia, Sandra Gabriele (professor of Design, York University) presented “Visual Differentiation in Look-alike Medication Names: Evaluating Design in Context”.

The problem that inspired Gabriele’s study is a troubling one: 7.5% of patients admitted for acute care experience one or more adverse events, and 24% of these events are drug-related.  This means that, all too often, the wrong medication is administered to patients.  Why?

Gabriele identified two sources of error in hospital drug selection:

  • orthographic similarities of drug names
  • phonetic similarities of drug names

Drug names come in two varieties: the generic name (or type) and the brand name (or unique name).  Gabriele showed us examples of how medication is stored in hospital pharmacies, presenting pictures of uniform bins of drugs organized alphabetically by name, usually regardless of their intended purpose.  One bin contained similarly named blood pressure medications, one for high blood pressure and one for low blood pressure, their names orthographically similar and their labels uniform as well; as a layperson, I certainly would have been unable to tell the difference at a glance.

Gabriele’s project was to find better ways of designing drug labels for hospital pharmacies.  What was most interesting to me was the framework she chose for approaching this problem, asking what is required for effective drug labelling:

1. Attention: that is, what makes the label distinctive.  Some of the designs she tested changed the colour and weight of the text or used white text on solid black.  User tests showed that drug names printed as white text on solid black made for the most attention-getting labels.

2. Perception: or, legibility issues (reducing the possibility of confusing orthographically similar drug names), establishing a visible hierarchy of the data included on the label, and visual cueing (“chunking”, typographic styles, spatial cues, and mark cues).  Gabriele proposed changing the font to one with a clearer distinction between upper- and lowercase letters and a cleaner font weight.  Interestingly enough, users in her test group responded negatively to this change; most drug labels use a Tallman font (which does pose legibility issues like those mentioned), and it seemed that the users (all hospital nurses) were conditioned to it, whereas a layperson would have had more difficulty discerning minor differences in names.  There seems to be some debate over the use of Tallman; a 2006 study in Glasgow indicated that Tallman was actually more effective in reducing name-related errors when selecting drugs (Filik et al.).
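As an illustrative aside of my own (not part of Gabriele’s study), the orthographic-differentiation idea behind Tallman lettering can be sketched in a few lines of code: given two confusable names, uppercase only the segment where they differ, leaving the shared prefix and suffix in lowercase. The function name and example pairs are my own choices.

```python
def tall_man(a, b):
    """Render two look-alike drug names in Tallman style by
    uppercasing the segment where they differ, so the distinguishing
    letters stand out against the shared prefix and suffix."""
    # Length of the shared (case-insensitive) prefix.
    p = 0
    while p < min(len(a), len(b)) and a[p].lower() == b[p].lower():
        p += 1
    # Length of the shared suffix, not overlapping the prefix.
    s = 0
    while s < min(len(a), len(b)) - p and a[len(a) - 1 - s].lower() == b[len(b) - 1 - s].lower():
        s += 1

    def render(name):
        # Lowercase prefix + uppercased differing segment + lowercase suffix.
        return (name[:p].lower()
                + name[p:len(name) - s].upper()
                + name[len(name) - s:].lower())

    return render(a), render(b)

print(tall_man("hydroxyzine", "hydralazine"))  # ('hydrOXYzine', 'hydrALAzine')
print(tall_man("dopamine", "dobutamine"))      # ('doPamine', 'doBUTamine')
```

The interesting design question Gabriele’s findings raise is precisely that such a mechanical rule optimizes for the layperson’s eye, while trained users may read the conventional labels faster.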

3. Understanding: making sure a user can identify and understand all the data available on the label at a glance.  Gabriele’s presentation did not delve too deeply into this part of her study, but this is probably the step I would have found most interesting. How do users make sense of the labels?  Does the reorganization and stratification of data (in the “perception” stage) make a positive difference in comprehension for the trained professional?  It seems that, while errors do sometimes occur, changing labels in ways that would prevent errors for a layperson might in fact cause more errors for someone trained on the current labels.

Works cited

Filik, R., Purdy, K., Gale, A., and Gerrett, D. (2006).  “Labeling of Medicines and Patient Safety: Evaluating Methods of Reducing Drug Name Confusion.” Human Factors: The Journal of the Human Factors and Ergonomics Society, 48. pp. 39-47.

Cyborgization = Evolution?

HuCo 500 – Weekly questions

Communications technologies and biotechnologies are the crucial tools recrafting our bodies. These tools embody and enforce new social relations for women world-wide. Technologies and scientific discourses can be partially understood as formalizations, i.e., as frozen moments, of the fluid social interactions constituting them, but they should also be viewed as instruments for enforcing meanings. The boundary is permeable between tool and myth, instrument and concept, historical systems of social relations and historical anatomies of possible bodies, including objects of knowledge. Indeed, myth and tool mutually constitute each other. (Haraway)

Throughout this course, I think my questions have demonstrated that I am particularly concerned with how technology fundamentally changes how we think.  Donna Haraway, a bit dramatically, states the obvious about the transformations that are occurring and have occurred in our society (or ‘politics’, in the sense that Haraway uses the word): that we are all socially constructed by the tools we rely on to shape our reality.  We are all cyborgs already, since in many cases the tools have already been embodied; we use them to define ourselves.  They shape our mythology.  Take, for instance, the act of knowledge acquisition: the internet as a technological development has changed how we process and evaluate information by making it almost universally accessible and mostly unfiltered, and by putting the means of production and mass dissemination in the hands of the public.  The speed of communication has also affected how we process information; it has created social expectations and new conventions for interaction.  An individual of average intelligence from fifty years ago would have to struggle to make sense of our 21st-century reality, and would likely experience crippling anxiety just trying to keep up with what the individual of average intelligence today does effortlessly.[1] Arguably this could be said of any given time period, but I think that at no point in human history have the changes been as drastic or dramatic as in the last two decades, owing almost entirely to how technology has transformed our lives.  My question is: where does such a paradigm lead?  What does it mean for us, as human beings, to become increasingly defined by our technologies (rather than defining them)?

This is the great secret of language: Because it comes from inside us, we believe it to be a direct, unedited, unbiased, apolitical expression of how the world is.  A machine, on the other hand, is outside of us, clearly created by us, modifiable by us… (Postman, 124-125)

Do machines that are communication devices (i.e. that facilitate the expression of language, interaction, the sharing of ideas verbally or visually/textually) take on the properties of language—that is, do we begin to internalize the machines, see them as extensions of our selves, as “direct, unedited, unbiased”—or, quite the opposite, do they afford us the opportunity of perceiving language as the technology that it is, with a set of assumptions about the world implicit in its construction?


Haraway, Donna (1991). “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century.” Simians, Cyborgs and Women: The Reinvention of Nature. New York: Routledge. pp.149-181.

Postman, Neil  (1993). “Invisible Technologies.” Technopoly: The Surrender of Culture to Technology. New York: Vintage Books. pp.123-143.

[1] A fascinating study would be to assess the level of anxiety and the struggle that Canadian immigrants from third-world/under-developed countries face when confronted with these technologies and their social conventions.  It could perhaps help answer the question of how fundamental the change effected by technology is in our perception of reality.

Affordances and the Reality of Perception

HuCo 500 – Weekly questions

An affordance, it is said, points two ways, to the environment and to the observer.  …It says only that the information to specify the utilities of the environment is accompanied by information to specify the observer himself, his body, legs, hands, and mouth.  This is wholly inconsistent with dualism of any form…  The awareness of the world and of one’s complementary relations to the world are not separable. (Gibson, 141)

Maybe I am misunderstanding Gibson’s point, but the claim that the theory of affordances dispels the notion of dualism of any kind seems quite a leap, making assumptions about what reality and perception are.  How do you separate subjective sense-making (i.e. perception of one’s environment/reality) from objective fact (what Gibson calls “invariants”)?  It is certainly true that the affordances of an object as evaluated by a biped, for instance, will be different from those evaluated by a quadruped, and that either evaluation says something about the evaluator (i.e. that it is two-legged or four-legged).  That doesn’t mean that the act of evaluating—the interpretive or perceptive act—is not subject to the context and experience of the ego.  Gibson makes allowances for “misinformation” and “misperception” (142); I’m not sure how he can reconcile this allowance with his claim that there can be no mind-body/abstract-concrete/mental-physical dualism.

Is there still such a large gap in the affordances of paper compared to computers, as Gaver describes (115-117)?  Do computers today offer all the same affordances of paper?  Is there anything that paper affords that computers/cell phones/electronic devices today do not?


Gaver, William W. “Situating Action II: Affordances for Interaction: The Social Is Material for Design.” Ecological Psychology, 8(2), 1996. pp. 111-129.

Gibson, J.J. “The Theory of Affordances.” The Ecological Approach to Visual Perception. Hillsdale, NJ: Lawrence Erlbaum Associates, 1986.

Pleasure-seeking scholarly primitives

HuCo500 – Weekly questions

According to Aristotle, scientific knowledge (episteme) must be expressed in statements that follow deductively from a finite list of self-evident statements (axioms) and only employ terms defined from a finite list of self-understood terms (primitives). [Stanford Encyclopedia of Philosophy] (Unsworth)

In his article, Unsworth uses the notion of “primitives” as a way of understanding how humanities researchers can put digital methods into practice.  More specifically, he looks at how Aristotle’s “episteme” could be applied as a method in interface design.  In reading the article, it seemed that the “scholarly primitives” (our finite list of self-understood terms) stood in for the basic needs of the “scholarly” user.  Could we alternately frame Unsworth’s “scholarly primitives” by defining the user’s basic needs as the starting point in designing interfaces (for humanities scholars)?


From a theoretical perspective, the exploration of online browsing environments can be situated within the design of new digital affordances…  As Frascara (xvii) points out, such affordances are particularly attractive when they exist in a context of an environment specifically intended to support and extend communication: “We need so much to see what surrounds us that the sheer fact of seeing a wide panorama gives us pleasure.” (Ruecker)

Is “pleasure” a goal of humanities research?  Is this perhaps where we can situate the previously discussed element of “play” and its role in digital humanities methods (e.g. Sinclair’s Hyperpo, Ramsay’s ‘Algorithmic Criticism’, Manovich’s ‘Cultural Analytics’)?



Ruecker, Stan. “Experimental Interfaces Involving Visual Grouping During Browsing.” Partnership: the Canadian Journal of Library and Information Practice and Research. 1(1). 2006.

Unsworth, John. “Scholarly Primitives: what methods do humanities researchers have in common, and how might our tools reflect this?” part of a symposium on “Humanities Computing: formal methods, experimental practice” sponsored by King’s College, London. 2000.

Speculative Computing, Digital Media, and Visual Quotation

HuCo 500 – Weekly questions

Kolker posits that digital media answer the problem film scholars face in referencing a work by means of quotation to prove or illustrate an argument.  Putting aside for the moment the obstacles film scholars face with regard to programming/computing requirements, available technologies and resources, and copyright and intellectual property issues, how does the implementation of digital media as a means of quotation change the way we conduct research and scholarship?  Kolker uses CD-ROM and the Web as examples of how digital media can be integrated into critical analysis; what are some other examples of how new media can be employed to benefit scholarly research and answer the problem of quotation?


From a distance, even a middle distance of practical engagement, much of what is currently done in digital humanities has the look of automation. (Drucker & Nowviskie)

Is this statement true?  I would argue that even a cursory examination of digital humanities shows that it is about more than mere “automation”, or computational methods in the service of traditional humanities research.  It seems clear to me now, as it did when I first became interested in “cybercultural” issues (and long before I entered this MA program), that digital humanities is as much about the technologies we use as it is about using technology (to “theoretically gloss” our discussions, as Drucker & Nowviskie phrase it).


Bonus question:

The requirement that a work of fiction or poetry be understood as an “ordered hierarchy of content objects”… raises issues, as Jerome McGann has pointed out. (Drucker & Nowviskie)

How else can we understand a work or text, if not as an “ordered hierarchy of content objects”?  What are the alternatives?  How else can we conceptualize such works, and how would we formalize these conceptualizations using computational methods?



Drucker, Johanna (and Bethany Nowviskie). “Speculative Computing: Aesthetic Provocations in Humanities Computing.” A Companion to Digital Humanities, ed. Susan Schreibman, Ray Siemens, John Unsworth. Oxford: Blackwell, 2004.

Kolker, Robert. “Digital Media and the Analysis of Film.” A Companion to Digital Humanities, ed. Susan Schreibman, Ray Siemens, John Unsworth. Oxford: Blackwell, 2004.


“Born Digital” Experiences: Identity crisis waiting to happen?

HuCo 500 – Weekly questions


Note: Rather than two separate questions, this week I’ve come up with a series of related questions addressing a single idea inspired by one of the readings. These questions take the form of a short, personal response.

As our lives and experiences become more digital, the records of our experiences become less tangible. (Viegas et al., 2004)

Is this statement true? It seems to me that, considering the digital/analog dichotomy, the ‘record of our experiences’ has in fact become more explicit with the advent of the Internet. Social media applications such as Facebook, MySpace and Twitter allow us to track our lives in the most minute detail; blogging, microblogging, and “lifestreams” all provide ways for us to record and trace our experiences over time. Social networks (Facebook, MySpace) allow us to track the relationships we maintain and the patterns they represent in our lives. If anything, the ‘record of our experiences’ has become more tangible, not less. The question is rather: how accurate a representation of our experiences is the record? Our digital record is naturally biased toward our “born digital” experiences (to borrow a term from last week’s readings); the Internet is a space in which we spend a significant portion of our lives and where we have experiences (rather than simply being a medium with which to record them). For example, the relationships we make within a massively multiplayer online game (MMO) may be more heavily recorded than relationships we have in our “analog” lives. Does that make them more or less real? More or less important?

The argument presented by Viegas et al. suggests that there are digital experiences that are obscured from the record; but how is this different from “analog” experiences (i.e. the experiences that occur outside the digital space)? There are many things we do that do not require, demand, or deserve to be recorded. I can’t remember, for instance, what I had for breakfast three weeks ago last Monday. In this sense, the record has not become more or less tangible; it is, perhaps, less relevant. Perhaps the problem is how we make sense of the massive amount of information we create in the digital space. We need to translate the digital record of our experiences into something we can interpret more easily, into something “analog”. Visualization is one of the tools that allows us to do this. In essence, visualization is the translation of digital media.

Viegas et al. indicate that we attach personal meaning to objects, that these objects are tied to our senses of self and reality, of what is and who we are. The fact that they are examining email as one such object indicates that these objects can just as easily be virtual as physical. As our lives become more digital, so will the objects we imbue with meaning. What will this mean for us and how we construct our realities?


Viegas, Fernanda, danah boyd, David H. Nguyen, Jeffrey Potter, and Judith Donath (2004). “Digital Artifacts for Remembering and Storytelling: PostHistory and SocialNetworkFragments.” Proceedings of the 37th Hawaii International Conference on System Sciences.

Arya, Agustin (2003). “The Hidden Side of Visualization.” Techné: Research in Philosophy and Technology, Winter 2003.