Posts Tagged ‘ computing ’

Mission Statements: Workshopping the Proposal

While my study on mission statement dissemination is on hold, that doesn’t mean I’ve stopped thinking about it by any means.  I’m currently workshopping a research proposal for the study in two separate courses this term, and by the end of April I’m hoping to have a really fleshed-out plan of how to proceed.  Here are some of the documents that I’ve written and that are helping me shape this project.

PDFs:

Fall 2009, SSHRC Application: Program of Study

Winter 2010, HUCO530, Thesis Question

Winter 2010, LIS505, Research Proposal pt1 – Problem and Definitions

Robots Frozen in the Snow

I realize I haven’t been posting as frequently as I should.  The reasons for this are less about my having nothing to post and more a complete lack of time to do so.  The Asimov Robot Stories research continues, though I’ve lost a bit of the momentum I’d gained last term thanks to impending deadlines.  I will be presenting my paper Meditating on the Robot (see below) at HuCon at the end of the month.

Among other things, my time is being dominated by a project with the University of Alberta Press’s forthcoming publication, Weeds of North America.  As part of a project management course, I’m offering my services (for free) to help develop a database system that could be used for future editions of the field guide.  It would essentially be an updatable and comprehensive catalogue of weeds.  Completion of the project, of course, is contingent on my learning how to build a database (or, if the deadline starts looming, finding someone who can).

There are a few other things I’ve been working on, but nothing concrete enough to post here.  I’m currently workshopping my research proposal on using social media in organizations for mission statement dissemination, particularly in terms of methodology.  If the project looks feasible and I’m feeling good about it, I’ll look at submitting an application for ethics review this summer and starting interviews in the Fall/Winter terms.

I’ve also been mulling over how I could approach future research with XML/Mandala browser; the Robot Stories paper got me thinking about how XML can be used as a new form of close reading that allows users to compile and compare notes in a visual, intuitive medium (i.e. a rich-prospect browser, like Mandala).  Recently it struck me that it would be relatively easy to conduct a user study to test this, with a variety of undergraduates, graduate students, and faculty in the English department as subjects.  I could consider the results in terms of reader-response theory, or simply present them as informing new methods in scholarship.  Questions/Issues: how would I compare XML close reading with traditional close reading?  Is it even possible?  How would I go about writing a program that would allow users to encode texts without actually having to learn XML– something that could output XML that could then be viewed in Mandala?
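To make that last idea concrete, here is a minimal sketch of the kind of helper such a program would need.  Everything here is hypothetical (the function, the `passage`/`ref` element names, the sample sentence's character offsets): the point is only that a reader could highlight a span and pick a label from a menu, and the tool would emit the XML on their behalf.

```python
import xml.etree.ElementTree as ET

def annotate(text, start, end, tag, **attrs):
    """Wrap the reader's selected span text[start:end] in an XML element.

    The reader supplies only a selection and a label picked from a menu;
    the markup itself is generated, so no XML knowledge is required.
    """
    root = ET.Element("passage")
    root.text = text[:start]          # text before the selection
    span = ET.SubElement(root, tag, attrs)
    span.text = text[start:end]       # the highlighted span
    span.tail = text[end:]            # text after the selection
    return ET.tostring(root, encoding="unicode")

# A reader highlights "the Bard" and labels it a robot reference:
story = "Niccolo frowned at the Bard and kicked it."
xml = annotate(story, 19, 27, "ref", type="descriptive")
```

The output of a tool like this could then be fed straight into a rich-prospect browser such as Mandala.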

Downloads:

PDF – Meditating on the Robot

Are Digital Humanists Relevant?

On October 7, Distinguished Visitor Dr. Howard White presented “Defining Information Science” as part of the SLIS colloquia.

He began his presentation by offering the several traditional definitions of information science (Rubin, 2004; Hawkins, 2001; Borko, 1968), as well as Wikipedia’s definition as an illustration of how difficult it is to pin down, before offering his own much simpler definition:

[Information Science is] The study of literature-based answering.

Given that he was speaking to a room full of future librarians, White elaborated on what that meant in the context of the reference librarian: the reference librarian should be able to provide relevant answers to “relevance-seekers” (library users) by giving truthful, novel, on-topic, specific, understandable, and timely answers (in that order).  Librarians should be better equipped than Google to filter relevance for a given question; their “equipment” is “literatures”– that is, the library collection.  It’s possible to shorten White’s definition even further: information science is the study of relevant answers, or simply relevance, given that relevance implies (a) a system (“literatures”) and (b) requirements for answers (truthfulness, novelty, on-topic-ness, specificity, understandability, and timeliness).

What struck me as most interesting, however, were the parallels between White’s librarian/information scientist and the digital humanist.  A digital humanist is, after all, essentially interested in seeking and supplying relevant answers by searching ‘literatures’ with the use of computational methods (Hockey, 2004). Does that make the digital humanist an information scientist?  And does that make the information scientist a digital humanist?

Works cited

Borko, H. (1968). “Information science: what is it?” American Documentation, 19(1).

Hawkins, D.T. (2001). “Information science abstracts: tracking the literature of information science.  Part 1: definition and map.” Journal of the American Society for Information Science and Technology, 52.

Hockey, S. (2004).  “History of Humanities Computing.”  A Companion to Digital Humanities, ed. Susan Schreibman, Ray Siemens, John Unsworth.  Oxford: Blackwell, 2004.

Rubin, R. E. (2004).  Foundations of Library and Information Science. 2nd ed.  New York: Neal-Schuman Publishers Inc.

Asimov: Robot Dreams

UPDATE: Mandala screenshots (below)

I’ve begun encoding “Robot Dreams”, a short story about a robot named Elvex (LVX-1) whose positronic brain has been uniquely imprinted with fractal patterns and who, as a result, has learned how to dream.  This text also features Susan Calvin, the mother of robot psychology in the continuity of most of Asimov’s robot stories.  In my encoding of this text, I’ve run into several challenges:

  • I’m finding “otherness” more difficult to determine than I’d expected.  This story in particular is challenging, because Elvex has become more “human-like” due to the unique architecture of his brain– a fact that appalls his creator and Susan Calvin.  The more Elvex describes his dreams, appearing increasingly “human”, the more the human characters try to distance themselves from him and emphasize his robotic characteristics.  In this situation, there is a definite tension between “other” and “same”; I can’t ignore that tension by making that attribute “null”, but how can I determine otherness in such an ambivalent circumstance?  …One solution is to look at the source’s motivation.  Is the source saying/doing something to create distance between human and robot, or to draw them closer together?  This raises a new challenge:
  • Can a reference then have multiple sources?  Can multiple sources have different motivations, and thus represent different levels of “otherness”?  If the answer is yes, how do I encode this?  …The answer I’ve come up with is to nest my pr_ref tags.  It’s still too early to tell whether this is an effective strategy, but I’m trialing it.
  • How do I define my type attributes when it seems that a reference fulfills more than one of the possible types?  (e.g. in “Robot Dreams” Susan Calvin interviews Elvex in her characteristically cold, clinical way.  Most of her questions/statements directed at Elvex can be construed both as “interactive”– since she is “interacting” with the robot– and “descriptive”– since she is describing the robot.)  One possible answer is to look at the possibility of multiple sources again.  The other is to identify a hierarchy of types: emotion trumps interaction trumps description, since all references “describe” something, but not all references “describe” an interaction, and not all interactions are emotional.  Without clearly setting this rule out, I think this is the strategy I followed when encoding “Someday”.  When there is clearly a situation of multiple sources, looking at motivation can again be valuable, and nested tagging seems the natural answer.
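A quick sketch of both strategies together, using Python’s standard library.  The attribute names beyond pr_ref itself are illustrative assumptions, not my actual schema: the outer pr_ref carries one source’s motivation and “otherness”, the nested pr_ref carries a second source’s, and a small helper applies the proposed type hierarchy when a reference seems to fit several types.

```python
import xml.etree.ElementTree as ET

# Proposed hierarchy: emotion trumps interaction trumps description.
PRECEDENCE = ["emotive", "interactive", "descriptive"]

def dominant_type(candidates):
    """Pick a single type attribute when a reference fits several types."""
    for t in PRECEDENCE:
        if t in candidates:
            return t
    return "descriptive"

# Nested pr_ref tags: the outer reference carries Susan's clinical
# questioning (interactive wins over descriptive under the hierarchy);
# the nested reference carries Elvex's own account of himself --
# two sources, each with its own motivation and "otherness" value.
outer = ET.Element("pr_ref", source="phuman", otherness="other",
                   type=dominant_type({"interactive", "descriptive"}))
inner = ET.SubElement(outer, "pr_ref", source="probot",
                      otherness="same", type="descriptive")
inner.text = "Elvex recounts his dream"
encoded = ET.tostring(outer, encoding="unicode")
```

Whether this nesting survives contact with a full story remains to be seen, but it at least gives each source its own attribute set.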

I chose “Robot Dreams” because it has several elements that I felt needed to be explored in my analysis of Asimov’s robot stories.  First of all, whereas in “Someday” the two human characters were male children, in “Robot Dreams” the two human characters are female adults.  I wanted to see if gender and age played a factor (note: my tweaked encoding currently doesn’t catalog age as a factor– if it looks like this might be valuable information to mine, I may add it in future iterations).  Secondly, it includes Susan Calvin.  Although I have not yet developed an element structure to analyze principal human characters, it has always been my intention for Calvin to be my first attempt.  Not only is her name synonymous with Asimov’s robot stories as a recurring character, but she plays a unique role in them as a foil for the various robots she psycho-analyzes; it would be a valuable exercise to compare the relationship references to her with those of the principal robot characters in the same stories.  Is Susan Calvin characterized as more robot (“other”) or more human?  In comparison, are the robot characters more or less human?  Does she elicit more of an emotional response from the figures that interact with her?  An examination of reference sources in this analysis is useful too: does she express emotion more or less than the average robot?

Finally, the problem of “otherness” is central to this text.  I feel that the tension between being “too human” and “too different” is one that makes Asimov’s work so universally engaging, and has not been explored to its fullest.  My XML encoding can– hopefully– reveal exactly how that tension is expressed through the relationships in the text.

***

I have completed a first encoding of the principal robot references in “Robot Dreams”.  Here are screenshots of Mandala evaluating “otherness” from the perspective of the three characters: Elvex (principal robot), Susan (principal human), and Linda (secondary human).  Click on the thumbnails below to view the images in full size.

Asimov Update: Gender and Otherness

I’ve been working on my encoding of Asimov’s robot stories, and have reworked the pr_ref tag to include attributes for source gender and “otherness”, as well as generalized the source attribute values (phuman, shuman, probot, srobot, nvoice) so they can be used when analyzing a corpus of different texts.

My encoding can now examine the relation to gender of human-robot interactions in the text (i.e. do more female characters respond emotionally to the robots than male characters?  Do male characters physically interact with the robots more? etc.)

I can also track which references demonstrate a portrayal of the robot as “other”, and which references portray the robot as “same” in relation to the source factions in the text.  This otherness/sameness dichotomy is by no means a perfect science, but given a careful reading most references in the text usually imply one or the other.   (Not unlike determining the difference between an emotive and an interactive reference, determining “otherness” relies on interpretation.)
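Once the references carry these attributes, the kinds of questions above reduce to simple tallies.  Here is a sketch of what that looks like; the sample document and its gender values are illustrative, not a real encoded story.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# A toy encoded story: each pr_ref records who the reference comes from,
# that source's gender, and whether the robot is portrayed as "other".
sample = """<story>
  <pr_ref source="phuman" gender="f" otherness="other"/>
  <pr_ref source="shuman" gender="f" otherness="same"/>
  <pr_ref source="phuman" gender="m" otherness="other"/>
  <pr_ref source="probot" gender="n" otherness="same"/>
</story>"""

# Tally references by (source gender, otherness) to see, e.g., whether
# female sources mark the robot as "other" more often than male sources.
tally = Counter(
    (ref.get("gender"), ref.get("otherness"))
    for ref in ET.fromstring(sample).iter("pr_ref")
)
```

The same counts, grouped a little differently, are what Mandala lets me see visually rather than numerically.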

As well, I have made it possible for the principal robot character to reference itself.  This is important in a text like “Someday”, where the robot “the Bard” tells a story about itself.

Click on the screenshot below to see an example of how I’m using the Mandala browser to visualize these features.

Mandala Browser

The Bard's robot references and their "otherness"

Pleasure-seeking scholarly primitives

HuCo500 – Weekly questions

According to Aristotle, scientific knowledge (episteme) must be expressed in statements that follow deductively from a finite list of self-evident statements (axioms) and only employ terms defined from a finite list of self-understood terms (primitives). [Stanford Encyclopedia of Philosophy] (Unsworth)

In his article, Unsworth uses the notion of “primitives” as a way of understanding how humanities researchers can put digital methods into practice.  More specifically, he looks at how Aristotle’s “episteme” could be applied as a method in interface design.  In reading the article, it seemed that the “scholarly primitives” (our finite list of self-understood terms) stood in for the basic needs of the “scholarly” user.  Could we alternately frame Unsworth’s “scholarly primitives” by defining the user’s basic needs as the starting point in designing interfaces (for humanities scholars)?


From a theoretical perspective, the exploration of online browsing environments can be situated within the design of new digital affordances…  As Frascara (xvii) points out, such affordances are particularly attractive when they exist in a context of an environment specifically intended to support and extend communication: “We need so much to see what surrounds us that the sheer fact of seeing a wide panorama gives us pleasure.” (Ruecker)

Is “pleasure” a goal of humanities research?  Is this perhaps where we can situate the previously discussed element of “play” and its role in digital humanities methods (e.g. Sinclair’s Hyperpo, Ramsay’s ‘Algorithmic Criticism’, Manovich’s ‘Cultural Analytics’)?


Readings:

Ruecker, Stan. “Experimental Interfaces Involving Visual Grouping During Browsing.” Partnership: the Canadian Journal of Library and Information Practice and Research. 1(1). 2006.

Unsworth, John. “Scholarly Primitives: what methods do humanities researchers have in common, and how might our tools reflect this?” part of a symposium on “Humanities Computing: formal methods, experimental practice” sponsored by King’s College, London. 2000.

Pseudocoding: Introducing the Pizzalgorithm

This was a fun exercise from an intro-to-programming assignment, and I thought I’d share it.

Phase 2: Problem Solving

Tasks:

1) Write an algorithm to tell a computer how to deal with traffic lights. Use only if, else if, and/or else.

IF (the light is green) { PROCEED }
ELSE IF (the light is yellow) { SLOW DOWN }
ELSE IF (the light is red) { STOP }
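For comparison, here is a direct translation into a real language (Python is my choice here, not part of the assignment; the fallback for an unrecognized signal is my own addition):

```python
def traffic_action(light):
    # Only if / else-if / else, as the task requires.
    if light == "green":
        return "PROCEED"
    elif light == "yellow":
        return "SLOW DOWN"
    elif light == "red":
        return "STOP"
    else:
        return "STOP"  # unrecognized signal: stopping is the safe default
```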

2) Write an algorithm describing your morning routine from waking up to work/school. Use if, else if, else, and loops.

WHILE (alarm goes off AND clock reads earlier than 7:00) {
    Hit ‘Snooze’ ;
}
Put on glasses AND fall out of bed ;
IF (on floor AND have no shirt AND have no pants) {
    Feel around ;
    IF (feel shirt) { pick it up } ;
    IF (feel pants) { pick them up } ;
    Crawl to dresser {
        Grab clean underwear AND socks ;
        IF (have no shirt) { grab one clean shirt } ;
        IF (have no pants) { grab clean pair of pants } ;
    }
    Stand ;
}
Smell self ;
IF (stinky) {
    go shower ;
    dry off ;
    put on underwear ;
    put on socks ;
    put on pants ;
    put on shirt ;
}
ELSE IF (slight smell OR fresh as a daisy) {
    put on underwear ;
    put on socks ;
    put on pants ;
    put on shirt ;
}
IF (not in bathroom) { go to bathroom }
IF (in bathroom) {
    have bowel movement ;
    brush teeth ;
    WHILE (hair is unruly AND clock reads earlier than 7:45) {
        run fingers through hair ;
    } ;
    WHILE (clock reads later than 7:45 AND earlier than 8:15) { check email }
    IF (clock reads 8:15) {
        put on jacket ;
        go catch bus ;
    }
}

3) Write 2 algorithms of your choice.

Bonus: Write an algorithm for writing algorithms.
