Posts Tagged ‘robot’

Understanding Robots Through Derrida

This post is written in response to weekly readings for HUCO617: Posthumanism.  This week we were reading Jacques Derrida; specifically in the context of this response, “Structure, Sign and Play in the Discourse of the Human Sciences”, Writing and Difference, pp. 278-293.

In “Structure, Sign and Play in the Discourse of the Human Sciences”, Derrida describes how the concepts of a discourse can be turned back on the discourse itself, invalidating its own premises.  At this point, I don’t pretend to completely understand the significance of what Derrida is saying (I suspect I’d need months or years at a minimum to fully grasp it), but this notion of the destruction of a thing by its own means, its own contradictions, strikes a chord with me.  Specifically, the opposition of human-machine and the anxiety prevalent in almost all narratives about the robot seems to embody this principle in microcosm; we fear our own destruction or substitution at the hands of what we have created, artificial beings cast in our own reflection.

I use Karel Čapek’s perspective on the robot, as I’ve used it before, as an emblem of the opposition at the heart of defining what is human.  Čapek’s 1921 play R.U.R. captures the fearful aspect of what results from the human endeavour to duplicate the creation of the Biblical God, to make life in our own image.  From a structuralist position, the “robot” is the centre of this discourse; the “robot” represents both the perfect human and the monstrously not-human, and as such exists both within and outside of the discourse depending on one’s approach.  From this we can already recognize the paradoxical difference Derrida suggests with post-structuralism.

The comment I want to make (and am probably making a hack job of it) is that perhaps the anxiety critics feel towards post-structuralism, like Harold Bloom from our previous readings, is at least similar if not identical to the anxiety we feel towards robotics.  We see this anxiety expressed in literature in innumerable ways: from Shelley’s monster to Rossum’s robots, Asimov’s laws to Dick’s replicants, Star Wars’ droids to Gibson’s Neuromancer, they all represent the tension between progress (both technological and existential) and the fear of replacement (or death).  And isn’t that the same tension that exists in post-structuralism?  To take the concepts of a given discourse and employ them “to destroy the old machinery to which they belong and of which they themselves are pieces” (284); in other words, using principles from an existing system in order to re-imagine it, recreate it.  The tension is between method (the instruments of a system) and the truth (the “objective signification” it represents), borrowing from the language Derrida uses in his analysis of Lévi-Strauss.  Or is the tension between the old and the new in the continual act of re-constructing and replacing the system from within itself?  All of these examples bring to mind the image of the Ouroboros, the snake biting its own tail in a perpetual cycle of re-invention.


Robots Frozen in the Snow

I realize I haven’t been posting as frequently as I should.  The reasons for this are less about my having nothing to post and more a complete lack of time to do so.  The Asimov Robot Stories research continues, though I’ve lost a bit of the momentum I’d gained last term thanks to impending deadlines.  I will be presenting my paper Meditating on the Robot (see below) at HuCon at the end of the month.

Among other things, my time is being dominated by a project with the University of Alberta Press’s forthcoming publication, Weeds of North America.  As part of a project management course, I’m offering my services (for free) to help develop a database system that could be used for future editions of the field guide.  It would essentially be an updatable and comprehensive catalogue of weeds.  Completion of the project, of course, is contingent on my learning how to build a database (or, if the deadline starts looming, finding someone who can).

There are a few other things I’ve been working on, but nothing concrete enough for me to post here.  I’m currently workshopping my research proposal about using social media in organizations for mission statement dissemination, particularly in terms of methodology.  If the project looks feasible and I’m feeling good about it, I’m looking at submitting an application for ethics review this summer, and starting interviews in the Fall/Winter terms.

I’ve also been mulling over how I could approach future research with XML/Mandala browser; the Robot Stories paper got me thinking about how XML can be used as a new form of close reading that allows users to compile and compare notes in a visual, intuitive medium (i.e. rich prospect browser, like Mandala).  Recently it struck me that it would be relatively easy to conduct a user study with a variety of undergraduates, graduate students, and faculty in the English dept as subjects to test this.  I could consider the results in terms of reader response theory, or simply present them as informing new methods in scholarship.  Questions/Issues: how would I compare XML-close-reading with traditional close reading?  Is it even possible?  How would I go about writing a program that would allow users to encode texts without actually having to learn XML?  Something that could output the XML that could then be viewed in Mandala.


PDF – Meditating on the Robot

Asimov: Robot Dreams

UPDATE: Mandala screenshots (below)

I’ve begun encoding “Robot Dreams”, a short story about a robot named Elvex (LVX-1) whose positronic brain has been uniquely imprinted with fractal patterns, and as a result has learned how to dream.  This text also features Susan Calvin, the mother of robot psychology in the continuity of most of Asimov’s robot stories.  In my encoding of this text, I’ve run into several challenges:

  • I’m finding “otherness” more difficult to determine than I’d expected.  This story in particular is challenging, because Elvex has become more “human-like” due to the unique architecture of his brain– a fact that appalls his creator and Susan Calvin.  The more Elvex describes his dreams, appearing increasingly “human”, the more the human characters try to distance themselves from him and emphasize his robotic characteristics.  In this situation, there is a definite tension between “other” and “same”; I can’t ignore that tension by making that attribute “null”, but how can I determine otherness in such an ambivalent circumstance?  …One solution is to look at the source’s motivation.  Is the source saying/doing something to create distance between human and robot, or to draw them closer together?  This raises a new challenge:
  • Can a reference then have multiple sources?  Can multiple sources have different motivations, and thus represent different levels on “otherness”?  If the answer is yes, how do I encode this?  …The answer I’ve come up with is to nest my pr_ref tags.  It’s still too early to tell if this is an effective strategy, but I’m trialing it.
  • How do I define my type attributes when it seems that the reference is fulfilling more than one of the possible types?  (e.g. in “Robot Dreams” Susan Calvin interviews Elvex in her characteristically cold, clinical way.  Most of her questions/statements directed at Elvex can be construed both as “interactive”– since she is “interacting” with the robot– and “descriptive”– since she is describing the robot.)  One possible answer is to look at the possibility of multiple sources again.  The other is to identify a hierarchy of types: emotion trumps interaction trumps description, since all references “describe” something, but not all references “describe” an interaction, and not all interactions are emotional.  Without clearly setting this rule out, I think this is a strategy I followed when encoding “Someday”.  When there is clearly a situation of multiple sources, looking at motivation can again be valuable, and using nested tagging seems the natural answer.
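To make the nesting idea concrete, here is a minimal sketch of what a nested pr_ref might look like and how it can be walked programmatically.  The fragment and its attribute values (an “otherness” attribute, the src values) are my own invention based on the scheme described above, not the actual encoding:

```python
import xml.etree.ElementTree as ET

# A hypothetical nested reference: the narrative voice's framing
# contains Susan Calvin's own distancing remark, so each layer can
# carry its own source and its own "otherness" judgement.
snippet = """
<pr_ref src="nvoice" otherness="same">
  Elvex described his dream, and
  <pr_ref src="phuman" otherness="other">Calvin insisted it was only a robot</pr_ref>,
  his voice calm throughout.
</pr_ref>
"""

root = ET.fromstring(snippet)
# iter() walks the outer tag and every nested pr_ref in document order,
# so both layers of the reference are visible to later analysis.
for ref in root.iter("pr_ref"):
    print(ref.get("src"), ref.get("otherness"))
```

The point of the sketch is that nesting preserves both judgements at once instead of forcing a single “null” value for an ambivalent reference.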

I chose “Robot Dreams” because it has several elements that I felt needed to be explored in my analysis of Asimov’s robot stories.  First of all, whereas in “Someday” the two human characters were male children, in “Robot Dreams” the two human characters are female adults.  I wanted to see if gender and age played a factor (note: my tweaked encoding currently doesn’t catalog age as a factor– if it looks like this might be valuable information to mine, I may add it in future iterations).  Secondly, it includes Susan Calvin.  Although I have not, as yet, developed an element structure to analyze principal human characters, it has always been my intention for Calvin to be my first attempt.  Not only is her name synonymous with Asimov’s robot stories as a recurring character, but she plays a unique role in them as a foil for the various robots she psycho-analyzes; it would be a valuable exercise to compare the relationship references to her with those of the principal robot characters in the same stories.  Is Susan Calvin characterized as more robot (“other”) or more human?  In comparison, are the robot characters more or less human?  Does she elicit more of an emotional response from the figures that interact with her?  An examination of reference sources in this analysis is useful too: does she express emotion more or less than the average robot?

Finally, the problem of “otherness” is central to this text.  I feel that the tension between being “too human” and “too different” is one that makes Asimov’s work so universally engaging, and has not been explored to its fullest.  My XML encoding can– hopefully– reveal exactly how that tension is expressed through the relationships in the text.


I have completed a first encoding of the principal robot references in “Robot Dreams”.  Here are screenshots of Mandala evaluating “otherness” from the perspective of the three characters: Elvex (principal robot), Susan (principal human), and Linda (secondary human).  Click on the thumbnails below to view the images in full size.

Asimov Update: Gender and Otherness

I’ve been working on my encoding of Asimov’s robot stories, and have reworked the pr_ref tag to include attributes for the source gender and “otherness”, and generalized the source attribute values (phuman, shuman, probot, srobot, nvoice) so they can be used when analyzing a corpus of different texts.

My encoding can now examine the relation to gender of human-robot interactions in the text (i.e. do more female characters respond emotionally to the robots than male characters?  Do male characters physically interact with the robots more? etc.)

I can also track which references demonstrate a portrayal of the robot as “other”, and which references portray the robot as “same” in relation to the source factions in the text.  This otherness/sameness dichotomy is by no means a perfect science, but given a careful reading most references in the text usually imply one or the other.   (Not unlike determining the difference between an emotive and an interactive reference, determining “otherness” relies on interpretation.)

As well, I have made it possible for the principal robot character to reference itself.  This is important in a text like “Someday”, where the robot “the Bard” tells a story about itself.
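The gender and otherness queries above can be sketched as a simple tally over the markup.  The source values (phuman, shuman, probot, nvoice) are the generalized ones described in this post; the gender attribute name, the sample sentences, and the probot self-reference are invented for illustration:

```python
import xml.etree.ElementTree as ET
from collections import Counter

# A hypothetical fragment using the generalized source values plus
# the new gender and otherness attributes; note the probot entry is
# the robot referencing itself, as in "Someday".
story = """
<story>
  <pr_ref src="phuman" gender="f" type="emotive" otherness="other">she recoiled from the robot</pr_ref>
  <pr_ref src="shuman" gender="f" type="interactive" otherness="same">she patted its metal arm</pr_ref>
  <pr_ref src="probot" gender="n" type="descriptive" otherness="same">I dreamed that I was not a machine</pr_ref>
</story>
"""

root = ET.fromstring(story)
# Tally reference types by the gender of the source, so questions like
# "do more female characters respond emotionally?" become counts.
by_gender = Counter((r.get("gender"), r.get("type")) for r in root.iter("pr_ref"))
print(by_gender[("f", "emotive")], by_gender[("f", "interactive")])
```

In practice Mandala does this visually with magnets, but the same counts fall out of the attributes directly.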

Click on the screenshot below to see an example of how I’m using the Mandala browser to visualize these features.

Mandala Browser

The Bard's robot references and their "otherness"

Robot-Human-Text Relationships in Asimov’s “Someday”

Let’s call this ‘phase 1’ of an on-going project of mine.  It started as a simple assignment as an introduction to XML, and has since snowballed into something that, I think, deserves more effort and study.  ‘Phase 1’ represents my initial attempt at encoding certain narrative features in Isaac Asimov’s “Someday”.  My idea: What sort of relationships between Asimov’s robot and human characters can be determined through encoding?  It’s still too early to tell, but results so far seem promising.

The story and my meager attempts at rendering the results in XHTML are viewable at (recommend viewing in Firefox)

Here’s the initial report I wrote to accompany the site:

When I selected the text “Someday” by Isaac Asimov it was with a particular purpose in mind.  This short story is among a number of other Asimov texts commonly referred to as his “robot stories”.  Typically these robot stories contain a principal human character and a principal robot character, as well as any number of supporting characters both robot and human.  I wanted to establish a way of encoding these stories that would allow me to map relationships with the robot characters and identify how the figure of the robot is constructed.  How are robots treated in Asimov’s robot stories?  Early on I determined that I could use XML to encode all references in the narrative to the principal robot character[1], but this act alone was not enough to provide the data I needed to answer my question.  In order for my method to be successful, I had to distinguish different types of references and identify the source in each case.   “Someday” introduced additional challenges as well; the text contains four “inner stories” told by the principal robot character that parallel the main narrative.  There was also the question of document analysis: should I concern myself with the source of the text?  Is it important to place it in context?  What ancillary information needs to be encoded?

I accomplished my primary task of identifying and categorizing all references to the principal robot character with a single element: pr_ref.  This element is designed to extract two key pieces of information from each reference it catalogues: source and type.  All references are sorted into three different ‘types’: descriptive, emotive, and interactive.  The ‘descriptive’ value is used whenever a reference describes the principal robot character, e.g. “just an old thing I had when I was a kid”, “it turned out to be just as stupid as he expected.”  The ‘emotive’ value represents any reference that contains an emotional response to the robot, e.g. “[he] looked at it critically”, “despite Niccolo’s own bitterness against the Bard.”  The ‘interactive’ value applies when there is an action being exerted upon or otherwise involving the robot, e.g. “he kicked the Bard with his foot”, “he had the front panel off and peered in.”  The source is identified by where the reference is located in the narrative.  Since the ‘emotive’ and ‘interactive’ values both require agency to occur, the source will always be associated with another character in the text (e.g. Paul, Niccolo).  ‘Descriptive’ values can be associated with the narrative voice (“nvoice”), if it is clear the reference is occurring outside the experiential range of any of the characters.  Together the tag looks like this:

<pr_ref src="[Paul|Niccolo|nvoice]" type="[descriptive|emotive|interactive]">[p.r. reference]</pr_ref>[2]

When parsing data it then becomes possible to determine, for instance, how many, what type, and which references originated with Niccolo (the principal human character), and identify possible patterns in the text.  I did exactly that by creating separate XSL style sheets for each possible source (Niccolo, Paul, and “nvoice”)[3]; in so doing, I discovered that Paul (the supporting/secondary human character) interacts with the robot quite a bit more than Niccolo, but that Niccolo is more emotionally responsive to the robot.  This also allows me to see how the identity of the robot is constructed.  If my method were used on all of the robot stories, would there be a pattern to the formation of the robot character?  How would the relationships between robot, human, and text reveal themselves?  And what could it mean in an analysis of the corpus?
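The per-source breakdown that the XSL style sheets produce can be sketched as a count over the pr_ref attributes.  The quoted phrases in this toy fragment are the examples given above; the surrounding paragraph structure is an assumption:

```python
import xml.etree.ElementTree as ET
from collections import Counter

# A toy version of the "Someday" markup: each pr_ref carries a
# source and a type, as described in the report.
story = """
<STORY>
  <p><pr_ref src="nvoice" type="descriptive">it turned out to be just as stupid as he expected</pr_ref></p>
  <p><pr_ref src="Paul" type="interactive">he had the front panel off and peered in</pr_ref></p>
  <p><pr_ref src="Niccolo" type="emotive">despite Niccolo's own bitterness against the Bard</pr_ref></p>
  <p><pr_ref src="Niccolo" type="interactive">he kicked the Bard with his foot</pr_ref></p>
</STORY>
"""

root = ET.fromstring(story)
# Count references by (source, type) pair -- the same question the
# per-source XSL pages answer, reduced to a tally.
counts = Counter((r.get("src"), r.get("type")) for r in root.iter("pr_ref"))
for (src, kind), n in sorted(counts.items()):
    print(src, kind, n)
```

With a full story encoded, the same tally would surface the pattern noted above: Paul interacting more, Niccolo responding more emotionally.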

I also encoded character names and third-person pronouns in the narrative[4].  Although I did not make any particular use of this encoding, this could be valuable information in an analysis of the different relationships and how interaction is constructed in the text[5].  For instance, while the narrative voice refers to the robot only as “it”, on several occasions both human characters refer to it as “he”.  Is this merely an accurate portrayal of children incorrectly ascribing gender to an inanimate object, or could this have a deeper significance?  Are robots gendered in other robot stories?  What does that mean for Asimov’s robot?

One of the complexities of “Someday” that I wanted to explore was the presence of stories-within-a-story.  The principal robot character, the Bard, is a story-telling robot.  At key moments in the text, the Bard starts telling a story.  I was curious to view these “inner stories” in sequence, removed from the greater narrative.  By adding an “inner story” attribute to the <p> element I was able to extract all paragraphs containing “inner stories” and read them separately[6].  When the “inner stories” of “Someday” are seen side-by-side, one commonality sticks out: none of them have endings.  Throughout the narrative, these stories are interrupted by the humans, Niccolo and Paul, but the last story is interrupted by the robot’s own flawed parts.  My encoding facilitated this discovery; while in this case the information was fairly obvious, the encoding of “inner stories” could be valuable when analyzing the structures of meta-narratives.
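The extraction of the inner stories can be sketched like this.  The attribute name (“innerstory”) and the placeholder paragraph text are my assumptions; the post doesn’t give the exact attribute spelling:

```python
import xml.etree.ElementTree as ET

# Paragraphs belonging to one of the Bard's embedded tales are
# flagged with an attribute on <p>, so they can be pulled out and
# read in sequence, apart from the frame narrative.
story = """
<STORY>
  <p>Niccolo slapped the side of the Bard.</p>
  <p innerstory="1">Once upon a time there was a robot who...</p>
  <p>Paul interrupted him again.</p>
  <p innerstory="1">and the robot never reached the end of its tale...</p>
</STORY>
"""

root = ET.fromstring(story)
# Keep only the flagged paragraphs, in document order.
inner = [p.text for p in root.findall("p") if p.get("innerstory")]
for paragraph in inner:
    print(paragraph)
```

Reading the flagged paragraphs in sequence is what made the “no endings” pattern visible.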

In my approach to this project I determined that I wanted to prioritize discourse analysis over document analysis.  This decision is apparent in the tree structure of my XML.  Since I was focusing on a single text, the root element is “STORY”.  This is consistent with the idea that, were I to encode an entire corpus of robot stories, I would create a new root element for “STORIES” containing “STORY” as child elements.  I did, however, include document information under the child element “collection”, which includes front matter like reviews and table of contents from the text source.  My DTD includes the possibility of elements such as “printHistory” and “back” for content that appears after the text.  I also included the element “notes” for encoder comments that I wanted to include with the text[7].
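The corpus-scale tree described above can be sketched as follows.  The element names (STORIES, STORY, collection, notes) come from this post; the placeholder contents are invented:

```python
import xml.etree.ElementTree as ET

# Scaling the single-text structure up to a corpus: STORIES becomes
# the new root, with each STORY as a child carrying its own document
# information (collection front matter, encoder notes).
corpus = """
<STORIES>
  <STORY>
    <collection>
      <title>[collection title]</title>
    </collection>
    <notes>[encoder comment]</notes>
    <p>[story text]</p>
  </STORY>
</STORIES>
"""

root = ET.fromstring(corpus)
print(root.tag, [child.tag for child in root])
```

Keeping document information under a child element rather than the root is what lets the discourse analysis stay the focus of the tree.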

Originally I was going to render my XML using CSS alone, but after a first attempt I realized I could not accomplish everything I wanted to with it; I could only render the information once onto a web page and I had no way of linking pages.  I turned to XSLT to create XHTML pages that would provide more room for me to play with the XML.  While all XML documents validated against the DTD and the XSL documents were well-formed, I ran into some difficulty trying to get the XHTML to render across different browsers.  Mozilla Firefox displays all pages correctly, while Internet Explorer and Safari read the inline styles differently.  Firefox is recommended for viewing these web pages, since they will not render correctly in other browsers.

There are other ways of using the XML encoding of “Someday”.  With a larger file including multiple texts, the Mandala browser developed by Dr. Stan Ruecker would allow me to visualize the same associations I have made using XHTML pages as a series of connected dots and “magnets”; the observation, for instance, that Niccolo interacts less and is more emotionally responsive to the Bard would be that much more apparent.  It would be possible to easily identify the patterns of human-robot relationships that occur throughout Asimov’s robot stories via this type of visualization.  I intend to expand on this project and continue this line of enquiry by examining how such an analysis could be put into practice using Mandala.

[1] Note that this would work equally well with any other character in a given text.  It would probably be inadvisable, however, to encode references to more than one or two characters in a single XML document.
[2] I used the characters’ names when assigning the source, but if I were to encode several stories I could use a generic identifier, e.g. phuman = principal human character, shuman = secondary human character, and so on.
[3] The XHTML page can be viewed for each of these style sheets by opening the following xml files in a browser: somedaynvoice.xml, somedayphuman.xml, somedayshuman.xml.
[4] Names were encoded by identifying the role they played in the text: e.g. probot = principal robot character, shuman = secondary human character.
[5] The TEI does this, for instance, albeit more rigorously and in greater complexity.  The Encoding Guide for Early Printed Books (Women Writers Project) discusses some general reasons why it can be useful to encode names, titles, and pronouns:
[6] The XHTML page can be viewed by opening someday_innerstory.xml in a browser.
[7] These notes are used in someday.xml, which was rendered with CSS.  They explain why certain parts of the text are highlighted.

I’ve had a chance to mess around with Dr. Ruecker’s Mandala browser since writing this, and I can say that using such an interface to visualize the human-robot relationships and to aid analysis seems particularly valuable.  Here’s just an example, based on some preliminary screen shots (click on images to view in medium- and full-size).

The next ‘phase’, I suppose, is developing a corpus of encoded robot stories, and refining my XML.

The MDS Robot

Meet Nexi, the first Mobile Dextrous Social (MDS) Robot, developed at MIT.

I don’t know about you, but I’m seeing a cross between Pinocchio and I, Robot’s NS-5.

The MDS Robot is now commercially available by Xitome Design.

The case for robot ethics

WIRED | Do Humanlike Machines Deserve Human Rights?

This question is starting to get debated by robot designers and toymakers. With advanced robotics becoming cheaper and more commonplace, the challenge isn’t how we learn to accept robots—but whether we should care when they’re mistreated. And if we start caring about robot ethics, might we then go one insane step further and grant them rights?

Domo w/banana (Rodney Brooks, MIT)

“Robot ethics”.  It’s an interesting question, and I think Daniel Roth (the essayist) does a good job of describing what’s at stake.



It’s not really about whether we’ve reached the point and/or are likely to ever reach the point when robots are created with the cognitive capacity to become self-aware and sentient.  It’s about the point at which we’ve anthropomorphized them enough for us to feel compassion towards them.

“Technology abuse”: “As technology develops animal-like sophistication, finding the thin metallic line between what’s safe to treat as an object and what’s not will be tricky.”


OK, so it’s not to say that the point at which machines can match human intellect wouldn’t be a defining moment for this argument. Just that the argument can be made without relying on it as an inevitability. Until someone can raise the level of credibility of the Singularity beyond the mere hypothetical, I prefer to err on the side of the skeptic.