Archive for the ‘cyberliterature’ Category

Crowdsourced Intelligence and You

This post should have gone up ages ago, as part of a course assignment for HUCO 510.  Sometimes you just get side-tracked.  Anyway, this week something happened that gave me the perfect topic to complete my assignment.  Enjoy.

~~

On May 2, 2011, Osama Bin Laden, one of the most feared terrorist leaders in the world, was killed. Nearly a decade after the September 11 attacks on the World Trade Center in New York, attacks orchestrated by Bin Laden, US Navy SEALs successfully carried out the assassination. A nation rejoiced.

And, as that nation rejoiced, within minutes of the news being made public on the Internet and on television, all social media websites were abuzz.  One can imagine the sheer volume of the expressions of support, opposition, incredulity, happiness, sadness, congratulations and disgust that flooded the web.  Or, one can simply search “osama” on the Twitter index.  The President would later televise an address to the nation confirming the death of the man who had been cast in the role of nemesis to an entire people and way of life.

It is during these kinds of world-changing events that the most interesting insights about our society are discovered.  Megan McArdle, editor for The Atlantic, made one such discovery, as she browsed her Twitter feed on the fateful day.  One tweet in particular caught her eye.  Being one of Penn Jillette’s 1.6 million followers, she read the following quote, apparently in response to the death of Bin Laden:

“I mourn the loss of thousands of precious lives, but I will not rejoice in the death of one, not even an enemy.” – Martin Luther King, Jr.

Amid the—no doubt—millions of reactions, some of them shocking, this short sentence at least had the ring of reason. And it was attributed to perhaps the most famous civil rights activist in North America. The combination of Jillette’s celebrity as a performer and the quote’s level-headedness, set against so many far less level-headed responses, made it go viral: within hours of going up on Twitter it had been retweeted by many of Jillette’s followers and had become a trending topic on the network, in the midst of the Bin Laden furor. McArdle, unlike many others, did not retweet the quote, though she did initially feel the urge to pass it on. She hesitated because it didn’t “sound” like Martin Luther King, Jr. For that hesitation, I am sure she was later grateful, when it was soon discovered that the quote was misattributed.

Besides the end to privacy (which I’ve repeatedly discussed on this blog), another quality of modern communication technologies that we must all adapt to is the speed at which information travels.  Networks like Twitter and Facebook increase the rate of transmission exponentially.  The cult of celebrity has also found fertile earth in these virtual spaces.  If I had been the person to publish the quote on Twitter, with my 80 or so followers, rather than Jillette, the quote would not have been so popular, and the backlash would not have been so severe.  The fact that the initial tweet reached 1.6 million people dramatically increased how quickly the quote spread from that point.  So where did Jillette get the quote?

Despite some media outlets implying that he did this deliberately to mess with his followers, it seems clear now that it was accidental.  Jillette copied the quote from a Facebook user’s status update that read:

I mourn the loss of thousands of precious lives, but I will not rejoice in the death of one, not even an enemy. “Returning hate for hate multiplies hate, adding deeper darkness to a night already devoid of stars.  Darkness cannot drive out darkness: only light can do that.  Hate cannot drive out hate: only love can do that.” MLK jr

Viewed in full, it is clear that Jessica Dovey, the Facebook user, was adding her own commentary to an authentic quote by Martin Luther King, Jr. Jillette tried to copy it to Twitter but, given the 140-character limit for tweets, was forced to edit it down. Apparently he did not realize that the first sentence was not part of the quotation. Jillette later apologized repeatedly for the tweet, stating that it was a mistake.

“Why all the fuss over this?” one might ask. It seems that most people are upset not so much by the misattribution as by the criticism of the popular reaction, and of the media circus that has surrounded the assassination. Dovey and Jillette have faced a great deal of criticism since the quote was first shared, as has McArdle, who went on to write a blog post and editorial in The Atlantic online about her discovery of the misattribution.

We live in a world of memes, in a place where information—regardless of its accuracy or authenticity—is shared at an exponential rate, and where fiction can be accepted as fact based on who says it and how many believe it. The only thing surprising about this particular incident is that the mistake was discovered, and the truth of it spread online as fast as the initial tweet did. If it had taken a day or two longer for someone like McArdle, with a platform to spread the information, to discover the mistake, would anyone have noticed? Probably not. It is not as if people haven’t been misquoted, or quotes misattributed, in the past. What’s noteworthy is the speed at which this particular misquote proliferated.

I find this interesting because, as I have stated, it gives evidence of how communication has changed in our society.  Many of us rely on sources like Twitter to engage with current events.  It serves us well to be reminded that, in spite of the many benefits of crowdsourced intelligence, the onus for fact-checking is on the reader.

Update

I know it seems like I haven’t posted since February, but I’ve actually got a backlog of entries that I just haven’t had a chance to put up yet. I’ll be getting these up today (all related to LIS 599: KM) and back-dating them.

Also expect in the next week or so a blog post for HUCO 510: Theory of Humanities Computing.  Haven’t quite decided what to write about yet, but I would like to somehow incorporate this article about Bruce Sterling’s library getting archived, and his comments on digital preservation.

Also: How could I forget to mention my Day of DH blog?  That went up on March 18, and was actually completed on March 25.

Twitter and the KM Context

[W]e came across the word “twitter,” and it was just perfect. The definition was “a short burst of inconsequential information,” and “chirps from birds.” And that’s exactly what the product was.
– Jack Dorsey (Sarno, 2009)

Twitter, the popular microblogging tool with which users post updates of up to 140 characters, recently celebrated its fifth anniversary. In the world of fly-by-night Web 2.0 applications, that makes it a well-established and time-tested social technology. What has contributed to Twitter’s success? Why is it such a popular tool?

As its co-founder, Jack Dorsey, suggests in the quotation above, Twitter is a place where users can publish short bursts of information and share them with a larger community. It is “the best way to discover what’s new in your world”, reads the website’s about page (http://twitter.com/about). Still, users unfamiliar with the platform or dubious about this claim might wonder precisely how this tool can be productive. After all, Dorsey’s endorsement is not exactly inspiring: what good is information if it is inconsequential? What makes Twitter such a powerful tool, both from a knowledge management or business perspective and in the broader context of information-sharing, is that it operates in real time. It allows members of communities of practice to track relevant news and share important events as they happen. This crowdsourcing approach to information means that users who follow other users publishing information relevant to their community of practice can keep their finger on the pulse—an extremely valuable commodity in a world that is increasingly knowledge-centric. Similarly, these users can participate in a live, public conversation within a global network of peers, encouraging an ongoing exchange of knowledge. More importantly, the simple premise of “following” (in other words, subscribing to user feeds) allows complete personalization, while creating links between users that shape community networks organically, rhizomatically.

Another advantage of Twitter is that it is highly extensible. Twitter has an API (Application Programming Interface) that allows customized software to be built around the basic platform. In this way, users can log in to their account using third-party software like TweetDeck, which allows them to organize and view tweets in a variety of ways. The same characteristic allows the development of widgets that publish tweets on websites and blogs. Viewed as much as a disadvantage as an advantage, the 140-character limit on updates forces users to state a single idea clearly and concisely. This limitation was originally due to the 160-character limit for SMS text messages (with room reserved for the user name), since the founders expected cell phones to be the principal technology for using the service. Soon after the service went public, however, most smartphone models no longer imposed that limitation on text messages. By then users had discovered that the character limit was the ideal length for short status updates, and the limitation distinguishes Twitter from other casual blogging services such as Tumblr, which, no doubt, helped promote the service as a brand. While sometimes inconvenient for users with longer, more elaborate messages, the difference makes Twitter unique as a social media tool.
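To make the API point concrete, here is a minimal sketch of the kind of call a third-party client is built on. It is written against the public v1 REST API as it worked around the time of writing; that endpoint has since been retired, and the URL and field names should be read as assumptions of that era rather than a current recipe.

```python
# A sketch of fetching a user's recent updates, the raw material a client
# like TweetDeck arranges into columns. Endpoint and field names follow
# the old (since-retired) v1 API and are assumptions, not current usage.
import json
import urllib.request

def recent_tweets(screen_name, count=10):
    url = ("http://api.twitter.com/1/statuses/user_timeline.json"
           "?screen_name=%s&count=%d" % (screen_name, count))
    with urllib.request.urlopen(url) as resp:
        timeline = json.load(resp)  # a JSON list, one object per tweet
    return [t["text"] for t in timeline]

for text in recent_tweets("eforcier"):
    print(text)
```

Anything more elaborate (columns, filtering, notifications) is presentation layered on top of calls like this one, which is what makes the platform so easy to extend.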

A definite disadvantage of this technology, as with many social media technologies, is the public nature of updates and the murky notion of intellectual property. Twitter is perhaps more volatile in this sense than other, similar technologies like blogs or wikis, which require more thoughtful consideration before publishing. The brief nature of tweets makes it easy for users to submit whatever they happen to be thinking or seeing, regardless of legal considerations such as intellectual property or copyright, and updates are published immediately, without the opportunity to review or delete them before they go live. This can be problematic for users, particularly high-profile users; one dramatic example, though certainly not the only one, is the tweet that resulted in the termination of a CNN correspondent. In 2010, Octavia Nasr was fired for publishing an update expressing regret over the death of the Grand Ayatollah Mohammad Hussein Fadlallah, a leader of Hezbollah. Twitter poses a problem for e-Discovery that courts around the world have not yet come to terms with.

To provide a nuts-and-bolts explanation of how Twitter works and to help understand its practicality, it is useful to consider the following scenario: You are interested in motorcycles, and want current information about promotions, events, and people in your area related to that interest. You create an account on Twitter.com and search the website for like-minded users. Scanning through user profiles, you decide to follow users representing several motorcycle dealers in your city, a couple of motorcycle riding clubs, a national motorcycle news magazine, and a number of individuals who identify themselves as “motorcycle enthusiasts”. You begin receiving these users’ updates (or tweets) and start to learn about the local motorcycle community. After a few days of reading tweets, you learn that there is going to be a bike show and that several of the users will be attending. You are unable to attend the bike show yourself, but you get to experience it through the tweets of your fellow users, who describe the event and post pictures of different models on display. You are able to engage some of these users, asking them questions about the event as it is taking place. You also discover that there is a hashtag that Twitter users are using to identify tweets about the event, and by searching all tweets that include that hashtag (a search sketched in code below) you discover several more users to follow. In this way information is exchanged, and you develop relationships with other members of the community that you might otherwise not have had. Now consider this same scenario in a different context: you have recently opened a motorcycle shop. Used in the same way, Twitter becomes a valuable social tool for promoting yourself or your company, in addition to acquiring and sharing useful information.
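The hashtag-discovery step in that scenario is easy to sketch. The snippet below queries the old public search endpoint (search.twitter.com, since retired) for a hypothetical #bikeshow tag and collects the distinct users posting under it; the endpoint, parameters and response fields follow the v1 search API and should be treated as assumptions.

```python
# A sketch of hashtag search: find everyone tweeting under a tag, as a
# list of candidate accounts to follow. The #bikeshow tag is invented
# for the scenario; the API details are assumptions from the v1 era.
import json
import urllib.parse
import urllib.request

def who_is_tweeting(hashtag, results_per_page=25):
    url = ("http://search.twitter.com/search.json?q=" +
           urllib.parse.quote(hashtag) + "&rpp=%d" % results_per_page)
    with urllib.request.urlopen(url) as resp:
        results = json.load(resp).get("results", [])
    # De-duplicate: one entry per user, however often they tweeted.
    return sorted({r["from_user"] for r in results})

print(who_is_tweeting("#bikeshow"))
```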

Knowledge management (KM) resides in an interesting interdisciplinary space, somewhere between sociology, philosophy and economics. In his 1962 article, “The Economic Implications of Learning by Doing”, Nobel Prize-winning economist Kenneth Arrow clearly states the necessity for organizational practices that manage the learning process; the economics of KM are concerned with breaking down and quantifying this process. In The Tacit Dimension (1966), Michael Polanyi describes the concept of “tacit knowing”: knowledge that deals with the implicit nature of human experience, skill and action is considered tacit, while knowledge codified and transmittable through language is explicit. Polanyi’s epistemological model serves as the fundamental principle of KM, distinguishing knowledge from the concepts of information and data. The sociological underpinnings of KM provide us with a sound basis for understanding “knowledge” as a concept and its notably various manifestations, while also giving us a framework for making sense of how knowledge circulates within communities and through individuals. The seminal work of Emile Durkheim lends KM a primary concern with “social facts”—the observable behaviours at the root of human interaction. Rather than relying on theory, KM is preoccupied with studying how people actually share, learn, and use knowledge. KM arose from these disciplinary cornerstones in the early 1990s, when an increased emphasis on the creation, dissemination and utilization of organizational knowledge in professional and scholarly literature identified a growing need for a systematic approach to managing information and expertise in firms. Laurence Prusak identifies three social and economic trends that make KM essential in any organization today: globalization, ubiquitous computing and “the knowledge-centric view of the firm” (1002). Prusak’s description of globalization in particular emphasizes the necessity of staying current; information technology has resulted in a “speeding up” of all elements of global trade, as well as an increase in the “reach” of organizations. Twitter is a technology that can facilitate this necessity.

Any number of examples demonstrate how Twitter fulfills the requirements of KM that I have described. In terms of leveraging group and individual interactions based on “social facts”, we can consider the role Twitter played in the recent revolution in Egypt. Protesters on the ground in Cairo published updates about the conflicts they faced, giving the crisis international exposure it might otherwise not have had. Following the government’s failed attempt to block Twitter—itself evidence of the effectiveness of Twitter for spreading a message—there was overwhelming support from around the world for the protesters against President Mubarak’s regime. This global support, along with the grassroots reporting of Egyptian demonstrators, certainly contributed to Mubarak’s ultimate resignation from office. This example shows how the knowledge of individuals in a particular context spread to other communities, and how this in turn inspired a global movement, based on the ever-expanding network of interactions through this particular social tool. The “social fact” inherent in Twitter is how human interaction manifests around these short bursts of highly contextual information, and how communities take shape by engaging in the same and other closely related contextual spaces.

An example of how Twitter facilitates the transfer of tacit knowledge might be the way events are recorded and experienced through it. Take, for instance, the recent SXSW Conference and Festival in Austin, TX, a yearly event that is recognized worldwide as a showcase of music, films and emerging technologies; a Twitter search for “#SXSW” reveals a host of users recording their experience through a variety of media—text describing talks, shows and screenings combined with links to photos, videos, and websites that together form an image of the event. These individuals’ experiences might not otherwise be expressible without a tool like Twitter that facilitates the blending of online multimedia. Moreover, the combined force of a community of users sharing these experiences at the same time can provide a comprehensive panorama of what they are hearing, seeing, and learning. In this way, Twitter allows tacit knowledge to be codified for mass consumption.

Measuring the impact of Twitter and how knowledge circulates through the network is not a simple task. Perhaps the most effective means we have today is the application of web and text analytics to social media. Several companies have recently achieved success in this area, based on textual data (e.g. lexical analysis, natural language processing), user data (e.g. demographics, geographic data), and traffic data (e.g. clickstream, page views, number of followers/subscribers, replies and retweets) mined from social media websites. The Canadian company Sysomos has used its MAP (Media Analysis Platform) to provide in-depth analysis of how people, products and brands are effectively marketed through Twitter and other social media tools. One reviewer describes MAP as follows:

MAP can, for example, tell you that the largest number of Twitter users who wrote about the Palm Pre come from California and Great Britain, as well as who the most authoritative Twitter users who tend to tweet about the Pre are (MAP assigns a score from 1 to 10 to every Twitter user, based on the number of followers, replies, retweets, etc.). Of course, you can then also compare these results with results from a query for ‘iPhone,’ for example. (Lardinois, 2009)
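Sysomos does not publish its scoring formula, but a toy version of such an authority score is easy to imagine. The sketch below is purely illustrative: the weights, the log-scaling, and the mapping onto a 1-to-10 scale are my own assumptions, not MAP’s actual method.

```python
# A hypothetical authority score in the spirit of MAP's 1-10 rating,
# combining follower count with engagement (replies and retweets).
# Weights and scaling are invented for illustration.
import math

def authority_score(followers, replies, retweets):
    # Log-scale each signal so a huge follower count cannot completely
    # swamp the engagement signals.
    raw = (0.5 * math.log10(1 + followers)
           + 0.25 * math.log10(1 + replies)
           + 0.25 * math.log10(1 + retweets))
    # Squeeze the raw value onto a 1-to-10 scale.
    return max(1, min(10, round(1 + 1.5 * raw)))

print(authority_score(followers=1_600_000, replies=900, retweets=4000))  # high (8)
print(authority_score(followers=80, replies=5, retweets=2))              # low (3)
```

Real systems weigh many more signals, but the principle is the same: reduce a user’s observable traffic data to a single comparable number.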

MAP, in fact, was used for an analysis of users during the crisis in Egypt. Some of the visualizations of this data are available online[1]. A recent study comparing social media monitoring software identified five key categories that need to be considered to appropriately measure the effectiveness of a social media tool (FreshMinds Research, 2010):

  1. Coverage – Types of media available based on geographic coverage.
  2. Sentiment analysis – The attitude of the speaker/writer with respect to the topic, based on tone (a minimal sketch of this follows the list).
  3. Location of conversations
  4. Volume of conversations
  5. Data-latency – The speed at which conversations are collected by a tool, based on the frequency of its web crawlers and the length of time it takes the monitoring tool to process the data.
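To see why sentiment analysis is the hardest of these to get right, consider the simplest possible implementation: counting positive and negative words against a fixed lexicon. The word lists below are invented for illustration, and the second example shows the kind of error (missed negation) that makes naive approaches unreliable.

```python
# A minimal lexicon-based sentiment scorer. Word lists are illustrative
# assumptions; real tools use far larger lexicons and statistical models.
POSITIVE = {"love", "great", "amazing", "good", "happy"}
NEGATIVE = {"hate", "awful", "terrible", "bad", "sad"}

def sentiment(tweet):
    words = [w.strip(".,!?") for w in tweet.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Love the great talks at #SXSW!"))  # positive
print(sentiment("I do not love the long lines."))   # also "positive": negation is missed
```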

As the researchers who undertook the study indicate, the possibilities for such data, from both a qualitative and a quantitative perspective, are “huge”. Social media monitoring allows us to examine any number of factors in the learning and communicative process as it is manifested through social media technologies, “from category choices to the lifestyles of different segments”, at an individual or an aggregate level (ibid.). The research group also identifies areas in which social media monitoring needs to improve—particularly within the realm of sentiment analysis, where the monitoring tools are not yet sophisticated enough to provide an accurate measure. While Twitter in itself can be thought of as an organizational practice for knowledge-sharing, monitoring tools serve as Arrow’s organizational practices for managing the learning process. Based on the analysis that such monitoring tools—like Sysomos’ MAP—can provide, organizations and individuals can make more effective use of Twitter.

It is clear that Twitter, used correctly, can be of huge benefit for the effective creation and dissemination of knowledge. Organizations that are prepared to invest the time and energy in a sound social media plan to improve KM would be remiss not to include a presence on Twitter. On the other hand, this technology poses many risks for organizations, particularly in the realm of e-Discovery. The fact that content published to Twitter resides on the service’s servers, and not in the hands of the organization, must be an important factor in any organization’s KM assessment. Twitter is perhaps most useful for not-for-profit organizations that have a mandate for advocacy and public promotion (take, for instance, SXSW). It is also useful for individuals with a professional interest in promotional or informational knowledge-sharing (such as consultants, agents, performers, journalists and salespeople), or as members of an existing community (like our motorcycle enthusiast). The professional and the social are not easily distinguished on Twitter, which can be both a benefit and a curse for users, as we have seen. Finally, while the information shared on Twitter might seem “inconsequential” to some, to others it can be very valuable. It is this value that KM needs to harness in order to make effective use of Twitter.


[1] Visualizations for the Twitter data related to the crisis in Egypt can be found at http://mashable.com/2011/02/01/egypt-twitter-infographic/. For a compelling overview of the sort of data Sysomos has analyzed with respect to Twitter, an indispensable resource is their report “Inside Twitter: An in-depth look inside the Twitter World”, 2009: http://www.sysomos.com/docs/Inside-Twitter-BySysomos.pdf

Bibliography

Arrow, K. (1962, June). The Economic Implications of Learning by Doing. Review of Economic Studies 29(3), 153-73.

Durkheim, E. (1982). The Rules of the Sociological Method, Ed. S. Lukes. Trans. W.D. Halls. New York: Free Press.

FreshMinds Research. (2010, May 14). Turning conversations into insights: A comparison of Social Media Monitoring Tools. [A white paper from FreshMinds Research, http://www.freshminds.co.uk.] Retrieved on March 22, 2011 from http://shared.freshminds.co.uk/smm10/whitepaper.pdf

Lardinois, F. (2009, June 4). Pro Tools for Social Media Monitoring and Analysis: Sysomos Launches MAP and Heartbeat. ReadWriteWeb.com. Retrieved on March 22, 2011 from http://www.readwriteweb.com/archives/pro_tools_for_social_media_sysomos_launches_map_and_heatbeat.php

Polanyi, M. (1966). The Tacit Dimension. London: Routledge & Kegan Paul.

Prusak, L. (2001). Where did knowledge management come from? IBM Systems Journal, 40(4), 1002-1007.

Sarno, D. (2009, February 18). Twitter creator Jack Dorsey illuminates the site’s founding document. Part I. Los Angeles Times. Retrieved September 24, 2010 from http://latimesblogs.latimes.com/technology/2009/02/twitter-creator.html

Needling the Old Guard: XML in Prosopography

For the last few weeks we have been discussing the ongoing debate in the digital humanities between textual markup and databases. Reading K.S.B. Keats-Rohan’s “Prosopography for Beginners” on her Prosopography Portal (http://prosopography.modhist.ox.ac.uk/index.htm), I found it interesting that the tutorial focuses initially and primarily on markup. Essentially, Keats-Rohan outlines three stages to prosopography:
1. “Data modelling”—For Keats-Rohan, this stage is accomplished by marking up texts with XML tags “to define the group or groups to be studied, to determine the sources to be used from as wide a range as possible, and to formulate the questions to be asked.” It does far more than that, however, since the tags identify the particular features of sources that need to be recorded. Keats-Rohan covers this activity extensively with eleven separate exercises, each with its own page.
2. “Indexing”—This stage calls for the creation of indexes based on the tag set or DTD developed in stage one. These indexes collect specific types of information, such as “names”, “persons” and “sources”. They are then massaged, with the addition of biographical data, into a “lexicon”, to which a “questionnaire” (i.e. a set of questions with which to query your data points) is applied. Ideally, it is suggested, this is done through the creation of a relational database with appropriately linked tables. A single page is devoted to the explanation of this stage, with the following apology:

It is not possible in the scope of this tutorial to go into detail about issues relating to database design or software options. Familiarity with the principles of a record-and-row relational database has been assumed, though nothing more complex than an Excel spreadsheet is required for the exercises.

…Eleven lengthy exercises for XML, but you’re assumed to appreciate how relational databases work by filling out a few spreadsheets?
3. “Analysis”—This is, of course, the work of the researcher once the data collection is complete. This section of the tutorial consists of a slightly longer page than stage two’s, with four sample exercises designed to teach users how prosopographical analysis can be conducted.
It strikes me as incongruous that, for a research method that relies so heavily on the proper application of a relational database model, so little time is devoted to discussing its role in processing data. Instead, Keats-Rohan devotes the majority of her tutorial to formulating an XML syntax that, when all is said and done, only adds an unnecessary level of complexity to processing source data. You could quite easily do away with stage one entirely, create your index categories in stage two as database tables, and process (or “model”) your data at that point, simply by entering it into your database. What purpose does markup serve as a means of organizing your content, if you’re just going to reorganize it into a more versatile database structure?
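To make that alternative concrete, here is a sketch of the database-first workflow I am describing, using SQLite. The table names and sample records are invented for illustration; they are not drawn from Keats-Rohan’s tutorial.

```python
# A sketch of skipping the markup stage: the "index categories" of stage
# two become tables, and data entry replaces tagging. Schema and sample
# data are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE sources  (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE persons  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE mentions (person_id INTEGER REFERENCES persons(id),
                       source_id INTEGER REFERENCES sources(id),
                       office TEXT);
""")
db.execute("INSERT INTO sources VALUES (1, 'Charter S 1428')")
db.execute("INSERT INTO persons VALUES (1, 'Wulfstan')")
db.execute("INSERT INTO mentions VALUES (1, 1, 'archbishop')")

# Stage two's "questionnaire" then becomes a plain query over linked tables.
for name, title, office in db.execute("""
        SELECT p.name, s.title, m.office
        FROM mentions m
        JOIN persons p ON p.id = m.person_id
        JOIN sources s ON s.id = m.source_id"""):
    print(name, "appears in", title, "as", office)
```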
Keats-Rohan’s focus on markup starkly emphasizes how much more highly humanities scholars value XML than databases. Since the two are useful for quite different purposes, and relational databases have so much to offer humanities scholarship—as prosopographies prove—I am baffled that such a bias persists.

The Implications of Database Design

In studying the database schema for the Prosopography of Anglo-Saxon England (PASE), several features of the design are immediately apparent[1]. Data is organized around three principal tables, or data points: the Person (i.e. the historical figure mentioned in a source), the Source (i.e. a text or document from which information about historical figures is derived), and the Factoid (i.e. the dynamic set of records associated with a particular reference in a source about a person). There are a number of secondary tables as well, such as the Translation, Colldb and EditionInfo tables, which provide additional contextual data to the source, and the Event, Person Info, Status, Office, Occupation and Kinship tables, among others, which provide additional data to the Factoid table. Looking at these organizational structures, it is clear that the database is designed to pull out information about historical figures based on Anglo-Saxon texts. I admire the versatility of the design and the way it interrelates discrete bits of data (even more impressive when tested using the web interface at http://www.pase.ac.uk), but I can’t help but recognize an inherent bias in this structure.

In reading John Bradley and Harold Short’s article “Using Formal Structures to Create Complex Relationships: The Prosopography of the Byzantine Empire—A Case Study”, I found myself wondering at the choices made in the design of both databases. The PBE database structure appears to be very similar, if not identical, to that of PASE. Perhaps it’s my background as an English major—rather than a History major—but I found it especially unhelpful in one particular instance: how do I find and search the information associated with a unique author? With its focus on historical figures written about in sources, rather than the authors of those sources, the creators made a conscious choice to value historical figures over authors and sources. To be fair, the structure does not necessarily preclude the possibility of searching author information, which appears in the Source table, and there is likely something to be said about the anonymous and possibly incomplete nature of certain Anglo-Saxon texts. In the PASE interface, the creators appear to have resolved this issue somewhat by allowing users to browse by source and listing the author’s name in place of the title of the source (which, no doubt, is done by default when the source document has no official title). It is then possible to browse references within the source and to match the author’s name to a person’s name[2].

The decision to organize information in this way, however, de-emphasizes the role of the author and his historical significance, and reduces him to a faceless and neutral authority. This may be to facilitate interpretation; Bradley and Short discuss the act of identifying factoid assertions about historical figures as an act of interpretation, in which the researcher must make a value judgment about what the source is saying about a particular person (8). Questions about the author’s motives would only problematize this act. The entire organization of the database, in fact, results in the almost complete erasure of authorial intent.
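To illustrate the point, here is a deliberately simplified sketch of a PASE-style schema. The real schema is far richer, and the table and column names below are my own assumptions based on my reading of the design, not the actual PASE structure. It shows how authorship sits off to the side of the Person, Source and Factoid tables: querying what sources say about a person is the natural operation, while linking an author to his own Person record requires an ad hoc match.

```python
# A simplified, assumed PASE-style schema: Person, Source, Factoid.
# "Author" lives only as an attribute of a source, so author-centred
# queries have no first-class path through the schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE person  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE source  (id INTEGER PRIMARY KEY, title TEXT, author TEXT);
CREATE TABLE factoid (id INTEGER PRIMARY KEY,
                      person_id INTEGER REFERENCES person(id),
                      source_id INTEGER REFERENCES source(id),
                      assertion TEXT);
""")
db.execute("INSERT INTO person VALUES (1, 'Aldhelm 3')")
db.execute("INSERT INTO source VALUES (1, 'Carmina ecclesiastica', 'Aldhelm')")
db.execute("INSERT INTO factoid VALUES (1, 1, 1, 'bishop of Malmesbury')")

# The schema's natural query: what do the sources say about a person?
print(db.execute("""SELECT s.title, f.assertion FROM factoid f
                    JOIN source s ON s.id = f.source_id
                    WHERE f.person_id = 1""").fetchall())

# Linking an author to his own Person record, by contrast, needs an
# ad hoc join on names -- exactly the indirection described above.
print(db.execute("""SELECT p.name, s.title FROM source s
                    JOIN person p ON p.name LIKE s.author || ' %'""").fetchall())
```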
What this analysis of PASE highlights for me is how important it is to be aware of the implications of our choices in designing databases and creating database interfaces. The creators of PASE might not have intended to render the authors of their sources so impotent, but the decisions they made in the construction of their database tables, in the design of the user interface, and in the approach to entering factoid data had that ultimate result.

Bradley, J. and Short, H. (n.d.). Using Formal Structure to Create Complex Relationships: The Prosopography of the Byzantine Empire. Retrieved from http://staff.cch.kcl.ac.uk/~jbradley/docs/leeds-pbe.pdf

PASE Database Schema. (n.d.). [PDF]. Retrieved from http://huco.artsrn.ualberta.ca/moodle/file.php/6/pase_MDB4-2.pdf

Prosopography of Anglo-Saxon England. (2010, August 18). [Online database]. Retrieved from http://www.pase.ac.uk/jsp/index.jsp


[1] One caveat: As I am no expert, what is apparent to me may not be what actually is.  This analysis is necessarily based on what I can understand of how PASE and PBE are designed, both as databases and as web interfaces, and it’s certainly possible I’ve made incorrect assumptions based on what I can determine from the structure.  Not unlike the assumptions researchers must make when identifying factoid assertions (Bradley & Short, 8).
[2] For example, clicking “Aldhelm” as a source will list all the persons found in Aldhelm, including Aldhelm 3, bishop of Malmesbury, the eponymous author of the source (or rather, collection of sources). Clicking Aldhelm 3 will bring up the Person record, or factoid: Aldhelm as historical figure. The factoid lists all of the documents attributed to him under “Authorship”. Authorship, incidentally, is a secondary table linked to the Factoid table; based on the structure, it seems this information is derived from the Colldb table, which links to the Source table. All this to show that it is possible, but by no means evident, to search for author information.

The Commonplace Book—extinct form of critical reading and sensemaking?

I found Robert Darnton’s chapter on the Renaissance tradition of the commonplace book an interesting insight into how people made—and make—sense of what they read. It made me wonder how this tradition of reading has changed over time. Darnton suggests that today’s reader has learned to read sequentially, while the early modern reader read segmentally, “concentrating on small chunks of text and jumping from book to book” (169). The implication is that, in this transformation of practice, we have lost a critical approach to reading. The commonplace book, as Darnton describes it, was a place where early modern readers collected bits and pieces of texts alongside personal reflections on their significance (149). This activity was a hybrid of reading and writing, making an author of the reader and serving as a method of “Renaissance self-fashioning”: the grasping for a humanist understanding of the autonomous individual (170). Arguably, in adopting a sequential mode of reading and forgetting the practice of the commonplace book, we have lost a useful tool for making sense of the world and of ourselves.

At the beginning of the chapter, Darnton makes a curious allusion to the present reality, the Digital Age. He writes: “Unlike modern readers, who follow the flow of a narrative from beginning to end (unless they are digital natives and click through texts on machines), early modern Englishmen read in fits and starts and jumped from book to book” [emphasis my own] (149). Clearly he is referring to hypertextual practice, the connective structure of texts on the Web that are joined through a network of inter-referential links and provoke a non-sequential mode of reading. The Web has initiated a number of changes in how we read, write, create and make sense of texts. Hypertextuality is certainly one of them, but I think Darnton only touches the tip of the iceberg with this passing reference. While the commonplace book as a genre might be extinct, new hybrid forms of critical reading/writing have taken its place. Take, for instance, the blogging phenomenon. Many people today write blogs on a vast variety of subjects. Most represent critical responses to other media—articles, videos, images, stories, other blog posts. They are the commonplace book of the digital native. The difference is that the digital native’s commonplace book is accessible to all, and (more often than not) searchable. Consider also the phenomenon of microblogging in the form of Twitter. As an example, I am going to look at my own Twitter feed (http://twitter.com/eforcier; I have attached a page with specific examples). In 140-character segments I carry on conversations, post links to online documents and express my reactions to such texts. It is, in fact, perfectly possible to consider a 21st-century individual’s Twitter feed analogous to the early modern reader’s commonplace book. These activities represent a far more complex mode of reading than Darnton assigns the contemporary reader. It is a type of reading that is at times segmental, at times sequential, but is remarkable for the interconnectivity of sources and the critical engagement of the reader that it represents. What is most interesting is that, rather than emphasizing the notion of the autonomous individual, these digital modes of reading/writing emphasize collectivity and community—what could be described as a “Posthuman self-fashioning”.

 

Darnton, R. (2009).  The Case for Books. New York: PublicAffairs.  219p.

***

I have not included the appendix of selected tweets that was submitted along with this assignment, but I’m sure you’ll get the gist by viewing my Twitter page: http://www.twitter.com/eforcier

A Nostalgic Look Back: Cloning

Whatever happened to cloning?

No, no, this is a legitimate question. I remember, about ten years ago, maybe a little more, there was a buzz around ‘cloning’ as the next big scientific development. I was in high school at the time, and I recall devouring every news story I could get my hands on about Dolly the sheep, the first mammal cloned from an adult cell. I imagined a future in which the tiniest bit of our genetic material could be used to replicate life, and pondered the murky ethics that arose from this. And then time passed, and the whole craze just sort of faded away.

I was reminded of this in reading the 2003 edition of Robert Pepperell’s The Posthuman Condition: Consciousness Beyond the Brain, in preparation for my term paper. In the preface, Pepperell mentions with much urgency developments in the field of genetics, and cloning specifically, and what these might mean for the re-definition of the ‘human’. He references in particular a 2002 article in the Sunday Times about the imminence of the first successful human cloning. (I’m fuzzy on this point, but I suspect my lack of memory suggests it wasn’t as successful or as imminent as Pepperell claims.)

So my question is this:  What happened to all the hype about cloning?  Would it have featured importantly in my Posthumanism course had it been offered eight years ago?  Is it strange that cloning hasn’t even gotten the merest mention in class?