
Twitter and the KM Context

[W]e came across the word “twitter,” and it was just perfect. The definition was “a short burst of inconsequential information,” and “chirps from birds.” And that’s exactly what the product was.
– Jack Dorsey (Sarno, 2009)

Twitter, the popular microblogging tool with which users post updates of up to 140 characters, recently celebrated its fifth anniversary. In the world of fly-by-night Web 2.0 applications, that makes it a well-established and time-tested social technology. What has contributed to Twitter’s success? Why is it such a popular tool?

As its co-founder, Jack Dorsey, suggests in the quotation above, Twitter is a place where users can publish short bursts of information and share them with a larger community. It is “the best way to discover what’s new in your world”, reads the website’s about page (http://twitter.com/about). Still, users unfamiliar with the platform or dubious about this claim might wonder precisely how this tool can be productive. After all, Dorsey’s endorsement is not exactly inspiring: what good is information if it is inconsequential? What makes Twitter such a powerful tool, from both a knowledge management or business perspective and the broader context of information-sharing, is that it operates in real time. It allows members of communities of practice to track relevant news and share important events as they happen. This crowdsourced approach to information means that users who follow other users publishing information relevant to their community of practice can keep their finger on the pulse—an extremely valuable commodity in a world that is increasingly knowledge-centric. Similarly, these users can participate in a live, public conversation within a global network of peers, encouraging an ongoing exchange of knowledge. More importantly, the simple premise of “following” (in other words, subscribing to other users’ feeds) allows complete personalization, while creating links between users that shape community networks organically, rhizomatically.

Another advantage of Twitter is that it is highly extensible. Twitter provides an API (Application Programming Interface) that allows customized software to be built around the basic platform. In this way, users can log in to their accounts using third-party software like TweetDeck, which lets them organize and view tweets in a variety of ways; the same characteristic allows developers to build widgets that publish tweets on websites and blogs. Viewed as much as a disadvantage as an advantage, the 140-character limit on updates forces users to state a single idea clearly and concisely. The limit originally derived from the 160-character limit on SMS text messages, with 20 characters reserved for the user name, since the founders considered the cell phone the principal technology for using the service. Soon after the service went public, most phones could work around that limitation on text messages, but by then users had discovered that the character limit was the ideal length for a short status update. The limitation also distinguishes Twitter from other casual blogging services such as Tumblr, which, no doubt, helped promote the service as a brand; while sometimes inconvenient for users with longer, more elaborate messages, it makes Twitter unique as a social media tool.
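
To make the API concrete, here is a minimal sketch of posting an update programmatically, using the third-party Python library tweepy. The credentials are placeholders, and the explicit 140-character check simply mirrors the limit described above:

    # A minimal sketch: post a status update through Twitter's REST API
    # via the third-party tweepy library. Credentials are placeholders.
    import tweepy

    MAX_LENGTH = 140  # Twitter's per-update character limit

    def post_update(api, text):
        """Post a status update, refusing anything over the limit."""
        if len(text) > MAX_LENGTH:
            raise ValueError("update is %d characters; the limit is %d"
                             % (len(text), MAX_LENGTH))
        return api.update_status(status=text)

    # Keys are obtained by registering an application with Twitter.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    api = tweepy.API(auth)

    post_update(api, "Reading about Twitter and knowledge management #KM")

It is precisely this kind of programmatic access that third-party clients like TweetDeck are built on.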

A definite disadvantage of this technology, as with many social media technologies, is the public nature of updates and the murky notion of intellectual property. Twitter is perhaps more volatile in this sense than similar technologies like blogs or wikis, which require more thoughtful consideration before publishing. The brevity of tweets makes it easy for users to submit whatever they happen to be thinking or seeing, regardless of legal considerations such as intellectual property or copyright, and updates are published immediately, without any opportunity for review before they go live. This can be problematic, particularly for high-profile users. One dramatic example, though certainly not the only one, is the tweet that resulted in the termination of CNN correspondent Octavia Nasr, who was fired in 2010 for publishing an update expressing regret over the death of the Grand Ayatollah Mohammad Hussein Fadlallah, a leader of Hezbollah. Twitter also poses a problem for e-Discovery that courts around the world have not yet come to terms with.

To provide a nuts-and-bolts explanation of how Twitter works and to illustrate its practicality, consider the following scenario: You are interested in motorcycles and want current information about promotions, events, and people in your area related to that interest. You create an account on Twitter.com and search the website for like-minded users. Scanning through user profiles, you decide to follow users representing several motorcycle dealers in your city, a couple of motorcycle riding clubs, a national motorcycle news magazine, and a number of individuals who identify themselves as “motorcycle enthusiasts”. You begin receiving these users’ updates (or tweets) and start to learn about the local motorcycle community. After a few days of reading tweets, you learn that there is going to be a bike show and that several of the users will be attending. You are unable to attend the bike show yourself, but you get to experience it through the tweets of your fellow users, who describe the event and post pictures of different models on display. You are able to engage some of these users, asking them questions about the event as it takes place. You also discover that there is a hashtag that Twitter users are using to identify tweets about the event, and by searching all tweets that include that hashtag you discover several more users to follow (a toy sketch of this kind of hashtag search appears below). In this way information is exchanged, and you develop relationships with other members of the community that you might otherwise not have had. Now consider this same scenario in a different context: you have recently opened a motorcycle shop. Used in the same way, Twitter becomes a valuable social tool for promoting yourself or your company, in addition to acquiring and sharing useful information.
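
The hashtag mechanism itself is simple enough to sketch in a few lines of Python. The tweets, users, and the #YEGBikeShow tag below are invented for illustration; the point is only that a shared tag lets anyone pull an event’s conversation, and its participants, out of the stream:

    # A toy illustration (not Twitter's actual search API): filtering a
    # stream of tweets by hashtag and collecting the users behind them.
    tweets = [
        {"user": "moto_news",    "text": "Doors open at 10 for the bike show #YEGBikeShow"},
        {"user": "rider_jane",   "text": "The new touring models look fantastic #YEGBikeShow"},
        {"user": "cafe_racer99", "text": "Rebuilding my carburetor this weekend"},
    ]

    def users_for_hashtag(tweets, hashtag):
        """Return the set of users whose tweets mention the given hashtag."""
        tag = hashtag.lower()
        return {t["user"] for t in tweets if tag in t["text"].lower()}

    print(users_for_hashtag(tweets, "#YEGBikeShow"))
    # {'moto_news', 'rider_jane'} -- new users you might choose to follow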

Knowledge management (KM) resides in an interesting interdisciplinary space, somewhere between sociology, philosophy, and economics. In his 1962 article, “The Economic Implications of Learning by Doing”, Nobel Prize-winning economist Kenneth Arrow clearly states the necessity for organizational practices that manage the learning process; the economics of KM are concerned with breaking down and quantifying this process. In The Tacit Dimension (1966), Michael Polanyi describes the concept of “tacit knowing”: knowledge that deals with the implicit nature of human experience, skill, and action is tacit, while knowledge codified and transmittable through language is explicit. Polanyi’s epistemological model serves as the fundamental principle of KM, distinguishing knowledge from the concepts of information and data. The sociological underpinnings of KM provide a sound basis for understanding “knowledge” as a concept and its notably various manifestations, while also giving us a framework for making sense of how knowledge circulates within communities and through individuals. The seminal work of Émile Durkheim lends KM a primary concern with “social facts”—the observable behaviours at the root of human interaction. Rather than relying on theory, KM is preoccupied with studying how people actually share, learn, and use knowledge.

KM arose from these disciplinary cornerstones in the early 1990s, when an increased emphasis on the creation, dissemination, and utilization of organizational knowledge in professional and scholarly literature identified a growing need for a systematic approach to managing information and expertise in firms. Laurence Prusak identifies three social and economic trends that make KM essential in any organization today: globalization, ubiquitous computing, and “the knowledge-centric view of the firm” (1002). Prusak’s description of globalization in particular emphasizes the necessity of staying current; information technology has resulted in a “speeding up” of all elements of global trade, as well as an increase in the “reach” of organizations. Twitter is one technology that can help meet this necessity.

Any number of examples demonstrate how Twitter fulfills the requirements of KM that I have described. In terms of leveraging group and individual interactions based on “social facts”, we can consider the role Twitter played in the recent revolution in Egypt. Protesters on the ground in Cairo published updates about the conflicts they faced, giving the crisis international exposure it might otherwise not have had. Following the government’s failed attempt to block Twitter—evidence in itself of the effectiveness of Twitter for spreading a message—there was overwhelming support from around the world for the protesters against President Mubarak’s regime. This global support, along with the grassroots reporting of Egyptian demonstrators, certainly contributed to Mubarak’s ultimate resignation from office. The example shows how the knowledge of individuals in a particular context spread to other communities, and how this in turn inspired a global movement—based on the ever-expanding network of interactions through this particular social tool. The “social fact” inherent in Twitter is how human interaction manifests around these short bursts of highly contextual information, and how communities take shape by engaging in the same and other closely related contextual spaces.

An example of how Twitter facilitates the transfer of tacit knowledge is the way events are recorded and experienced through it. Take, for instance, the recent SXSW Conference and Festival in Austin, TX, a yearly event recognized worldwide as a showcase of music, film, and emerging technologies. A Twitter search for “#SXSW” reveals a host of users recording their experience through a variety of media: text describing talks, shows, and screenings, combined with links to photos, videos, and websites that together form an image of the event. These individuals’ experiences might not be expressible at all without a tool like Twitter that facilitates the blending of online multimedia. Moreover, the combined force of a community of users sharing these experiences at the same time can provide a comprehensive panorama of what they are hearing, seeing, and learning. In this way, Twitter allows tacit knowledge to be codified for mass consumption.

Measuring the impact of Twitter and how knowledge circulates through the network is not a simple task. Perhaps the most effective means we have today is the application of web and text analytics to social media. Several companies have recently achieved success in this area, working from textual data (e.g. lexical analysis, natural language processing), user data (e.g. demographics, geographic data), and traffic data (e.g. clickstreams, page views, numbers of followers/subscribers, replies and retweets) mined from social media websites. The Canadian company Sysomos has developed MAP (Media Analysis Platform) to provide in-depth analysis of how people, products, and brands are effectively marketed through Twitter and other social media tools. One reviewer describes MAP as follows:

MAP can, for example, tell you that the largest number of Twitter users who wrote about the Palm Pre come from California and Great Britain, as well as who the most authoritative Twitter users who tend to tweet about the Pre are (MAP assigns a score from 1 to 10 to every Twitter user, based on the number of followers, replies, retweets, etc.). Of course, you can then also compare these results with results from a query for ‘iPhone,’ for example. (Lardinois, 2009)
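
Sysomos does not publish the formula behind that 1-to-10 score, but the general shape of such a metric is easy to sketch. The weights and the logarithmic scaling below are invented purely for illustration; they simply reward reach (followers) and engagement (replies, retweets) with diminishing returns:

    # A hypothetical authority score in the spirit of MAP's 1-10 rating.
    # The weights and log scaling are invented, not Sysomos's actual method.
    import math

    def authority_score(followers, replies, retweets):
        """Map raw engagement counts onto a 1-10 scale (illustrative only)."""
        raw = (0.5 * math.log1p(followers)
               + 0.3 * math.log1p(replies)
               + 0.2 * math.log1p(retweets))
        return max(1, min(10, int(round(raw))))

    print(authority_score(followers=12000, replies=340, retweets=85))  # -> 7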

MAP, in fact, was used to analyze Twitter activity during the crisis in Egypt; some visualizations of this data are available online.[1] A recent study comparing social media monitoring software identified five key categories that need to be considered to measure the effectiveness of a social media tool appropriately (FreshMinds Research, 2010):

  1. Coverage – The range of media types a tool monitors and the geographic regions it covers.
  2. Sentiment analysis – The attitude of the speaker/writer with respect to the topic, based on tone.
  3. Location of conversations – Where the conversations a tool captures are taking place, both by website and by region.
  4. Volume of conversations – How much conversation on a given topic a tool captures.
  5. Data-latency – The speed at which conversations are collected by a tool, based on the frequency of its web crawlers and the length of time it takes the monitoring tool to process the data.

As the researchers who undertook the study indicate, the possibilities for such data, from both a qualitative and a quantitative perspective, are “huge”. Social media monitoring allows us to examine any number of factors in the learning and communicative process as it manifests through social media technologies, “from category choices to the lifestyles of different segments”, at an individual or an aggregate level (ibid.). The research group also identifies areas in which social media monitoring needs to improve—particularly sentiment analysis, where the monitoring tools are not yet sophisticated enough to provide an accurate measure. While Twitter in itself can be thought of as an organizational practice for knowledge-sharing, the application of monitoring tools can be thought of as Arrow’s organizational practices for managing knowledge. Based on the analysis that such monitoring tools—like Sysomos’ MAP—can provide, organizations and individuals can make more effective use of Twitter.
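
To see why sentiment analysis remains the weak point, consider the naive, lexicon-based approach sketched below. The word lists are invented and real tools are far more elaborate, but the core difficulty is visible even here: simple word counting has no grasp of negation, sarcasm, or context.

    # A naive lexicon-based sentiment scorer (illustrative only).
    import string

    POSITIVE = {"love", "great", "excellent", "happy", "fantastic"}
    NEGATIVE = {"hate", "awful", "broken", "angry", "terrible"}

    def sentiment(text):
        """Classify text as positive/negative/neutral by word counting."""
        words = [w.strip(string.punctuation) for w in text.lower().split()]
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

    print(sentiment("I love the new phone, great battery"))  # positive
    print(sentiment("Not great, the screen is broken"))
    # neutral -- the counts cancel out, and the negation is missed entirely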

It is clear that Twitter, used well, can be a huge benefit to the effective creation and dissemination of knowledge. Organizations prepared to invest the time and energy in a sound social media plan to improve KM would be remiss not to include a presence on Twitter. On the other hand, the technology poses many risks for organizations, particularly in the realm of e-Discovery: the fact that content published to Twitter resides on the website’s servers, and not in the hands of the organization, must be an important factor in any organization’s KM assessment. Twitter is perhaps most useful for not-for-profit organizations with a mandate for advocacy and public promotion (take, for instance, SXSW), and for individuals with either a professional interest in promotional or informational knowledge-sharing (such as consultants, agents, performers, journalists, and salespeople) or membership in an existing community (like our motorcycle enthusiast). The professional and the social are not easily distinguished on Twitter, which can be both a benefit and a curse for users, as we have seen. Finally, while the information shared on Twitter might seem “inconsequential” to some, to others it can be very valuable. It is this value that KM needs to harness in order to make effective use of Twitter.


[1] Visualizations for the Twitter data related to the crisis in Egypt can be found at http://mashable.com/2011/02/01/egypt-twitter-infographic/. For a compelling overview of the sort of data Sysomos has analyzed with respect to Twitter, an indispensable resource is their report “Inside Twitter: An in-depth look inside the Twitter World”, 2009: http://www.sysomos.com/docs/Inside-Twitter-BySysomos.pdf

Bibliography

Arrow, K. (1962, June). The Economic Implications of Learning by Doing. Review of Economic Studies 29(3), 153-73.

Durkheim, E. (1982). The Rules of Sociological Method. Ed. S. Lukes. Trans. W.D. Halls. New York: Free Press.

FreshMinds Research. (2010, May 14). Turning conversations into insights: A comparison of Social Media Monitoring Tools. [A white paper from FreshMinds Research, http://www.freshminds.co.uk.] Retrieved on March 22, 2011 from http://shared.freshminds.co.uk/smm10/whitepaper.pdf

Lardinois, F. (2009, June 4). Pro Tools for Social Media Monitoring and Analysis: Sysomos Launches MAP and Heartbeat. ReadWriteWeb.com. Retrieved on March 22, 2011 from http://www.readwriteweb.com/archives/pro_tools_for_social_media_sysomos_launches_map_and_heatbeat.php

Polanyi, M. (1966). The Tacit Dimension. London: Routledge & Kegan Paul.

Prusak, L. (2001). Where did knowledge management come from? IBM Systems Journal, 40(4), 1002-1007.

Sarno, D. (2009, February 18). Twitter creator Jack Dorsey illuminates the site’s founding document. Part I. Los Angeles Times. Retrieved September 24, 2010 from http://latimesblogs.latimes.com/technology/2009/02/twitter-creator.html


Too Much Information, part 2: Recontextualization

The second article I want to discuss, Matthew Aslett’s “Data as a natural energy source”, deals principally with the idea of transforming data—records decontextualized—into products (records recontextualized as commodities). Aslett introduces the concept of the “data factory”, a place where data is “manufactured”. He also frames this in the context of “Big Data”—the current trend of accommodating larger and larger collections of information. The problem is that “Big Data” is useless unless you can process it, analyze it, contextualize it. Aslett suggests that the next big trend will be “Big Data Analytics”, which will focus on harnessing data sources and transforming them into products: assigning meaning to the raw, free-floating information, as it were.

One of the things I like about Aslett’s article is his analogy between data resources and energy resources, comparing the “data factory” with the oil industry. Data is the new oil; usable data can be very valuable, as eBay and Facebook (Aslett’s two main examples) demonstrate. What’s interesting about both eBay and Facebook, and why Aslett draws attention to them in particular, is that they don’t themselves produce the data; they harness pre-existing data streams (the data “pipeline”), building on transactions that already take place, automating these transactions for their users, and parsing the resulting user data into saleable products. In the case of Facebook, this comes in the form of ad revenue from targeted marketing, based on the most comprehensive demographic information available online (a user base of 500+ million); for eBay, it is the combination of transactional and behavioural data that identifies its top sellers and leads to increased revenue for them. If Facebook or eBay didn’t exist, as Aslett points out, people would still communicate, share photos, and buy and sell products. The companies have simply automated the process, acquiring along the way the transaction records associated with such interactions.
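
The mechanics of turning demographic records into a saleable product can be suggested with a toy example. The users, attributes, and advertiser criteria below are entirely invented; the point is how trivially stored profile data becomes an audience one can sell access to:

    # A toy sketch of demographic ad targeting. All data is invented;
    # real systems work at vastly larger scale, but the principle is the
    # same: profile attributes become a filterable, sellable audience.
    users = [
        {"name": "alice", "age": 27, "city": "Edmonton", "interests": {"motorcycles", "jazz"}},
        {"name": "bob",   "age": 45, "city": "Toronto",  "interests": {"golf", "cooking"}},
        {"name": "carol", "age": 31, "city": "Edmonton", "interests": {"motorcycles", "film"}},
    ]

    def audience(users, city=None, interest=None):
        """Select users matching an advertiser's demographic criteria."""
        return [u["name"] for u in users
                if (city is None or u["city"] == city)
                and (interest is None or interest in u["interests"])]

    # An advertiser buys access to Edmonton motorcycle enthusiasts:
    print(audience(users, city="Edmonton", interest="motorcycles"))
    # ['alice', 'carol']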

This makes me wonder about the ownership implications, once again, and about the Facebook terms of use I trotted out in a previous blog entry.  Is it fair for Facebook to profit off your personal information in this way?  To control your data?  Isn’t it a little worrisome that eBay and Amazon track what I buy online well enough to make quite accurate recommendations?  In terms of IAPP discussed in the last class and of David Flaherty’s list of individual rights, it is troubling to consider that, if the countless disparate traces of me online were somehow pulled together and processed, someone could construct a reasonable facsimile of me, my personality, my identity.  And isn’t this what Aslett is really talking about when he uses the word “analytics”?

Aslett, M. (2010, November 18). Data as a natural energy source. Too much information. Retrieved on November 26, 2010 from http://blogs.the451group.com/information_management/2010/11/18/data-as-a-natural-energy-source/

The Flipside of Information-Control on the Web

Today I read the following article:

On the evening of 25 November, Facebook.com disabled “We Are All Khaled Said” page which got more than 300,000 followers. The page was created after the 28-year-old Egyptian man named Khaled Said was beaten to death in Alexandria by two police officers who wanted to search him under the emergency law, according to El Nadim Center for Rehabilitation of Victims of Violence, local rights group.

The page administrator utilized the page to post updates on the flow of the case before the court and relevant information related to the incident that happened on the 6th of June 2010, as well as mobilizing people to join peaceful assemblies that took place against torture in Egypt and supporting victims of violence. …

http://advocacy.globalvoicesonline.org/2010/11/25/egypt-facebook-disables-popular-anti-torture-page/

This story came to me via Twitter, in a retweet that read “Reminder: making Facebook your publishing platform gives Facebook the right to delete what you say” (DanGillmor, RT by cascio). This reminder reemphasizes the point I keep coming back to about web records: you don’t control your information once it’s on the web.  I’ve spent a lot of time underlining how once someone publishes information on the web it might as well be there forever, particularly in my paper about ECCA and in previous journal entries about Twitter and blogging.  But maybe that’s not entirely accurate, or at least it only illustrates half of the point.

The flipside of the issue of information-control on the web is that whoever owns the rights to the server controls the information, and thus the disposition of the record—the “heaven” of perpetuity and the “hell” of the shredder, as we’ve learned in class (though, when it comes to the web, I suspect in many cases—at least retrospectively—the descriptors “heaven” and “hell” are reversed).  The case of “We Are All Khaled Said” aptly demonstrates how the server owner controls the disposition of information, even when one administrator and 300,000 users lay some intellectual claim to it.  The information can just as easily be destroyed when the author would wish it saved, as saved when he/she would wish it destroyed.

The real point about web records is that whenever you publish information using a third party, such as Twitter, or WordPress, or Facebook, or MySpace, you compromise certain intellectual property rights. Obviously, as a user you can access your web space through these services and add, edit, and delete your information however you like. But the service provider, the server owner, the third party reserves the right to freeze, save, or delete any or all of the content you publish. Typically, this ceding of your intellectual property is written plainly (though often obliquely) in the end-user agreement or statement of terms, the same place you’ll find privacy statements and the clauses that free the third party from any liability. Here’s an example from the Facebook Statement of Rights and Responsibilities:


Sharing Your Content and Information


You own all of the content and information you post on Facebook, and you can control how it is shared through your privacy and application settings. In addition:


1. For content that is covered by intellectual property rights, like photos and videos (“IP content”), you specifically give us the following permission, subject to your privacy and application settings: you grant us a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (“IP License”). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

2. When you delete IP content, it is deleted in a manner similar to emptying the recycle bin on a computer. However, you understand that removed content may persist in backup copies for a reasonable period of time (but will not be available to others).

http://www.facebook.com/?ref=logo#!/terms.php

While the statement claims that you “own” and “control” all of the information on your Facebook page, a careful reading makes it quite evident that Facebook actually controls both the license and the final disposition of all your published content.


This is no different from the deletion of user comments by moderators on news sites or message boards. And that’s why this issue lies in an expanding frontier of grey area: most people would agree that the owner of a website has the right to control what information is published there. But who owns the social network?

Why Jason Hates Cory Doctorow

Here’s an amusing (and mostly accurate) article by an old message board acquaintance of mine about why Cory Doctorow is a “fucking dick”. Not that I’m endorsing his point of view; I don’t mind Cory Doctorow, and I happen to think geek chic is cool (does that make me one of the festering unreal people?). But his description of the Bono issue is, all in all, pretty fair.

I also enjoyed the depiction of Doctorow as a poor man’s Neal Stephenson.

http://www.wetasphalt.com/?q=content/why-i-hate-cory-doctorow

Welcome to 2010, people!

Visualizing Cultural Analytics

HUCO 500 – Weekly Questions:


How would you begin to establish a taxonomy, as Manovich suggests (“Cultural Analytics for Beginners”), for the different types of digital content used in analysing culture? Is such a taxonomy enough?


Manovich claims that we have to turn “culture” into “data”; this, based on these readings, is the basis of ‘Cultural Analytics’. He goes on to define “culture” as “beliefs, ideologies, fashions, and other non-physical properties” (“Visualizing Temporal Patterns in Visual Media”). What sort of data can one derive from such artefacts? More importantly, how does one make sense of the data? What is meaningful?


Readings:

Manovich, Lev. “Cultural Analytics for Beginners.” 2009.

Manovich, Lev and Jeremy Douglass. “Visualizing Temporal Patterns in Visual Media.” 2009.

Common Record and Information Chaos

HUCO 500 – Weekly Questions:

There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. (Bush)

The idea of “common record” as the well of all human knowledge is interesting: could the Internet be thought of as the “common record”?  Does such a profession of “trail blazers” for the ‘Net exist?

The more complex the materials, the more abstract and/or cumbersome the edition becomes. (McGann)

Does the use of computing to create electronic hypertexts (“HyperEditing”) solve this problem, or complicate it further? Even in the period when Bush was writing, the fear of information overload and information chaos was explicitly felt. Does hypertext address this fear, or does it simply introduce new ways in which information chaos can manifest itself?

Readings:

Bush, Vannevar. “As We May Think.” Atlantic Monthly 176.1 (1945): 101-108.

McGann, Jerome. “The Rationale of Hypertext.” ADHO, 1995.

Blind Weigman

I’ve been reading the following article about 19-year-old Matthew Weigman, “one of the best phone-hackers alive”, recently sentenced to 11 years in prison for his crimes. Weigman was born blind. Fascinating stuff.

Wired Threat Level | Blind Hacker Sentenced to 11 Years in Prison

Relying on an ironclad memory and detailed knowledge of the phone system, the teenager is known for using social engineering to manipulate phone company workers and others into divulging confidential information, and into entering commands into computers and telephone switching equipment on his behalf.

The FBI had been chasing Weigman since he was 15 years old, at times courting him as an informant. He was finally arrested in May of last year, less than two months after celebrating his 18th birthday.

Even more interesting is the “factual resume” (read: confession) written by Weigman and his lawyer, describing in detail the various crimes he was charged with and the part he played in them.

…If they haven’t already been sold, I’m betting Hollywood buys the movie rights before the end of the week.