
Crowdsourced Intelligence and You

This post should have gone up ages ago, as part of a course assignment for HUCO 510.  Sometimes you just get side-tracked.  Anyway, this week something happened that gave me the perfect topic to complete my assignment.  Enjoy.

~~

On May 2, 2011, Osama Bin Laden, one of the most feared terrorist leaders in the world, was killed.  Nearly a decade after the September 11 attacks on the World Trade Center in New York, attacks orchestrated by Bin Laden, US Navy SEALs carried out the assassination.  A nation rejoiced.

And, as that nation rejoiced, within minutes of the news being made public on the Internet and on television, all social media websites were abuzz.  One can imagine the sheer volume of the expressions of support, opposition, incredulity, happiness, sadness, congratulations and disgust that flooded the web.  Or, one can simply search “osama” on the Twitter index.  The President would later televise an address to the nation confirming the death of the man who had been cast in the role of nemesis to an entire people and way of life.

It is during these kinds of world-changing events that the most interesting insights about our society are discovered.  Megan McArdle, editor for The Atlantic, made one such discovery, as she browsed her Twitter feed on the fateful day.  One tweet in particular caught her eye.  Being one of Penn Jillette’s 1.6 million followers, she read the following quote, apparently in response to the death of Bin Laden:

“I mourn the loss of thousands of precious lives, but I will not rejoice in the death of one, not even an enemy.” – Martin Luther King, Jr.

Amid the no doubt millions of reactions, some of them shocking, this short sentence at least had the ring of reason.  And it was attributed to perhaps the most famous civil rights activist in North America.  The combination of Jillette’s celebrity as a performer and the quote’s level-headed tone, set against so many less level-headed responses, made it go viral: within hours of it going up on Twitter, many of Jillette’s followers had retweeted the quote, and it had become a trending topic on the social network in the midst of the Bin Laden furor.  McArdle, unlike many others, did not retweet the quote, though she did initially feel the urge to pass it on.  She hesitated, however, because it didn’t “sound” like Martin Luther King, Jr.  And for that hesitation, I am sure she was later grateful when the quote was soon discovered to be misattributed.

Besides the end to privacy (which I’ve repeatedly discussed on this blog), another quality of modern communication technologies that we must all adapt to is the speed at which information travels.  Networks like Twitter and Facebook increase the rate of transmission exponentially.  The cult of celebrity has also found fertile earth in these virtual spaces.  If I had been the person to publish the quote on Twitter, with my 80 or so followers, rather than Jillette, the quote would not have been so popular, and the backlash would not have been so severe.  The fact that the initial tweet reached 1.6 million people dramatically increased how quickly the quote spread from that point.  So where did Jillette get the quote?
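The effect of audience size can be made concrete with a toy cascade model. The sketch below is purely illustrative; the retweet rate and average follower count are invented parameters, and it ignores audience overlap, which real networks have in abundance.

```python
def expected_reach(seed_followers: int, retweet_rate: float,
                   avg_followers: int, hops: int) -> float:
    """Estimate how many accounts see a tweet after a few hops,
    assuming a fixed fraction of each hop's viewers retweet it
    to their own followers (no audience overlap)."""
    reach = float(seed_followers)
    viewers = float(seed_followers)
    for _ in range(hops):
        viewers = viewers * retweet_rate * avg_followers
        reach += viewers
    return reach

# Same downstream behaviour, wildly different starting audiences:
celebrity = expected_reach(1_600_000, 0.001, 200, hops=3)
small_account = expected_reach(80, 0.001, 200, hops=3)
print(f"celebrity seed: {celebrity:,.0f}; small account: {small_account:,.0f}")
```

With these made-up numbers the celebrity tweet is seen roughly two million times while the small account barely clears a hundred; the growth rate per hop is identical, so the seed audience does all the work.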

Despite some media outlets implying that he did this deliberately to mess with his followers, it seems clear now that it was accidental.  Jillette copied the quote from a Facebook user’s status update that read:

I mourn the loss of thousands of precious lives, but I will not rejoice in the death of one, not even an enemy. “Returning hate for hate multiplies hate, adding deeper darkness to a night already devoid of stars.  Darkness cannot drive out darkness: only light can do that.  Hate cannot drive out hate: only love can do that.” MLK jr

Viewing this, it is clear that Jessica Dovey, the Facebook user, was adding her own interpretation to an authentic quote by Martin Luther King, Jr.  Jillette tried to copy it to Twitter, but given the 140-character limit for tweets, was forced to edit it down.  Apparently he did not realize the first sentence was not part of the quotation.  Jillette later apologized repeatedly for the tweet, stating that it was a mistake.
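The mechanics of the mistake are easy to reproduce. The snippet below reconstructs the texts quoted above and shows why an honest edit produces a misattribution: the full status will not fit in a tweet, but Dovey's first sentence plus the attribution will.

```python
dovey_comment = ("I mourn the loss of thousands of precious lives, but I "
                 "will not rejoice in the death of one, not even an enemy.")
king_quote = ('"Returning hate for hate multiplies hate, adding deeper '
              "darkness to a night already devoid of stars. Darkness cannot "
              "drive out darkness: only light can do that. Hate cannot drive "
              'out hate: only love can do that."')
attribution = "MLK jr"

full_status = f"{dovey_comment} {king_quote} {attribution}"
trimmed = f"{dovey_comment} {attribution}"

# The whole status blows past Twitter's limit; the trimmed version fits,
# but the only sentence that survives is Dovey's, not King's.
print(len(full_status) <= 140)  # False
print(len(trimmed) <= 140)      # True
```

Any editor working backwards from the character count would land on the same cut, which is what makes the error so easy to repeat.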

“Why all the fuss over this?” one might ask.  It seems that most people are upset not so much by the misattribution as by the criticism of the popular reaction and the media circus that has surrounded the assassination.  Dovey and Jillette, and McArdle as well, who went on to write a blog post and editorial in The Atlantic online about her discovery of the misattribution, have faced a great deal of criticism since the quote was first shared.

We live in a world of memes, in a place where information—regardless of its accuracy or authenticity—is shared at an exponential rate, and where fiction can be accepted as fact based on who says it and how many believe it.  The only thing surprising about this particular incident is that the mistake was discovered and the truth of it spread online as fast as the initial tweet did.  If it had taken a day or two longer for someone like McArdle, with a platform to spread the information, to discover the mistake, would anyone have noticed?  Probably not.  It is not like people haven’t been misquoted or misattributed in the past.  What’s noteworthy is the speed at which this particular misquote proliferated.

I find this interesting because, as I have stated, it gives evidence of how communication has changed in our society.  Many of us rely on sources like Twitter to engage with current events.  It serves us well to be reminded that, in spite of the many benefits of crowdsourced intelligence, the onus for fact-checking is on the reader.


Assessing Social Media – Methods

I have written about various social media and web technologies as they relate to knowledge management (KM), and as they are discussed in the literature.  But I haven’t really touched on how the literature approaches measuring the application and success of such technologies in an organizational context.  Prusak notes that one of the priorities of KM is to identify the unit of analysis and how to measure it (2001, 1004).  In this review paper I will examine some of the readings that have applied this question to social media. For the sake of consistency, the readings I have chosen deal with the assessment of blogs for the management of organizational knowledge, but all of the methods discussed could be generalized to other emerging social technologies.

Grudin indicates that most past attempts at developing systems to preserve and retrieve knowledge have failed because digital systems required information to be represented explicitly, while most knowledge is tacit: “Tacit knowledge is often transmitted through a combination of demonstration, illustration, annotation, and discussion” (2006, 1). But the situation, as Grudin explains, has changed: “old assumptions do not hold…new opportunities are emerging” (ibid.). Storage is no longer sold at a premium, allowing the informal and interactive activities used to spread tacit knowledge to be captured and preserved. Emerging trends such as blogs and wikis, the ever-increasing efficiency of search engines, and of course the social networks such as Twitter and Facebook that have come to dominate the Internet landscape open up a multitude of ways in which tacit knowledge can be digitized.

In his analysis of blogs, Grudin identifies five categories (2006, 5):

diary-like, or personal, blogs, which develop the skill of engaging readers through personal revelation;

A-list blogs by journalists and high-profile individuals, which serve as a source of information on events, products and trends;

watchlists, which track references across a wide selection of sources to reveal how a particular product, organization, name, brand or topic is being discussed;

externally visible employee blogs, which provide a human face for an organization or product, offsetting the potential legal and PR risks for a corporation;

project blogs, which are internal, work-focused blogs that serve as a convenient means of collecting, organizing and retrieving documents and communication.

Lee, et al. make a similar move in categorizing the types of public blogs used by Fortune 500 companies (2006, 319):

Employee blogs (maintained by rank-and-file employees; varying in content and format)

Group blogs (operated by a group of rank-and-file employees; focused on a specific topic)

Executive blogs (feature the writings of high-ranking executives)

Promotional blogs (promoting products and events)

Newsletter-type blogs (covering company news)

Grudin does not conduct any formal assessment of blogs, except to provide examples of project blogs and to identify, based on his personal experience, the technical and behavioral characteristics that allowed that particular sub-type to succeed (2006, 5-7). Lee, et al.’s approach to assessing blogs involves content analysis of 50 corporate blogs launched by the 2005 Fortune 500 companies (2006, 322-23). In addition to the categories above, Lee, et al. also identified five distinct blogging strategies based on their findings, which broadly fall under two approaches (321):

Bottom-up, in which all company members are permitted to blog, and each blog serves a distinct purpose (not necessarily assigned by a higher authority)[1];

Top-down, in which only select individuals or groups are permitted to blog, and the blogs serve an assigned purpose that rarely deviates between blogs.

As the names suggest, a greater control of information is exercised in the top-down approach, while employee bloggers in companies adopting the bottom-up approach are provided greater autonomy.

Huh, et al. developed a unique approach in their study of BlogCentral, IBM’s internal blogging system (2007).  The study combined interviews with individual bloggers about their blogging practices and content analysis of their blogs.  Based on this data, they were able to measure two characteristics of blogs: the content (personal stories/questions provoking discussion/sharing information or expertise) and the intended audience (no specific audience/specific audience/broad audience).  These findings revealed four key observations:

– Blogs provide a medium for employees to collaborate and give feedback;

– Blogs are a place to share expertise and acquire tacit knowledge;

– Blogs are used to share personal stories and opinions that may increase the chances of social interaction and collaboration;

– Blogs are used to share aggregated information from external sources by writers who are experts in the area.

Rodriguez examines the use of WordPress blogs in two academic libraries for internal communication and knowledge management at the reference desk (2010).  Her analysis measures the success of these implementations using diffusion of innovation and organizational lag theories. Rogers’ Innovation Diffusion Theory establishes five attributes of an innovation that influence its acceptance in an organizational environment: relative advantage, compatibility, complexity, trialability, and observability (2010, 109). Organizational lag, meanwhile, identifies the discrepancy between the adoption of technical innovation (the technology itself) and administrative innovation (the underlying administrative purposes for implementing the technology, usually representing a change in workflow to increase productivity).  In analyzing the two implementations of the blogging software, Rodriguez finds that both libraries succeeded in terms of employee adoption of the technical innovation, but failed with the administrative innovation.  This was due specifically to the innovation having poor observability: “the degree to which the results of the innovation are easily recognized by the users and others” (2010, 109, 120). The initiators of the innovation in both cases did not “clearly articulate the broader administrative objectives” and “demonstrate the value of implementing both the tool and the new workflow process” (2010, 120). If they had done so, Rodriguez suggests, the blogs might have been more successful.

While all of these studies approached blogging in a different way—project blogs, external corporate blogs, internal corporate blogs and internal group blogs—and measured different aspects of the technology—what it is, how it is used, if it is successful—they reveal a number of valuable approaches to studying social media in the KM context. Categorization, content and discourse analysis, interviews, and the application of relevant theoretical models are all compelling methods to assess social media and web technologies.

 


[1] One of the valuable contributions of Lee, et al.’s study is its identification of the essential purposes for which corporate blogs are employed, including product development, customer service, promotion and thought leadership. The notion of ‘thought leadership’ in particular, as a finding of their content analysis, is worth exploring; it suggests that the ability to communicate innovative ideas is closely tied to natural leadership skills, and that blogs and other social media (by extension) can help express these ideas. Lee, et al.’s findings also suggest that ‘thought leadership’ in blogs builds the brand, or ‘human’ face, of the organization while acting as a control over employee blogs, evidenced by the fact that it is found primarily in blogs that employ a top-down strategy.


Bibliography

Grudin, J. (2006).  Enterprise Knowledge Management and Emerging Technologies. Proceedings of the 39th Hawaii International Conference on System Sciences. 1-10.

Huh, J., Jones, L., Erickson, T., Kellogg, W.A., Bellamy, R., and Thomas, J.C. (2007). BlogCentral: The Role of Internal Blogs at Work.  CHI EA 2007: Extended Abstracts on Human Factors in Computing Systems, April 28-May 3. 2447-2452. San Jose, CA.  doi:10.1145/1240866.1241022

Lee, S., Hwang, T., and Lee, H. (2006). Corporate blogging strategies of the Fortune 500 companies. Management Decision 44(3). 316-334.

Prusak, L. (2001). Where did knowledge management come from? IBM Systems Journal, 40(4), 1002-1007.

Rodriguez, J. (2010). Social Software in Academic Libraries for Internal Communication and Knowledge Management: A Comparison of Two Reference Blog Implementations. Internet Reference Services Quarterly 25(2). 107-124.

Brief update

Nothing new this week, unless I come up with something on the fly.  I’m knee-deep in figuring out ethics applications for directed study/thesis research, something I basically need to get done ASAP if I plan on doing any sort of data collection or analysis before the end of the term. I’ve also completed most of the response/review paper assignments required for my courses.

To make life more complicated, some database workshops related to my HUCO course this term have renewed my desire to do a bit of coding.  I’ve been toying with the idea of starting a simple PHP/MySQL project, unrelated to coursework, to refresh my memory and hone my (admittedly limited) programming skills. More on that, possibly, if anything comes of it.

You will also notice I’ve changed the look of the blog once more.  It needed a bit of a facelift.

Too Much Information, Part 2: Recontextualization

The second article I want to discuss, Matthew Aslett’s “Data as a natural energy source”, deals principally with the idea of transforming data (records decontextualized) into products (records recontextualized as commodities).  Aslett introduces the concept of the “data factory”, a place where data is “manufactured”.  He also frames this in the context of “Big Data”, the current trend of accommodating larger and larger collections of information.  The problem is, Big Data are useless unless you can process them, analyze them, contextualize them.  Aslett suggests that the next big trend will be “Big Data Analytics”, which will focus on harnessing data sources and transforming them into products: assigning meaning to the raw, free-floating information, as it were.

One of the things I like about Aslett’s article is his analogy between data resources and energy resources, comparing the “data factory” with the oil industry.  Data is the new oil; usable data can be very valuable, as eBay and Facebook (Aslett’s two main examples) demonstrate.  What’s interesting about both eBay and Facebook, and why Aslett draws attention to them in particular, is that they don’t themselves produce the data; they harness pre-existing data streams (the data “pipeline”), building on transactions that already take place, automating those transactions for their users, and parsing their user data into saleable products.  In the case of Facebook, this comes in the form of ad revenue from targeted marketing, based on the most comprehensive demographic information available online (a user base of 500+ million); for eBay, it is the combination of transactional and behavioural data that identifies its top sellers and leads to increased revenue for them.  If Facebook or eBay didn’t exist, as Aslett points out, people would still communicate, share photos, buy and sell products.  These companies have simply automated the process, and acquired the associated transaction records along the way.

This makes me wonder about the ownership implications, once again, and about the Facebook terms of use I trotted out in a previous blog entry.  Is it fair for Facebook to profit off your personal information in this way?  To control your data?  Isn’t it a little worrisome that eBay and Amazon track what I buy online well enough to make quite accurate recommendations?  In terms of IAPP discussed in the last class and of David Flaherty’s list of individual rights, it is troubling to consider that, if the countless disparate traces of me online were somehow pulled together and processed, someone could construct a reasonable facsimile of me, my personality, my identity.  And isn’t this what Aslett is really talking about when he uses the word “analytics”?

Aslett, M. (2010, November 18).  Data as a natural energy source.  Too much information. Retrieved on November 26, 2010 from http://blogs.the451group.com/information_management/2010/11/18/data-as-a-natural-energy-source/

Too Much Information, Part 1: e-Disclosure

Today I’m going to write about a RIM blog I discovered thanks to the ARMA website links: “Too Much Information” by The 451 Group.  In particular, I want to discuss two articles from different authors, on quite different topics.  Given the word limit on entries for the journal assignment, I’ll be splitting my writing into two separate entries.

The first article, by Nick Patience, is a review of the topics discussed at the 6th Annual e-Disclosure Forum in London, dealing primarily with UK law.  Patience identifies key themes that came up during the forum.  The first of these is “Practice Direction 31B”, an amendment to the rules of civil procedure for the disclosure of electronic documents.  Among the changes, Patience highlights the addition of a 23-question questionnaire to be used in cases that involve a large number of documents, and emphasizes how this would be useful both in getting parties organized for proceedings and as a pre-emptive method for organizations to prepare records in the event of future litigation.  In Canada we have some standard guidance in the form of the Sedona Canada Principles, the Sedona Working Group, and provincial task forces working on refining e-Disclosure practices.  I suspect there are discrepancies in practice between provinces, simply due to the nature of the Canadian legal system, which might make it difficult to apply a detailed questionnaire as a common resource (conjecture on my part, since I’m certainly not an expert in law), but I certainly agree with Patience about the potential benefits of such a resource.  In reviewing the case law digests, it is clear that one of the great challenges of e-Disclosure is limiting the scope of what constitutes evidence, which is, I believe, at the court’s discretion.  Examples that I’ve found are:

Dulong v. Consumers Packaging Inc., [2000] O.J. No. 161, January 21, 2000, OSCJ Commercial List, Master Ferron. The court held that a broad request from a plaintiff that the corporate defendant search its entire computer systems for e-mail relating to matters in issue in the litigation was properly refused on the grounds that such an undertaking would, “having regard to the extent of the defendant’s business operations, be such a massive undertaking as to be oppressive”. (para 21).

Optimight Communications Inc. v. Innovance Inc., 2002 CanLII 41417 (ON C.A.), Parallel citations: (2002), 18 C.P.R. (4th) 362; (2002), 155 O.A.C. 202, 2002-02-19 Docket: C37211. Moldaver, Sharpe and Simmons JJ.A. The appellants appeal a Letter of Request issued in a California court seeking the assistance of Ontario courts in enforcing an order for production of 34 categories of documents by Innovance, Inc. Appellate Court limited the scope of production and discovery. Schedule A details the electronic sources and search terms.

Sourian v. Sporting Exchange Ltd., 2005 CanLII 4938 (ON S.C.) 2005-03-02 Docket: 04-CV-268681CM 3. Master Calum U.C. MacLeod. Production of information from an electronic database. An electronic database falls within the definition of “document” in our (Ontario) rules. The challenge in dealing with a database, however, is that a typical database would contain a great deal of information that is not relevant to the litigation.  Unless the entire database is to be produced electronically together with any necessary software to allow the other party to examine its contents, what is produced is not the database but a subset of the data organized in readable form.  This is accomplished by querying the database and asking the report writing software to generate a list of all data in certain fields having particular characteristics.  Unlike other documents, unless such a report is generated in the usual course of business, the new document, the requested report (whether on paper or on CD ROM) would have to be created or generated. Ordering a report to be custom written and then generated is somewhat different than ordering production of an existing document.  I have no doubt that the court may make such an order because it is the only way to extract the subset of relevant information from the database in useable form. On the other hand such an order is significantly more intrusive than ordinary document production. A party must produce relevant documents but it is not normally required to create documents.  Accordingly such an order is discretionary and the court should have regard for how onerous the request may be when balanced against its supposed relevance and probative value. (Italics P.D.)

[These only represent the first three cases I found in the LexUM Canadian E-Discovery Case Law Digests (Common Law) online, under “Scope of production and discovery”. http://lexum.org/e-discovery/digests-common.html#Scope]
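The court's distinction between producing a document and generating one can be illustrated in miniature. The sketch below uses a hypothetical schema and invented rows with Python's built-in SQLite module; the point is simply that what gets produced is not the database but a report, a new record generated by a query over "certain fields having particular characteristics".

```python
import sqlite3

# Hypothetical litigation database (schema and rows invented for illustration)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (id INTEGER, custodian TEXT, "
            "subject TEXT, amount REAL)")
con.executemany("INSERT INTO records VALUES (?, ?, ?, ?)", [
    (1, "Smith", "routine invoice", 120.00),
    (2, "Jones", "disputed transfer", 50000.00),
    (3, "Smith", "disputed transfer", 8500.00),
])

# Extracting "a subset of the data organized in readable form":
rows = con.execute("SELECT id, custodian, amount FROM records "
                   "WHERE subject = ? AND amount > ?",
                   ("disputed transfer", 1000)).fetchall()

report = ["id | custodian | amount"]
report += [f"{r[0]} | {r[1]} | {r[2]:.2f}" for r in rows]
print("\n".join(report))  # a new document, generated rather than retrieved
```

Nothing in the report existed as a discrete document before the query ran, which is exactly why the court treats ordering such a report as discretionary.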

What this news about UK policy makes me wonder, though, is precisely why we haven’t implemented a better national standard.  The Sedona Principles are wonderful for what they are (recommendations from a think tank drawing on the experience of lawyers, law-makers, and technology and information professionals), but for them to really mean anything, they have to be enacted in policy.  Naturally, that kind of legislation doesn’t happen overnight.

Another theme Patience identifies is the growing trend of cloud computing, and the problems therein.  This comes back to my frequent rants about web records; the conference participants agreed that the service level agreements (SLAs, precisely the kind of agreements I noted in my last entry) offered by cloud service providers did not provide sufficient guarantees as to the control and security of a user’s records (in this case, the user being an organization).  Patience describes this quality of the SLA as lacking the “necessary granularity”: you need to know that you can search for, find, and retrieve your data in a form that you can use.  As Patience says, not having that guarantee is a “dealbreaker”.  This seems like a very important counterpoint to the ever-growing buzz about cloud computing, and reinforces the need for organizations to exercise caution before deciding how they want to manage their data.

 

Resources:

ARMA International

E-Discovery Canada

Patience, N.  (2010, November 16). e-Disclosure – cooperation, questionnaires and cloud. Too Much Information. Retrieved on November 26, 2010 from http://blogs.the451group.com/information_management/2010/11/16/e-disclosure-cooperation-questionnaires-and-cloud/

A Matter of Security

The big story over the weekend was about John Tyner, a software engineer who refused the TSA body scan and pat-down at the San Diego airport, and was subsequently removed from the airport and threatened with a $10,000 fine for being uncooperative.  What makes this a big story is that Tyner recorded the entire incident on his cell phone and then posted it on YouTube; he also wrote a full account on a blog under the moniker “johnnyedge” [1].  The video and blog have gone viral in the 48 hours since the incident took place, the YouTube video receiving over 200,000 hits.

There is quite a lot going on in this story that is worth examining.  First off, the relatively new practice of using backscatter x-ray scanners and the TSA’s policy of administering a full pat-down to any passenger who opts out of the scan have been under fire since they were first introduced.  Several stories have surfaced in the last year regarding the new technology, though none quite so markedly as Tyner’s.  One of the concerns raised was whether or not the body scan images were saved and stored [2]; the TSA confirmed in August that this was not the case, although it continues to be an issue raised in the argument against the body scans.  The issue does raise the question of precisely what happens to the images.  How do the scanners work?  Is there no memory that stores images, even in the short term?  What if the scan does reveal someone in possession of something nefarious?  Doesn’t the scan represent evidence?  Surely there must be some system in place to preserve the image when this happens; if not, the process does not seem particularly effective.  And if there is, the question is whether or not such a system violates the human rights of passengers.

I bet the TSA is rather unhappy right now, given the rising tidal wave of public discontent it now faces.  I’ve written a lot about web content as records in this journal, so I won’t over-emphasize it now, but clearly the video/audio record Tyner preserved and uploaded to the Internet will impact the TSA’s operations: the extra time and labour spent dealing with uncooperative passengers, navigating the negative press, and correcting its policies and procedures will directly translate into dollar amounts.  As one article on Gizmodo suggests, there is a lot of money for manufacturers and lobbyists in the implementation and use of the new body scanners [3]; there’s a lot of money at stake if their adoption is stymied by bad press and public outrage.  And why?  Because one person recorded this activity and made the record public.

A movement in the US has grown around the rejection of the body scan technology and the TSA’s policies.  The website “I Made the TSA Feel my Resistance” has gone up, and is calling for “National Opt-Out Day” on November 24—the busiest day of the year for air travel.  It encourages passengers to refuse the body scan when they go through security. [4]

While I’ve always been sympathetic to the challenging (let’s face it: impossible) task of providing airport security, I think Tyner’s use of records and the web is useful in one very important way.  It forces us to ask: in what way does the body scan technology protect passengers?

____________________________________

[1] The original blog post and videos are available here: http://johnnyedge.blogspot.com/2010/11/these-events-took-place-roughly-between.html

An article by the Associated Press about the story’s popularity can be viewed here: http://www.mercurynews.com/breaking-news/ci_16617995?nclick_check=1

A blog post on the CNN Newsroom website by one of the network’s correspondents can be viewed here: http://newsroom.blogs.cnn.com/2010/11/15/dont-touch-my-junk/?iref=allsearch

[2] The issue of whether the images are stored or not was first raised last January, as represented in this article on CNN.com: http://articles.cnn.com/2010-01-11/travel/body.scanners_1_body-scanners-privacy-protections-machines?_s=PM:TRAVEL

The TSA refuted these claims at the time on their blog: http://blog.tsa.gov/2010/01/advance-imaging-technology-storing.html

The issue again made headlines in August with the following article on cnet: http://news.cnet.com/8301-31921_3-20012583-281.html

Which the TSA again refuted: http://blog.tsa.gov/2010/08/tsa-response-to-feds-admit-storing.html

[3] Loftus, J.  (2010, November 14).  TSA Full-Body Scanners: Protecting Passengers or Padding Pockets?  Gizmodo. Retrieved on November 15, 2010 from http://gizmodo.com/5689759/tsa-full+body-scanners-protecting-passengers-or-padding-pockets

This article also effectively summarizes the current controversy surrounding Advanced Imaging Technology (AIT).

[4] http://www.imadethetsafeelmyresistance.com/

Blogs as Records: Damage Done?

It’s no secret that I am a social media addict. My current drug of choice is Twitter, which I’ve discussed previously as part of the records management blog. As you may or may not know, I’m in the process of researching the records management issues surrounding the Edmonton City Centre airport plebiscite for a term paper, and when I checked Twitter this morning, as I’m wont to do, I was surprised by a new and interesting development in the form of links to new commentary.

A blogger claiming to be a reporter for the Seattle Times blogged about the decision by city council to move forward with the closure following the failed petition drive by Envision Edmonton. This blogger, apparently named “Darren Holmes”, put his own spin on the existing documents, facts and hearsay about the issue that portrays the council decision as some nefarious conspiracy, and casts Envision Edmonton as well as all Edmontonians as victims and dupes [1].

Some crack investigative reporting by local Journal reporter Todd Babiak revealed that this individual’s claims of authority were bogus, but not before the blog post went viral [2, 3]. This development raises the question: how do you classify blogs as records?

There are a number of issues that we need to consider; for the sake of brevity, I’ll limit myself to the most obvious one.  Outwardly, “Darren” has no connection with the municipal government, Envision Edmonton, the airport authority or Yes For Edmonton.  Unlike the petition records, reports, proposals, letters and emails traded internally and between these organizations, Darren’s blog entry (and Todd Babiak’s column) exist outside the purview of the involved parties.  As an individual, Darren is merely exercising his right to free speech, a right we are proud to respect in our society; his is only one opinion amid a vast sea of others, and is thus, ostensibly, transient.  And yet it has indelibly made its mark within this discourse, and could be potentially damaging to other individuals and organizations (some of which I’ve just mentioned), particularly as local residents make their way to the ballot box.  So how do you classify the blog entry?  How do you control it?  Is it even worth qualifying as a record worthy of notice?  Considering the furor it created in my Twitter feed, and more generally in the community of players and the swirling informational landscape surrounding the Edmonton City Centre controversy, it’s clear that it has forced itself into the debate, for better or worse.

One way to deal with the blog entry as a record is to litigate.  According to Darren’s most recent update, Mayor Mandel’s representation has begun to do just that, by threatening legal action for slander [4].  Given Darren’s anonymity, the veracity of the claim is highly dubious, but such a move would certainly be an option for Mandel.  According to Babiak’s column, the Seattle Times is also concerned about being associated with Darren, particularly since no “Darren Holmes” has ever written for the paper.  The Times would be within its rights to sue Darren for lying about his connection to the newspaper.  Envision Edmonton should also be anxious about being associated with this person as the episode continues to play out on the public stage, since for many readers it might seem that Darren represents their cause; since Darren’s credentials have been refuted, such an association could be very damaging for Envision.

Two more methods of dealing with the blog present themselves.  First, to respond to it in kind in a public format, as Babiak has done with his column in The Edmonton Journal.  The other is to try and ignore it; “don’t feed the trolls” is a common saying in web culture that refers to people that comment online for the sole purpose of being inflammatory.  Neither of these methods can make the blog entry go away, however, and even litigation can’t erase the impact it has already had on public perception.

________________________

References:

[1] darrensbigscoop.  (2010, October 13.) Catching Up. Darren’s Big Scoop. Retrieved on October 13, 2010 from http://darrensbigscoop.wordpress.com/2010/10/13/8/

[2] Babiak, T. (2010, October 13.) Blog from fake reporter doesn’t add to airport debate. The Edmonton Journal. Retrieved on October 13, 2010 from http://www.edmontonjournal.com/business/Blog+from+fake+reporter+doesn+airport+debate/3662096/story.html

[3] Babiak, T. (2010, October 12.) Anonymity, Fraud and No Fun. That Internet Thing. Retrieved on October 13, 2010 from http://communities.canada.com/edmontonjournal/blogs/internetthing/archive/2010/10/12/anyonymity-fraud-and-no-fun.aspx

[4] darrensbigscoop. (2010, October 7.) Developer’s on Final Approach For Downtown Airport Land. Darren’s Big Scoop. Retrieved on October 13, 2010 from http://darrensbigscoop.wordpress.com/2010/10/07/developers-on-final-approach-for-downtown-airport-land/