Review Paper 1: Wrapping our Heads Around KM

In this week’s readings, Prusak and Nunamaker Jr. et al. successfully provide a solid and informed definition of ‘knowledge management’ (KM) and an explanation of why it is important.  Prusak establishes from the get-go that KM is not just about managing information, but about providing and maintaining access to “knowledge-intensive skills” (1003).  He also identifies the pitfall of reducing KM to simply “moving data and documents around”, and the critical value of supporting tacit knowledge, which is less easily digitized (1003).  Prusak chooses to define KM based on its disciplinary origins, noting economics, sociology, philosophy and psychology as its “intellectual antecedents”, rather than defining it from a single perspective or its current application alone (1003-1005).  Nunamaker Jr. et al. take a different approach, defining KM first in the context of IT, that is, KM as a system or technology, and then presenting a hierarchical framework from which to understand its role.  In this sense, data, information, knowledge and wisdom all exist on a scale of increasing application of context (2-5).  Apart from this first theoretical framework, however, Nunamaker Jr. et al. risk falling into the trap Prusak warns against; they define KM as the effort to organize information so that it is “meaningful” (1).  But what is “meaningful”?  Only context can determine meaning—fortunately, Nunamaker Jr. et al. at least account for this unknown quantity in their framework (3-4).  They also propose a unit with which to measure organizational knowledge: intellectual bandwidth.  This measurement combines their KM framework with a similar framework for collaborative information systems (CIS), and is defined as “a representation of all the relevant data, information, knowledge and wisdom available from a given set of stakeholders to address a particular issue” (9).  It is clear from their efforts to quantify KM, and from the manner in which they frame KM as a system, that Nunamaker Jr. et al. are writing for a particular audience of technicians and IT specialists, while Prusak writes for a more general audience of practitioners.

One thing I felt was lacking from both articles was a clear statement, and challenge, of the assumptions behind systematizing knowledge.  Nunamaker Jr. et al.’s argument for “intellectual bandwidth” is compelling, but I cannot help but be skeptical of any attempt to measure concepts as fuzzy as “wisdom” and “collective capability” (8-9).  Even Prusak clearly states that, as in economics, an essential knowledge management question is “what is the unit of analysis and how do we measure it?” (1004).  The underlying assumption is that knowledge can, in fact, be measured.  I am dubious about this claim (incidentally, this is also why I am dubious of similar claims often made in economic theory).  Certainly, there are other, qualitative forms of analysis that do not require a formal unit of measurement.  Assuming (a) that knowledge is quantifiable, and (b) that such a quantity is required in order to examine it properly, seems to me to lead down a dangerous and not altogether useful path.  The danger is that, in focusing on how to measure knowledge in a manner that lends itself to quantitative analysis, one becomes absorbed in the activity of designing metrics and forgets that the purpose of KM is primarily to capture, organize and communicate the knowledge and knowledge skills within an organizational culture.  Perhaps this danger should be considered alongside, and as an extension of, Prusak’s pitfall of understanding KM merely as “moving data and documents around”.

Both of these articles, as well as the foundational article by Nonaka also under discussion this week, are valuable insofar as they lay the groundwork for knowledge management as a theoretical perspective.  With their proposed frameworks, Nunamaker Jr. et al. offer much food for thought on how knowledge is formally conceptualized.  Prusak, meanwhile, provides a sound explanation of the origins of KM and forecasts the future of the field by suggesting one of two possible outcomes: either it will become so embedded in organizational practice as to be invisible, like the quality movement, or it will be hijacked by opportunists (the unscrupulous, profit-seeking consultants Prusak disdains at the beginning of his article, 1002), like the re-engineering movement (1006).  Both papers were published in 2001, and a decade later neither of these predictions appears to have been fulfilled.  KM has been adopted by organizations much as the quality movement was, but I suspect that knowledge workers are still trying to wrap their heads around how it should be implemented and what it actually means.


Cited References


Nunamaker Jr., J. F., Romano Jr., N. C., & Briggs, R. O. (2001). A framework for collaboration and knowledge management. Proceedings of the 34th Hawaii International Conference on System Sciences, 1-12.


Prusak, L. (2001). Where did knowledge management come from? IBM Systems Journal, 40(4), 1002-1007.

A quick update

Two new courses this term will feed content to my blog: HUCO 510: Theory of Humanities Computing and LIS 599: Social Media and Knowledge Management.
Other content that should appear soon or sometime over the course of the term:
last term’s Posthumanism term paper: “Humanity’s Box: Proto-SF and the Robot Other”

A Nostalgic Look Back: Cloning

Whatever happened to cloning?

No, no, this is a legitimate question.  I remember that about ten years ago, maybe a little more, there was a buzz around ‘cloning’ as the next big scientific development.  I was in high school at the time, and I recall devouring every news story about Dolly, the first cloned sheep, that I could get my hands on.  I imagined a future in which the tiniest bit of our genetic material could be used to replicate life, and pondered the murky ethics that arose from this.  And then time passed, and the whole craze just sort of faded away.

I was reminded of this while reading Robert Pepperell’s 2003 edition of The Posthuman Condition: Consciousness Beyond the Brain, in preparation for my term paper.  In the preface, Pepperell mentions, with much urgency, developments in the field of genetics, and cloning specifically, and what these might mean for the re-definition of the ‘human’.  He references in particular a 2002 article in the Sunday Times about the imminence of the first successful human cloning (I’m fuzzy on this point, but I suspect my lack of memory suggests it wasn’t as successful or as imminent as Pepperell claims).

So my question is this:  What happened to all the hype about cloning?  Would it have featured importantly in my Posthumanism course had it been offered eight years ago?  Is it strange that cloning hasn’t even gotten the merest mention in class?

The Roommate Agreement

As well as being one of the most entertaining and popular sitcoms on television, The Big Bang Theory also offers an amusing insight into records management in the form of Sheldon and Leonard’s “Roommate Agreement”.  The Roommate Agreement is alluded to frequently over the course of the series, typically when Leonard does something that Sheldon feels infringes on his rights as a roommate (e.g. priority couch seating, overnight visitors, scheduled bathroom use).  Not only is it entertaining to witness Sheldon’s neurotic behavior in action, but the roommate agreement also provides a clever solution for anyone who has ever found him/herself sharing an apartment.

I’m sure everyone who has ever had a roommate can testify that sharing your living space can be irritating at times, and can sometimes lead to unpleasant confrontation.  We all have personal preferences and expectations when it comes to the domestic sphere, and when those preferences and expectations clash, conflict naturally ensues.  In theory, the idea of a roommate agreement is genius, really, since such a document establishes the parties’ expectations from the outset.  In practice, at least as seen in The Big Bang Theory, such a record only emphasizes the tension and breeds more conflict, usually to hilarious effect (though probably not as funny for Leonard as it is for the casual observer).

In a recent episode entitled “The Boyfriend Complexity”, the issue of the roommate agreement comes up yet again.  Under the impression that Leonard and Penny are once again a couple, Sheldon presents proposed changes to the agreement for Leonard to sign.  The changes are written to address Penny’s “annoying personal habits” (of which Sheldon has naturally compiled a lengthy list—I’m assuming it is attached to the agreement as an appendix).  Sheldon makes it clear that Penny has no say in the agreement or in the discussion of her personal habits, since Leonard is the signatory and thus “bears responsibility for all [her] infractions and must pay all fines”.  Leonard, upon inquiring about the fines, is told that if Penny is to resume spending nights in the apartment, he’ll have to set up an escrow account (apparently the possibility that Penny might correct her annoying personal habits does not occur to Sheldon).  Leonard signs, even though he and Penny aren’t actually back together.  Sometimes the path of least resistance is the best route to a compromise.

The agreement essentially reduces the roommate experience to the level of transactions.  This is quite literally apparent in the example above—Penny annoys Sheldon, so Leonard must pay a fine.  No doubt Sheldon has a dollar amount associated with each infraction on the appended list of “annoying personal habits”, in direct proportion to how annoying he finds it.  It seems ridiculous when you hear it, but I can think of a few situations in my own experience when the existence of such an agreement would have made my life a lot easier; I can certainly recall occasions when I wished I could collect fines for the irritating habits of a roommate.  And while it might still seem absurd, consider this: isn’t it just another example of the sort of contracts we enter into every day with our landlords, insurance providers, health providers, phone and internet service providers, utility companies, employers, employees and unions, educational institutions, and governments?
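
Purely for fun, and to make the “transactions” point concrete: if Sheldon’s fine schedule existed, it would essentially be a lookup table mapping each infraction to a dollar amount, with Leonard’s escrow account debited per occurrence.  A toy sketch in Python, with all infractions and amounts invented by me rather than taken from the show:

    # Toy illustration of an infraction-to-fine schedule; the items and
    # dollar amounts are invented, not drawn from the series.
    FINE_SCHEDULE = {
        "leaves dishes in the sink": 5.00,
        "talks during Doctor Who": 10.00,
        "sits in Sheldon's spot": 20.00,
    }

    # Each logged infraction becomes a transaction charged against the
    # signatory's escrow account.
    infractions = [
        "sits in Sheldon's spot",
        "leaves dishes in the sink",
        "sits in Sheldon's spot",
    ]

    total_owed = sum(FINE_SCHEDULE[infraction] for infraction in infractions)
    print(f"Leonard owes ${total_owed:.2f} this month.")

Seen this way, the Roommate Agreement really is just a miniature fee-for-service contract, which is exactly the point.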

Lots of posts

You may be wondering why I’m posting crazy amounts this week.  It’s because the journal assignment is coming due (technically it already came due, but I got a weekend extension).  I’m down to my last entry on records management, and it’ll be a good one.  I don’t want to give away too much, but I’ll be writing about the Big Bang Theory roommate agreement.  Big, important, big stuff.  Stay tuned.

Too Much Information, part 2: Recontextualization

The second article I want to discuss is “Data as a natural energy source” by Matthew Aslett, and deals principally with the idea of transforming data—records decontextualized—into products (records recontextualized as commodities).  Aslett introduces the concept of the “data factory”, a place where data is “manufactured”.  He also frames this in the context of “Big Data”—the current trend of accommodating larger and larger collections of information.  The problem is, “Big Data” are useless unless you can process them, analyze them, contextualize them.  Aslett suggests that the next big trend will be “Big Data Analytics”, which will focus on harnessing data sources and transforming them into products.  Assigning meaning to the raw, free-floating information, as it were.

One of the things I like about Aslett’s article is his analogy between data resources and energy resources, comparing the “data factory” to the oil industry.  Data is the new oil; usable data can be very valuable, as eBay and Facebook (Aslett’s two main examples) demonstrate.  What’s interesting about both eBay and Facebook, and why Aslett draws attention to them in particular, is that they don’t in themselves produce the data; they harness pre-existing data streams (the data “pipeline”), build on transactions that already take place, automate those transactions for their users, and parse the resulting user data into saleable products.  In the case of Facebook, this comes in the form of ad revenue from targeted marketing, based on the most comprehensive demographic information available online (a user base of 500+ million); for eBay, it is the combination of transactional and behavioural data that identifies its top sellers and leads to increased revenue for them.  If Facebook or eBay didn’t exist, as Aslett points out, people would still communicate, share photos, and buy and sell products.  They have simply automated these activities and, in the process, acquired the transaction records associated with such interactions.

This makes me wonder about the ownership implications, once again, and about the Facebook terms of use I trotted out in a previous blog entry.  Is it fair for Facebook to profit off your personal information in this way?  To control your data?  Isn’t it a little worrisome that eBay and Amazon track what I buy online well enough to make quite accurate recommendations?  In terms of the IAPP discussed in the last class and of David Flaherty’s list of individual rights, it is troubling to consider that, if the countless disparate traces of me online were somehow pulled together and processed, someone could construct a reasonable facsimile of me, my personality, my identity.  And isn’t this what Aslett is really talking about when he uses the word “analytics”?

Aslett, M. (2010, November 18). Data as a natural energy source. Too Much Information. Retrieved on November 26, 2010 from http://blogs.the451group.com/information_management/2010/11/18/data-as-a-natural-energy-source/

Too Much Information, Part 1: e-Disclosure

Today I’m going to write about a RIM blog I discovered thanks to the ARMA website links: “Too Much Information” by The 451 Group.  In particular, I want to discuss two articles from different authors, on quite different topics.  Given the word length limit on entries for the journal assignment, I’ll be splitting my writing up into two separate entries.

The first article, by Nick Patience, is a review of the topics discussed at the 6th Annual e-Disclosure Forum in London, dealing primarily with UK law.  Patience identifies key themes that came up during the forum.  The first of these is “Practice Direction 31B”, an amendment to the rules of civil procedure governing the disclosure of electronic documents.  Among the changes, Patience highlights the addition of a 23-question questionnaire to be used in cases that involve a large number of documents, and emphasizes how this would be useful both in getting parties organized for proceedings and as a pre-emptive method for organizations to prepare records in the event of future litigation.  In Canada we have some standard guidance in the form of the Sedona Canada Principles, the Sedona Working Group, and provincial task forces working on refining e-Disclosure practices.  I suspect there are discrepancies in practices between provinces, simply due to the nature of the Canadian legal system, which might make it difficult to apply a detailed questionnaire as a common resource (conjecture on my part, since I’m certainly not an expert in law), but I certainly agree with Patience about the potential benefits of such a resource.  In reviewing the case law digests, it is clear that one of the great challenges of e-Disclosure is limiting the scope of what constitutes evidence, which is, I believe, at the court’s discretion.  Examples that I’ve found are:

Dulong v. Consumers Packaging Inc., [2000] O.J. No. 161, January 21, 2000, OSCJ Commercial List, Master Ferron.  The court held that a broad request from a plaintiff that the corporate defendant search its entire computer systems for e-mail relating to matters in issue in the litigation was properly refused on the grounds that such an undertaking would, “having regard to the extent of the defendant’s business operations, be such a massive undertaking as to be oppressive” (para 21).

Optimight Communications Inc. v. Innovance Inc., 2002 CanLII 41417 (ON C.A.); parallel citations: (2002), 18 C.P.R. (4th) 362; (2002), 155 O.A.C. 202. 2002-02-19, Docket: C37211. Moldaver, Sharpe and Simmons JJ.A.  The appellants appealed a Letter of Request issued by a California court seeking the assistance of the Ontario courts in enforcing an order for production of 34 categories of documents by Innovance, Inc.  The appellate court limited the scope of production and discovery; Schedule A details the electronic sources and search terms.

Sourian v. Sporting Exchange Ltd., 2005 CanLII 4938 (ON S.C.), 2005-03-02, Docket: 04-CV-268681CM 3. Master Calum U.C. MacLeod.  Production of information from an electronic database.  An electronic database falls within the definition of “document” in our (Ontario) rules. The challenge in dealing with a database, however, is that a typical database would contain a great deal of information that is not relevant to the litigation.  Unless the entire database is to be produced electronically together with any necessary software to allow the other party to examine its contents, what is produced is not the database but a subset of the data organized in readable form.  This is accomplished by querying the database and asking the report writing software to generate a list of all data in certain fields having particular characteristics.  Unlike other documents, unless such a report is generated in the usual course of business, the new document, the requested report (whether on paper or on CD ROM), would have to be created or generated. Ordering a report to be custom written and then generated is somewhat different than ordering production of an existing document.  I have no doubt that the court may make such an order because it is the only way to extract the subset of relevant information from the database in useable form. On the other hand, such an order is significantly more intrusive than ordinary document production. A party must produce relevant documents but it is not normally required to create documents.  Accordingly such an order is discretionary and the court should have regard for how onerous the request may be when balanced against its supposed relevance and probative value. (Italics P.D.)

[These only represent the first three cases I found in the LexUM Canadian E-Discovery Case Law Digests (Common Law) online, under “Scope of production and discovery”. http://lexum.org/e-discovery/digests-common.html#Scope]
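
The Sourian decision is also a nice plain-language description of what producing a “subset of the data organized in readable form” actually involves: querying the database for records whose fields have particular characteristics, and generating a new report from the results.  Just as a rough sketch (the database, table and field names below are invented for illustration, not drawn from the case), the kind of query-and-report the court describes might look something like this in Python:

    import csv
    import sqlite3

    # Hypothetical example only: pull the litigation-relevant subset out of a
    # larger corporate database and write it to a report, along the lines of
    # the process described in Sourian v. Sporting Exchange Ltd.  The database,
    # table and column names here are invented for illustration.
    conn = sqlite3.connect("corporate_records.db")

    query = """
        SELECT transaction_id, account_holder, amount, transaction_date
        FROM transactions
        WHERE transaction_date BETWEEN ? AND ?
          AND account_holder = ?
    """
    rows = conn.execute(query, ("2004-01-01", "2004-12-31", "Party X")).fetchall()

    # What gets produced is not the database itself but this newly generated
    # report -- a document that would not otherwise exist in the usual course
    # of business.
    with open("production_report.csv", "w", newline="") as report:
        writer = csv.writer(report)
        writer.writerow(["transaction_id", "account_holder", "amount", "transaction_date"])
        writer.writerows(rows)

    conn.close()

The point the court makes is that this report is a new document created on request, which is why ordering its production is discretionary rather than routine.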

What this news about UK policy makes me wonder, though, is precisely why we haven’t implemented a better national standard.  The Sedona Principles are wonderful for what they are—recommendations from a think tank drawing on the experience of lawyers, law-makers, and technology and information professionals—but in order for them to really mean anything, they have to be enacted in policy.  Naturally, that kind of legislation doesn’t happen overnight.

Another theme Patience identifies is the growing trend of cloud computing, and the problems therein.  This comes back to my frequent rants about web records; the conference participants agreed that the service level agreements (SLAs—precisely the kind of agreements I noted in my last entry) offered by cloud service providers did not provide a sufficient guarantee of the control and security of a user’s records (in this case, the user being an organization).  Patience describes these SLAs as lacking the “necessary granularity”—you need to know that you can search for, find, and retrieve your data in a form that you can use.  As Patience says, not having that guarantee is a “dealbreaker”.  This seems like a very important counterpoint to the ever-growing buzz about cloud computing, and reinforces the need for organizations to exercise caution before making decisions about how they want to manage their data.


Resources:

ARMA International

E-Discovery Canada

Patience, N. (2010, November 16). e-Disclosure – cooperation, questionnaires and cloud. Too Much Information. Retrieved on November 26, 2010 from http://blogs.the451group.com/information_management/2010/11/16/e-disclosure-cooperation-questionnaires-and-cloud/