
Crowdsourced Intelligence and You

This post should have gone up ages ago, as part of a course assignment for HUCO 510.  Sometimes you just get side-tracked.  Anyway, this week something happened that gave me the perfect topic to complete my assignment.  Enjoy.

~~

On May 2, 2011, Osama bin Laden, one of the most feared terrorist leaders in the world, was killed.  Nearly a decade after the September 11 attacks on the World Trade Center in New York, attacks orchestrated by bin Laden, US Navy SEALs successfully carried out the assassination.  A nation rejoiced.

And, as that nation rejoiced, within minutes of the news being made public on the Internet and on television, social media websites were abuzz.  One can imagine the sheer volume of the expressions of support, opposition, incredulity, happiness, sadness, congratulations and disgust that flooded the web.  Or one can simply search “osama” on Twitter.  The President would later televise an address to the nation confirming the death of the man who had been cast in the role of nemesis to an entire people and way of life.

It is during these kinds of world-changing events that the most interesting insights about our society are discovered.  Megan McArdle, an editor for The Atlantic, made one such discovery as she browsed her Twitter feed on that fateful day.  One tweet in particular caught her eye.  As one of Penn Jillette’s 1.6 million followers, she read the following quote, apparently posted in response to the death of bin Laden:

“I mourn the loss of thousands of precious lives, but I will not rejoice in the death of one, not even an enemy.” – Martin Luther King, Jr

Amid what were no doubt millions of reactions, some of them shocking, this short sentence at least had the ring of reason.  And it was attributed to perhaps the most famous civil rights activist in North America.  The combination of Jillette’s celebrity as a performer and the contrast between this level-headed response and so many less level-headed ones made it viral: within hours of it going up on Twitter, many of Jillette’s followers had retweeted the quote, and it became a trending topic on the social network in the midst of the bin Laden furor.  McArdle, unlike many others, did not retweet the quote, though she initially felt the urge to pass it on.  She hesitated because it didn’t “sound” like Martin Luther King, Jr.  And for that hesitation, I am sure she was later grateful when it was soon discovered that the quote was misattributed.

Besides the end of privacy (which I’ve discussed repeatedly on this blog), another quality of modern communication technologies that we must all adapt to is the speed at which information travels.  Networks like Twitter and Facebook multiply the rate of transmission enormously, and the cult of celebrity has found fertile earth in these virtual spaces.  If I had been the one to publish the quote on Twitter, to my 80 or so followers, rather than Jillette to his 1.6 million, the quote would not have become so popular, and the backlash would not have been so severe.  The fact that the initial tweet reached 1.6 million people dramatically increased how quickly the quote spread from that point.
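A toy branching model makes that arithmetic concrete.  Everything in the sketch below (the retweet rate, the average follower count) is invented for illustration, not measured from Twitter:

# Toy model of how far a tweet travels in a couple of hops.
# All parameters are invented for illustration.

def expected_reach(followers, retweet_rate=0.01, followers_per_retweeter=200, hops=2):
    """Rough expected audience after a few hops of retweeting."""
    reach = exposed = followers
    for _ in range(hops - 1):
        retweets = exposed * retweet_rate             # fraction who pass it on
        exposed = retweets * followers_per_retweeter  # their audiences
        reach += exposed
    return int(reach)

print(expected_reach(80))          # a small account: a few hundred people
print(expected_reach(1_600_000))   # a Jillette-scale account: millions

The exact numbers are meaningless; the point is that the audience at every hop scales directly with the seed account’s follower count.  So where did Jillette get the quote?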

Despite some media outlets implying that he did this deliberately to mess with his followers, it seems clear now that it was accidental.  Jillette copied the quote from a Facebook user’s status update that read:

I mourn the loss of thousands of precious lives, but I will not rejoice in the death of one, not even an enemy. “Returning hate for hate multiplies hate, adding deeper darkness to a night already devoid of stars.  Darkness cannot drive out darkness: only light can do that.  Hate cannot drive out hate: only love can do that.” MLK jr

Viewing the original, it is clear that Jessica Dovey, the Facebook user, was adding her own interpretation to an authentic quote by Martin Luther King, Jr.  Jillette tried to copy it to Twitter but, given the 140-character limit for tweets, was forced to edit it down.  Apparently he did not realize the first sentence was not part of the quotation.  Jillette later apologized repeatedly for the tweet, stating that it was a mistake.
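It is easy to reconstruct how the boundary of the quotation disappeared.  A minimal sketch (the status text is as quoted above; the trimming logic is my guess at the edit, not a record of what Jillette actually did):

# Dovey's Facebook status: her own sentence, then the genuine King
# quotation in quotation marks, then the attribution.
status = ('I mourn the loss of thousands of precious lives, but I will not '
          'rejoice in the death of one, not even an enemy. "Returning hate '
          'for hate multiplies hate, adding deeper darkness to a night '
          'already devoid of stars. Darkness cannot drive out darkness: '
          'only light can do that. Hate cannot drive out hate: only love '
          'can do that." MLK jr')

# Keep only the text before the opening quotation mark (Dovey's own
# sentence) and reattach the attribution. The result fits in 140
# characters, and the quotation marks that fenced off King's actual
# words are exactly what the edit removes.
dovey_sentence = status.split('"')[0].strip()
tweet = '"' + dovey_sentence + '" - Martin Luther King, Jr'
assert len(tweet) <= 140
print(tweet)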

“Why all the fuss over this?” one might ask.  It seems that most people are upset not so much by the misattribution as by the criticism of the popular reaction and the media circus that has surrounded the assassination.  Dovey and Jillette, and McArdle as well, who went on to write a blog post and editorial about her discovery of the misattribution for The Atlantic online, have faced a great deal of criticism since the quote was first shared.

We live in a world of memes, in a place where information—regardless of its accuracy or authenticity—is shared at an exponential rate, and where fiction can be accepted as fact based on who says it and how many believe it.  The only thing surprising about this particular incident is that the mistake was discovered and the truth of it spread online as fast as the initial tweet did.  If it had taken a day or two longer for someone like McArdle, with a platform to spread the information, to discover the mistake, would anyone have noticed?  Probably not.  It is not like people haven’t been misquoted or misattributed in the past.  What’s noteworthy is the speed at which this particular misquote proliferated.

I find this interesting because, as I have stated, it gives evidence of how communication has changed in our society.  Many of us rely on sources like Twitter to engage with current events.  It serves us well to be reminded that, in spite of the many benefits of crowdsourced intelligence, the onus for fact-checking is on the reader.


Collective Intelligence, Web 2.0, and Understanding Knowledge

One of the key elements of Web 2.0, as established by Tim O’Reilly in his 2005 paper “What is Web 2.0?”, is the notion of ‘collective intelligence’.  The term itself does not suggest any particular type of technology; rather, it evokes an epistemological stance toward the concept of ‘intelligence’.  If ‘intelligence’ is the cognitive capacity to think and learn, ‘collective intelligence’ implies the capacity to think, learn and share knowledge together, as a group.  Web 2.0 is more a paradigm than simply a new breed of information technologies; it is a shift in how we perceive the ways in which knowledge is shared, expanding the means of knowledge production to non-specialists.

A prime example of this principle is Wikipedia.  Once upon a time, encyclopedias (such as Britannica) were produced by a small group of subject specialists, high priests of their respective domains.  Wikipedia’s model transformed this approach, stripping the high priests of their power and opening up the opportunity to produce, edit and debate content to all.  The results are revealing: while entries on Wikipedia occasionally lack the accuracy of a traditional encyclopedia, they almost always reflect the current debates that surround a given topic, revealing the fluid nature of such knowledge.  This is not something one could easily apprehend from a traditional encyclopedia.  Why?  Because the knowledge is mediated by a variety of perspectives, rather than one alone.  That’s the power of collective intelligence[1].

In his remarks at the launch of the MIT Center for Collective Intelligence (2006), Thomas Malone defines ‘collective intelligence’ as “groups of individuals doing things collectively that seem intelligent.”  As Malone makes clear, this is not a new idea; in the same way that knowledge management (KM) builds on concepts that have existed for decades, even centuries, ‘collective intelligence’ can be seen as a new name for old ideas.  What makes it (and KM) ‘new’ again is its potential application through new information technologies (i.e. the Web):

It is now possible to harness the intelligence of huge numbers of people connected in very different ways and on a much larger scale than has ever been possible before. (Malone, 2006).

The question becomes: “How can people and computers be connected so that collectively they act more intelligently than any individual, group or computer has ever done before?” (ibid.)  The same question is reflected, rather prophetically, before Web 2.0 in Marwick’s consideration of KM technology (2001).  Channeling Nonaka’s model of organizational knowledge creation, Marwick emphasizes the value and importance of tacit knowledge while identifying the shortcomings of then-current technologies.  The great hope for Marwick is ‘groupware’, a broad term, with perhaps less currency today, for the portals, intranets and collaborative software packages that facilitate group communication and project work.  In 2001, Marwick refers to such tools as ‘applications’ or ‘products’, standalone packages that organizations purchase and own.  It is significant that the Web 2.0 paradigm renders such phrasing inaccurate for describing collective intelligence (or social media) tools: the web itself has become the ‘product’, the platform, and the tools are services.  This distinction is essential.  The difference between a handful of software packages for computer-supported cooperative work and a universally accessible platform for social media is that the latter better reflects the interconnected nature of the activities involved in the knowledge creation process.

Nonaka’s model of knowledge creation is split into four categories (socialization, externalization, combination and internalization) that describe the kind of knowledge transfer that occurs between individual and group, tacit knowledge and explicit knowledge; yet it is unified conceptually as a spiral that circles through these categories in an eternal series of overlapping cycles.  Pre-Web 2.0, this posed a problem for KM, because it meant that a variety of technologies, many of which would not communicate well, or at all, with each other, had to be employed at each stage.  There is no continuity, no sense of connection between one tool and the next, when the process of knowledge creation is by its very nature continuous and interconnected.  Web 2.0 gives us the paradigm with which to understand that continuity.  It also gives us the potential for collective intelligence that Malone is so excited about.
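It helps to write Nonaka’s four conversion modes out explicitly to see why a patchwork of disconnected tools strains against the model.  A minimal sketch of the published categories (the layout and the loop are my own illustration):

# Nonaka's four modes of knowledge conversion, written as a mapping
# from the form knowledge starts in to the form it ends in.
SECI = {
    "socialization":   ("tacit",    "tacit"),     # shared experience
    "externalization": ("tacit",    "explicit"),  # articulation, metaphor
    "combination":     ("explicit", "explicit"),  # merging documents and data
    "internalization": ("explicit", "tacit"),     # learning by doing
}

# Pre-Web 2.0 KM assigned a separate tool to each mode; the spiral,
# though, is one continuous loop in which each mode feeds the next.
cycle = ["socialization", "externalization", "combination", "internalization"]
for turn in range(2):  # two turns of the spiral
    for mode in cycle:
        src, dst = SECI[mode]
        print("turn %d, %s: %s -> %s" % (turn + 1, mode, src, dst))

A tool that supports only one of these transitions (a document repository for combination, say) drops the spiral at every hand-off; a platform that hosts all four keeps it continuous.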

Collective intelligence in the Web 2.0 context is by no means flawless.  In fact, this approach to understanding knowledge has led to a whole new set of problems.  Thanks to an ever-expanding assortment of social networks that situate individuals, communities and organizations in relation to one another, we might be less concerned today than Marwick was in 2001 about sharing tacit knowledge through technology; yet the explosion of information in such an unimaginably vast array poses increasingly difficult challenges.  Writing in 2006, Grudin notes the concern felt when photos tagged ‘london’ on Flickr jumped from 70,000 to 200,000 over three months.  Would this be a “tragedy of the commons”: a tool of great promise, combining folksonomic tagging with user-generated photographic collections, grown out of control?  But then Flickr introduced clusters, subsets and pools to re-organize tagged content in a more refined way; crisis averted, and new innovation achieved.  While we have come a long way from Marwick’s groupware, we are still struggling to grasp how concepts like ‘collective intelligence’ and ‘Web 2.0’, and their associated technologies, can help KM.  New challenges and innovations are encountered every day.  And as Grudin suggests, “These are still early days.”
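Flickr’s clustering fix, mentioned above, is easy to gesture at in code: group the photos that share a headline tag by the other tags they carry, so that ‘london’ splits into coherent subsets.  A toy co-occurrence sketch (the photo data and the grouping rule are mine, not Flickr’s actual algorithm):

from collections import defaultdict

# Toy photo records: each photo is a set of user-assigned tags.
photos = [
    {"london", "bridge", "thames"},
    {"london", "thames", "boat"},
    {"london", "underground", "tube"},
    {"london", "tube", "station"},
    {"london", "bridge", "night"},
]

# Cluster 'london' photos by an alphabetically-first co-occurring tag,
# a crude stand-in for Flickr's cluster feature.
clusters = defaultdict(list)
for i, tags in enumerate(photos):
    others = sorted(tags - {"london"})
    key = others[0] if others else "misc"
    clusters[key].append(i)

for key, members in sorted(clusters.items()):
    print("london/%s: photos %s" % (key, members))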


[1] That’s not to say that the collective intelligence or crowdsourcing principle underlying social media on the web is definitively superior; quite the opposite: Web 2.0 introduces a new host of challenges, such as determining reliability, issues of intellectual property, and the organization of information, that were not nearly as problematic under a traditional approach to knowledge creation.


Bibliography

Grudin, J. (2006). Enterprise Knowledge Management and Emerging Technologies. Proceedings of the 39th Hawaii International Conference on System Sciences. 1-10.

Malone, T. W. (2006, October 13). What is collective intelligence and what will we do about it? MIT Center For Collective Intelligence. Retrieved from http://cci.mit.edu/about/MaloneLaunchRemarks.html

Marwick, A. D. (2001). Knowledge management technology. IBM Systems Journal, 40(4), 814-830.

O’Reilly, T. (2005, September 30). What is Web 2.0? Design Patterns and Business Models for the Next Generation of Software. O’Reilly Media. Retrieved from http://oreilly.com/web2/archive/what-is-web-20.html

The Implications of Database Design

In studying the database schema for the Prosopography of Anglo-Saxon England (PASE), several features of the design are immediately apparent[1].  Data is organized around three principal tables: the Person (i.e. the historical figure mentioned in a source), the Source (i.e. a text or document from which information about historical figures is derived), and the Factoid (i.e. the dynamic set of records associated with a particular reference in a source about a person).  There are a number of secondary tables as well, such as the Translation, Colldb and EditionInfo tables, which provide additional contextual data for the source, and the Event, Person Info, Status, Office, Occupation and Kinship tables, among others, which provide additional data for the Factoid table.  From these organizational structures, it is clear that the database is designed to pull out information about historical figures based on Anglo-Saxon texts.  I admire the versatility of the design and the way it interrelates discrete bits of data (even more impressive when tested using the web interface at http://www.pase.ac.uk), but I can’t help recognizing an inherent bias in this structure.

In reading John Bradley and Harold Short’s article “Using Formal Structures to Create Complex Relationships: The Prosopography of the Byzantine Empire—A Case Study”, I found myself wondering at the choices made in the design of both databases.  The PBE database structure appears to be very similar, if not identical, to that of PASE.  Perhaps it’s my background as an English major rather than a History major, but I found the design especially unhelpful in one particular instance: how do I find and search the information associated with a unique author?  By focusing on the historical figures written about in sources, rather than on the authors of those sources, the creators made a conscious choice to value historical figures over authors and sources.

To be fair, the structure does not necessarily preclude searching author information, which appears in the Source table, and there is likely something to be said about the anonymous and possibly incomplete nature of certain Anglo-Saxon texts.  In the PASE interface, the creators appear to have resolved this issue somewhat by allowing users to browse by source and by listing the author’s name in place of the title of the source (no doubt the default when the source document has no official title).  It is then possible to browse references within the source and to match the author’s name to a person’s name[2].  The decision to organize information in this way, however, de-emphasizes the role of the author and his historical significance, reducing him to a faceless and neutral authority.  This may be done to facilitate interpretation; Bradley & Short discuss the act of identifying factoid assertions about historical figures as an act of interpretation, in which the researcher must make a value judgment about what the source is saying about a particular person (8).  Questions about the author’s motives would only problematize this act.  The entire organization of the database, in fact, results in the almost complete erasure of authorial intent.

What this analysis of PASE highlights for me is how important it is to be aware of the implications of our choices in designing databases and creating database interfaces.  The simplified sketch below makes the structure, and the bias, concrete.
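The reconstruction is in SQLite; the table and column names are my own shorthand based on my reading of the schema document, not the actual PASE DDL:

import sqlite3

# A much-simplified reconstruction of the PASE design: information
# about a person is never stored directly, but always as a Factoid
# linking a Source to a Person. Authors get no table of their own.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE source  (id INTEGER PRIMARY KEY, title TEXT, author_name TEXT);
CREATE TABLE factoid (
    id        INTEGER PRIMARY KEY,
    person_id INTEGER REFERENCES person(id),
    source_id INTEGER REFERENCES source(id),
    kind      TEXT,   -- e.g. event, office, kinship
    assertion TEXT    -- the researcher's interpretation of the passage
);
INSERT INTO person  VALUES (1, 'Aldhelm 3');
INSERT INTO source  VALUES (1, 'Aldhelm', 'Aldhelm');
INSERT INTO factoid VALUES (1, 1, 1, 'office', 'bishop of Malmesbury');
""")

# Asking what a source says about a person is a natural join ...
for row in con.execute("""
    SELECT p.name, s.title, f.kind, f.assertion
    FROM factoid f
    JOIN person p ON p.id = f.person_id
    JOIN source s ON s.id = f.source_id
"""):
    print(row)

# ... but 'author' is only a text column on source, so asking what a
# given author wrote means matching author_name back onto person.name
# by hand: the bias discussed above, made visible in the schema.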
The creators of PASE might not have intended to render the authors of their sources so impotent, but the decisions they made in the construction of their database tables and user interface, and in their approach to entering factoid data, had that ultimate result.

Cited References

Bradley, J. and Short, H. (n.d.). Using Formal Structures to Create Complex Relationships: The Prosopography of the Byzantine Empire. Retrieved from http://staff.cch.kcl.ac.uk/~jbradley/docs/leeds-pbe.pdf

PASE Database Schema. (n.d.). [PDF]. Retrieved from http://huco.artsrn.ualberta.ca/moodle/file.php/6/pase_MDB4-2.pdf

Prosopography of Anglo-Saxon England. (2010, August 18). [Online database]. Retrieved from http://www.pase.ac.uk/jsp/index.jsp


[1] One caveat: As I am no expert, what is apparent to me may not be what actually is.  This analysis is necessarily based on what I can understand of how PASE and PBE are designed, both as databases and as web interfaces, and it’s certainly possible I’ve made incorrect assumptions based on what I can determine from the structure.  Not unlike the assumptions researchers must make when identifying factoid assertions (Bradley & Short, 8).
[2] For example, clicking the source “Aldhelm” will list all the persons found in Aldhelm, including Aldhelm 3, bishop of Malmesbury, the eponymous author of the source (or rather, collection of sources).  Clicking Aldhelm 3 will bring up the Person record, or factoid: Aldhelm as historical figure.  The factoid lists all of the documents attributed to him under “Authorship”.  Authorship, incidentally, is a secondary table linked to the Factoid table; based on the structure, this information appears to be derived from the Colldb table, which links to the Source table.  All this to show that it is possible, but by no means evident, to search for author information.

The Knowing and Agency of Information Need

There is a fuzzy distinction between “information” and “knowledge” that is strongly emphasized in Wilson’s article “On User Studies and Information Needs”.  Information exists as a subcategory of knowledge; in terms of the models we’ve previously discussed—in particular, Nonaka and Cook & Brown—knowledge encompasses both the property of information and context, and the activity of interpretation (or “knowing”).  Wilson describes this in his figure for the “Universe of Knowledge” (661).  An alternative interpretation of this model would be to consider the concentric circles as “bodies of knowledge”, and the intersecting lines between “users”, “information systems”, and “information resources” as the action or practice of “knowing”.

The distinction between “information” and “knowledge” becomes fuzzy the instant you introduce agency into the equation, particularly human agency.  As soon as we begin thinking of people accessing, transmitting, and creating information, we also have to start thinking about processes and motivation.  The concept of “information needs”, then, is epistemological; as Wilson describes it, an information need arises from a more basic “human need” that may have a physiological, affective or cognitive source, implying that a person must know something before seeking information (663).  That initial knowing or knowledge might be implicit or tacit.  You might feel hungry and, knowing implicitly that you must eat to resolve this physiological need, seek information about the nearest restaurant or supermarket.  How you go about doing that would be categorized as “information seeking behaviour”, and would be influenced by context: what you already know about what restaurants or supermarkets look like, what neighborhood you are in, what kind of restaurant or food you can afford and how much money you have in your purse or wallet, what information resources are most easily available to you, and so on.  If you have an iPhone, you might simply locate the nearest restaurant using GPS.  If not, you might consult a nearby map or directory, or simply look for signs of restaurants.  Or you might ask someone.  All of these represent different behaviours designed to fulfill an information need.

Once you have located a restaurant, you have fulfilled the information need arising from your physiological need, hunger.  You have acquired information: namely, where to find the nearest restaurant from your starting point.  But you have also acquired a great deal of additional, potentially useful knowledge about the neighborhood, about other businesses you came across that were not restaurants, about how to find restaurants in general, and so on.  What you now know is not limited to the restaurant itself and the meal you are about to have, but includes every new piece of information that you came across throughout the information seeking process, including the process itself.  And this knowledge will be available to you the next time you have an information need.

Wilson identifies three definitions of “information” in user studies research (659):

1. Information as a physical entity (a book, a record, a document).

2. Information as a medium, or a “channel of communication” (oral and written).

3. Information as factual data (the explicit contents of a book, or record, or document).

These definitions are useful, but need to be expanded.  In his analysis, Wilson only discusses information as being transmitted orally or in writing.  There are, however, a number of alternative means of acquiring information.  Taking my previous example, you might smell cooked food before you see the marquee above a restaurant.  Or you might first notice the image of a hamburger on a sign before reading the words printed underneath.  Both of these examples, visual and olfactory information media, demonstrate that messages are transmitted in a variety of ways.  Additionally, we cannot forget context.  If I am on a diet, I might ignore the building that smells of French fries and hamburgers.  If I am allergic to certain foods, an image of the type of fare served in a particular establishment might turn me off of it.  And it is possible to miss these messages entirely: if I have a cold, I may not smell the hamburgers, and will walk past that particular restaurant, unaware that it could satisfy my need.

Knowledge seeking can also be considered in terms of communication.  When I look at a sign, a message containing information is being transmitted to me.  Simplistically, this is the “conduit” metaphor for communication, which usually disregards or downplays the notions of context, influence and noise.  The communication process is far more complex, but conceptually the metaphor is useful for highlighting the roles of transmitter/speaker, message and receiver/listener.  Thomas, Kellogg and Erickson explore this idea in their article by suggesting the alternative “design-interpretation” model.  They argue that “getting the right knowledge to people” is only part of the equation, and that “people need to engage with it and learn it.” (865)  Thomas, et al. describe the model as follows:

The speaker uses knowledge about the context and the listener to design a communication that, when presented to and interpreted by the listener, will have some desired effect. (865)

The application of existing knowledge about the environment and the target audience by the speaker (or transmitter) is important to understand.  When I see the image of the hamburger, I can assume that the restaurant owners put some thought into presenting an appetizing, attractive product that will draw the most clientele.  If the image makes my mouth water, the message is received—and if I am then motivated to enter the restaurant, the owners achieved the desired effect.  If, however, I find the image unappealing, the message has failed; not because I don’t understand the information it contains, but because the restaurant owners failed to appropriately apply their knowledge about what potential customers want.  Perhaps they lacked the information they needed in order to do this successfully.

Cited References

Thomas, J. C., Kellogg, W. A. and Erickson, T. (2001). The knowledge management puzzle: Human and social factors in knowledge management. IBM Systems Journal, 40(4), 863-884.

Wilson, T. D. (2006). On User Studies and Information Needs.  Journal of Documentation, 62(6), 658-670.

Forms of Knowledge, Ways of Knowing

The principal premise of Cook & Brown’s “Bridging Epistemologies” is that there are two separate yet complementary epistemologies tied up in the concept of knowledge.  The first is found in the traditional definition of knowledge, which describes knowledge as something people possess: a property, in more than one sense of the word.  Cook & Brown refer to this as the “epistemology of possession”, and it can be characterized as the “body” of knowledge.  The second, the “epistemology of practice”, homes in on the act of knowing found in individual and group activities: it is the capacity of doing.  Cook & Brown contend that new knowledge is generated through the interplay between these two distinct forms, in a manner not unlike Nonaka’s spiral structure of knowledge creation (with one key difference, described below); they call this interplay the “generative dance”.

Another way I conceptualized this distinction (using analogy, as Nonaka urges, to resolve contradiction and generate explicit knowledge from tacit knowledge (21)) was to consider these two notions of “knowledge”/“knowing” from a linguistic perspective: if knowledge and knowing were distinct elements of the English sentence, knowing would be the verb and knowledge the object.  This is supported by Cook & Brown’s emphasis on how “knowledge” can be applied in practice as a tool to complete a task, and can result from the act of knowing (388); “knowing” acts upon (and through) “knowledge”, just as the verb acts upon (or through) the object.  The subject, the person or people performing the action, is as essential to the formulation of knowledge/knowing as it is to the sentence.  The subject’s relationship to the verb and the object closely resembles the individual’s (or group’s) relationship to knowing and knowledge.  The verb represents enaction by the subject, as knowing does, and the object represents that which is employed, derived or otherwise affected by this enaction, as knowledge is.  Cook & Brown’s principle of “productive inquiry” and the interaction between knowledge and knowing, then, can be represented by the structure of the sentence.

Cook & Brown’s premise has many important implications for knowledge management.  Perhaps the most important of these is the idea that knowledge, in whatever form it takes, is abstract, static and required for action (that is, “knowing”), while knowing is dynamic, concrete and works through forms of knowledge.  Of these characteristics, the most dramatic must be the static nature of knowledge; in their most significant break with Nonaka, Cook & Brown state that knowledge does not change or transform.  The only way for new knowledge to be created from old knowledge is for it to be applied in practice (i.e. “productive inquiry”).  Nonaka perceives knowledge as malleable, able to transform from tacit to explicit and back again, while Cook & Brown unequivocally state that knowledge of one form remains in that form (382, 387, 393, 394-95).  For Cook & Brown, each form of knowledge (explicit, tacit, individual and group) performs a unique function (382).  The appropriate application of one form of knowledge in practice (the act of knowing) can, however, give rise to knowledge in another form (393).

I found Blair’s article “Knowledge Management: Hype, Hope or Help?” useful as a supplement to Cook & Brown.  Blair makes several insightful points about knowledge and knowledge management, such as applying Wittgenstein’s theory of meaning-as-use to defining “knowledge”, identifying abilities, skills, experience and expertise as the human aspect of knowledge, and raising the problem of intellectual property in KM practice.  Blair’s most valuable contribution, however, is to emphasize the distinction between two types of tacit knowledge.  This is a point Cook & Brown (and Nonaka) fail to make in their sweeping theoretical models, and one I have struggled with in my readings of both.  Tacit knowledge can be either potentially expressible or not expressible (Blair, 1025).  An example of tacit knowledge that is “potentially expressible” would be heuristics, the “trial-and-error” lessons learned by experts.  Certainly in my own experience, this is a form of tacit knowledge that can be gleaned in speaking with experts and formally expressed to educate novices (generating “explicit knowledge” through the use of “tacit knowledge”).  An example of inexpressible tacit knowledge would be the “feel” of the flute at different stages of its construction described in Cook & Brown’s flutemakers example (395-96); this is knowledge that can only be acquired with experience, and no amount of discussion with experts, of metaphor and analogy, will yield a sufficient understanding of what it entails.  The distinction is essential, since as knowledge workers we must be able to determine how knowledge is and should be expressed.


Cited References

Blair, D. (2002). Knowledge management: Hype, hope, or help? Journal of the American Society for Information Science and Technology 53(12), 1019-1028.

Cook, S. D. N., and Brown, J. S. (1999). Bridging Epistemologies: The Generative Dance between Organizational Knowledge and Organizational Knowing, Organization Science 10(4), 381-400.

Nonaka, I. (1994). A Dynamic Theory of Organizational Knowledge Creation. Organization Science 5(1), 5-37.

Shapiro’s Shakespeare and the “Generative Dance” of his Research

Perhaps the most interesting thing about James Shapiro’s A Year in the Life of William Shakespeare is the kind of scholarship it represents.  Drawing upon dozens, likely hundreds, of sources, Shapiro presents a credible depiction of Shakespeare’s life in 1599.  Rather than limiting himself to sources that are exclusively about Shakespeare or his plays, Shapiro gathers a mountain of data about Elizabethan England.  He consults collections of public records that shed light on Shakespeare’s own life or the lives of his contemporaries, not just to identify the historical inspiration and significance of his plays, but to give us an idea of what living in London as a playwright in 1599 would have been all about.  This, to me, is a fascinating use of documentary evidence that few have successfully undertaken.

Before I go on, I should note that I’m currently working on a directed study in which I am being thoroughly steeped in the objects and principles of knowledge management.  It is in light of this particular theoretical context that I read Shapiro and think, “he’s really on to something here.”  In their seminal article “Bridging Epistemologies: The Generative Dance Between Organizational Knowledge and Organizational Knowing”, Cook & Brown present a framework in which “knowledge”, the body of skills, abilities, expertise, information, understanding, comprehension and wisdom that we possess, and “knowing”, the act of applying knowledge in practice, interact to generate new knowledge.  Drawing upon Michael Polanyi’s distinction between tacit and explicit knowledge, Cook & Brown identify four distinct forms of knowledge: tacit, explicit, individual and group.  They then advance the notion of “productive inquiry”, in which these different forms of knowledge can be employed as tools in an activity, such as riding a bicycle or writing a book about an Elizabethan dramatist, to generate new knowledge, in forms that perhaps were not possessed before.  It is this interaction between knowledge and knowing, producing new knowledge, that constitutes the “generative dance”.

Let’s return for a moment to Polanyi’s tacit and explicit knowledge.  The sources Shapiro is working with are, by their nature, explicit, since he is working with documents.  The book itself is explicit, since it too is a document, and the knowledge it contains is fully and formally expressed.  But the activity of taking documentary evidence from multiple sources, interpreting each piece of evidence in the context of the other sources, and finally synthesizing all of it into a book represents more epistemic work than either the book or the sources contain by themselves.  That activity is what Cook & Brown describe as “knowing”, or the “epistemology of practice”.  The notions of recognizing context and of interpretation, however, suggest that there’s even more going on here than meets the eye.  In this activity, Shapiro is merging disparate bits of explicit knowledge to develop a hologram of Shakespeare’s 1599.  This hologram is tacit: an image he holds in his mind that grows more and more sophisticated the more historical evidence he finds and relates.  Not all of the patterns and connections he uncovers are even expressible until he begins the synthesis, the act of writing his book.  Throughout this process, then, new knowledge, both tacit and explicit, is constantly being created.

Let’s also consider for a moment Cook & Brown’s “individual” and “group” knowledge.  Shapiro’s mental hologram can safely be classified as individual knowledge.  And each piece of evidence from a single source is also individual knowledge (though, certainly, some of Shapiro’s sources might represent popular stories or widely known facts, and thus group knowledge).  The nature of Shapiro’s work, however, the collective merging of disparate sources, problematizes the individual/group distinction.  What arises from his scholarship is neither group knowledge (i.e. knowledge shared among a group of people) nor individual knowledge (i.e. knowledge possessed by an individual), but some sort of hybrid that is not so easily understood.

From a digital humanist perspective, we can think of Shapiro’s scholarship as a relational database (and, in effect, we just have).  All of the data and documentary evidence gets plugged into the database, and connections no one even realized existed are then discovered.  We might have many people adding data to the database, sharing bits of personal knowledge.  And everyone with access to the database can potentially discover new connections and patterns, and in doing so create new knowledge.  Would such a collection be considered group knowledge?  Would individual discoveries be individual knowledge?  Would the perception of connections be tacit or explicit?  It is not altogether clear, because there are interactions occurring at a meta-level: interactions between data, between sources, and between users/readers and the sources and the patterns of interacting sources.  What is clear is that this interactive “dance” is constantly generating additional context, new forms of knowledge, new ways of knowing.
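A minimal sketch of that thought experiment (the table and sample rows below are invented for illustration; this is not Shapiro’s actual method):

import sqlite3

# Thought experiment: Shapiro's evidence as a relational database.
# Rows are invented; the point is that a join surfaces connections
# no single contributor entered deliberately.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE evidence (source TEXT, year INTEGER, topic TEXT, detail TEXT);
""")
con.executemany(
    "INSERT INTO evidence VALUES (?, ?, ?, ?)",
    [
        ("parish records",   1599, "Globe",   "timber moved across the Thames"),
        ("court documents",  1599, "Essex",   "campaign in Ireland falters"),
        ("printed pamphlet", 1599, "Essex",   "public mood turns anxious"),
        ("quarto text",      1599, "Henry V", "chorus alludes to Ireland"),
    ],
)

# A self-join on topic and year: two independent records about Essex
# in 1599 line up, the kind of unplanned connection described above.
for row in con.execute("""
    SELECT a.source, b.source, a.topic
    FROM evidence a JOIN evidence b
      ON a.topic = b.topic AND a.year = b.year AND a.source < b.source
"""):
    print(row)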


Cook, S. D. N., and Brown, J. S. (1999). Bridging Epistemologies: The Generative Dance between Organizational Knowledge and Organizational Knowing, Organization Science 10(4), 381-400.

Shapiro, J. (2006). A Year in the Life of William Shakespeare: 1599. New York: Harper Perennial. 394p.

Review Paper 1: Wrapping our Heads Around KM

In this week’s readings, Prusak and Nunamaker Jr. et al. successfully provide a solid and informed definition of ‘knowledge management’ (KM) and an account of why it is important.  Prusak establishes from the get-go that KM is not just about managing information, but about providing and maintaining access to “knowledge-intensive skills” (1003).  He also identifies the pitfall of reducing KM to simply “moving data and documents around”, and the critical value of supporting tacit knowledge, which is far less easily digitized (1003).  Prusak chooses to define KM based on its disciplinary origins, noting economics, sociology, philosophy and psychology as its “intellectual antecedents”, rather than defining it from a single perspective or its current application alone (1003-1005).  Nunamaker Jr. et al. take a different approach, defining KM first in the context of IT, that is, KM as a system or technology, and then presenting a hierarchical framework from which to understand its role.  In this sense, data, information, knowledge and wisdom all exist on a scale of increasing application of context (2-5).  Apart from this first theoretical framework, Nunamaker Jr. et al. risk falling into the trap Prusak warns against; they define KM as the effort to organize information so that it is “meaningful” (1).  But what is “meaningful”?  Only context can determine meaning; fortunately, Nunamaker Jr. et al. at least account for this unknown quantity in their framework (3-4).  They also propose a unit to measure organizational knowledge: intellectual bandwidth.  This measurement combines their KM framework with a similar framework for collaborative information systems (CIS), and is defined as “a representation of all the relevant data, information, knowledge and wisdom available from a given set of stakeholders to address a particular issue.” (9)  It is clear from their efforts to quantify KM, and from the manner in which they frame KM as a system, that Nunamaker Jr. et al. are writing for a particular audience of technicians and IT specialists.  Prusak, meanwhile, is writing for a more general audience of practitioners.

One thing I felt was lacking from both articles was a clear statement and challenge of the assumptions behind systematizing knowledge.  Nunamaker Jr. et al.’s argument for “intellectual bandwidth” is compelling, but I cannot help but be skeptical of any attempt to measure concepts as fuzzy as “wisdom” and “collective capability” (8-9).  Even Prusak clearly states that, as in economics, an essential knowledge management question is “what is the unit of analysis and how do we measure it?” (1004).  The underlying assumption is that knowledge can, in fact, be measured.  I am dubious about this claim (incidentally, this is also why I am dubious of similar claims often made in economic theory).  Certainly, there are other, qualitative forms of analysis that do not require a formal unit of measurement.  Assuming (a) that knowledge is quantifiable, and (b) that such a quantity is required in order to examine it properly, seems to me to lead down a dangerous and not altogether useful path.  The danger is that, in focusing on how to measure knowledge in a manner that lends itself to quantitative analysis, one becomes absorbed in designing metrics and forgets that the purpose of KM is primarily to capture, organize and communicate the knowledge and knowledge skills within an organizational culture.  Perhaps this danger should be considered alongside, and as an extension of, Prusak’s pitfall of understanding KM merely as “moving data and documents around”.

Both of these articles, as well as the foundational article by Nonaka also under discussion this week, are valuable insofar as they lay the groundwork for knowledge management as a theoretical perspective.  Nunamaker Jr. et al. present much food for thought on how knowledge is formally conceptualized with their proposed frameworks.  Prusak, meanwhile, provides a sound explanation of the origins of KM and forecasts the future of the field by suggesting two possible outcomes: either it will become so embedded in organizational practice as to be invisible, like the quality movement, or it will be hijacked by opportunists (the unscrupulous, profit-seeking consultants Prusak disdains at the beginning of his article, 1002), like the re-engineering movement (1006).  Both papers were published in 2001, and a decade later neither prediction appears to have been fulfilled.  KM has been adopted by organizations much as the quality movement was, but I suspect that knowledge workers are still trying to wrap their heads around how it is to be implemented and what it actually means.


Cited References


Nunamaker Jr., J. F., Romano Jr., N. C. and Briggs, R. O. (2001). A Framework for Collaboration and Knowledge Management. Proceedings of the 34th Hawaii International Conference on System Sciences. 1-12.


Prusak, L. (2001). Where did knowledge management come from? IBM Systems Journal 40(4), 1002-1007.