
Assessing Social Media – Methods

I have written about various social media and web technologies as they relate to knowledge management (KM), and as they are discussed in the literature.  But I haven’t really touched on how the literature approaches measuring the application and success of such technologies in an organizational context.  Prusak notes that one of the priorities of KM is to identify the unit of analysis and how to measure it (2001, 1004).  In this review paper I will examine some of the readings that have applied this question to social media. For the sake of consistency, the readings I have chosen deal with the assessment of blogs for the management of organizational knowledge, but all of the methods discussed could be generalized to other emerging social technologies.

Grudin indicates that the reason most past attempts at developing systems to preserve and retrieve knowledge have failed is that digital systems required information to be represented explicitly when most knowledge is tacit: “Tacit knowledge is often transmitted through a combination of demonstration, illustration, annotation, and discussion.” (2006, 1) But the situation, as Grudin explains, has changed—“old assumptions do not hold…new opportunities are emerging.” (ibid.) Memory and storage are no longer sold at a premium, allowing the informal and interactive activities used to spread tacit knowledge to be captured and preserved; emerging trends such as blogs, wikis, the ever-increasing efficiency of search engines, and of course the social networks such as Twitter and Facebook that have come to dominate the Internet landscape open up a multitude of ways in which tacit knowledge can be digitized.

In his analysis of blogs, Grudin identifies five categories (2006, 5):

Diary-like or personal blogs, which develop the skill of engaging readers through personal revelation;

A-list blogs by journalists and high-profile individuals, which serve as a source of information on events, products and trends;

Watchlists, which track references across a wide selection of sources and reveal how a particular product, organization, name, brand or topic is being discussed;

Externally visible employee blogs, which provide a human face for an organization or product, offsetting the potential legal and PR risks for a corporation;

Project blogs, internal blogs that focus on work and serve as a convenient means of collecting, organizing and retrieving documents and communication.

Lee, et al. make a similar move in categorizing the types of public blogs used by Fortune 500 companies (2006, 319):

Employee blogs (maintained by rank-and-file employees, varying in content and format)

Group blogs (operated by a group of rank-and-file employees, focusing on a specific topic)

Executive blogs (featuring the writings of high-ranking executives)

Promotional blogs (promoting products and events)

Newsletter-type blogs (covering company news)

Grudin does not conduct any formal assessment of blogs, except to provide examples of project blogs and, based on his personal experience, to identify the technical and behavioral characteristics that allowed that particular sub-type to succeed (2006, 5-7). Lee, et al.’s approach to assessing blogs involves content analysis of 50 corporate blogs launched by the 2005 Fortune 500 companies (2006, 322-23). In addition to the categories above, Lee, et al. also identified five distinct blogging strategies based on their findings, which broadly fall under two approaches (321):

Bottom-up, in which all company members are permitted to blog, and each blog serves a distinct purpose (not necessarily assigned by a higher authority)[1];

Top-down, in which only select individuals or groups are permitted to blog, and the blogs serve an assigned purpose that rarely deviates between blogs.

As the names suggest, a greater control of information is exercised in the top-down approach, while employee bloggers in companies adopting the bottom-up approach are provided greater autonomy.

Huh, et al. developed a unique approach in their study of BlogCentral, IBM’s internal blogging system (2007).  The study combined interviews with individual bloggers about their blogging practices and content analysis of their blogs.  Based on this data, they were able to measure two characteristics of blogs: the content (personal stories/questions provoking discussion/sharing information or expertise) and the intended audience (no specific audience/specific audience/broad audience); a toy sketch of this kind of coding appears after the list below.  These findings revealed four key observations:

– Blogs provide a medium for employees to collaborate and give feedback;

– Blogs are a place to share expertise and acquire tacit knowledge;

– Blogs are used to share personal stories and opinions that may increase the chances of social interaction and collaboration;

– Blogs are used to share aggregated information from external sources by writers who are experts in the area.
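To make that coding scheme concrete, here is a minimal sketch (in Python) of how posts coded along Huh, et al.’s two dimensions—content type and intended audience—could be cross-tabulated. The category labels come from the study; the coded posts and the code itself are invented for illustration and are not the authors’ instrument.

```python
from collections import Counter

# Content and audience categories as described by Huh, et al. (2007);
# the coded posts below are invented purely for illustration.
CONTENT_TYPES = {
    "personal story",
    "question provoking discussion",
    "sharing information or expertise",
}
AUDIENCES = {"no specific audience", "specific audience", "broad audience"}

coded_posts = [
    ("personal story", "no specific audience"),
    ("sharing information or expertise", "broad audience"),
    ("question provoking discussion", "specific audience"),
    ("sharing information or expertise", "specific audience"),
]

# Cross-tabulate content type against intended audience.
crosstab = Counter(
    (content, audience)
    for content, audience in coded_posts
    if content in CONTENT_TYPES and audience in AUDIENCES
)

for (content, audience), count in sorted(crosstab.items()):
    print(f"{content:<35} | {audience:<22} | {count}")
```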

Rodriguez examines the use of WordPress blogs in two academic libraries for internal communication and knowledge management at the reference desk (2010).  Her analysis measures the success of these implementations using diffusion of innovation and organizational lag theories. Rogers’ Innovation Diffusion Theory establishes five attributes of an innovation that influence its acceptance in an organizational environment: relative advantage, compatibility, complexity, trialability, and observability (2010, 109). Organizational lag, meanwhile, identifies the discrepancy between the adoption of technical innovation—i.e. the technology itself—and administrative innovation—i.e. the underlying administrative purpose(s) for implementing the technology, usually representing a change in workflow to increase productivity.  In analyzing the two implementations of the blogging software, Rodriguez finds that both libraries succeeded in terms of employee adoption of the technical innovation, but failed with the administrative innovation.  This was due specifically to the innovation having poor observability: “the degree to which the results of the innovation are easily recognized by the users and others” (2010, 109, 120). The initiators of the innovation in both cases did not “clearly articulate the broader administrative objectives” and “demonstrate the value of implementing both the tool and the new workflow process.” (2010, 120) If they had done so, Rodriguez suggests, the blogs might have been more successful.

While all of these studies approached blogging in a different way—project blogs, external corporate blogs, internal corporate blogs and internal group blogs—and measured different aspects of the technology—what it is, how it is used, if it is successful—they reveal a number of valuable approaches to studying social media in the KM context. Categorization, content and discourse analysis, interviews, and the application of relevant theoretical models are all compelling methods to assess social media and web technologies.

 


[1] One of the valuable contributions of Lee, et al.’s study is to also identify the essential purposes for which corporate blogs are employed. Some of these include product development, customer service, promotion and thought leadership. The notion of ‘thought leadership’ in particular, as a finding of their content analysis, is worth exploring; ‘thought leadership’ suggests that the ability to communicate innovative ideas is closely tied to natural leadership skills, and that blogs and other social media (by extension) can help express these ideas. Lee, et al.’s findings also suggest that ‘thought leadership’ in blogs builds the brand, or ‘human’ face, of the organization, while acting as a control over employee blogs, evidenced by the fact that it is found primarily in blogs that employ a top-down strategy.


Bibliography

Grudin, J. (2006).  Enterprise Knowledge Management and Emerging Technologies. Proceedings of the 39th Hawaii International Conference on System Sciences. 1-10.

Huh, J., Jones, L., Erickson, T., Kellogg, W.A., Bellamy, R., and Thomas, J.C. (2007). BlogCentral: The Role of Internal Blogs at Work. Proceedings of CHI EA 2007, April 28-May 3, San Jose, CA. 2447-2452. doi:10.1145/1240866.1241022

Lee, S., Hwang, T., and Lee, H. (2006). Corporate blogging strategies of the Fortune 500 companies. Management Decision 44(3). 316-334.

Prusak, L. (2001). Where did knowledge management come from? IBM Systems Journal, 40(4), 1002-1007.

Rodriguez, J. (2010). Social Software in Academic Libraries for Internal Communication and Knowledge Management: A Comparison of Two Reference Blog Implementations. Internet Reference Services Quarterly 25(2). 107-124.


Collective Intelligence, Web 2.0, and Understanding Knowledge

One of the key elements of Web 2.0, as established by Tim O’Reilly in his 2005 paper “What is Web 2.0?”, is the notion of ‘collective intelligence’.  The term itself does not suggest any particular type of technology; rather, it evokes an epistemological stance toward the concept of ‘intelligence’—if ‘intelligence’ is the cognitive capacity to think and learn, ‘collective intelligence’ implies the capacity to think, learn and share knowledge together, as a group. Web 2.0 is more of a paradigm than simply a new breed of information technologies; it is a shift in how we perceive the ways in which knowledge is shared, expanding the means of knowledge production to non-specialists.  A prime example of this principle is Wikipedia; once upon a time, encyclopedias (such as Britannica) were produced by a small group of subject specialists, high priests of their respective domains.  Wikipedia’s model transformed this approach, stripping the high priests of their power and opening up the opportunity to produce, edit and debate content to all.  The results are revealing—while entries on Wikipedia occasionally lack the accuracy of a traditional encyclopedia, they almost always reflect the current debates that surround a given topic, revealing the fluid nature of such knowledge.  This is not something one could easily apprehend from a traditional encyclopedia.  Why? Because the knowledge is mediated by a variety of perspectives, rather than one alone. That’s the power of collective intelligence[1].

In his remarks at the launch of the MIT Center for Collective Intelligence (2006), Thomas Malone defines ‘collective intelligence’ as “groups of individuals doing things collectively that seem intelligent.” As Malone makes clear, this is not a new idea—in the same way that knowledge management (KM) builds on concepts that have existed for decades, even centuries, ‘collective intelligence’ can be considered in a particular way as a new name for old ideas.  What makes it (and what makes KM) ‘new’ again is its potential application through new information technologies (i.e. the Web):

It is now possible to harness the intelligence of huge numbers of people connected in very different ways and on a much larger scale than has ever been possible before. (Malone, 2006).

The question becomes: “How can people and computers be connected so that collectively they act more intelligently than any individual, group or computer has ever done before?” (ibid.) The same question is reflected before Web 2.0, rather prophetically, in Marwick’s consideration of KM technology (2001).  Channeling Nonaka’s model of organizational knowledge creation, Marwick emphasizes the value and importance of tacit knowledge, while identifying the shortcomings of then-current technologies. The great hope for Marwick is ‘groupware’, a broad term—with perhaps less currency today—referring to portals, intranets and collaborative software packages that facilitate group communication and project work.  In 2001, Marwick refers to such tools as ‘applications’ or ‘products’, standalone packages that organizations purchase and own; it is significant that under the Web 2.0 paradigm such phrasing no longer accurately describes collective intelligence—or social media—tools.  Rather, the web itself has become the ‘product’, the platform, and the tools are services.  This distinction is essential: the difference between a handful of software packages for computer-supported cooperative work and a universally accessible platform for social media is that the latter better reflects the interconnected nature of the activities involved in the knowledge creation process. While Nonaka’s model of knowledge creation is split into four categories (socialization, internalization, externalization, and combination) that describe the types of knowledge transfer that occur between individual and group, tacit knowledge and explicit knowledge, it is unified conceptually as a spiral that circles through these categories in an eternal series of overlapping cycles.  Pre-Web 2.0, this posed a problem for KM, because it meant that a variety of technologies—many of which would not communicate well, or at all, with each other—needed to be employed at each stage. There was no continuity, no sense of connection between one tool and the next, when the process of knowledge creation is by its very nature continuous and interconnected. Web 2.0 gives us the paradigm with which to understand that continuity. It also gives us the potential for collective intelligence that Malone is so excited about.

Collective intelligence in the Web 2.0 context is by no means flawless.  In fact, this approach to understanding knowledge has led to a whole new set of problems.  While we might be less concerned today than Marwick was in 2001 about the sharing of tacit knowledge through technologies, thanks to an ever-expanding assortment of social networks on the web that situate individuals, communities and organizations in relation to one another, the explosion of information in such an unimaginably vast array poses increasingly difficult challenges.  Writing in 2006, Grudin notes the concern that arose when photos tagged ‘london’ on Flickr jumped from 70,000 to 200,000 over three months.  Would this be a “tragedy of the commons”—a tool that showed such promise, combining folksonomic tagging with user-generated photographic collections, grown out of control? But then Flickr introduced clusters, subsets and pools to re-organize tagged content in a more refined way; crisis averted, and new innovation achieved.  While we have come a long way from Marwick’s groupware, we are still struggling to grasp how concepts like ‘collective intelligence’ and ‘Web 2.0’, and their associated technologies, can help KM.  New challenges and innovations are encountered every day.  And as Grudin suggests, “These are still early days.”


[1] That’s not to say that the collective intelligence or crowdsourcing principle that underlies social media on the web is definitively superior; quite the opposite: Web 2.0 introduces a new host of challenges—determining reliability, issues of intellectual property, the organization of information—that were not nearly as problematic under a traditional approach to knowledge creation.


Bibliography

Grudin, J. (2006). Enterprise Knowledge Management and Emerging Technologies. Proceedings of the 39th Hawaii International Conference on System Sciences. 1-10.

Malone, T. W. (2006, October 13). What is collective intelligence and what will we do about it? MIT Center For Collective Intelligence. Retrieved from http://cci.mit.edu/about/MaloneLaunchRemarks.html

Marwick, A. D. (2001). Knowledge management technology. IBM Systems Journal, 40(4). 814-830.

O’Reilly, T. (2005, November 30). What is Web 2.0? Design Patterns and Business Models for the Next Generation of Software. O’Reilly Media. Retrieved from http://oreilly.com/web2/archive/what-is-web-20.html

Twitter and the KM Context

[W]e came across the word “twitter,” and it was just perfect. The definition was “a short burst of inconsequential information,” and “chirps from birds.” And that’s exactly what the product was.
– Jack Dorsey (Sarno, 2009)

Twitter, the popular microblogging tool on which users post updates of up to 140 characters, recently celebrated its five-year anniversary. In the world of fly-by-night Web 2.0 applications, that makes it a well-established and time-tested social technology. What has contributed to Twitter’s success? Why is it such a popular tool?

As its co-founder, Jack Dorsey, suggests in the quotation above, Twitter is a place where users can publish short bursts of information and share them with a larger community. It is “the best way to discover what’s new in your world”, reads the website’s about page (http://twitter.com/about). Still, users unfamiliar with the platform or dubious about this claim might wonder precisely how this tool can be productive. After all, Dorsey’s endorsement is not exactly inspiring: what good is information if it is inconsequential? What makes Twitter such a powerful tool, both from a knowledge management or business perspective and in the broader context of information-sharing, is that it operates in real time. It allows members of communities of practice to track relevant news and share important events as they happen. This crowdsourcing approach to information means that users who follow other users publishing information relevant to their community of practice can keep their finger on the pulse—an extremely valuable commodity in a world that is increasingly knowledge-centric. Similarly, these users can participate in a live, public conversation within a global network of peers, encouraging an ongoing exchange of knowledge. More importantly, the simple premise of “following” (or, in other words, subscribing to user feeds) allows complete personalization, while creating links between users that shape community networks organically, rhizomatically.

Another advantage of Twitter is that it is highly extensible. Twitter has an API (Application Programming Interface) that allows customized software to be built around the basic platform. In this way, users can log in to their account using third-party software like TweetDeck, which allows them to organize and view tweets in a variety of ways. The API also allows the development of widgets to publish tweets on websites and blogs. Viewed as much as a disadvantage as an advantage, the 140-character limit on updates forces users to state a single idea clearly and concisely. This limit was originally dictated by the character limit on text messages from cell phones, which the founders had envisioned as the principal technology for using the service. Soon after the service went public, however, most smartphone models no longer had that limitation on text messages. By then users had discovered that the character limit was the ideal length for short status updates; the limitation distinguishes Twitter from other casual blogging services such as Tumblr, which, no doubt, helped promote the service as a brand. While sometimes inconvenient for users with longer, more elaborate messages, the difference makes Twitter unique as a social media tool.
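As a rough illustration of the kind of client-side organization a TweetDeck-style tool builds on top of the API, the sketch below (in Python, with hard-coded stand-in tweets rather than live API calls) groups tweets into columns by hashtag; the data and function names are invented for illustration only.

```python
from collections import defaultdict

# Toy sketch of the column-style organization a TweetDeck-like client builds
# on top of the Twitter API. The tweets here are hard-coded stand-ins for
# data that would normally be fetched through the API.
tweets = [
    {"user": "rider42", "text": "Great turnout at the bike show #motorcycles"},
    {"user": "sxswfan", "text": "Amazing keynote this morning #SXSW"},
    {"user": "shopowner", "text": "New models arriving this week #motorcycles"},
]

def columns_by_hashtag(tweet_list):
    """Group tweets into 'columns' keyed by each hashtag they contain."""
    columns = defaultdict(list)
    for tweet in tweet_list:
        for word in tweet["text"].split():
            if word.startswith("#"):
                columns[word.lower()].append(tweet)
    return columns

for tag, column in columns_by_hashtag(tweets).items():
    print(tag)
    for tweet in column:
        print(f"  @{tweet['user']}: {tweet['text']}")
```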

A definite disadvantage of this technology, as with many social media technologies, is the public nature of updates and the murky notion of intellectual property. Twitter is perhaps more volatile in this sense than other, similar technologies like blogs or wikis, which require more thoughtful consideration before publishing. The brief nature of tweets makes it easy for users to submit whatever they happen to be thinking or seeing, regardless of legal considerations such as intellectual property or copyright, and updates are published immediately without the opportunity to review or delete them before they go live. This can be problematic for users, particularly high-profile users; one dramatic example, though certainly not the only one, is the tweet that resulted in the termination of a CNN correspondent. In 2010, Octavia Nasr was fired for publishing an update expressing regret over the death of the Grand Ayatollah Mohammad Hussein Fadlallah, a leader of Hezbollah. Twitter poses a problem for e-Discovery that courts around the world have not yet come to terms with.

To provide a nuts-and-bolts explanation of how Twitter works and to help understand its practicality, it is useful to consider the following scenario: You are interested in motorcycles, and want current information about promotions, events, and people in your area related to that interest. You create an account on Twitter.com, and search the website for likeminded users. Scanning through user profiles, you decide to follow users representing several motorcycle dealers in your city, a couple motorcycle riding clubs, a national motorcycle news magazine, and a number of individuals who identify themselves as “motorcycle enthusiasts”. You begin receiving these users’ updates (or tweets), and begin to learn about the local motorcycle community. After a few days of reading tweets, you learn that there is going to be a bike show and that several of the users will be attending. You are unable to attend the bike show yourself, but you get to experience it through the tweets of your fellow users, who describe the event and post pictures of different models on display. You are able to engage some of these users, asking them questions about the event as it is taking place. You also discover that there is a hashtag that Twitter users are using to identify tweets about the event, and by searching all tweets that include that hashtag you discover several more users to follow. In this way information is exchanged, and you develop relationships with other members of the community that you might otherwise not have had. Now consider this same scenario in a different context: you have recently opened a motorcycle shop. Using the tool in the same way, Twitter becomes a valuable social tool for promoting yourself or your company, in addition to acquiring and sharing useful information.

Knowledge management (KM) resides in an interesting interdisciplinary space, somewhere between sociology, philosophy and economics. In his 1962 article, “The Economic Implications of Learning by Doing”, Nobel-prize winning economist Kenneth Arrow clearly states the necessity for organizational practices that manage the learning process; the economics of KM are concerned with breaking down and quantifying this process. In The Tacit Dimension (1966), Michael Polanyi describes the concept of “tacit knowing”; knowledge that deals with the implicit nature of human experience, skill and action is considered tacit, while knowledge codified and transmittable through language is explicit. Polanyi’s epistemological model serves as the fundamental principle of KM, distinguishing knowledge from the concepts of information and data. The sociological underpinnings of KM provide us with a sound basis for understanding “knowledge” as a concept and its various manifestations, while also giving us a framework for making sense of how knowledge circulates within communities and through individuals. The seminal work of Emile Durkheim lends KM a primary concern with “social facts”—the observable behaviours at the root of human interaction. Rather than relying on theory alone, KM is preoccupied with studying how people actually share, learn, and use knowledge. KM arose from these disciplinary cornerstones in the early 1990s, when an increased emphasis on the creation, dissemination and utilization of organizational knowledge in professional and scholarly literature identified a growing need for a systematic approach to managing information and expertise in firms. Laurence Prusak identifies three social and economic trends that make KM essential in any organization today: globalization, ubiquitous computing and “the knowledge-centric view of the firm” (1002). Prusak’s description of globalization in particular emphasizes the necessity of staying current; information technology has resulted in a “speeding up” of all elements of global trade, as well as an increase in the “reach” of organizations. Twitter is a technology that can facilitate this necessity.

There are any number of examples that demonstrate how Twitter fulfills the requirements of KM that I have described. In terms of leveraging group and individual interactions based on “social facts”, we can consider the role Twitter has played in the recent revolution in Egypt. Protesters on the ground in Cairo were publishing updates about the conflicts they faced, giving the crisis international exposure it might otherwise not have had. Following the government’s failed attempt to block Twitter—evidence in itself as to the effectiveness of Twitter for spreading a message—there was overwhelming support from around the world for the protestors against President Mubarak’s regime. This global support, along with the grassroots reporting of Egyptian demonstrators, certainly contributed to Mubarak’s ultimate resignation from office. This example shows how the knowledge of individuals in a particular context spread to other communities, and how this in turn inspired a global movement—based on the ever-expanding network of interactions through this particular social tool. The “social fact” inherent in Twitter is how human interaction manifests around these short bursts of highly contextual information, and how communities take shape by engaging in the same and other closely related contextual spaces.

An example of how Twitter facilitates the transfer of tacit knowledge might be the way events are recorded and experienced through it. Take, for instance, the recent SXSW Conference and Festival in Austin, TX, a yearly event that is recognized worldwide as a showcase of music, films and emerging technologies; a Twitter search for “#SXSW” reveals a host of users recording their experience through a variety of media—text describing talks, shows and screenings combined with links to photos, videos, and websites that together form an image of the event. These individuals’ experiences might not otherwise be expressible without a tool like Twitter that facilitates the blending of online multimedia. Moreover, the combined force of a community of users sharing these experiences at the same time can provide a comprehensive panorama of what they are hearing, seeing, and learning. In this way, Twitter allows tacit knowledge to be codified for mass consumption.

Measuring the impact of Twitter and how knowledge circulates through the network is not a simple task. Perhaps the most effective way to do so today is the application of web and text analytics to social media. Several companies have recently achieved success in this area, working from textual data (e.g. lexical analysis, natural language processing), user data (e.g. demographics, geographic data), and traffic data (e.g. clickstream, page views, number of followers/subscribers, replies and retweets) mined from social media websites. Canada-based Sysomos has used MAP (Media Analysis Platform) to provide in-depth analysis of how people, products and brands are effectively marketed through Twitter and other social media tools. One reviewer describes MAP as follows:

MAP can, for example, tell you that the largest number of Twitter users who wrote about the Palm Pre come from California and Great Britain, as well as who the most authoritative Twitter users who tend to tweet about the Pre are (MAP assigns a score from 1 to 10 to every Twitter user, based on the number of followers, replies, retweets, etc.). Of course, you can then also compare these results with results from a query for ‘iPhone,’ for example. (Lardinois, 2009)
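To give a sense of what such a score might involve, here is a toy sketch (in Python) of an ‘authority’ metric computed from follower, reply and retweet counts. The weights and logarithmic scaling are invented for illustration; this is not Sysomos’ actual MAP formula.

```python
import math

# Toy authority score on a 1-10 scale, loosely inspired by the idea of
# ranking users by followers, replies and retweets. The weights and the
# logarithmic scaling are invented for illustration; this is not Sysomos'
# actual MAP formula.
def authority_score(followers, replies, retweets):
    raw = (0.5 * math.log1p(followers)
           + 0.25 * math.log1p(replies)
           + 0.25 * math.log1p(retweets))
    # Map the raw value onto a 1-10 scale, capping at the extremes.
    return min(10, max(1, round(raw)))

print(authority_score(followers=12000, replies=340, retweets=890))  # 8
print(authority_score(followers=50, replies=2, retweets=0))         # 2
```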

MAP, in fact, was used to analyze Twitter users during the crisis in Egypt; some of the visualizations of this data are available online[1]. A recent study comparing social media monitoring software identified five key categories that need to be considered to appropriately measure the effectiveness of a social media tool (FreshMinds Research, 2010):

  1. Coverage – Types of media available based on geographic coverage.
  2. Sentiment analysis – The attitude of the speaker/writer with respect to the topic, based on tone.
  3. Location of conversations
  4. Volume of conversations
  5. Data-latency – The speed at which conversations are collected by a tool, based on the frequency of its web crawlers and the length of time it takes the monitoring tool to process the data.

As the researchers who undertook the study indicate, the possibilities for such data, from both a qualitative and a quantitative perspective, are “huge”. Social media monitoring allows us to examine any number of factors in the learning and communicative process as it is manifested through social media technologies, “from category choices to the lifestyles of different segments”, at an individual or an aggregate level (ibid.). The research group also identifies areas in which social media monitoring needs to improve—particularly within the realm of sentiment analysis, where the monitoring tools are not yet sophisticated enough to provide an accurate measure. While Twitter in itself can be thought of as an organizational practice for knowledge-sharing, the application of monitoring tools can be thought of as Arrow’s organizational practices for managing knowledge. Based on the analysis that such monitoring tools—like Sysomos’ MAP—can provide, organizations and individuals can make more effective use of Twitter.
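The difficulty with sentiment analysis is easier to appreciate with a deliberately naive sketch: the lexicon-based scorer below (made-up word lists, no handling of negation, sarcasm or context) illustrates the kind of crude approach whose limits the researchers point to.

```python
# Deliberately naive, lexicon-based sentiment scoring of tweets. The word
# lists are made up for illustration; real monitoring tools use far richer
# lexicons and models, and even then struggle with negation and sarcasm.
POSITIVE = {"great", "love", "amazing", "helpful"}
NEGATIVE = {"terrible", "hate", "broken", "disappointing"}

def sentiment(text):
    words = [w.strip(".,!?#@").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Love the new Palm Pre, the keyboard is great"))  # positive
print(sentiment("Not great, the battery life is disappointing"))  # neutral -- "not great" fools the scorer
```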

It is clear that Twitter can be of huge benefit to the effective creation and dissemination of knowledge, if used correctly. Organizations that are prepared to invest the time and energy in a sound social media plan to improve KM would be remiss not to include a presence on Twitter. On the other hand, this technology poses many risks for organizations, particularly in the realm of e-Discovery. The fact that content published to Twitter resides on the website’s servers, and not in the hands of the organization, must be an important factor in any organization’s KM assessment. Twitter is perhaps more useful for not-for-profit (NFP) organizations that have a mandate for advocacy and public promotion (take, for instance, SXSW). It is also useful for individuals with either a professional interest in promotional or informational knowledge-sharing (such as consultants, agents, performers, journalists and salespeople) or as members of an existing community (like our motorcycle enthusiast). The professional and the social are not easily distinguished on Twitter, which can be both a benefit and a curse for users, as we have seen. Finally, while the information shared on Twitter might seem “inconsequential” to some, to others it can be very valuable. It is this value that KM needs to harness in order to make effective use of Twitter.


[1] Visualizations for the Twitter data related to the crisis in Egypt can be found at http://mashable.com/2011/02/01/egypt-twitter-infographic/. For a compelling overview of the sort of data Sysomos has analyzed with respect to Twitter, an indispensable resource is their report “Inside Twitter: An in-depth look inside the Twitter World”, 2009: http://www.sysomos.com/docs/Inside-Twitter-BySysomos.pdf

Bibliography

Arrow, K. (1962, June). The Economic Implications of Learning by Doing. Review of Economic Studies 29(3), 153-73.

Durkheim, E. (1982). The Rules of the Sociological Method, Ed. S. Lukes. Trans. W.D. Halls. New York: Free Press.

FreshMinds Research. (2010, May 14). Turning conversations into insights: A comparison of Social Media Monitoring Tools. [A white paper from FreshMinds Research, http://www.freshminds.co.uk.] Retrieved on March 22, 2011 from http://shared.freshminds.co.uk/smm10/whitepaper.pdf

Lardinois, F. (2009, June 4). Pro Tools for Social Media Monitoring and Analysis: Sysomos Launches MAP and Heartbeat. ReadWriteWeb.com. Retrieved on March 22, 2011 from http://www.readwriteweb.com/archives/pro_tools_for_social_media_sysomos_launches_map_and_heatbeat.php

Polanyi, M. (1966). The Tacit Dimension. London: Routledge & Kegan Paul.

Prusak, L. (2001). Where did knowledge management come from? IBM Systems Journal, 40(4), 1002-1007.

Sarno, D. (2009, February 18) Twitter creator Jack Dorsey illuminates the site’s founding document. Part I. Los Angeles Times. Retrieved September 24, 2010 from http://latimesblogs.latimes.com/technology/2009/02/twitter-creator.html

Forms of Knowledge, Ways of Knowing

The principal premise of Cook & Brown’s “Bridging Epistemologies” is that there are two separate yet complementary epistemologies tied up in the concept of knowledge.  The first of these is found in the traditional definition of knowledge, which describes knowledge as something people possess—that is, a property (in more than one sense of the word).  Cook & Brown refer to this as the “epistemology of possession”, and it can be characterized as the “body” of knowledge.  The second, the “epistemology of practice”, homes in on the act of knowing found in individual and group activities—it is the capacity of doing.  Cook & Brown contend that the interplay between these two distinct forms is how we generate new knowledge, in a manner not unlike Nonaka’s spiral structure of knowledge creation (with one key difference, described below); they call this interplay the “generative dance”.

Another way I conceptualized this distinction (using analogy, as Nonaka urges, to resolve contradiction and generate explicit knowledge from tacit knowledge (21)) was to consider these two notions of “knowledge”/“knowing” from a linguistic perspective: if knowledge and knowing were distinct properties of the English sentence, knowing would be the verb and knowledge the object.  This is supported by Cook & Brown’s emphasis on how “knowledge” can be applied in practice as a tool to complete a task, and can result from the act of knowing (388); “knowing” acts upon (and through) “knowledge”, just as the verb acts upon (or through) the object.  The subject—that is, the person or people performing the action—is an essential element both of the formulation of knowledge/knowing and of the sentence.  The subject’s relationship to the verb and the object is very similar to the individual’s (or group’s) relationship to knowing and knowledge.  The verb represents enaction by the subject—as knowing does—and the object represents that which is employed, derived or otherwise affected by this enaction—as knowledge is.  Cook & Brown’s principle of “productive inquiry” and the interaction between knowledge and knowing, then, can be represented by the structure of the sentence.

Cook & Brown’s premise has many important implications for knowledge management.  Perhaps the most important of these is the idea that knowledge is abstract, static and required for action (that is, “knowing”) in whatever form it takes, while knowing is dynamic, concrete and related to forms of knowledge.  Of these characteristics, the most dramatic must be the static nature of knowledge; in what is Cook & Brown’s most significant break with Nonaka, they state that knowledge does not change or transform.  The only way for new knowledge to be created from old knowledge is for it to be applied in practice (i.e. “productive inquiry”).  Nonaka perceives knowledge as something malleable that can transform from tacit to explicit and back again, while Cook & Brown unequivocally state that knowledge of one form remains in that form (382, 387, 393, 394-95).  For Cook & Brown, each form of knowledge (explicit, tacit, individual and group) performs a unique function (382).  The appropriate application of one form of knowledge in practice (the act of knowing) can, however, give rise to knowledge in another form (393).

I found Blair’s article “Knowledge Management: Hype, Hope or Help?” useful as a supplement to Cook & Brown.  Blair makes several insightful points about knowledge and knowledge management, such as the application of Wittgenstein’s theory of meaning as use in defining “knowledge”, identifying abilities, skills, experience and expertise as the human aspect of knowledge, and raising the problem of intellectual property in KM practice.  Blair’s most valuable contribution, however, is to emphasize the distinction between the two types of tacit knowledge.  This is a point Cook & Brown (and Nonaka) fail to make in their theory-sweeping models.  It is also a point I have struggled with in my readings of Cook & Brown and Nonaka.  Tacit knowledge can be either potentially expressible or not expressible (Blair, 1025).  An example of tacit knowledge that is “potentially expressible” would be heuristics—the “trial-and-error” lessons learned by experts.  Certainly in my own experience, this has been a form of tacit knowledge that can be gleaned in speaking with experts and formally expressed to educate novices (generating “explicit knowledge” through the use of “tacit knowledge”).  An example of inexpressible tacit knowledge would be the “feel” of the flute at different levels of its construction described in Cook & Brown’s example of the flutemakers’ study (395-96); this is knowledge that can only be acquired with experience, and no amount of discussion with experts, of metaphor and analogy, will yield a sufficient understanding of what it entails.  It is an essential distinction to make, since as knowledge workers we must be able to determine how knowledge is and should be expressed.

 

Cited References

Blair, D. (2002). Knowledge management: Hype, hope, or help? Journal of the American Society for Information Science and Technology 53(12), 1019-1028.

Cook, S. D. N., and Brown, J. S. (1999). Bridging Epistemologies: The Generative Dance between Organizational Knowledge and Organizational Knowing, Organization Science 10(4), 381-400.

Nonaka, I. (1994). A Dynamic Theory of Organizational Knowledge Creation. Organization Science 5(1), 5-37.

Shapiro’s Shakespeare and the “Generative Dance” of his Research

Perhaps the most interesting thing about James Shapiro’s A Year in the Life of Shakespeare is the kind of scholarship that it represents.  Drawing upon dozens—likely hundreds—of sources, Shapiro presents a credible depiction of Shakespeare’s life in 1599.  Rather than limiting himself to sources that are exclusively about Shakespeare or his plays, Shapiro gathers a mountain of data about Elizabethan England.  He consults collections of public records that shed light either on Shakespeare’s own life or the life of his contemporaries, not just to identify the historical inspiration and significance of his plays, but to give us an idea of what living in London as a playwright in 1599 would have been all about.  This, to me, is a fascinating use of documentary evidence that few have successfully undertaken.

Before I go on, I should note that I’m currently working on a directed study in which I am being thoroughly steeped in the objects and principles of knowledge management.  It is in light of this particular theoretical context that I read Shapiro and think, “he’s really on to something here.”   In their seminal article “Bridging Epistemologies: The Generative Dance Between Organizational Knowledge and Organizational Knowing”, Cook & Brown present a framework in which “knowledge”—the body of skills, abilities, expertise, information, understanding, comprehension and wisdom that we possess—and “knowing”—the act of applying knowledge in practice—interact to generate new knowledge.  Drawing upon Michael Polanyi’s distinction between tacit and explicit knowledge, Cook & Brown present a set of distinct forms of knowledge—tacit, explicit, individual and group.  They then advance the notion of “productive inquiry”, in which these different forms of knowledge can be employed as tools in an activity—such as riding a bicycle, or writing a book about an Elizabethan dramatist—to generate new knowledge, in forms that perhaps were not possessed before.  It is the interaction between knowledge and knowing that produces new knowledge, that represent a “generative dance”.

Let’s return for a moment to Polanyi’s tacit and explicit knowledge.  The sources Shapiro is working with are, by their nature, explicit, since he is working with documents.  The book itself is explicit, since it too is a document, and the knowledge it contains is fully and formally expressed.  The activity of taking documentary evidence from multiple sources, interpreting each piece of evidence in the context of the other sources, and finally synthesizing all of it into a book represents more epistemic work than is contained in either the book or the sources by themselves.  The activity itself is what Cook & Brown describe as “knowing”, or the “epistemology of practice”.  The notions of recognizing context and of interpretation, however, suggest that there is even more going on here than meets the eye.  In this activity, Shapiro is merging these disparate bits of explicit knowledge to develop a hologram of Shakespeare’s 1599.  This hologram is tacit—it is an image he holds in his mind that grows more and more sophisticated the more relational historical evidence he finds.  Not all of the patterns and connections he uncovers are even expressible until he begins the synthesis, the act of writing his book.  Throughout this process, then, new knowledge is constantly being created—both tacit and explicit.

Let’s also consider for a moment Cook & Brown’s “individual” and “group” knowledge.  Shapiro’s mental hologram can safely be classified as individual knowledge.  And each piece of evidence from a single source is also individual knowledge (though, certainly, some of Shapiro’s sources might represent popular stories or widely known facts, and thus group knowledge).  The nature of Shapiro’s work, however—the collective merging of disparate sources—problematizes the individual/group distinction.  What arises from his scholarship is neither group knowledge (i.e. knowledge shared among a group of people) nor individual knowledge (i.e. knowledge possessed by an individual), but some sort of hybrid that is not so easily understood.

From a digital humanist perspective, we can (and do) think of Shapiro’s scholarship as a relational database.  All of the data and the documentary evidence gets plugged into the database, and connections no one even realized existed are then discovered.  We might have many people adding data to the database, sharing bits of personal knowledge.  And everyone with access to the database can potentially discover new connections and patterns, and in doing so create new knowledge.  Would such a collective be considered group knowledge?  Would individual discoveries be individual knowledge?  Would the perception of connections be tacit or explicit?  It is not altogether clear, because there are interactions occurring at a meta-level: interactions between data, interactions between sources, interactions between users/readers and the sources and the patterns of interacting sources.  What is clear is that this interactive “dance” is constantly generating additional context, new forms of knowledge, new ways of knowing.
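As a minimal sketch of the relational-database analogy—with invented table names and toy data, not the schema of any actual digital humanities project—the following stores documentary sources and the entities they mention, then queries for entities that co-occur within the same sources, the kind of unnoticed connection described above.

```python
import sqlite3

# Toy relational model of documentary evidence: sources mention entities, and
# a self-join surfaces entities that appear together within the same source.
# Table names and data are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sources (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE mentions (source_id INTEGER, entity TEXT);
""")
conn.executemany("INSERT INTO sources VALUES (?, ?)", [
    (1, "Parish register, 1599"),
    (2, "Stationers' Register entry"),
    (3, "Letter from a London playgoer"),
])
conn.executemany("INSERT INTO mentions VALUES (?, ?)", [
    (1, "Globe Theatre"), (1, "Shakespeare"),
    (2, "Henry V"), (2, "Shakespeare"),
    (3, "Globe Theatre"), (3, "Henry V"),
])

# Pairs of entities mentioned in the same source, counted across all sources.
query = """
    SELECT a.entity, b.entity, COUNT(*) AS shared_sources
    FROM mentions a
    JOIN mentions b ON a.source_id = b.source_id AND a.entity < b.entity
    GROUP BY a.entity, b.entity
    ORDER BY shared_sources DESC;
"""
for first, second, shared in conn.execute(query):
    print(f"{first} <-> {second}: co-mentioned in {shared} source(s)")
```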

 

Cook, S. D. N., and Brown, J. S. (1999). Bridging Epistemologies: The Generative Dance between Organizational Knowledge and Organizational Knowing, Organization Science 10(4), 381-400.

Shapiro, J. (2006). A Year in the Life of William Shakespeare: 1599. New York: Harper Perennial. 394p.

Review Paper 1: Wrapping our Heads Around KM

In this week’s readings, Prusak and Nunamaker Jr. et al. successfully provide a solid and informed definition of ‘knowledge management’ (KM), and of why it is important.  Prusak establishes from the get-go that KM is not just about managing information, but about providing and maintaining access to “knowledge-intensive skills” (1003).  He also identifies the pitfall of reducing KM to simply “moving data and documents around”, and the critical value of supporting less-digitized / digitizable tacit knowledge (1003).  Prusak chooses to define KM based on its disciplinary origins, noting economics, sociology, philosophy and psychology as its “intellectual antecedents”, rather than defining it from a single perspective or its current application alone (1003-1005).   Nunamaker Jr. et al. take a different approach, defining KM first in the context of IT—that is, KM as a system or technology—and then presenting a hierarchical framework from which to understand its role.  In this sense, data, information, knowledge and wisdom all exist on a scale of increasing application of context (2-5).  Except for this first theoretical framework, Nunamaker Jr. et al. risk falling into the trap Prusak warns against; they define KM as the effort to organize information so that it is “meaningful” (1).  But what is “meaningful”?  Only context can determine meaning—fortunately, Nunamaker Jr. et al. at least account for this unknown quantity in their framework (3-4).  They also propose a unit to measure organizational knowledge: intellectual bandwidth.  This measurement combines their KM framework and a similar framework for collaborative information systems (CIS), and is defined as “a representation of all the relevant data, information, knowledge and wisdom available from a given set of stakeholders to address a particular issue.” (9)  It is clear from their efforts to quantify KM, and from the manner in which they frame KM as a system, that Nunamaker Jr. et al. are writing for a particular audience of technicians and IT specialists, while Prusak is writing for a more general audience of practitioners.

One thing I felt was lacking from both articles was a clear statement and challenge of the assumptions behind systematizing knowledge.  Nunamaker Jr. et al.’s argument for “intellectual bandwidth” is compelling, but I cannot help but be skeptical of any attempt to measure concepts as fuzzy as “wisdom” and “collective capability” (8-9).  Even Prusak clearly states that, as in economics, an essential knowledge management question is “what is the unit of analysis and how do we measure it?” (1004).  The underlying assumption is that knowledge can, in fact, be measured.  I am dubious about this claim (incidentally, this is also why I am dubious of similar claims often proposed in economic theory).  Certainly, there are other, qualitative forms of analysis that do not require a formal unit of measurement.  Assuming (a) that knowledge is quantifiable, and (b) that such a quantity is required in order to properly examine it, seems to me to lead down a dangerous and not altogether useful path.  The danger is that, in focusing on how to measure knowledge in a manner that lends itself to quantitative analysis, one becomes absorbed in the activity of designing metrics and forgets that the purpose of KM is primarily to capture, organize and communicate the knowledge and knowledge skills within an organizational culture.  Perhaps this danger should be considered alongside, and as an extension of, Prusak’s pitfall of understanding KM merely as “moving data and documents around”.

Both of these articles, as well as the foundational article by Nonaka also under discussion this week, are valuable insofar as they lay the groundwork for knowledge management as a theoretical perspective.  Nunamaker Jr. et al. present much food for thought on how knowledge is formally conceptualized with their proposed frameworks. Meanwhile Prusak provides a sound explanation of the origins of KM and forecasts the future of the field by suggesting one of two possible outcomes; either it will become so embedded in organizational practice as to be invisible, like the quality movement, or it will be hijacked by opportunists (the unscrupulous, profit-seeking consultants Prusak disdains at the beginning of his article, 1002), like the re-engineering movement (1006).  Both papers were published in 2001, and a decade later neither of these predictions appears to have been fulfilled.  KM has been adopted by organizations much as the quality movement has been, but I suspect that knowledge workers are still trying to wrap their heads around how it is to be implemented and what it actually means.

 

 

Cited References

 

Nunamaker Jr., J. F., Romano Jr., N. C. and Briggs, R. O. (2001). A Framework for Collaboration and Knowledge Management, Proceedings of the 34th Hawaii International Conference on System Sciences – 2001. 1-12.

 

Prusak, L. (2001). Where did knowledge management come from? IBM Systems Journal 40(4), 1002-1007.