Archive for June, 2014

In the same way that the Web has changed the communication habits of millions of people, social computing is evolving to help us work with and in complex adaptive systems. The insights from Snowden and Jackson about 'foresight' and complexity, and their relationship with social computing, are fascinating not just for futures planning, but also for re-thinking core processes in knowledge-intensive organisations, such as knowledge management, communication and collaboration. For instance:

Developing a sensor network: The ability to quickly access authoritative guidance from colleagues, and to receive regular streams of intelligence about clients, competitors, market changes and so on, is crucial to developing actionable current awareness. But too often, companies rely on a handful of sources to feed them information, constrain sharing to document- and email-centric models, and squeeze people's interactions into pre-existing software models and workflows. That leaves the better part of a company's extensive network of resources untapped, and it leaves a void where the higher-level metadata, or collective intelligence, derived from people's diverse activities and contributions to a social computing platform should be.

If employees, clients and collaborators can contribute fragments of information, such as tags, bookmarks, comments and links, as they come across or create information in the course of their daily work, the benefit is twofold. At the individual level there is little or no extra effort involved; and when that information is aggregated, patterns emerge that help others spot trends and focus on hotspots in real time. That makes for a potent early warning system and truly effective current awareness.
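To make this concrete, here is a minimal sketch in Python, assuming the contributed fragments can be reduced to simple (timestamp, tag) pairs; it flags tags whose activity in the most recent window spikes against their own history. It is an illustration of the aggregation idea, not anyone's production early-warning system.

```python
from collections import Counter
from datetime import timedelta

def trending_tags(fragments, window_hours=24, min_count=3):
    """Rank tags whose activity in the latest window outpaces their history.

    `fragments` is an iterable of (timestamp, tag) pairs harvested from
    bookmarks, comments, links and other contributions (assumed format).
    """
    fragments = list(fragments)
    if not fragments:
        return []

    latest = max(ts for ts, _ in fragments)
    cutoff = latest - timedelta(hours=window_hours)
    span = latest - min(ts for ts, _ in fragments)
    prior_windows = max(span / timedelta(hours=window_hours) - 1, 1)

    recent = Counter(tag for ts, tag in fragments if ts >= cutoff)
    history = Counter(tag for ts, tag in fragments if ts < cutoff)

    ranked = {}
    for tag, count in recent.items():
        if count < min_count:
            continue
        expected = history[tag] / prior_windows  # average activity per earlier window
        ranked[tag] = count / (expected + 1)     # spike relative to that baseline

    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)
```

Run periodically over the stream of contributions, the top of that list gives a rough, real-time view of where attention is converging.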

Sharing your way to competitive advantage: To operate in complex networked environments, companies are having to rethink old models based on the control of proprietary information. Snowden articulated it in these terms: “The paradigm has shifted. For whole of last century ownership gave economic power. Now, it is the speed at which you exploit things that matters not ownership. A strategy of openness makes more things available to you. What matters then is your agility and ability to exploit things.”

Over the last few years we have seen some companies become more open and share their learning and information with clients and other organisations. For instance, Innocentive.com enables companies, academic institutions and NFPs to come together in an open innovation marketplace to post challenges, share ideas and devise breakthrough solutions. In the UK, several major international law firms established the Banking Legal Technology portal in 2006 in response to pressure from investment banking clients wishing to reduce costs and streamline access to advice and information from the different firms. Likewise, Legal OnRamp provides another forum for lawyers to share information and showcase their expertise, and for in-house counsel to access the precedents of major law firms and pool their resources with other general counsel. Going forward, we will see companies using increasing volumes of fragmented data (e.g. tweets, blogs, comments, HTML links and pages) to contribute to social extranets, accessible by clients and competitors alike.

In that way, companies will get to see more and do more for less. By opening up the scanning process, not only will they add to the overall pool from which they can draw, they will also be presented with new narratives and possibilities which would not have been apparent or available in a closed setting. It will then be companies’ ability to interpret and apply the information quickly, innovatively and insightfully that will provide competitive advantage.

Developing new meaning through deliberate ambiguity: The classic 'Young Woman or Old Woman' optical illusion is a good example of ambiguity: there is both an old and a young woman in the picture. Perhaps you see one or both of them. How long did it take you to focus on the different images? Does that mean anything? Is one more persuasive than the other? Snowden proposed 'deliberate ambiguity' as a vehicle for encouraging emergent meaning and contributing to the effectiveness and richness of a work. As we increasingly work with fragmented materials, clipping items from feed readers, adding notes and tags to them, linking the clippings to blog posts and engaging people in further online discussion and idea sharing, we deliberately introduce a higher degree of ambiguity into the system. It is precisely this ambiguity that allows us to interpret and give new meaning to the fragments, which in turn provides new perspectives, ideas and interpretations. This is the source of innovation and difference, not best practice and compliance regimes.

There are also ramifications for traditional information categorisation and classification regimes, the purpose of which was to disambiguate and establish order in the system. Efforts to create order in this way can be counter-productive. If you are looking for something that hasn’t been categorised in the way you expect, then you probably won’t find it (quickly or perhaps at all). You are also less likely to make valuable serendipitous discoveries by stumbling across items that sit outside of traditional categories. As Thomas Gruber (2007) explains in his article “Ontology of Folksonomy: A Mash-up of Apples and Oranges”:

“Tags introduce distributed human intelligence into the system. As others have pointed out, Google’s revolution in search quality began when it incorporated a measure of “popular” acclaim — the hyperlink — as evidence that a page ought to be associated with a query. When the early webmasters were manually creating directories of interesting sites relevant to their interests, they were implicitly “voting with their links.” Today, as the adopters of tagging systems enthusiastically label their bookmarks and photos, they are implicitly voting with their tags. This is, indeed, “radical” in the political sense, and clearly a source of power to exploit.”

In that way, user participation in the form of social tagging offers a far more powerful means of discovering information and meaning.
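To make 'voting with their tags' concrete, here is a small Python sketch; the (user, resource, tag) triples and URLs are invented for illustration. Each person's application of a tag is treated as one implicit vote for a resource, much as a hyperlink was treated as a vote for a page.

```python
from collections import defaultdict

def rank_by_tag_votes(taggings, query_tag):
    """Rank resources for a tag, counting each distinct person who applied
    the tag to a resource as one implicit vote.

    `taggings` is an iterable of (user, resource, tag) triples.
    """
    votes = defaultdict(set)  # resource -> set of users who applied query_tag
    for user, resource, tag in taggings:
        if tag == query_tag:
            votes[resource].add(user)

    return sorted(((res, len(users)) for res, users in votes.items()),
                  key=lambda kv: kv[1], reverse=True)

# Hypothetical example: three tag applications across two bookmarks
taggings = [
    ("ana", "http://example.com/ruling", "banking-law"),
    ("ben", "http://example.com/ruling", "banking-law"),
    ("ana", "http://example.com/memo", "banking-law"),
]
print(rank_by_tag_votes(taggings, "banking-law"))
# [('http://example.com/ruling', 2), ('http://example.com/memo', 1)]
```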

Using technology to provide decision support: Although previous generations dreamed of artificial intelligence, with people feeding computers information and receiving answers, we now understand that the roles should be reversed: we are interested in using computer networks to augment human intelligence and make it easier for us to make decisions of our own. This is the key differentiating factor of social computing: it has human agency in it. Whilst computers can present more data, human agency is needed to determine the meaning of the information fragments. That requires us to deliberately model and look at things from different perspectives, then present the data back for human interpretation and decision-making.

To conclude: “the whole point about technology is to provide decision support for human beings not to make decisions” (Dave Snowden).


Yesterday, I went along to the InformatieProfessional Conference here in Amsterdam. As with all things associated with the web these days, the theme of the conference was Integrity 2.0. Key issues revolved around data privacy, information reliability and management of information overload. Lee gave a great presentation on the use of “Social network as information filters”. Here’s what he had to say:

The affordances of the social web allow us to build new relationships with each other and with information. New forms of media consumption and architectures of participation hold important implications for information management:

Sharing as a by-product of action: During the 1990s we saw a rise in interest in KM, which generated a host of ideas that were never implemented. The problem lay in the precepts on which knowledge 'management' was built, i.e. that people could and should share because sharing is a good idea. But people are fundamentally lazy and selfish. They don't share unless they have to. And even if they wanted to, the tools available to them have been so difficult to use and so unfit for purpose that they themselves created barriers to participation. Now, we have the ability to support effective sharing by placing flexible, user-friendly social tools like wikis, blogs and status updates 'in the flow' of people's daily work. Contributing to the collective intelligence of the organisation takes no extra effort and flows from the very activities people need to carry out to get their jobs done.

Socialisation of information: The second phenomenon is that social computing makes invisible data visible. Information that was previously private or hidden in databases or behind firewalls is being 'socialised'. A key difference that the social web holds for information professionals is the way it enables individuals to manage their feeds and flows of information. It also offers new ways of aggregating information, which provides new levels of meaning and adds significant value. For instance, we have seen how Google feeds page visits back into its rankings, which acts as an extremely effective recommendation system. Even in the absence of social tools or sophisticated algorithms, organisations have a wealth of metadata available to them, e.g. what people are reading, their browsing and searching behaviours, time spent on pages, and so on. To date, however, they have been terrible at surfacing this information, wasting extremely valuable data.
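As a hint of how that behavioural metadata might be surfaced rather than wasted, the sketch below folds page views, dwell time and search click-throughs into a single per-document attention score. The event shapes and weights are assumptions made for illustration (not something from the presentation or any product's schema).

```python
def attention_scores(events, w_view=1.0, w_dwell=0.01, w_click=3.0):
    """Fold raw usage events into a per-document attention score.

    `events` is an iterable of dicts such as:
      {"doc": "policy-123", "type": "view"}
      {"doc": "policy-123", "type": "dwell", "seconds": 140}
      {"doc": "policy-123", "type": "search_click"}
    The weights are arbitrary starting points to be tuned against real usage.
    """
    scores = {}
    for event in events:
        doc = event["doc"]
        if event["type"] == "view":
            delta = w_view
        elif event["type"] == "dwell":
            delta = w_dwell * event.get("seconds", 0)
        elif event["type"] == "search_click":
            delta = w_click
        else:
            continue  # ignore event types we don't understand
        scores[doc] = scores.get(doc, 0.0) + delta
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Even this crude roll-up turns otherwise wasted logs into a 'most attended to' list that can be fed back to readers.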

Rapid feedback is key to evolution: These days we see thousands of new internet businesses launch, and they expose themselves to constant feedback. That means they will fail quickly or succeed because they are robust and responsive to market conditions. Inside companies, the situation is very different. Traditional static intranets don't have the same evolutionary forces at play. Information is simply broadcast and there are no feedback loops through which people can signal their preferences, and improve or change what they are being given. Internal tools are evolving very slowly compared to their web counterparts, because the lessons of the web about the need for interaction, transparency and feedback are not being applied in the enterprise context.

Networked productivity: Companies also need to move beyond the obsession with personal productivity and look to networked productivity.  That requires more and better information sharing, and its aggregation to create ambient intelligence.  We are still exploring and tapping into the great source of value of networks in the enterprise.  Consider for instance the ways in which following people on twitter, reading blogs, discovering new information via Digg or delicious tags makes us more productive collectively.

What does this mean for managing information?

The answer often descends into a binary debate between relying on experts on the one hand and the wisdom of crowds on the other. Each of these views is too simplistic, and they are not mutually exclusive. If you were to look on Google for the best restaurants in New York City, page-rank-driven user recommendations would provide you with a set of de facto facts about the 'best restaurants' based on people's search and browse behaviours. We don't know they are the best, but we do know that enough people have clicked on the page to make it worth considering. WolframAlpha, on the other hand, seeks to establish a fact, but the problem is that it hasn't got a clue how to do so: the 'fact' simply can't be established through semantic data because there are different ways of establishing what is 'true' in this context.

So which do we use: the individual or the distributed model? On the one hand, gurus like Steve Jobs commonly do an outstanding job of deciding what it is that everybody will have and love. On the other, we have the development of the Linux kernel using distributed expertise. Two equally powerful scenarios. However, recently we have also seen examples of experts testifying in trials based on their interpretation of information behaviour, and getting their opinions very wrong. Similarly, in healthcare, we are being advised that what was said to be good for us yesterday is not good today 'in light of what we now know'. Knowledge and truth are open to interpretation: they mean different things to different people and change over time.

That has ramifications for the way we manage information – using networks and human signals to improve information findability:

  • Findability: Making something increasingly easy to find is much better than search. Whilst some companies look to black-box solutions like Autonomy to find the 'right' answer, others are using social tagging to build an accurate picture of what information is and isn't important in their systems. Leaving trails is a far better way to find information.
  • Human signals: Signals are a very powerful way of validating information. Working through our networks, we see what people have read, commented on or voted for the most, and use that contextual information to guide us in our search for our 'facts' or meaning, as sketched after this list.
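As a simplified illustration of those two ideas working together (a sketch for this post, not something from the presentation), the snippet below re-ranks results that match a query's tags by the amount of network attention they have attracted. The field names and weights are assumptions made for the example.

```python
import math

def rerank(results, query_tags):
    """Re-rank search results by tag overlap, weighted by network attention.

    Each result is a dict such as:
      {"title": "...", "tags": {"banking", "regulation"},
       "reads": 120, "comments": 4, "votes": 9}
    """
    def score(item):
        overlap = len(query_tags & item["tags"])           # findability: matching trails
        attention = item["reads"] + 5 * item["comments"] + 10 * item["votes"]
        return overlap * math.log1p(attention)             # human signals temper the ranking

    return sorted(results, key=score, reverse=True)
```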

If we continue to manage information as we did in the past, we will inevitably create information overload and growing frustration for our consumers. In the past, the job of information managers was to codify and store information, and most of the metaphors surrounding this work related to putting information into boxes. This approach is not robust or scalable and leads to filter failure. We need to move away from the obsession with storage and towards weaving a fabric of information through which people operate. Notably, the connective tissue (e.g. signals, links and tags) is as important as the information it points to. All of this is based on people who, by their actions, indicate what they think is important and useful.

It is this human-generated web of information that is the only effective way of dealing with the information deluge. Every day, we have too much information pushed at us via email. We sit like Pavlov's dog, waiting for the tinkle that alerts us to the arrival of new mail, only to go dutifully to our inbox and salivate over what usually turns out to be spam. This is a disturbing productivity drain. Too much of the wrong kind of information commands people's attention. In addition, most enterprise communication and collaboration tools cannot distinguish between the variable velocity and lifespan of information: which information is current only in the moment, and which has more durable and lasting significance?

To cope with these problems, we need better filters and better radars. Your 'filters' are your network, including Twitter, Delicious, Digg, StumbleUpon and so on, signalling links or sites you should read because people you trust think they are important. But using your network as a filter in isolation can lead to groupthink, as you tend to be attracted to people with similar interests, views or roles. In-built bias is not a bad thing as long as you have other mechanisms for finding new information. This is where your 'radar' comes in. It comprises alerts, searches and smart feeds, which are always on the lookout for new material. The combination of the two is needed to capitalise on ambient awareness.
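A toy sketch of that combination follows; the data shapes are invented for illustration rather than taken from the talk. It merges links surfaced by trusted people (the filters) with hits from standing searches and alerts (the radar), and puts items that arrive through both channels at the top.

```python
def combine_filters_and_radar(network_items, radar_items):
    """Merge links recommended by trusted contacts (filters) with hits from
    standing searches and alerts (radar), labelling the source of each.

    Both inputs are iterables of (url, note) pairs.
    """
    merged = {}
    for url, note in network_items:
        merged[url] = {"note": note, "sources": {"filter"}}
    for url, note in radar_items:
        entry = merged.setdefault(url, {"note": note, "sources": set()})
        entry["sources"].add("radar")

    # Items seen through both channels are the strongest candidates for attention.
    return sorted(merged.items(),
                  key=lambda kv: len(kv[1]["sources"]),
                  reverse=True)
```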

In fact, one of the main purposes of knowledge management is to help people find good information on which to base better decisions. This is far more involved than people processing email, memos and other document-centric communications. People are incredibly adept at receiving and processing ambient information. In the office we overhear other people's conversations, we see what people are working on, we receive snippets of news from our feeds or the paper, and so on. This information is constantly feeding our consciousness, and the human brain has evolved to process these huge volumes of fragmented, ambiguous information. But if people constantly have their noses in their inboxes, or they are forced into document-centric models of information sharing, they are cut off from valuable information sources and flows.

Online social networking acts as an excellent operational information filter. We are used to connecting with people and exchanging information in shared spaces, and this behaviour is reflected online in social and business networking sites like Facebook and LinkedIn. Instead of going to Google to search for the best restaurants in NYC, people now go to their network and get better, more relevant results.

These activities socialise the information, along with the language and meaning. An experiment run by the Sony computer lab used robots to describe images projected onto a wall. The robots had to rapidly learn how to communicate with each other to come up with a description. At the beginning of the experiment the number of words used for a concept was quite large, but it declined over time as the robots negotiated meaning and converged on a shared term for the designated concept. The finding: polysemy declines rapidly for new concepts as dominant terms emerge.
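That convergence dynamic is essentially the 'naming game' studied in this line of research. The toy simulation below is a minimal sketch of the simplest version for a single concept, not the Sony lab's code, but it shows competing labels collapsing onto a dominant term.

```python
import random

def naming_game(num_agents=20, rounds=2000, seed=1):
    """Minimal naming game for one concept: agents invent words as needed and
    align on a winner through repeated pairwise exchanges. Returns the number
    of distinct words in circulation after each round.
    """
    random.seed(seed)
    vocab = [set() for _ in range(num_agents)]  # each agent's words for the concept
    distinct_per_round = []

    for _ in range(rounds):
        speaker, hearer = random.sample(range(num_agents), 2)
        if not vocab[speaker]:
            vocab[speaker].add(f"word{random.randrange(10**6)}")  # invent a term
        word = random.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:
            vocab[speaker] = {word}   # success: both collapse to the shared term
            vocab[hearer] = {word}
        else:
            vocab[hearer].add(word)   # failure: the hearer learns the new term
        distinct_per_round.append(len(set().union(*vocab)))

    return distinct_per_round

history = naming_game()
print(max(history), history[-1])  # many competing words at the peak, typically one by the end
```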

Likewise, the process of social tagging is fascinating, especially its effect on interactions and understanding. As we label our information, we find people who share our perceptions or interests, or we even add new meaning through the label itself. This is the power of folksonomies over taxonomies, which for decades have made information impossible to find for most people. Instead of trying to structure everything and remove all ambiguity, we should use a top-down categorisation system for things that are broadly correct (e.g. regions, products, practice areas) and, below that, allow human-generated emergent metadata such as labels to act as a more effective, social way of navigating through information, letting the structure of the language come from people in the field.
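One way to picture that hybrid is a record that carries a small controlled category alongside open-ended user tags; navigation drills down by the official category first and then pivots on whatever emergent labels people have actually used. The categories, tags and titles below are made up for illustration.

```python
from collections import Counter

documents = [
    {"title": "EU outsourcing checklist", "practice_area": "Banking",
     "tags": {"outsourcing", "eu", "checklist"}},
    {"title": "Cloud contracts briefing", "practice_area": "Technology",
     "tags": {"outsourcing", "cloud", "briefing"}},
    {"title": "Capital adequacy note", "practice_area": "Banking",
     "tags": {"basel", "capital"}},
]

def browse(docs, practice_area):
    """Top-down first: filter by the controlled category, then surface the
    emergent tags people used within it as the next level of navigation."""
    subset = [d for d in docs if d["practice_area"] == practice_area]
    facets = Counter(tag for d in subset for tag in d["tags"]).most_common()
    return subset, facets

subset, facets = browse(documents, "Banking")
print([d["title"] for d in subset])
print(facets)  # the emergent labels become navigable facets
```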

For information professionals, this means moving from tending boxes and labels to becoming information networkers. It means being guides rather than gatekeepers. Information professionals need to share 21st-century competencies with people, helping them to use their networks as filters and to establish their radars, giving greater control to the individual. All of this points to a much more interesting future role for information professionals.


In today’s complex and turbulent environment, organisations need ‘foresight’ to be able to respond promptly to various change drivers including technology, sustainability, globalisation and the economy. Essentially ‘foresight’ is a participative approach to creating shared long-term visions to inform short term decision-making processes (see http://www.foresight-network.eu/). During a recent webinar, Dave Snowden (Cognitive Edge) and Mike Jackson (Shaping Tomorrow) outlined how horizon scanning and social computing can help organisations plan for the future, protect themselves against unexpected threats and exploit forthcoming opportunities.

Jackson outlined the following five components of foresight development and managing change:

  • Identify and monitor change: Identify patterns from the stories, fragments of information and behaviours of many participants in a system or network, and decide how those patterns impact the business.
  • Critique implications: Inform the impact assessment with a cross-section of information, not just intelligence about one's own industry. That means monitoring much more change and developing better peripheral vision so as to understand the broader implications for the business.
  • Imagine difference: Establish the risks and alternatives for different scenarios.
  • Envision the preferred route forward: Establish where you are, then determine where you want to go by scanning plausible, possible and probable ideas and changes for the future.
  • Plan and implement: Identify the goals, resources, strategies and stakeholders required to create change, and help them cope with an inherently uncertain future.

The participatory, evolutionary and social nature of 'foresight' development makes for a snug fit with social computing (i.e. the simpler, more networked online applications that connect people and allow them to pool their knowledge and interact better with those in their network). More specifically, social computing enables content and online interactions to shift constantly so as to better reflect the knowledge, ideas, opinions, preferences and even aspirations of all contributors. Not only does this give us a better radar of what is happening across our network, it also provides a higher level of collaborative intelligence: a range of opportunities and outputs that could not be created by any number of individuals or small groups working alone.

(Interestingly, these are also features of complex adaptive systems (emergent, highly connected and simple on the micro level; complex and unpredictable on the macro level) which evolve through rapid feedback loops making them highly adaptive to changing conditions.)

Snowden picked up here, talking about complexity theory, the creation of human sensor networks and the need to manage the evolutionary potential of the present as an alternative to traditional scenario planning. Referring to his recent blog post Think anew, Act anew: Scenario Planning, Snowden cited a wonderful quote from Seneca:

“The greatest loss of time is delay and expectation, which depend upon the future. We let go the present, which we have in our power, and look forward to that which depends upon chance, and so relinquish a certainty for an uncertainty.”

This quote emphasises that what matters now is managing the present by switching from “fail safe design strategies to safe fail experimentation”. That involves the use of small early interventions in the form of exploratory behaviour, allowing the ones with good results to be amplified and the ones that don’t work to be eliminated.

Snowden went on to outline the three fundamental consequences of complexity theory that need to be addressed to manage a complex system; he covers these in great depth, along with a critique of horizon scanning and scenario planning, in the post mentioned above.

  • Need for distributed cognition. The crux of this is decentralisation and mass participation. The idea that the few can decide for the many, whether it be drawing up scenarios, selecting technology, imposing structures or the like, is inherently unstable. Instead, we need to start to use large numbers of people to feed decision making processes with current information and diverse perspectives.
  • Fragments are key. The material used must be finely granulated. A big problem with traditional scenario planning is that it produces 'chunked' reports. The human brain has evolved to handle fragmented data patterns: pictures, comments, captions, labels and so on. One of the reasons social computing is so successful is that it presents information in multiple inter-threaded fragments, which the brain can 'conceptually blend' and link to determine how to move forward. Documents are not tuned to this evolved capacity; fragmented information has an evolutionary advantage.
  • Overcome disintermediation. People making decisions about the future have to have direct contact with raw intelligence. They can’t afford to have middle management or processes mediating, summarising or grouping information. The information must come from trusted sources and permit interaction with those sources (so the information can be validated).

In Part II of this post I will look at some of the implications of this participative evolutionary approach for traditional current awareness and information creation and categorisation processes.
