
Yesterday, I went along to the InformatieProfessional Conference here in Amsterdam. As with all things associated with the web these days, the theme of the conference was Integrity 2.0. Key issues revolved around data privacy, information reliability and the management of information overload. Lee gave a great presentation on the use of social networks as information filters. Here's what he had to say:

The affordances of the social web allow us to build new relationships with each other and with information. New forms of media consumption and architectures of participation hold important implications for information management:

Sharing as a by-product of action: During the 1990s we saw a rise in interest in KM, which generated a host of ideas that were never implemented. The problem lay in the precepts on which knowledge 'management' was built, i.e. that people could and should share because sharing is a good idea. But people are fundamentally lazy and selfish. They don't share unless they have to. And even when they wanted to, the tools available to them were so difficult to use, and so unfit for purpose, that the tools themselves created barriers to participation. Now, we have the ability to support effective sharing by placing flexible, user-friendly social tools like wikis, blogs and status updates 'in the flow' of people's daily work. Contributing to the collective intelligence of the organisation takes no extra effort and flows from the very activities people need to perform to get their jobs done.

Socialisation of information: The second phenomenon is that social computing makes invisible data visible. Information that was previously private, or hidden in databases or behind firewalls, is being 'socialised'. A key difference that the social web holds for information professionals is the way it enables individuals to manage their feeds and flows of information. It also offers new ways of aggregating information, which provides new levels of meaning and adds significant value. For instance, we have seen how Google feeds page visits back into its rankings, which acts as an extremely effective recommendation system. Even in the absence of social tools or sophisticated algorithms, organisations have a wealth of metadata available to them, e.g. what people are reading, their browsing and searching behaviours, time spent on pages, and so on. However, to date they have been terrible at surfacing this information, wasting extremely valuable data.
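As a toy illustration of surfacing this kind of behavioural metadata, here is a minimal sketch that ranks intranet pages by raw visit counts, the simplest possible recommendation signal. The page paths and log entries are invented for the example:

```python
from collections import Counter

# Hypothetical access log: one entry per page view.
access_log = [
    "/intranet/hr-policy", "/intranet/sales-q3", "/intranet/sales-q3",
    "/intranet/onboarding", "/intranet/sales-q3", "/intranet/hr-policy",
]

# Aggregate visits per page: the simplest behavioural signal an
# organisation already has, even without any social tooling.
visit_counts = Counter(access_log)

# Surface the most-visited pages as de-facto recommendations.
for page, visits in visit_counts.most_common(3):
    print(f"{page}: {visits} visits")
```

A real system would of course weight recency and distinguish users, but even this crude aggregation surfaces data most organisations currently waste.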

Rapid feedback is key to evolution: These days we see thousands of new internet businesses launch, and they expose themselves to constant feedback. That means they fail quickly, or succeed because they are robust and responsive to market conditions. Inside companies, the situation is very different. Traditional static intranets don't have the same evolutionary forces at play. Information is simply broadcast, and there are no feedback loops through which people can signal their preferences and improve or change what they are being given. Internal tools are evolving very slowly compared to their web counterparts, because the lessons of the web about the need for interaction, transparency and feedback are not being applied in the enterprise context.


Networked productivity: Companies also need to move beyond the obsession with personal productivity and look to networked productivity. That requires more and better information sharing, and its aggregation to create ambient intelligence. We are still exploring and tapping into the great sources of value in networks in the enterprise. Consider, for instance, the ways in which following people on Twitter, reading blogs, or discovering new information via Digg or Delicious tags makes us more productive collectively.

What does this mean for managing information?

The answer often descends into a binary debate between focusing on experts on the one hand and the wisdom of crowds on the other. But each of these views is too simplistic, and they are not mutually exclusive. If you were to look on Google for the best restaurants in New York City, ranking driven by user behaviour would provide you with a set of de-facto 'facts' about the 'best restaurants' based on people's search and browse behaviours. We don't know they are the best, but we do know that enough people have clicked on the page to make it worth considering. On the other hand, WolframAlpha seeks to establish a fact, but the problem is that it hasn't got a clue how to do so. The 'fact' simply can't be established through semantic data because there are different ways of establishing what is 'true' in this context.

So which do we use: the individual or the distributed model? On the one hand, gurus like Steve Jobs commonly do an outstanding job of deciding what it is that everybody will have and love. On the other, we have the development of the Linux kernel using distributed expertise. Two equally powerful scenarios. However, recently we have also seen examples of experts testifying in trials based on their interpretation of information behaviour, and getting their opinions very wrong. Similarly, in healthcare, we are being advised that what was said to be good for us yesterday is not good today 'in light of what we now know'. Knowledge and truth are open to interpretation; they mean different things to different people and change over time.

That has ramifications for the way we manage information – using networks and human signals to improve information findability:

  • Findability: Making information progressively easier to find is much better than relying on search alone. Whilst some companies look to black-box solutions like Autonomy to find the 'right' answer, others are using social tagging to build an accurate picture of what information is and isn't important in their systems. Leaving trails is a far better way to help people find information.
  • Human signals: Signals are a very powerful way of validating information. Working through our networks, we see what people have read, commented on or voted on the most, and use that contextual information to help guide us in our search for 'facts' or meaning.
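One hedged sketch of the 'human signals' idea: given per-document counts of reads, comments and votes (the document names, counts and weights below are illustrative assumptions, not a real system), rank documents by a weighted signal score:

```python
# Hypothetical per-document signal counts gathered from the network.
signals = {
    "project-plan.doc": {"reads": 120, "comments": 8, "votes": 15},
    "old-template.doc": {"reads": 5, "comments": 0, "votes": 1},
    "style-guide.doc":  {"reads": 60, "comments": 12, "votes": 9},
}

# Weight active signals (comments, votes) above passive reads,
# since they cost the contributor more effort.
WEIGHTS = {"reads": 1, "comments": 5, "votes": 3}

def signal_score(counts):
    """Combine the raw counts into a single relevance score."""
    return sum(WEIGHTS[s] * n for s, n in counts.items())

# Rank documents by how strongly the network has signalled them.
ranked = sorted(signals, key=lambda d: signal_score(signals[d]), reverse=True)
print(ranked)
```

The point is not the particular weights but that the ranking emerges from what people actually did, rather than from where the documents were filed.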

If we continue to manage information as we did in the past, we will inevitably create information overload and increasing frustration for our consumers. In the past, the job of information managers was to codify and store information. Most of the metaphors surrounding this work related to putting information into boxes. This approach is not robust or scalable, and it leads to filter failure. We need to move away from the obsession with storage, and towards weaving a fabric of information through which people operate. Notably, the connective tissue (e.g. signals, links and tags) is as important as the information it points to. All of this is based on people indicating, by their actions, what they think is important and useful.

It is this human-generated web of information that is the only effective way of dealing with the information deluge. Every day, we have too much information pushed at us via email. We sit like Pavlov's dogs waiting for the bell to alert us to the arrival of new mail, only to dutifully go to our inbox (and salivate) over what usually turns out to be spam. This is a disturbing productivity drain. Too much of the wrong kind of information commands people's attention. In addition, most enterprise communication and collaboration tools cannot distinguish between the variable velocity and lifespan of information. Which information is current only in the moment, and which has more durable, lasting significance?

To cope with these problems, we need better filters and better radars. Your 'filters' are your network, including Twitter, Delicious, Digg, StumbleUpon, etc., signalling links or sites you should read because people you trust think they are important. But using your network as a filter, in isolation, can lead to groupthink, as you tend to be attracted to people with similar interests, views or roles. In-built bias is not a bad thing as long as you have other mechanisms for finding new information. This is where your 'radar' comes in. It comprises alerts, searches and smart feeds, which are always on the lookout for new material. The combination of the two is needed to capitalise on ambient awareness.
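A small sketch of combining 'filters' and 'radar' (the URLs, titles and keywords below are invented for illustration): merge the links your trusted network recommends with items caught by a standing keyword alert, so the network's bias is complemented by a mechanism that scans for the new:

```python
# 'Filter': links recommended by people you trust in your network.
network_links = {"https://a.example/post", "https://b.example/wiki"}

def radar(items, keywords):
    """'Radar': a standing search that catches new items matching
    any of the alert keywords in their titles."""
    return {url for url, title in items
            if any(k in title.lower() for k in keywords)}

# New items arriving from feeds, unseen by the network so far.
incoming = [
    ("https://c.example/news", "New findability research"),
    ("https://d.example/blog", "Quarterly sales recap"),
]

# The reading list combines both mechanisms: trusted recommendations
# plus radar hits, deduplicated by the set union.
reading_list = network_links | radar(incoming, ["findability"])
```

Neither source alone suffices: the network filters for trust and relevance, while the radar guards against the groupthink that a network of like-minded people produces.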

In fact, one of the main purposes of knowledge management is to help people find good information on which to base better decisions. This is far more involved than having people process email, memos and other document-centric communications. People are incredibly adept at receiving and processing ambient information. In the office we overhear other people's conversations, we see what people are working on, we receive snippets of news from our feeds or the paper, and so on. This information is constantly feeding our consciousness, and the human brain has evolved to process these huge volumes of fragmented, ambiguous information. But if people constantly have their noses in their inboxes, or they are forced into document-centric models of information sharing, they are cut off from valuable information sources and flows.

Online social networking acts as an excellent operational information filter. We are used to connecting with people and exchanging information in shared spaces, and this behaviour is reflected online in social and business networking sites like Facebook and LinkedIn. Instead of going to Google to search for the best restaurants in NYC, people now go to their network and get better, more relevant results.

These activities socialise the information, along with the language and meaning. An experiment run at the Sony Computer Science Laboratory used robots to describe images projected onto a wall. The robots had to rapidly learn how to communicate with each other to come up with a description. The researchers found that at the beginning of the experiment the number of words used for a concept was quite large, but it declined over time as the robots negotiated meaning and converged on the designated concept. The finding: polysemy declines rapidly for new concepts as dominant terms emerge.

Likewise, the process of social tagging is fascinating, especially its effect on interactions and understanding. As we label our information, we find people who share our perceptions or interests, or we even add new meaning through the label itself. This is the power of folksonomies over taxonomies, which for decades have made information impossible to find for most people. Instead of trying to structure everything and remove all ambiguity, we should use a top-down categorisation system for things that are broadly stable (e.g. regions, products, practice areas) and, below that, allow human-generated emergent metadata like labels to act as a more effective, social way of navigating through information. This allows the structure of the language to come from people in the field.
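As a rough sketch of this hybrid approach (the document fields, regions and tags below are hypothetical), combine a broad top-down category with emergent, user-applied tags to build a navigable index:

```python
from collections import defaultdict

# Hypothetical documents: each has a broad top-down category
# (region) plus freely chosen, user-applied tags.
docs = [
    {"title": "EMEA pricing memo", "region": "EMEA",
     "tags": ["pricing", "draft"]},
    {"title": "APAC launch plan", "region": "APAC",
     "tags": ["launch", "pricing"]},
]

# Build an emergent tag index *within* each top-down category:
# the taxonomy provides the coarse shelf, the folksonomy the labels.
index = defaultdict(list)
for d in docs:
    for tag in d["tags"]:
        index[(d["region"], tag)].append(d["title"])
```

Looking up `index[("EMEA", "pricing")]` then returns just the EMEA pricing documents: the fixed category narrows the field, and the human-generated tags carry the language people actually use.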

For information professionals, this means moving from tending boxes and labels to becoming information networkers. It means being guides rather than gatekeepers. Information professionals need to share 21st-century competencies with people, helping them to use their networks as filters and to establish their radars, giving greater control to the individual. All of this points to a much more interesting future role for information professionals.
