There have been numerous discussions about how to evaluate social software implementations, and the shortcomings of ROI and reductionist models for illustrating ‘success’ in terms of bottom-line profitability (e.g. Why Bother with Social Software, Musing About the Value of Social Software, Calculating the ROI of Blogging).

Because traditional financial accounting measures like ROI give misleading signals about continuous improvement and innovation, more integrated approaches to performance measurement are needed.  An obvious candidate here is Kaplan & Norton’s Balanced Scorecard (BSC), which assesses performance from four perspectives: (i) staff development/learning, (ii) internal processes, (iii) customer service and satisfaction, and (iv) financial effectiveness, efficiency and cash flow.

The different perspectives of the BSC can be linked by outlining a ‘story’ of the social software implementation.  That story also helps test the thinking/assumptions behind a project’s goals and what exactly should be measured or evaluated.  The underlying logic of the story would run along the following lines (see the sketch after the list):

  • If we increase the capability of our staff to connect with information, expertise and colleagues and/or clients
  • Then they will be able to improve and innovate our products, services and processes
  • Then the customer will be delighted and customer loyalty will improve, and
  • We will keep/get more business, which has a positive impact on our finances.
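
As a rough illustration, that story can be sketched as an ordered map from each BSC perspective to its hypothesised outcome and some candidate measures.  The perspective names follow Kaplan & Norton; the outcomes and example measures below are hypothetical illustrations, not prescribed metrics.

    # A minimal sketch of the BSC 'story' for a social software rollout.
    # Perspective names follow Kaplan & Norton; the hypotheses and example
    # measures are illustrative assumptions only.
    bsc_story = {
        "learning_and_growth": {
            "hypothesis": "Staff connect more easily with information, expertise and colleagues/clients",
            "example_measures": ["active contributors", "cross-team connections made"],
        },
        "internal_processes": {
            "hypothesis": "Products, services and processes are improved or innovated",
            "example_measures": ["ideas implemented", "cycle-time reduction"],
        },
        "customer": {
            "hypothesis": "Customers are delighted and loyalty improves",
            "example_measures": ["satisfaction score", "repeat business rate"],
        },
        "financial": {
            "hypothesis": "We keep/win more business, improving the bottom line",
            "example_measures": ["revenue retained", "discounted cash flow"],
        },
    }

    # Walk the chain in order - each perspective's outcome feeds the next.
    for perspective, detail in bsc_story.items():
        print(f"{perspective}: {detail['hypothesis']}")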

The beauty of the model is that it provides a more visual, flexible approach to project evaluation, and moves away from a restrictive quantitative approach.  It allows the focus to shift from time to time depending on the business strategy, and for the nature of the measures to change over time, depending on people’s information/social networking needs and the (adoption) phase of the implementation.

That shifting relationship is the subject of Mark Gould’s post “Measuring Maturity”.  In his post, Mark cites the following scenario by Jonathan Wolff highlighting the relationship between experience, measures and proxies:

Suppose you have applied for a job, any job. You are at one of those macho interviews where the panel members compete to see who can make you sweat the most. And this is the winning question: how do you plan to monitor and evaluate your own performance in the role? …

Suppose your job is in business of some sort and, ultimately, you are employed to make the company money… In the end, the only thing that matters, then, is the profit you bring in. But it may take some time to build up a client base and to gather the dosh. It would be foolish to say that in the short term you should be judged on how much profit you make for the company. Rather you should monitor your activity: how many meetings you have taken, how many letters and emails you have sent, how many briefings you have been to. But, of course, that is only for openers. If the meetings don’t result in business, then you are wasting your time. So in the second phase of monitoring, you stop counting meetings and start counting things like contracts signed, goods shipped, turnover generated, or any other objective sign of real interaction.

But, once more, this is only an interim goal. You are there not to generate turnover, but profit. And once you have been around long enough that is the only thing that matters. In the third and final phase you count how much you make for the company, and stop worrying about meetings, letters or contracts signed. Who cares about how many of these there are if the bottom line stays juicy enough?
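
Wolff’s three phases could be encoded as a simple lookup from maturity phase to the proxies worth counting at that point.  This is a minimal sketch; the phase names and metric lists are illustrative assumptions, not a fixed taxonomy.

    # Sketch of Wolff's three monitoring phases: the proxy you count
    # changes as the role (or implementation) matures.
    PHASE_METRICS = {
        "early":   ["meetings held", "emails sent", "briefings attended"],        # raw activity
        "interim": ["contracts signed", "goods shipped", "turnover generated"],   # real interaction
        "mature":  ["profit contributed"],                                        # bottom line
    }

    def metrics_for(phase: str) -> list[str]:
        """Return the proxies worth tracking at a given maturity phase."""
        return PHASE_METRICS[phase]

    print(metrics_for("interim"))  # ['contracts signed', 'goods shipped', 'turnover generated']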

There are several messages here about taking account of the right things, and how those things change over time as people, technology and processes mature.  This resonates with Bessant’s Continuous Improvement (CI) Maturity Model (2001), which was based on extensive research exploring how high involvement in continuous improvement can be built and sustained as an organisational capability.  The model facilitates assessment of progress in the evolution of the behavioural changes necessary to establish innovation routines in a business.  It emphasises that effective management of the process depends upon seeing CI not as a short-term activity “but as the evolution and aggregation of a set of key behavioural routines within the firm”.  As CI practices in firms mature and become more systematic, strategic and autonomous, there are flow-on effects for performance which drive improvements measurable in terms of bottom-line impact, major innovation and incremental problem solving.  But these improvements accrue incrementally, with co-ordinated management support and appropriate ongoing assessments of the organisation’s structure, systems and processes.

So what’s the upshot of these models for evaluating and measuring social software implementations?

Adopting a holistic approach to evaluation, based on the multiple BSC perspectives, will highlight a range of behaviours and outcomes which need to be targeted, not just the financial ones.  Those measures will change over time depending on the phase (or maturity) of the implementation, and on improvements to routines and learning within the organisation.  Taking a staged approach also helps in working through the different phases associated with the adoption of new technologies, and in thinking about the types of behaviours and outcomes necessary for progress in the future.

Early measures may include simple activity counts – number of pages created or edited; number of posts, comments or views; number of (different) users contributing content – as well as reductions in email volume and the associated time savings (e.g. fewer distracting blanket emails).  However, those measures only give part of the picture: they do not indicate why people are doing what they are doing, or what effect the behaviour has on organisational structure, culture and profitability.  So, as the implementation matures, it would be useful to assess changes (if any) to organisational routines, levels and structure of communications, and workflows, as well as asking people about the attitudes and behaviours behind their activities.
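
As a hedged example of what those early measures might look like in practice, here is a small sketch that tallies activity from a hypothetical event log.  The field names and the email baseline figures are assumptions; substitute whatever your platform actually exports.

    # Early-stage activity measures from a hypothetical event log
    # (one dict per wiki/blog event).  Field names are assumptions.
    from collections import Counter

    events = [
        {"user": "amy", "action": "page_created"},
        {"user": "ben", "action": "page_edited"},
        {"user": "amy", "action": "comment_posted"},
        {"user": "cho", "action": "post_published"},
    ]

    activity_by_type = Counter(e["action"] for e in events)
    distinct_contributors = len({e["user"] for e in events})

    # Email reduction comes from a separate before/after baseline,
    # e.g. average blanket emails per week; the figures below are hypothetical.
    emails_before, emails_after = 120, 80
    email_reduction = emails_before - emails_after

    print(activity_by_type, distinct_contributors, email_reduction)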

But connecting these qualitative (‘soft’) measures to any improvements on the balance sheet is key.  That’s a reasonably complex question best left for a future post.  For now, let me close with a thought from Mark Clare (2002) (cited in Anthony Rhem’s blog: Realizing ROI in KM Initiatives) about how to estimate the value of intangible benefits and relate it back to cash flow:

The value created from managing knowledge [or other social/information networking programmes] is a function of the costs, benefits and risks of the … initiative. Thus mathematically stated: Initiative Value = F (cost, benefit, risk), which equals Total Discounted Cash Flow (DCF) created over the life of the … investment.
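
One way to make that concrete is to treat each period’s net benefit (benefit minus cost), adjusted for risk, as a cash flow and discount it back to today.  The sketch below does exactly that; the risk treatment, discount rate and figures are hypothetical assumptions, not Clare’s own method.

    # Initiative value as total discounted cash flow, where each period's
    # cash flow is (benefit - cost), risk-adjusted.  All figures are hypothetical.
    def initiative_value(benefits, costs, risk, discount_rate):
        """Sum of risk-adjusted net cash flows, discounted back to today."""
        value = 0.0
        for year, (b, c) in enumerate(zip(benefits, costs), start=1):
            net = (b - c) * (1 - risk)              # risk-adjust the net benefit
            value += net / (1 + discount_rate) ** year
        return value

    # e.g. a three-year horizon, 10% discount rate, 20% risk of non-delivery
    print(round(initiative_value(benefits=[50_000, 80_000, 100_000],
                                 costs=[40_000, 30_000, 30_000],
                                 risk=0.2, discount_rate=0.1), 2))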

Clare’s is just one formula which could be used to enhance the BSC – let me know if you have any others!
