[continued from the Last Post.]
Precision Weights for Accurate Decision Making
The Weighted Ranking Technique
Our species has a built-in urge to form preferences. We compare, match, liken, analogise, and compete constantly. In the end, our decisions come down to preferring one option to another.
However, this is most often an instinctive response triggered by some primal software deep within us. We have a built-in tendency to categorise and rank options under scrutiny.
The Attributes Grid, as described, permits us to come to a quick decision, even when applying ‘weightage’ in our calculations. Until now, however, this ‘weightage’ has been instinctive.
For example, if we had to spontaneously rank, say, the top five cities in the world from the following options:
New York, London, Tokyo, New Delhi, Baghdad
we would all arrive at different arrangements in prioritising this list.
This is because we have not considered a range of criteria. Moreover, these criteria would also need to be prioritised, and given weights. The principal defect in our instinctive method for making choices lies in our tendency to view problems one-dimensionally.
When ranking diverse items, we are inclined to apply different criteria to the different items being ranked: it is not an apples-to-apples comparison, so to speak. We also tend to view all our criteria as being equally important.
Furthermore, we frequently tend not to rank every item individually against every other item in a listing.
Therefore, to be far more accurate when deciding high-priority issues, we must go deeper into the process.
The basic procedure is a simple choice between two items at a time. For example, say, we have to make a selection from the following alternatives:
Alternative 1, Alternative 2, Alternative 3, Alternative 4
Having listed the alternatives as a first step, we now conduct a ‘round robin’ exercise. We ask ourselves: which alternative is better, 1 or 2?
If it is ‘1’, we place a tick against that alternative. We then match 1 against 3; if alternative 3 is better, we place a tick against that one.
In like manner, each alternative is matched against every other alternative, pair by pair, until the process is complete.
In order to be certain that you have ranked every alternative against every other alternative on the list, use the following formula:
[N x (N-1)] divided by 2
(Where N is the number of alternatives or items on a list)
If you fail to arrive at the correct total number of votes, re-check your working.
Thus, in our example, with four alternatives, the total number of votes assigned once every alternative has been ranked against every other is:
[4 x (4-1)] divided by 2 = [4 x 3] divided by 2 = 6
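The pairing count above can be cross-checked in a few lines of Python. The alternative names here are simply the placeholders from the example; the formula and the enumeration of pairings must always agree:

```python
from itertools import combinations
from math import comb

def num_pairings(n):
    """Total head-to-head comparisons for n alternatives: n*(n-1)/2."""
    return n * (n - 1) // 2

alternatives = ["Alternative 1", "Alternative 2",
                "Alternative 3", "Alternative 4"]

# Three routes to the same number: the formula, an explicit listing
# of every pair, and Python's built-in binomial coefficient.
by_formula = num_pairings(len(alternatives))          # 6
by_listing = len(list(combinations(alternatives, 2))) # 6
by_comb = comb(len(alternatives), 2)                  # 6
print(by_formula, by_listing, by_comb)
```

If your tally of ticks does not sum to this number, a pairing was skipped or counted twice.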
We can check our accuracy. Let us say the outcome was as follows:
Alternative 1 ✔ ✔
Alternative 2 ✔
Alternative 3 ✔ ✔ ✔
Alternative 4
We could thus pair rank the outcome, based on votes, as:
Alternative 3 (three votes)
Alternative 1 (two votes)
Alternative 2 (one vote)
Alternative 4 (no votes)
The instinctive outcome may have been very different. You can do this with any number of items. You could sometimes have two or more items end up with the same number of votes. This occurs when the analysis is inconsistent: if your preferences were fully consistent, every item would receive a different number of votes, so a tie signals a circular preference somewhere (for example, preferring 1 over 2, and 2 over 3, yet 3 over 1).
Simply pit them head to head to break the tie.
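The whole procedure, votes plus head-to-head tie-break, can be sketched as follows. The `prefer` function and the `strength` values are hypothetical stand-ins for your own judgement, chosen here only so that the sketch reproduces the worked example above:

```python
from itertools import combinations
from functools import cmp_to_key

def paired_ranking(items, prefer):
    """Round-robin paired ranking: match every item against every other
    item once; prefer(a, b) returns the winner of that pairing."""
    votes = {item: 0 for item in items}
    for a, b in combinations(items, 2):
        votes[prefer(a, b)] += 1

    def compare(a, b):
        if votes[a] != votes[b]:
            return votes[b] - votes[a]          # more votes ranks higher
        return -1 if prefer(a, b) == a else 1   # tie: one extra head-to-head

    return sorted(items, key=cmp_to_key(compare)), votes

# Hypothetical preference strengths, chosen to match the text's outcome.
strength = {"Alternative 1": 3, "Alternative 2": 2,
            "Alternative 3": 4, "Alternative 4": 1}
prefer = lambda a, b: a if strength[a] >= strength[b] else b

ranked, votes = paired_ranking(list(strength), prefer)
print(ranked)  # Alternative 3, then 1, then 2, then 4
```

In practice `prefer` is you, weighing two options at a time; the code only does the bookkeeping that the tick marks did on paper.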
Instinctive ranking is propelled by intuition, and is a hit-or-miss method. It provides a ‘guesstimate’, whereas paired rankings give us assurance that we are as close to certain as we can be.
[To be continued in the Next Post. Excerpted from 'Surfing the Intellect: Building Intellectual Capital for a Knowledge Economy', by Dilip Mukerjea. All the images in this post are the intellectual property of Dilip Mukerjea.]