Chapter 14: Multi-Criteria Decision Making (MCDM) for Small Sets of Alternatives

Learning Objectives

By the end of this chapter, you will be able to distinguish decision analysis from statistical inference, structure a small multi-criteria decision problem, compute transparent weighted rankings with AHP and TOPSIS, check consistency and sensitivity, and report rankings without implying more precision than stakeholder judgements support.

When MCDM Methods Are Appropriate

Multi-criteria decision-making methods are useful when a small set of alternatives must be ranked or selected using several criteria. Their value lies in transparency: the criteria, weights, normalisation choices, and aggregation rule are all visible, so stakeholders can see how a ranking was produced and whether it holds under plausible assumptions. They produce ranked or weighted scores rather than p-values or population-parameter estimates.

MCDM belongs in a small-sample methods text because many constrained research settings end with a decision rather than a population estimate. A school may need to choose one of three reading interventions, a clinic may need to prioritise one of four service improvements, or a community project may need to allocate a small grant among a few feasible options. When the alternatives are fixed and the evidence is too limited for strong inference, a structured decision model is more honest than pretending that a p-value can select the best option.

MCDM is appropriate when alternatives are few, criteria are heterogeneous, stakeholder preferences matter, and the goal is selection or resource allocation (Saaty 1980; Hwang and Yoon 1981). When the question is whether a treatment caused an effect, MCDM cannot answer it; that question calls for an experimental or quasi-experimental design.

Analytic Hierarchy Process

The Analytic Hierarchy Process (AHP) uses pairwise comparisons to derive priority weights (Saaty 1980). Decision-makers compare criteria two at a time, and the resulting matrix is converted into weights. AHP also includes a consistency check. A low consistency ratio suggests that the pairwise judgements are coherent enough to use. A high value means the comparisons should be revisited.

Before constructing the matrix, document the elicitation protocol: who supplied the comparisons, what scale was used, whether judgements were individual or consensus-based, and how disagreements were resolved. A simple stakeholder template should ask each rater to compare every pair of criteria, give a short reason for each judgement, and flag any comparison they feel uncertain about. The final matrix should be auditable rather than treated as a hidden expert input.

Table 14.1 gives the criteria weights for a training-programme selection example. Effectiveness receives the largest weight, while cost and feasibility still contribute to the decision.

The AHP calculation below uses the principal eigenvector of the pairwise-comparison matrix. The consistency ratio is

\[ \mathrm{CR} = \frac{\lambda_{\max} - k}{(k - 1)\,\mathrm{RI}}, \]

where k is the number of criteria and RI is Saaty's random-index value for a matrix of that size.
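
A minimal sketch of this calculation in Python with NumPy. The pairwise-comparison matrix here is an illustrative reconstruction chosen to reproduce the Table 14.1 weights, not the matrix from an actual elicitation:

```python
import numpy as np

# Illustrative pairwise-comparison matrix (cost, effectiveness, feasibility),
# reconstructed so that it reproduces the Table 14.1 weights; real elicited
# judgements would normally use Saaty's 1-9 scale.
A = np.array([
    [1.0, 1/3, 2/3],   # cost vs (cost, effectiveness, feasibility)
    [3.0, 1.0, 2.0],   # effectiveness vs ...
    [3/2, 1/2, 1.0],   # feasibility vs ...
])

# Principal eigenvector gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
idx = np.argmax(eigvals.real)
lambda_max = eigvals.real[idx]
weights = np.abs(eigvecs[:, idx].real)
weights /= weights.sum()

# Consistency ratio: CR = (lambda_max - k) / ((k - 1) * RI).
k = A.shape[0]
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[k]   # Saaty's random-index values
CR = (lambda_max - k) / ((k - 1) * RI)

print("weights:", np.round(weights, 3))   # approx. [0.182, 0.545, 0.273]
print("CR:", round(CR, 3))                # 0.0 for a perfectly consistent matrix
```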

Table 14.1

AHP criteria weights for the training-programme decision

Criterion       Weight   Relative emphasis
Cost            0.182    Cost matters, but is not dominant
Effectiveness   0.545    Primary criterion
Feasibility     0.273    Secondary practical criterion

Note. The consistency ratio is 0.000. A CR below 0.10 is commonly treated as acceptable for pairwise-comparison matrices (Saaty 1980); for small 3 x 3 matrices, a stricter threshold of about 0.05 is often recommended. In either case, inconsistent judgements should be revisited rather than excused.

Table 14.2

AHP weighted scores for three training programmes

Rank   Programme   Weighted cost   Weighted effectiveness   Weighted feasibility   Total score
1      B           0.055           0.273                    0.082                  0.409
2      A           0.091           0.136                    0.109                  0.336
3      C           0.036           0.136                    0.082                  0.255

Note. Programme B ranks highest because effectiveness has the largest weight and Programme B performs best on that criterion.

Interpretation should be decision-focused rather than inferential. Programme B is the preferred option under the stated weights and scores. That does not mean Programme B is statistically superior. It means the explicit decision model ranks it highest.
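
As a brief illustration of the aggregation behind Table 14.2, the sketch below assumes local programme scores backed out from the table (each weighted entry divided by its criterion weight):

```python
import numpy as np

# Criterion weights from Table 14.1 (cost, effectiveness, feasibility).
weights = np.array([0.182, 0.545, 0.273])

# Local programme scores on each criterion, backed out from Table 14.2;
# rows are Programmes A, B, C.
local_scores = np.array([
    [0.50, 0.25, 0.40],   # Programme A
    [0.30, 0.50, 0.30],   # Programme B
    [0.20, 0.25, 0.30],   # Programme C
])

weighted = local_scores * weights   # per-criterion contributions
totals = weighted.sum(axis=1)       # total score per programme

for name, row, total in zip("ABC", weighted, totals):
    print(name, np.round(row, 3), round(total, 3))
# Programme B totals about 0.409, matching Table 14.2.
```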

TOPSIS

TOPSIS ranks alternatives by closeness to an ideal solution and distance from a negative-ideal solution (Hwang and Yoon 1981). The method first normalises the criteria, applies weights, identifies the best and worst value on each weighted criterion, and then computes a closeness coefficient. Higher coefficients indicate alternatives closer to the ideal profile.

Cost criteria require recoding because the formulation used here treats larger values as better on every criterion. In Table 14.3, project cost is transformed into a benefit score before normalisation.

The vector-normalisation step divides each criterion value by the Euclidean norm of its column:

\[ r_{ij} = \frac{x_{ij}}{\sqrt{\sum_i x_{ij}^2}}. \]

The implementation below shows the full calculation, including the cost-to-benefit transformation and closeness coefficient.
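
The sketch below is one such implementation in Python with NumPy. It assumes equal criterion weights and recodes cost as max + min minus cost; under those assumptions it reproduces the closeness coefficients in Table 14.3:

```python
import numpy as np

def topsis_closeness(raw, weights, cost_cols=()):
    """Closeness coefficients for a decision matrix (rows = alternatives)."""
    x = raw.astype(float)
    # Recode cost columns as benefits so that larger is better.
    for j in cost_cols:
        x[:, j] = x[:, j].max() + x[:, j].min() - x[:, j]
    # Vector normalisation, then weighting.
    v = x / np.sqrt((x ** 2).sum(axis=0)) * weights
    ideal, anti = v.max(axis=0), v.min(axis=0)
    d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)

# Raw scores for P1-P4: impact, cost, support (Table 14.3); equal weights assumed.
raw = np.array([[7, 50, 6], [8, 70, 7], [6, 40, 8], [9, 80, 7]])
cc = topsis_closeness(raw, np.array([1/3, 1/3, 1/3]), cost_cols=(1,))
for name, c in zip(["P1", "P2", "P3", "P4"], cc):
    print(name, round(c, 3))   # P3 ranks first at about 0.640
```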

Table 14.3

TOPSIS ranking for four community projects

Rank   Project   Impact   Cost   Support   Closeness coefficient
1      P3        6        40     8         0.640
2      P1        7        50     6         0.544
3      P2        8        70     7         0.395
4      P4        9        80     7         0.389

Note. Cost was recoded as a benefit before vector normalisation, where each criterion value is divided by the Euclidean norm of its column. This places criteria on a comparable scale before weighting. The closeness coefficient ranges from 0 to 1.

The TOPSIS ranking depends on the normalisation rule and the weights. If stakeholders disagree about cost or impact weights, the ranking should be recomputed under those alternatives rather than reported as a single unquestioned answer.
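
For example, reusing the topsis_closeness function from the sketch above, the ranking can be rerun under a hypothetical alternative weight vector that a stakeholder who prioritises impact might prefer:

```python
import numpy as np

# Hypothetical impact-heavy weighting (impact, cost, support).
alt_weights = np.array([0.6, 0.2, 0.2])

raw = np.array([[7, 50, 6], [8, 70, 7], [6, 40, 8], [9, 80, 7]])
cc_alt = topsis_closeness(raw, alt_weights, cost_cols=(1,))
print(dict(zip(["P1", "P2", "P3", "P4"], np.round(cc_alt, 3))))
# A different ordering from Table 14.3 under these weights is a sensitivity
# finding to report, not a computational error.
```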

VIKOR and Other MCDM Methods

VIKOR is another ideal-solution method, but it emphasises compromise between group utility and individual regret (Opricovic and Tzeng 2004). TOPSIS asks which alternative is geometrically closest to the ideal profile. VIKOR asks which alternative is a defensible compromise when no option is best on every criterion. Other methods, such as SMART, WASPAS, and MOORA, follow the same broad logic: structure the decision, normalise criteria, apply weights, aggregate scores, and test sensitivity.

The VIKOR calculation below uses the same four projects and equal criterion weights as the TOPSIS example. For each criterion, the best value receives zero loss and worse values receive larger normalised loss. The utility measure S summarises total weighted loss, the regret measure R records the largest single-criterion loss, and the index Q combines both using v = 0.5 to balance group utility and individual regret. Lower Q values are preferred.
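
A minimal sketch of that calculation, using the Table 14.3 scores with equal weights and v = 0.5:

```python
import numpy as np

# Raw scores for P1-P4: impact (benefit), cost (cost), support (benefit).
raw = np.array([[7, 50, 6], [8, 70, 7], [6, 40, 8], [9, 80, 7]], dtype=float)
weights = np.array([1/3, 1/3, 1/3])
benefit = np.array([True, False, True])   # direction of preference per criterion
v = 0.5                                    # balance of group utility vs regret

# Best and worst value on each criterion, respecting direction of preference.
best = np.where(benefit, raw.max(axis=0), raw.min(axis=0))
worst = np.where(benefit, raw.min(axis=0), raw.max(axis=0))

# Weighted normalised loss of each alternative on each criterion.
loss = weights * (best - raw) / (best - worst)

S = loss.sum(axis=1)   # total weighted loss (group utility)
R = loss.max(axis=1)   # worst single-criterion loss (individual regret)
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))

for name, s, r, q in zip(["P1", "P2", "P3", "P4"], S, R, Q):
    print(name, round(s, 3), round(r, 3), round(q, 3))
# P2 has the lowest Q (about 0.318), matching Table 14.4.
```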

Table 14.4

VIKOR compromise ranking for the community project example

Rank   Project   Utility loss (S)   Regret loss (R)   VIKOR index (Q)
1      P2        0.528              0.250             0.318
2      P3        0.333              0.333             0.500
3      P4        0.500              0.333             0.773
4      P1        0.639              0.333             1.000

Note. Lower Q values are preferred. Compare this compromise ranking with the TOPSIS closeness ranking. Disagreement is a sensitivity finding, not an error.

The technical differences matter less than the reporting discipline. Always state the criteria and their substantive justification, the raw scores and direction of preference, the weights and how they were elicited, the normalisation and aggregation methods, and sensitivity analyses across plausible weight sets. Weights are value judgements, not statistical estimates, so they need transparent justification.

Sensitivity Analysis

Sensitivity analysis is essential because MCDM rankings can change when weights or normalisation rules change. Figure 14.1 and Table 14.5 vary the weight placed on effectiveness in the AHP example. This is a better way to communicate robustness than presenting a ranking as if it were fixed by the data alone.

The sensitivity loop below perturbs the effectiveness weight and recomputes rankings. In a stakeholder report, repeat this for each contested criterion or use a tornado plot to show rank reversals under +/-20% weight changes.
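
One way to write that loop, assuming the local programme scores backed out from Table 14.2 and splitting the non-effectiveness weight between cost and feasibility in their original 2:3 ratio:

```python
import numpy as np

# Local programme scores (rows = A, B, C; columns = cost, effectiveness,
# feasibility), backed out from Table 14.2 as before.
local_scores = np.array([
    [0.50, 0.25, 0.40],
    [0.30, 0.50, 0.30],
    [0.20, 0.25, 0.30],
])
names = np.array(["A", "B", "C"])

for w_eff in np.arange(0.30, 0.81, 0.05):
    # Redistribute the remaining weight to cost and feasibility in a 2:3 ratio,
    # preserving their relative emphasis from Table 14.1.
    w = np.array([(1 - w_eff) * 0.4, w_eff, (1 - w_eff) * 0.6])
    totals = local_scores @ w
    order = names[np.argsort(-totals)]
    print(f"{w_eff:.2f}  top = {order[0]}  ranking = {' > '.join(order)}")
# The top programme switches from A to B once the effectiveness weight reaches
# about 0.40, as in Table 14.5.
```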

Figure 14.1: Sensitivity of the AHP top-ranked programme to the effectiveness weight.

For applied decision reports, a tornado plot can extend this idea by varying each criterion weight by a fixed amount, such as +/-20%, and showing whether the top-ranked alternative changes. That display is often more comprehensive than a single-weight sweep when several criteria are contested.
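
A sketch of that check under the same assumptions as the sensitivity loop above; the +/-20% factors and the renormalisation rule are illustrative choices rather than fixed conventions:

```python
import numpy as np

# Baseline weights (cost, effectiveness, feasibility) and local programme
# scores, carried over from Tables 14.1 and 14.2 as assumptions.
base_w = np.array([0.182, 0.545, 0.273])
local_scores = np.array([
    [0.50, 0.25, 0.40],   # A
    [0.30, 0.50, 0.30],   # B
    [0.20, 0.25, 0.30],   # C
])
names = np.array(["A", "B", "C"])
criteria = ["cost", "effectiveness", "feasibility"]

def top_alternative(w):
    w = w / w.sum()   # renormalise after perturbing a single weight
    return names[np.argmax(local_scores @ w)]

print("baseline top:", top_alternative(base_w))
for j, crit in enumerate(criteria):
    for factor in (0.8, 1.2):   # vary one weight by +/-20%
        w = base_w.copy()
        w[j] *= factor
        print(f"{crit} x {factor:.1f}: top = {top_alternative(w)}")
# Any change in the top alternative flags a rank reversal to discuss with
# stakeholders; the perturbed results feed directly into a tornado plot.
```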

Table 14.5

AHP ranking sensitivity as the effectiveness weight changes

Effectiveness weight   Top programme   Ranking
0.30                   A               A > B > C
0.35                   A               A > B > C
0.40                   B               B > A > C
0.45                   B               B > A > C
0.50                   B               B > A > C
0.55                   B               B > A > C
0.60                   B               B > A > C
0.65                   B               B > A > C
0.70                   B               B > A > C
0.75                   B               B > A > C
0.80                   B               B > A > C

Note. The ranking is stable when Programme B remains first across plausible effectiveness weights. If the top programme changes, stakeholders should discuss whether the weight range is realistic.

Key Takeaways

MCDM methods help structure decisions with multiple criteria and few alternatives. AHP is useful when pairwise judgements are central. TOPSIS and VIKOR are useful when alternatives can be compared against ideal and compromise profiles. These methods complement statistical inference but do not replace it. Good MCDM reporting makes the judgement calls visible: criteria, scores, weights, normalisation, aggregation, consistency, and sensitivity all need to be shown.

Self-Assessment Quiz

Question 1. When are MCDM methods most appropriate?

Explanation.

MCDM methods support ranking and selection decisions across multiple criteria. They are decision tools, not hypothesis tests.

Question 2. What does AHP’s consistency ratio evaluate?

Explanation.

The consistency ratio checks whether pairwise comparisons are logically coherent. High inconsistency means the judgements should be revisited.

Question 3. In TOPSIS, what does a larger closeness coefficient mean?

Explanation.

TOPSIS ranks alternatives by relative closeness to the ideal solution. Larger coefficients indicate more preferred alternatives under the chosen weights and normalisation.

Question 4. Why must cost criteria often be transformed before TOPSIS?

Explanation.

TOPSIS treats larger values as preferable. Cost must therefore be converted so that lower cost becomes a higher benefit score.

Question 5. Why is sensitivity analysis essential in MCDM?

Explanation.

Rankings can be sensitive to weights and normalisation. Sensitivity analysis shows whether the preferred alternative is robust or depends on a narrow assumption.