Producing a Ranking

Decision makers' objectives tend to compete with one another. For instance, a different set of interventions would be chosen if the objective were to maximise health gain than if it were to minimise health inequalities. Given this competition between objectives, it is necessary to allow the objectives to be ‘traded off’ against one another in a systematic way.

In order to incorporate this possibility in the prioritisation tool, it is necessary to estimate the relative importance of the criteria. A Discrete Choice Experiment (DCE) was undertaken to elicit the relative importance that decision makers place on different criteria.

Discrete Choice Experiments

DCEs involve presenting respondents with a series of hypothetical scenarios (choice sets) that are described using a consistent set of parameters called attributes. For example, a scenario could be a public health intervention, described by parameters such as number of people that could benefit from it, its cost effectiveness, and its ability to address health inequalities. Faced with a number of such interventions, respondents are then asked to choose their preferred intervention. Given respondents’ choice of interventions, statistical analysis can be employed to estimate the relative impact of each attribute on the choice made. These results can then be used, amongst other things, to predict whether one intervention would be preferred over another.

A number of alternative methods for estimating the relative importance of intervention attributes have been employed in previous studies (CLG, 2009). The DCE methodology was preferred to these methods for a number of reasons, including:

  • It is a proven methodology, having been frequently used to measure preferences in health economics, and is grounded in economic theory (Ryan et al, 2008).
  • The data required can be collected using a large-scale survey, allowing a wide range of respondents to be reached relatively efficiently, unlike other methodologies such as swing weighting and the analytic hierarchy process (AHP), which are best used in a workshop setting (CLG, 2009).
  • It produces range-sensitive weights. That is, it facilitates comparison of different levels of attributes, rather than being based on abstract comparison of attributes, an approach employed by AHP (CLG, 2009).



Given that it was important to keep the DCE relatively short, so as not to burden already busy decision makers, it was not possible to include all the criteria within the DCE. It was decided to include only three criteria, to ensure the collection of more accurate data (Payne et al, 1993). The three criteria included in the analysis were: cost-effectiveness, inequality score, and reach. Affordability was excluded from the DCE, as it was assumed that decision makers would assess interventions against the other criteria and then choose the highest-ranked intervention that they can afford.

An online survey (LINK) was designed to engage a large number of respondents as efficiently as possible. A draft survey was piloted at a small workshop of decision makers and the necessary adjustments made. An e-mail inviting potential respondents to undertake the questionnaire was sent to 446 decision makers, including directors of finance, commissioning, and public health in PCTs.

Respondents were presented with twelve questions (‘choice sets’), each containing three hypothetical interventions (‘scenarios’). Respondents had to choose the one intervention they would invest in from each set. Each intervention was described by values for each of the decision makers’ objectives / criteria (‘attributes’). Each choice set was accompanied by definitions of these attributes, shown in Table 1.

Table 1: Attributes included in the choice sets

Inequality score: The share of health benefits received by the most disadvantaged 20% of the population.

Reach: The proportion of the total population whose health would improve as a result of the intervention, if all eligible people received the intervention.

Cost effectiveness: Costs are measured in £s and effectiveness is measured in QALYs. A QALY is a simple way of combining quality of life with length of life. One QALY is equivalent to one year in full health. The cost per QALY is therefore the cost of achieving 1 extra year of full health.

The experiment did not include a ‘none of these’ scenario in the choice sets. This reflects the fact that the purpose of the ranking exercise is to prioritise public health interventions rather than decide whether to invest in such interventions or not.

The number of levels per attribute was kept to a minimum to keep the complexity of the experiment as low as possible:

  • Two criteria (reach and inequality score) were assigned two levels. Implicit in this decision is the assumption of a linear relationship between an intervention's score on a particular attribute and the likelihood that the intervention is chosen.
  • One criterion (cost effectiveness) was assigned three levels, to reflect the possibility that respondents may look favourably on interventions with a cost per QALY gained below the £30,000 threshold implicit in NICE recommendations.
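With two attributes at two levels and one at three levels, the full factorial design contains 2 × 2 × 3 = 12 scenarios, matching the number included in the choice sets. The enumeration below is a minimal illustrative sketch using the level values from the tables; it does not reproduce the algorithm of the Burgess design software.

```python
from itertools import product

# Attribute levels as used in the experiment (from Tables 1 and 2).
inequality_levels = ["20:20", "50:20"]                # share of benefits to most disadvantaged 20%
reach_levels = [0.01, 0.05]                           # proportion of population benefiting
cost_effectiveness_levels = [10_000, 30_000, 50_000]  # £ per QALY

# Full factorial design: every combination of attribute levels.
scenarios = list(product(inequality_levels, reach_levels, cost_effectiveness_levels))

print(len(scenarios))  # 2 * 2 * 3 = 12
```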

The twelve possible scenarios included in the choice sets were generated using publicly available software (Burgess, 2007) and are shown in Table 2. An example question from the DCE is shown in Figure 1.

Table 2: Scenarios included in the choice sets

Inequality score    Reach    Cost effectiveness
20:20 1% £10,000/QALY
50:20 1% £30,000/QALY
20:20 5% £50,000/QALY
20:20 1% £30,000/QALY
50:20 1% £50,000/QALY
20:20 5% £10,000/QALY
20:20 1% £50,000/QALY
50:20 1% £10,000/QALY
20:20 5% £30,000/QALY
50:20 5% £30,000/QALY
50:20 5% £50,000/QALY
50:20 5% £10,000/QALY

Figure 1: Example DCE hypothetical scenario

There is a risk when conducting a DCE that respondents choose between interventions by focusing on one objective/attribute only. For instance, they may choose the most cost-effective intervention in every case. This may reflect the importance they attach to cost-effectiveness. However, it may also reflect the fact that they are not genuinely engaging with the trade-off between objectives (Ratcliffe et al, 2009). To combat the latter possibility, two attitudinal questions were included at the start of the questionnaire, asking whether respondents thought each of the objectives was important. By including these questions, respondents' attention is drawn to the importance (or otherwise) of each objective, encouraging them to consider these objectives appropriately when responding to the DCE questions (Ryan et al, 2008).


A total of 1,117 questions were answered by 99 respondents. This resulted in 3,351 observations, as each choice set generates three observations: two scenarios not chosen and one scenario chosen. Table 3 shows the distribution of responses by occupation. Questions completed by respondents who did not complete the whole survey were included, in line with previously reported studies (Ryan et al, 2008).
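The expansion from answered questions to observations can be sketched as follows: each answered choice set becomes three rows, one per scenario, with an indicator marking the chosen alternative. The scenario tuples and the choice below are illustrative only.

```python
# For conditional logit analysis, each answered choice set is expanded into
# one row per scenario, with an indicator marking the chosen alternative.

def expand_choice_set(question_id, scenarios, chosen_index):
    """Turn one answered question (three scenarios, one chosen) into three observations."""
    return [
        {"question": question_id, "scenario": s, "chosen": int(i == chosen_index)}
        for i, s in enumerate(scenarios)
    ]

# Illustrative example: one question containing three hypothetical scenarios.
rows = expand_choice_set(
    question_id=1,
    scenarios=[("20:20", 0.01, 10_000), ("50:20", 0.01, 30_000), ("20:20", 0.05, 50_000)],
    chosen_index=0,
)
print(len(rows))                       # 3 observations per answered question
print(sum(r["chosen"] for r in rows))  # exactly 1 chosen scenario per question
```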

Table 3: Breakdown of responses by occupation

Chief Executive 42
Consultant 252
Director 288
Other - specified 81
Other - not specified 495
Unknown 2050
Grand Total 3208

Table 4 summarises the number of times each scenario was chosen over another scenario.

Table 4: Percentage of time that scenarios were chosen over one another

Scenarios are described by parameter values - (inequality score, reach, cost effectiveness)

  Scenario chosen
Scenario not chosen 20:20, 0.01 , £10,000/QALY 20:20, 0.01 , £30,000/QALY 20:20, 0.01 , £50,000/QALY 20:20, 0.05 , £10,000/QALY 20:20, 0.05 , £30,000/QALY 20:20, 0.05 , £50,000/QALY 50:20, 0.01 , £10,000/QALY 50:20, 0.01 , £30,000/QALY 50:20, 0.01 , £50,000/QALY 50:20, 0.05 , £10,000/QALY 50:20, 0.05 , £30,000/QALY 50:20, 0.05 , £50,000/QALY
20:20, 0.01 , £10,000/QALY         56% 16%   56% 1%   86% 22%
20:20, 0.01 , £30,000/QALY       84%   8% 68%   8% 91%   26%
20:20, 0.01 , £50,000/QALY       36% 34%   60% 3%   92% 62%  
20:20, 0.05 , £10,000/QALY   8% 2%         15% 8%   62% 16%
20:20, 0.05 , £30,000/QALY 23%   5%       60%   4% 85%   22%
20:20, 0.05 , £50,000/QALY 28% 1%         29% 56%   91% 66%  
50:20, 0.01 , £10,000/QALY   5% 5%   34% 5%         66% 26%
50:20, 0.01 , £30,000/QALY 28%   4% 68%   16%       92%   16%
50:20, 0.01 , £50,000/QALY   8%   84% 11%         85% 86%  
50:20, 0.05 , £10,000/QALY   1% 4%   11% 8%   3% 4%      
50:20, 0.05 , £30,000/QALY 13%   2% 36%   5% 29%   1%      
50:20, 0.05 , £50,000/QALY 23% 5%   68% 56%   68% 15%        

Note: blank cells indicate either that the two scenarios did not appear in the same choice set or that the scenarios are identical.

The choice data were analysed using multinomial regression, specifically the conditional logit model. Table 5 summarises the results of the regression analysis.

Table 5: Results of conditional logit regression analysis

Attribute              Coefficient   Std. Err.   P        95% CI
Reach a                0.0435987     0.0201      0.0300   0.0041 to 0.0831
Inequality score a     0.119895      0.0539      0.0260   0.0143 to 0.2255
Cost effectiveness a   -0.0000586    0.0000      0.0000   -0.00006 to -0.00005

a Indicates P < 0.05

All of the coefficients were statistically significantly different from 0 (p ≤ 0.05), suggesting that all attributes had an impact on respondents' choice of intervention. The coefficient on each attribute had an intuitively sensible sign. The coefficient for cost effectiveness was negative, indicating that respondents are more likely to invest in an intervention with a lower £/QALY value. The coefficients for reach and inequality score were positive, indicating that respondents are more likely to invest in an intervention if it benefits a greater proportion of the population or if it has a greater impact on health inequalities.

Each coefficient indicates the effect that a one-unit increase in the associated attribute has on the utility of a scenario, and therefore on the probability of that scenario being chosen (Ryan et al, 2008).
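In a conditional logit model, a one-unit increase in an attribute multiplies the odds of a scenario being chosen by exp(coefficient). A minimal sketch of this interpretation, using the coefficients reported in Table 5:

```python
import math

# Coefficients from Table 5 (conditional logit regression).
coefficients = {
    "reach": 0.0435987,
    "inequality_score": 0.119895,
    "cost_effectiveness": -0.0000586,  # per £1 of cost per QALY
}

# exp(beta) gives the multiplicative change in the odds of a scenario being
# chosen for a one-unit increase in the attribute.
odds_ratios = {name: math.exp(beta) for name, beta in coefficients.items()}

for name, odds in odds_ratios.items():
    print(f"{name}: {odds:.6f}")
```

An odds ratio above 1 (reach, inequality score) raises the odds of selection; below 1 (cost effectiveness) lowers them, consistent with the signs discussed above.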

Using the DCE results to prioritise health interventions

In order to prioritise the 17 interventions being evaluated by this project, the results of the DCE were used to assess the probability of each intervention being funded. These probabilities were then used to rank the interventions.

Following Ryan et al (2008), to calculate the probability of an intervention being funded, the results of the DCE and the criterion values for each intervention are used to calculate the benefit of each intervention, as per equation 1:

V_a = B_reach × Reach_a + B_inequality × Inequality_a + B_cost-effectiveness × CostEffectiveness_a    (1)

Again following Ryan et al (2008), the relative probability of intervention a being chosen compared with the other interventions j is given by equation 2:

P_a = exp(V_a) / Σ_j exp(V_j)    (2)
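This calculation can be sketched as follows. The two hypothetical interventions and the attribute coding (percentage points for reach and inequality score, £/QALY for cost effectiveness) are assumptions for illustration; the paper does not state the exact units used in estimation.

```python
import math

# Coefficients from Table 5. Attribute coding here is an assumption:
# reach and inequality score in percentage points, cost effectiveness in £/QALY.
BETA = {"reach": 0.0435987, "inequality": 0.119895, "cost_effectiveness": -0.0000586}

def benefit(intervention):
    """Equation 1: linear benefit (utility) score for an intervention."""
    return sum(BETA[attr] * value for attr, value in intervention.items())

def choice_probabilities(interventions):
    """Equation 2: probability of each intervention being chosen over the others."""
    utilities = {name: benefit(attrs) for name, attrs in interventions.items()}
    denominator = sum(math.exp(v) for v in utilities.values())
    return {name: math.exp(v) / denominator for name, v in utilities.items()}

# Two hypothetical interventions (values are illustrative, not the 17 evaluated).
interventions = {
    "A": {"reach": 5, "inequality": 50, "cost_effectiveness": 10_000},
    "B": {"reach": 1, "inequality": 20, "cost_effectiveness": 30_000},
}

probabilities = choice_probabilities(interventions)
ranking = sorted(probabilities, key=probabilities.get, reverse=True)
print(ranking)  # interventions ranked by probability of being chosen
```

Intervention A scores higher on every attribute (wider reach, more progressive, cheaper per QALY), so it receives the higher probability and ranks first.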

Confidence in the probability estimate

The confidence grade attached to the probability score for each intervention was estimated by combining the confidence grades for each of the criteria used in the DCE: reach, inequality score, and cost effectiveness. The confidence grades (1-3) for each of the criteria were weighted to reflect the relative importance of the criteria identified in the DCE.
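One way to implement this combination is a weighted average of the criterion grades, with weights proportional to the relative importance implied by the DCE. The grades and weights below are illustrative placeholders, not the values used in the paper.

```python
# Confidence grades (on the paper's 1-3 scale) per DCE criterion.
# Both the grades and the importance weights below are illustrative placeholders.
grades = {"reach": 2, "inequality_score": 3, "cost_effectiveness": 1}
weights = {"reach": 0.25, "inequality_score": 0.35, "cost_effectiveness": 0.40}

# Weighted average, normalised so the combined grade stays on the 1-3 scale.
total_weight = sum(weights.values())
combined_grade = sum(grades[c] * weights[c] for c in grades) / total_weight

print(round(combined_grade, 2))
```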


When decision makers choose to invest in a public health intervention they have a number of objectives in mind. It is important that these objectives are incorporated into any analysis of interventions designed to inform investment decisions. These objectives can, however, sometimes compete with one another. For instance, an intervention may improve the overall health of the population but increase health inequalities. The DCE reported in the previous sections was designed to elicit data from decision makers on the way they ‘trade off’ competing objectives/criteria. These data can be used to estimate the probability that an intervention is preferred by a decision maker.

While the DCE provides information of crucial importance to ranking interventions in line with decision makers' preferences, its implementation raises a number of methodological questions. First, are consistent results obtained from DCEs? That is, would similar criteria weights be obtained if the DCE were run with a different group of decision makers, or with the same group of decision makers at a different point in time? Further research is required to answer this question and to identify how preferences vary between decision makers operating in different contexts.

Second, whose values should be used to weight criteria? This paper estimated weights by eliciting the preferences of the decision makers responsible for allocating public funds. The preferences of decision makers possess a certain level of legitimacy, especially within a democratic system. However, it is often argued that it is the preferences of the public that should be employed to allocate resources (Fox-Rushby et al, 2008). Further discussion of the appropriate source of value is required to determine the appropriate methodology for weighting criteria.


Burgess, L. (2007), Discrete Choice Experiments, Department of Mathematical Sciences, University of Technology, Sydney. Available from http://crsu.science.uts.edu.au/choice

CLG (Department for Communities and Local Government) (2009), Multi-Criteria Analysis: A manual. London: Department for Communities and Local Government.

Fox-Rushby, J., Boehler, C., Hanney, S., Roberts, I., Beresford, P., and Buxton, M. (2008), Prioritisation of prevention services: Determining the applicability of research from the US to the English context. Health England.

Payne, J.W., Bettman, J.R. and Johnson, E.J. (1993), The Adaptive Decision Maker. Cambridge: Cambridge University Press.

Ratcliffe, J., Brazier, J., Tsuchiya, A., Symonds, T. and Brown, M. (2009), Using DCE and ranking data to estimate cardinal values for health states for deriving a preference-based single index from the sexual quality of life questionnaire. Health Economics. Published online: 13 Jan 2009.

Ryan, M., Gerard, K. and Amaya-Amaya, M. (2008) Using Discrete Choice Experiments to Value Health and Health Care. Springer.



Health England Leading Prioritisation, 19 August 2017.