Let us put the question the other way round: could one make sensible and scientifically informed policies without these global indicators or indices? With the experience of Covid-19 fresh in our minds, we would venture that good pandemic policies (leaving out the other issues for the time being) could have been based on sensible data presentation and some simple heuristics rather than on over-stated modelling with its inherent limitations. One key to effective control of the pandemic was acting preventatively at an early stage, and implementing counter-measures such as widespread testing, lockdown and closing of the borders [23]. Taiwan is one of the best examples in this respect: noting the rapid rise of infections in neighboring China in late December 2019, it implemented widespread testing of incoming travellers and activated a National Health Command Center. It soon closed its borders, quarantined all cases and rapidly promoted the use of face masks. Taiwan certainly did not find any advice in the GHSI, since it was not included in the index in the first place. Early detection and reaction were key to controlling the pandemic, and the countries that acted early showed success. Laissez-faire attitudes, as in the United Kingdom, Sweden or the United States of America, proved fatal. A United Nations report has this key message: “Act decisively and early to prevent the further spread or quickly suppress the transmission of Covid-19 and save lives” [24]. Other writers have already noted that simplicity may be a better guide than getting lost in the complexities: “An imperative to prioritize simplicity over complexity is at the core of social health” [25].
Furthermore, in all modelling it is widely recognized that there is a trade-off between precision and complexity. Complex models are seen as more accurate, while simple ones are seen as more general, with a lack of detail that causes systematic bias in predictions. Yet adding detail to a model does not guarantee an increase in reliability unless the added processes are essential, well understood and reliably estimated [26]. O’Neill’s conjecture was that there may be an optimal balance between model complexity and model error ([26], p. 70). In our case, this implies that adding to the complexity of the basic categories in the GHSI may actually increase model error rather than decrease it. This is also the background for the recommendations in [27]. To quote this article:
“Complexity can be the enemy of relevance. Most modelers are aware that there is a tradeoff between the usefulness of a model and the breadth it tries to capture. But many are seduced by the idea of adding complexity in an attempt to capture reality more accurately. As modelers incorporate more phenomena, a model might fit better to the training data, but at a cost. Its predictions typically become less accurate. As more parameters are added, the uncertainty builds up (the uncertainty cascade effect), and the error could increase to the point at which predictions become useless.” ([27], p. 483).
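The mechanism in this quotation is easy to demonstrate. The following minimal sketch is our own illustration with synthetic data (nothing in it is taken from [26] or [27]): polynomials of increasing degree are fitted to noisy observations of a hypothetical process, and while the fit to the training data improves monotonically as parameters are added, the out-of-sample error eventually rises again.

    import numpy as np
    from numpy.polynomial import Polynomial

    rng = np.random.default_rng(seed=1)
    truth = lambda x: np.sin(2 * np.pi * x)  # a hypothetical "true" process

    # 15 noisy training observations, 200 held-out test points
    x_train = np.linspace(0.0, 1.0, 15)
    x_test = np.linspace(0.0, 1.0, 200)
    y_train = truth(x_train) + rng.normal(0.0, 0.2, x_train.size)
    y_test = truth(x_test) + rng.normal(0.0, 0.2, x_test.size)

    for degree in (1, 3, 5, 9, 13):
        # each extra degree adds a parameter to be estimated from the data
        model = Polynomial.fit(x_train, y_train, degree)
        rmse = lambda x, y: np.sqrt(np.mean((model(x) - y) ** 2))
        print(f"degree {degree:2d}: "
              f"train RMSE {rmse(x_train, y_train):.3f}, "
              f"test RMSE {rmse(x_test, y_test):.3f}")

In this toy setting the training error keeps falling, while the held-out error typically reaches its minimum at a moderate degree and then deteriorates: a simple analogue of the optimal balance O’Neill conjectured.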
Here we want to stress that we are concerned with meeting an immediate health crisis, and we ask whether an index like the GHSI can be regarded as a useful addition to our toolbox for managing that crisis. In the preceding sections we have already argued that, as a matter of fact, it was not useful, and certainly not a precise predictor. But the real underlying question we want to ask is whether we are looking in the wrong toolbox altogether. As with all tools, utility depends on intended use. We therefore do not argue that there is no use for a composite index like the GHSI; it may well have good uses as a tool in the design of long-term strategies in our health policies. What we do claim is that policy in an imminent crisis like a pandemic is ill advised if it looks to the composite index as a guide to crisis management. This does not imply that science cannot contribute to crisis management; rather the opposite: science is highly useful if the right information comes in the right format and the right doses at the right times. It only implies that scientific advice may take other roads to policy than a global composite index. One lesson might be to resist the temptation to provide numbers, i.e. quantification, when the problem is still poorly understood.
“Quantification can backfire. Excessive regard for producing numbers can push a discipline away from being roughly right towards being precisely wrong.” ([27], p. 484).
One needs to recognize the pressing needs of decision makers facing an immediate crisis. Obviously, a decision maker tries to come up with robust decisions, and robust decisions are typically evaluated against a set of different future scenarios under deep uncertainty, guided by varying criteria for robustness; for example, a decision maker may switch from an optimist to a pessimist strategy or the other way round [28, 29]. Decision makers need to engage in a learning process as the crisis unfolds, and thus apply an adaptive management approach [30] through the different phases of the crisis. The availability of a risk register may serve as a decision-support tool during this process. As the science–policy interface is known to be full of pitfalls, institutionalized brokerage may be an important support [31,32,33,34], aiming at synthesis when information is sparse and beset with deep uncertainty. Heuristics may be more important than formal tools that aim at characterizing the whole complexity of the issue at hand. As Todd and Gigerenzer [35] observe, simple heuristics often perform comparably to, or better than, more advanced algorithms, and they add a much-desired simplicity that leads to more robust decisions. This point does not invalidate other uses of quantified formal indices or models, but it stresses that scientific input needs to meet the constraints and context of the decision situation in a crisis.
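To make the notion of varying robustness criteria concrete, consider a minimal sketch of our own invention (the actions, scenarios and payoff numbers are purely hypothetical): the same scenario table can recommend different actions depending on whether the decision maker applies a pessimist (maximin), optimist (maximax) or minimax-regret criterion.

    import numpy as np

    actions = ["early lockdown", "targeted testing", "wait and see"]
    # Rows: actions; columns: four future scenarios under deep uncertainty.
    # Entries: hypothetical outcome scores (higher is better); no scenario
    # probabilities are assumed to be available.
    payoffs = np.array([
        [8,  6,  5, -2],
        [6,  7,  4,  1],
        [9, -3, -5, -8],
    ])

    maximin = actions[np.argmax(payoffs.min(axis=1))]  # best worst case (pessimist)
    maximax = actions[np.argmax(payoffs.max(axis=1))]  # best best case (optimist)
    regret = payoffs.max(axis=0) - payoffs             # shortfall vs. best action per scenario
    min_regret = actions[np.argmin(regret.max(axis=1))]

    print("pessimist (maximin):", maximin)   # -> targeted testing
    print("optimist (maximax):", maximax)    # -> wait and see
    print("minimax regret:", min_regret)     # -> early lockdown

Switching the criterion, as a decision maker may do when the outlook changes, switches the recommended action; adaptive management then amounts to revisiting both the table and the criterion as evidence accumulates.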
In such a setting, reliable data on the emergence of the risk typically provide good input for decision heuristics. We illustrate this by reference to our INGSA Evidence-to-Policy Tracker.