
The international system desperately needs a counterweight to the diplomatic immobilism of decisions based on political expediency. One possible solution would be for the U.N.'s new Secretary-General to appoint a panel of superforecasters to tell the organization what to focus on most urgently.

The U.N.'s failure to set expert advice against the politicians has produced a string of blunders over the years — allowing the Rwanda massacres to take place, screwing up in Haiti, missing the boat on Ebola, standing by helpless in South Sudan, ignoring the civil war in Sri Lanka, letting Europe as a community turn its back on refugees, covering up rape and violence by peacekeeping troops in West Africa, and failing to contain North Korea's belligerence and oppression, to mention only the most recent.

No one can claim these disasters and mistakes were not foreseen. But the whistleblowers have regularly been pursued, if not punished. Looking further back, structural adjustment, as promoted by the World Bank and IMF, was clearly from the beginning a horrendous non-Keynesian approach to economic development. But who was around to give a non-partisan analysis of the options?

With a superforecasting team on board, perhaps Kofi Annan could have given more weight to the U.N. peacekeepers who were predicting massacres in Rwanda unless the international community stepped up its presence. Would US President Bill Clinton have approved NATO cluster bomb attacks on civilians in Serbia if someone with more clout than U.N. human rights officials had publicly pointed out the violation of humanitarian standards?

Finding superforecasters

Where can we find superforecasters, and what can they do? Credibility is the key, and Canadian-American political scientist Philip Tetlock has done more than anyone to establish it. Tetlock is famous for demonstrating that many so-called political experts do no better than a “dart-throwing chimpanzee” at hitting the bullseye with their predictions. Superforecasting (2015), written with journalist Dan Gardner, tells how Tetlock assembled a group of lay people who reportedly did 30 percent better in one year than professional intelligence analysts at predicting future events. All they had to go on was publicly available information from the Internet.

Tetlock's argument is that superforecasters are made, not born. He sets out 10 simple rules for developing these skills. They are the same qualities I have found in the people I consider superforecasters at the U.N. in Geneva. Among these principles:

  • Don’t try to forecast too much into the future
  • Consider counter-evidence to your views
  • Get help from others. More forecasters produce better results (the wisdom of crowds)
  • Learn to disagree without being disagreeable
  • Own your errors. Don’t justify or excuse them, but don’t exaggerate your failings
  • Revise your estimates as new evidence comes in, and put numbers on your certainty.

This last is perhaps the most important difference between experts and superforecasters.
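
To make that last habit concrete, here is a minimal sketch (mine, not Tetlock's) of what revising a numeric estimate with Bayes' rule looks like. The scenario and the likelihood ratio are invented for the example.

```python
def update(prior: float, likelihood_ratio: float) -> float:
    """Revise a probability after one piece of evidence.

    likelihood_ratio = P(evidence | event will happen) /
                       P(evidence | event will not happen)
    """
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Invented example: start at 15% that a ceasefire collapses this quarter;
# a reported troop build-up is judged three times more likely if collapse
# is coming than if it is not.
p = update(0.15, 3.0)
print(f"revised estimate: {p:.0%}")  # -> 35%
```

The point is not the arithmetic but the discipline: a forecaster who writes "15 percent" can be moved, and scored, in a way that one who writes "unlikely" cannot.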

Fundamental change

Tetlock's argument for specifying the scale of your uncertainties is simple: “Asking falsifiable questions and forecasting on them has the potential to moderate polarizing policy debates because accountability fundamentally alters the parameters of the discussion.” Tetlock now runs a forecasting project at goodjudgment.com, where anyone can exercise their own powers of prediction on a series of questions. Its GJOpen site challenges participants to make forecasts, explain their reasoning, let others challenge them, and see how they perform against the field, honing their skills along the way. Good Judgment will also help you run a tournament within your own organization (hint, hint).

One major challenge open to forecasters is the Early Warning Project 2016, designed to help policymakers and NGOs assess the risk of mass killings. For example, the forecasters put the chance of a mass killing in Turkey before 1 January 2017 at 15 percent, in Yemen at 8 percent and in Pakistan at 5 percent. Some 97 forecasters put the estimate for Ukraine at 0 percent. And 55 percent of 88 forecasters believe there will be a mass killing in Afghanistan before the end of the year.
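
To give a sense of how such crowd estimates can be combined, here is a hedged sketch. The Good Judgment researchers found that averaging many forecasts and then pushing the average away from 50 percent ("extremizing") tends to beat individual forecasters; the individual estimates and the exponent below are illustrative inventions, not the project's published numbers.

```python
def pool(forecasts: list[float], a: float = 2.0) -> float:
    """Average probabilities, then extremize the mean in odds space."""
    mean = sum(forecasts) / len(forecasts)
    odds = (mean / (1.0 - mean)) ** a  # a > 1 pushes the mean away from 50%
    return odds / (1.0 + odds)

# Hypothetical individual estimates of a mass-killing risk:
crowd = [0.05, 0.10, 0.15, 0.20, 0.10]
print(f"pooled estimate: {pool(crowd):.0%}")  # mean 12% -> extremized to ~2%
```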

Working with new parameters

The U.N. already has a number of ways to gather and air differing opinions but rarely does so, except at the political level. Committees are a perfect place to hear dissenters, but their public reports aim more for consensus, the lowest common denominator of agreement, than for quantifying the uncertainties.

At the end of October Tetlock gave a masterclass at the World Government Summit in Dubai, calling on participants to draw up scenarios for the coming 10, 50 and 100 years. His 2015 Edge masterclass (eight hours of video and audio, plus a 61,000-word transcript) is available online. Not that it promises a panacea: the superforecasters were wrong about Brexit, about the rejection of the Colombian peace agreement, and about the election of Donald Trump as next U.S. President.

What makes such approaches useful is that the superforecasters then go back to their predictions and discuss how they went wrong. The details are available on the Good Judgment blog. “High status pundits have learned a valuable survival skill, and that survival skill is they’ve mastered the art of appearing to go out on a limb without actually going out on a limb,” says Tetlock.

With regard to the 2016 U.S. Presidential election, the GJ Open group (would-be superforecasters rather than proven stars) scored highest among five sites that made daily forecasts, including the Huffington Post. But, as Nick Rohrbaugh admits on the Good Judgment site, it is more a question of being least wrong. They gave Trump a 24 percent chance of winning. Forecasters who had correctly predicted the U.K.'s vote to leave the European Union were twice as likely to think Trump would win: 30 percent of them, against 15 percent of those who had expected Britain to remain. Even so, Rohrbaugh points out, Tetlock's proverbial dart-throwing chimpanzee would have given Trump a 50 percent chance of winning.
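
A quick worked version of Rohrbaugh's point, using the Brier score that Tetlock's tournaments rely on (squared error summed over the two outcomes, so 0 is perfect and 2 is worst). The numbers are the ones quoted above; the code is my sketch, not Good Judgment's.

```python
def brier(p_yes: float, happened: bool) -> float:
    """Two-outcome Brier score for a yes/no forecast (0 = perfect, 2 = worst)."""
    outcome = 1.0 if happened else 0.0
    return (p_yes - outcome) ** 2 + ((1.0 - p_yes) - (1.0 - outcome)) ** 2

print(brier(0.24, True))  # GJ Open on a Trump win: ~1.16
print(brier(0.50, True))  # the dart-throwing chimpanzee: 0.50
```

On this single question the chimpanzee wins; GJ Open's edge came from being closer than its rivals day after day over the campaign, which is what "least wrong" means in practice.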

Fail again, fail better

So, just appointing a panel doesn't solve all the problems. Discussing the Trump result, Tetlock points out that his self-selecting group of superforecasters did worse overall than the crowd in their Brexit and Trump predictions. It still depends on who's on your team. To quote Samuel Beckett via tennis star Stan Wawrinka: fail again, fail better.

Tetlock warns against choosing a question that is easy to answer instead of the one you really want to predict: for example, “Will Hillary Clinton get the most votes?” as against “Will Hillary Clinton win the election?” The blog has a whole section on posing the right questions.

When the IPCC (the Intergovernmental Panel on Climate Change) published its first assessment 26 years ago, I criticised it for taking a minimalist view rather than recognizing that science is not about consensus, and for allowing governments like the U.K.'s to defer action by citing the unquantified “uncertainties”. I give myself a 60 percent success score on that one. That's just better than coin-tossing, and I would rate my call higher for the 1990s than for later.

But my rating goes down to 20 percent with regard to the Paris Climate Change conference. I didn't reckon with the determination of Laurent Fabius to close his career with an achievement, and I marked the prospects down just before the talks started.

I'm keeping the 20 percent until I am sure that the U.S., the U.K., India and China will live up to their promises. Maybe, since the U.S. presidential election, I should push it up to 30 percent. But I don't expect to be among the superforecasters. Journalists are better at predicting the present than the future. We're too excitable.

6 COMMENTS

  1. As a former intelligence officer I would be keen to see the UN embrace the superforecaster ethos. The key obstacle, however, is that superforecasters can afford to get it wrong. The UN can't afford to get it wrong – or right. Any intelligence apparatus within the UN would face a legal backlash in either instance: 'knowing what it does not know', or 'knowing what it does know' and being unable to act in a meaningful way.

    • Good point. But we need something to counterbalance the political immobilism, as I argued. Would it have taken so long for the U.N. to have apologized to Haiti, and poorly at that, if someone with credibility had pointed to the U.N.'s culpability in the cholera epidemic? I don't agree, though, that the U.N. can't afford to get it wrong. It has done so often, as I point out. I'd rather be wrong for the right reason than for the obviously wrong ones. And, for me, it is not a reason for failing to act, at least in the right direction. I mentioned only the most obvious failures of political decision-making, rather than the debatable political decisions that could have done with some objective, statistically based assessment.

  2. Hi, this is Gwyn from Good Judgment Inc — the commercial evolution of the Good Judgment Project research team (our academic side, which did so well in the IARPA forecasting tournament). I just had a quick note for your consideration that occurred to me as I read this article. Another reason “Superforecaster” forecasting is so important is that it offers numerical probabilities. The political world is so often replete with “vague verbiage” forecasting, where loose language is the mode of prediction. For example, an expert might say an event has a “distinct possibility” of occurring; then if something happens, they were right, and if nothing happens, they were still right, because they only said it was “possible”. I believe that working with numerical probabilities gives more solid ground on which to base decisions.

  3. I hope my quote from Philip Tetlock made clear how much I agree with you. The point where superforecasters and conventional pundits meet is that when superforecasters say something is 70% likely, there's a 30% likelihood it won't happen, and they will still be superforecasters. Like Philip Tetlock, I don't believe you judge superforecasters by their being right all the time — but by their willingness to subject their judgement to quantitative analysis. And that's what we need more of in the U.N. One of its most successful programs is difficult to quantify because its results cannot be definitively ascribed to its practices. Does that mean we should stop it? No, but so much can be given importance by quantification, even if I don't accept the adage that you can't treasure what you can't measure. Dart-throwing chimpanzees have their uses.

  4. Agreed, Peter! There is more room for superforecasters to influence our work. There may also be room for rank-and-file employees to introduce their views – by featuring them in events and publications. Finding superforecasters is the challenge.

  5. Right on! The person I knew with the best superforecaster qualities in the UN was a security guard. Philip Tetlock discovered his superforecasters were not specialists or top-rankers, just people who were careful in their judgements and cautious about voting with their prejudices, and they were found in many walks of life. The main link was that they were not afraid to look at numbers, even if they were not mathematicians. “Expertise” counted for little, which is why Tetlock set up his website where anybody could contribute. That would be my ideal choice for the UN: lots of people at all levels feeding in “the wisdom of the crowd” instead of decisions made by hierarchies.
