How can experts help governments think?

by Professor Glen O'Hara, October 22, 2018

JCE’s editors are delighted to present an excerpt from Glen O’Hara’s inaugural professorial lecture at Oxford Brookes University, delivered on 9 May 2018. 

This will be the first inaugural lecture to be published in the journal, and we took the decision to start with Glen's for a couple of reasons. First, close readers might remember that we closed Volume 10 with a collective editorial we called 'On Brutal Culture'. In ruminating on the ways that the experience of 'culture' is directly relevant to people's imaginative political resistance, the piece acknowledged that this can take some unanticipated – and brutal – forms. Cultural foundationalism, with all its fictions, has been a powerful resource for the regressive, populist nationalisms orchestrated by the campaigns for Brexit and for Trump, and by the resurgence of the Far Right across Europe. We want, however, to host scholarship that goes further than reporting on these trends to drill down into the cultural and political mechanics behind them. In the context of Brexit, one set of mechanics was immortalized by the then Justice Secretary Michael Gove's assertion that 'people in this country have had enough of experts'. Glen O'Hara's scholarship, journalism and voice on social media have provided a remorseless and impeccably well-informed rejoinder, demonstrating all the ways that expertise matters. This leads us directly to our second reason for inviting him to publish with us. O'Hara is a political historian whose excavation of the collapse of due political process is grounded in analysis of long-run political data. As an interdisciplinary journal dedicated to the study of how culture, economies and politics collide, we think this a prudent moment to take seriously the political history that has made it necessary to defend the role of expertise.

 

The struggle to be ‘relevant’ – and, latterly, to build up case-studies of their metricised social ‘Impact’ more broadly – has led academics to think about their intent and audience. But what it has also done is draw them into a world of outsized and technicolour comment. Added to their own tendency to seek an answer, or the answer, this has encouraged us all to write and speak in terms that do not make enough sustained use of one of our advantages, and one of our jobs: the injection of nuance, uncertainty and granular detail into the picture.

Consider the UK General Election of 2017. One of the strands in my work up to now has been the analysis of data, including long-run trends using opinion polls and other political indicators, so I took a strong and public view of the situation at that time. Now, right up until three to four weeks before polling day, I would have confidently told you that Britain's Labour Party was heading for a very bad defeat. You did not, by the way, need to look at any opinion polls to reach that conclusion. Local and Parliamentary by-elections, canvassing data, focus group transcripts and the sheer weight of anecdote all pointed in exactly the same direction. But then, something very unusual happened. Labour leapt from its pitiful performance in the early May local elections to pass 40 per cent of the popular vote in early June. They still lost, but they were not defeated in anything like the manner predicted just weeks before. Why did this happen? There were at least three straws in the wind that we should have caught hold of.

One: campaigns do now seem to matter. Going into the 2017 General Election, most commentators were confident that the short campaign of three to six weeks or so did not really count. We were wrong. Labour surged, while the Liberal Democrats, Greens and UKIP sagged as the main Opposition party sucked up all the anti-Conservative oxygen in the room. We should have been warned about that by the Canadian federal election of 2015, at which the Liberal Party led by Justin Trudeau not only put on a vote share gain very similar to Labour's in 2017, but came from third to first, adding over 20 per cent and 150 seats to their disastrous showing at the previous election. Fortunately, I did note this at the time, wondering aloud in January 2016 whether we had 'got it all wrong' (O'Hara, 2016). But that is to sidestep the real point here: in conditions where party attachments seem to be weakening all the time, sudden electoral surges should not be as surprising as they were.

Two: the Leave electorate from 2016 did not constitute the General Election electorate. As we can see in work published by Matt Singh of Number Cruncher Politics, there was a very strong relationship between an increase in turnout and voting to leave the European Union in 2016 (Singh, 2016). Theresa May's electoral gamble was that she could ride to an easy, even uncontested victory on the back of Leave voters who wanted to make sure that we did indeed leave the EU. But both she and we should have been more cognisant of the risk that these voters simply would not turn out again – allowing Remain Britain, in the majority at the 2017 polls, to take its revenge on her. The demographic and ideological composition of the actual electorate, it is clear, can be just as important as the movement between parties.

Three: we paid too much attention to headline Voting Intention numbers. Labour at their nadir, right at the start of the 2017 election campaign, returned some figures in the mid-20s. If they had stayed at that level, they would have been wiped out as a serious Parliamentary force that aspired to form a government. One other reason why that did not happen has to do with why their overall numbers were sagging in the first place. All data, as we have already noted, is a construct – in some ways art and judgement, as well as science. And what the main voting intention numbers were showing was that as many Labour voters in the raw data were saying that they did not know how they would vote as were saying that they would move over to the Conservatives and the Liberal Democrats – 19 per cent, as opposed to 20 per cent reporting a preference to change their vote (YouGov/The Times, 2017). In the first stage of the campaign, as the Liberal Democrats in particular failed to fly, these voters returned home, pushing Labour back towards respectability and then into a zone where they could compete. We failed to look deeper than the headlines. That's something else experts can do.

It is far better to express all mixes of certainty and uncertainty, all projections in the near or medium term, as a range of probabilities or chances. The Bank of England does this in terms of its narrow-band or wider-band 'heat maps' of where inflation is likely to go, even without significant changes in policy or external shocks. More relevant to the electoral example we have just been discussing is the American elections expert and statistician Nate Silver, who has made a career out of mostly very successful electoral forecasting. Many people seemed shocked when Donald Trump was elected President of the United States in 2016, despite the fact that some of them had been feverishly refreshing Mr Silver's 538.com website every few minutes for weeks. The very final update of his electoral model gave Donald Trump just under a thirty per cent chance of winning the Presidency (FiveThirtyEight, 2016). Many people seemed to think that thirty actually meant zero. But you would not, I guarantee, put a revolver with two of its six chambers loaded to your head and pull the trigger – an act only marginally more likely to prove fatal (33 per cent, as opposed to 29 per cent). Mr Silver was proved right in his battle with those experts who gave Mr Trump a far, far smaller chance of winning: but the fact that his message often wasn't getting through anyway speaks to our difficulty in understanding what is being said to us in numbers, as much as it does to the failings of 'experts'. Even so, he could only flag the danger. He could not provide precise certainty as to its size.
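To spell out the arithmetic behind that comparison – a back-of-the-envelope illustration only, assuming a standard six-chamber revolver and taking FiveThirtyEight's final figure as roughly 29 per cent:

\[
P(\text{fatal pull}) = \frac{2}{6} \approx 0.33
\qquad \text{versus} \qquad
P(\text{Trump victory}) \approx 0.29
\]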

So, in this example as in others, we must accept strict limits as to what 'experts' can aspire to achieve. Politicians and civil servants have never been, and never will be, able to take on board everything that public policy experts – historians among them – might be shouting about from outside Whitehall and Westminster's rather dense walls. They would not have time to do so, even if they had the inclination to listen. And, in any case, the efficacy of specific recommendations is very questionable. As A.J.P. Taylor once famously observed, politicians learn only 'from the mistakes of the past how to make new ones' (Taylor, 1963). One recent example: British politicians shied away from intervening in Syria's terrible civil war, in part because of their awful experience in post-Saddam Iraq. It is at least arguable that they were wrong to do so. And so on.

Recent social and cultural changes also make Professors' views even less likely to stick. The Hogwartsian University, which disseminates information from eccentric dons, or moves esoteric knowledge around in the sense that it takes it from over here, in the academy, and plugs it in over there, in your heads, was always something of a myth. The History Professors created by David Baddiel and Rob Newman for their Mary Whitehouse Experience television programme (1990-92), beloved of those born into, shall we say, a certain generation, were funny because they were grotesque. But grotesques have to reference some form of truth to be funny at all, and there was enough sharp observation in Baddiel and Newman's caricature to drive home the point about portentous and pretentious Professors.

All that seems increasingly old-fashioned now. The time has passed – and it has well passed, properly passed, if not quite been declared done with – during which straight, white, staid, English, Oxbridge-educated, middle-aged men got to tell you what to do. That is unquestionably both a positive and a welcome reality. All we can do – if we can even do that – is advise and warn others as to their speed and line of march.

 

FiveThirtyEight, 2016. Who will win the Presidency? FiveThirtyEight, 8 November. https://projects.fivethirtyeight.com/2016-election-forecast/, accessed 14 June 2018.

O’Hara, G., 2016. The real question: might we have got it all wrong? Public Policy and the Past, 23 January. http://publicpolicypast.blogspot.com/2016/01/the-best-question-might-we-have-got-it.html, accessed 14 June 2018.

Taylor, A.J.P., 1963. Mistaken lessons from the past. The Listener, 6 June.

YouGov/The Times, 2017. YouGov/The Times Survey Results, 18-19 April. https://d25d2506sfb94s.cloudfront.net/cumulus_uploads/document/04xxn42p3e/TimesResults_170419_VI_Trackers_GE_W.pdf, accessed 14 June 2018.

NOTE: In the full lecture Professor O'Hara describes a number of historic policy failures, critiques the notion of expertise and concludes with a series of recommendations that he believes could assist in reducing the number of public policy disasters. These text versions differ in many respects from the lecture as actually given. The latter, with the full range of slides deployed, can be watched online here.

 
