On Saturday I passed on Inel’s links to Stainforth et al., from Philosophical Transactions, about models and their uses, and on Sunday, Stoat gave us a link to a paper by James McWilliams on ‘irreducible imprecision’. This morning, RealClimate has a post up on models, so I suppose it’s hard to avoid getting to grips with what’s been on my mind over the weekend.

Each of these papers and articles has a different focus and purpose, but they all address the larger questions relating to what climate models produce and what this output is worth.

Commercially, there is now a business in climate consultancy which makes use of the reports and scoping studies from model runs to advise clients on best present policy for optimum future success. Many areas of planning, not least housing development, have to make decisions based on expectations of future conditions (at least, they should, though it isn’t always apparent in the final decisions). Insurance companies and underwriters also need useful tools for risk analysis purposes. So modelling has real-world uses, beyond policy-making, in addition to its essential scientific applications.

To cut to the chase, many members of the public are dubious about the predictions of future climate change which are often cited in the media and referred to frequently by us bloggers. Others uncritically take on board the media headlines, which frequently seize on the most melodramatic part of a model run and present it as if it were set in stone. Yet many organisations (commercial and non-commercial) make use of model output in ways which suggest that they expect the models to be providing them with accurate representations of likely future conditions.

Which leads us to the question of what we think models do, and what we expect from them.

The first thing to clear up links to McWilliams: models (he discusses AOS models in general, though the public discussion is normally about GCMs) seek to imitate the real world, to a greater or lesser degree. That they cannot exactly reproduce reality is almost trivially true (one is reminded of the scene in a Michael Palin ‘Ripping Yarn’ where someone produces a ‘model’ of the Bismarck on a scale of one-to-one). McWilliams points to the important idea that there are limits to the precision with which any model can even approximate reality; virtual reality and computer games offer a familiar example of this, perhaps. So all climate models must stay, resolutely, at some level ‘less’ than the reality they emulate.
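A toy illustration may help here (my own sketch, not an example from McWilliams’ paper; the Lorenz-63 system, with its conventional parameter values, is the standard classroom stand-in for chaotic flow). Two starting states differing by one part in a billion soon diverge completely, so there is a hard limit on how precisely any such model can track one particular trajectory, however carefully it is built:

```python
# Toy illustration: two near-identical starting states in the Lorenz-63
# system (a drastically simplified convection model) diverge completely,
# so point-by-point 'prediction' has a hard precision limit.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # perturb by one part in a billion

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"step {step:4d}: separation = {np.linalg.norm(a - b):.3e}")
```

Note, though, that both trajectories stay on the same attractor: the statistics remain well behaved even when the individual paths cannot be tracked, which is roughly the sense in which climate (statistics) can be more predictable than weather (trajectories).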

This leads us in turn to one of the common complaints about the reliability of climate models: ‘I can’t trust a model which does not have all of the possible variables and doesn’t include all the available information’. Firstly, no such model can exist, for the reasons implied above; secondly, given that models are imitations of the real, much, if not most, of what is excluded has no significant effect on what is being measured, estimated or projected.

More technically, a number of scientists frequently call for models to be improved. The argument is that, on the scales where policy is relevant (largely regional), models lack important mechanisms which might skew results, and show a degree of uncertainty which makes model-derived policy-making irrational. On the other hand, many climate modellers are at pains to point out the areas in which models are reliable, and thus valuable. The desire to improve models is a given; the extent to which current models show predictive skill, and on what scales, is a trickier question.

So, what do we expect from climate models? It depends on who you are, and herein lies a problem. There is a big difference between what science expects of modelling and what the public, and decision-makers, want of it. Perhaps a part of the problem also lies in the different ways in which the two domains use the word ‘confidence’, but that is probably a whole separate post.

For most scientists it is sufficient that models produce results consistent with the physics and mathematics of a problem, and provide a set of results for future events which are internally consistent, within certain margins of error (ah, and here we get to an interesting question about Bayes…). Outside science, what is wanted is clear and decisive anticipation of future conditions: something which relates to our current situation and indicates which present actions or decisions are ‘best’ (‘best’ being a function of intention, so its definition must depend on the context).
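Since Bayes got his parenthetical mention, here is a minimal sketch of what ‘confidence’ means in that idiom (the numbers are entirely made up by me, not drawn from any actual analysis): a prior belief over a parameter, updated by one noisy observation, yields a credible range rather than a single certain answer.

```python
# A minimal Bayesian update on a toy 'sensitivity' parameter, with
# made-up numbers throughout. The point: the output is a distribution
# (a credible range), not a single certain value.
import numpy as np

s = np.linspace(0.5, 8.0, 751)          # candidate parameter values
prior = np.ones_like(s)                 # flat prior over the range
prior /= prior.sum()

obs, obs_err = 3.0, 1.0                 # one noisy 'observation'
likelihood = np.exp(-0.5 * ((s - obs) / obs_err) ** 2)

posterior = prior * likelihood          # Bayes' rule...
posterior /= posterior.sum()            # ...normalised

cdf = np.cumsum(posterior)
low = s[np.searchsorted(cdf, 0.05)]
high = s[np.searchsorted(cdf, 0.95)]
print(f"90% credible interval: {low:.2f} to {high:.2f}")
```

A scientist reads that interval as the honest answer; a decision-maker, often, wants the one number it refuses to supply.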

The Stainforth et al. paper comes in here. It points out (to me, rightly) the conflict of interest which ensues from presenting model output (especially from single or low-repeat runs) in a confident tone: the uncertainties are expressed in a manner which fulfils the scientific requirement for honesty, yet overlooked in the choice of verb-forms used in the subsequent text. But this is unavoidable for some of these publications; they are commercial products aimed at a non-scientific clientele and must come across as authoritative, otherwise the client will reject them as too imprecise or ‘woolly’ for her/his purposes. They attempt to answer the second form of expectation in language suited to their status.

At the same time, the paper also points, as does McWilliams, to certain limitations in current model-assessment procedures on the scientific side of modelling. Bluntly put, none of the existing methods for inferring a most likely future from ensemble runs or intercomparisons is very good. McWilliams offers some reasons why. First, most of the current AOS models are interconnected in their construction: they contain similar assumptions and parameter definitions, so the range of possible outcomes is likely narrowed by the models’ very similarities. Second, the intercomparisons are not sufficiently systematic: comparison is based on opportunistic differences rather than an organised, coherent strategy of defining, ranging and cross-analysing variables.
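To make the first point concrete, here is a deliberately crude sketch (my own construction, not McWilliams’ example): an ‘ensemble’ built by perturbing one parameter of a single zero-dimensional energy-balance model. The spread it produces reflects only the perturbation we chose to apply; whatever the shared structure leaves out is invisible to it.

```python
# A perturbed-parameter 'ensemble' from one shared model structure:
# equilibrium warming of a zero-dimensional energy-balance model,
# T_eq = F / lam. All numbers are illustrative, not calibrated.
import numpy as np

rng = np.random.default_rng(0)
forcing = 3.7                              # W/m^2, roughly 2xCO2
lam = rng.normal(1.2, 0.2, size=20)        # feedback parameter, W/m^2/K
lam = lam[lam > 0.5]                       # keep physically sane members

warming = forcing / lam
print(f"{len(warming)}-member ensemble: "
      f"{warming.min():.2f} to {warming.max():.2f} K "
      f"(mean {warming.mean():.2f} K)")
# The spread here measures only our chosen perturbation of 'lam'.
# Twenty members sharing one equation cannot reveal what that
# equation itself leaves out; that is the structural point.
```

Real multi-model ensembles are vastly more sophisticated, but the logic of the worry is the same: members built from similar assumptions cannot span uncertainties the assumptions exclude.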

How, then, should we understand the conclusions about likely future climate under given scenarios which form the bedrock of the IPCC Assessment Reports? Why is it that the majority of scientists broadly agree with the scientific basis for AGW, and broadly accept the likelihood of the changes projected by the reports, even though no strong method exists for assessing the climate models?

Rather than reading the summaries of model experiments or scoping studies as ‘predictions’, or as some kind of equation in which the terms and the solution are absolute, these reports need to be understood as texts, or narratives. When a novel seeks to create a suspension of disbelief in its reader, it uses realism as a core tool. The degree to which a writer can offer a reader familiar, known or understood real conditions influences the degree of belief that the reader will permit the story. In other words, the ‘world’ of the narrative must conform to understood laws of nature and physics, unless a given explanation is offered. It must also provide recognisable responses to exceptional situations by believably ‘human’ agents. I don’t think I need to labour the comparison.

In considering the plausibility of climate projections as narratives, I was reminded of the academic debates about the nature and status of History as a subject, in which the ‘new historians’ argued that all versions or interpretations of the past were no more than narratives, or stories, which depended on the credibility of the connections between objects, records and known events to establish an explanation for what actually happened and why. At an extreme end of this view of history we can also consider archaeology. In this discipline, small amounts of physical evidence are used to construct substantial narratives of circumstances and lives which have little other source of confirmation beyond the objects themselves. Yet archaeology is very successful at generating these narratives, because each new piece of evidence is considered in the existing context of all other known evidence in the entire field. Thus, there is a meta-narrative of archaeological understanding which operates beneath the level of the immediate object, within which inference becomes both possible and more or less plausible.

Should we look at climate model projections as narratives, then, in the context of a larger meta-narrative of all climate science research? If we do so, the willingness of scientists to accept the probability of a future resembling the one laid out in the IPCC reports ceases to be problematic: because they can see the evidence in the context of the meta-narrative, they can understand its credibility at a level which is closed to the general public. Similarly, the difference in public and scientific expectations of models can be resolved, in part. If the public can come to accept that model projections are ‘stories’, which can be judged on their credibility, rather than absolute predictions of certain futures (which no scientist has ever claimed), then the question of trust or confidence becomes an issue of interpretation, not of (the impossible) perfect replication of the world.

Enough. Be good.