The US CCSP has a draft document out on its website: Climate Models – An Assessment of Strengths and Limitations for User Applications. Snappy title. The whole document runs to 225 pages. Feedback is still open, so if you are qualified, you can still provide input on the draft.
Here is the executive summary, a more manageable 10 pages. For a non-scientist, it is a useful way to put into perspective the issues which still matter about the skill of global climate models in forecasting future climate trends.
In a nutshell, the GCMs are imperfect in known ways. They have improved a lot in the last 25 years in some respects, but not in others. The uncertainty caused by aerosol and cloud dynamics/physics is still a defining limitation. Regional downscaling and statistical modelling have strengths and weaknesses. Some of the errors are sufficiently consistent to suggest that there are still elements affecting climate which are not adequately represented in the models.
So? Three thoughts. First: in spite of all this, the models are probably good enough now to be confident that, in the ‘big picture’ and on climatically meaningful time-scales, they are pointing us in the right direction. Secondly: any products which agencies or organisations produce that derive conclusions from single-model runs should be viewed as speculative and highly conditional, and their merit evaluated accordingly. Thirdly: if somebody with a brain the size of a planet and a wallet to match could combine a major ensemble run of all the CMIP3 GCMs, several statistical models and a large tranche of closely-linked simplified models such as the ones used by ClimatePrediction.net, whilst at the same time successfully incorporating the ‘missing link’ in the current models, it might be possible to end up with a genuinely skilful forecast of the twenty-first century’s climate at a resolution useful for adaptation planning.
Be loved.
June 7, 2007 at 2:59 pm
Eli Rabett
I am not sure that ensembles gain you that much as the physics problems are pretty much the same for all the models. That being said, greenhouse gas forcing is SO large that even a one dimensional model captures most of the effect AND the models will be much better at forecasting differences (say between 330 and 520 ppm CO2 equiv) than absolute trajectories.
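To put a rough number on that, here is a minimal zero-dimensional energy-balance sketch (Python; the logarithmic forcing fit and the sensitivity value of 0.8 K per W/m² are illustrative textbook numbers, not output from any GCM):

    import math

    def co2_forcing(c_ppm, c0_ppm=280.0):
        """Approximate radiative forcing (W/m^2) from a CO2-equivalent
        concentration, using the logarithmic fit dF = 5.35 * ln(C/C0)."""
        return 5.35 * math.log(c_ppm / c0_ppm)

    def equilibrium_warming(c_ppm, sensitivity=0.8):
        """Equilibrium warming (K) for an assumed climate sensitivity
        parameter in K per (W/m^2); the 0.8 value is illustrative only."""
        return sensitivity * co2_forcing(c_ppm)

    # The difference between two scenarios is better constrained than either
    # absolute trajectory, which is the point being made above.
    dT_330 = equilibrium_warming(330.0)
    dT_520 = equilibrium_warming(520.0)
    print(f"330 ppm: {dT_330:.2f} K, 520 ppm: {dT_520:.2f} K, "
          f"difference: {dT_520 - dT_330:.2f} K")

Even this back-of-the-envelope calculation puts the 330-to-520 ppm difference at roughly 2 K of equilibrium warming with these illustrative numbers, which is why the sign and rough size of the forced response survive the model limitations discussed above.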
June 7, 2007 at 3:45 pm
fergusbrown
Thanks for visiting, uncle Eli. The report says that better results have been achieved from averaging ensemble runs than anything else, because of differences in sensitivity parameters and the like; that’s why I said it.
I feel that the main difficulty comes when local or regional government wants to make budget and planning decisions based on expected future climate patterns. Obviously, this will matter more in some places than in others. Thinking in terms of material such as the UKCIP report on species spread (MONARCH3), this is based on a BAU/high-sensitivity scenario/model, with resolution down to 5km (adjusted later to 50km); if the reliable part of the projection is a fag-packet calculation, why go to the bother of doing the research? OTOH, how much credibility do we give reports such as this, given the limited assumptions that they invariably make? This one, for example, is based on a model run which gives 2.5-3.5C average warming by 2057 (I should check that…) and a local warming where I live of 5C. When I’m 97, I might appreciate the extra heat, assuming they’ll let me out of the institute on hot days…
Regards.
June 8, 2007 at 8:24 pm
Michael Tobis
Greetings from ensemble central.
We are interested in doing millions of runs of a given GCM from the same initial condition. I mean this quite literally. This is not to explore the variance due to chaos. This is to examine the space spanned by underconstrained parameters.
Preliminary results here at U of Texas show this approach has great potential to improve the robustness of model predictions.
see e.g., http://tinyurl.com/2oft75
It isn’t even exactly true that all GCMs solve the same physics, though they are trying to model the same system. The extent to which they are useful outside the bounds of observed climate is a subtle question.
The fact is that the models can only include known phenomena. The harder we push the system, the less valuable our predictions and the bigger our risks.
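For readers wondering what ‘examining the space spanned by underconstrained parameters’ looks like in practice, here is a deliberately toy sketch (Python; the two-parameter stand-in ‘model’ and its ranges are invented for illustration and have nothing to do with the actual U of Texas experiments):

    import random

    def toy_model(feedback, ocean_uptake, forcing=3.7):
        """A stand-in for a GCM: maps two poorly constrained parameters to a
        single 'warming' number. Purely illustrative, not a real model."""
        return forcing * feedback / (1.0 + ocean_uptake)

    random.seed(0)
    ensemble = []
    for _ in range(100_000):
        # Sample each underconstrained parameter across its plausible range;
        # every member starts from the same initial condition.
        feedback = random.uniform(0.5, 1.2)      # K per (W/m^2)
        ocean_uptake = random.uniform(0.2, 0.8)  # dimensionless damping
        ensemble.append(toy_model(feedback, ocean_uptake))

    ensemble.sort()
    n = len(ensemble)
    print("median:", round(ensemble[n // 2], 2),
          "5-95% range:", round(ensemble[n // 20], 2),
          "to", round(ensemble[19 * n // 20], 2))

The interesting output is not any single run but the shape of the distribution, i.e. how much of the spread in projections is down to parameters we cannot pin down from observations.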
June 10, 2007 at 8:40 am
fergusbrown
I’ve spotted something upcoming on probabilistic forecasting elsewhere recently; I think it’s either the RMS or the CRU; I’ll look it up. There seems to be a movement towards combining statistical/probabilistic methods and ensemble model runs, with the strengths of one approach complementing the weaknesses of the other, to produce more ‘credible’ projections. (Won’t JA be pleased!)
What do you mean by ‘the extent to which they are useful outside the bounds of observed climate?’ This is confusing.
Another point I might pick up on soon, though you’re better qualified to look at it, is the apparent coming together of several papers and research work which all point to the idea that there is at least one major climate process which is currently missing from the models. (I’m not thinking about the aerosol or cloud issues, here). Do you know of any work going on to identify or isolate what might be missing? Could your model runs include a number with ‘extra’ variables with limited/unknown constraints, and would this help to find the ‘overlooked’ element of the climate which is not currently being modelled?
Regards,
June 10, 2007 at 8:45 am
fergusbrown
This is the reference I was thinking of: http://publishing.royalsoc.ac.uk/index.cfm?page=1302
It’s a special issue of the Philosophical Transactions, due out in August.
F.
June 10, 2007 at 11:14 pm
crandles
>”Greetings from ensemble central.
We are interested in doing millions of runs of a given GCM from the same initial condition. I mean this quite literally. This is not to explore the variance due to chaos. This is to examine the space spanned by underconstrained parameters.
Preliminary results here at U of Texas show this approach has great potential to improve the robustness of model predictions.”
Sounds very interesting. What is the benefit of this compared to climateprediction.net doing say 10,000 to 100,000 runs?
June 11, 2007 at 7:24 am
fergusbrown
Hello Chris, and welcome to the cave.
The process sounds similar, but I believe CP.net has been doing a fair bit of playing with the physics, as well. They are about due to produce some new material, so it will be interesting to see how the experiment has developed.
I’ll leave it to Michael to explain the details of the U of Texas work.
Regards,
June 11, 2007 at 1:40 pm
crandles
>”a fair bit of playing with the physics, as well”
Isn’t that just adjusting parameters, which sounds very similar to “This is to examine the space spanned by underconstrained parameters”?
CPDN have also been doing runs with the same parameters and different initial conditions, up to a few hundred different initial conditions.
So why use an emulator when you can have the real thing?
I also heard Myles Allen recently saying they could make use of a few thousand runs but above that they struggled to see any benefit.
June 11, 2007 at 3:10 pm
crandles
>”could combine a major ensemble run of all the CMIP3 GCMs”
>”better results have been achieved from averaging ensemble runs than anything else”
OTOH it seems it is possible for Myles Allen to argue for cutting 20ish national models to just two. See http://www.climateprediction.net/science/pubs/allen_NOC2007.pdf
(page 60 of 61)
Unsurprisingly he is saying large ensembles are crucial but a large number of different independent modelling efforts is not.
That sounds quite surprising to me – I thought the structural uncertainties were the most important source of model error and different modelling efforts would help capture this better than parameter variations of the same model.
(I wonder what James will make of that presentation.)
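One back-of-the-envelope way to read Myles Allen’s ‘a few thousand runs’ remark alongside the structural-uncertainty worry (the numbers below are invented, purely to show the scaling): the sampling error of an ensemble mean falls off as sigma divided by the square root of N, while any structural bias shared by every member of the ensemble does not fall at all.

    import math

    sigma_param = 1.0   # assumed spread (K) due to parameter uncertainty (invented)
    bias_struct = 0.3   # assumed structural bias (K) shared by the whole ensemble (invented)

    for n in (10, 100, 1_000, 10_000, 100_000):
        # Parameter-sampling error of the ensemble mean shrinks like sigma/sqrt(N);
        # the shared structural bias is untouched by adding more members.
        sampling_error = sigma_param / math.sqrt(n)
        print(f"N={n:>6}: sampling error {sampling_error:.3f} K "
              f"vs structural bias {bias_struct:.1f} K")

The crossover point in any real experiment depends on the actual spreads, but the shape of the argument is the same: beyond some ensemble size, only new, structurally different models reduce the remaining error.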
June 11, 2007 at 8:32 pm
fergusbrown
For all I know, they are doing exactly the same thing, but Michael would be able to confirm this; I suspect there are considerable nuances involved.
TTTT, any discussion of model technicalities is pushing my luck a bit; I tend to know (or doubt) what I learn as matters like this arise.
Regards,
June 12, 2007 at 8:00 pm
fergusbrown
Okay, Chris, tracking back to references from your site to this, it seems clear you know a fair bit about the CP.net project; so perhaps you’d like to expand a bit on your comment about structural errors relative to parameter variations. It would be helpful to more than just me, I hope.
Regards,
June 13, 2007 at 12:19 pm
crandles
Hmm. I do think I know a fair bit about CPDN but that doesn’t make me a climate scientist. I was just trying to provoke some discussion and my musings shouldn’t be taken to mean anything.
What I said above about “structural uncertainties were the most important source of model error” was something I gathered from the first CPDN open day
http://www.climateprediction.net/project/openday.php
I am unlikely to do better than quote Piani et al. (Piani_GRL.pdf):
“Expressing HadCM3 model segments about the HadCM3 control climate results in an unrealistically low representation of model-data discrepancy, meaning even with the best-fit combination of parameters, the observation-model discrepancy cannot (unsurprisingly) be attributed entirely to internal climate variability.”
Not sure whether that is going to help anyone.
Regards
Chris
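A toy version of the comparison Piani et al. are drawing (all numbers invented; the real paper works with multivariate climatologies, not one scalar): take the spread across segments of a control run as the yardstick for internal variability, and ask whether the best-fit model’s offset from observations fits inside it.

    import statistics

    # Invented illustrative numbers, e.g. a global-mean temperature in deg C.
    control_segments = [14.1, 13.9, 14.0, 14.2, 13.8, 14.0, 14.1, 13.9]
    best_fit_model = 14.6   # climate of the best-fit parameter combination
    observations = 14.0

    internal_sd = statistics.stdev(control_segments)  # internal variability yardstick
    discrepancy = abs(best_fit_model - observations)

    # If the discrepancy is several times the internal-variability spread, it
    # cannot plausibly be attributed to internal variability alone.
    print(f"discrepancy {discrepancy:.2f} C vs internal variability "
          f"(1 sd) {internal_sd:.2f} C, ratio {discrepancy / internal_sd:.1f}")

The quoted sentence is saying that even the best corner of parameter space leaves a residual of that kind.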
June 13, 2007 at 1:24 pm
fergusbrown
Thanks for coming back, Chris. I think we need a modeller to respond to this one…
🙂
June 15, 2007 at 12:27 am
Eli Rabett
If structural differences dominate, then the physics is pretty much the same. YMMV
June 15, 2007 at 10:49 am
crandles
I didn’t think ‘the extent to which they are useful outside the bounds of observed climate’ was confusing but I do find ‘If structural differences dominate, then the physics is pretty much the same.’ confusing.
I am wondering if we need to define some terms to help avoid some of the confusion, e.g.:
structural differences
physics of a model vs parameter
physics problems
model error
model-data discrepancy
June 15, 2007 at 7:51 pm
inel
Good enough to argue we need to take global collaborative action now to reduce emissions!
Let scientists have some peace to improve climate model reliability further while the rest of us pressure our governments and businesses to change more than the climate.
Business-as-usual needs to be remodelled.
A clear paper on the scientific basis of climate change prediction, which de-mystifies models and methodology for layreaders like me, is Alan Thorpe’s Climate Change Prediction, commissioned by the IoP in 2005. It is still valid, though model capabilities have improved significantly with the HiGEM project.
The US CCSP executive summary is too political in tone to be of any use. Stick to the IPCC or some other reputable source, otherwise you are playing right into the hands of sceptics with threads like this, imho 😉
Eli’s point is the most important at this stage:
June 16, 2007 at 12:13 am
fergusbrown
I think it is important to point out here that I’m not promoting what the CCSP says, or any other organisation; providing the links is for reference. Generally, references like this one are simply to draw attention to important issues, which I hope are of interest to people. If a sceptic chooses to question the validity of model output on the basis of this, or any other analysis, I’d be happy to see a discussion take place, as that way, certain common misconceptions can be addressed and responded to. As often as not, though, most sceptical objections seem to end up boiling down to a distrust of scientific output, based on ignorance of what is being done or how, rather than legitimate questions about such things as models or predictive skill. Of course, there are exceptions; but these are important, too.
I agree that some mechanism for exerting real pressure on government to actually reduce carbon emissions is necessary, but I am unsure what that mechanism might be.
Regards,
June 17, 2007 at 7:26 pm
maksimovich
The complexities of meteorology and its “generalized pupil”, climate science, are often misrepresented as being able to understand and measure (model) the changes of differentials in thermodynamic equilibrium from an initial state to a predictive state.
The misrepresentation that scientists can predict the changes that will initiate catastrophic climatic events is far from reality, as the predictive capabilities of existing systems cannot handle even the simplest differentials, i.e. changes in convective thermodynamics over a short period.
There are a number of constraints.
1) Prediction and determinism are incompatible: we cannot predict the long-term behaviour of complex systems, even if we know their precise mathematical description.
2) Reducing does not simplify: interaction is important and interaction means inseparability.
3) Simple linear causality does not apply to Chaos and Complexity.
Mathematics is a part of physics. Physics is an experimental science, a part of natural science. Mathematics is the part of physics where experiments are cheap.
This statement from one of the giants of 20th century mathematical physics shows the reality of the Diophantine approximations industry.
….At this point a special technique has been developed in mathematics. This technique, when applied to the real world, is sometimes useful, but can sometimes also lead to self-deception. This technique is called modelling. When constructing a model, the following idealisation is made: certain facts which are only known with a certain degree of probability or with a certain degree of accuracy, are considered to be “absolutely” correct and are accepted as “axioms”. The sense of this “absoluteness” lies precisely in the fact that we allow ourselves to use these “facts” according to the rules of formal logic, in the process declaring as “theorems” all that we can derive from them.
It is obvious that in any real-life activity it is impossible to wholly rely on such deductions. The reason is at least that the parameters of the studied phenomena are never known absolutely exactly and a small change in parameters (for example, the initial conditions of a process) can totally change the result. Say, for this reason a reliable long-term weather forecast is impossible and will remain impossible, no matter how much we develop computers and devices which record initial conditions.
In exactly the same way a small change in axioms (of which we cannot be completely sure) is capable, generally speaking, of leading to completely different conclusions than those that are obtained from theorems which have been deduced from the accepted axioms. The longer and fancier is the chain of deductions (“proofs”), the less reliable is the final result.
June 18, 2007 at 12:56 am
fergusbrown
Welcome to the cave, Maksimovitch.
If I understand your first sentence correctly, you are stating that it is a misrepresentation to claim that climate models (or NWP models) are able to predict changes in temperature: is this correct?
Your second sentence contains the common suggestion that climate models are predicting catastrophic changes, and that the scientists who run the models claim to know the mechanisms by which such catastrophe is supposed to occur. On what do you base this claim? I have read very few if any predictions of catastrophe in the scientific literature, and no assertions that the mechanism of certain future catastrophe is known; therefore, I would ask on what basis you make this reference.
The process described in the unattributed passage you quote (is a citation possible?) may bear a resemblance to climate modelling, but I am at a loss to understand why you might think that climate modellers have ‘absolute’ confidence in their ‘axioms’, or that they might believe that we should ‘wholly rely on such deductions’: I do not believe that either is the case.
You ‘talk a good game’, in the sense that your comments sound technical and mathematical, but what statements are you making? That climate models cannot predict temperature changes? That climate models are not mathematically reliable? That chaos theory dictates or defines the climate model in the same way that it operates in numerical weather prediction?
You may be making a good point here, but it is hard to tell, because there are many assumptions and slight misrepresentations or errors in what you have said, which confuse your message. If you would like to reply to a simple statement on the questions I ask, it may be helpful.
For the time being, without understanding your reasoning properly, I will work on the assumption that the meaning of your comment is that global climate models are of no use.
Regards,
June 22, 2007 at 2:18 pm
crandles
Sounds like you know a bit about the mathematics of chaos.
However, it seems to me that you are just assuming that climate is chaotic. You also seem to be going on about it because you think this may cast doubt on climate models and the whole idea of AGW. I don’t know why you would think that so much climate model science would be funded if it was so clear that it was all a waste of effort.
It is simple to test whether a model leads to chaotic weather. (Have you done this?) I do of course accept that this does not prove that our real climate is not chaotic.
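The sort of test meant here can be shown with the classic Lorenz 1963 toy system rather than a real GCM (a minimal sketch only: two runs from near-identical starting states diverge completely, i.e. chaotic ‘weather’, while their long-run statistics, the ‘climate’, come out almost the same):

    def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step of the Lorenz 1963 system (illustrative accuracy only)."""
        x, y, z = state
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    def run(state, n_steps=200_000):
        zs = []
        for _ in range(n_steps):
            state = lorenz_step(state)
            zs.append(state[2])
        return state, zs

    final_a, z_a = run((1.0, 1.0, 1.0))
    final_b, z_b = run((1.0, 1.0, 1.000001))   # a one-part-in-a-million nudge

    # 'Weather': the two trajectories end up nowhere near each other.
    print("final states:", [round(v, 1) for v in final_a], [round(v, 1) for v in final_b])
    # 'Climate': the long-run statistics barely notice the divergence.
    print("mean z:", round(sum(z_a) / len(z_a), 2), round(sum(z_b) / len(z_b), 2))

It proves nothing about the real atmosphere, of course, but it is the cleanest way to see why chaotic weather and a predictable climate are not a contradiction.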
There are over 20 national modelling efforts that have produced many different versions of their models and the vast majority show very little chaos in climate (but there is chaos in the weather) while the remainder show clear problems that make them nowhere near as realistic. Given this, what would you assess as the probability of there being chaos in the real climate system?
It certainly isn’t my assessment of climate modellers that they have absolute faith in their axioms either. I have always found they make absolutely clear that they know their models are far from perfect. That doesn’t necessarily make them useful, and the modellers also seem clear that the models are much better for some things than others.
June 25, 2007 at 1:29 pm
fergusbrown
I suspect the problem with model output lies less with the scientists who operate them than with the uses that their output is put to. It is one thing to be able to say that one has confidence that a forward-projected climate trend is likely to a certain degree under given circumstances; it is another thing to say, as for example the recent UKCIP MONARCH document seems to, that the future distribution of species needs to be planned for given that the model they used shows that specific regional climate conditions are likely to occur in the next fifty years in this place or that.
Further to this, there is the more contentious issue that long-term adaptive planning depends on certain expectations of climate conditions which, these days, are modelled in order to provide the best possible information on, say, whether a huge hydro project is going to be viable, necessary or damaging; under these circumstances, the ability of models to accurately project specific regional conditions with a degree of skill becomes critical. I am sure that project managers know how far to ‘trust’ the skill of GCMs, insofar as it affects their decision-making, but there is little evidence that many governments or their agencies pay any attention to the limitations of likelihood or risk (in climate change terms). Does this make sense?
Regards,
July 1, 2007 at 6:50 am
maksimovich
Chaos and Complexity theory studies nonlinear processes: Chaos explores how complexly interwoven patterns of behaviour can emerge out of relatively simply-to-describe nonlinear dynamics, while Complexity tries to understand how relatively simply-to-describe patterns can emerge out of complexly interwoven dynamics.
With their pioneering works on local stability (and instability) of dynamical systems in the last decade of the 19th century, the Russian mathematicians Andrey Lyapunov and Sophia Kovalevskaya are viewed as the founders of the single most creative and prolific strand of thought in the analysis of dynamic discontinuities and non-linearities up to the present day, the Russian School. Significant successors to Lyapunov and Kovalevskaya include A. Andronov and L. Pontryagin (1937), who crucially advanced the theory of structural stability; A. Kolmogorov (1941), who developed the foundations of the mathematical theory of turbulence (chaotic fluid dynamics); and V. Arnold (1968), with his most complete classification of mathematical singularities (catastrophes).
Chaos and Complexity emerge from non-linear mathematics. Theoretical and applied development of mathematical cybernetics and computer science made it possible for many mathematicians and physicists to step out of the framework of linearity, continuity and smoothness, and to approach problems belonging to the world of non-linearity, discontinuity and transformations.
In 1962 Kolmogorov and Obukhov mathematically demonstrated the possibility of intermittency in chaotic fluid dynamics and the emergence of patterns of order out of a turbulent flow.
The famous KAM theorem of Kolmogorov-Arnold-Moser threw light on the unresolved 3-body problem of Laplacean-Newtonian celestial mechanics – a problem first approached by the French mathematician Henri Poincaré (1890) who, facing its insurmountable computational difficulty, saw the possibility of the existence of a non-wandering (dynamically stable) solution of extreme complexity, and thus first predicted the existence of an attractor in chaotic dynamics. According to the KAM theorem, the trajectories studied in classical mechanics are neither completely regular nor completely irregular, but they depend very sensitively on the chosen initial states: tiny fluctuations can cause chaotic development.
As stated previously, prediction and determinism are incompatible: we cannot predict the long-term behaviour of complex systems, even if we know their precise mathematical description. The reason why we cannot say much about a complex dynamic system is because of its enormous sensitivity: even an infinitely small change in the starting conditions of a complex process can result in drastically different future developments.
Indeed the limitations of the models’ predictive capability is the inability of the computer to construct intuitive “analysis”. Where say the total intuitive disregard of the predictive capabilities of the computer by one man is with 100% certainty I can say you are alive to talk about this today!
http://en.wikipedia.org/wiki/Stanislav_Petrov
July 2, 2007 at 12:33 pm
fergusbrown
Thanks for the potted history, maksimovich: I am sure some readers will find it enlightening. If I may, I will concentrate on your final two paragraphs, as I think this is where the essence of your point lies.
What do you mean by ‘prediction and determinism are incompatible’? Simply put, the possibility of prediction is dependent on the assumption of a principle of cause and effect. That an action or force has an effect is uncontroversial, surely (assuming you accept the null case as a possibility).
Therefore I must guess at what you are getting at. ‘We cannot predict long-term behaviour of chaotic systems.’ I’m not going to disagree with this, but I would ask: what is its relevance? Even if it is the case that weather (not climate) is ‘chaotic’, in the sense that it is strictly non-predictable in an absolute sense, it is also the case that weather phenomena are broadly predictable. For example, if a combination of humidity, temperature, dew point and pressure exists, we can reasonably predict the likelihood of rain, especially given that, in all weather predictions, there are a large number of pre-existing conditions which have already been measured and which exist in a geographical relationship to the place for which the prediction is being made. We are also well placed to anticipate the likelihood of the development of a TD, given what is known about temperature, pressure, wind direction and shear. That we cannot predict precisely how such weather systems will move and develop is a function of chaos, but this does not mean that we are unable to pass a warning on to potential victims that a TC is heading their way, or that flooding is likely.
Further to this, I should point out that there are important differences in the way climate models and weather models work, which affects the way in which the output is evaluated in terms of ‘prediction’. Therefore, it is entirely reasonable to accept that, if all climate models agree on a given signal for future temperature, and such a signal has been confirmed over a twenty-or-so-year period, then the likelihood of future warming is very high indeed.
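To make the ‘all the models agree on the signal’ point slightly more concrete, here is the kind of trivial robustness check often applied to multi-model output (the trend numbers below are invented, not real CMIP3 values):

    # Invented per-model warming trends (K per decade) standing in for a
    # multi-model ensemble; not actual CMIP3 output.
    model_trends = [0.18, 0.22, 0.15, 0.25, 0.20, 0.17, 0.28, 0.21, 0.19, 0.24]

    n = len(model_trends)
    agree_on_warming = sum(1 for t in model_trends if t > 0)
    mean_trend = sum(model_trends) / n
    spread = max(model_trends) - min(model_trends)

    # Unanimity on the sign of the change is a far more robust statement than
    # any one model's magnitude, which is the distinction being drawn above.
    print(f"{agree_on_warming}/{n} models project warming; "
          f"ensemble mean {mean_trend:.2f} K/decade, spread {spread:.2f} K/decade")
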
I am sorry to say that the point of your last paragraph has been lost in translation: I simply don’t understand what you mean. Working on the assumption that you are still claiming that models cannot work because they are trying to model something which is inherently antipathetic to prediction, my response is that you do not appear to understand how climate models operate or how to interpret their output. No scientist is claiming to know precisely what the climate will be in fifty years (this is a ‘straw man’), but only a fool would ignore the work of thousands of specialists and decades of historical data on the grounds that they don’t know what they are doing; in this, I agree with crandles (and thanks for contributing, Chris).
Would it be helpful to ask if you can provide an example of recent GCM output which you believe to be false, or ill-founded? This might allow us to talk about specific cases rather than mathematical principles of disputable relevance.
Regards,