First and foremost, thank you to the people who have commented, both here and elsewhere, on the poll of scientists’ opinions which we conducted and which has now been put up on Roger Pielke Sr.’s weblog.

Thank you to our fellow climate bloggers, who have been supportive and have engaged with the poll themselves, for their comments and feedback.

Thank you to the people who have sent emails, and those who have given permission for extracts to be used on the website.

Now, to some points arising from all the comment and discussion, and to some clarifications and questions.

Does the opinion poll, as we conducted it, say, or hint at, anything important?

Does it matter what (climate) scientists think about the IPCC WG1?

Does the reaction to the paper in itself raise important matters?

To the validity/value/significance (hello to Steve Bloom) of the poll itself:

As I have explained elsewhere, in the absence of a known community, a properly conducted sample poll is not generally an option. Where there is an identifiable specialism, however, it is generally considered acceptable to sample a subset of the community, so long as it can be seen to be in some way representative of the larger (undefined) community.

I believe that, in the methodology adopted, and in particular in the efforts made to eliminate sampling errors in advance by triple-checking the suitability of the people sent the questionnaire, we successfully met the criteria for an acceptable subset of the community of scientists engaged in disciplines closely related to climate or climate science.

I further believe that, in sending questions to people in more than fifty countries, and in selecting second and third authors/presenters as well as first authors/presenters, we have met an acceptable standard of inclusiveness across the range of age, nationality and discipline, whilst retaining that original criterion.

The respects in which our poll fails to meet rigorous standards of statistical significance (about which we are open, not as a ‘get-out-of-jail’ card, but as recognition of the limitations of our work), namely the number of respondents and the risk of self-selection bias, were beyond our capacity to control. We could have chosen to limit our sample, or to select a more specific subsample, but the same problem, of identifying the relationship between the subset of respondents and the larger community, would still have existed. As with all polls, the number of respondents is in part defined by the time, resources and finance available to the pollsters. It should be understood that this exercise was conducted with no funding of any kind, in our own time, with only the resources we could obtain via the internet.
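For anyone who wants a concrete feel for what the number of respondents can and cannot buy, here is a minimal sketch of how the worst-case 95% margin of error shrinks as respondent numbers grow. It assumes a simple random sample, which our poll cannot claim to be, and the sample sizes in it are purely illustrative; it says nothing, of course, about self-selection bias.

```python
# A rough illustration only: the worst-case 95% margin of error for a single
# answer proportion, under the (unrealistic for us) assumption of a simple
# random sample. The respondent counts below are hypothetical.
from math import sqrt

Z_95 = 1.96  # normal-approximation critical value for 95% confidence

def worst_case_margin(n, p=0.5):
    """Margin of error for an estimated proportion p from n respondents."""
    return Z_95 * sqrt(p * (1 - p) / n)

for n in (50, 100, 200, 500):
    print(f"n={n:4d}  margin of error ≈ ±{worst_case_margin(n):.1%}")
```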

In other words, we did the best we could with what we had, and worked hard to get the broadest mix of relevant subjects, in the most open and honest way possible. A lot of comment has focussed on the ‘self-selection bias’ issue, on which I will make two points: first, one group of people has claimed that our poll may well be biased in favour of ‘skepticism’, while another has claimed that it is biased in favour of ‘alarmism’. This might suggest to the observant mind that we may actually have found a decent middle ground. Second, a bias in favour of ‘the middle response’ is as likely as a bias towards either extreme. The only way we could ever find out whether this preliminary poll was in fact biased is to run a validation test, or a better, larger poll, and compare the results.
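To make that last point a little more concrete, a validation exercise would amount to comparing the distribution of answers in this poll with that from a second, better-sampled poll. A minimal sketch of such a comparison follows; the counts, and the use of a chi-square test, are my illustration, not part of the work reported here.

```python
# Hypothetical validation check: compare the distribution of answers in this
# preliminary poll with that of a later, larger poll using a chi-square test.
# All counts below are invented purely to show the mechanics.
from scipy.stats import chi2_contingency

poll_a_counts = [2, 5, 18, 60, 40, 12, 3]    # this poll (hypothetical counts)
poll_b_counts = [5, 12, 40, 150, 95, 30, 8]  # larger follow-up (hypothetical)

chi2, p_value, dof, _ = chi2_contingency([poll_a_counts, poll_b_counts])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A small p-value would indicate detectably different distributions of opinion;
# a large one would not prove the absence of bias, but would at least be
# consistent with the preliminary poll having been representative.
```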

I would argue that, the (well-justified) criticisms aside, if you are willing to look at the results as they stand, this work does provide potentially important suggestions/guidance/information. It tells me, anyway, that I will probably struggle to find a genuine ‘denialist’ in the community of people involved in this area. Even a couple of very well-known ‘skeptics’ were polled and responded, and they did not opt for out-and-out denial. It suggests that one ‘end’ of the so-called ‘frame of discourse’ on climate science is to all intents and purposes defunct, and can be eliminated from serious discussion on climate change.

By contrast, it also tantalisingly suggests that there really is a reasonably broad range of scientific opinion on the WG1, but that, by and large, the position represented in the AR4 WG1 report is the ‘middle ground’, the majority view, the default position. In this sense, if one wished to talk about a ‘consensus’, this suggests that the IPCC represents the ‘consensus position’.

But because the poll also hints at a range of opinion outside the ‘consensus’ view, it suggests that scientists’ opinions in this respect are important to know about and to understand. I understand the political importance of presenting a clear and strong message about climate change to politicians and the general public, and I advocate the same myself, in my own words and deeds. However, that there (probably) exists a range of disagreement about the science as presented in the WG1, which could well be broader than is implied by the summaries and press releases of the official bodies, is not a trivial matter. It may be expedient to sweep such issues under the carpet, but is it honest? Is it right, if one is to be judged as a scientist by scientific criteria, to exclude data which do not conform to the required results of a test or hypothesis?

What intrigues me is what has followed as a consequence of the original poll. One of the most striking things about the comments of scientists and non-scientists alike is that both groups are equally prone to prejudice and predisposition, and both tend to view material as ‘on my side’ or ‘not on my side’, with very little equanimity or balance in evidence. Also intriguing is that our efforts appear to have produced both positive and negative reactions from people of all shades of opinion. To me, this demonstrates that the meaning of this, or any other piece of writing, is constructed almost entirely in the minds of the readers, almost irrespective of its actual content. In other words, what people are getting out of it is what they put into it.

No doubt there will be more to say on the subject, but I would welcome any further comments. As it seems reasonably clear that many people agree the poll and the paper could have been done better, it would be nice if we could focus instead on the results, rather than on the means of getting them. You will have to accept my word that we did our best to honestly gauge the honest feelings of honest scientists. I still think we succeeded, however ‘better’ or ‘differently’ things might have been. So tell me what you think about the results…

Oh, and if I haven’t already made it clear, we really do appreciate your involvement.