Friday, March 10, 2017

Selection of climate model survivors isn't the scientific method

I was surprised that several TRF readers (Marthe, Abbyyorker, John Moore, and perhaps others) don't understand why the methodology of keeping "ensembles of inequivalent models" that have survived some tests isn't science, i.e. why Scott Adams is right in his recommendation #1 to climate fearmongers.

On Monday, Scott Adams actually dedicated a special blog post exactly to this problem. He wrote that when some media promote an old paper from the 1980s that apparently made rather accurate predictions of the climate for the following decades, it doesn't mean anything, because it was one paper among many and we're not told how many similar models made predictions that turned out to be wrong. So everything he knows is compatible with the assumption that the successful model was simply right by chance – it was cherry-picked and there doesn't have to be any reason to think that its authors knew something that others didn't. They were just lucky. Adams mentioned analogies dealing with financial scams. If you send thousands of e-mails with various investment recommendations, it's almost unavoidable that one of them will be successful thrice in a row. If you later cherry-pick this successful recommendation and sell it as a proof of your prophetic skills, then you are a crook and your clients are gullible morons.
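Just to make the arithmetic of the scam explicit, here is a minimal sketch. The numbers (8,000 letters, three rounds) are invented for illustration and are not taken from Adams; the only point is that a crowd of random guessers always produces some "prophets":

```python
# A minimal sketch of the "investment scam" selection effect.
# The numbers of senders and rounds are purely hypothetical.
import random

random.seed(0)

n_senders = 8000   # hypothetical number of mailed "recommendations"
n_rounds = 3       # three consecutive up/down calls of the market

def lucky_every_time():
    """True if a sender who guesses at random is right in every round."""
    return all(random.random() < 0.5 for _ in range(n_rounds))

winners = sum(lucky_every_time() for _ in range(n_senders))
print(f"{winners} of {n_senders} random senders were right {n_rounds} times in a row")
# Expectation: n_senders / 2**n_rounds, i.e. about 1,000 "prophets" with zero skill.
```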



Some people apparently really believe that it's an example of OK science when the climate modelers work with an ensemble of mutually inequivalent models, occasionally eliminate some of them, and implicitly if not explicitly claim that all the "survivors" in their ensemble are simultaneously or collectively right. Well, different theories just cannot be simultaneously right, and this process of mindless selection of "packages that seem to work well" just isn't science. Even when we're addressing a physical system in which many factors matter at the same moment, we must still try to answer the individual questions separately.

I embedded the Feynman monologue above because he says that many activities try to pretend to be scientific but they're pseudosciences. These pseudosciences – the social sciences are examples – haven't gotten anywhere (yet). They haven't found any laws. This is exactly true for the "model ensemble enterprise" in climate science, too. They're not proposing and separately testing any actual laws or statements. People who are doing these things just play with some complex mashed potatoes, and when they have a sufficient number of moving parts, it's unavoidable that for some choices of these moving parts, a good enough agreement – within any pre-agreed error margins – will be achieved for some of them.




You know, the point is that the qualitative features of these theories or models are being "assumed" and they're not actually being tested or falsified. These pseudoscientists are just constructing computer-aided "stories" that make the initial assumptions look plausible. But they're not actually producing any evidence that the assumptions are intrinsically correct – i.e. capable of making reliable predictions of the future. They are just adjusting the other moving parts so that the whole package passes some tests.

This is not science. One doesn't really learn whether CO2 may be neglected in the physics of the climate. It's being assumed that it cannot be neglected. Lots of other, more detailed qualitative things are being assumed – so whether they are right or wrong isn't being scientifically addressed, either. And as I said and will say again, many models in the "ensemble of survivors" actually make completely different assumptions about what matters etc. But these models still have lots of movable and adjustable parts and those are enough to make the package "look" good enough. It's classic data-fitting. Zero scientific progress can be achieved in this way.




Let me mention Adams' example, the one-week-old article in The Independent. Two guys, Stouffer and Manabe, wrote some articles in the 1980s after they programmed some climate models.

One of the graphs – and their papers contained many, many graphs indeed – looks reasonably similar to the map of the regional changes of the temperatures in the recent 30 years. Well, the agreement isn't too impressive. You may say that "the Arctic and, to a lesser extent, the Northern Hemisphere landmasses will warm at a rate above 1 °C per century". And this statement was true for their prediction as well as for the recent map that contains the actual past data.

None of the finer details is really too precise – the precise boundaries of the warm regions, the numerical sizes of the warming – but it looks good enough to them. Also, the periods for which the predictions were verified don't really agree. Their paper in the 1980s was comparing the temperature between (the average of the period) 1961–1990 and (the average of the period) 1991–2015, while a different comparison was visualized on the recent map. Equally importantly, a part of the trend between 1961–1990 and 1991–2015 was already known in the late 1980s. You know, the comparison of the two periods is similar to a comparison of 1975 and 2003, the middle points of the two intervals. And the paper was written in the late 1980s. So they already knew the data from the first part of the interval. They could have extrapolated the trend from that decade and they would have gotten rather accurate predictions, too. To some extent, that's exactly what they did.

There are various reasons why this prediction wasn't too impressive but Adams' complaint is the most important one, I think. There are just lots of models and papers about them and each paper contains a large number of graphs. It's statistically unavoidable that you find some that will work. But the percentage of the successful ones may be very low. We've seen that over 95% of the climate models overestimated the warming in the recent 20 years, for example. Collectively, the models don't do much better than guesswork, and if you pick some of the guesses that were luckily close to the truth, you have no fundamental reason to think that this "lucky guy" will be lucky again – that it is intrinsically getting something right. Someone has to be lucky even among many folks or models that possess no skill.
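If you want to see how little the cherry-picked agreement proves, here is a toy sketch. Everything in it is made up and there is no climate physics in it whatsoever – it only shows the statistics of picking the best-fitting member of a large ensemble after the fact:

```python
# A toy sketch (invented numbers, no physics) of "lucky vs. skillful":
# the ensemble member that best matches the already known record is selected,
# and its later "predictions" are no better than those of any other member.
import random

random.seed(1)

past = [random.gauss(0.0, 1.0) for _ in range(20)]     # the "observed" record
future = [random.gauss(0.0, 1.0) for _ in range(20)]   # what comes afterwards

def rmse(a, b):
    """Root-mean-square difference between two equally long sequences."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

# Each fake "model" is just noise: 40 numbers with no relation to the truth.
ensemble = [[random.gauss(0.0, 1.0) for _ in range(40)] for _ in range(1000)]

survivor = min(ensemble, key=lambda m: rmse(m[:20], past))   # cherry-picked
typical = ensemble[0]                                        # unselected member

print("survivor, fit to the known past:", round(rmse(survivor[:20], past), 2))
print("typical,  fit to the known past:", round(rmse(typical[:20], past), 2))
print("survivor, error on the future  :", round(rmse(survivor[20:], future), 2))
print("typical,  error on the future  :", round(rmse(typical[20:], future), 2))
# The survivor beats the ensemble on the data used to select it and is exactly
# as clueless as everyone else about the data that weren't used.
```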

What science needs is to formulate some laws and verify that the successful laws can make good predictions repeatedly and, if possible, in inequivalent situations. Only when you do so do you have reasons to think that you have found something – the laws – that is more right than a random guess or a random speculation.

Look for "global climate model" (including the quotation marks) on Google Scholar. You will find 37,700 papers. A big fraction of them really does contain some prediction, often many predictions that aren't quite the same. So it's obvious that some of these graphs will be close enough to the observed changes of the temperatures. Scott Adams' comparison of this methodology to the financial scams is absolutely justified.

Some models do a good job, some models do a bad job. You may always choose the better ones once you know the actual data that should have been predicted. But you may still be picking just the "lucky ones", not the "smart ones". This is an absolutely fundamental problem that simply has to be addressed.

The scientific method addresses it because it does something that these "climate modelers" don't: it actually formulates well-defined hypotheses, theories, or laws and it tests their predictions separately from other hypotheses. The formulation and verification of particular statements is really a key part of the scientific method – look at the first chart on the Wikipedia page.

The "climate modelers" simply aren't doing that. They are not formulating any well-defined hypotheses or laws – so they are not testing these laws, especially not in a way that would fairly treat different competing hypotheses. So what they're doing simply isn't science. It fails to be science not because of some controversial, newly created requirements what science should be doing. It's not science simply because there are no hypotheses and laws that are formulated and tested. And the formulation and testing of hypotheses are surely defining procedures of the scientific method.

A climate model tries to look like a very complex system and if you wanted such a program to be a part of science, you would need to independently evaluate each of its numerous "moving parts" and its Yes/No and other assumptions. That's not being done, so it's not science. These people are just playing semi-realistic computer games. Playing with computers isn't science; it is closer to a disease that interferes with your work.

Their failure to propose or test any laws isn't the only problem. A bigger problem is that they don't even seem to care that these essential building blocks of science are missing. What's my evidence that they don't seem to care? Well, the very fact that they are simultaneously using inequivalent models – and they never sound worried that they don't understand the difference between these models – proves that they are actually not interested in any laws, in any correct statement one can make about the physical system (the climate, in this case). Some computer game that looks similar to the reality according to their subjective viewpoint is enough to satisfy them.

It isn't enough to satisfy a scientist, however. In proper science, the different theories are bloodily competing with each other. They are in no way "cooperating" to make some ideological goals look more justified. Just like the "consensus" between the people (as an argument) doesn't belong to science, "ensembles of theories" that are taken seriously at the same moment don't belong to science, either.

None of these problems can be solved by a different choice of the error margins and other parameters that are being used in the management of the "model ensembles". Whatever the error margins are, as long as they are finite, a fraction of the models will unavoidably survive. The more moving parts – parameters that may be adjusted – the models have, the more guaranteed it is that some of the models will pass the tests by chance.
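Here is a schematic sketch of that last point. Again, everything in it is invented – the "data" are random numbers and the "model" contains no physics at all – but it shows how the best achievable in-sample fit improves with the number of knobs you are allowed to turn, so that a fixed error margin gets crossed by chance:

```python
# A schematic sketch of the "moving parts" point: more adjustable knobs means
# more settings to try, so some setting slips under a fixed error margin by
# chance even though the model has no physical content. All numbers are made up.
import random

random.seed(2)

observed = [random.gauss(0.0, 1.0) for _ in range(10)]   # stand-in "data"
error_margin = 0.8                                        # a fixed, finite margin

def rmse(a, b):
    """Root-mean-square difference between two equally long sequences."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def best_tuned_fit(n_knobs):
    """Best in-sample fit reached by trying random settings of n_knobs
    'moving parts'; each setting is modeled as an independent random output."""
    tries = 2 ** n_knobs
    best = float("inf")
    for _ in range(tries):
        guess = [random.gauss(0.0, 1.0) for _ in observed]   # no physics at all
        best = min(best, rmse(guess, observed))
    return best

for knobs in (2, 6, 10, 14):
    fit = best_tuned_fit(knobs)
    verdict = "survives" if fit < error_margin else "rejected"
    print(f"{knobs:2d} knobs -> best in-sample fit {fit:.2f} ({verdict})")
# With enough knobs, a contentless model passes the test; with few, it doesn't.
```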

If you actually study the climate in the scientific way, you need to figure out some qualitative insights. For example, Milutin Milankovič figured out that the glaciation cycles at the time scale of tens or hundreds of thousands of years were caused by the irregularities of the Earth's orbit, especially by the variable eccentricity of the elliptical orbit and the variable tilt of the Earth's axis. Those are caused by perturbations by other planets etc. and they influence the climate by changing the rate of growth or melting of Arctic ice sheets (mostly around June). This is a qualitative assumption which we know to be right these days. But it couldn't have been obvious from the beginning. Different, very different explanations of the ice ages were a priori possible. The ice ages could have been caused by some very slow circulation of the world ocean, for example. One needed to formulate the hypotheses and test them.

Again: the competing hypotheses you start with are, or have to be, completely, qualitatively different from each other. You first need to get the qualitative features of your theories right before you may converge closer to the truth by the adjustment of continuous parameters. Whoever imagines that the selection of the qualitative features of the laws is "trivial" and that it's enough to adjust parameters and pick survivors within some error margin is entirely misunderstanding the bulk of science. If you make qualitatively wrong assumptions about something, the adjustment of the parameters in your model won't allow you to formulate a correct theory that could make reliable predictions. You're looking for the truth on an entirely wrong continent. Even on the wrong continent, you may find things that will look like the truth by chance, as long as you look at a sufficient number of candidates. But the search for the scientific truth is something different from data-fitting based on adjustments of parameters in an intrinsically wrong model.

Similarly, we need to test hundreds of less far-reaching statements about clouds and their dynamics, the processes in the ocean, the El Niño and La Niña phenomena, their interactions, and lots of other things, including the validity of various numerical approximation schemes and discretizations of the continuum etc. A climate model may neglect many of the subtle phenomena and focus on others. It's not clear from the beginning which arrangement of the qualitative choices in the climate model is better. Some details may be so irrelevant that they should indeed be neglected. Others are essential and they shouldn't be neglected. If you evaluate the models as "whole packages" or even "collectively", you are making zero progress towards the scientific understanding of the physical phenomena. You don't really understand what the "great feature" is that makes one model work better than another. In most cases, the "feature" that has allowed one model to be more successful than another is pure luck.

Computers may be extremely helpful for scientific progress but it's still true that if you're not learning any comprehensible laws or lessons – what you have to consider and how – then you are not making any scientific progress. The learning of lessons and laws simply can't be "replaced" with thousands of hours of playing with some computer programs.



Off-topic. When I was looking for the Feynman video about pseudosciences at the top, I also found this one from the 1960s which I wasn't terribly familiar with. But it made me laugh out loud. He said that almost all the crackpots who were writing to him were constantly pointing out some obvious things. Something could be wrong about the assumptions that scientists are making. But they're never proposing any viable replacements for the assumptions we are using. So these letters from the crackpots are time-consuming. Feynman "still reads them just to be sure..." (the students explode in laughter) "...that there's nothing interesting in those letters". I could repeat all these things verbatim. And yes, I also think that I am wasting time by reading so many things by crackpots but I am still mostly doing it.

But what really made me laugh (even more so than the helpful Mr Joe who advised Feynman to try 10-20-30 to open a 5-digit lock) was Feynman's enumeration of the two most popular "paradigm shifts" that the crackpots who were writing letters to him were using. One of them was "that the spacetime should be discrete at the fundamental scale" and the other one was that "quantum mechanics with its probability amplitudes is strange and maybe it isn't fundamentally right, after all". That's what crackpots loved half a century ago. When you open my blog post about a bogus non-solution to the cosmological constant problem that I posted just two days ago, you will see that crackpots in 2017 love exactly the same two most fashionable would-be "paradigm shifts" as they did half a century ago: the spacetime could be discrete and quantum mechanics could fail to be fundamentally true.

You can see that these crackpots haven't made any intellectual progress in their "research" during the recent 50 years. However, as the Western society has converged closer to Idiocracy, many of these crackpots have made career progress and some of them identify themselves as physicists these days, and their fellow crackpots such as Lee Smolin are helping them to play this ludicrous game.

Sorry, crackpots, but just like in the 1960s, there exists no viable way to replace the continuous spacetime around us by a discrete one, and there exists no viable replacement for the universal postulates of quantum mechanics. But as long as you can get away with the suggestions that you are legitimate physicists, you probably don't care.
