
Friday, 29 September 2023

Science’s big problem(s)

Anyone who follows this blog (you know who you are) will have noticed the concentration of late on non-science topics. So I thought it was worth returning to my former stomping ground. This was, in part, because I came across something specific in the press that caught my attention. But it also relates to a much bigger, and therefore more troubling, theme. Science matters because it is clearly the best, perhaps the only, way to answer certain types of important question effectively. It has an impressive (though not unblemished) track record. Some of the problems we face today pose questions of exactly the type science has helped to answer in the past. So if science is in trouble, we’re all in trouble. It is therefore wise to reflect on the position “it” finds itself in.

Let’s start with some specifics. Patrick Brown is a climate scientist. He obtained his PhD (title: “Magnitude and Mechanisms of Unforced Variability in Global Surface Temperature”) from Duke University in 2016 and has since been fairly productive. As far as I can see, he has had three papers published in Nature as “first author” to date; not bad for someone relatively early in his career. It is the third and most recent of these (“Climate warming increases extreme daily wildfire growth risk in California”; Nature 621:760-766, published 30/06/2023) that has excited most comment. However, the comment has not primarily been around the science in his paper. Judging from his citation statistics (a far from perfect metric), Patrick is competent but he hasn’t exactly set the heather alight. It was what he did after his latest Nature paper was published that led to things getting tasty. On September 5th he published an article in “The Free Press” entitled “I left out the full truth to get my climate change paper published”; this article was later also published in The Times under the title “Groupthink in science is no good for the planet” (The Times, September 9th, p28), generating much more attention (at least on this side of the Atlantic).

Basically he claimed in his articles that he (and his co-authors) had narrowed the focus of their approach in the Nature paper to the effect of climate change on wildfires, all the time knowing that much more complex issues were in play. But they knew that if they “overcomplicated” the picture, so that it did not so clearly support the story that important journals like Nature “want to tell”, their paper would likely have been bounced. If they had broadened the focus (in the process presenting a more accurate and useful picture) they would have been seen not to support “certain preapproved narratives” that some journals, including Nature, are pushing. He fairly makes the point that getting published in prestigious journals has a big influence on someone’s academic career, and that these days it is hard to stick out from the crowd of other PhDs. So this non-scientific factor, as much as the importance of the science they had done, determined important things like what metrics they used to assess what was going on with Californian wildfires, and how the resulting data were interpreted and presented. He was just innocently playing the game of building an academic career. But, having moved out of academia, he now felt moved to act the whistleblower and tell all. Not that he is in favour of retracting the paper, as he still thinks “it advances our understanding of climate change’s role in day-to-day wildfire behavior”. It’s just that “the process of customizing the research for an eminent journal caused it to be less useful than it could have been”. The fact that there is a competing narrative in this space (i.e. that man-made climate change is a hoax), and that his “exposé” was jumped on as evidence of scientific skulduggery, didn’t seem to bother him (at least initially).

I don’t have the expertise to comment on his Nature article. But of course, before it was published, those with appropriate expertise did. Nature published the peer review reports along with the final paper, and interestingly, while the paper itself is behind Nature’s paywall, the reports aren’t (you can access them using the Nature link above). What these make clear is that some of the reviewers made the point that some of the wider issues should have been covered in the paper and hadn’t been. Given the tale that Brown subsequently told, this is a bit surprising. But what is even more surprising is that Brown and his fellow authors then robustly defended their approach. This shows that there was no particular “preapproved narrative”, or at least not one of the kind alleged. The reviewers (and the Editor) dealt with the paper on its merits, as we all might expect. So his charge that some agenda unsupported by the science is being prosecuted looks a lot weaker than it first appears.

But what Dr Brown seems to miss entirely is that he has told us that on at least one particular occasion he deliberately shaped his approach so that the resulting paper became potentially misleading, or at least less useful (something the Nature reviewers in effect picked up on and challenged). In fact, in his Free Press article he claims he left academia because “the pressures put on academic scientists caused too much of the research to be distorted”. Presumably he means his own research as well as that of others – a serious charge. Yet, despite confessing to distortion, we are supposed to take his commentary (unchallenged by reviewers and perhaps serving an agenda) at face value. And it’s not as though he is some kind of innocent when it comes to the media. He knows well how the media works, specifically when it comes to his area of expertise (climate change). He published a paper about precisely this back in 2016 (“Reporting on global warming: A study in headlines”). Now he is in the private sector, free from that insidious pressure to “distort” (his word, not mine). But presumably he is also now being paid for his words by individual and corporate donors. We can hardly be sure that it is data and careful analysis that are at the centre of his considerations. After all, he has form. It all begins to look a bit murky.

And that’s a big problem. All over, science is beginning to look murky. Much of Brown’s commentary is recognisable. There is pressure to publish, and particularly to publish in “top” journals like Nature. I’ve submitted to Nature myself (more in hope than expectation). And decisions do have to be made about both data selection and analysis, even in much simpler situations than those being investigated by Brown and his co-authors. Can this lead to bias and misrepresentation? Yes it can. But that is where the challenge of reviewers and editors, the peer review system, becomes so important. The system seems to have worked in the case of Brown’s Nature paper. Although the reviewers expressed concerns, these were answered by the authors, and the paper was deemed to make a sufficient contribution (something Brown continues to agree with) to be published. Does it present only part of the picture? Of course it does. It’s now up to others to criticise, challenge, refine or refute what’s in that one paper. If it is actually misleading, that will become clear. That’s science.

But the bigger theme here is a problem about journals; they are a key part of science and collectively comprise the “literature”. Brown’s point was that they may not be as neutral and dispassionate as one would like to think (whether justified or not in the case of his Nature paper). There are other problems too, particularly the issue of “predatory” journals, which has been discussed for a while in scientific publishing circles (see this article and others on the Scholarly Kitchen site). Predatory journals are those whose primary concern is to make money, not to publish good science. They tend to have lax acceptance and reviewing standards because the more they publish the more money they make. This has been encouraged by a change in who pays for published science. It used to be almost entirely the case that the user (i.e. the reader) paid. But this began to change, partly because of technology and partly because of claims that this was discriminatory. Lots of scientists in low and middle income countries were excluded because neither they nor their institutional libraries could afford the subscriptions that were charged for access to journals. So there was a change to a “producer pays” model. Some journals charge a fee simply to consider a manuscript for publication, and all of them charge a fee to publish papers once the peer review process has determined that a paper is of sufficient merit. Publication fees range from a few hundred pounds or dollars to several thousand. Some charge flat fees, others charge by the published page. However, once published the research is open to all, and aided by the interweb, accessible to all. But it is clear that what was meant to assist openness and accessibility is being abused, and that the “literature” is being undermined as a result.

It was always the case that nonsense could be published in scientific journals, including the prestigious ones. I used to have to tell students that just because something was published didn’t make it true. There is never any substitute for careful reading and equally careful thinking. But as the number of predatory journals has increased (one 2021 estimate put the number at 15,059), so has the level of murkiness, and gradually we risk the whole scientific enterprise losing the trust of the public and politicians alike. What is the root cause of these problems? Well, unfortunately it is something that cannot be fixed (although it can be improved). Science is a human activity, and is therefore as flawed as humans are. Most scientists are competent and conscientious, some are lazy, a very small number are fraudulent, but all are human. Even although as an institution science is to some extent self-correcting, it remains at its core the activity of flawed women and men. Science’s big problem is scientists. And just when we need them, too.

Monday, 12 March 2018

The insufficiency of science


There are scientists who talk about a “theory of everything”, although it turns out they do not literally mean a theory of “everything”. There are others who have claimed that science can basically supply the correct answer to any correctly formulated question (at least any question worth asking). This is sometimes tempered to the view that science provides, at least in principle, an approach that can rigorously establish the truth about a given state of affairs, even if in practice it’s currently difficult to see how. At one point it looked as though this was becoming a dominant view. Proponents of this sort of view, passionately and (usually) elegantly expressed, were the likes of Dawkins, Hitchens, Dennett and Harris. Let us call them collectively Ditchkinetteris (with apologies to Terry Eagleton, who coined the term Ditchkins to refer to two of them; 1). As an aside, the power of this sort of view seems to be in decline, as I have discussed previously. In general, Ditchkinetteris’s take might be termed the sufficiency of science (SoS for short). It would be wrong to assume that SoS was ever a majority view even among scientists, although such things are hard to establish, erm… scientifically. It was certainly a minority view among philosophers (eg see Kaufman’s review of Harris’ “The Moral Landscape”; 2). But SoS has now been implicitly undermined by one of its former (if only tacit) supporters, the journal Nature.



Nature published an editorial on the 27th February entitled: “A code of ethics to get scientists talking”. This reports on a document produced by a group of scientists convened by the World Economic Forum and heartily recommends it. As the editorial points out, such codes are not new in science. Many funding and governmental bodies have their own codes. Interestingly, the editorial claims that there’s a problem getting scientists to take them seriously and adhere to them. But what intrigues me is the question: what kind of thing is this code?



If SoS is true, then presumably such codes will be scientific. That would mean they would consist of hypotheses, predictions, experiments, results and conclusions. Or if not hypothesis driven (because not all science fits this pattern comfortably) they would consist of observations, measurements and conclusions. Either way there would be measurements and data, there would be stats, there would be theory; all the familiar elements of science. Right? Wrong. Actually, what the particular code referred to consists of (and this would be true of all the other codes) is well-meaning, sensible and pretty obvious advice about the kind of things we expect of responsible science. For example, responsible science seeks to minimise harm to citizens. Such a rule doesn’t appear to be a scientific rule. It’s sensible, it’s the kind of thing taxpayers expect, but it is not itself a scientific statement or a scientific rule. It’s the kind of thing I’d be happy to adhere to, as would all my colleagues, and practically any scientist anywhere I know of. But it’s not science.



The reasons given for why such a code is necessary are also interesting. It is valuable because “the code contextualizes natural sciences in a time of rapid technological change and popular questioning of expertise.” I’m not sure I understand the first point, but the questioning of expertise is familiar enough. The proponents of the code want to meet such questioning by infusing research with “the most irreproachable behaviours”. But again, these are not scientific statements or aims, laudable though they may be. They depend on historical, sociological and ethical analysis, not science. So to practise science properly, we must look outside science; indeed our conduct must be ruled by principles which are not themselves scientific principles. This seems to be a blow against SoS.



Of course SoS never was true. Science always stood on foundations that were not themselves scientific. Principles, assumptions and commitments that were rarely talked about always lurked in the background. We all have them, use them and depend on them, and we’ve always known it. It was Bacon who suggested in 1620 that we ought to purge ourselves of such “idols”, only for Kant to argue in the 18th century that some of them are built into the very structure of our minds; they are wired in. Better to be aware of them, and control them, than deny that they exist at all.



Personally, I’ve always tried to be clear about my prior commitments. I’m drawn to science because it tackles an ordered universe in an ordered way. That order flows from the God who made the universe, and has sustained it ever since. He is the ultimate source of truth, so I only progress because He reveals His truth as I employ the tools that science provides, allied to the tools that He has provided. He also reveals His truth to others, even although they do not recognise Him or acknowledge Him in any way (indeed many of them are much better at this science game than me). I study the book of His works, and “think God’s thoughts after Him” (to slightly misquote Kepler).



While I’m actually running an experiment, collecting and analysing data, drawing inferences from it, accepting or rejecting hypotheses, I behave (and probably look) like a naturalist. I explain my results, accept or reject my hypotheses, in terms of mechanisms that are familiar in the field. But ultimately, on reflection, I know it is Him I’m studying. Because of that, I want to do it in a way that honours rather than dishonours Him, just like the Christian plumber, carpenter, bus driver, dentist or lawyer. I don’t work to please my boss, or the head of my Institution, or really for the good of the community or for the honour of science. All of these things are good things to do. But they are secondary. My aim is to “serve wholeheartedly as if (I) were serving the Lord, not men” (Ephesians 6:7). All these are prior, outside commitments. But it turns out it’s not just me that has them, indeed needs them, because science is insufficient. At least I’m (reasonably) coherent about it.

1. Eagleton, T. (2009) Reason, faith and revolution: reflections on the God debate. Yale University Press.

2. Kaufman, W.R.P. (2012) Can science determine moral value? A reply to Sam Harris. Neuroethics 5:55-65.

Saturday, 16 December 2017

On understanding pencils…


Consider the humble pencil. For those poor souls born in the internet age who may not be familiar with them, the pencil is a wooden cylinder, usually about 12cm long, with a graphite core. They can be used for things like writing or drawing, making dark marks on paper (a bit like what happens on your laptop screen when you press keys on the keyboard). They don’t require an electrical supply and are pretty hardy objects, continuing to work in both hot and cold weather. They even work outdoors when it’s raining. But when all is said and done, they are fairly simple objects. Now here are some questions. What does it mean to understand a pencil? What range of disciplines are required? Is anything required beyond some fairly straightforward science? Could a pencil be any more than a sum of its parts?

Well, talking of parts, I suppose a scientific approach to pencils would begin by understanding what a pencil is actually made from. A simple pencil (let’s not complicate things too much by discussing pencils with erasers on the end or highly engineered propelling pencils) seems to consist of just two kinds of stuff. Its core is clearly different from the material surrounding it. In fact the core is a far from simple mixture based on graphite, a substance which was originally mined but these days is manufactured. The graphite is mixed with clay or wax. The surround is of course wood. But what kind of wood? It turns out that almost all pencils are made of cedar, which doesn’t warp or crack, and can be repeatedly sharpened. Actually the pencil I have in front of me is also painted (it’s red), and on the side there’s lettering.

The lettering spells out a brand name, but there are also some code letters. It turns out all pencils are not the same. In some the “filling” is soft and makes a thick black line, while in others it’s relatively hard, making fainter, finer marks. So you don’t have to try out a pencil each time you go to buy one to find out what kind it is; the different types are coded. Apparently “medium soft” pencils (#2s) are best for writing. But hang on. Now we’re not really thinking about the constituent parts of a pencil and their properties, the sort of thing that science can help with. A botanist could perhaps have identified the wood and speculated as to why it had been chosen. A chemist would have quickly identified that the core was a mixture of something that occurs naturally (graphite) mixed with other chemicals that it doesn’t naturally occur with. She could perhaps speculate on the processes used to combine these different substances. But now it turns out that there’s a whole other level of understanding required in order to understand pencils. They are “for something”, they have an intended purpose. And this is beyond the purview of chemistry and botany.

There are lots of uses to which pencils could be put. I assume that they burn, wood usually does. So I suppose you could put them in a fire to keep your house warm. They are relatively long and thin. So I suppose you could poke them into holes in a bid to winkle out anything that might be hiding there. A quick experiment will show that graphite conducts electricity. But if you try to build circuits with pencils you’ll discover that their resistance means they quickly generate so much heat that they burst into flames. So a line of pencils is never going to perform well as a mains electricity distribution system. Pencils have an intended purpose, for which they are designed, and for which they are really good. They are designed for writing and drawing, and when used in this way they perform admirably. But what kind of thing is an intended purpose? And what discipline has the correct tools for studying intended purposes? Not physics, or chemistry, or even most of biology.

It turns out pencils have a history, so it’s not just about the particular pencil sitting in front of me now. They did not start out as the finely manufactured objects they are today. Some trace the history of the pencil back to the Roman stylus. Others argue that pencils, properly understood, began with the discovery of naturally occurring graphite in Borrowdale in 1564. Leonardo frequently sketched his ideas in pencil. Without the humble pencil who knows what he might have forgotten all about, and what we would never have known he thought about. The pencil no doubt played a role in, and benefitted from, the industrial revolution of the 18th and 19th centuries. To understand the pencil, clearly the humanities have a role to play.

Understanding pencils is turning out to be a bit tricky. To fully capture their constitution, their use and purposes, and their impact on society is getting complicated. Just imagine how complicated it would be to substantiate the claim that we understand things like table lamps, or cars, or houses. Mind you, these are all artefacts. They are all things that people make and use. But what about understanding people? Is a person simpler or more complicated than a pencil? Now I think that the answer to this is fairly simple. But for the avoidance of doubt, I think that people are more complicated than pencils. So if we need multiple methods to understand pencils, it’s fairly certain we’ll need multiple methods to understand people. To be able to claim we understand just one individual will take effort, multiple disciplines and many layers of explanation. Some higher levels of explanation will probably be closely related to lower ones, and it may be possible to explain one thing at a higher level with things at a lower level. So in principle the biological processes of digestion, beginning with what goes on in the stomach, might well be reducible to chemical explanations (eg the action of hydrochloric acid on certain foodstuffs). While the detail might be a bit tricky and technical, you can see how this kind of thing might work. But there might be other levels of explanation that can’t be decomposed into lower level types of explanation. So I might well be able to explain chemically the effect of HCl on chocolate, but why do I so enjoy Cadbury’s Dairy Milk?

And this is just about explaining one individual. People tend to clump together. And in that clumping whole new concepts emerge and need different types of explanations. So what do we make of football scores? They are a thing. You know what I mean by “football score”, even if in the UK it’s about something you do with a round ball, and in the US it concerns an oval ball. On one level a football score might be just two numbers on a board at one end of a football ground. But then it seems to have strange properties that can induce effects on human beings even over great distances. So there might be a vast crowd of 50 000 people in a football ground, variable distances from the board displaying the score. A score of 1-0 is somehow capable of inducing depression in one group of 25 000 and euphoria in the remaining 25 000 (and this is the simplified version). Suppose the same score is liberated from the football ground itself and transmitted by the wonders of modern communications across the world. Across the world a similar pattern of depression and euphoria is induced in different individuals. So what kind of thing is a football score, and with what tools should it be studied?

Given all of the above, consider the following famous quotation: “The Astonishing Hypothesis is that ‘You’, your joys and your sorrows, your memories and your ambitions, your sense of identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll’s Alice might have phrased it: ‘You’re nothing but a pack of neurons.’” This comes from Nobel Prize winner Francis Crick. If you’re nothing but a pack of neurons, then all we need to understand all the complexity of humanity are the tools furnished by a particular branch of biology called neuroscience, with perhaps a dash of physics and chemistry thrown in. It smacks of a kind of reductionism often encountered in the popular writings of scientists, very often towards the end of otherwise really interesting books. Reductionism doesn’t work for pencils. It’s unlikely to be a plausible approach to understanding people.

Monday, 17 July 2017

The Faith in Science

The blogosphere is a big and diverse place. There's all sorts of stuff out there (and here). One could spend one's life navigating it and responding to what one finds; there are things to enrage, engage or intrigue. I recently came across a blog post in the New Humanist blog written a while ago by Mark Lorch (chemist and science communicator at the University of Hull) entitled "Can you be a scientist and have religious faith?". For obvious reasons this piqued my interest, given that this is a question that seems to keep coming around, and is one that I've examined from time to time in my own humble corner of this vast landscape.

His post has an interesting starting point: "... I could never reconcile what I saw as a contradiction between the principles of the scientific method and faith in a supernatural god." Let us leave to one side the issue of whether "the scientific method" is a real thing; Nobel laureate Sir Peter Medawar had his doubts (see his essay on "Induction and intuition in scientific thought", Pluto's Republic). Also of interest is his observation that, as a professional scientist in a University, he is surrounded by other scientists who have "religious faith". And not merely a formal or perfunctory commitment to religion. He's on about honest to goodness, fundamental, Bible-believing type faith of the sort that really outrages the evangelical "new atheists" that Terry Eagleton refers to collectively as "Ditchkins". So here's some data indicating that I'm not particularly atypical and my views are not really out there (always a comforting thought). I'm not claiming that I'm typical, just that Christians who are "proper" scientists are not extinct or even on the endangered list (at least not yet). You would get quite a different impression from some quarters.

There were of course comments in the blog that were at first less welcome, if only because they seemed to betray a lack of thought and research. For instance: "Ultimately faith is the knowledge that something is true even though there is not evidence to support it...". There may be faith of this sort out there, but this is not the faith that the Bible writers call for, or that Christian believers exercise. Christian faith is a response to evidence. Yes it is a response that involves, at a certain point, a degree of trust, but that's no different to life in general and science in particular.

Starting with Francis Bacon, Lorch arrives at the conclusion that "without ever realising it, I too have a deeply-seated faith in my own (scientific) belief system." Glory be! Sense at last. Notwithstanding the problems with his definition of faith above, I welcome his honesty about his own thought processes. The problem is, it's worse than he thinks (if faith being involved in science is a bad thing). One reason for his conclusion is the conviction that in science a thing called "induction" is involved. This appears to be a sound way of moving from observations/facts/results to new knowledge. But it turns out no one really has an explanation for why it works when it works. It does appear to work, though, so he's happy to stick with it in the absence of a convincing explanation. Hence, exercising faith. To be fair, I don't think this mysterious process of induction is why science works, and neither did Medawar (hence his essay on the subject). But there are other foundations on which science rests which we understand even less than "induction", and yet we're prepared to press on regardless. Take two examples: nature's uniformity and the principle of reproducibility.

I beaver away in my lab in Liverpool, collecting and analysing data, finding out stuff about vision and eye movement. Once I've completed a series of experiments, I write them up, and submit them to a scientific journal. The journal arranges for other scientists to review what I've written, there's usually a bit of back and forth, and eventually the journal agrees to publish my report of my endeavours. If we've all done our jobs, science creeps incrementally and imperceptibly forward, just a bit. We assume that what I've done in Liverpool could be done anywhere else (ie replicated) and (as long as I've been honest and accurate) the result will be the same. This is because of the uniformity of nature. The same material and physical forces and processes that operate in my lab in Liverpool operate in New York, Tokyo or Mumbai. But this uniformity, on which science rests, hasn't been established by some grand experiment, it just "is". It's assumed. But it's fundamental to the whole process. We take it as an article of faith.

And this business of reproducibility is interesting too. Now it turns out that you could replicate my experiments without too much difficulty. It would cost a little bit of money (but not too much, because I'm a bit of a cheapskate), some time and a bit of skill. But nothing too taxing. Nevertheless, rather than do this, people are prepared to take on trust that I've done what I've said I've done, and the results are sound. So, rather than repeat my results, they build on them and do something slightly different and new, to make another small advance. But what about an experiment like the one that established the existence of the Higgs boson? That took billions of euros, thousands of scientists, and large chunks of continental Europe. Are we waiting until another Large Hadron Collider is built before we accept the result? No, we take CERN's results on trust. We exercise (reasonable) faith. And all of this in the presence of what some in science are calling the reproducibility crisis, in which this type of faith has been abused by the unscrupulous or, occasionally, the outright fraudulent.

My intention is not to undermine science in any way. It's simply to point out that, like most other areas of life, faith is key to it, not incidental. So, a double standard is applied by those who would like to bash my Christian faith, and claim that on the basis of science I must be suffering from some kind of reason deficiency. It turns out I'm neither alone, nor am I deluded. Mark Lorch appears to agree.


Saturday, 24 June 2017

Back to that chasm…

The Nature Editorial that I was reflecting on recently, prompted other responses published in the correspondence section of the journal itself. Firstly, Frank Nicolas' letter simply pointed out that all scientists adopt a "methodological naturalism" when doing science, and basically welcomed the new openness that the Editorial discussed. What was perhaps more interesting were the comments on the letter. In one it was stated:

"Where it is not incomprehensible this is an empty piece of philosophical maundering which should, and doubtless will be, widely ignored". 

A second commented: "Religion is by definition not open towards science, given that it ignores evidence (or actually the absence of it)..."

I didn't find the letter incomprehensible, and the philosophy wasn't empty. It was a fair statement of an approach many of us take in the lab. Even those of us committed to the belief that, at each instant in time and at each point in space, it's the power of God that keeps the universe in existence (Hebrews 1:3), don't invoke this power to explain the processes we study. And the explanations we come up with don't compete with Hebrews 1. I study (among other things) psychological processes reflected in eye movement behaviour. If I invoke a mechanism like "behavioural inhibition" to explain an experimental finding, it doesn't mean that I'm denying that neurons in the brainstem gaze-generating network were involved. I'm just operating at a different level of explanation. And as I can't measure the "power of His Word" in the lab (and don't seek to), I would never invoke it in the paper I eventually write on what I've been investigating. It forms part of the background that gives rise to my beliefs, thoughts, behaviour and activity. I would claim we all have such background (metaphysical) beliefs. It's just that I'm explicit about it, and know when (and when not) to mention them.

So in the lab, I'm as much a methodological naturalist as the next scientist. The problem comes when folk start hinting, implying or claiming that natural explanations (those couched in terms of what we can see, taste, touch, smell, measure) are the only type of explanations that count. This, I think, lies behind the response to Frank's letter. It's a form of metaphysical naturalism which, to be fair, is the default position of many scientists. However, it is not itself science; it's a metaphysical position, and it brings with it a history and set of attitudes. If you claim that only natural explanations apply to everything, that's a statement of the same sort as "God upholds everything by His powerful word". This doesn't mean that it's necessarily wrong (although clearly I believe that it is). But it is equally true that it's not necessarily right, even if lots of scientists say it, and even if lots of them also think that it's the only way to approach science specifically and life more generally.

Those of us with a religious and philosophical turn of mind accept the difference between the methodological naturalism we adopt in the lab and our other beliefs. We're open to thinking about the interaction between the science we do in the lab, and the kind of intellectual procedures it involves, and our other beliefs and attitudes. I think about how science informs me about the world (the world that God made and sustains); I reflect on what it tells me about Him. I think about how Scripture applies to what I do in the lab. These are not contradictory processes, although they do occasionally result in a bit of tension. What's interesting about the second of the two comments above is the claim that "Religion is by definition not open towards science" - not by my definition mate! In fact this statement suggests to me a closed attitude toward insights that both philosophy and religion can offer to the scientist. And it suggests a blind spot about the writer's own philosophical presuppositions and commitments.  

And then there's the implication that somehow the scientist who is a believer can only be a believer by ignoring stuff. I've commented on issues around facts and faith elsewhere on this blog. Religious belief, at least in Christianity, is not irrational; it is the opposite. It's not about ignoring anything. You might not like the conclusions drawn from the evidence, but please don't claim that evidence was not involved, that it was not weighed, or that counter-evidence was not considered.

It would appear that in the minds of some, then, there is indeed a chasm between science and religion. I invite the interested observer to investigate further and discover whether this is myth or reality; to be open to the evidence.

Monday, 29 May 2017

A chasm … that cannot be bridged?


These days, not being a cosmologist, materials scientist or molecular biologist, the only bits of “Nature” I read with any expectation of understanding are the editorial, news and comments sections (although this blog post points to an exception). Commenting on a planned meeting between a group of families affected by Huntington’s disease and the Pope, the following sentence from this week’s editorial caught my eye: “There is a chasm between religion and science that cannot be bridged”. And it was further stated that it is the Vatican’s traditional philosophy that “the scientific method cannot deliver the full truth about the world” (Nature Editorial, 18th May 2017, 545:265). Hmm. Where to start?

Let’s start with the assertion of the existence of this unbridgeable chasm. Note that it is an assertion rather than the conclusion of a carefully constructed argument, or a hypothesis supported by any kind of evidence. It is not an assertion that would have been supported by pioneers like Kepler, Newton, Boyle or Faraday, or for that matter contemporary scientists such as Francis Collins, John Gurdon or Bill Newsome (do a web search on the names if they’re unfamiliar). Now of course all of these folk could be just plain wrong. The fact that they are likely to reject a proposition does not make it untrue. But with all due respect to the Nature leader writer who asserted the existence of the chasm in the first place, she (or he), while having a background in science, is unlikely to have the experience and insight of those listed above. For my own part, I don’t claim any great insight either. But I am a scientist, and I don’t accept that such a chasm either must exist, or does exist in any meaningful way.

What is probably rearing its head here is the conflict metaphor for the relationship between science and religion. This is the notion that science and religion compete for the same explanatory territory, but do so in fundamentally different ways, with different conclusions and therefore inevitable conflict. It’s a fight with a winner and a loser. Actually, some claim that the fight concluded some time ago, with science the clear winner, and the obscurantist forces of religion decisively routed and driven from the field. These notions, while they have been around for a while, are more recent than you might think. Colin Russell, the historian of science, argues that the conflict metaphor was pushed as part of a deliberate campaign by the likes of T.H. Huxley in the second half of the 19th century (see Russell's excellent “Cross-currents” for a discussion). Huxley, along with a relatively small group of fellow belligerents, interpreted the history of science up to that point as a fight with religion; since then others have happily promulgated the same view. But both in Huxley’s own day, and today, this was only one way to see the relationship between religion and science.

Science has actually often attracted those who are committed to God’s revelation in His book (the Bible), who also wish to study his handiwork in the created order using science as a tool. There are occasionally tensions between the two, but by and large the book of God’s words, and the book of God’s works complement each other. Indeed there is often an interplay between the two. And where the tensions look more like contradictions, these are often to do with the fallibility of our science or our theology. Interestingly, from the outside, the tensions often look a lot worse than they are. So an atheist scientist, with no great interest in Scripture, might misquote and misapply Scripture to claim a major problem where none exists. It is equally possible to conceive of scientifically uneducated and uninterested believers claiming that some scientific discovery has to be rejected because of an apparent contradiction with the Bible. In both cases, a proper understanding of both the Scripture and the Science often dissolves the “contradiction”. So where is the chasm? There isn’t one.

Occasionally those who are scientists and believers (while I mainly mean Christian believers, the same applies to others) are accused of thinking in one way in the lab and in another way at worship, and of keeping these two areas of thought separate. And I don’t deny that I’ve come across this phenomenon, although not for a while, and not usually on the part of professional scientists. But it’s neither necessary nor particularly healthy; and I reckon it’s not sustainable in the longer term. I’m the same person whether I’m trying to work out why we get multimodal distributions of fast eye movement latency (the subject of a paper that I hope will appear soon) or why Jonah so misunderstood the God who called him to go and preach in Nineveh. Rationality is required in both cases to make progress. If pushed, and you asked me which of these two puzzles is most important to me, I’d say the latter. But for the following reason: science is what I do; my faith is about who I am. As a professional scientist, one day I’ll retire and put away my eye tracker. But I won’t be retiring as a Christian. This is why my faith (by which I mean the content of belief rather than the act of believing) is more important to me than my science. The science is for now; faith is for eternity.

This brings me to one of the important distinctions between science and (Christian) faith. John Polkinghorne (originally a particle physicist, who then trained for the ministry and became a theologian) wrote: “Many scientists are both wistful and wary in their attitude towards religion. They can see that science’s story is not sufficient by itself to give a satisfying account of the multi-layered reality of the world” (Theology in the Context of Science, p84). Science’s success stems from carving off bits of the universe that it can get to grips with. But it is a mistake to insist that this is all there is, or that this is the only kind of stuff that matters. It’s folly to believe that scientific explanations are the only ones that are true or valid. While a pigment chemist and a colour psychophysicist could legitimately tell you a lot of interesting things about the Mona Lisa, that’s not all there is to say on the subject. And not all of the pertinent information you would need to “understand” the Mona Lisa is scientific information.

So it’s not just the Vatican that thinks that the scientific method can’t deliver the “full truth” about the world. There are many scientists, including many non-religious ones, who believe this too. Certainly, this one does.  

Saturday, 5 March 2016

On “Moralistic gods” – at least we're taking them seriously now

Usually when the subject of religion crops up in Nature (the top-ranking scientific journal), it’s because some perceived great obscurantist evil has to be exposed. The impression given has been that there is definitely nothing good or intellectually wholesome to be found in religion; at best, it’s for the weak-minded. However, recently Nature published the report of a very large study by Purzycki and colleagues (“Moralistic gods, supernatural punishment and the expansion of human sociality”, 2016, Nature 530:327-330). They conducted an experiment investigating how the beliefs of people in eight different, widely separated communities about their god/s affected how they viewed anonymous, distant coreligionists. Long (and interesting) story short: the more you believe your god knows about your thoughts and motives, and wants you to be nice to fellow believers (even if you don’t know them and they live far away), and the more you believe that he/she/it has power to punish you if you don’t do what he/she/it wants, the more you’ll do what they want. So the effect is that you’re kinder to strangers you have no genetic links with. Simple “selfish gene” accounts struggle to explain why humans have come to live in large, socially complex, cooperative groups rather than small, selfish, genetically related ones. Religious belief, which simple observation shows is rampant, seems to provide at least one explanation.

There’s lots about the experiment that’s really interesting, and some aspects that seem distinctly odd. It’s not clear to me whether the label “Christian” has much of a meaning in the Biblical sense, at least in Western Europe. It seems merely to name a vaguely connected set of cultures that for a long time have been separated by quite some distance from the person one of whose titles provides the label. It would be churlish to claim this and not accept that there are devout Muslims who feel the same way about the word “Muslim” being applied broad-brush to large swathes of the world. After all, if I claim that your average IRA man planting bombs and shooting policemen in Northern Ireland in the 1970s can’t in any sense be called a Christian without the word being emptied of usefulness, doesn’t the same logic apply to the “Muslims” trying to set up their Caliphate in Syria/Iraq? Yet this is portrayed as being about Islam and Muslims, rather than power and politics. But that aside, there’s something more interesting about the publishing of this paper.

It’s now apparently intellectually respectable to take religion seriously. Strange as it may seem, this is a change. It used to be that religion was an epiphenomenon to be dismissed, or a primitive intellectual parasite that the advance of science would finally put an end to. Or that it belonged to humanity’s violent adolescence, a passing phase we would collectively grow out of. It turns out that, as a minimum, the influence of religion, for good or ill, now seems to be accepted as playing some fundamental role in the development of complex societies. None of this means that what is actually believed by the religious (and that is probably all of us) is true, or even helpful. It’s just that it is observably deeply ingrained in us all. Indeed, it is probably all-encompassing.

Now of course I see all this from a particular perspective. Because it’s just what I would expect if in fact we were all the products (creatures) of a “moralistic” God who held us accountable for our actions. A God who had designed us to know Him and enjoy Him. Even if we denied Him, these facts of our design would not disappear; how could they? They’re just brute facts. The way things are. If we tried to observe the state of things from a standpoint of neutrality as to whether He (or “they”) were real, these features of how we are made, and how they work themselves out in our relationships, would still be observed.

These observations neither prove that this God (let’s call Him the living God) exists, nor can they explain Him away (although it won’t be long until at least the latter of these is being claimed). But at least now it’s respectable to have a sensible discussion. The reality of Him having “placed eternity in the heart of man”, as I might put it (or rather the writer of Ecclesiastes 3:11), and the large-scale effects this has had, and still has, is no longer being denied.