Saturday, 31 March 2018

Easter Reflections I


I came across an article recently that opened with the following statement: “Perhaps the most boring question one can ever direct at a religion is to ask whether or not it is ‘true’.” The author went on to claim that Easter “commemorates an incident of catastrophic failure”[1]. Well, we’ll see. My view is that deciding whether the events commemorated at Easter are true is far from boring. Not bothering to consider whether they are true is probably a product of the author’s complete misunderstanding of what was going on. But let’s go back to the thorny issue of truth.
We now apparently live in a culture that has a real problem with truth. For some, and for a long time, the idea that there is something “out there” to be known is a non-starter. For others, even if there is an “out there”, it cannot be known in any certain way. This sort of thing has been argued back and forth for centuries. Meanwhile, most of humanity has just got on with life, not bothering too much about whether it could be proved, in any absolute sense, that it was all “real”. Family, food, employment, cushions, art, music, football, Radio 4, Monty Python and model railways might all be illusions, but they are comforting illusions. Interestingly (at least to me), even those who think that truth is an illusion seem to spill a lot of ink trying to persuade other people of the truth that truth is an illusion. It is almost as though it matters.
In fact most of us seem to live with the notion that it’s important to know what is true and what is not. Not all truth is equally important, I’ll grant you. For most people, most of the time, knowing that there is a river that flows through Merseyside to the sea is of only trivial importance. It’s maybe useful in the odd pub quiz, but it hardly counts as one of life’s great truths. Mind you, it becomes considerably more important if you have to make your way from Liverpool city centre to Birkenhead – look at a map (hopefully a true representation of certain geographical features) if you don’t believe me.
Clearly there are some people who claim that certain events that occurred in and around an obscure city in the Middle East called Jerusalem millennia ago have continuing significance. As a matter of observation, these events have been celebrated annually throughout large parts of the world, and by a growing and now large proportion of humanity, ever since. There are reports that provide some level of access to those original precipitating events. Can we reach a judgement on the truth of what those events were, whether they are important and indeed whether some of them were catastrophic? I think we can, and I think we should. I think we owe it to ourselves to investigate for ourselves what the fuss is about. We could just surf the web and explore the blogosphere. We could depend on the opinions of others. I much prefer the notion of doing as much of the work as I can for myself. Of course, I’ll have to take some things on trust. But as I’ve argued here before, some level of trust is always required in any enquiry. How much trust would be too much? Well, if I’m standing at a bridge wondering if it can bear my weight and get me safely across a river, I know some of the signs I need to look for. Does it go all the way across? Is it fairly clear what’s keeping it up? Does it appear steady as I set out, or does it begin to creak alarmingly? Of course I could be fooled. But not to attempt the crossing could be equally foolish, particularly if there’s a pressing reason to cross the river.
As far as Christianity is concerned, the question “is it true?” has to be the key question. Christianity depends on claims about things that happened (or didn’t happen). While some of these things are probably more important than others, if any of them turn out to be demonstrably untrue, then the credibility of the whole will take a hit. If the major claims are untrue, then the whole thing comes crashing down. Certain of the key claims are clearly unusual, and some, on the surface at least, approach the bizarre (at least from a 21st century standpoint). It’s tempting to dismiss these out of hand, a priori. This is a temptation worth resisting.
The Easter story turns on Jesus, one of the most famous characters in history. Four main accounts compiled from eyewitness testimony from his own time have come down to us, along with accounts and interpretations from others who claimed to know him. These various sources have been frequently attacked but have yet to be fatally undermined. They tell us quite a lot about the life of Jesus, including what they claim was a miraculous birth (also still celebrated). They tell us much of what he said. But they seem to spend an inordinate amount of time on his death, implying that it has some significance beyond the ending of a particular life.
Jesus as portrayed in these accounts does not come over as a fanatic, a rabble rouser or a tyrant. He seems to have been attractive to some, and a curiosity to many. He doesn’t seem that interested in gathering a movement around himself. Indeed, in at least one of the accounts (by one of his followers called John) he seems to go out of his way to drive the merely interested away. For all his apparent humility and simplicity, it is his claims about himself that stick out. His original audience were in no doubt that he made one particularly objectionable claim. It’s a claim that many have made for themselves, and today it would be taken as a sign of poor mental health. He claimed to be God. One modern writer about Jesus introduced the subject by confessing that it was “easy to sympathise with scepticism” because the claims made by Jesus and his early followers “are staggering, and indeed offensive”[2]. And C.S. Lewis famously pointed out that these claims paint both Jesus and enquirers about Easter into a corner:
“A man who was merely a man and said the sort of things Jesus said would not be a great moral teacher. He would either be a lunatic — on the level with the man who says he is a poached egg — or else he would be the Devil of Hell. You must make your choice. Either this man was, and is, the Son of God, or else a madman or something worse. You can shut him up for a fool, you can spit at him and kill him as a demon or you can fall at his feet and call him Lord and God, but let us not come with any patronizing nonsense about his being a great human teacher. He has not left that open to us. He did not intend to.”[3]
It was at a place just outside Jerusalem that his claims and his death collided. By all accounts he died a barbaric, if not entirely unique, death. In Jesus’ day, those in control of where he lived had a standard form of execution. This involved literally nailing the condemned person to a wooden frame, raising them up, and waiting for them to die from suffocation, blood loss, thirst or a combination of all three (plus various other encouragements like breaking legs, or stabbing with spears). Even in the midst of these excruciating circumstances (which he had some insight into before they happened) he verbalised forgiveness for his torturers, made provision for his mother, comforted someone being executed with him, and made several other statements. None was a statement of regret. One was tantamount to a final claim. It is reported that he shouted “finished” (probably a single word in his original language). Even in dying (an extended process lasting several hours), he was claiming that he had accomplished something.
And there the story should have ended. If this was a man, a good man, a clever man, an exemplary man, ending as all men do, what possible significance could he have for the rest of us? Less than none. This would not be a sad story of what could have been. It might be a story that was instructive, but hardly one that would in any way be transformative. For most of us it would be more of a footnote than a catastrophe. But remember he claimed to be something considerably more than a man. If the story ends with his death, then this claim is clearly bogus. This claim, and probably all of his others, are untrue, and his credibility is fatally flawed. He might have occasionally said something clever, or even something that appears high and moral, but it’s not. He got the one thing he could truly know wrong; he didn’t ultimately even know himself, never mind anything else. So why then, twenty centuries later, is there still even a question? Why a story to repeat? Why claims to consider?
Because of what happened next.

1. “Easter for Atheists”, The Philosopher’s Mail

2. Donald MacLeod, “The Person of Christ”

3. C.S. Lewis, “Mere Christianity”

Saturday, 17 March 2018

Death of an expert


A few days ago, a remarkable human being left this life. Professor Stephen Hawking, one of Newton’s successors as the Lucasian Professor at the University of Cambridge (from 1979 to 2009), cosmologist, space tourist and author, died at the age of 76. His scientific output was prodigious and groundbreaking, from his 1965 PhD thesis, “Properties of Expanding Universes”, to his 2017 paper “A Smooth Exit from Eternal Inflation?”. His popular output made him a familiar name to many who knew nothing of physics. His 1988 book “A Brief History of Time” was a bestseller, and in the last week has shot back up Amazon’s bestseller list (I’ve just looked and it’s currently #2). Among other places, he popped up in Star Trek and The Simpsons. He was all the more remarkable because much of what he accomplished, he accomplished from a wheelchair. At the age of 21 he was diagnosed with amyotrophic lateral sclerosis, the most common form of motor neurone disease. Originally told he had only a few years to live, it turned out that he was in the small group of ALS sufferers who survive more than 10 years after diagnosis. But latterly he had lost all power of movement in his limbs and lost the ability to speak, so he communicated by means of a computer interface that allowed him to type via a cursor activated by twitching a cheek muscle. It was slow and laborious, but it allowed him to continue to make an impact on the world beyond his wheelchair, and the sound of his electronic voice was widely and instantly recognisable. He did so much more than grudgingly and grimly survive.

His passing will be felt most severely by his family and close friends. Then there will be that wider circle of friends and colleagues in physics, and science more generally, who will miss and mourn him. And beyond that a much wider circle who will feel poorer for his passing. That’s all as it should be.
He was an expert. His specific expertise was in cosmology, working on how the universe came into existence and developed, carrying out basic and elegant work on those most mysterious objects in the universe, black holes. He used the mathematics of the infinitely small, and applied it to the really big. If you get the impression I’m being a bit vague, that’s because the maths involved, as well as many of the concepts, are well beyond me. But I’m not alone. I suppose this applies to the vast bulk of humanity. This got me thinking about expertise.

Many of us can appreciate and value Stephen Hawking’s expertise. Rather than resenting it, we can accept it, respect it. Some have been inspired by it. In part, maybe this is because of his very human story of achievement in the face of the most difficult of life circumstances. Rather than give up when confronted with essentially a death sentence, he persevered. That is impressive. Maybe it’s because his expertise was of a particular non-threatening sort. After all, as important as his work on black holes is, most of us can live quite happily in ignorance of it, with it making no personal demands on us. It has no influence on how we live, or spend, or vote. It’s the sort of thing most of us are very clear we have no understanding of. There’s no question of our opinion on anything to do with black holes having any weight at all compared to Stephen Hawking’s. Most of us would accept that his expertise and knowledge were unquestionable, whereas ours are minuscule or non-existent. Perhaps it gets tricky when expertise is more questionable or its implications closer to home.

Expertise that has implications for how we think or how we live seems to be under attack (see Tom Nichols’s essay “The Death of Expertise”). In the blogosphere, in the media (social and otherwise), even in the street, we no longer defer to experts even when the issues are relatively technical. And of course some seem happy to keep us away from actual knowledge and to glory in ignorance (something discussed here). We have the spread of fake news (or at least the constant claim that a particular piece of news is fake) and fake facts. It emerged this week that a certain prominent politician made up a “fact” and stated it as truth.

But this approach strikes me as having at its heart a strange double standard. In cosmology, medicine and aviation (to mention a few) we are happy to recognise, trust and rely on experts. Black holes may be remote objects with little direct impact on us, but knowing your surgeon can tell your tonsils from your toes, or that your pilot can successfully lower the undercarriage before landing, is clearly important. We accept that true facts matter in these domains, and that fake facts (your tonsils are on the end of your foot) have potentially serious consequences. Why then the unwillingness to accept expertise in other matters? Maybe it’s because a little knowledge is a dangerous thing; it leads to the kind of hubris that claims that we can all be experts. And of course a little knowledge is only a mouse click away. All opinions can then become expert opinions that must be taken equally seriously.
The answer to this is not so much a new deference as old-fashioned humility: humility to recognise skill and expertise in others, and therefore to give their opinions more weight than my own within their areas of expertise. This doesn’t mean experts should be regarded as infallible, even within their areas of expertise. They are human, and therefore always capable of making mistakes. So transparency and dialogue, critical engagement and debate have a role in providing corrections. But experts are still much more likely to be right than I am. And maybe experts need a degree of humility too. Perhaps it’s tempting in the current climate to be a little too dogmatic and emphatic, even where uncertainties abound.

True expertise will always be valuable and should be valued. I wouldn’t take my views on the fate of particle pairs at the edge of black holes too seriously if I were you. We had Stephen Hawking for that.

Monday, 12 March 2018

The insufficiency of science


There are scientists who talk about a “theory of everything”, although it turns out they do not literally mean a theory of “everything”. There are others who have claimed that science can basically supply the correct answer to any correctly formulated question (at least any question worth asking). This is sometimes tempered to the view that science provides, at least in principle, an approach that can rigorously establish the truth about a given state of affairs, even if in practice it’s currently difficult to see how. At one point it looked as though this was becoming a dominant view. It was passionately and (usually) elegantly expressed by the likes of Dawkins, Hitchens, Dennett and Harris. Let us call them collectively Ditchkinetteris (with apologies to Terry Eagleton, who coined the term Ditchkins to refer to two of them; 1). As an aside, the power of this sort of view seems to be in decline, as I have discussed previously. In general, Ditchkinetteris’s take might be termed the sufficiency of science (SoS for short). It would be wrong to assume that SoS was ever a majority view even among scientists, although such things are hard to establish, erm… scientifically. It was certainly a minority view among philosophers (eg see Kaufman’s review of Harris’s “The Moral Landscape”; 2). But SoS has now been implicitly undermined by one of its former (if only tacit) supporters, the journal Nature.

Nature published an editorial on the 27th February entitled “A code of ethics to get scientists talking”. This reports on a document produced by a group of scientists convened by the World Economic Forum, and heartily recommends it. As the editorial points out, such codes are not new in science. Many funding and governmental bodies have their own codes. Interestingly, the editorial claims that there’s a problem getting scientists to take them seriously and adhere to them. But what intrigues me is the question of what kind of thing this code is.

If SoS is true, then presumably such codes will be scientific. That would mean they would consist of hypotheses, predictions, experiments, results and conclusions. Or if not hypothesis driven (because not all science fits this pattern comfortably), they would consist of observations, measurements and conclusions. Either way, there would be measurements and data, there would be stats, there would be theory; all the familiar elements of science. Right? Wrong. Actually, what the particular code referred to consists of (and this would be true of all the other codes) is well-meaning, sensible and pretty obvious advice about the kind of things we expect of responsible science. For example, responsible science seeks to minimise harm to citizens. Such a rule doesn’t appear to be a scientific rule. It’s sensible, it’s the kind of thing taxpayers expect, but it is not itself a scientific statement or a scientific rule. It’s the kind of thing I’d be happy to adhere to, as would all my colleagues, and practically any scientist anywhere I know of. But it’s not science.

The reasons given for why such a code is necessary are also interesting. It is valuable because “the code contextualizes natural sciences in a time of rapid technological change and popular questioning of expertise.” I’m not sure I understand the first point, but the questioning of expertise is familiar enough. The proponents of the code want to meet such questioning by infusing research with “the most irreproachable behaviours”. But again, these are not scientific statements or aims, laudable though they may be. They depend on historical, sociological and ethical analysis, not science. So to properly practise science, we must look outside science; indeed our conduct must be ruled by principles which are not themselves scientific principles. This seems to be a blow against SoS.

Of course SoS never was true. Science always stood on foundations that were not themselves scientific. Principles, assumptions and commitments that were rarely talked about always lurked in the background. We all have them, use them and depend on them, and we’ve always known it. It was Bacon who suggested in 1620 that we ought to purge ourselves of such “idols”, only for Kant to argue in the 18th century that some of them are built into the very structure of our minds; they are wired in. Better to be aware of them, and control them, than deny that they exist at all.

Personally, I’ve always tried to be clear about my prior commitments. I’m drawn to science because it tackles an ordered universe in an ordered way. That order flows from the God who made the universe, and has sustained it ever since. He is the ultimate source of truth, so I only progress because He reveals His truth as I employ the tools that science provides, allied to the tools that He has provided. He also reveals His truth to others, even though they do not recognise Him or acknowledge Him in any way (indeed many of them are much better at this science game than me). I study the book of His works, and “think God’s thoughts after Him” (to slightly misquote Kepler).

While I’m actually running an experiment, collecting and analysing data, drawing inferences from it, accepting or rejecting hypotheses, I behave (and probably look) like a naturalist. I explain my results, and accept or reject my hypotheses, in terms of mechanisms that are familiar in the field. But ultimately, on reflection, I know it is Him I’m studying. Because of that, I want to do it in a way that honours rather than dishonours Him, just like the Christian plumber, carpenter, bus driver, dentist or lawyer. I don’t work to please my boss, or the head of my institution, or really for the good of the community or for the honour of science. All of these are good things to do. But they are secondary. My aim is to “serve wholeheartedly as if (I) were serving the Lord, not men” (Ephesians 6:7). All these are prior, outside commitments. But it turns out it’s not just me that has them, indeed needs them, because science is insufficient. At least I’m (reasonably) coherent about it.

1. Eagleton, T. (2009) Reason, faith and revolution: reflections on the God debate. Yale University Press.

2. Kaufman, WRP (2012) Can science determine moral value? A reply to Sam Harris. Neuroethics 5:55-65.

Saturday, 16 December 2017

On understanding pencils…


Consider the humble pencil. For those poor souls born in the internet age who may not be familiar with them, the pencil is a wooden cylinder, usually about 18cm long, with a graphite core. They can be used for things like writing or drawing, making dark marks on paper (a bit like what happens on your laptop screen when you press keys on the keyboard). They don’t require an electrical supply and are pretty hardy objects, continuing to work in both hot and cold weather. They even work outdoors when it’s raining. But when all is said and done, they are fairly simple objects. Now here are some questions. What does it mean to understand a pencil? What range of disciplines is required? Is anything required beyond some fairly straightforward science? Could a pencil be any more than the sum of its parts?

Well, talking of parts, I suppose a scientific approach to pencils would begin by understanding what a pencil is actually made from. A simple pencil (let’s not complicate things too much by discussing pencils with erasers on the end, or highly engineered propelling pencils) seems to consist of just two kinds of stuff. Its core is clearly different from the material surrounding it. In fact the core is probably a far from simple mixture of graphite, a substance which was originally mined but these days is manufactured. The graphite is mixed with clay or wax. The surround is of course wood. But what kind of wood? It turns out that almost all pencils are made of cedar, which doesn’t warp or crack, and can be repeatedly sharpened. Actually the pencil I have in front of me is also painted (it’s red), and on the side there’s lettering.

The lettering spells out a brand name, but there are also some code letters. It turns out all pencils are not the same. In some, the “filling” is soft and makes a thick black line, while in others it’s relatively hard, making fainter, finer marks. So that you don’t have to try out a pencil each time you go to buy one to find out what kind it is, the different types are coded. Apparently “medium soft” pencils (#2s) are best for writing. But hang on. Now we’re not really thinking about the constituent parts of a pencil and their properties, the sort of thing that science can help with. A botanist could perhaps have identified the wood and speculated as to why it had been chosen. A chemist would have quickly identified that the core was a mixture of something that occurs naturally (graphite) mixed with other chemicals that it doesn’t naturally occur with. She could perhaps speculate on the processes used to combine these different substances. But now it turns out that there’s a whole other level of understanding required in order to understand pencils. They are “for something”, they have an intended purpose. And this is beyond the purview of chemistry and botany.

There are lots of uses to which pencils could be put. I assume that they burn; wood usually does. So I suppose you could put them in a fire to keep your house warm. They are relatively long and thin. So I suppose you could poke them into holes in a bid to winkle out anything that might be hiding there. A quick experiment will show that graphite is a decent electrical conductor. But if you try to build circuits with pencils, you’ll discover that they quickly generate so much heat that they burst into flames. So a line of pencils is never going to perform well as a mains electricity distribution system. Pencils have an intended purpose, for which they are designed, and for which they are really good. They are designed for writing and drawing, and when used in this way they perform admirably. But what kind of thing is an intended purpose? And what discipline has the correct tools for studying intended purposes? Not physics, or chemistry, or even most of biology.
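Incidentally, a rough back-of-the-envelope calculation bears the flames out. Here’s a minimal sketch in Python, where every number is an assumed, textbook-ish value (a plausible resistivity for a clay/graphite mix and a typical core size, not measurements of any actual pencil):

```python
import math

# All values are rough assumptions for illustration, not measurements.
resistivity = 6e-5     # ohm·m, assumed for a clay/graphite pencil core
length = 0.15          # m, roughly a full-length core
diameter = 2e-3        # m, roughly a typical core diameter
mains_voltage = 230.0  # V, UK mains

area = math.pi * (diameter / 2) ** 2      # cross-sectional area (m^2)
resistance = resistivity * length / area  # R = rho * L / A
power = mains_voltage ** 2 / resistance   # P = V^2 / R

print(f"Resistance: {resistance:.1f} ohms")        # ~2.9 ohms
print(f"Power dissipated: {power / 1000:.0f} kW")  # ~18 kW
```

A few ohms across 230 V works out at something like 18 kW concentrated in a slim rod of graphite and cedar, which can shed only a tiny fraction of that as heat; hence the flames (or, in practice, a blown fuse).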

It turns out pencils have a history, so it’s not just about the particular pencil sitting in front of me now. They did not start out as the finely manufactured objects they are today. Some trace the history of the pencil back to the Roman stylus. Others argue that pencils, properly understood, began with the discovery of naturally occurring graphite in Borrowdale in 1564. Leonardo frequently sketched his ideas in pencil. Without the humble pencil, who knows what he might have forgotten all about, and what we would never have known he thought about. The pencil no doubt played a role in, and benefitted from, the industrial revolution of the 18th and 19th centuries. To understand the pencil, clearly the humanities have a role to play.

Understanding pencils is turning out to be a bit tricky. To fully capture their constitution, their uses and purposes, and their impact on society is getting complicated. Just imagine how complicated it would be to substantiate the claim that we understand things like table lamps, or cars, or houses. Mind you, these are all artefacts. They are all things that people make and use. But what about understanding people? Is a person simpler or more complicated than a pencil? Now I think that the answer to this is fairly simple. But for the avoidance of doubt, I think that people are more complicated than pencils. So if we need multiple methods to understand pencils, it’s fairly certain we’ll need multiple methods to understand people. To be able to claim we understand just one individual will take effort, multiple disciplines and many layers of explanation. Some higher layers of explanation will probably be closely related to lower layers, and it may be possible to explain things at a higher layer in terms of things at a lower one. So in principle the biological processes of digestion, beginning with what goes on in the stomach, might well be reducible to chemical explanations (eg the action of hydrochloric acid on certain foodstuffs). While the detail might be a bit tricky and technical, you can see how this kind of thing might work. But there might be other layers of explanation that can’t be decomposed into lower-level types of explanation. So I might well be able to explain chemically the effect of HCl on chocolate, but why do I so enjoy Cadbury’s Dairy Milk?

And this is just about explaining one individual. People tend to clump together. And in that clumping, whole new concepts emerge and need different types of explanation. So what do we make of football scores? They are a thing. You know what I mean by “football score”, even if in the UK it’s about something you do with a round ball, and in the US it concerns an oval ball. On one level a football score might be just two numbers on a board at one end of a football ground. But then it seems to have strange properties that can induce effects on human beings even over great distances. So there might be a vast crowd of 50 000 people in a football ground, at varying distances from the board displaying the score. A score of 1-0 is somehow capable of inducing depression in one group of 25 000 and euphoria in the remaining 25 000 (and this is the simplified version). Suppose the same score is liberated from the football ground itself and transmitted by the wonders of modern communications across the world. A similar pattern of depression and euphoria is induced in different individuals. So what kind of thing is a football score, and with what tools should it be studied?

Given all of the above, consider the following famous quotation: “The Astonishing Hypothesis is that ‘You,’ your joys and your sorrows, your memories and your ambitions, your sense of identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll’s Alice might have phrased it: ‘You’re nothing but a pack of neurons.’” This comes from Nobel Prize winner Francis Crick. If you’re nothing but a pack of neurons, then all we need to understand all the complexity of humanity are the tools furnished by a particular branch of biology called neuroscience, with perhaps a dash of physics and chemistry thrown in. It smacks of a kind of reductionism often encountered in the popular writings of scientists, very often towards the end of otherwise really interesting books. But reductionism doesn’t work for pencils. It’s unlikely to be a plausible approach to understanding people.

Monday, 28 August 2017

Scientism


If “new atheism” (NA) is, if not dead, perhaps terminally ill, then one of the contributory factors to its demise is the scrutiny that its supporting doctrines have come under. Whether cause or consequence, NA has always been closely linked with “scientism”. Scientism is not science, does not work in the same way as science, and does not (or should not) have the same authority as science. A bit like NA itself, it’s not new; it has probably been around in one form or another as long as science itself. But it really began to emerge in the late 19th century with the desire of some in science to paint the only possible relationship between science and other disciplines, or between science and religious faith, as a war in which there had to be a winner and a loser. It kicked around in the background for a while, probably popped up in many undergraduate science courses, and came to public prominence more recently as a supporting pillar of NA.

What is it? Definitions abound, but at its heart it’s an understandable (and now familiar) view: the only truth that counts is scientific truth, and therefore the scientific method is the only means of discovering truth. A series of classic statements can be found in Peter Atkins’ short essay “Science as truth”, published in 1995. Speaking of science, Atkins claims that “There appear to be no bounds to its competence… This claim of universal competence may seem arrogant, but it appears to be justified.” All religion (grouped with studies of the paranormal) is dismissed as an “obscurantist pursuit”. Science is the “greatest of humanity’s intellectual achievements”; in contrast he thinks it a defensible proposition that “no philosopher has helped to elucidate nature”! I commenced my own scientific journey in 1979 when I began my science degree at the University of Glasgow. There were certainly some lecturers to us first-year biology students who weren’t backward at dropping such sentiments into their lectures. I now suspect that this was because their own historical and philosophical education was sadly lacking. As a student, I found such views baffling; as a scientist, more than thirty years later, I find them embarrassing.

There have been, and are, lots of responses to scientism. Some have come from those of a theological disposition. I rather like John Polkinghorne’s comment on scientism (in “Theology in the Context of Science”, p46) that it is “the rash and implausible claim that science tells us all that is worth knowing, or even that could ever be known. Embracing that belief is to take an arid and dreary view of reality.” Polkinghorne writes as a theologian and former (distinguished) physicist. For a wide-ranging and eloquent critique from a scientist’s standpoint, read Austin Hughes’ “The Folly of Scientism”. Hughes writes for more than just the sake of an argument. He has a real concern that scientism’s overreach will eventually cause science big problems: “Continued insistence on the universal competence of science will serve only to undermine the credibility of science as a whole.” With contemporary attacks on expertise ringing in our ears, and with science now worrying within about the reproducibility crisis, I think he’s right to be concerned.

Part of Hughes’ case is that philosophers are far from innocent when it comes to scientism. Some schools of philosophy provided a major impetus to it (ie the logical positivists), while others colluded in its rise. It always bemused me that 19th century theology gave up the tussle so easily. But philosophy being philosophy, scientism didn’t have it entirely its own way. At least now there does seem to be something of a fightback going on, whether it’s Roger Scruton’s approach from art history, or Peter Hacker’s more analytical critique.

To my non-philosophical mind, many of those objecting to scientism seem to be united in a common reaction to the ignorance of those who promulgate it. This is a version of the disdain for other approaches that has been so much a part of NA. From their different perspectives, scientism’s critics have pointed out that it often derides and dismisses ideas that are never fully defined or fairly discussed. Some have objected to its selective memory when it comes to the history of science itself. Others have pointed out that it has a habit of blundering into other areas of academic endeavour, oblivious to important concepts and developments, constructing weak arguments and reaching fallacious conclusions. Particularly in popular accounts, this leads to a series of illusory battles against straw men, which of course are convincingly won.

It’s always struck me that this is something that often marks NA’s attacks on religious belief. Of course if you take the very weakest form of an argument it will be easy to defeat it. Having defeated the weakest form, it’s a short step to the claim that all arguments of that type are also therefore defeated. Showing that diverse beliefs in fairies, Santa Claus and large lizards controlling earth from the moon are irrational is not likely to be that relevant to debunking beliefs in well attested and evidenced ancient events that believers claim to have transformative power today. Such debunking may be possible, but it was always likely to take much more careful work than many in NA were apparently able or inclined to do. And the sheer logical inappropriateness of the natural sciences for doing this work was clearly lost on them.

As with the reported death of NA, it’s unclear to me what the fate of scientism will be. As Hughes argued, its fate will likely have important effects on science itself. As a scientist, I’m committed to the scientific endeavour, and think that within its area of competence science offers the best way to answer certain types of questions. But it can’t answer every type of question. For that we need the tools of philosophy, history, anthropology and the rest. And for that most important type of question (the why rather than the how)? If I were you I’d turn to Scripture rather than scientism (or even science).  

Tuesday, 15 August 2017

The strange life and (alleged) death of “new” atheism

“New” atheism, the type ascribed to Dawkins, Harris, Hitchens et al, began its short life (according to its Wikipedia entry as of the 12th August, 2017) around 2006, when it is claimed the term was first coined. The writer(s) of the Wikipedia article clearly don’t have a very good internet connection. Even a pretty cursory search of the web throws up abundant material demonstrating that the label has been around much longer. As for what is being labelled, even many atheists are unclear on what was really new about “new” atheism.
Let’s start with the label. Back in 1984 Robert Morey published “The New Atheism and the Erosion of Freedom” (he was not a supporter). But the term has a much older history than even that. A French Jesuit in the 1690s wrote a book called “The New Atheism” against the philosopher Spinoza. In the 19th century William James is reported to have used the term. Spinoza, Hegel and Nietzsche (all philosophers) probably thought they were up to something new, and would not be too troubled by the label atheist. Mind you, being philosophers, they’d probably want to embark on a long definitional discussion (of the sort that wouldn’t sell these days) and conduct extensive research. The intellectual attention span seems to have shortened considerably. Towards the end of the 19th century, scientists like T.H. Huxley and Ernst Haeckel clearly fitted the mould of the scientifically educated and inspired atheism of Dawkins and Sam Harris. I’ve heard the term “new atheist” applied to them (and the other late 19th century Darwinists), although I haven’t been able to track down its use in contemporary sources. However, it seems that neither the label, nor the thing labelled, is particularly new.
Some have argued that it was not so much the content of the new atheists that was new and exciting, but their style (a classic example of style over substance, then). It was the militancy, swagger and verbal dexterity of the likes of Dawkins, Harris and Hitchens; their lack of respect for their theist interlocutors, and lack of deference for transparently fatuous arguments. Personally, I’ve always rather liked Richard Dawkins’ ability to turn a phrase. When it comes to his passion and skill in communicating science and its achievements, there’s much to admire. It’s when he wandered out of his area of expertise, and got on to the subject of religion, trying to smuggle his undoubted authority in the first realm into the second, that he became less admirable (a view also echoed here). It’s not that he’s not entitled to anti-theist or more widely anti-religious views; nor is it that he’s not entitled to write and talk about them with a passion. It’s that when he does this, he has no special authority. Clearly the new atheists were observers of (some) religious practice and had strong views on the subject. But there was a lack of expertise on the issues they often tackled. Theologians, religious scholars and scholars of religion, and philosophers (including some who were by no means theists) pointed out this lack of expertise. But coming back to the issue of newness: in terms of militancy and verbal skill, is the current crop any more militant and skilful than Bertrand Russell (or a host of others from previous generations) in full flow?
It’s only fair to point out that some atheists have contended that both the concept and the content of “new atheism” are a straw man. Perhaps somewhat disconcerted by the naivety of some new atheist writing, some “old” atheists might be tempted to claim that “new” atheism is a bit of a theist hoax. But now another twist. As well as its disputed birth, and its somewhat ill-defined life, it now looks like its demise has been pronounced.
Throughout the internet, blogosphere and across the commentariat the question has been posed – is new atheism dead? In some cases death is pronounced with enthusiasm and comes as no surprise (eg Ed West in the Catholic Herald, “New Atheism is Dead”). In other cases (like here) its demise is perhaps tinged with more regret. For some the problem lies with the causes and views some of its prominent proponents have been linked with, although in the political sphere it has been called out for both right-wing and left-wing bias. The charge of misogyny has been levelled occasionally. A quick search will provide examples for anyone who’s interested, but this Phil Torres article provides an interesting starting point. Now again, a conspiracy theorist could claim that this is all some kind of theist plot. But the criticism is so wide-ranging in terms of sources and content that this is scarcely sustainable. While I don’t want to appear overly gleeful, it is interesting that the new atheists do seem to be a bit friendless.
Perhaps it is because of their style after all. So very often the tone they adopted was one of disdain. But this seemed to spring from an almost wilful ignorance of their opponents’ various positions and arguments. Arguments for and against theism in particular and religion in general (particularly the organised sort) have flowed back and forth over a very long period. This longevity alone suggests that the issues at stake may be genuinely complex, and for all sorts of reasons. Of course if you pick the weakest caricature of the arguments you oppose, you’ll always be tempted to treat them with disdain. A starting assumption seemed to be that those of a religious persuasion were just so obviously stupid that they deserved no kind of respect. Now it may be that there are things that are believed, which could count as religious, which are stupid. And there may well be religious people who are stupid, and who do and say stupid things. But it seemed as if the starting point for new atheists was that all religious views, and all religious people, were obviously stupid. This has no more traction as an argument than the contention that if I find a single stupid atheist, then atheism is clearly stupid. I have more respect for atheists than that, whether old or new. And its sheer unreasonableness probably did the new atheists no favours with a wide audience.
I admit that this may be perception as much as reality. If you were a theist on the end of, or an observer of, a typically robust new atheist critique, a sneer may have been detected where none was intended. If you were a fellow traveller with the alleged sneerer, you might just hear robust and triumphant argumentative thrusts. But given the friendlessness of new atheism, it would appear that more has been going on than the offending of sensitive theists.
Even if the death of “new” atheism has been somewhat exaggerated (to misquote Twain), a more respectful dialogue, one that is more comfortable with complexity and subtlety, and the need for hard thinking rather than just good put downs, would perhaps be a fitting legacy.

Monday, 17 July 2017

The Faith in Science

The blogosphere is a big and diverse place. There's all sorts of stuff out there (and here). One could spend one's life navigating it and responding to what one finds; there are things to enrage, engage or intrigue. I recently came across a post in the New Humanist blog, written a while ago by Mark Lorch (chemist and science communicator at the University of Hull), entitled "Can you be a scientist and have religious faith?". This piqued my interest, given that it's a question that seems to keep coming around, and one that I've examined from time to time in my own humble corner of this vast landscape.

His post has an interesting starting point: "... I could never reconcile what I saw as a contradiction between the principles of the scientific method and faith in a supernatural god." Let us leave to one side the issue of whether "the scientific method" is a real thing; Nobel laureate Sir Peter Medawar had his doubts (see his essay "Induction and intuition in scientific thought" in Pluto's Republic). Also of interest is his observation that, as a professional scientist in a university, he is surrounded by other scientists who have "religious faith". And not merely a formal or perfunctory commitment to religion. He's on about honest-to-goodness, fundamental, Bible-believing faith of the sort that really outrages the evangelical "new atheists" that Terry Eagleton refers to collectively as "Ditchkins". So here's some data indicating that I'm not particularly atypical and my views are not really out there (always a comforting thought). I'm not claiming that I'm typical, just that Christians who are "proper" scientists are not extinct, or even on the endangered list (at least not yet). You would get quite a different impression from some quarters.

There were of course comments in the blog that were at first less welcome, if only because they seemed to betray a lack of thought and research. For instance: "Ultimately faith is the knowledge that something is true even though there is not evidence to support it...". There may be faith of this sort out there, but this is not the faith that the Bible writers call for, or that Christian believers exercise. Christian faith is a response to evidence. Yes it is a response that involves, at a certain point, a degree of trust, but that's no different to life in general and science in particular.

Starting with Francis Bacon, Lorch arrives at the conclusion that "without ever realising it, I too have a deeply-seated faith in my own (scientific) belief system." Glory be! Sense at last. Notwithstanding the problems with his definition of faith above, I welcome his honesty about his own thought processes. The problem is, it's worse than he thinks (if faith being involved in science is a bad thing). One reason for his conclusion is the conviction that in science a thing called "induction" is involved. This appears to be a sound way of moving from observations/facts/results to new knowledge. But it turns out no one really has an explanation for why it works when it works. It does appear to work, though, so he's happy to stick with it in the absence of convincing evidence. Hence, exercising faith. To be fair, I don't think this mysterious process of induction is why science works, and neither did Medawar (hence his essay on the subject). But there are other foundations on which science rests which we understand even less than "induction", and yet we're prepared to press on regardless. Take two examples: nature's uniformity and the principle of reproducibility.

I beaver away in my lab in Liverpool, collecting and analysing data, finding out stuff about vision and eye movement. Once I've completed a series of experiments, I write them up and submit them to a scientific journal. The journal organises other scientists to review what I've written, there's usually a bit of back and forth, and eventually the journal agrees to publish my report of my endeavours. If we've all done our jobs, science creeps incrementally and imperceptibly forward, just a bit. We assume that what I've done in Liverpool could be done anywhere else (ie replicated) and that (as long as I've been honest and accurate) the result will be the same. This is because of the uniformity of nature. The same material and physical forces and processes that operate in my lab in Liverpool operate in New York, Tokyo or Mumbai. But this uniformity, on which science rests, hasn't been established by some grand experiment; it just "is". It's assumed. But it's fundamental to the whole process. We take it as an article of faith.

And this business of reproducibility is interesting too. Now it turns out that you could replicate my experiments without too much difficulty. It would cost a little bit of money (but not too much, because I'm a bit of a cheapskate), some time and a bit of skill. But nothing too taxing. Nevertheless, rather than do this, people are prepared to take on trust that I've done what I've said I've done, and that the results are sound. So, rather than repeat my results, they build on them and do something slightly different and new, to make another small advance. But what about an experiment like the one that established the existence of the Higgs boson? That took billions of euros, thousands of scientists, and large chunks of continental Europe. Are we waiting until another Large Hadron Collider is built before we accept the result? No, we take CERN's results on trust. We exercise (reasonable) faith. And all of this in the presence of what some in science are calling the reproducibility crisis, in which this type of faith has been abused by the unscrupulous or the occasionally outright fraudulent.

My intention is not to undermine science in any way. It's simply to point out that, like most other areas of life, faith is key to it, not incidental. So a double standard is applied by those who would like to bash my Christian faith, and claim that on the basis of science I must be suffering from some kind of reason deficiency. It turns out I'm neither alone, nor am I deluded. Mark Lorch appears to agree.