In an earlier post, I explained why I sometimes feel that reason is greatly overrated: people often leave unnoticed gaps in an explanation and are not good at spotting them. In addition, we are all prone to various cognitive biases which incline us to believe things when we shouldn’t and to disbelieve things when we should.
I tend to think of science as a systematic approach to avoiding such errors by putting our ideas to the test. I think scientists sometimes find themselves arguing at cross-purposes with philosophers, partly because we (scientists) tend to believe in the primacy of evidence, which (we intuitively feel) can supersede even very weighty arguments based on reason alone. Can we find a common, agreed framework in which ideas, arguments and evidence can be worked through and tested?
This post is about one attempt to arrive at such a framework. Explanatory Coherence is a theory about how we decide whether something is true – a question philosophers term “epistemology”. According to its originator, Paul Thagard, this depends on the degree to which different ideas and observations “hang together” with one another. The very neat thing about Thagard’s theory is that it is specified at a computational level – for any given argument you lay out the ideas and observations involved, and then connect them together according to the degree to which they explain or contradict one another. When you run the simulation, “activation” flows through the network of connections so that, for example, an idea becomes more active if it is connected to a supporting observation, but in turn inhibits a contradictory idea.
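To make the mechanism concrete, here is a minimal sketch of that kind of network in Python. It follows the general update scheme Thagard describes (excitatory links between a hypothesis and what it explains, inhibitory links between contradictory hypotheses, a special evidence unit clamped at full activation); the unit names, weights and parameter values are my own illustrative assumptions, not the originals.

```python
# Illustrative ECHO-style coherence network. Two rival hypotheses:
# H1 explains observations E1 and E2; H2 explains only E1; H1 and H2
# contradict each other. All names and parameter values are assumptions.

DECAY = 0.05        # activation drifts back toward zero each step
EXCIT = 0.04        # weight of an "explains" (supporting) link
INHIB = -0.06       # weight of a "contradicts" link
MIN_A, MAX_A = -1.0, 1.0

units = ["H1", "H2", "E1", "E2", "EVIDENCE"]
activation = {u: 0.01 for u in units}
activation["EVIDENCE"] = 1.0   # special unit, clamped at 1.0

links = {}                     # symmetric weighted links
def link(a, b, w):
    links[(a, b)] = w
    links[(b, a)] = w

link("H1", "E1", EXCIT)
link("H1", "E2", EXCIT)
link("H2", "E1", EXCIT)
link("H1", "H2", INHIB)        # rival hypotheses inhibit each other
link("E1", "EVIDENCE", EXCIT)  # observations are tied to the evidence unit
link("E2", "EVIDENCE", EXCIT)

def step(act):
    new = {}
    for u in units:
        if u == "EVIDENCE":
            new[u] = 1.0       # stays clamped
            continue
        # net input: weighted sum of activation over incoming links
        net = sum(w * act[src] for (src, dst), w in links.items() if dst == u)
        a = act[u]
        if net > 0:            # positive input pushes toward MAX_A
            a = a * (1 - DECAY) + net * (MAX_A - a)
        else:                  # negative input pushes toward MIN_A
            a = a * (1 - DECAY) + net * (a - MIN_A)
        new[u] = max(MIN_A, min(MAX_A, a))
    return new

for _ in range(200):           # iterate until the network settles
    activation = step(activation)

# H1 explains more of the evidence, so it should settle more active than H2.
for u in units:
    print(u, round(activation[u], 3))
```

Running it shows the point of the theory: nothing in the code ranks the hypotheses directly, yet the hypothesis that coheres with more of the evidence ends up with higher activation, while its rival is suppressed through the inhibitory link.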
This is a remarkably simple idea, but it seems to work – that is, it seems to me a plausible account of how people decide to believe some things and disbelieve others. It produces testable predictions and could be refined and extended, while at the same time providing a notation which can be used to develop and test both philosophical and scientific arguments. I think it can also potentially provide a good way to understand why someone else is wrong, and what could be done to change their mind.
I first learned about the idea of Explanatory Coherence when I was a PhD student. As part of this, we had to attend a course on the Philosophy of Science, run by Celia Heyes (very good it was, too). The assessment involved, as best I can remember, writing an essay about some philosophical principle or argument. As my PhD was on neural network modelling, I wanted to address a topic relevant to the methods I was using. My supervisor, George Houghton, told me about Thagard’s paper. For my essay I programmed a computer to represent the arguments as set out in the paper and to run the calculations, which turned out to be surprisingly easy. Doing so sparked loads of ideas about how explanatory coherence could be extended to other areas of reasoning, decision-making, and the philosophy of science. I can’t remember if I got a good mark for the essay, but the ideas have stuck with me, and I often think about the mechanism and how it might be used. I remember rewriting the code from scratch 5 or 6 years later, and I’ve been thinking about doing the same again recently. But when I looked it up, I was delighted to see that there is a freely available version by Patti Schank called “Convince Me” which you can download and run.
So the main point of this post is to encourage you to take a look at it. I think the original paper is very well written, but I imagine the combination of philosophy and connectionist modelling might be off-putting, and it can be very hard to understand how a network model behaves from a purely verbal description. The “Convince Me” software lets you simply try it out.