Introduction
‘I think therefore I am’: a remark attributed to one René Descartes, who is said to have first made it getting on for 400 years ago. A remark which I have bought into, in the sense that I think it self-evident that one is conscious (in the sense that interests me) when one says so. Allowing here a certain amount of latitude as to exactly how one does the saying.
A typical case would be my naming the pot of jam sitting in front of me. ‘That’s a nice looking pot of jam there. Perhaps I should have a go at it’. In such a case it is clear that I am conscious and that I am conscious of the pot of jam. The visual stimulus of the pot of jam has been captured, wrapped in a verbal inner thought. Many would argue that such consciousness involves, includes, a sense of self. And the same sort of thing might be said of other stimuli, for example the sound of church bells drifting in through an open window of a summer evening – a sound which for me evokes Sunday afternoons spent as a child in a place called Hemingford Gray.
Both are the sort of normal experience which I think Hurlburt’s DES (descriptive experience sampling) is quite good at capturing. The normal experiences which account for a good chunk of our waking life and which any model of consciousness needs to be able to explain. See reference 4.
It is also true that we do not report on most of our consciousness. If there is a break in the stream of consciousness, perhaps because of one of Hurlburt’s bleeps, we can usually report on what was in consciousness at the time, with the business of reporting probably interacting in some way with the business of being conscious, the act of reporting disturbing that which is being reported on. But most of what passes through our consciousness does just that, leaving little or nothing that we can recall or report on later. To that extent, the stream of consciousness essayed in Joyce’s ‘Ulysses’ is a work of the imagination rather than of report. See, for example, page 743 of the standard (Bodley Head) edition.
All of which introspection can lead us into an elaborate taxonomy of the different kinds of consciousness, a taxonomy in which it is all too easy to get tied up in knots.
For which reason, the once tidy notion that consciousness is usefully defined by the ability to report on it is challenged in the interesting, accessible and open access paper by Victor Lamme that I came across the other day (see reference 1), a paper which suggests that we might do better to do away with definitions of consciousness which depend on the vagaries of subject report and to rely instead on something more dependable, something electrical or chemical which can be reliably measured. Something which might well turn out to be strongly correlated with – but not to coincide with – such subject reports.
Some more complicated examples
There are certainly lots of tricky cases, borderline cases. Cases in which it is easy to get tangled up, cases which generate the elaborate taxonomy mentioned above. Including cases in which the nice straightforward reports of the two examples just given are not going to be available.
Perhaps the savage in the jungle would skip the verbal inner thought and go straight to desire. He would want the jam and would move, more or less immediately, to get at it. Rather in the way that my horse, supposed to be carrying me along the road, would spot and reach down for a tasty looking clump of grass, nearly shooting me over his neck in the process. In both cases we have something which sounds like consciousness, but there is no contemporaneous verbal report in the case of the savage and no report at all in the case of the horse.
McGahern talks in one of his novels about a mother sheep who is sad at the loss of her lamb, a lamb which she forgets about within a couple of hours or so, at which point the sadness passes. With the evidence of sadness having been recognisable signs of distress in the mother sheep. Here we have a report, albeit not a verbal report, of a candidate emotional state. But how do we know, how can we be sure, that we are not projecting our own conscious feelings in such a situation onto the sheep?
From where I associate to two places. First, my telephone writing all kinds of anguished messages to its internal log about some trouble caused by a botched software update from Microsoft. Not so botched as to stop the telephone taking pictures, but botched enough to generate the anguished messages, visible to the cognoscenti of such matters. Second, having moments of consciousness which are so fleeting that I have forgotten about them a few moments later. Never to be recovered, except in the case that Fernyhough tells of, in which one is carrying a regularly snapping camera, from whose images one can recover a lot of what would otherwise be forgotten moments of consciousness. See reference 2.
In which connection, Lamme points to the link between the sort of consciousness on which one can report and memory: one has to be able to hold something in working memory for long enough to be able to report on it, to enable the time-consuming motor activities of reporting to do their stuff. No memory, no consciousness.
Then there are the people in what is called a locked-in state, who may well be conscious, but be more or less unable to report the fact. Perhaps not even to the extent of doing Morse code with their eyelids – sometimes the last voluntary motor action to go. Perhaps only to the extent of doing Morse code in the brain, which can be, more or less, detected with an fMRI scanner or an EEG machine. Morse code in the sense, for example, of thinking of a game of tennis when you want to reply yes to my question and thinking of an elephant when you want to reply no.
One can devise experiments in which babies or monkeys, neither of which has language, can demonstrate awareness of images on a screen by pressing buttons. I dare say one can train a variety of other animals to do this sort of thing: certainly birds, perhaps fishes. Perhaps the octopus with its rather unusual brain, a token invertebrate? But in such cases, how sure can one be that the experimental subject is having the same experience that I would have? After all, one could no doubt program a clever but unconscious computer to behave in the same way.
There is much less doubt in the case of the locked-in people, although it seems likely that they would not remember being questioned were they to wake up a little while later. Rather as we do not always remember what we first experience when coming around from an anaesthetic – although we might have seemed fairly normal from the outside. But such failure to remember after the event is not the same as a failure to report at the time, even though both failures are, inter alia, failures of memory.
One can devise experiments which demonstrate that while adults may not be consciously aware of this or that stimulus, their brains certainly are. With some of the experiments which Lamme talks about involving presenting subtly different images to the two eyes and then seeing what turns up in the brain. And then, back in the 1950s, there was the scare about subliminal advertisements, hidden in the films, which supposedly induced us to buy Coca-Cola in the interval without our ever knowing anything about them.
In a similar vein, one can devise experiments in which a hand seems to be aware of something that the brain is not conscious of, at least not in the ordinary way. Or in which the hand of which the brain is conscious is the wrong hand.
Rather different is the workshop, where I might be working away on a mortice and tenon joint, perhaps part of a door or a table that I am making. Working away with arms, hands and eyes, with plenty of interaction between afferent and efferent traffic – but when one is working well, there is usually no inner thought, the work just flows. One is conscious in the ordinary sense of the word, one has to be to do this sort of work, but one is only conscious in the sense which interests me here from time to time, with a sort of self-consciousness perched above the temporarily suspended action. Most of the time it is not like the pot of jam at all.
Or different again when I am, perhaps, driving from Epsom to Swindon for a meeting, and get to Swindon to find that I can remember nothing of the journey, having been thinking of the forthcoming meeting the whole way. Presumably if one were interrupted while in such a state, the report would be the same: no recollection of driving, plenty of recollection of meeting. Is it relevant that the business of both thinking in words and reporting is single-threaded? One cannot have two lots of thinking going on at once, and one cannot report on two lots of consciousness at once because one only has one talking apparatus. See reference 5.
Red herrings
One can make mistakes. One can think that one is conscious of a thrush pecking for grubs in the grass in front of one, when actually what one is seeing is an empty crisps packet blowing about in the wind. Or one can devise more or less elaborate experiments which reliably induce such mistakes. But I think that this particular case is a red herring. One might be mistaken about the object of one’s consciousness, but one is certainly conscious of it.
And one can lie about what one is experiencing. Also a red herring, a complication which does not change anything important.
More interesting is the different way in which different people might report what must, in many respects at least, be the same experience. Neither person is lying and both are describing whatever it is from their own point of view; it is just that the two points of view differ. That is not to deny that the one might be right and the other might be wrong – but that is not the point of interest here.
Models
Forty years ago lots of people were busy building complex computer systems, perhaps to process a payroll, perhaps to produce the retail prices index. The style at that time was to model the requirement, in terms of both process and data, and to push that modelling down until the necessary computer programs and system sort of emerged at the bottom. A top-down process.
As time went on, computers came with increasing amounts of built-in data, process and function. It started to make sense to complement the top-down design with a bit of bottom-up implementation: leverage the tools that were by then available, rather than starting from scratch every time, in the old way.
While here, the situation is different again. We already have a working system, the conscious brain, and the task is to reverse engineer it, to work out how consciousness works. To which end we have the subjective experience and can model that from the top down – with Freud being an eminent exponent of this approach, background in nerves and neurons notwithstanding. One of the points of such modelling being to reduce the complexity of a system to something more tractable, more predictable.
While at the bottom of the heap we have neurons, synapses and the chemical and electrical soup that they live in. We know a good deal about the detail but get rather lost in the complexity which results from the numbers – so one tries to model structure, function and process from the bottom up. And with the fine new tools now available for peering at brains, lots of people are doing just that.
With the grand plan being that top-down meets bottom-up in some pleasing way – rather like the two halves of a big bridge growing towards each other and meeting in the middle. See references 8 and 9 for previous musings on that front.
Against this background, I think that I am in the top-down camp, while Lamme is proposing putting more emphasis on bottom-up.
Conclusions
There are complications with the top-down approach. The fact that most of what we think of as consciousness is not reported. And when there is report, there is the interaction between the report and the subject of the report. The need to involve memory and understanding. The need to rely on the testimony of others, to be able to relate that to one’s own testimony.
But I remain satisfied that consciousness of the ordinary sort, on which the subject can report, is a good place to be; it is the touchstone of what it is to be a modern human being. Let’s try to explain and model the easier examples of this phenomenon and then move out to the more tricky examples, examples perhaps involving sheep, babies and monkeys. Furthermore, I believe there is a core phenomenon, exemplified by the pot of jam, for which there is a unitary explanation, an explanation rooted in anatomy and expressed in terms of chemistry and electricity, which would go a good way towards explaining the subjective experience – and which could then be extended to the tricky examples.
I am not satisfied with the sort of short cut, the focus on bottom-up, which Lamme seems to be suggesting: here is a likely process and we define consciousness as being its result. Out with NCC (neural correlates of consciousness) and in with RP (recurrent processing)! In which Lamme appears to be using the phrase ‘recurrent processing’ where Edelman before him had used the phrase ‘re-entrant processing’. Or perhaps it was the other way around. See, for example, reference 6. But for both, the elixir of conscious life. That said, the difference is one of degree rather than of kind, with my position being that we are not yet far enough forward with the top-down model to want to take our eye off the subjective ball.
In the top-down model that I am working on, the idea is that there has to be an image of the pot of jam in the brain, there has to be a process scanning that image, arousing the neurons involved, and there has to be a process scanning the scanning process, a notion which will probably be supported by some kind of duality between data, that is to say images, and process, both of which are expressed by the firing of patterns of neurons. With the most recent salvo in this campaign to be found at reference 3.
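By way of illustration only, a minimal sketch in Python of that layered idea: an image is a pattern of activation, a first-order process scans it and produces a pattern of its own, and a second-order process scans that. All the names and numbers here (Pattern, scan, the activation values) are illustrative inventions of mine, not taken from Lamme, Edelman or anyone else's model of the neurons.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Pattern:
        """A pattern of neural firing, standing in for both data and process."""
        label: str
        activation: List[float]  # crude stand-in for firing rates

    def scan(source: Pattern, scanner_label: str) -> Pattern:
        """A 'process' that arouses a source pattern and leaves a pattern of its
        own, so that it, in turn, can be scanned by a higher-order process."""
        aroused = [min(1.0, a * 1.5) for a in source.activation]  # arousal boost
        return Pattern(label=f"{scanner_label}({source.label})", activation=aroused)

    # The image of the pot of jam, expressed as a pattern of activation.
    jam_image = Pattern(label="pot of jam", activation=[0.2, 0.6, 0.4, 0.1])

    # First-order process: scanning the image.
    first_order = scan(jam_image, "scan")

    # Second-order process: scanning the scanning process. Because the first
    # scan is itself expressed as a pattern, the same operation applies to it.
    second_order = scan(first_order, "scan")

    print(second_order.label)       # scan(scan(pot of jam))
    print(second_order.activation)  # the doubly aroused pattern

The point of the sketch is only the duality: because a scanning process leaves behind a pattern of the same kind as the image it scans, the same scanning machinery can be turned on itself.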
PS: I notice now that both of the examples given at the beginning involve feelings, in the first case desire, in the second nostalgia. Leaving aside the sheep who followed a little further down, I wonder whether one can have feelings without this sort of conscious awareness of what is going on, without being able to report on the cause, or at least the object. The answer to which may well be yes, as I recall reading – I forget where – of people experiencing emotions for which the cause never reached consciousness – and then there is the business noticed at reference 7.
Reference 1: Towards a true neural stance on consciousness – Victor Lamme – 2006.
Reference 2: http://psmv3.blogspot.co.uk/2016/07/madeleine-moments.html.
Reference 3: http://psmv3.blogspot.co.uk/2017/01/layers-and-columns.html.
Reference 4: http://psmv3.blogspot.co.uk/2017/01/progress-report-on-descriptive.html.
Reference 5: http://psmv3.blogspot.co.uk/2016/11/on-saying-cat.html.
Reference 6: Second nature: brain science and human knowledge – Gerald Edelman – 2006.
Reference 7: http://psmv3.blogspot.co.uk/2017/02/the-persistence-of-anger.html.
Reference 8: http://psmv3.blogspot.co.uk/2016/10/more-meeting-in-middle.html.
Reference 9: http://psmv3.blogspot.co.uk/2016/10/meeting-in-middle.html.