Saturday 6 January 2018

What is consciousness, and could machines have it?

This post being notice of, and a few comments on, a short paper (reference 1) which I came across more or less by chance, although I did already have a couple of books by the same author. Interesting on at least three counts: for its source, for its mention of the new-to-me Adelson illusion and for its remarks about some neural correlates of consciousness.

I already knew about the paper at reference 2, although, unusually, it has not yet leaked out from behind its paywall and I have not cared to stump up the asking price of $US30 – though I could have had a print copy of the magazine in question, Science, for a more modest $US15. Always irritating to have to pay for sight of work which one suspects, if not knows, to have been funded from the public purse. Probably different from paying for access to the Tower of London, which I do not mind at all, but I do not have an argument about that to hand.

In any event, for some reason, I was checking yesterday whether it had leaked, and came across reference 1, probably a much shorter paper, but with the same title, from the Pontifical Academy of Sciences, the outfit which sports the glossy website at reference 3. It claims Galileo as a member of a precursor organisation, this despite his troubled relations with the church and its inquisition, and it appears to know all about Dehaene, although I was not able to work out whether there was a proper relationship between them; whether he was, for example, an academician. I would certainly think the less of a scientist who cared to be associated with the Pope in that way.

Leaving provenance aside, a short, accessible and interesting read.

I start with the Adelson illusion, illustrated above, whereby the squares marked A and B appear to be of quite different colour. But if, for example, you print the thing off and cut out the two squares, you find that they are actually of the same colour.
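One can check the point without scissors. A minimal sketch in Python – not anything from the paper – assuming the illusion has been saved locally as 'checkershadow.png'; the coordinates of the two squares are placeholders, to be read off one's own copy of the image.

```python
# A minimal check that squares A and B really are the same colour. Assumes
# the illusion has been saved locally as 'checkershadow.png'; the coordinates
# below are placeholders, to be read off one's own copy of the image.
from PIL import Image

img = Image.open("checkershadow.png").convert("RGB")

pixel_a = img.getpixel((120, 200))  # somewhere inside square A (hypothetical)
pixel_b = img.getpixel((180, 300))  # somewhere inside square B (hypothetical)

print("A:", pixel_a)
print("B:", pixel_b)
print("Same colour:", pixel_a == pixel_b)
```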

I think the point is that the brain wants to extract stable, invariant images of the things in the world around it, object invariance being an important and useful property of most of the things in that world. So the brain tries to work out what colour the squares really are, behind the appearances, taking into account the shadow and, perhaps, the fact that we appear to have a regularly tiled chequerboard. And in this case, unlikely to arise in the real world, so perhaps unimportant there, the brain gets it wrong. So the question is, would a machine trying to be a human make the same kind of mistake?

A question complicated by the fact that we would not want vision to blot out shadows. Shadows are a part of the world which we sometimes want to know about and take account of. They might, for example, be an important part of a Dutch old master painting of an interior.

I suppose part of the answer is that we want it both ways in the one image. We want to see the thing as it really is, whatever that might mean, but we also want to see the shadow. Having it both ways being something, at least, that a multi-tasking computer can manage quite well.
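By way of caricature, a few lines of Python showing a computer having it both ways: keeping the raw image, shadows and all, while also maintaining a crude estimate of the surface colours behind those shadows. The blur-as-illumination trick is a standard retinex-style approximation, not anything from the paper, and the numbers are made up.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Keep the raw image, shadows and all, and also a crude estimate of the
# surface colours behind the shadows. The heavy blur stands in for an
# estimate of the illumination; sigma is a made-up choice, not principled.
raw = np.random.random((100, 100))             # stand-in for a photographed scene
illumination = gaussian_filter(raw, sigma=15)  # smooth illumination estimate
reflectance = raw / np.maximum(illumination, 1e-6)

# Both views are now available at once: 'raw' keeps the shadows,
# 'reflectance' approximates the surfaces behind them.
```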

Then, lastly, we have the neural correlates of consciousness, on which Dehaene is something of a whizz. The idea is to set up an experiment whereby a stimulus is on the border between consciousness and the unconscious. Perhaps sometimes it is one, sometimes the other. Perhaps you need to tweak the stimulus very slightly, to push it one way or the other. If you then peer closely at the brain in question, you can try to take the difference, to find out what it is in the brain that is different when the stimulus is conscious.
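The taking of the difference can be caricatured in code. A sketch with invented data rather than real recordings: trials of an identical, threshold-level stimulus, each tagged with the subject's seen or unseen report, and a channel-by-channel subtraction at the end. The array names and shapes are mine, not Dehaene's.

```python
import numpy as np

# Invented data: one row per trial, one column per recording channel, and a
# seen/unseen report for each trial of an identical, threshold-level stimulus.
rng = np.random.default_rng(0)
recordings = rng.normal(size=(200, 64))  # 200 trials x 64 channels
seen = rng.random(200) < 0.5             # trial-by-trial reports

# Taking the difference: mean activity on seen trials minus mean activity on
# unseen trials, channel by channel. Channels with a large difference are the
# candidate correlates of consciousness.
difference = recordings[seen].mean(axis=0) - recordings[~seen].mean(axis=0)

candidates = np.argsort(np.abs(difference))[::-1][:5]
print("Channels most different between seen and unseen:", candidates)
```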

In this, quite reasonably, Dehaene focuses on the sort of consciousness on which the subject can report. You are conscious of something if you can report on it, and with a proper experiment it should not be difficult to weed out the subjects who like to fake. The sort of consciousness which Hurlburt examined, and in which I took an interest a year or so ago. See, for example, reference 8.

Dehaene lists five areas of difference – which he summarises under the headings amplification and access to prefrontal cortex; late global ignition and meta-stability; brain-scale diffusion of information; global spontaneous activity; and late all-or-none firing of ‘concept cells’. This last, for example, amounts to the observation that each concept cell is a neuron which only fires when the concept concerned – perhaps Marilyn Monroe or the British Museum – is present in consciousness. Subliminal stimulation is not good enough. It is plausible that these cells point to all the stuff in the brain about that concept – a cell firing in isolation being of no help at all, rather like knowing someone’s national insurance number without having access to the central files. So, if enough information is provided to identify the concept, the firing of the corresponding concept cell will then activate all the other information related to that same concept. Or perhaps some context-sensitive sample of that other information. If I see a large cat covered in brown spots on a cream background, that is enough to know that we probably have a leopard. I can then access all the information about leopards, for example that they are rather fond of eating domestic dogs. Which might be important if I am out walking my own dog.
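The pointer idea can be caricatured as a dictionary lookup: the concept cell carries almost no information itself, being merely the key which unlocks everything filed under the concept. A sketch; the store and its contents below are invented for the purpose.

```python
# A concept cell as a key into an associative store. The cell fires
# all-or-none: either the evidence is strong enough to identify the concept,
# in which case everything filed under it becomes available, or nothing
# happens at all. The store and its contents are invented.
KNOWLEDGE = {
    "leopard": ["large cat", "brown spots on a cream background",
                "rather fond of eating domestic dogs"],
    "Marilyn Monroe": ["actress", "1950s", "Some Like It Hot"],
}

def concept_cell(concept, evidence, threshold=0.5):
    """Fire only if the evidence clears the threshold; subliminal input does nothing."""
    if evidence < threshold:
        return []                      # subliminal: the cell stays silent
    return KNOWLEDGE.get(concept, [])  # firing unlocks the associated information

print(concept_cell("leopard", evidence=0.9))  # fires: all the leopard lore
print(concept_cell("leopard", evidence=0.2))  # below threshold: nothing at all
```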

But all five areas depend on activity all over the brain, or at least all over large parts of the brain. So how does that work with my notion of a local and layered workspace (LWS), for which see, for example, references 4 and 5?

My answer is that they can co-exist. LWS is about the generation of the conscious experience, which Dehaene does not address in a direct way at all. He identifies processes which are active when there is conscious content, and not otherwise, but he makes no claim for their generating that content or amounting to that content, a claim which I do make for the LWS. I see the processes that Dehaene identifies as providing input to or as being output of the LWS.

And while he might well be right about the four processes which computers presently lack and which would be needed to make them more like people – that is to say a workspace for global information sharing; a repertoire of self-knowledge; confidence and “knowing that you don’t know”; and theory of mind and relevance – I do not think that those missing processes bear on the generation of the conscious experience. I believe that one could get computers to do the stuff which he talks about without those computers being conscious at all.
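Of the four, “knowing that you don’t know” is perhaps the easiest to caricature in code: a decision function which returns an answer together with a confidence, and which declines to answer when that confidence is too low. The numbers below are invented for illustration.

```python
# 'Knowing that you don't know': a decision function which returns an answer
# together with a confidence, and abstains when the confidence is too low.
def classify(probabilities, threshold=0.7):
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return None, confidence  # abstain: the system knows that it does not know
    return label, confidence

print(classify({"leopard": 0.9, "jaguar": 0.1}))    # confident answer
print(classify({"leopard": 0.55, "jaguar": 0.45}))  # abstains
```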

Next stop, reference 6.

In closing, given the title of the paper, I should add that Dehaene appears to be optimistic about machines: that they will, one day, be conscious, and that there is no deep reason why they should not be. A stance with which I agree, although I worry about the ethical and societal implications. Perhaps the Pope has got a role after all!

PS 1: I should also add that the LWS presently fails as a theory because it proposes activity in a small, central part of the brain which is, for now at least, rather difficult to inspect when it is alive, up and running – not least because we have yet to suggest a particular location. One promising candidate, the claustrum, has been eliminated by counterexample – wounded veterans of the Vietnam war. See reference 9.

PS 2: I expect the chap noticed at reference 7, Josef Albers, knew all about things like the Adelson illusion. He certainly wrote about plenty of his own.

Reference 1: What is consciousness, and could machines have it? – Stanislas Dehaene – 2017. Open access, perhaps a précis of reference 2.

Reference 2: What is consciousness, and could machines have it? – Stanislas Dehaene, Hakwan Lau, Sid Kouider – 2017. Behind a paywall.

Reference 3: http://www.pas.va/content/accademia/en.html.

Reference 4: http://psmv3.blogspot.co.uk/2017/11/a-dogs-life-reprised.html.

Reference 5: http://psmv3.blogspot.co.uk/2017/11/the-electrical-assumption.html.

Reference 6: Consciousness and the brain: deciphering how the brain codes our thoughts – Stanislas Dehaene – 2014. Penguin.

Reference 7: http://psmv3.blogspot.co.uk/2017/06/late-convert.html.

Reference 8: http://psmv3.blogspot.co.uk/2016/08/descriptive-experience-sampled.html.

Reference 9: The effect of claustrum lesions on human consciousness and recovery of function – Aileen Chau, Andres M. Salazar, Frank Krueger, Irene Cristofori, Jordan Grafman – 2015.
