Monday 5 June 2017

Reading the minds of monkeys

Brains were in the news a few days ago with the Guardian running a piece about some recently published work on the way that monkey brains deal with human faces.

The work in question is to be found at reference 1 and is all about modelling human faces as points in a multi-dimensional linear space, or more precisely in a 50-dimensional linear space. It seems that this is what monkeys do, with there being neurons which code, in their firing rates, in their responses to faces, for places in that space. With it usually taking a few such neurons to triangulate down to an individual, or to some small group of like individuals. I associate to the old fashioned ways of finding position by drawing lines on charts from radio beacons or by taking two or three star shots with a sextant. Maybe a couple of hundred neurons to do the whole job, or more precisely to record the results of the whole job. Working from some face in the outside world to the firing (or not) of this couple of hundred neurons in the brain takes a whole lot more neurons, presumably thousands if not millions.
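The linear model of reference 1 can be sketched in a few lines of code. The sketch below assumes, as the text does, a 50-dimensional face space and a couple of hundred face cells, with each cell's firing rate taken to be a linear projection of the face onto that cell's preferred axis; the numbers and the least-squares decoding are illustrative, not a claim about how the brain actually does the triangulation.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 50       # dimensionality of the face space, per reference 1
N_CELLS = 200  # a couple of hundred face cells, as in the text

# A face is a point in the 50-dimensional face space.
face = rng.normal(size=DIM)

# Each cell has a preferred axis; its rate is modelled as the
# projection of the face onto that axis.
axes = rng.normal(size=(N_CELLS, DIM))
rates = axes @ face

# With more cells than dimensions, the face's coordinates can be
# recovered from the rates -- the "triangulation" mentioned above,
# here done by least squares.
decoded, *_ = np.linalg.lstsq(axes, rates, rcond=None)

print(np.allclose(decoded, face))  # True: the 50 numbers are recovered
```

The point of the redundancy is visible here: 200 noiseless cells pin down 50 numbers exactly, and with noisy cells the extra readings would average the noise away, much as extra star shots tighten a fix.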

While reference 2 is much more about modelling things like articulated bodies, which can be satisfactorily modelled rather like sausage men, that is to say assemblies of sausages. Rather like those balloons that children use for making models of animals, the sort of balloon featured at reference 3.

With being able to handle faces and bodies clearly being a good thing if you are a monkey, or indeed any kind of animal which is likely to interact with other animals – be they prey, predators or conspecifics – which are large enough to have faces and bodies.

Note that both the faces and bodies of vertebrates are fairly predictable objects with plenty of redundancy. Their general shape and organisation does not vary that much. The methods described in these two papers are not going to work anything like as well with more arbitrary, less predictable objects – which, fortunately, are not often important in our animal world.

And with the question for me being how does such a modelling capability fit in with the concentrated data structure I talk about at reference 4? With that data structure being organised on mostly topographic lines, including, for example, images which we would recognise as faces without needing much translation or transformation. Images which may, however, stray some way from the sort of thing that a camera might produce. The brain is a more active beast than the camera: it is not content just to take an image, it works it up as well. With different ends in mind, but rather as a photographer might work an image up in Photoshop. Or as a microscopist of old might have used his pencil to turn the rather messy looking image he sees down the tube into a rather more comprehensible diagram – including here the possibility of his getting the diagram completely wrong. As the brain sometimes does.

Part of the answer is that our data structure is a manufactured good, the result of a complex manufacturing process, which is quite likely to include, inter alia, the sort of modelling described at reference 1. Modelling which generates the information which allows the compiler to put structure on and to add descriptive data to the raw sensory data, supplements which are needed to make the image both conscious and more or less comprehensible. We see no conflict here; just different aspects of the same system.

Another part is that the point of all this modelling is to identify the object in question. What sort of an animal is it? Who am I looking at? Sometimes it will be enough to know what sort of object, sometimes we will want to know what particular object. Once we have that information we can tap into memory and pull up all the data we have on that group of objects, say tigers, or on that particular object, say Charlie Chaplin. And this is what gets into our data structure, along with a tidied up version of the image.

Note that the work in question does not bear on what monkeys see, on how faces look to them, rather it is about how they analyse and code what they see, which is not the same thing at all. But exactly how does the monkey brain, and presumably the human brain, learn to do the sort of analysis needed to extract the low dimensional model of faces described by Chang and Tsao? Maybe it can do Fourier analysis as well? Or as the authors put it: ‘while simple, this model is also surprising because it means face cells [in the inferotemporal cortex] are performing a rather abstract mathematical computation’.

But the work does open the question about how the brain produces what we see and, more particularly, the images that we bring to mind from memory. Does the brain process the image of a face down to the fifty to hundred numbers which specify its position in face space and then just pass those numbers around, rather than the entire image? It does not seem likely that it does this with images of faces present in the here and now, but it seems much more likely that it does this with images of faces retrieved from memory. Storing a hundred numbers is a much less demanding proposition than storing an image. And as far as matching a face in the here and now with faces that have been seen before is concerned, much more efficient. We only have to process the face into numbers once, the first time that we see it. Leaving aside the question of updates as the face changes with time.
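The matching idea can be made concrete with a small sketch: faces stored in memory as vectors of 50 numbers, and a newly seen face matched to the nearest stored one. The names and the distance threshold are made up for illustration; nothing here is a claim about how the brain actually does the lookup.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 50  # numbers stored per face, per reference 1

# A small "memory" of previously seen faces, each held as ~50 numbers
# rather than as a full image.
memory = {name: rng.normal(size=DIM) for name in ("alice", "bob", "carol")}

def match(code, memory, threshold=2.0):
    """Return the stored face nearest to `code`, or None if nothing is close."""
    name, stored = min(memory.items(),
                       key=lambda kv: np.linalg.norm(kv[1] - code))
    return name if np.linalg.norm(stored - code) < threshold else None

# A face seen in the here and now: bob's code plus a little noise,
# standing in for the face having changed a little since last time.
seen = memory["bob"] + 0.05 * rng.normal(size=DIM)
print(match(seen, memory))  # bob
```

Comparing vectors of 50 numbers is cheap; comparing raw images, with all their variation in pose and lighting, is not. Which is the efficiency argument made above, with the update problem – the face ageing out of its stored code – left open, as in the text.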

Conclusions

Our position remains that by the time a face gets into consciousness it has been turned into an image of the ordinary sort, of the sort that you might find in my telephone. The face does not appear in our data structure as a point in face space, which seems to us to leave rather too much for the activation process to do to bring the face into consciousness. That said, there may well be feelings, perhaps vague feelings, vague bits of knowledge about that face which have been derived from its position in face space. That this face is not quite the same as it was last time. That it is older or more ill or whatever.

PS: the Guardian is running quite a lot of these science pieces these days. Interesting for me and cheap for them - cheaper that is than regular news about, for example, social care policy discussions in Prime Minister May’s personal entourage.

References

Reference 1: The Code for Facial Identity in the Primate Brain – Le Chang and Doris Y. Tsao – 2017. Open access.

Reference 2: Medial axis shape coding in macaque inferotemporal cortex – Hung, C.C., Carlson, E.T., and Connor, C.E. – 2012. Open access.

Reference 3: http://www.wikihow.com/Make-a-Balloon-Giraffe.

Reference 4: http://psmv3.blogspot.co.uk/2017/05/in-praise-of-homunculus.html.
