Monday, 20 August 2018

Unified theories of cognition

Reference 1 tells us that any system claiming to have human-like abilities needs to exhibit the following thirteen qualities or behaviours:
  • Behave flexibly as a function of the environment
  • Exhibit adaptive (rational, goal-oriented) behaviour
  • Operate in real-time
  • Operate in a rich, complex, detailed environment (that is, perceive an immense amount of changing detail, use vast amounts of knowledge, and control a motor system of many degrees of freedom)
  • Use symbols and abstractions
  • Use language, both natural and artificial
  • Learn from the environment and from experience
  • Acquire capabilities through development
  • Operate autonomously, but within a social community
  • Be self-aware and have a sense of self
  • Be realizable as a neural system
  • Be constructible by an embryological growth process
  • Arise through evolution.

We thought it might be of interest to think of LWS-N in their light – this despite not yet having gone back to the horse’s mouth, to Newell’s book, to get a grip on their context and deeper meaning – which might have been a good idea, as requiring all thirteen strikes us as a bit greedy. One could probably manage without all thirteen, or with quite small doses of some of them.

This despite LWS-N not being a computer system at all, one to be checked out for human-like capability. Rather, it is the description of a hypothesised part of a human brain, albeit the sort of description one might come to in the process of building a computer system. So it seems reasonable to check that this description is still reasonably human-like, and to check it from the point of view of some disinterested party.

LWS-N is hypothesised as the last stage in the process needed to deliver the subjective experience of consciousness, the heavy lifting, as it were, having been done elsewhere. LWS-N is a complex, compact (that is to say local, as opposed to global), layered structure, designed to hold the content of consciousness. Cunning activation of this structure is hypothesised to deliver the subjective experience of consciousness. The penultimate stage is the compiler which builds the successive frames of consciousness, and before that we have all the tricky processing needed to get from, for example, the raw input from the two retinas, to the more or less stable data which the compiler can bite on. In terms of Tononi’s IIT (integrated information theory), stable and integrated data, with one aspect of the differentiation which that theory requires perhaps lying in the layers of LWS-N, and another in the analysis of layers into layer objects.

So the LWS-N is not, of itself, a system claiming to have human-like abilities, although it is one small part of such a system. While it needs to work inside, in the context of, a system which does exhibit these qualities or behaviours, it does not follow that it has to do so itself.

Nevertheless, we consider each of the thirteen qualities and behaviours, in turn, in what follows.

Posts giving more information about LWS-N, including an introduction, can be found through reference 2.

Behave flexibly as a function of the environment

LWS-N does not behave, but its contents, for any particular host, are going to vary in time and place, are going to reflect that time and place. It is also going to reflect the state of its host, its body. And a third component will be the stuff it, as an intelligent autonomous agent, cooks up for itself (on which point see reference 4). The balance between these three components will also vary through time and with circumstance generally. LWS-N is designed to hold information about more or less anything, rather as, in its own world, a SQL Server database is designed to hold information about more or less anything. On which last, reference 3 might be a reasonable place to start.

LWS-N is rather general purpose by nature, which bears on the question of self, raised below.

Exhibit adaptive (rational, goal-oriented) behaviour

As we have just noted, LWS-N does not behave, but it does reflect the behaviour of its host in that a good part of what makes it to consciousness will be relevant to conscious goals – if only because it is, in part at least, derived from signals which warn of potential threats to those goals.
It also reflects behaviour in the sense that, on the whole, one does see what one is looking at, one does hear what one is listening to. But only on the whole; one’s attention can drift and one can see without seeing, to misuse an expression from the vernacular.
Put another way, part of being autonomous is being able to detach from the stream of inbound stimuli, to take a time-out to sit back and think for a bit. There is a bit more on this below.

And more perversely, the compiler might persist with seeing what it wants to see, what it really wants, rather than what is actually there. Perhaps in the context of drink having been taken. But that is not really an issue for LWS-N, with this last just being the messenger.

Operate in real-time

LWS-N operates in something close to real time in that it hypothesises a sequence of frames of consciousness, each lasting something of the order of a second or so, reflecting the here and now. This is fast enough for much of what we do to make it to consciousness, at least in summary form. But slow enough that much of what we do has to be subconscious, to go on under the hood without our having any knowledge of it at all, beyond, maybe, knowledge that it is going on.

It would not do, for example, for LWS-N to try to keep up with all the detailed calculations being made by the cerebellum about how to move the hand from keyboard to cup. Although it does sometimes help for LWS-N to go slow, to be less of a drain on shared resources. One does not try to think about how to move the hand, this is usually unhelpful, but it is also usually helpful to stop thinking, while the hand is moving, about the next nomination to the Supreme Court. Or whatever.

Which interaction tells us, inter alia, that while LWS-N itself might be local, confined to a small space in the middle of the brain, it draws on resources from the brain more generally. Similarly, the cerebellum, in processing movement, draws on resources from the brain more generally. Both structures are at once isolated, specialised and integrated; a slightly different take on the integrated and differentiated of IIT, to which we come again below.

Operate in a rich, complex, detailed environment

That is, perceive an immense amount of changing detail, use vast amounts of knowledge, and control a motor system of many degrees of freedom.

LWS-N is capable of expressing snapshots, summaries of all this. Part of the idea – we hesitate to use the rather loaded words ‘purpose’ or ‘point’ – is to organise the stream of dense, chaotic, inbound stimuli into something much simpler and more digestible – indeed, comprehensible. To recycle an analogy that we have used before, and to which we return in the next section, it expresses a diagram drawn from the photograph.

The stream of outbound commands, the efferent as opposed to the afferent traffic, has a more shadowy presence in LWS-N, perhaps no more than a goal in the background. As we have just said, best just to leave the cerebellum to get on with it. Although we recognise that the proponents of predictive coding might put a different gloss on this, with the efferent traffic having a key role in driving forward the internal model of what the body is supposed to be up to, an internal model which might well be mixed up in delivering what it is that gets into consciousness. So not just a stream of commands going out to the periphery. On which, see reference 5 for a non-technical introduction by a philosopher.

PS: regarding purpose and point, we note in passing that while it seems a bit unlikely that consciousness is not for something, the jury is still out on what that something might be, at least here in Epsom. But we do associate to the global workspace of Baars and his colleagues, in which the point is to make a summary or précis which can be made available to the brain generally, not to say globally, with the result that all processes have access to the bigger picture. We associate also to the better managed corporations, which go to some lengths to keep their workforces in that bigger picture, to keep them in the loop. See reference 6.

Use symbols and abstractions

Shape nets and texture nets and their various appendages might count as abstractions. Rather than attempt to replicate the world – in the way that some models of economies replicate all the economic agents in silico, rather than dealing in summaries, in macroeconomic equations – LWS-N uses these devices, these abstractions, to organise the world into something more organised and more comprehensible than a raw array of many millions of pixels. Perhaps also, in the jargon of signallers, a lossy compression. Put another way, shape nets and texture nets supply the abstraction which lies second in the sequence: name (for example, the Forth Bridge – the world heritage one); conscious abstraction; goings on in the visual areas of the cerebrum; retinal images (very roughly equivalent to a photograph); ambient light bouncing off the real thing (or, less commonly, light emitted by the real thing); and lastly, the real thing itself.
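The idea of a lossy compression can be sketched in a few lines of code. The sketch below is purely illustrative – the function name and the quantisation scheme are invented here, and nothing about LWS-N is claimed to work this way – but it shows the general trade: a simpler, more digestible signal in exchange for discarded detail.

```python
def quantise(pixels, step=64):
    """Crude lossy compression: snap each 0-255 pixel value down to a
    coarse grid, keeping the gist while discarding fine detail."""
    return [(p // step) * step for p in pixels]

# Eight slightly different pixel values collapse to just two levels.
print(quantise([3, 17, 60, 63, 64, 90, 120, 127]))
# [0, 0, 0, 0, 64, 64, 64, 64]
```

The original values cannot be recovered from the output – that is what makes the compression lossy – but the broad pattern, dark half and light half, survives.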

Use language, both natural and artificial

There is fairly general agreement that while dumb animals can do a lot of stuff, a lot more stuff than is sometimes appreciated, it is language which puts humans at the top of the heap.
And there might well be language, or something very close to it, on one or more of the layers of LWS-N, if only to express those spoken and written words which make it to consciousness.

And with the combination of that language with other material, perhaps visual images, its linkage with that material through column objects, we have something approaching comprehension, certainly the sort of comprehension which reaches consciousness. To continue with the Forth Bridge, I probably know something of what it is for and how it fits into the world when I look at it, perhaps while the morning express thunders through it, even if I have not seen this particular bridge before. Although comprehension of language is not the same as being able to make full use of it, in a generative way – and in development terms also, comprehension comes before generation, generation which is here left to the wider system of which LWS-N is part.

Learn from the environment and from experience

The compiler is an important adjunct to LWS-N, the process which builds the successive frames of consciousness. It seems likely that this compiler, having learned and grown since before the birth of the host, will continue so to do through the life of the host. This might well include the detail of the ways in which LWS-N does things, as the host learns new ways to experience the world. It seems less likely that the basic structures of LWS-N will change that much.

An analogy might be that while one might add columns, rows and tables to an existing SQL Server database, one does not mess about with its basic organisation into columns, rows and tables. Or in this case, with its basic organisation into layers, layer objects, shape nets, texture nets and column objects.
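The database analogy can be made concrete in a few lines of Python, here using SQLite as a stand-in for SQL Server; the table and column names are invented for the purpose. The schema grows, but the basic organisation into tables, rows and columns is untouched.

```python
import sqlite3

def grow_schema():
    """Grow a toy schema -- a new column, a new table -- without touching
    the basic relational organisation into tables, rows and columns."""
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    # Initial schema: one table of percepts.
    cur.execute("CREATE TABLE percepts (id INTEGER PRIMARY KEY, label TEXT)")
    cur.execute("INSERT INTO percepts (label) VALUES ('red rectangle')")
    # Growth: an extra column and an extra table.
    cur.execute("ALTER TABLE percepts ADD COLUMN layer INTEGER")
    cur.execute("CREATE TABLE textures (id INTEGER PRIMARY KEY, percept_id INTEGER)")
    # Existing data survives; the new column is simply empty (NULL) for old rows.
    cur.execute("SELECT label, layer FROM percepts")
    row = cur.fetchone()
    conn.close()
    return row

print(grow_schema())  # ('red rectangle', None)
```

The point of the analogy being that growth and learning happen within a fixed organisational scheme, not by replacing it.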

Acquire capabilities through development

See above.

Operate autonomously, but within a social community

Not applicable, except to the extent that LWS-N might provide the springboard, or at least the background, for most of the interaction of its host with its social community. Where by ‘most’ we are thinking that most speech is generated more or less unconsciously, and some of it comes so fast that we might call it reflexive. We might be conscious of what we are saying, but the business of generating that speech is not, on the whole, conscious. We might slow important speech down, to give the brain time to do its work, but we are not conscious of that work. A similar point to that made above about thinking about the Supreme Court.

For some recent thoughts on the matter of autonomy, see reference 4.

Be self-aware and have a sense of self

It is clear that there is a sense of self in the healthy, adult, human brain. It is less clear that LWS-N needs any such thing, and as things stand it does not include any features specialised for the self. It is a general purpose tool which could include stuff about self, along with stuff about chairs, tables, polar bears and the theorem of Pythagoras. Or whatever else might be going on.

It may well be that a sense of self is a necessary prerequisite to the emergence of consciousness in early childhood. Which is not to say that a sense of self is a necessary ingredient in consciousness once it is up and running, a necessary ingredient to the activation processes whizzing around the shape nets and texture nets of a LWS-N delivering the subjective experience of consciousness. A necessity which seems to us a bit unlikely.

Another thought is that subjective experience arises from a rapid oscillation of focus between a self and the object of consciousness. But in the context of LWS-N, the compiler might well arrange for such an oscillation between active layers, without regard to the particular contents of those layers. What is different about that part of the content of LWS-N which is particularly to do with self?

Our present view is that consciousness quite often includes a sense of self, perhaps the body rather than the soul, but that quite often it does not. There are lots of episodes of consciousness which go beyond self, which are not about self at all. Which is not to deny that much of the content of consciousness will be from a particular point of view, a particular position and orientation in the world, particulars which in large part determine how we experience the world; but we do not see the necessity for the explicit presence of self.

In sum, the jury is out on this one. But if we were forced to vote, we would vote against the necessity of self appearing in LWS-N.

Be realizable as a neural system

Some thought has been given to expressing LWS-N in neurons, on a small patch of cortical sheet, maybe at the top of the brain stem or maybe in one of the old (in evolutionary terms) structures sitting on top of the brain stem, below the cerebral lobes proper. Maybe between 10 million and 100 million neurons: a significant chunk of the 20 billion or so available in the cerebrum as a whole, with the 70 billion or so in the cerebellum around the back not thought to be relevant in this context.

PS: we note in passing that we do not think that the information content of consciousness is that large. Maybe the same order of magnitude as a photograph taken by my telephone: 5MB or so, that is, some 40 million bits.
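As a back-of-envelope check on that order of magnitude (decimal megabytes assumed), five megabytes comes to some 40 million bits:

```python
# A five-megabyte photograph, expressed in bits.
photo_megabytes = 5
photo_bits = photo_megabytes * 1_000_000 * 8  # bytes to bits
print(photo_bits)  # 40000000
```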

Be constructible by an embryological growth process

In so far as LWS-N is part of the brain, it must be the product of embryonic growth from very small beginnings. But given the other structures that come into being during its growth, the other stuff that a brain clearly learns about during its growth, LWS-N does not seem particularly problematic in this regard. Just a question of programming up yet another bit of cortical sheet, albeit somewhat specialised. But then most if not all the bits of cortical sheet are specialised for something or other.

Arise through evolution

In working on consciousness, we have been mindful of both evolution (from the first tetrapod amphibian to the biped human) and development (from the embryo to the adult). And it is also true that LWS-N has gone through a number of iterations and continues to evolve. But we doubt whether this is what Newell is referring to, which we presume to be more in the way of neural algorithms changing, evolving with experience, which we have already touched on above.

Conclusions

Most of the Newell requirements can be either accommodated or avoided by LWS-N without strain.

The one that gave us most pause for thought was the self, thought by many to be an essential ingredient of consciousness. While we allow that the host needs a sense of self in order to develop to a healthy maturity, we are not yet convinced that LWS-N needs one to generate the subjective experience of consciousness. Furthermore, LWS-N as presently conceived does not have a home for such a thing. It could accommodate it, but it would not be special in the way that some might think it ought to be.

References

Reference 1: Unified theories of cognition - Newell, A. – 1994.

Reference 2: http://psmv3.blogspot.com/2018/05/an-update-on-seeing-red-rectangles.html. Which includes an index to most of the various posts in the srd series.

Reference 3: https://en.wikipedia.org/wiki/Relational_database. There is a handy glossary at the end, with click-here pointers to further articles. Maybe we will get around to relating the structures and tools of such a database to the way that brains do things in the not too distant future; an exercise which we believe will be instructive.

Reference 4: http://psmv3.blogspot.com/2018/08/free-will-3.html. There is also the post following: http://psmv3.blogspot.com/2018/08/free-will-4-or-soft-wiring.html.

Reference 5: The predictive mind - Jakob Hohwy – 2013.

Reference 6: https://en.wikipedia.org/wiki/Global_workspace_theory.

Group search key: srd.
