- Introduction
- Basics
- Compression
- Layer objects and shape nets
- Tiling the interiors with texture nets
- Absences
- Other thoughts
- Conclusions, references etc.
Introduction
In LWS-W, we explored how we might code up the content of consciousness as a stack of rectangular arrays of cells, very much the sort of thing that could be stored in an Excel Workbook, albeit rather a large one.
We are now exploring how we might do the same thing, but this time on a substrate which is more closely related to the neural substrate. This we have named LWS-N, local (or layered) workspace for neurons.
The present hypothesis is that while the neurons on some patch of cortex, say around a square centimetre, do indeed code for the content of the successive frames of consciousness, we would do better to move up a bit from individual neurons and build our model in terms of higher level constructs.
But we also have it that these higher level constructs involve some of the same networking machinery as neurons, speak the same sort of language, and can be described in terms of directed graphs, the individual components of which can themselves be described in terms of neurons. These directed graphs have the additional, very important, property of being embedded in a more or less two dimensional space: we don’t just have links, we also have distance, direction and geometry.
LWS-N has a lot more possibilities than the rather restricted LWS-W, and one can do a lot more with it in the way of connections. So while we expect that LWS-N will be mainly built from planar graphs, perhaps more properly sub-graphs, it will be possible to cut across that essentially planar structure in a way that is not possible in the strictly planar LWS-W, where the only cross-cutting connections available are the column objects, suitable for moderate, but not heavy use.
Nevertheless, it is also true that the rectangular array of LWS-W can be expressed as a particular sort of graph in LWS-N, a graph which tiles the plane with squares and a sort of graph which is mentioned in a rather different context below. Put another way, as an abstraction, LWS-N lies somewhere between LWS-W and our real world of neurons in a patch of cortex.
There are three organising principles in LWS-N:
- Layers. There is a small number of layers, certainly fewer than 20. There are links between layers, but these links are very sparse compared with the links within layers. All the layers can be thought of as being superimposed on, embedded in, the small patch of more or less two dimensional cortex.
- Shape nets. Layer objects are expressed in more or less two dimensional space as more or less planar nets, with an object usually being made up of a number of polygon-defined regions: sometimes just one region, often a small number, but in any case fewer than fifty or so. Regions will often be convex. Layer objects will mostly have their own space; they will mostly not overlap. We call these nets shape nets.
- Texture nets. The regions of layer objects are given texture by one or more planar nets suspended in their interiors. Such planar nets might be very regular or they might be more complicated. We call these nets texture nets. Where there is more than one texture net for any one region, in the way (for example) envisaged for coding for colour, they will overlap, often occupying more or less the same space; but texture nets generally will not overlap. A sketch of how these three sorts of structure might nest is given below.
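A minimal sketch in Python, on our own assumptions: all the class names are invented for the purpose of illustration, nothing here is fixed by the model, and edges are directed (tail, head) pairs of vertex identifiers.

```python
from dataclasses import dataclass, field

@dataclass
class TextureNet:
    vertices: set = field(default_factory=set)     # green vertices
    edges: set = field(default_factory=set)        # directed (tail, head) pairs

@dataclass
class Region:
    boundary: list = field(default_factory=list)   # polygon of blue vertices, in order
    textures: list = field(default_factory=list)   # usually one TextureNet, more for colour

@dataclass
class ShapeNet:
    vertices: set = field(default_factory=set)     # blue vertices
    edges: set = field(default_factory=set)        # directed blue edges
    regions: list = field(default_factory=list)    # fewer than fifty or so Regions

@dataclass
class LayerObject:
    shape: ShapeNet = field(default_factory=ShapeNet)  # exactly one shape net

@dataclass
class Layer:
    objects: list = field(default_factory=list)    # LayerObjects, mostly disjoint in space

workspace = []   # a small number of Layers, certainly fewer than 20
```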
Figure 1
UCS (the unconscious at large) and UCS object have been included as a reminder that what gets into LWS-N is just the tip of the iceberg, possibly a non-functional tip. Note also that any one object active in UCS may be projected onto more than one layer object. On the other hand, it will usually not be projected at all, more or less by definition! The omission of the arrow head between layer object and shape net is deliberate as a layer object has exactly one shape net. Regions are comparable to the parts of layer objects we had before.
We retain from LWS-W the underlying idea that consciousness results from the activation of the neural structures expressing the content of consciousness, an activation which exploits those structures, repeatedly scanning them through the course of a frame, which might last for a second or so, and which we shall start to describe below. We expect this activation to be more compelling in a world of graphs than in a world of arrays.
In all of this we expect to see hierarchy, modularity and re-use, three tools which are very well established in IT, three tools which will help the otherwise rather slow evolution along. But while we might use mathematical jargon and vocabulary, our structures do not exhibit much of the mathematical regularity often expressed in expressions like ‘for all x in X and for all y in Y, some proposition P, involving both x and y, is always true’. While we might well say that things are generally like this, or generally like that, we will rarely, if ever, say always. Our structures are not as tidy, nowhere near as tidy, as things like groups, modules or the set of natural numbers. Or even the set of complex numbers.
Basics
Figure 2
We have two kinds of vertex: blue for shape and green for texture. There is the possibility of their coming in various sizes.
We have three types of directed edge. Blue for strong connections between the blue vertices, green for the rather weaker but rather more numerous connections between the green vertices and brown for the relatively sparse mixed connections. There is the possibility of their coming in various strengths too. And noting that these elements are embedded in our more or less two dimensional patch of cortex, their lengths may turn out to be significant.
For the moment, we do not allow vertices to be connected to themselves, or for there to be more than one edge connecting any one pair of vertices. These are constraints which are not respected in real brains.
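A minimal check to this effect, assuming edges are held as a list of directed (tail, head) pairs; the function name simple is ours.

```python
def simple(edges):
    # edges as a list of directed (tail, head) pairs
    no_loops = all(tail != head for tail, head in edges)
    pairs = [frozenset((tail, head)) for tail, head in edges]   # ignore direction
    no_multi = len(pairs) == len(set(pairs))
    return no_loops and no_multi

print(simple([(1, 2), (2, 3)]))    # True
print(simple([(1, 2), (2, 1)]))    # False: two edges joining one pair
print(simple([(1, 1)]))            # False: a self connection
```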
While all our graphs will be embedded in something close to two dimensional Euclidean space, we will be interested in planar graphs, that is to say graphs which can be embedded in a two dimensional, usually but not necessarily plane, surface without any crossings. In such a net we call the interiors of the polygons defined by its edges its regions. Some of the issues here were addressed at reference 4.
In addition, we have three sorts of special vertex, shown top left. The vertex with a blue perimeter and red fill is a source of activation in the context of the structure in question, with edges only flowing out. The vertex with a blue perimeter and no fill is a sink for activation, with edges only flowing in.
The red vertex is a transit, that is to say one which sits in a linear structure embedded in our more or less two dimensional patch of cortex, with edge or edges flowing in on one side and edge or edges flowing out on the other.
Note that the notions of source, sink and transit are local to the structure under consideration.
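By way of a toy illustration, here is how source, sink and transit might be read off from the local in and out degrees of each vertex. The function classify, and the representation of a structure as a set of vertices plus a set of directed (tail, head) pairs, are our own assumptions for the purposes of the sketch.

```python
def classify(vertices, edges):
    ins = {v: 0 for v in vertices}
    outs = {v: 0 for v in vertices}
    for tail, head in edges:
        outs[tail] += 1
        ins[head] += 1
    labels = {}
    for v in vertices:
        if ins[v] == 0 and outs[v] > 0:
            labels[v] = "source"      # edges only flowing out
        elif outs[v] == 0 and ins[v] > 0:
            labels[v] = "sink"        # edges only flowing in
        elif ins[v] > 0 and outs[v] > 0:
            labels[v] = "transit"     # flows in on one side, out on the other
        else:
            labels[v] = "isolated"    # no edges at all in this structure
    return labels

# In the linear structure 1 -> 2 -> 3, vertex 1 is a source, 3 a sink and
# 2 a transit. But only locally, only relative to this structure.
print(classify({1, 2, 3}, {(1, 2), (2, 3)}))
```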
The terms vertex, edge and region are well established in graph theory and we try to stick with them in what follows. But, unlike in graph theory, where graphs are not usually directed, certainly not always, here edge will nearly always mean directed edge. And in the figures which follow, we have not bothered to indicate direction much of the time.
Figure 3
We do not show here the direction of the connections, but the idea is that activation will flow across the structure in waves or pulses and the connections will have to be directed so that this makes sense.
We suppose that our network of vertices is complicated but that it is also roughly hierarchical and modular – where by roughly we mean that we allow stray vertices and edges which do not fit in the hierarchical or modular structure. So we are far from having every vertex directly connected to every other vertex, far even from there being a path from every vertex to every other vertex.
More than that, hierarchy and modularity mean that we can compress the otherwise impossibly complicated world of real neurons down to something more manageable. Lossy compression in signals processing terms; we cannot reverse a compression although we might well be able to reverse engineer it to something which will do.
Compression
We discuss, in the paragraphs which follow, the compression which we need to get from raw neurons to shape nets; compression to the point where all that is left is the information which is projected, perhaps by way of the field of reference 1, into consciousness.
We are still thinking about whether such compression is applicable in the case of texture nets.
Compression of a substructure to a vertex
Figure 4
The idea is to merge a cluster of vertices, together with its internal edges, down to a single vertex. By way of example, such a cluster is shown by the green ring in the left hand structure of Figure 4 above.
In the simple case, all the edges to be merged have the same direction, in which case there is no doubt about the direction of the merged edge.
Otherwise, there is a vote about direction among the edges to be merged: in the case of a tie the edge is dropped, otherwise the merged edge takes the winning direction.
In the very simplest case, and thinking in terms of merging a cluster of vertices in a linear structure in our two dimensional space, we have flows into the merged vertex from one side and flows out from the other side, with compression giving us a transit.
Which is to say that in the right hand structure in Figure 5 below, there is no edge connecting a vertex above the transit directly to a vertex below the transit. All activation passes through the transit; or at least pretty much all. We will probably need to allow a bit of noise in the system, even though these structures will have been built by the compiler, a compiler which can be presumed to know enough to tidy up the possibly untidy signals arriving from the periphery.
Figure 5
Figure 6
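A sketch of this compression, with the voting rule described above. The representation, the function compress and the label chosen for the merged vertex are all our own assumptions; real brains would of course do nothing so tidy.

```python
def compress(edges, cluster, merged):
    votes = {}    # outside vertex -> (edges out of the cluster, edges into it)
    kept = set()
    for tail, head in edges:
        t_in, h_in = tail in cluster, head in cluster
        if t_in and h_in:
            continue                          # internal edge: absorbed
        elif t_in:                            # cluster -> outside
            out_n, in_n = votes.get(head, (0, 0))
            votes[head] = (out_n + 1, in_n)
        elif h_in:                            # outside -> cluster
            out_n, in_n = votes.get(tail, (0, 0))
            votes[tail] = (out_n, in_n + 1)
        else:
            kept.add((tail, head))            # untouched by the compression
    for v, (out_n, in_n) in votes.items():
        if out_n > in_n:
            kept.add((merged, v))             # the vote says: flowing out
        elif in_n > out_n:
            kept.add((v, merged))             # the vote says: flowing in
        # on a tie, the merged edge is dropped
    return kept

# Vertices 1 and 2 are merged into the new vertex 9. Two edges run out to
# vertex 3 and one runs back in, so the merged edge runs from 9 to 3.
print(compress({(1, 3), (2, 3), (3, 1)}, {1, 2}, 9))
```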
Compression of a complex line to a segmented line
We now compress a complex line, complex in itself that is, but not much connected to the world outside. We suppose that activation is flowing anti-clockwise, from the sources top left around to the sinks middle right.
Figure 7
Such lines might also come with rules about there being paths from every source vertex to every other vertex on the line.
Figure 8
Figure 9
Figure 10
Few if any connections other than those shown. A reasonably closed world.
Layer objects and shape nets
The layer objects of LWS-N are represented as connected graphs, largely disconnected from everything else on that layer. On any one layer we may have a number of such graphs, more or less separated in space.
Figure 11
A shape net might span, roughly speaking, a disc of several millimetres in diameter, a significant part of our square centimetre of cortex.
Figure 12
We want to allow holes, perhaps the polygon shown above with the patterned blue fill, and the absence of a texture net spanning the interior may be enough to mark such a hole.
Figure 13
Given that we want to be able to activate the net with waves of activation spreading across it, we do have rules about the direction of edges.
Figure 14
The rule being that there has to be at least one source and that from any one of those sources one can reach every vertex of the object. Then activation will spread out from the source to cover the whole of the object.
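This rule is easy enough to check in code. A sketch, again on our own representation of a directed graph, taking the sources of an object to be its vertices with no edges flowing in:

```python
from collections import deque

def reaches_all(vertices, edges, start):
    succ = {v: [] for v in vertices}
    for tail, head in edges:
        succ[tail].append(head)
    seen, queue = {start}, deque([start])     # plain breadth-first search
    while queue:
        for w in succ[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == set(vertices)

def well_formed(vertices, edges):
    heads = {head for _, head in edges}
    sources = [v for v in vertices if v not in heads]    # no edges flowing in
    return bool(sources) and all(reaches_all(vertices, edges, s)
                                 for s in sources)

# A single source at vertex 1, from which activation spreads out to cover
# the whole of the object.
print(well_formed({1, 2, 3, 4}, {(1, 2), (2, 3), (3, 4), (2, 4)}))
```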
Tiling the interiors with texture nets
Green edges and vertices were used to build planar tilings of the interior of a blue ring at Figure 3 above. At reference 1 we talked of simple, regular tilings to do colour, part of the idea being that simple, regular tilings send a strong signal to consciousness. These are illustrated in Figure 15 below.
Figure 15
Figure 16
We do not require the tilings to be uniform across regions. They need to repeat enough to generate a reasonable signal, but that apart they can vary. One end of a region might be blue, the other end might be red – remembering here that we proposed three nets, three tilings, to do colour at reference 1.
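By way of illustration, a sketch of the simplest such texture net: a square tiling with every edge directed rightwards or downwards, so that activation entering at the top left corner can sweep across the whole net. The sizes and the direction convention are our own choices. This square grid is also the graph, mentioned earlier, which expresses the rectangular arrays of LWS-W.

```python
def square_tiling(m, n):
    # an m-by-n grid of vertices, edges directed down or right
    vertices = {(i, j) for i in range(m) for j in range(n)}
    edges = set()
    for i in range(m):
        for j in range(n):
            if i + 1 < m:
                edges.add(((i, j), (i + 1, j)))   # downwards
            if j + 1 < n:
                edges.add(((i, j), (i, j + 1)))   # rightwards
    return vertices, edges

vertices, edges = square_tiling(3, 3)
print(len(vertices), len(edges))   # 9 vertices, 12 edges
```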
We turn now to the activation of such nets, which depends on the direction of the edges, and, as with shape nets, we might have rules which ensure that activation does span the texture net in a satisfactory way.
Figure 17
Figure 18
In some cases it will suffice for two of the vertices of the texture net proper to act as source and sink. In this example it does not suffice, as there is no vertex from which one can reach every other vertex.
Figure 19
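One repair, sketched below on our own assumptions, is to attach an external source, the hypothetical vertex 'src' here, with just enough outgoing edges that activation from it covers the whole of the texture net. The greedy choice of entry points is ours; nothing says the brain would be so economical.

```python
from collections import deque

def reachable(succ, start):
    # plain breadth-first search over the successor lists
    seen, queue = {start}, deque([start])
    while queue:
        for w in succ[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

def attach_source(vertices, edges, src="src"):
    succ = {v: [] for v in vertices}
    succ[src] = []
    for tail, head in edges:
        succ[tail].append(head)
    covered = set()
    for v in vertices:                        # greedy: feed in wherever uncovered
        if v not in covered:
            succ[src].append(v)
            covered |= reachable(succ, v)
    return set(edges) | {(src, v) for v in succ[src]}

# Two disconnected directed paths: no internal vertex reaches every other,
# so src acquires an edge into each of them.
print(attach_source({1, 2, 3, 4}, {(1, 2), (3, 4)}))
```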
Absences
It will be possible for things to go missing in various ways.
So somebody might not have turned up for a meeting. He is missing, but we know that he is missing and can call him to mind readily enough. Nothing mysterious going on here.
Or we might know that someone is missing, but not be able to call that missing someone to mind. We don’t know who is missing. But again, there are images somewhere in LWS standing for that which is missing.
Rather different, an image might be more or less present in LWS, more or less properly coded up into shape nets and texture nets, but fail to activate properly, with the result that we are not conscious of it. Although we might be conscious of something being missing if some other image in LWS is coding for the presence, but not the content of the first image.
If we suppose that LWS can only be implemented in one particular place in the brain, that one particular place may be damaged in some way. Chunks of neurons may be missing or at least not working properly. Or what should be a rather even spread of neurons across our patch of cortex may actually be rather uneven. What sort of conscious experiences do these faults give rise to?
In the worst case, the external power source, perhaps in the brain stem, is simply turned off and there is no experience at all. A rather different kind of turning off would result from stopping oxygen getting into the blood.
Other thoughts
We sometimes try to imagine equilateral triangles, squares and regular hexagons with our eyes shut. Quite often, the impression is of the focus of attention whizzing up and down the sides, complete with arrows, taking one side after another, somewhat at random. And sometimes there is the very strong impression of the eyes tracking this imaginary movement, even to the point of sensing very small head movements. Evidence of a sort supporting the notion of activation running around the edges of our blue graphs.
We have talked of one region having more than one texture net. It seems likely that the activation of such nets will need to be both simultaneous and phase synchronised to work properly.
Conclusions
We have sketched some ideas about how the neurons of LWS-N might be organised into structures of layers, shape nets and texture nets. The proposal is that activation of these nets gives rise to, indeed is, the conscious experience.
Plenty of further work to be done.
References
Reference 1: http://psmv3.blogspot.co.uk/2017/09/coding-for-colour.html.
Reference 2: http://psmv3.blogspot.co.uk/2017/06/on-elements.html.
Reference 3: http://psmv3.blogspot.co.uk/2017/08/occlusion.html.
Reference 4: http://psmv3.blogspot.co.uk/2017/09/sensing-spheroids.html.
Group search key: srd.