Friday, 26 October 2018

Orientation

We have been thinking about what LWS-N does about orientation. Does it matter if the field of consciousness which it is hypothesised to generate rotates? Does the subjective experience always include a sense of which way is up? A sense of where the self is relative to what is being experienced?

In the course of which we remembered an art installation involving lots of plastic sheeting, where the idea was that one’s whole world had become a uniform sea of a single colour, say blue. Maybe, when one is in such a world, one does know what colour it is, say blue, but one would be hard put to say exactly which blue. Maybe some people would lose any very strong sense of what colour it was at all. Maybe, if the thing were properly organised, one would also lose any sense of what was up and down, of where the horizon was.

Figure 1
Which led on to thinking about how we might code for colour in the texture nets of LWS-N, the trick being to find some way of coding something into our texture nets which was more or less equivalent to the RGB coding for colour built into packages like Microsoft’s Powerpoint, illustrated above, with the left and middle panels being the two options for action you have when you want to change the colour of the interior of the star on the right. The way that the middle panel works is not terribly intuitive, at least not to us, but the RGB answer in numbers is clear enough.

We note in passing that colour for humans is by nature three dimensional, which makes coding for colour by a single real number, by a single real valued property of our tiles, rather difficult. Thinking here of things like area, diameter and aspect ratio. One might have negative for blue and positive for red, but where does that leave green? It can be done, but it does not attract.
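
To make the difficulty concrete, here is a minimal sketch, assuming nothing beyond the RGB triples of Figure 1: squashing three dimensions of colour down onto one real number is bound to lose something, and the something lost here is green.

```python
# A single-number coding of colour, of the negative-for-blue, positive-for-red
# sort mentioned above. The triples and the particular formula are just for
# illustration.

RGB = {
    "red":   (255, 0, 0),
    "green": (0, 255, 0),
    "blue":  (0, 0, 255),
    "white": (255, 255, 255),
}

def scalar_code(rgb):
    """One possible single real valued code: negative for blue, positive for red."""
    r, g, b = rgb
    return (r - b) / 255.0

for name, rgb in RGB.items():
    print(name, scalar_code(rgb))
# red comes out as 1.0 and blue as -1.0, but green and white both come out as
# 0.0: the single number cannot tell them apart.
```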

We also note in passing the transparency option at the bottom, a useful device offering something not unlike seeing a fish swimming about in the water below. A feature which, as we have described elsewhere, we propose to provide by means of layers, on which some early thoughts are to be found at reference 4.

Remembering always that the whole point of LWS-N is that it is self-contained; somehow it does colour of itself, draws colour out of the void, without reference to anything else. With a well known story about the Deity pulling this trick off to be found in the Book of Genesis, and a rather more succinct version in the Gospel according to St. John.

Figure 2
Figure 3
Rather more succinct maybe, but also more obviously wrong. It sounds well, but the word was not the beginning: of the 4,500 million years or so that the earth has been around, the word has only been around for the last few hundred thousand – if that. Perhaps a Jesuit would argue that the potential for words is always there, but that they only become visible from time to time, from place to place. While we would argue, although it is not particularly relevant here, that consciousness also came before the word.

But coming back to earth, see reference 1 for entry into the world of LWS-N generally, reference 2 for a previous stab at the problem of colour in that context.

Frames of reference

For present purposes, we neglect the complications of binocular vision, of seeing things more or less in front of the nose with two eyes at once.

We can use the anatomy of the body to define the direction in which the (untwisted) body is pointing, is facing. The body direction.

We can use the anatomy of the skull to define the direction in which the nose is pointing. The head direction.

We can use the anatomy of the eye to define the direction in which we are looking. The eye direction.

These three directions are the same in the case that the body is held straight and erect, the head is straight on the shoulders and the eyes are centred in their orbits – which does seem to be the preferred position, with there being a strong tendency for the head to follow the gaze and for the body to follow the head. We like to tackle both prey and predator head on, as it were.

The eye has three pairs of muscles and three degrees of movement, not just two, which means that the eye can rotate about the line of sight. Which rotation we can use to define eye vertical.

We can use the anatomy of the skull to define up and down with respect to the head, roughly the line up from the point of the chin to the bridge of the nose. To define head vertical.

We know there is an absolute vertical, defined by the force of gravity at the surface of the earth. Near enough absolute for present purposes.

For us, at least for a lot of the time, these three verticals are the same. This is less true of, for example, arboreal animals like chimpanzees. It is also true that many things of interest exhibit a rough sort of bilateral symmetry about that vertical.

All these complications notwithstanding, we suppose that for the purposes of consciousness, of the subjective experience, there is at most one direction and at most one vertical, derived in some more or less complex way from the foregoing and from other inputs. Which is a simplifying supposition: the actual experiences of, for example, looking at the television while lying on one’s side, or of trying, while out walking, to work out where one is from a map which has not been turned to face the right way, seem to be something more complicated. Noting here that the second of these examples dates from well after the evolution of consciousness, while versions of the first probably date from well before.
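
By way of a sketch of the sort of derivation we have in mind, and assuming, purely for simplicity, that body, head and eyes only ever turn about the absolute vertical, the three directions compose into a single gaze direction, while the vertical is carried along unchanged. The angles and the restriction are illustrative assumptions, not claims about anatomy.

```python
# Composing body, head and eye directions into one direction and one vertical.
# A toy example: all turns are about the world vertical (the z axis).

import numpy as np

def rot_z(degrees):
    """Rotation matrix for a turn about the world vertical."""
    t = np.radians(degrees)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

forward = np.array([1.0, 0.0, 0.0])   # direction faced when all the angles are zero
vertical = np.array([0.0, 0.0, 1.0])  # absolute vertical, given by gravity

body = rot_z(30)                      # body turned 30 degrees
head = rot_z(20)                      # head turned a further 20 degrees on the shoulders
eyes = rot_z(10)                      # eyes a further 10 degrees in their orbits

gaze = body @ head @ eyes @ forward   # the one direction of the experience
up = body @ head @ eyes @ vertical    # the one vertical, here unchanged

print(round(float(np.degrees(np.arctan2(gaze[1], gaze[0]))), 1))  # 60.0
print(up)                                                         # [0. 0. 1.]
```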

So the visual field, for example, as expressed by one or more layers of LWS-N, will be orthogonal to the direction and orientated to the vertical.

Recap on fields

The proposition is that the subjective experience arises from the field generated by appropriate activation of the neurons underlying the shape nets and the texture nets of LWS-N. In particular, for present purposes, by activation of the tiles of those texture nets. A field which, for present purposes, might be considered to take vector values over a disc in the plane.

For present purposes we gloss over the complication that the subjective experience arises in time, is a function of that field over a short interval of time Δt, around time t.

For present purposes also, it is not relevant that we have supposed consciousness to be organised into a discrete succession of frames, of the order of a second each in duration. Frames which are not necessarily fixed or constant, there being a compilation process which builds each successive frame more or less from scratch. The concern here is with what goes on within the span of a single frame.

We do not require the map from field to subjective experience to be one to one. We allow that many fields, perhaps only differing in detail, might result in the same subjective experience, that there might be lots of pairs of fields which subjects – in so far as one can test such things – would find it hard to distinguish. They would report, at least after the event, that they were the same.

Figure 4
We also imagine our patch of cortical sheet as approximating to a disc. And our field generated by that disc as being something like a flying saucer in the plane of that disc. Strong (absolute) values in the middle, weaker at the periphery, effectively zero not much further out.
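
By way of a sketch, assuming, purely for illustration, a Gaussian fall-off of field strength with radius over a disc of unit radius: strong in the middle, weak at the edge, effectively nothing not much further out. The particular numbers mean nothing in particular.

```python
# The flying saucer profile: absolute field strength as a function of position.

import numpy as np

def field_strength(x, y, peak=1.0, width=0.5):
    """Strong in the middle of the disc, weaker at the periphery, effectively
    zero not much further out."""
    r = np.hypot(x, y)
    return peak * np.exp(-(r / width) ** 2)

for r in (0.0, 0.5, 1.0, 1.5):
    print(r, round(float(field_strength(r, 0.0)), 3))
# 0.0 1.0, 0.5 0.368, 1.0 0.018, 1.5 0.0: near enough zero beyond the disc.
```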

Figure 5
We suggest that this field is completely self-contained, that it contains whatever is necessary for the conscious experience without recourse or reference to anything or anywhere else. Nor does it interact in any relevant way with any other, ambient fields, say the earth’s magnetic or gravitational fields. Or brain waves emanating from some other person. Or the ether. Although we do recognise that some animals, particularly birds, are sensitive to the earth’s magnetic field, and that the involvement of consciousness with that field is, in consequence, theoretically possible. See reference 3.

So, notwithstanding the birds, we are led to propose that our field would generate the same subjective experience whatever its orientation with respect to the real world, to up and down, to north and south, to east and west. Any rigid transformation would do. And replication without mutual interference or interaction, were that possible, would result in two identical experiences.

We believe it follows from this invariance of the experience under rigid transformations of the field that there would also be invariance of the experience under similar transformations of the underlying data on our patch of cortical sheet.
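
A minimal sketch of the point, assuming that the experience depends only on the field values and on the relative positions of the points at which they are taken, never on absolute position or compass bearing. The particular summary function below is, of course, just a stand-in for the experience.

```python
# Any summary built from values and pairwise distances alone is unchanged by
# a rigid movement, a rotation plus a translation, of the whole field.

import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(50, 2))   # sample positions on the patch
values = rng.normal(size=50)        # field values at those positions

def experience(points, values):
    """A stand-in summary depending only on values and pairwise distances."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return float(np.sum(np.outer(values, values) * np.exp(-d)))

theta = 0.7                                    # any rotation ...
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
moved = points @ R.T + np.array([3.0, -2.0])   # ... plus any translation

print(np.isclose(experience(points, values), experience(moved, values)))  # True
```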

So what about colour?

Coming now to the business of coding for colour, suppose we had a scheme for coding colour into the shapes of the tiles, the minimal polygons, making up our texture nets which went something like the figure that follows.

Figure 6
Hitherto we have focussed on shape: triangles, squares and pentagons. Or perhaps triangles, squares and hexagons. Or perhaps lines, triangles and circles. But now, by way of example, we focus on orientation: vertical for blue, neither for green and horizontal for red. With the assumption that there is a subjective distinction between, for example, a fine vertical grating and a fine horizontal grating, with vertical and horizontal being defined for our patch of cortical sheet along the lines we have indicated above.

We have some function C which classifies the tiles of a texture net into exactly one of four groups: vertical, round, horizontal, void. So in the illustration above, the left hand squares go to round, the vertical rectangles top middle go to vertical, the horizontal rectangles top right go to horizontal and the miscellaneous stuff bottom right goes to void. The small square is too small to qualify, the thin rectangles are too thin to qualify and the other stuff is either ambiguous (the quadrilateral middle left) or perverse (the star bottom right).

We have a function A which gives the area of a tile.

The elementary colour of a point in the interior of a tile is given by the area and colour of that tile.

The value of a colour is a non-negative real, with both very large and very small coloured tiles giving a value of zero and with a positive maximum somewhere in the middle. If the point lies on a boundary between tiles which agree about colour, we take the average; otherwise void.
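
By way of illustration, a minimal sketch of what C, A and the elementary colour might look like, with tiles represented simply as lists of vertices. All the particular thresholds and numbers are assumptions of our own, there only to make the thing run, not part of the scheme.

```python
# C classifies a tile by its bounding box; A is the area; colour_value maps
# area to a non-negative value; elementary_colour combines them at a point.

import math

def A(vertices):
    """Area of a tile (a simple polygon), by the shoelace formula."""
    s = 0.0
    for (x0, y0), (x1, y1) in zip(vertices, vertices[1:] + vertices[:1]):
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def C(vertices, min_area=1.0, max_aspect=10.0, round_band=(0.8, 1.25)):
    """Classify a tile into exactly one of 'vertical', 'round', 'horizontal'
    or 'void', with vertical and horizontal taken relative to the patch of
    cortical sheet."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    if A(vertices) < min_area or min(width, height) == 0:
        return "void"                        # too small to qualify
    aspect = height / width
    if aspect > max_aspect or aspect < 1.0 / max_aspect:
        return "void"                        # too thin to qualify
    if round_band[0] <= aspect <= round_band[1]:
        return "round"                       # neither vertical nor horizontal
    return "vertical" if aspect > 1.0 else "horizontal"

def colour_value(area, best_area=10.0):
    """Zero for very small and very large tiles, maximum of one at best_area."""
    if area <= 0.0:
        return 0.0
    x = math.log(area / best_area)
    return math.exp(-x * x)

def elementary_colour(tiles_at_point):
    """Colour and value at a point, given the tile or tiles it lies in or on
    the boundary of: agreement about colour gives the average value,
    disagreement or any void tile gives void."""
    groups = {C(t) for t in tiles_at_point}
    if len(groups) != 1 or "void" in groups:
        return ("void", 0.0)
    values = [colour_value(A(t)) for t in tiles_at_point]
    return (groups.pop(), sum(values) / len(values))

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
tall = [(0, 0), (1, 0), (1, 4), (0, 4)]
wide = [(0, 0), (4, 0), (4, 1), (0, 1)]
tiny = [(0, 0), (0.5, 0), (0.5, 0.5), (0, 0.5)]
print(C(square), C(tall), C(wide), C(tiny))  # round vertical horizontal void
print(elementary_colour([tall]))             # ('vertical', about 0.43)
```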

The integrated colour of a point is then given by a weighted sum: the two-dimensional convolution (using some wavelet or other) of the elementary colours for each of red, green and blue. The activation processes which deliver the field of consciousness do not necessarily do convolutions, but the neural firing implied by those processes integrates up to a field which amounts to the same thing.
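
Again by way of illustration only, a sketch of the sort of channel-wise, two-dimensional convolution we have in mind, with a Gaussian kernel standing in for the unspecified wavelet and random numbers standing in for the elementary colours laid out on a grid.

```python
# Integrated colour: convolve each of the red, green and blue elementary
# colour channels with a small kernel.

import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=7, sigma=2.0):
    """A small Gaussian kernel, standing in for 'some wavelet or other'."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def integrated_colour(red, green, blue, kernel=None):
    """Channel-wise two-dimensional convolution of the elementary colours."""
    kernel = gaussian_kernel() if kernel is None else kernel
    return tuple(convolve2d(channel, kernel, mode="same", boundary="symm")
                 for channel in (red, green, blue))

# Elementary colours over a 32 x 32 patch, here just random stand-ins.
rng = np.random.default_rng(1)
red, green, blue = (rng.random((32, 32)) for _ in range(3))
r, g, b = integrated_colour(red, green, blue)
print(r.shape)   # (32, 32)
```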

This scheme clearly depends on orientation. Somehow, in generating the subjective experience from the tiles suggested above, something knows about which way is up.

Figure 7
In the illustration above we suppose we have, on the left, a large collection of qualifying vertical tiles and, on the right, a very similar collection of qualifying horizontal tiles. Now it is reasonable that, when both collections are present, one should experience a difference between the two. At the very least, the experience should register the discontinuity at the boundary down the middle. But how can just one of them, in isolation, be so distinguished?

Put another way, one can see, say, that red and blue are different when they are together, but how is red different from blue in isolation?

Why should the experience not be invariant under rigid transformations of the plane, as suggested above?

Furthermore, with eyes shut, the subjective experience of, say, sounds and smells is more or less invariant under movements of the head, which one might suppose take the shape nets and texture nets of LWS-N with them. So rigid movements of the field do not change the experience.

One way to deal with this might be to have an orientating layer; a layer whose only purpose is to give us up and down, to qualify the other layers with up and down.

Figure 8
The trouble with this scheme is that while it serves to tell us the axis of up and down, it does not distinguish up from down.

Figure 9
So here we deploy activation, which had been kept in reserve. We have a source of activation in blue, bottom left, and a sink of activation in red, top right, with the most visible flow of activation being up, at least on this rendering. But what about left and right? A texture net has been constrained to be planar, so we can’t do left and right at the same time as up and down, at least not in the way of Figure 9. Do we do left and right with yet another layer?

Figure 10
Not necessarily. Maybe something like Figure 10 above would do, organised a bit like the picture scanning lines of a television screen. With the blue arrows giving us vertical and the red flow across the field of vision, from left to right, giving us horizontal east.
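
By way of a sketch, and assuming that a layer of the Figure 10 sort can be sampled as numbers, something of this kind would serve: the gradient of activation gives up, and the mean flow gives east. Both the toy layer and the way it is read off are assumptions of ours, for illustration only.

```python
# An orientating layer: activation rising towards 'up', plus a steady
# left-to-right flow, read off to give unit vectors for up and east.

import numpy as np

def orientating_layer(n=16):
    """Activation over an n x n patch which increases with 'up', together
    with a flow whose components are (east, north) at each point."""
    y, x = np.mgrid[0:n, 0:n]
    activation = y.astype(float)   # source at the bottom, sink at the top
    flow = np.stack([np.ones((n, n)), np.zeros((n, n))], axis=-1)
    return activation, flow

def read_up_and_east(activation, flow):
    """Recover 'up' from the activation gradient and 'east' from the mean
    flow, both as unit vectors in patch coordinates."""
    gy, gx = np.gradient(activation)
    up = np.array([np.mean(gx), np.mean(gy)])
    east = flow.reshape(-1, 2).mean(axis=0)
    return up / np.linalg.norm(up), east / np.linalg.norm(east)

activation, flow = orientating_layer()
print(read_up_and_east(activation, flow))   # roughly (0, 1) and (1, 0)
```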

Figure 11
Another alternative might be something not that far removed from the compass card illustrated above.

One might argue that, in an experience which is in large part two dimensional, given up, innate knowledge of clockwise and anti-clockwise might serve to give right and left respectively. However, not being entirely convinced, we have chosen a compass card which has a large icon for north and a smaller, different one for east. Hopefully that suffices.

Figure 12
A less whimsical alternative would be to build on the square bar codes now used in many industries, bar codes which, in order that the information they carry can be read whichever way up they happen to be presented, are oriented by the three special corner squares, with the absence of a corner square bottom right marking the south east. Which, given that the computer knows about clockwise, is enough for it to know which way up the code is supposed to be.
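
By way of a minimal sketch, for illustration only: given the centres of the three special corner squares, a computer which knows about clockwise can work out which corner is the north west one and how far the code has been turned. The conventions and thresholds below are our own, not those of any real bar code reader.

```python
# Recovering orientation from the three corner markers of a QR-style code.

import math

def orient_from_markers(p_a, p_b, p_c):
    """Given the centres of the three corner markers as (x, y) tuples, with y
    increasing downwards as in image coordinates, return the north west
    marker and the angle, in degrees, by which the code is turned from
    upright."""
    pts = [p_a, p_b, p_c]

    # The north west marker sits at the right-angled corner: the vectors from
    # it to the other two markers are (nearly) perpendicular.
    def squareness(i):
        o = pts[i]
        u = (pts[(i + 1) % 3][0] - o[0], pts[(i + 1) % 3][1] - o[1])
        v = (pts[(i + 2) % 3][0] - o[0], pts[(i + 2) % 3][1] - o[1])
        return abs(u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))

    nw_index = min(range(3), key=squareness)
    nw = pts[nw_index]
    others = [pts[(nw_index + 1) % 3], pts[(nw_index + 2) % 3]]

    # Knowing about clockwise: in image coordinates the cross product of
    # (north west -> north east) with (north west -> south west) is positive.
    u = (others[0][0] - nw[0], others[0][1] - nw[1])
    v = (others[1][0] - nw[0], others[1][1] - nw[1])
    if u[0] * v[1] - u[1] * v[0] < 0:
        others.reverse()
    ne = others[0]

    # How far the top edge has been turned away from the horizontal.
    angle = math.degrees(math.atan2(ne[1] - nw[1], ne[0] - nw[0]))
    return nw, angle

# An upright code: markers at NW, NE and SW, nothing at SE.
print(orient_from_markers((0, 0), (10, 0), (0, 10)))   # ((0, 0), 0.0)
# The same code turned a quarter turn clockwise.
print(orient_from_markers((10, 0), (10, 10), (0, 0)))  # ((10, 0), 90.0)
```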

Figure 13
A device which could be included in the corners of any layer for which orientation is important, rather as pictures of archaeological artefacts commonly include a ruler or measuring rod. See reference 5 for the context of Figure 13 above.

Layers which would qualify the experience as a whole, in a way that means that the left hand part of Figure 7 can indeed be distinguished from the right hand part, a distinction which is preserved when either the data or the field as a whole is rotated. We do not go into how we know, or whether we need to know, that this layer is about orientation; we leave that for another occasion.

However, when all is said and done, all these extras seem a bit contrived. They may well exist, at least some of the time, at those times when we are conscious of our orientation, but we think it more likely that colour will be coded up in some way which does not need them. Perhaps something involving the size or shape of tiles, rather than their orientation. To which we shall return in due course.

Conclusions

We have suggested that the subjective experience, the consciousness generated by our field, does not change when that field is moved about – rotated and translated – and that this has implications for the way in which the world around us is coded up, in particular for the way in which we code up for colour, something which is locally the same in all directions.

PS: wanting to check where the compass card of Figure 11 came from, I asked Google image search. A dismal failure on this occasion, with it suggesting a couple on a tandem bicycle. Bicycle wheel yes, but why tandem? Why a couple? While I think the answer should have been something to do with the Library of Congress. And a further oddity later, with Microsoft on my telephone thinking, for some reason, that the date of the picture was July 2025.

References

Reference 1: http://psmv3.blogspot.co.uk/2018/05/an-update-on-seeing-red-rectangles.html.

Reference 2: http://psmv3.blogspot.co.uk/2017/09/coding-for-colour.html.

Reference 3: Chemical compass model of avian magnetoreception – Kiminori Maeda, Kevin B. Henbest, Filippo Cintolesi, Ilya Kuprov, Christopher T. Rodgers, Paul A. Liddell, Devens Gust, Christiane R. Timmel & P. J. Hore – 2008.

Reference 4: http://psmv3.blogspot.com/2017/04/a-ship-of-line.html.

Reference 5: https://psmv3.blogspot.com/2016/10/fossil-flint.html.

Group search key: srd.
