Thursday, 6 January 2022

On Grassmann

Contents

  • Introduction
  • The laws
  • Digging deeper
  • Summary and assumptions
  • Tests
  • Additional information
  • Conclusions
  • References

Introduction

Figure 1

Hermann Grassmann was a nineteenth-century German mathematician and polymath who invented what are now called the Grassmann laws of tri-chromatic light, the laws which say that, in so far as human vision is concerned, any packet of light can be considered as a point in something close to a three-dimensional vector space, a space in which such packets can be added up and in which one can have bases. The laws are to be found at reference 1, their invention at reference 2 and their inventor at reference 3.

In what follows, I use the laws as a peg on which to hang various thoughts about the way the subjective experience of vision can and does change from person to person. Each person’s experience is, indeed, very much their own.

As with many other vertebrates, vision is important to humans, inter alia occupying a fair amount of space in the brain. And while human eyes and vision processing systems are all built along the same lines, there is much variation in detail. Variation which can result in variation in the subjective experience of vision, despite the efforts the brain may make to patch things up. Variation which can, for example, be the result of gross damage to the structure of one or both eyes, of genetic variations in the coding of relevant proteins (for example, the opsins which are important in colour vision), of variations acquired during development (rod and cone birth) or subsequently, possibly in old age (rod and cone death). Possibly to the extent of having only one working eye, possibly to the extent of being more or less blind, or, less seriously, of being colour blind. Then there is the sort of variation, mainly to do with the lenses of the eyes, which can be corrected with spectacles or, in the case of cataract, by replacement. Here, though, what we are concerned with is the more or less normal variation from person to person in the experience of seeing patches of colour; what one might think of as one of the more elementary functions of vision.

Figure 2

Variations which, it might be argued, are not significant, in that they do not bear on the point of the experience of colour, which is twofold. First, it provides a way of distinguishing one thing from another. Second, following on from the first, it provides a way of describing, labelling and identifying things out in the world so that we can think about them and talk about them with each other. To which end it helps if, most of the time anyway, we can agree which pairs of colours are different and we can agree on the names for colours. But neither this distinguishing nor this describing would necessarily be much disturbed if my experience of green, for example, were not the same as yours.

Consider the children’s bricks in the snap above, leaving aside the noise inserted by Dreamstime. They are intended for young children, so the colours are kept simple, perhaps because their visual skills are only just developing, perhaps because their language skills are only just developing.

On the one hand, it is helpful if a blue brick, perhaps the one in partial shadow, middle right, is the same blue all over. And if one was to inspect each side, individually, under suitable illumination, they would all be the same, more or less uniform colour – something which can now be achieved with modern paint – something which one did not necessarily get out in the natural world. Having the same blue all over makes identification of the brick as a whole easy and reduces the visual clutter. The conscious part of the brain can only handle so much information at a time. On the other hand, it is also helpful to have some information about the various parts of a brick, that is to say its various faces, and the position of the blue brick with respect to the other bricks in the heap. This is mostly done by varying the blue across the surfaces of the brick, sometimes done by replacing the blue with some different colour. Painters of flesh, for example, use all kinds of unexpected colours to bring out its features. The brain has to compromise between these competing demands. Complications which need to be borne in mind in what follows. Complications which are dealt with by a cooperation between the eyes and the various parts of the brain.

The laws

These laws are one of the foundations of the science of colour, a business which has occupied people for thousands of years and one which still has a good way to go. Wikipedia at reference 1 summarises these laws as the three integrals given in Figure 1 above – integrals of the product of a colour function (I) and the three response functions (r-bar, g-bar and b-bar) over the wavelength λ – with the wavelengths of interest being roughly in the range 360-780nm, the range of wavelengths of visible light – where an ‘nm’ or nanometre is one billionth of a metre.
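Written out, and absent the figure itself, the three integrals of Figure 1 take the standard form below – my reconstruction, with the integration running over the visible range and λ in nanometres:

```latex
R = \int_{360}^{780} I(\lambda)\,\bar{r}(\lambda)\,d\lambda \qquad
G = \int_{360}^{780} I(\lambda)\,\bar{g}(\lambda)\,d\lambda \qquad
B = \int_{360}^{780} I(\lambda)\,\bar{b}(\lambda)\,d\lambda
```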

Some things emit light and have their own colour function. But the things of interest here only have colour because of ambient light being reflected off them. The colour function of such a thing, perhaps just a simple patch of colour, is the product of the illumination at the time, often daylight, and its reflectance, with both illumination and reflectance expressed in terms of wavelength. Put another way, the colour of the patch is just the colour of the light which bounces off it.

Figure 3

This figure gives an illumination for daylight: a function which breaks daylight down into its spectral components and which varies with the weather conditions, the latitude, the time of year and the time of day. But a function which can, at any given time and place, often be regarded as constant.

Figure 4

This figure gives reflectance for green summer leaves and red autumn leaves, expressed as a percentage of the incident light, with the reflectance extending here beyond the visible range into the infrared. A figure which assumes that reflectance does not vary with intensity – although if one worried about that, one could always measure the light coming off the leaf directly.

Roughly speaking, take the product of these two to get the colour function of your object, of the thing in question. In this example, with the illumination being fairly uniform, the product will have much the same general appearance as the reflectance itself.
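By way of illustration, a minimal sketch of taking that product, using made-up numbers rather than the real curves of Figures 3 and 4:

```python
import numpy as np

# Wavelength grid over the visible range, in nanometres.
wavelengths = np.linspace(360, 780, 85)

# Made-up illumination: a fairly flat, daylight-like spectrum (arbitrary units).
illumination = np.ones_like(wavelengths)

# Made-up reflectance for a green leaf: a bump around 550nm, as a fraction of incident light.
reflectance = 0.05 + 0.15 * np.exp(-((wavelengths - 550) / 40.0) ** 2)

# The colour function of the leaf is the product of the two, wavelength by
# wavelength: the colour of the light which bounces off it.
colour_function = illumination * reflectance
```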

So we now have some amount of light coming into the eye, with the spectral density function or colour function being the I of Figure 1. This function can be expressed either in terms of photons or of energy – with the energy of a photon being a function of its wavelength. Here we suppose it is expressed in terms of photons, in moles per square metre per second, where a mole is approximately 6.02 × 10 to the power of 23 photons, sometimes, in contexts without superscript, written 6.02 × 10^23 photons. This being the only place where either place or time comes into the story told by Figure 1.
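As a check on the units – with numbers plugged into the standard formula for photon energy, E = hc/λ, rather than anything from Figure 1 – a mole of photons of mid-green light carries a couple of hundred kilojoules:

```python
# Energy of one photon: E = h * c / wavelength.
h = 6.626e-34          # Planck's constant, joule seconds
c = 2.998e8            # speed of light, metres per second
avogadro = 6.02e23     # photons in a mole

wavelength = 550e-9    # 550nm, a mid-green, in metres
energy_per_photon = h * c / wavelength           # about 3.6e-19 joules
energy_per_mole = energy_per_photon * avogadro   # about 2.2e5 joules, say 220kJ

print(energy_per_photon, energy_per_mole)
```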

Turning to the response function, we observe first that the retina usually contains a mixture of receptive cells; rods which are sensitive to brightness and cones which are selectively responsive to colour. These cones usually come in three varieties, usually called either long, medium and short or red, green and blue.

Figure 5

This figure suggests the response functions for three sorts of colour receptive cones in the human eye, although these raw sensitivities are not the whole story. Rather as the colour function can be split into two independent parts, the illumination and the reflectance, the response function can be split into the response functions of an individual cone and the distribution of those cones across the relevant part of the retina, usually some central region either around or within the fovea, the part of the retina where the cones are densest, more or less to the exclusion of rods, which are left out of this account. Whereas at the periphery the rods are very dense, more or less to the exclusion of cones.

We often make the simplifying assumptions that colour vision happens at the centre of the eye, where the cones dominate, and that there is just the one set of three response functions, more or less the same for everybody.

The colour function is then combined by integration with these three response functions to give an RGB triplet for transmission down the optic nerve to the brain.

These integrals are additive with respect to the colour functions, that is to say we can add or mix colours together and get a sensible result. So with response functions fixed, if A and B are colour functions, α and β are positive reals, then R(αA + βB) = αR(A) + βR(B), where R(A) is a shorthand for the first integral in Figure 1 above, replacing I with A.
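A toy numerical check of this additivity – with Gaussian curves standing in for the response functions of Figure 5, an assumption made purely for illustration:

```python
import numpy as np

wavelengths = np.linspace(360, 780, 421)   # nanometres

# Gaussian stand-ins for the r-bar, g-bar and b-bar response functions.
def response(centre, width=35.0):
    return np.exp(-((wavelengths - centre) / width) ** 2)

r_bar, g_bar, b_bar = response(570), response(540), response(445)

def rgb(colour_function):
    """The three integrals of Figure 1, done numerically."""
    return np.array([np.trapz(colour_function * bar, wavelengths)
                     for bar in (r_bar, g_bar, b_bar)])

# Two arbitrary colour functions A and B, and positive weights alpha and beta.
A = np.exp(-((wavelengths - 500) / 30.0) ** 2)
B = np.exp(-((wavelengths - 620) / 30.0) ** 2)
alpha, beta = 0.7, 1.3

# Additivity: R(alpha*A + beta*B) equals alpha*R(A) + beta*R(B), up to rounding.
print(rgb(alpha * A + beta * B))
print(alpha * rgb(A) + beta * rgb(B))
```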

One important consequence of these rules is that achieving a desired colour by mixing pigments in a paint pot, or by mixing coloured dots of ink on the page, is much easier than it would otherwise be: we can manage with a much smaller repertoire of base colours.

Digging deeper

Figure 6

Let us suppose that we have one eye and that eye is looking straight at a uniform, circular patch of a green colour. That patch of colour is mapped onto a small disc on the back of the retina which is centred on the centre of the fovea, that is to say the part of the retina where the colour sensitive receptive cones are concentrated, more or less to the exclusion of the light sensitive rods. Inclusion of which rods would imply adding a fourth line to the three in Figure 5, as indeed some versions of that diagram do, giving us quadruplets rather than triplets.

The size of the disc is often measured in degrees, that is to say the angle made by the disc at the centre of the lens of the eye, typically between 2° and 10° – out of a total of around 140° side to side and around 120° up and down. Which can be compared with, say, a horse, which has something much closer to all-round vision.

Note that the big blue pipe from the retina to the brain is feed forward only. But there is some feedback in the brain’s control of the position of the eye, control of what it is that the eye is pointing at. Control which is not total, but is near total. Which leads to the interesting question of how the continually moving image on the retina – the eye might be making moves at a rate of the order of 3Hz, three times a second – is mapped onto something steadier in the brain – the subjective experience not being of things jumping about. A problem for another day, but one which the LWS-R of reference 10 starts to address by structuring time into frames (very roughly, one position of the head), takes (one position of the body) and scenes (one place in the world).

Note also that we suppose the processing that is done in the eye, in the retina itself, is fairly local and fairly elementary. The eye does not know much about things although it may know something about the boundaries of things. For processing the big picture, we have to look to the brain with its much greater processing capacity.

Now if a human were more like a computer than is actually the case, the human eye might be thought of as a camera which makes a bit map image of some part of the world outside, perhaps a large circular assembly or array of coloured pixels, where each pixel takes one of, say, 256 colours. That image is then transmitted down some pipe, perhaps a fibre optic cable, to the brain for further processing. Human affairs are not like this, but for present purposes we assume that something of this sort is happening, along with lots of other stuff which is not of present concern. Somehow or other, the eye transmits information about the colour of each point, or perhaps each small blob, in the visual field, to the brain.

If the great architect were a software engineer, and the cable from the eye to the brain were not broadband, he might think the way forward was to push position-colour pairs down as many channels – that is to say axons – as were available and then reassemble those pairs into the array of pixels at the other end. This might include various dodges, like not sending the next pair in the case that it was the same colour as the last pair. If he were a good engineer, he might add various global adjustments to the mix, prior to transmission, perhaps allowing for the brightness of the image or the colour mix more generally. Perhaps doing something about contrast. Such adjustments might also be made in the brain, rather than in the eye.
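A toy version of the first of those dodges – dropping a pair when its colour repeats the previous one, on the assumption that pairs are sent in a fixed scan order so the receiver can fill in the gaps:

```python
def compress(pairs):
    """Drop a (position, colour) pair when its colour repeats the previous one."""
    sent, last_colour = [], None
    for position, colour in pairs:
        if colour != last_colour:
            sent.append((position, colour))
            last_colour = colour
    return sent

# A short run of pixels, mostly the same blue.
pixels = [(0, "blue"), (1, "blue"), (2, "blue"), (3, "green"), (4, "blue")]
print(compress(pixels))   # [(0, 'blue'), (3, 'green'), (4, 'blue')]
```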

Figure 7

Now the image which the brain assembles contains a lot more than just the colour of a whole lot of pixels. A lot of work has been done to make those pixels intelligible, and a lot of other information needs to be sent from the eye down the cable to the brain, where it is applied together with the information the brain brings to the party. A pipe which is made up of the axons of ganglion neurons located on the side of the retina facing the incoming light. So maybe the number of axons available to do the colouring of pixels is 600,000, that is to say a bit more than half the total – against the much larger number of cones, perhaps 5 million – and the still larger number of rods, perhaps 100 million. All figures being for one human eye. And much more complicated sections of the retina are available than that from Pearson above.

And what may have been done instead of colour-position pairs is that 3 axons are assigned to each of 200,000 positions, those axons carrying the RGB triple for that position, and carrying on carrying it for the duration. It is no longer a case of fire and forget: each axon will carry its signal in the form of a sustained firing rate which will only change when the colour changes.

With each RGB triple derived from something of the order of 25 cones. And it may well be that the receptive areas of neighbouring neurons overlap.
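The back-of-envelope arithmetic behind these numbers, using the figures quoted above:

```python
axons_for_colour = 600_000      # a bit more than half the optic nerve
axons_per_position = 3          # one axon each for R, G and B
cones = 5_000_000               # cones in one human eye, roughly

positions = axons_for_colour // axons_per_position    # 200,000 positions
cones_per_triple = cones / positions                  # about 25 cones per RGB triple

print(positions, cones_per_triple)
```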

In any event, by these means the stimulus arriving at our disc has been translated into the firing rates of those three groups of axons, that is to say an RGB signal, the R, G and B of Figure 1. Rather fancifully, one might think of a cross section of the part of the blue pipe carrying our 600,000 axons as reproducing our image, very much in miniature. Which all has to be processed downstream, in the brain, before the subjective experience can be generated, a generation which might involve reassembling some kind of copy of the original patch in the brain.

On these numbers, a rather smaller image than that generated by the camera in a mobile phone, but quite good enough for our purposes.

Figure 8

Another summary is offered in the figure above. The relevant part of the retina is made up of red, green and blue cones. Cones of the same sort, at roughly the same place, are collected into reception zones. We allow any one cone to belong to more than one zone, hence the double-headed arrows. The distribution of cones and their zones is random, a matter of chance during development, but is also assumed to be reasonably uniform, with stable averages. Then the cones of a zone map onto the dendritic tree of a neuron, which then projects its firing, down its axon in the pipe, to the brain. And the brain knows, in some way, where each axon is coming from. The establishment of all of which might be supposed to be largely a matter of development – both ante- and post-natal – and then fixed for life, but quite possibly subject to all kinds of systematic variation, random variation and noise.
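A toy data-structure version of the figure – the cone counts, zone sizes and firing rule are all made up for the purposes of illustration:

```python
import random

random.seed(1)

# A small patch of retina: cone index -> cone type, assigned at random.
cones = {i: random.choice("RGB") for i in range(100)}

# Reception zones: each zone is a random handful of cones of one type.
# Zones are allowed to overlap, hence the double-headed arrows of Figure 8.
def make_zone(cone_type, size=5):
    candidates = [i for i, t in cones.items() if t == cone_type]
    return random.sample(candidates, size)

zones = {t: [make_zone(t) for _ in range(10)] for t in "RGB"}

# Each zone feeds the dendritic tree of one ganglion neuron, whose firing rate
# is taken here, very crudely, as the average stimulation of its cones.
def firing_rate(zone, stimulation):
    return sum(stimulation[i] for i in zone) / len(zone)

stimulation = {i: random.random() for i in cones}   # a made-up stimulus
print([round(firing_rate(z, stimulation), 2) for z in zones["G"]])
```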

Note that, under this arrangement, our eye is quite capable of transmitting a moving picture. If the picture moves, the signal will make the corresponding moves by default. The concept of transmitting a frame at a time, in the way of a film at the cinema, does not arise. At least not at this stage of the game.

Summary and assumptions

We have some amount of light coming into the eye, with the colour function being the function I of Figure 1, and we have supposed that this colour function is expressed in terms of photons – that is to say in moles per square metre per second, where a mole is approximately 6.02 × 10 to the power of 23 photons.

This light is then converted into an RGB signal – the three firing rates of an axon triple – by way of the response functions of Figure 5 above.

These response functions are often assumed to be the same for everybody and for every central patch of retina – although they are known to vary slightly with age and with the size of the patch. They may well vary in other ways, although not, it seems, with sex or race.

On this assumption, any given light, described by its function I, will always give the same RGB signal. Furthermore, linear mixtures of light will always give the corresponding linear sum of RGB signals.

We might further assume that the subjective experience of colour is completely determined by the RGB signal. Sufficiently different RGB signal, then different subjective experience, with one question of interest being how different the signals have to be before the subjective experience is different. When does the subject say that two colours, presented side by side, are the same? Humans, as it happens, are much better at saying that a pair of adjacent colours are not the same than they are at naming or even describing any particular colour in isolation. Thus giving rise to a need for colour atlases to try and pin things down a bit, the sort of thing produced by the people at reference 4. A lot of money rides on these atlases; money which propels free tutorial material like that at reference 6.
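One crude way to model that question – assuming, purely for the sake of illustration, that ‘sufficiently different’ just means a small distance between the triples:

```python
import numpy as np

def looks_the_same(rgb_a, rgb_b, threshold=0.02):
    """Two patches count as the same colour when their RGB triples are within
    some small distance. The threshold is entirely made up; real discrimination
    thresholds vary with the colours involved and with the observer."""
    return np.linalg.norm(np.asarray(rgb_a) - np.asarray(rgb_b)) < threshold

print(looks_the_same([0.30, 0.55, 0.20], [0.31, 0.55, 0.20]))   # True: barely different
print(looks_the_same([0.30, 0.55, 0.20], [0.40, 0.50, 0.20]))   # False: clearly different
```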

This last assumption neglects the probability that the brain interferes with the RGB signal produced by the eyes. Are the sort of interactions described by Albers at reference 5 the business of the eyes or the brain? If I see a green woodpecker in the garden, a bird which I know is mainly green, will that knowledge tend to overrule whatever it is the eyes might be saying? A probability which is emphasised at reference 9.

Problems which the users of colour atlases try to avoid by presenting patches of colour suitably illuminated against neutral backgrounds – conditions which do not generally prevail in the outside world.

Tests

The Maxwell test and the Rayleigh test have been developed to explore the breakdown of light into its primary components, to map a miscellaneous blob of light onto an RGB triple, to compare the subjective experience of the blob with that of the triple. Inter alia, to test the truth of the Grassmann laws. With the original test equipment being 19th century brown wood – usually mahogany – glass and brass.

One of the points of interest here is the differences between people that these tests show up, in particular those between anomalous tri-chromats and normals. That is to say, people who have three sorts of colour cone, but with significant differences in their response functions from those of normals. Their RGB triples are not the same as those of normals.

And then there are the people who only have two sorts of colour cone, and the much smaller number who have four – which throws the whole three dimensional model into disarray.

While the brain may be able to mitigate these problems, it seems clear that the subjective experience of colour of these people does differ from that of normals.

More generally, it seems to be the case that the Grassmann laws do break down at the margins, for example when there is little light.

Additional information

We have used continuous, relatively uncomplicated colour functions. But there are things out there which are not continuous and there are things out there which are more complicated.

Figure 9

The sort of thing that interests astronomers. But not the sort of thing the man in the street encounters, down here on earth.

Figure 10

The spectra of various sources of light.

Figure 11

Complications when you look at the big picture, complications which thin out when we confine ourselves, as we do here, to visible light at sea level.

Conclusions

It is commonly said that every human being is unique. The argument here is that part of that uniqueness lies in the subjective experience of colour, even among people with more or less normal colour vision.

Variations in the makeup of the retina do result in the tri-chromatic coding of colour by the eye varying from person to person and from time to time. These variations translate into variations in the subjective experience of colour generated by the brain, these last despite the brain’s at least theoretical capability to interfere with, to vary, that tri-chromatic coding.

Such variation includes one person saying that two adjacent patches of colour are the same and another person saying they are different. 

Figure 12

Noting here that we are not much good at saying whether two patches of colour are the same colour unless they are adjacent, with a straight boundary between them, presented against a neutral, uniform background and appropriately illuminated. Perhaps along the lines suggested in the figure above. Lots of interesting examples of what happens when these conditions are not met are to be found, for example, at reference 5.

But then, one thinks, so what? So my experience of the world is slightly different to yours? So long as the differences are not so large as to interfere with communication in words, does it matter? Bearing in mind that words, while digital-friendly in themselves, are rather more shaky, rather more elusive than colours. Who can say what a word really means?

References

Reference 1: https://en.wikipedia.org/wiki/Grassmann%27s_laws_(color_science)

Reference 2: Zur Theorie der Farbenmischung – Grassmann, H. – 1853. Which title might, I think, be loosely translated as ‘on the theory of colour mixing’, with ‘Farben’ being roughly ‘colours’.

Reference 3: https://en.wikipedia.org/wiki/Hermann_Grassmann

Reference 4: https://munsell.com/

Reference 5: Interaction of colour – Josef Albers – 1963. This edition is now rather expensive – but there are much cheaper reprints.

Reference 6: https://www.hunterlab.com/media/documents/basics-of-color-theory.pdf. The source for Figure 3 above.

Reference 7: https://planetarygeolog.blogspot.com/2013/06/the-colour-of-squiggly-lines.html. The source for Figure 4 above.

Reference 8: Statistical properties of color matching functions – María da Fonseca and Inés Samengo – 2021. For those of a statistical bent, the argument here seems to be that a lot of the observed variation can be put down to statistical variation in photon absorption by the cone receptors.

Reference 9: Individual differences and their implications for color perception – Emery, K. J. and Webster, M. A. – 2019. A rather easier canter over some of the same ground as the foregoing.

Reference 10: http://psmv4.blogspot.com/2020/09/an-updated-introduction-to-lws-r.html

Reference 11: https://psmv4.blogspot.com/search?q=saccade. Coming at eye movements from rather different viewpoints.
