This was prompted by reading recently in reference 1 that feelings, the felt part of the emotional system, were necessarily felt, necessarily conscious. Although, to be fair to Solms, I am only at chapter 5 and this attractively simple proposition might be qualified as I get further into the book. Regarding which I might say that, so far, I am getting on with it rather well. Not least because he still believes in Freud: by no means the whole truth, not even nothing but the truth, but at least some of the truth.
In the block diagram above, I offer a simplified version of the one at reference 2. But rather than the dog there, here we have a humanoid robot, under the control of the computer edged in yellow, housed in the body of the robot. Most of the work is done in the lower compartment, in the engine room, where there is lots of parallel processing, but when the going gets rough, when something goes wrong, an interrupt is raised with the supervisor, our robot's version of consciousness – and the supervisor is expected to sort things out. Not so much parallel processing there and we end up with a serial, narrative thread: do this, then do that. A serial thread which the body can deal with: it is no good telling (even a royal) hand to wave to the right and to wave to the left at the same time.
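To make that division of labour a little more concrete, here is a minimal sketch in Python of the sort of arrangement I have in mind. All of the names are my own invention, nothing to do with reference 2, and a real robot would of course be a great deal more elaborate.

```python
import queue
import threading
import time

interrupts = queue.Queue()                   # engine room -> supervisor

def engine_room_worker(name):
    """One of many parallel processes down in the engine room."""
    for step in range(3):
        time.sleep(0.01)                     # routine, unsupervised processing
        if step == 2:                        # the going has got rough
            interrupts.put(f"{name}: cannot complete step {step}")

def supervisor():
    """Take interrupts one at a time, producing a serial, narrative thread."""
    actions = []                             # do this, then do that
    for _ in range(2):                       # one interrupt expected from each worker
        problem = interrupts.get()           # blocks until an interrupt arrives
        actions.append(f"deal with: {problem}")
    return actions

workers = [threading.Thread(target=engine_room_worker, args=(name,))
           for name in ("left hand", "right hand")]
for w in workers:
    w.start()
print(supervisor())                          # a serial list the body can act on
for w in workers:
    w.join()
```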
Noting in passing that plenty of people, me included, would have enough difficulty doing something tricky with the right hand while at the same time doing something else tricky with the left hand. A difficulty which gets worse as the two hands get further apart. In which connection, I might say that when carrying two cups of tea up the stairs in the morning, one in each hand, it is important to keep my hands quite close together and it is a big help if there is at least some light. Another tricky activity, certainly for me, would be tapping a hand to one rhythm while tapping a foot to another. From where I associate to a story read long ago about how a politician like Disraeli could keep three secretaries busy, taking down three letters to his dictation, more or less simultaneously. I don't recall whether the secretaries were using quill pens or whether they knew some kind of shorthand – but in any event, on this account, he could keep the three streams of words in play at the same time.
A robot, provided the engine room was big enough, might be able to cope perfectly well with this sort of thing. And in the event that the engine room was getting overloaded, the supervisor should be able to sort something out.
The supervisor being a version of consciousness is a shorthand way of saying that, inter alia, the supervisor does the stuff which is attributed to consciousness in humans. A homunculus if you will, the homunculus which is not allowed to humans by respectable neuroscientists, in part because they worry about infinite regress. Where does the buck stop? That said, there is no suggestion that our robot is conscious. It is just a bit of machinery which can be turned off or dropped into the crusher without a qualm. Although we might have to admit a few qualms: one can get quite attached to far more ordinary machinery, so I daresay one could get even more attached to our robot. I recall reading of Japanese seniors getting attached to their furry feline robots: they know that they are not the real thing, but the felines can do enough for stroking them to be a satisfying activity. All of which says something about us – rather than something about the robot.
This supervisor, as well as being an analogue of consciousness, is also an analogue of the operating system of a PC or of the sort of mainframe computer which dominated the 1970s and 1980s. Perhaps an ICL 1906S or an IBM System/370. One of the jobs of the operating system was to maximise resource usage by running several – perhaps many – streams of processing at the same time. Just the sort of capability needed to give Disraeli a run for his money in the matter of his dictation. And the ability of a process to raise an interrupt with the operating system is an important part of this. A process might, for example, ask the operating system for some more resources: for more memory or for access to this or that file. A request which might, on occasion, be denied.
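A toy version of that last exchange, assuming a supervisor which does nothing cleverer than keep a running total of free memory, might look like the following. The three letters are made up and no claim is made that a real operating system, ICL or IBM, went about things this way.

```python
class Supervisor:
    """Toy operating system: grants or denies requests for more memory."""
    def __init__(self, free_memory=1024):
        self.free_memory = free_memory

    def request_memory(self, process, amount):
        """A process raises an interrupt asking for more memory."""
        if amount <= self.free_memory:
            self.free_memory -= amount
            return f"{process}: granted {amount}"
        return f"{process}: denied"          # the request might be denied

supervisor = Supervisor()
for letter in ("letter one", "letter two", "letter three"):
    print(supervisor.request_memory(letter, 512))   # the third request is denied
```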
The lower compartment talks with the rest of the robot, controlling things like power supply, the hands and the feet, perhaps facial expression, while the upper compartment only talks with the lower compartment. It only has indirect access to the rest of the robot and to the world outside.
I dare say that the robot would include a degree of distributed processing, with, for example, limbs having their own processing units, rather as a computer might have a disc controller to micro-manage some disc units. But the central computer is very much in charge of behaviour, not least in directing the movements of limbs. While being content to delegate the actual execution of that movement.
The space of behaviours
The figure above is intended to give some idea of how I am looking at things. It depicts the space of behaviours, projected down to two dimensions in order to show how well our robot (left in blue) and the conscious human (right in green) do on them.
We suppose we have scored the robot and the human for their capability at each behaviour, with scores being non-negative integers and with zero for no or insignificant capability. Zero for the robot is the area outside the blue cloud; zero for the human is the area outside the green cloud.
So the behaviours that our robot can manage are suggested in blue – A + B + C + D – on the left, while the behaviours that the conscious human can manage are suggested in green – B + C + D + E – on the right. With the drab green region in the middle – B + C + D – being the behaviours that both can manage, at least up to a point. B is the area where the robot does better than the human, C is the area where they are about the same and D is the area where the human does better than the robot.
An example of a behaviour in B might be a memory game, and an example of a behaviour in D might be playing poker. The robot can play poker and is good at working out the odds, but the human is apt to be better at reading the faces, reading the game. Which might give the human the necessary edge.
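To pin the figure down a little, one might classify each behaviour by its pair of scores, robot and human. The scores below are entirely made up, just enough to exercise the classification; a sketch rather than a serious proposal.

```python
def region(robot_score, human_score):
    """Place a behaviour in one of the regions of the figure above.
    Scores are non-negative integers, zero meaning no significant capability."""
    if robot_score == 0 and human_score == 0:
        return "outside both clouds"         # left to djinns and divinities
    if human_score == 0:
        return "A"                           # robot only
    if robot_score == 0:
        return "E"                           # conscious human only
    if robot_score > human_score:
        return "B"                           # both, robot better
    if robot_score < human_score:
        return "D"                           # both, human better
    return "C"                               # both, about the same

# Made-up scores for a few behaviours
behaviours = {"memory game": (9, 5), "poker": (6, 8),
              "driving a car": (7, 7), "function X": (0, 6)}
for name, (robot, human) in behaviours.items():
    print(name, "->", region(robot, human))
```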
I gloss over the fact that different robots and different humans will manage different behaviours, and that both might change over time. That said, there are lots of behaviours which most robots and most humans can both manage.
In what follows I am particularly interested in the green area to the right, where the conscious human can do stuff which the unconscious robot cannot manage at all.
I leave behaviours which are outside both coloured zones to extraterrestrials, divinities, djinns, sprites and so forth.
The question of present interest
The question of present interest is not so much how consciousness works, how the wetware delivers the subjective experience, but rather what it might be for. A puzzle which has attracted a lot of work in recent years – not to say over the millennia – and there is a growing list of things that we only seem to be able to do when we are conscious, probably only when we are conscious of doing them, which is not quite the same thing. While bearing in mind that it is also becoming apparent that lots of unlikely things can be done unconsciously. Furthermore, my understanding is that higher grade physical skills, like hitting a small ball into a not very big hole which might be tens of yards away, are best done without consciously fussing about the details. But also without trying to read the works of Shakespeare at the same time, which would draw too much processing power away from the unconscious.
Many people think that our robot, hardware rather than wetware, is unlikely to be conscious any time soon. My argument is that our robot can, nevertheless, be programmed to do all the stuff for which consciousness might be thought to be necessary. To the point where it remains unclear to me what consciousness is bringing to the party. Why did evolution bother with it? Is it just an accidental by-product of stuff which is useful? An epiphenomenon in the jargon of the trade.
So, against this background, I present you with a robot which offers a reasonable approximation to the behaviour of a real person. Then you say ‘that’s all well and good but what about function X. Only a conscious human can do that sort of thing’. To which I respond ‘you tell me what function X is and I’ll code it up. Just a bit more engine in the engine room and a bit more supervision from the supervisor. You’ll not be able to tell the difference from the real thing’.
So it might be that to respond to Solms, I build an emotion sub-system. Something more than an automatic reflex, rather a system that alerts the host, that is to say the supervisor, that there is something of which it should be aware. It might be a success, like eating a good bit of cake. Or it might be a problem, usually giving some indication of what or where – perhaps a pain in the big toe – and then leaving the supervisor to decide what, if anything, to do about it. The felt emotion does code for action in a limited way – for example to approach or avoid, to fight or flee – but it also gives the supervisor the opportunity to do something more nuanced, something more complicated than just approach or avoid, or perhaps to do nothing at all.
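A minimal sketch of what such a sub-system might hand up to the supervisor, with all of the names and the particular responses invented by me for the purposes of illustration:

```python
from dataclasses import dataclass

@dataclass
class EmotionalAlert:
    """What the emotion sub-system passes up to the supervisor."""
    label: str          # e.g. "pain", "satisfaction"
    location: str       # some indication of what or where, e.g. "big toe"
    suggestion: str     # the limited coding for action: "approach", "avoid", ...

def supervisor_decides(alert):
    """The supervisor may follow the suggestion, do something more nuanced,
    or do nothing at all."""
    if alert.label == "pain" and alert.location == "big toe":
        return "stop walking and inspect the shoe"   # more nuanced than plain avoidance
    if alert.label == "satisfaction":
        return None                                  # nothing to do, carry on with the cake
    return alert.suggestion                          # otherwise fall back on the coarse coding

print(supervisor_decides(EmotionalAlert("pain", "big toe", "avoid")))
print(supervisor_decides(EmotionalAlert("satisfaction", "mouth", "approach")))
```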
Consciousness is bringing attention and choice to the party. With my argument being that both of these can be managed by the supervisor of our engine room without needing to bring consciousness on board at all.
So, proceeding iteratively, my robot will get more and more like a human being, while remaining unconscious. So what is missing from the robot? What is missing from, what is wrong with the argument?
I observe in passing that the ability of programmers to build on what has gone before is one of the great strengths of computers. And while evolution also builds on what has gone before, it is very much a matter of trial and error and certainly takes a very long time: programmers can do better, maybe even intelligent design. But maybe the Great Architect would be a bit of a stretch.
A red herring or a complication?
In so far as our robot is sensing things which are remote – that is to say much of the input from the eyes and the ears, action at a distance – all this is quite plausible. What the robot gets through its eyes and ears is quite like what we imagine a human getting. So we can code up the visual field, we can build a data structure for the visual field, reasonably confident that a human must, in some sense or other, be doing something of the same sort. We can, for example, program our robot to drive a car. But what happens when it is sensing things which are much closer to home? Perhaps a wasp jabbing its sting into its arm.
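Before turning to the wasp, a sketch of the sort of thing I mean by a data structure for the visual field. Entirely schematic, with the fields chosen by me for illustration rather than taken from any real driving system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SeenObject:
    label: str           # "car", "traffic light", "wasp", ...
    bearing: float       # degrees left or right of straight ahead
    distance: float      # metres

@dataclass
class VisualField:
    """One snapshot of what the eyes deliver to the engine room."""
    timestamp: float
    objects: List[SeenObject]

snapshot = VisualField(0.0, [SeenObject("car", -15.0, 40.0),
                             SeenObject("traffic light", 2.0, 60.0)])
print(len(snapshot.objects), "objects in view")
```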
By way of example, we might arrange things so that our robot did anger, something that more or less all humans do, in the sense that its program had a data type called feeling which consisted of three items: a label, a real constant, positive or negative, to hold valence (for good or bad), and a real variable, non-negative, to hold intensity. One instance of that data type could then be anger. We could then program the robot to qualify its behaviour by that data, by how angry it was. Too much anger and behaviour would degenerate to something rather basic, possibly rather unpredictable.
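In Python, that data type and the qualification of behaviour by it might look something like this, with the particular thresholds and the particular behaviours being my own inventions:

```python
class Feeling:
    """A label, a fixed valence (positive or negative) and a varying,
    non-negative intensity, as per the data type described above."""
    def __init__(self, label, valence):
        self.label = label
        self.valence = valence       # a constant: for good or bad
        self.intensity = 0.0         # varies as events come and go

anger = Feeling("anger", valence=-1.0)

def choose_behaviour(anger, boiling_point=0.8):
    """Qualify behaviour by how angry the robot is."""
    if anger.intensity > boiling_point:
        return "something rather basic, possibly unpredictable"
    if anger.intensity > 0.3:
        return "curt but controlled response"
    return "business as usual"

anger.intensity = 0.9                # something has gone badly wrong
print(choose_behaviour(anger))
```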
Would it matter, in the present context, that in a human, anger is triggered by all kinds of unconscious goings on and is associated with all kinds of physiological goings on, the last of which, at least, our robot does not have? I think Damasio talks of emotional systems, perhaps at reference 4, with the subjective feeling of anger being just one aspect of the anger system as a whole – which in this case maps in our robot to an instance of the feeling data type called 'anger' and the supporting computer code.
Robots like the Sophia of reference 3 with her touch-sensitive hands and mobile face notwithstanding, robots are not built in the same way as we are. But do we need our robot to understand, at first hand, things like thirst, pain and anger, if it is to exhibit a reasonable range of human behaviour?
Noting here that some people think that feelings came first. That is to say that first we evolved subjective feelings like pain, anger and thirst: feelings driven by physiology. Then came the subjective experiences and images arising more or less directly from our senses, the association of the feelings with stuff coming in from our senses. Then, lastly, language and thoughts: some kind of mental model of both ourselves and the world around us. So if subjective feelings came first and are so basic to what we are, surely our robot is missing out on something that really matters?
The problem being to translate that intuition into something more concrete; into observable behaviours where the subjectivity of feelings really matters.
One angle here is that while robots will not have the same mental and physical problems as humans, they will, no doubt, have problems. They will have symptoms and they will need to be serviced. Some of these symptoms may well map reasonably comfortably onto human analogues, some may not.
Another angle is the fact that humans do not have conscious access to many of their internal workings. One would think that a robot would have. The supervisor of our robot could be given access to any data that there was on the system. One can just turn that access off, but that is not quite the same as not having it at all. While providing new data to consciousness in a human is much more difficult, if possible at all given the current state of the art – although I believe that some mystics from India do make claims in this area. I dare say exotic types in California do too. Otherwise, one just has to wait for access to evolve – or not, as the case may be.
Experiments which manipulate consciousness
Neuroscientists do not, on the whole, try to work with real-life scenarios, which are apt to be complicated and difficult from their point of view. But the sort of scenario that does work is to manipulate some stimulus, typically sight or sound, at the boundary of consciousness and to watch what happens when one crosses that boundary, in one direction or another.
One might get the (human) subject to say when the stimulus is conscious by pressing a button. From where I associate to the hearing tests where, at the margin, one is not sure whether one has heard the test sound or not. Although, I dare say, if one does enough trials, statistically it is clear enough.
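A toy simulation of that last point, with the detection probabilities plucked out of the air: any one trial near the margin is uncertain, but the rates settle down well enough over a large number of them.

```python
import random

def detection_rate(p_press, n_trials=2000):
    """Proportion of trials on which the subject presses the button."""
    presses = sum(random.random() < p_press for _ in range(n_trials))
    return presses / n_trials

print(detection_rate(p_press=0.55))   # a marginal tone, heard a little over half the time
print(detection_rate(p_press=0.10))   # catch trials with no tone, occasional false alarms
```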
One might explore whether a stimulus had to be conscious to do its work, or whether the unconscious could get hold of a weaker stimulus than that needed to reach consciousness. Note that we are not talking here of whether the experimental subject is conscious or not – experimental subjects are generally conscious – but whether that subject is conscious of some particular stimulus or not.
One might wire the subject up to see what is happening in the brain, what is different about conscious stimuli. To work towards neural correlates of this sort of consciousness – and there has been a fair bit of success in this area. Maybe in time this will be an important part of our learning how the brain converts electrical activity into subjective experience.
But this is not the same as thinking about how a robot would do, whether a robot could replicate the behaviour of a human – which is the present point of interest.
As far as sights and sounds are concerned, it is quite likely that the robot will be a lot more sensitive than a human, that the boundaries will be in different places. Also that it will be less likely to miss things because its attention is elsewhere: it will notice the tiger creeping up while it examines the chocolate put out as bait.
And if the objective were to fake human behaviour, faults and all, one might want to tune all those robot sensitivities down to human levels.
Restatement of our experimental objectives
We restrict ourselves to observable and comparable behaviour. Where by observable behaviour we mean behaviour which can be observed by unaided eyes and ears – so excluding all the conscious stuff, images and inner thoughts, going on inside heads. Conscious stuff which might, to some extent at least, drive observable behaviour, while not being part of the behaviour. And by comparable behaviour we mean behaviour available, at least in general terms, to both humans and robots. So driving a car, doing the accounts, answering the telephone and playing poker are in – while activities depending on the senses of touch, smell and taste are mostly out. Activities to do with taking on fuel or getting rid of the rubbish are mostly out. Something else that will change as robots get more sophisticated: I don’t think, for example, that there is anything fundamental stopping a robot smelling. It is just a question of throwing enough money and machinery at it.
We then ask what behaviours, if any, are available to a conscious human which are not available to an unconscious robot.
In which connection, things which a robot can manage, but a human cannot, are of less interest.
Conclusions
So work in progress, with the next step being to resume reading Solms. Maybe some lights will come on.
In the meantime, my position remains that if you can articulate the fact that something is missing from my unconscious robot, I can code it up and then it won't be missing any more. So I still don’t know what the point of consciousness is.
References
Reference 1: The Hidden Spring: A Journey to the Source of Consciousness – Mark Solms – 2021.
Reference 2: http://psmv3.blogspot.com/2017/04/its-chips-life.html.
Reference 3: https://psmv5.blogspot.com/2023/04/the-robot-sophia.html.
Reference 4: Persistence of feelings and sentience after bilateral damage of the insula – Damasio, Damasio and Tranel – 2013.
Group search key: sre.