The Evolution of Consciousness

I was imagining a rudimentary AI and how it would form concepts about some problem domain that it was employed to understand. It came to me with a jolt that it needed no concept of itself. If it is worrying about how the air-conditioning and heating of a building works in various weather patterns, there seems to be no reason whatsoever to include the worrier in the model. Only the fish deprived of water may come to understand in some vague way the concept of water. Imagining an intelligence with no concept of self, I was better able to think about the nature of the concept of self.

The various senses each have adaptive value to the organism and the brain processes the data from these senses to build a better model of the world it lives in. Evolution grasps for data in any direction and tries to exploit it for adaptive advantage. Our senses develop when evolution finds such useful data sources.

Some sort of intelligence is a natural outcome of such evolutionary elaboration. For this argument I define intelligence as some algorithm that processes sensory data to make a better living for the organism. The brain performs this algorithm. It seems plausible that information about the working of the algorithm proper will eventually come to be a useful data source, rather as our other senses provide data about our environment. It might be thought that the algorithm already had ample access to its own workings, but this may not be so; witness the difficulty we have in discovering grammar rules that we evidently employ. Our theory of grammar seems to come from deductions from observed speech patterns, rather than from introspection. Most AI schemes in the literature are sublimely unaware of themselves!

I posit the development of sensory nerves that report on the activity of the brain proper, and report it as subject matter, just as the optic nerve reports images of things out there as subjects to be considered and perhaps included in a model.

The Stretch computer (IBM model 7030) had a scan-out feature designed to help find hardware failures in the field. When hardware checking circuits detected an error, normal execution of instructions would pause, mid-instruction, and the states of all of the latches (flip-flops) would be copied into a core memory by hardware means specialized to just that task. The Stretch, or an unbroken companion machine, would then proceed, and a highly Stretch-savvy program therein would examine this record and try to diagnose the failure. It is noteworthy here that the computer proper could not read its own state without these specialized means. It required extra hardware to move the data that was the essence of computing to a place where it could be computed upon! Many modern processors also have scan-out, such as JTAG, which is now used mainly in the later stages of debugging new hardware designs.
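
To make the scan-out idea concrete, here is a minimal sketch in Python. Everything in it is invented for illustration; no real machine works from this code. The structural point it shows: the CPU proper has no instruction that reads its own latches, and only a separate, specialized path can copy that state into ordinary memory, where a diagnostic program can compute on it.

    # A toy model of Stretch-style scan-out; all names are invented.
    class ToyCPU:
        """A CPU whose internal latches are invisible to the programs it runs."""
        def __init__(self):
            # Private latch state: no instruction of the toy machine reads these.
            self._latches = {"accumulator": 0,
                             "instruction_counter": 0,
                             "check_triggered": False}

        def step(self):
            """Execute one instruction; a hardware check may fire mid-instruction."""
            self._latches["instruction_counter"] += 1
            if self._latches["instruction_counter"] == 3:   # simulated fault
                self._latches["check_triggered"] = True
                return "check"                              # pause execution
            return "ok"

    def scan_out(cpu, core_memory):
        """The specialized 'hardware' path: copy every latch into core memory.
        Only this function, not the CPU's own instructions, can do this."""
        core_memory["scan_record"] = dict(cpu._latches)

    def diagnose(core_memory):
        """A 'Stretch-savvy' program examines the record as ordinary data."""
        record = core_memory["scan_record"]
        if record["check_triggered"]:
            return "fault near instruction %d" % record["instruction_counter"]
        return "no fault found"

    cpu, core = ToyCPU(), {}
    while cpu.step() == "ok":
        pass
    scan_out(cpu, core)    # the extra hardware moves the state into memory
    print(diagnose(core))  # -> fault near instruction 3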

Perhaps speech, or some immediate substrate of speech (à la Chomsky), is implicated in this function. Much of what I am directly aware of that transpires in my head is in the form of speech, or can at least be facilely rendered as speech. If the reports of this wiring merely proceed directly to short-term memory, as with the Stretch above, then the observable effects that I can think of are explained. This also fits my subjective experience. After all, as I verbally report these self-observations, I draw on my memory just as I do when I report other observations verbally. By the time I report them they are only memories, and I suspect that they were mere memories as I became aware of them. The above is more of a proposal for terminology than a conjecture.

A computer, in so far as it knows anything, does not know how it works. If it did, there would be a way to write a program whose meaning was to print out the wiring diagram of the computer. But a goal of computer engineers is to build computers that run programs that produce the same result regardless of what computer you run them on. Such a goal is useful in designing computers for roughly the same reason that brains need not know how they work. Such knowledge was not useful, at least until fairly recently, perhaps 20,000 years ago. Then consciousness began to emerge, and parts, just parts, of the brain became useful for the organism to understand. Perhaps the utility of this was to influence other humans to think the same way through language. (Some programs “know themselves”.)
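
The parenthetical about programs that “know themselves” can be made concrete with a quine, a program that prints its own source text exactly. The Python lines below do so (the string reproduces the comment as well). Note that even this self-knowing program learns nothing about the wiring of the machine beneath it.

    # A quine: this program prints its own source text.
    s = '# A quine: this program prints its own source text.\ns = %r\nprint(s %% s)'
    print(s % s)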

Qualia

Philosophers report qualia as being presented directly to the conscious mind, and not directly ‘placed in memory’. This is nearly a distinction without a difference. Some early computers did input/output via the CPU, which nearly always merely mediated between the I/O device and memory. In any case I think the distinction is unimportant, and introspection is especially subject to illusions here.

If our reports of things that ‘we are conscious of’ are indeed what goes to memory, and writing to memory is sequential (the simplest hypothesis), then this would explain the unified nature of consciousness that Metzinger reports as central to the idea. This unity is not logically necessary; concurrent instances of consciousness are logically possible, but would then be noticed retrospectively by drawing on memories thereof.
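
A toy sketch of this hypothesis, with invented details: two concurrent ‘streams’ write into one sequential journal, and the retrospective readback is a single linear record, however concurrent the writers were.

    import threading, queue, time

    journal = queue.Queue()  # one sequential 'short-term memory'

    def stream(name, thoughts):
        """A concurrent process reporting into the shared journal."""
        for t in thoughts:
            journal.put("%s: %s" % (name, t))
            time.sleep(0.001)

    a = threading.Thread(target=stream, args=("visual", ["red", "moving"]))
    b = threading.Thread(target=stream, args=("verbal", ["a bus", "step back"]))
    a.start(); b.start(); a.join(); b.join()

    # However interleaved the writers were, memory presents one linear
    # stream; unity appears at readback, not in the processes themselves.
    while not journal.empty():
        print(journal.get())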

I imagine that these sensory nerves that report brain function tap into the brain at many points but leave other parts entirely unreported. The reported points may be at different levels of abstraction. We may thus be aware of memories for purposes other than those for which our memory ability evolved.

The first chapter of Greg Egan’s Diaspora is a stream-of-consciousness report of an AI, with no initial world model and little sensory input, deducing the existence of itself. It would seem that Egan’s AI lacked what I propose already exists naturally in our brains. I agree that a moderately bright AI could indeed deduce its own existence, much as Egan imagines, without this specialized wiring. With the extra wiring it would be quicker yet. Without special wiring one might have to deduce one’s thoughts from observed actions, or perhaps from perceived urges to action. Such deductions indeed seem necessary sometimes, but more often we know innately some of the principal precursors to these urges.

John McCarthy thinks that self-awareness is, and should be, wired into people, and that it should be wired into AIs.

It occurred to me recently that neural nets, such as those in our brains, and those that some propose should be emulated digitally to achieve AI, may keep the patterns they know in a form that cannot be decoded by anything other than themselves. We are not very good at conveying in words how to recognize some particular person, even though that ability would be adaptive; nor are we able to say how to ride a bicycle, even though our neural nets do these things very well. I suspect that such introspection is impossible because nature has failed to find a way to do it, which, I suspect, is because there is no way.
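
A small illustration of the opacity claim, with hand-picked weights rather than trained ones (W1, b1, W2, b2 are all invented here): the network below computes exclusive-or perfectly well, yet nothing in the bare numbers announces the rule they implement.

    import math

    # A tiny fixed network that computes XOR. The weights 'know' XOR,
    # but staring at the numbers does not reveal the rule.
    W1 = [[20, 20], [-20, -20]]
    b1 = [-10, 30]
    W2 = [20, 20]
    b2 = -30

    def sigmoid(x):
        return 1 / (1 + math.exp(-x))

    def net(a, b):
        hidden = [sigmoid(W1[i][0] * a + W1[i][1] * b + b1[i]) for i in range(2)]
        return sigmoid(W2[0] * hidden[0] + W2[1] * hidden[1] + b2)

    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(a, b, round(net(a, b)))   # prints the XOR truth table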

How this direct introspection relates to consciousness is, I think, more a matter of choice of words than something to be decided in either the laboratory or the armchair.

Descartes’ Dualism

Damasio’s Descartes’ Error claims that Descartes erred in dividing the human situation into two realms: the physical and the intellectual. Damasio makes many good points but never tries to explain why Descartes embraced this duality. I think it is because we have first-class access to our intellectual workings and, separately, first-class access to our physical world. (Sorry, Plato.) We did not evolve to sense the intermediate data-processing stages between the two. I know of no reason why we could not have done so. I suppose merely that it was not as useful.

What does it mean to have first-class access to your thought process; how could it be otherwise? The air-conditioning demon alluded to above probably has direct knowledge of the temperature of the rooms under its surveillance, but no direct access to the information that enabled the deductions it has made about those temperatures. It does not know why it believes what it believes. It probably has no concept matching “believe”. It knows that it is hot outside, but is unaware that it believes it is hot. If it is a classic (1980s) AI, there will be a list of such deductions stored in the memory of the computer, and this memory will be integral to the workings of the AI. The AI will most likely have no concept of a proposition with which to deal with these deductions. The list of deductions will not include propositions about the deductions. The AI deploys logic even without a theory of logic. John McCarthy thinks that successful AIs need this ability. Seeing as we have no successful general AIs today, he and I may both be right. You don’t need a general AI to control the air-conditioning, but you may do well to manage your deductions about temperatures, much as you manage a collection of temperature readings. A sketch of the distinction follows.
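
Here is that sketch, with invented names throughout: the demon’s flat list of object-level deductions, next to the meta-level entries it would need before it could be said to know what it believes.

    # Object level: what the demon believes about the world.
    deductions = [
        ("room_101_temp", 31),      # deduced from a sensor reading
        ("outside_is_hot", True),   # deduced from room temperatures
    ]

    # Meta level: propositions *about* deductions. Nothing in the list
    # above refers to that list; adding entries like this one is exactly
    # the step the classic demon never takes.
    meta_deductions = [
        ("believes", "outside_is_hot",
         {"from": ["room_101_temp"], "rule": "hot_rooms_imply_hot_outside"}),
    ]

    # The demon can act on its deductions...
    for fact, value in deductions:
        if fact == "outside_is_hot" and value:
            print("engage cooling")
    # ...but the question "why do you believe it is hot?" can only be
    # answered from the meta level, which it lacks.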

Today a rare few AI programs are able to report how they deduced things. Proof-generating programs do this. I don’t know exactly how more general expert systems achieve it, or whether they use mechanisms that may be a bit like consciousness. When I conversed with one, it was slightly unnerving to have the computer tell me why I should believe something.
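
A minimal guess at the flavor of such mechanisms (not how any particular prover or expert system actually works): a forward-chaining loop that records a justification for each conclusion, so that afterwards it can answer “why?”.

    # Rules and facts are invented for illustration.
    rules = [
        # (name, premises, conclusion)
        ("R1", ["hot_outside", "windows_closed"], "ac_needed"),
        ("R2", ["ac_needed", "power_available"], "run_compressor"),
    ]

    facts = {"hot_outside", "windows_closed", "power_available"}
    why = {}  # conclusion -> (rule name, premises used)

    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                why[conclusion] = (name, premises)
                changed = True

    def explain(fact):
        """Trace a conclusion back to given facts -- the 'unnerving' report."""
        if fact not in why:
            return "%s: given" % fact
        rule, premises = why[fact]
        lines = ["%s: by %s from %s" % (fact, rule, ", ".join(premises))]
        lines += [explain(p) for p in premises]
        return "\n".join(lines)

    print(explain("run_compressor"))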

Those who speculate on the evolution of intelligence often claim that its survival value is to understand the thoughts of other humans. If this is so, then another adaptive advantage of access to one’s own thoughts is clear: such an ability may be necessary to create arguments that convince others.

See my notes on “A Brief Tour of Human Consciousness”. This note is in line with Dretske’s earlier ideas.


Stuff to insert into above
How did consciousness actually arise?
There were several prerequisites:
A theory of causation.
This in turn breaks down into two areas:
  things that are caused by previous events.
  things that are caused by my action.
Presumably the first happened first.
The first can be subsumed under patterns: A causing B can be described by the pattern that A happens and then B happens.
What things I can cause to happen seems different to me.
As best I can tell, Gelernter admits he does not know what evidence would convince him, but he does not rule out new ideas and evidence that might.