This note is on the boundary of neuroanatomy, pattern matching, commercial OCR and AI. This is certainly not a new idea, but I have read a few works that perhaps should have mentioned it and did not. I presume someone will drop me a note pointing to ideas along these lines.
I hear that successive stages of the visual cortex have many signals going in the upstream direction (toward the eye) between stages. Indeed at some points the upstream bandwidth is greater than that of the downstream. Some commentators seem surprised. It seems perfectly clear to me what this is all about. I presume that it is analogous to a conscious activity we do when we read a poorly printed document. Our OCR hardware tells us that the second letter in a word is ‘e’ but we know there is no such word. We know, judging from the subject matter, that there is such a word where the letter is ‘c’ and we ‘see’ that the printed form could indeed be a ‘c’. Note the ambiguity of whether the ‘hardware’ is silicon or carbon. The logic is the same in either case. Incidentally this seems like an easy thing for commercial OCR to do, but perhaps such guesses should be passed on to the user for confirmation.
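The dictionary override described above can be sketched in a few lines. This is a minimal illustration, not a real OCR pipeline: the lexicon, the likelihood numbers, and the function names are all hypothetical. The "hardware" reports per-position character likelihoods; a higher level re-scores whole dictionary words against that evidence, so a word containing 'c' can beat the raw reading that preferred 'e'.

```python
def best_word(likelihoods, lexicon):
    """Score each lexicon word of the right length by the product of the
    per-position character likelihoods; characters the hardware did not
    propose get a small floor probability. Return the most plausible word."""
    def score(word):
        p = 1.0
        for ch, dist in zip(word, likelihoods):
            p *= dist.get(ch, 0.01)  # floor for unproposed characters
        return p
    candidates = [w for w in lexicon if len(w) == len(likelihoods)]
    return max(candidates, key=score)

# Illustrative evidence: the raw per-character reading would be "aet",
# which is no word; the second position prefers 'e' over 'c'.
likelihoods = [
    {"a": 0.9, "o": 0.1},
    {"e": 0.6, "c": 0.4},   # hardware says 'e', but no word fits
    {"t": 0.9, "l": 0.1},
]
lexicon = {"act", "ant", "art"}

print(best_word(likelihoods, lexicon))  # -> act
```

The dictionary prior overrides the locally preferred letter, which is exactly the 'e'-versus-'c' correction in the reading example.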
This transaction is sometimes subconscious and ‘automatic’, but sometimes we become aware of the Bayesian guess we have made.
The general meaning of the counterflow message is “The signal you just sent does not make sense. Might it be one of these? …”. It seems clear from introspection that more than two processing levels are sometimes involved, at least when we become aware of the process. I suspect the automatic version is also multi-level. I suspect that this sort of process goes on at levels of the visual cortex that we never become aware of. I am thinking of edge detectors.
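The counterflow exchange between two adjacent levels can be sketched as a round trip. Everything here is an illustrative assumption, not anatomy: the lower level sends its best reading upstream; if the upper level's model rejects it, the reply carries alternatives, and the lower level re-checks which alternative the raw evidence supports.

```python
def lower_level_read(evidence):
    # Greedy reading: the single most likely character at each position.
    return "".join(max(dist, key=dist.get) for dist in evidence)

def upper_level_check(word, lexicon):
    # The counterflow message: "The signal you just sent does not make
    # sense. Might it be one of these?" Returns None if the word is accepted.
    if word in lexicon:
        return None
    return [w for w in lexicon if len(w) == len(word)]

def lower_level_reconsider(evidence, alternatives):
    # Re-score the upstream suggestions against the raw evidence.
    def support(word):
        p = 1.0
        for ch, dist in zip(word, evidence):
            p *= dist.get(ch, 0.01)
        return p
    return max(alternatives, key=support)

evidence = [{"a": 0.9}, {"e": 0.6, "c": 0.4}, {"t": 0.9}]
lexicon = {"act", "ant", "art"}

guess = lower_level_read(evidence)        # raw reading: "aet"
alts = upper_level_check(guess, lexicon)  # rejected; alternatives offered
if alts:
    guess = lower_level_reconsider(evidence, alts)
print(guess)  # -> act
```

Chaining more such level pairs gives the multi-level version; each level only ever talks to its neighbors, which fits the introspective impression that the correction can propagate.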
This idea is not limited to visual processing. Perhaps the auditory counterpart is as significant or more so, especially in hearing a language. This input feedback must have evolved as soon as neural processing of environmental input data became stratified into levels.
This is not a mere biological quirk; I imagine it to be necessary in any powerful pattern recognizing mechanism. Here are similar feedback ideas in a context of prediction. Here is the idea in a software context.
In any case this special output processing evolved very recently, perhaps only with language.