Fun facts about neurons that impact decisions
Since neurons encode changes in stimulation (rather than absolute levels), absolute judgments on any dimension are much more difficult than relative judgments. This lies at the root of Ernst Weber’s 1834 observation that detectable increases in visual or auditory signal intensity are proportional to the starting value, i.e., need to be larger for larger starting values. (from post First Half of 2009 JDM Research Summary)
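Weber's observation is usually written as ΔI = k·I: the just-noticeable difference ΔI grows in proportion to the starting intensity I. A minimal sketch in Python (the Weber fraction 0.08 is an illustrative placeholder, not a measured constant for any particular sense):

```python
def just_noticeable_difference(intensity, weber_fraction=0.08):
    """Weber's law: the smallest detectable change in a stimulus is
    proportional to its starting intensity. The Weber fraction here
    is an illustrative placeholder, not a measured constant."""
    return weber_fraction * intensity

# The relative judgment stays constant, but the absolute detectable step grows:
print(just_noticeable_difference(10))   # small baseline, small detectable step
print(just_noticeable_difference(100))  # 10x baseline, a 10x larger step is needed
```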
There is a hierarchy of neurons and there are a lot of them. So it is quite likely that I have a neuron dedicated to Salma Hayek, etc.
Neural responses are noisy. As an example, a radiologist may have tumor-detecting neurons. These hypothetical tumor detectors give noisy, variable responses. After one glance at a scan of a healthy lung, they might fire 10 spikes per second; after a different glance at the same scan, under the same conditions, they might fire 40 spikes per second. (from post Signal Detection Theory)
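The tumor-detector example can be sketched as a toy signal-detection simulation. Everything here is hypothetical and illustrative: the Gaussian firing-rate noise, the mean rates, and the decision criterion are assumptions, not data:

```python
import random

random.seed(0)

def glance(mean_rate, noise_sd=10.0):
    """One glance: a noisy firing rate (spikes/s) from a hypothetical
    tumor-detecting population. Gaussian noise is an assumption."""
    return random.gauss(mean_rate, noise_sd)

def decide(rate, criterion=35.0):
    """Signal-detection decision: report 'tumor' when the firing
    rate exceeds the observer's criterion."""
    return rate > criterion

# Healthy scans drive the detectors weakly, tumors strongly, but the
# noisy distributions overlap, so any criterion trades hits for false alarms.
healthy = [glance(20.0) for _ in range(10_000)]
tumor = [glance(50.0) for _ in range(10_000)]

false_alarms = sum(decide(r) for r in healthy) / len(healthy)
hits = sum(decide(r) for r in tumor) / len(tumor)
print(f"hit rate {hits:.2f}, false-alarm rate {false_alarms:.2f}")
```

Lowering the criterion raises the hit rate but also the false-alarm rate; the overlap of the two noisy distributions, not the criterion, is what limits the radiologist.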
In Reading in the Brain, Dehaene introduces the idea of “neuronal recycling” whereby portions of our ventral visual system are turned over to reading and writing. He says that after centuries of trial and error, writing systems evolved to a form adapted to our brain circuits. (from post Toward a Culture of Neurons)
Neuronal Architecture for Decision Making
A. The processing network. This vast distributed parallel machinery for decision making accumulates multiple sources of evidence, is a major part of what we call intuition, and moves us to action. It includes functionally specialized processors or modular subsystems that “encapsulate” information relevant to their function. Sometimes this entire processing network is simply called intuition or System 1. The processing network has a dual architecture.
(Note: There is little discussion here of the neuronal architecture of what we consider expertise, in which processes that were once handled consciously move just below consciousness and are handled automatically or intuitively, at least for short periods. In the simplest terms, they become part of the processing network. The architecture considered here does not differentiate.)
An example is the model of predictive coding in the visual cortex. At the lowest level, there is some pattern of energetic stimulation, derived by sensory receptors from the ambient light patterns produced by the current visual scene. These signals are then processed via a multilevel cascade in which each level attempts to predict the activity at the level below it via backward connections, which allow the activity at one stage of processing to return as an input to the previous stage. So long as a level successfully predicts the activity below it, all is well and nothing further needs to happen. But where there is a mismatch, a “prediction error” occurs, and the ensuing (error-indicating) activity is sent to the higher level. This automatically adjusts probabilistic representations at the higher level so that top-down predictions cancel prediction errors at the lower level, yielding rapid perceptual inference. At the same time, prediction error is used to adjust the structure of the model so as to reduce any discrepancy next time around, yielding slower-timescale learning.
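A minimal sketch of this loop, assuming a single level predicting a steady scalar signal (a drastic simplification of the multilevel cascade, and my illustration rather than any published model): the forward flow carries only the prediction error, and that error both drives inference and slowly adjusts the model:

```python
def predictive_coding(inputs, learning_rate=0.1):
    """Toy one-level predictive coding loop (illustrative only):
    the higher level predicts the lower-level signal; only the
    prediction error flows forward, and the error is used to
    adjust the higher-level representation."""
    estimate = 0.0
    errors = []
    for signal in inputs:
        error = signal - estimate          # forward flow carries only error
        estimate += learning_rate * error  # top-down model adjusts to cancel it
        errors.append(error)
    return estimate, errors

# A steady stimulus: prediction errors shrink as the model learns, and
# once prediction matches input, nothing further needs to happen.
estimate, errors = predictive_coding([1.0] * 50)
print(round(estimate, 3), round(errors[0], 3), round(errors[-1], 3))
```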
What is most distinctive about this duplex architectural proposal is that it depicts the forward (up the hierarchy) flow of information as solely conveying error, while the backward flow achieves a balance between cancelling out and selective enhancement. This is made possible by the existence of “two functionally distinct subpopulations, encoding the conditional expectations of perceptual causes and the prediction error respectively”. Superficial pyramidal cells are depicted as playing the role of error units, passing prediction error forward, while deep pyramidal cells play the role of representation units, passing predictions (made on the basis of a complex generative model) downward.
“When a neuron or population is predicted by top-down inputs it will be much easier to drive than when it is not.” This is because the best overall fit between driving signal and expectations will often be found by inferring noise in the driving signal, and thus recognizing a stimulus as, for example, the letter m in the context of the word “mother”, even though the same bare stimulus, presented out of context or in most other contexts, would have been a better fit with the letter n. A unit normally responsive to the letter m might, under such circumstances, be successfully driven by an n-like stimulus. (from post the Prediction Machine)
Hohwy studies autism as a disorder in which the prediction hierarchy is stuck closer to the senses, so the model of the world is not corrected over a great number of repetitions and incoming signals are not slowed down enough. Hohwy offers an example: determining the mean of 20 numbers when they are given to us one at a time. If things are working correctly, we maintain a running mean that, updated stepwise, eventually lets us make the determination. In this example, an autistic person would instead confront each number on its own, without the running mean, which makes prediction difficult. (from post Prediction Error Minimization)
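Hohwy's running-mean example can be written out directly. Note that the incremental update has the same shape as prediction-error minimization: each new estimate is the old estimate plus a weighted error:

```python
def running_mean(numbers):
    """Hohwy's example: the numbers arrive one at a time, and a working
    hierarchy maintains a running mean that is updated stepwise rather
    than recomputed from scratch."""
    mean = 0.0
    for count, x in enumerate(numbers, start=1):
        mean += (x - mean) / count  # old estimate plus error, down-weighted over time
        yield mean

# Each new number only nudges the accumulated estimate; without the
# running mean, every number stands alone and prediction is difficult.
estimates = list(running_mean([4, 8, 6, 10, 2]))
print(estimates[-1])  # → 6.0
```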
B. A control mechanism that provides the capacity to link decisions into tactics, where the outputs of one decision become the inputs of the next. This capacity routes information, sets tasks, and sequences tasks. It provides flexibility at the cost of slow, serial speed. This is known as the global neuronal workspace (GNW), and it provides what we know as consciousness. It provides analysis and is sometimes called System 2.

The GNW consists of a distributed set of cortical neurons characterized by their ability to receive from, and send back to, homologous neurons in other cortical areas via horizontal projections through long-range excitatory axons. GNW neurons typically accumulate information through recurrent top–down/bottom–up loops, in a competitive manner, such that a single representation eventually achieves a global conscious status. Because GNW neurons are broadly distributed, there is no single brain center where conscious information is gathered and dispatched; rather, there is a brain-scale process of conscious synthesis, achieved when multiple processors converge to a coherent metastable state.

According to the GNW hypothesis, conscious access proceeds in two successive phases. In the first phase, lasting from ~100 to ~300 ms, the stimulus climbs the cortical hierarchy of processors in a primarily bottom–up and non-conscious manner. In the second phase, if the stimulus is selected for its adequacy to current goals and attention state, it is amplified in a top–down manner and becomes maintained by the sustained activity of a fraction of GNW neurons, the rest being inhibited. The entire workspace is globally interconnected in such a way that only one such conscious representation can be active at any given time. This all-or-none invasive property distinguishes it from the processing network, in which, due to local patterns of connections, several representations with different formats may coexist. (from post the Global Neuronal Workspace)
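The competitive, all-or-none character of workspace access can be illustrated with a toy winner-take-all accumulator. This is my sketch, not Dehaene's model; the gain, inhibition, and threshold values are arbitrary:

```python
def global_workspace(evidence, steps=200, inhibition=0.2, gain=0.1, threshold=5.0):
    """Toy winner-take-all competition (illustrative, not Dehaene's model):
    several candidate representations accumulate evidence while inhibiting
    one another; the first to reach threshold gains global access and the
    rest are suppressed, so only one representation 'wins' at a time."""
    activity = [0.0] * len(evidence)
    for _ in range(steps):
        total = sum(activity)
        for i, e in enumerate(evidence):
            # bottom-up input minus inhibition from all competitors
            activity[i] += gain * e - inhibition * (total - activity[i])
            activity[i] = max(activity[i], 0.0)  # firing rates stay non-negative
        for i, a in enumerate(activity):
            if a >= threshold:
                return i  # this representation achieves global access
    return None

# Three candidate representations with different evidence strengths:
print(global_workspace([1.0, 0.6, 0.3]))  # → 0 (strongest representation wins)
```

The weaker candidates are not merely outpaced: once the leader dominates, the mutual inhibition drives their activity to zero, mirroring the all-or-none property described above.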
The idea is that the act of globally broadcasting the information makes us aware of it. Koch agrees that conscious information is globally accessible, but holds that such access is limited and that so-called zombie agents keep their knowledge to themselves (from post Consciousness, Confessions of a Romantic Reductionist). Koch uses the term “zombie agents” for parts of the processing network whose contents never reach the global neuronal workspace.
Experiments indicate that the impact of later evidence is reduced once more evidence has accrued, but only for highly visible information. Difficult-to-perceive information contributes equally to the decision throughout. Thus consciousness may play a role in decision making by biasing the accumulation of new evidence. Since Salma Hayek has her own neuron, new actresses, who do not, may lose out. (from post Toward a Culture of Neurons)
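That asymmetry can be mirrored in a toy accumulator in which later samples are discounted only when the evidence is highly visible. The 1/n discount schedule is an arbitrary illustrative choice, not the one used in the experiments:

```python
def accumulate(samples, visible=True):
    """Toy evidence accumulator (illustrative only): for highly visible
    evidence, the weight of each new sample shrinks as evidence accrues;
    barely perceptible evidence keeps full weight throughout."""
    total = 0.0
    for n, s in enumerate(samples, start=1):
        weight = 1.0 / n if visible else 1.0  # later evidence discounted only when visible
        total += weight * s
    return total

late_surprise = [1.0, 1.0, 1.0, -3.0]  # strong contrary evidence arriving last
print(accumulate(late_surprise, visible=True))   # late evidence is discounted
print(accumulate(late_surprise, visible=False))  # late evidence counts fully
```

With visible evidence the early samples dominate and the late reversal barely moves the decision; with hard-to-perceive evidence the same reversal cancels the earlier accumulation entirely.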