This post is based on the paper “The discovery and comparison of symbolic magnitudes,” by Chen et al., published in *Cognitive Psychology* 71 (2014) 27–54. This is a little different from one of Brunswik’s ideas: how good we are at determining sizes in the environment. Those might be called perceptual magnitudes. Symbolic magnitudes, by contrast, seem to be drawn from memory and the immediate context.

We have sophisticated abilities to learn and make judgments based on relative magnitude. Magnitude comparisons are critical in making choices (e.g., which of two products is more desirable?), making social evaluations (e.g., which person is friendlier?), and in many other forms of appraisal (e.g., who can run faster, this bear or me?). In the paper, the authors seek to explain where subjective magnitudes come from.

For a few types of symbolic comparisons, such as the numerical magnitudes of digits, it may indeed be the case that each object has a pre-stored magnitude in long-term memory. However, the notion that magnitudes are pre-stored is implausible for the wide range of dimensions on which people can make symbolic comparisons, especially in the interpersonal and social realm (e.g., intelligence, friendliness, religiosity, conservatism). Magnitudes are more likely derived, context-dependent features that are computed as needed in response to a query.

The authors have created a model (Bayesian Analogy with Relational Transformations, or BART, and its little brother BARTlet) and then run simulations using large existing data sets to test it. The model is based at its roots on “Relative Judgement: A phenomenon and a theory,” written by David F. Marks and published in *Perception & Psychophysics*, 1972, Vol. 11(9). That paper is much less mathematical and more understandable for me. Chen et al. use what is called a support vector machine, a machine-learning method that sorts large data sets into categories by finding a boundary between them. The authors do not suggest that the model is psychologically realistic. The simulation results confirm that concepts related to symbolic magnitudes can be discovered by inductive learning, rather than simply assumed to be directly available in long-term memory. This implies that magnitudes will be represented as probability distributions. The probabilistic framework agrees with the intuition that symbolic magnitudes (e.g., the size of a kangaroo, the intelligence of a goat) are ‘‘fuzzy’’ rather than firm, and thus judgments related to these attributes are susceptible to the influence of context (post Fuzzy Trace Theory-Experts and Future Direction).

The BARTlet model is a reference-point model (post How Do We Convert a Number into a Finger Trajectory). The intuitive idea is that when judging (for example) whether an elephant is larger than a hippo, the subjective magnitude difference is more discriminable than when judging whether an elephant is smaller than a hippo. The key idea is that discriminability changes because magnitude variances are altered according to distance from a reference point. The model is set up to explain the following observations.

**Distance Effect**. The ease of judgments (indexed by accuracy and/or reaction time) increases with the magnitude difference between the objects being compared.
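The distance effect can be illustrated with a small sketch (mine, not the authors’ code). If each object’s magnitude is represented as a Gaussian and a comparison amounts to asking whether a sample from one distribution exceeds a sample from the other, then choice accuracy rises with the difference between the means:

```python
# Sketch: with Gaussian magnitude representations, the probability of
# correctly picking the larger object grows with the difference between
# the means -- the distance effect.
import math

def p_correct(mean_a, mean_b, var_a=1.0, var_b=1.0):
    """P(a sample from A exceeds a sample from B), A assumed truly larger."""
    d = (mean_a - mean_b) / math.sqrt(var_a + var_b)
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2)))

for diff in (0.5, 1.0, 2.0, 4.0):
    print(f"mean difference {diff}: accuracy {p_correct(diff, 0.0):.3f}")
```

Accuracy approaches chance (0.5) as the means converge and approaches 1.0 as they separate, mirroring the accuracy and reaction-time patterns the effect describes.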

**Semantic Congruity Effect**. When judgments are made using polar concepts such as ‘‘choose brighter’’ versus ‘‘choose dimmer’’ or ‘‘choose better’’ versus ‘‘choose worse’’, it is easier to judge which object is greater for objects with high values on the dimension, whereas for objects with low values it is relatively easier to judge which is lesser. This is part of the larger class of framing effects that affect decision making (post Fuzzy Trace Theory-Risk Taking Framing Effects).

**Markedness Effect**. For pairs of polar adjectives, one (the ‘‘unmarked’’ form) is easier to process overall than the other. Marks uses the example of “probable” and “improbable”. First, “improbable” is marked because a speaker who asks “How improbable is X?” has already judged X to be improbable (has marked it). Second, the unmarked member of a pair is used to name the full scale: it is the probability scale, not the improbability scale.

Reference points established by the form of the question and the range of the presented stimuli can be viewed as cues that establish attention bands. The hypothesis that attention operates in part by modulating variability in an internal representation is also consistent with findings concerning visual detection and discrimination tasks.

Fig. 2 (excerpted from the paper) sketches different levels of representation that may be involved in making magnitude comparisons and reasoning with comparative relations.

A key idea is that learning can be bootstrapped (posts Bootstrapping and Dialectical Bootstrapping) by incorporating empirical priors—a ‘‘favorable’’ initial knowledge state derived from some related but simpler learning task. The idea of bootstrapping is based on a mathematical truism. A subjective quantitative estimate can be expressed as an additive function of three components: the truth (the true value of the estimated quantity), random error (random fluctuations in the judge’s performance), and systematic error (i.e., the judge’s systematic tendency to over- or underestimate the true value). Averaging estimates increases accuracy in two ways: It cancels out random error, and it can reduce systematic error.
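The averaging truism above can be demonstrated in a few lines. The simulation below (my own illustration, with made-up numbers) decomposes each estimate into truth, systematic bias, and random noise, and shows that averaging many trials cancels the random error while a single judge’s bias survives; averaging judges whose biases bracket the truth also shrinks the systematic error:

```python
# Estimate = truth + systematic bias + random error.
import random

random.seed(0)
TRUTH = 100.0  # hypothetical true value being estimated

def estimate(bias, noise_sd=10.0):
    """One noisy, biased subjective estimate of TRUTH."""
    return TRUTH + bias + random.gauss(0.0, noise_sd)

# One judge, many trials: random error averages out, but the bias remains.
one_judge = sum(estimate(bias=5.0) for _ in range(10_000)) / 10_000

# Two judges with opposite biases: averaging also cancels systematic error.
two_judges = sum((estimate(5.0) + estimate(-5.0)) / 2
                 for _ in range(10_000)) / 10_000

print(f"single-judge average: {one_judge:.2f} (bias survives)")
print(f"two-judge average:    {two_judges:.2f} (biases cancel)")
```

The single-judge average settles near 105 (truth plus bias), while the two-judge average settles near the true value of 100.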

Specifically, BARTlet uses the following procedure to answer a comparative query such as, ‘‘Which is larger, an elephant or a giraffe?’’ First, the model establishes a reference point based on the comparative in the question and all presented stimuli (i.e., the context). Because the comparative in this question is larger, the reference point is taken to be the object among the presented stimuli with the highest mean magnitude on the size dimension. (If the comparative were instead slower, the reference point would be the object with the lowest mean magnitude on the speed dimension.) Based on the selected reference point, the model computes the maximum possible distance from the reference point within the current context (i.e., the subjective range on the relevant dimension). This value is simply the absolute difference in mean magnitudes between the reference point and the opposite-extreme reference point. For larger, the opposite-extreme reference point is the object among the presented stimuli with the lowest mean size magnitude. The model then computes the means and unscaled variances of the magnitudes of the two objects being compared; here, the mean and variance of the size magnitude are computed for both the elephant and the giraffe. Next, for each object being compared, the model computes the distance between that object and the reference point as a proportion of the maximum possible distance: the absolute difference between the mean magnitudes of the object and the reference point, divided by the maximum possible distance from the reference point. Magnitudes are assumed to be Gaussian distributions whose variances increase exponentially with distance from the reference point, more rapidly for marked relations than for unmarked ones. BARTlet then uses an implicit comparison operation, which can be characterized in terms of signal detection theory (post Signal Detection), to assess which of the two objects is larger. No explicit larger relation is needed for BARTlet to choose the larger of two objects.
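The steps above can be sketched in code. This is my own simplification of the procedure as described, not the authors’ implementation; the context dictionary, the variance constants, and the exponential growth rates are illustrative assumptions:

```python
# Minimal sketch of BARTlet's comparison procedure (simplified).
import math
import random

def bartlet_choose(context, a, b, comparative="larger", marked=False, k=1.0):
    """context: dict of name -> mean magnitude. Returns the chosen name."""
    means = context
    # 1. Reference point: the extreme object implied by the comparative.
    if comparative == "larger":              # unmarked pole
        ref = max(means, key=means.get)
        opposite = min(means, key=means.get)
    else:                                    # marked pole, e.g. "smaller"
        ref = min(means, key=means.get)
        opposite = max(means, key=means.get)
    # 2. Subjective range: distance from reference to opposite extreme.
    max_dist = abs(means[ref] - means[opposite]) or 1.0
    # 3. Variance grows exponentially with proportional distance from the
    #    reference point, faster for marked relations (rate is illustrative).
    rate = 2.0 if marked else 1.0
    def variance(name):
        d = abs(means[name] - means[ref]) / max_dist
        return k * math.exp(rate * d)
    # 4. Implicit signal-detection comparison: sample each magnitude and
    #    pick the larger (or smaller) sample -- no explicit "larger" relation.
    sample = {x: random.gauss(means[x], math.sqrt(variance(x))) for x in (a, b)}
    pick = max if comparative == "larger" else min
    return pick((a, b), key=lambda x: sample[x])

# Hypothetical context of size magnitudes.
animals = {"elephant": 9.0, "giraffe": 7.5, "hippo": 7.0, "dog": 3.0}
wins = sum(bartlet_choose(animals, "elephant", "giraffe") == "elephant"
           for _ in range(1000))
print(f"elephant chosen over giraffe in {wins}/1000 trials")
```

Because the choice is a noisy sampling process, the model is usually but not always correct, which is how it captures graded accuracy and the distance effect.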

According to Chen et al., the fact that magnitudes are involved in answering many different questions and can be learned by multiple routes explains why evolution has apparently placed a premium on the creation of specialized neural hardware for manipulating such representations. The authors note, however, that unidimensional magnitude representations have their limitations. One limitation is that the neural system for approximate magnitude acts as a bottleneck. Precisely because any dimension can be coded in terms of a single internal number line, it is very difficult to code distinct orderings on separate dimensions for a single set of objects, a bottleneck that contributes to the ‘‘halo effect’’. In addition, the validity of a one-dimensional magnitude representation is inherently limited, as is apparent whenever we try to reduce a complex multidimensional situation to a single number that serves as a ‘‘score’’ (e.g., GPA as a summary of a student’s academic ability, dollar earnings as a summary of a year of one’s life).