Sunday, March 10, 2019

IQ of biracial children and adults

First snow in Minnesota (c. 1895), Robert Koehler. Biracial children have IQ scores halfway between those of white children and black children, even when they are conceived by white single mothers and adopted into middle-class white families in their first year of life.

You may have heard about the Minnesota Transracial Adoption Study. It was a longitudinal study of black, biracial, and white children adopted into white middle-class Minnesotan families, as well as the biological children of the same families (Levin, 1994; Lynn, 1994; Scarr and Weinberg, 1976; Weinberg, Scarr, and Waldman, 1992). IQ was measured when the adopted children were on average 7 years old and the biological children on average 10 years old. They were tested again ten years later. Between the two tests, all four groups declined in mean IQ. On both tests, however, the differences among the four groups remained unchanged, particularly the 15-point gap between black and white adoptees. 

The biracial children remained halfway between the black and white adoptees. Could this be due to the parental environment being likewise half and half? Well, no. All of them were raised by white parents, and they were adopted at an early age: 19 months on average for the white adoptees, 9 months for the biracial adoptees, and 32 months for the black adoptees. The last figure is emphasized by Scarr and Weinberg (1976) as a reason for the IQ gap between the black and white adoptees. 

Fine, but what about the IQ gap between the biracial and white adoptees? Almost all of the biracial children were adopted at a young age and born to white single mothers who had completed high school. From conception to adulthood they developed in a "white" environment. If anything, the white adoptees should have encountered more developmental problems because they were adopted at an older age.

Could color prejudice be a reason? Perhaps the biracial children were unconsciously treated worse than the white children. By the same reasoning, they may have been treated better than the black children. We can test the second half of this hypothesis. Twelve of the biracial children were wrongly thought by their adoptive parents to have two black parents. Nonetheless, they scored on average at the same level as the biracial children correctly classified by their adoptive parents (Scarr and Weinberg 1976).

The Eyferth study

Another study found no difference in IQ between white and biracial children. This was a study of children fathered by American soldiers in Germany and then raised by German mothers (Eyferth 1961). It found no significant difference in IQ between children with white fathers and children with black fathers. Both groups had a mean IQ of about 97.

These findings were criticized by Rushton and Jensen (2005) on three grounds:

1. The children were still young when tested. One third were between 5 and 10 years old and two thirds between 10 and 13. Since IQ is strongly influenced by family environment before puberty, a much larger sample would be needed to find a significant difference between the two groups.

2. Between 20 and 25% of the “black” fathers were actually North African.

3. At the time of the study, the US Army screened out low IQ applicants with its preinduction Army General Classification Test. The rejection rate was about 30% for African Americans and 3% for European Americans. African American soldiers are thus a biased sample of the African American population.

Another factor is that the capacity for intelligence seems to be more malleable in children than in adults. We see this with the Minnesota Transracial Adoption Study. In the enriched learning environment of middle-class Minnesota families, all of the children showed impressive IQ scores at 7 years of age. By 17 years of age, however, this benefit had largely washed out:

                     Age 7   Age 17
Black children         97       89
Biracial children     109       99
White children        112      106

Does intelligence really decline with age because of wear and tear on the brain? Perhaps we’re programmed to be most intelligent in childhood. That’s when we have to familiarize ourselves with the world. The capacity for intelligence may then be gradually deactivated as we get older because it’s less necessary.

This deactivation may follow different trajectories in different human groups. In early Homo sapiens, it may have begun not long after puberty. As ancestral humans made the transition to farming, sedentary living, and increasingly complex societies, this learning capacity became more necessary in adulthood, with the result that natural selection favored those individuals who retained it at older ages. This gene-culture coevolution would have gone farther in some populations than in others.

The Fuerst et al. study

A recent study led by John Fuerst has confirmed the intermediate IQ of biracial individuals, this time in adults. The research team used the General Social Survey, which includes not only ethnic, sociological, and demographic data but also a measure of intelligence (WordSum):

The relationship between biracial status, color, and crystallized intelligence was examined in a nationally representative sample of adult Black and White Americans. First, it was found that self-identifying biracial individuals, who were found to be intermediate in color and in self-reported ancestry, had intermediate levels of crystallized intelligence relative to self-identifying White (mostly European ancestry) and Black (mostly sub-Saharan African ancestry) Americans. The results were transformed to an IQ scale: White (M = 100.00, N = 7569), primarily White-biracial (M = 96.07, N = 43), primarily Black-biracial (M = 94.14, N = 50), and Black (M = 89.81, N = 1381).
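The quoted means are expressed on an IQ metric. The paper's exact scaling procedure is not described above, but a common convention is to standardize raw test scores against the reference (White) group's mean and standard deviation, then rescale to mean 100 and SD 15. The sketch below illustrates that convention; the raw WordSum figures (6.2, 2.1, 4.8) are invented for illustration and are not from the study.

```python
# Sketch of one common way to put group means on an IQ scale
# (mean 100, SD 15), anchored on a reference group. This is an
# assumed convention, not necessarily the procedure Fuerst et al. used.

def to_iq_scale(group_mean, ref_mean, ref_sd):
    """Convert a raw test mean to an IQ metric (mean 100, SD 15)
    standardized against a reference group."""
    z = (group_mean - ref_mean) / ref_sd
    return 100 + 15 * z

# Hypothetical raw WordSum statistics, for illustration only.
white_raw_mean, white_raw_sd = 6.2, 2.1
black_raw_mean = 4.8

print(round(to_iq_scale(black_raw_mean, white_raw_mean, white_raw_sd), 2))
```

With these invented inputs, a group scoring 1.4 raw points below the reference lands about two-thirds of a standard deviation (10 IQ points) lower.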

The same study also found a significant negative correlation among African Americans between facial color and WordSum scores. The correlation was low (r = -0.102), but it would be difficult to get a higher correlation because of the measures used. Self-reported skin color correlates imperfectly with actual skin color, which in turn correlates imperfectly with European admixture. WordSum likewise correlates imperfectly with IQ (r = 0.71). On a final note, the correlation between facial color and WordSum scores was not explained by region of residence, interviewer’s race, parental socioeconomic status, or individual educational attainment.
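The point about imperfect measures can be made concrete with a classical correction for attenuation. Under a simple linear-chain model, the observed correlation is roughly the true correlation multiplied by the validities of the two proxies, so dividing by those validities recovers a rough estimate of the true value. Only the WordSum-IQ figure (0.71) comes from the text; the 0.6 validity assumed for self-reported color is a hypothetical stand-in, so the result is illustrative, not the paper's own calculation.

```python
# Sketch: correcting an observed correlation for two imperfect proxies.
# Assumed model: r_obs ≈ r_true * validity_x * validity_y, so
# r_true ≈ r_obs / (validity_x * validity_y).

def disattenuate(r_obs, validity_x, validity_y):
    """Estimate the true correlation given the validities of both proxies."""
    return r_obs / (validity_x * validity_y)

r_obs = -0.102        # facial color x WordSum, from the study
v_wordsum = 0.71      # WordSum as a proxy for IQ, from the text
v_color = 0.6         # self-reported color as a proxy for admixture (hypothetical)

print(round(disattenuate(r_obs, v_wordsum, v_color), 3))
```

Even with a modest assumed validity for self-reported color, the implied true correlation is roughly twice the observed one, which is the author's point: the low observed r partly reflects noisy measures.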


Eyferth, K. (1961). Leistungen verschiedener Gruppen von Besatzungskindern im Hamburg-Wechsler Intelligenztest für Kinder (HAWIK). Archiv für die gesamte Psychologie 113: 222-241.

Fuerst, J.G.R., R. Lynn, and E.O.W. Kirkegaard. (2019). The Effect of Biracial Status and Color on Crystallized Intelligence in the U.S.-Born African-European American Population. Psych 1(1): 44-54.

Levin, M. (1994). Comment on the Minnesota transracial adoption study. Intelligence 19: 13-20.

Lynn, R. (1994). Some reinterpretations of the Minnesota Transracial Adoption Study. Intelligence 19: 21-27.

Rushton, J.P. and A.R. Jensen. (2005). Thirty years of research on race differences in cognitive ability. Psychology, Public Policy, and Law 11: 235-294.

Scarr, S., and Weinberg, R.A. (1976). IQ test performance of Black children adopted by White families. American Psychologist 31: 726-739.

Weinberg, R.A., Scarr, S., and Waldman, I.D. (1992). The Minnesota Transracial Adoption Study: A follow-up of IQ test performance at adolescence. Intelligence 16: 117-135.

Monday, February 25, 2019

Alzheimer's and African Americans

A village elder of Mogode, Cameroon (Wikicommons - W.E.A. van Beek). African Americans are more than twice as likely to develop Alzheimer's. They also more often have an allele that increases the risk of Alzheimer's in Western societies but not in sub-Saharan Africa. Why is this allele adaptive there but not here?

Alzheimer's disease (AD) is unusually common among African Americans. Demirovic et al. (2003) found it to be almost three times more frequent among African American men than among white non-Hispanic men (14.4% vs. 5.4%). Tang et al. (2001) found it to be twice as common among African American and Caribbean Hispanic individuals. On the other hand, it is significantly less common among Yoruba in Nigeria than among age-matched African Americans (Hendrie et al. 2001).

This past year, new light has been shed on these differences. Weuve et al. (2018) analyzed data from ten thousand participants 65 years old and over (64% black, 36% white) who had been followed for up to 18 years. Compared to previous studies, this one had three times as many dementia assessments and dementia cases. It also had a wider range of data: tests of cognitive performance, specific diagnosis of Alzheimer's (as opposed to dementia in general), educational and socioeconomic data, and even genetic data—specifically whether the participant had the APOE e4 allele, a major risk factor for Alzheimer's.

The results confirmed previous findings ... with a few surprises.


Alzheimer's was diagnosed in 19.9% of the African American participants, a proportion more than twice that of the Euro American participants (8.2%).

Cognitive performance and cognitive decline

Cognitive performance was lower in the African American participants. "The difference in global cognitive score, -0.83 standard units (95% confidence interval [CI], -0.88 to -0.78), was equivalent to the difference in scores between participants who were 12 years apart in age at baseline."

On the other hand, both groups had the same rate of cognitive decline with age. In fact, executive function deteriorated more slowly in African Americans. The authors suggest that the higher rate of dementia in elderly African Americans is due to their cognitive decline beginning at a lower level:

[…] on average, white individuals have "farther to fall" cognitively than black individuals before reaching the functional threshold of clinical dementia, so that even if both groups have the same rate of cognitive decline, blacks have poorer cognitive function and disproportionately develop dementia. (Weuve et al. 2018)

Interaction with education

Differences in educational attainment, i.e., years of education, explained about a third of the cognitive difference between the two groups of participants:

Educational attainment, as measured by years of education, appeared to mediate a substantial fraction but not the totality of the racial differences in baseline cognitive score and AD risk (Table 5). Under the hypothetical scenario in which education was "controlled" such that each black participant's educational level took on the level it would have been had the participant been white, all covariates being equal, black participants' baseline global cognitive scores were an average of 0.45 standard units lower than whites' scores (95% CI, -0.49 to -0.41), a difference smaller than without controlling years of education (-0.69; Table 5), and translating to about 35% of the total effect of race on cognitive performance mediated through years of education. (Weuve et al. 2018)
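The "about 35%" figure in the quoted passage follows directly from the two coefficients it reports, as a quick check shows:

```python
# Checking the mediation arithmetic in Weuve et al.'s estimate:
# total race effect on baseline cognition  = -0.69 standard units,
# direct effect after equalizing education = -0.45 standard units.
# Proportion mediated through education = (total - direct) / total.

total_effect = -0.69
direct_effect = -0.45

proportion_mediated = (total_effect - direct_effect) / total_effect
print(round(proportion_mediated, 2))  # → 0.35
```

The remaining -0.45 standard units is the racial difference that education does not account for.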

While educational attainment explains 35% of the cognitive difference between African Americans and Euro Americans, we should keep in mind that educational attainment itself is influenced by genetic factors. These genetic factors vary among African Americans, just as they vary between African Americans and other human populations.

APOE e4 allele

This allele was more common in the African American participants. It contributed to their higher risk of Alzheimer's but not to their lower cognitive score.

Black participants were more likely than white participants to carry an APOE e4 allele (37% vs 26%; Table 1). In analyses restricted to participants with APOE data, racial differences in baseline scores or cognitive decline did not vary by e4 carriership (all Pinteraction > 0.16). Furthermore, adjustment for e4 carriership did not materially change estimated racial differences in baseline performance or cognitive decline (eTable 3).

By contrast, the association between race and AD risk varied markedly by APOE e4 carriership (Pinteraction = 0.05; Table 4). Among non-carriers, blacks' AD risk was 2.32 times that of whites' (95% CI, 1.50-3.58), but this association was comparably negligible among e4 carriers (RR, 1.09; 95% CI, 0.60-1.97). (Weuve et al. 2018)


This study offers two different explanations: why African Americans have a higher incidence of Alzheimer's and why they have a higher incidence of dementia in general. Two different explanations are needed because Alzheimer's seems to be qualitatively different from other forms of dementia.

First, African Americans have a higher incidence of Alzheimer’s because they have a higher incidence of the APOE e4 allele, a risk factor for Alzheimer's. They may also have other alleles, still unidentified, that similarly favor development of Alzheimer's. This would explain why, if we look at participants without APOE e4, Alzheimer's was still twice as common among African Americans as it was among Euro Americans. On the other hand, the two groups had virtually the same incidence of Alzheimer's if we look at participants with APOE e4.

Second, African Americans have a higher incidence of dementia in general because they have a lower cognitive reserve. When cognitive performance begins to deteriorate in old age, the ensuing decline starts from a lower level and reaches the threshold of dementia sooner. The rate of decline is nonetheless the same in both African Americans and Euro Americans. While this explanation could apply to most forms of dementia, it is hard to see how it applies to Alzheimer's. Euro Americans have a higher cognitive reserve, and yet the APOE e4 allele is just as likely to produce Alzheimer's in them as in African Americans.

Why does the APOE e4 allele exist? It must have some adaptive value, given its incidence of 37% in African Americans and 26% in Euro Americans. African Americans also seem to have other alleles, not yet identified, that likewise increase the risk of Alzheimer’s. Those alleles, too, must have some adaptive value.

This value seems to exist in sub-Saharan Africa but not in North America. When Hendrie et al. (2001) examined Yoruba living in Nigeria, they found no relationship between APOE e4 and Alzheimer’s or dementia in general:

In the Yoruba, we have found no significant association between the possession of the e4 allele and dementia or AD in either the heterozygous or homozygous states. As the frequencies of the 3 major APOE alleles are almost identical in the 2 populations, this variation in the strength of the association between e4 and AD may account for some of the differences in incidence rates between the populations, although it is not likely to explain all of it. It also raises the possibility that some other genetic or environmental factor affects the association of the e4 allele to AD and reduces incidence rates for dementia and AD in Yoruba. (Hendrie et al. 2001)

There has been speculation, notably by Greg Cochran, that Alzheimer’s is caused by apoptosis. Because of the blood-brain barrier, antibodies cannot enter the brain to fight infection, so neural tissue is more dependent on other means of defense, like apoptosis. Such a means of defense may be more important in sub-Saharan Africa because the environment carries a higher pathogen load.

If we pursue this hypothesis, APOE e4 and other alleles may enable neurons to self-destruct as a means to contain the spread of pathogens in the brain. In an environment with a lower pathogen load, like North America, this means of defense would have little legitimate work to do. The result would be autoimmune disorders where apoptosis is triggered in neural tissue for no good reason.


Chin, A.L., S. Negash, and R. Hamilton. (2011). Diversity and disparity in dementia: the impact of ethnoracial differences in Alzheimer disease. Alzheimer disease and associated disorders. 25(3):187-195.

Cochran, G. (2018). Alzheimers or did I already say that? West Hunter, July 14

Demirovic, J., R. Prineas, D. Loewenstein, et al. (2003). Prevalence of dementia in three ethnic groups: the South Florida program on aging and health. Ann Epidemiol. 13:472-478.

Hendrie, H.C., A. Ogunniyi, K.S. Hall, et al. (2001). Incidence of dementia and Alzheimer disease in 2 communities: Yoruba residing in Ibadan, Nigeria, and African Americans residing in Indianapolis, Indiana. JAMA. 285:739-47.

Tang, M.X., P. Cross, H. Andrews, et al. (2001). Incidence of AD in African-Americans, Caribbean Hispanics, and Caucasians in northern Manhattan. Neurology 56:49-56.

Weuve, J., L.L. Barnes, C.F. Mendes de Leon, K. Rajan, T. Beck, N.T. Aggarwal, L.E. Hebert, D.A. Bennett, R.S. Wilson, and D.A. Evans. (2018). Cognitive Aging in Black and White Americans: Cognition, Cognitive Decline, and Incidence of Alzheimer Disease Dementia. Epidemiology 29(1): 151-159. 

Thursday, February 14, 2019

The Nurture of Nature

Fleet Street, watercolor by Ernest George (1839-1922). In England, middle-class families used to be so large that they overshot their niche and flooded the ranks of the lower class.

Until the last ten years it was widely believed that cultural evolution had taken over from genetic evolution in our species. When farming replaced hunting and gathering, something fundamentally changed in the relationship between us and our surroundings. We no longer had to change genetically to fit our environment. Instead, we could change our environment to make it fit us.

That view has been challenged by a research team led by anthropologist John Hawks. They found that genetic evolution actually speeded up 10,000 years ago, when hunting and gathering gave way to farming. In fact, it speeded up over a hundred-fold. Why? Humans were now adapting not only to slow-changing natural environments but also to faster-changing cultural environments, things like urban living, belief systems, and the State monopoly on violence. Far from slowing down, the pace of genetic change actually had to accelerate (Hawks et al. 2007).

These findings received a broader public hearing with the publication of The 10,000 Year Explosion: How Civilization Accelerated Human Evolution. More recently, they have been discussed in a review article by historian John Brooke and anthropologist Clark Spencer Larsen:

Are we essentially the same physical and biological beings as Ice Age hunter-gatherers or the early farming peoples of the warming early Holocene? How has the human body changed in response to nine or ten millennia of dramatic dietary change, a few centuries of public health interventions, and a few decades of toxic environmental exposures? In short, how has history shaped biology? 

[...] But very clearly human evolution did not stop with the rise of modern humanity in the Middle to Late Paleolithic. Climatic forces, dietary shifts, disease exposures, and perhaps the wider stresses and challenges of hierarchical, literate state societies appear to have been exerting selective pressure on human genetics.

In short, we have become participants in our evolution: we create more and more of our surroundings, and these surroundings influence the way we evolve. Culture is not simply a tool we use to control and direct our environment. It is a part of our environment, the most important part, and as such it now controls and directs us.

Brooke and Larsen nonetheless feel attached to older ways of evolutionary thinking, particularly the "essentialism" of pre-Darwinian biology. We see this when they assert that “the essential modeling of the genetic code ended sometime in the Paleolithic." Actually, there was no point in time when our ancestors became essentially "human"—whatever that means. A Paleolithic human 100,000 years ago would have had less in common with you or me than with someone living 100,000 years earlier or even a million years earlier. Human evolution has been accelerating: the changes over the past 10,000 years exceed those over the previous 100,000 years, which in turn exceed those over the previous million.

Clark’s model

Brooke and Larsen discuss Gregory Clark's work on English demography. Clark found that the English middle class expanded steadily from the twelfth century onward, its descendants not only growing in number but also replacing the lower classes through downward mobility. By the 1800s, its lineages accounted for most of the English population. Parallel to this demographic expansion, English society shifted toward "middle class" culture and behavior: thrift, pleasure deferment, increased future orientation, and unwillingness to use violence to settle personal disputes (Clark, 2007). 

Clark’s work is criticized by Brooke and Larsen on two grounds:

[... ] there is no biological evidence to support an argument for English industrial transformation via natural selection. More importantly, this was a process that—hypothetically—had been at work around the world since the launch of social stratification in the Late Neolithic and the subsequent rise of state societies.

How valid are these criticisms? Let me deal with each of them.

Is social stratification the only precondition of Clark’s model?

First, it is true that many societies around the world are socially stratified, but social stratification is only one of the preconditions of Clark’s model. There are two others:

1. Differences in natural increase between social classes, with higher natural increase being associated with higher social status.

2. Porous class boundaries. The demographic surplus of the middle and upper classes must be free to move down into and replace the lower classes.

These preconditions are not met in most socially stratified societies. Brooke and Larsen are simply wrong when they say: "The poor died with few or no children everywhere in the world, and across vast stretches of human history." In reality, there have been many societies where fewer children were born on average to upper-class families than to lower-class families. A notable example is that of the Roman Empire, particularly during its last few centuries: upper-class Romans widely practiced abortion and contraception (Hopkins 1965). A similar situation seems to have prevailed in the Ottoman Empire. By the end of the eighteenth century, Turks were declining demographically in relation to their subject peoples, perhaps because they tended to congregate in towns and were more vulnerable to the ravages of plague and other diseases (Jelavich and Jelavich, 1977, pp. 6-7).

Nor are class boundaries always porous. Social classes often become endogamous castes. This can happen when a social class specializes in "unclean" work, like butchery, preparation of corpses for burial, etc. This was the case with the Burakumin of Japan, the Paekchong of Korea, and the Cagots of France (Frost 2014). Because of their monopoly over a despised occupation, they were free from outside competition and thus had the resources to get married and have enough children to replace themselves. This was not the case with the English lower classes, who faced competition from “surplus” middle-class individuals between the twelfth and nineteenth centuries. Such downward mobility is impossible in caste societies, where “surplus” higher-caste individuals are expected to remain unmarried until they can find an appropriate social situation. 

A caste society thus tends to be evolutionarily stagnant. Lower castes in particular tend to preserve mental and behavioral predispositions that would otherwise be removed from the gene pool in a more fluid social environment.

Why did class boundaries remain porous in England? The reason was probably the greater individualism of English society, particularly its expanding middle class. Sons were helped by their parents, but beyond a certain point they were expected to shift for themselves. My mother’s lineage used to be merchants on Fleet Street in London. They were successful and had such large families that they overshot their niche. By the nineteenth century, some of them had fallen to the level of shipbuilding laborers, and it was as such that they came to Canada.

Is biological evidence lacking for Clark's model?

Brooke and Larsen are on firmer ground when they say that Clark's model is unsupported by biological evidence. There is certainly a lack of hard evidence, but the only possible hard evidence would be ancient DNA. If we could retrieve DNA from the English population between the 12th and 19th centuries, would we see a shift toward alleles that support different mental and behavioral traits? That work has yet to be done. 

Nonetheless, a research team led by Michael Woodley has examined ancient DNA from sites in Europe and parts of southwest and central Asia over a time frame extending from 4,560 to 1,210 years ago. During that time frame, alleles associated with high educational attainment gradually became more frequent. The authors concluded: "This process likely continued until the Late Modern Era, where it has been noted that among Western populations living between the 15th and early 19th centuries, those with higher social status […] typically produced the most surviving offspring. These in turn tended toward downward social mobility due to intense competition, replacing the reproductively unsuccessful low-status stratum […] eventually leading to the Industrial Revolution in Europe" (Woodley et al. 2017).

Again, work remains to be done, particularly on the genetic profile of the English population between the twelfth and nineteenth centuries, but the existing data do seem to validate Clark's model for European societies in general. Indeed, psychologist Heiner Rindermann presents evidence that mean cognitive ability steadily rose throughout Western Europe during late medieval and post-medieval times. Previously, most people failed to develop mentally beyond the stage of preoperational thinking. They could learn language and social norms but their ability to reason was hindered by various impediments like cognitive egocentrism, anthropomorphism, finalism, and animism (Rindermann 2018, p. 49). From the sixteenth century onward, more and more people reached the stage of operational thinking. They could better understand probability and cause and effect and could see things from the perspective of another person, whether real or hypothetical (Rindermann 2018, pp. 86-87).

As the “smart fraction” became more numerous, it may have reached a threshold where intellectuals were no longer isolated individuals but rather communities of people who could interact and exchange ideas. This was one of the hallmarks of the Enlightenment: intellectuals were sufficiently large in number to meet in clubs, “salons,” coffeehouses, and debating societies.


Brooke, J.L. and C.S. Larsen. (2014). The Nurture of Nature: Genetics, Epigenetics, and Environment in Human Biohistory. The American Historical Review 119(5): 1500-1513.

Clark, G. (2007). A Farewell to Alms. A Brief Economic History of the World. Princeton University Press: Princeton and Oxford.

Clark, G. (2009a). The indicted and the wealthy: surnames, reproductive success, genetic selection and social class in pre-industrial England.

Clark, G. (2009b). The domestication of man: The social implications of Darwin. ArtefaCTos 2: 64-80. 

Cochran, G. and H. Harpending. (2009). The 10,000 Year Explosion: How Civilization Accelerated Human Evolution. New York: Basic Books. 

Frost, P. (2014). Burakumin, Paekchong, and Cagots. ResearchGate

Hawks, J., E.T. Wang, G.M. Cochran, H.C. Harpending, and R.K. Moyzis. (2007). Recent acceleration of human adaptive evolution. Proceedings of the National Academy of Sciences (USA) 104: 20753-20758.

Hopkins, K. (1965). Contraception in the Roman Empire. Comparative Studies in Society and History 8(1): 124-151.

Jelavich, C. and B. Jelavich. (1977). The Establishment of the Balkan National States, 1804-1920. Seattle: University of Washington Press.

Rindermann, H. (2018). Cognitive Capitalism. Human Capital and the Wellbeing of Nations. Cambridge University Press.

Woodley, M.A., S. Younuskunju, B. Balan, and D. Piffer. (2017). Holocene selection for variants associated with general cognitive ability: comparing ancient and modern genomes. Twin Research and Human Genetics 20(4): 271-280.

Tuesday, February 5, 2019

Did cold seasonal climates select for cognitive ability?

Paleolithic artefacts (Wikicommons). The northern tier of Eurasia saw an explosion of creativity that pre-adapted its inhabitants for later developments.

The new journal Psych will be publishing a special follow-up issue on J. Philippe Rushton and Arthur Jensen's 2005 article: "Thirty Years of Research on Race Differences in Cognitive Ability." The following is the abstract of my contribution. The article will appear later.

The first industrial revolution. Did cold seasonal climates select for cognitive ability?

Peter Frost

Abstract: In their joint article, Rushton and Jensen argued that cognitive ability differs between human populations. But why are such differences expectable? Their answer: as modern humans spread out of Africa and into the northern latitudes of Eurasia, they entered colder and more seasonal climates that selected for the ability to plan ahead, since they had to store food, make clothes, and build shelters for the winter. 

This explanation has a long history going back to Arthur Schopenhauer. More recently, it has been supported by findings from Paleolithic humans and contemporary hunter-gatherers. Tools become more diverse and complex as effective temperature decreases, apparently because food has to be obtained during limited periods of time and over large areas. There is also more storage of food and fuel and greater use of untended traps and snares. Finally, shelters have to be sturdier, and clothing more cold-resistant. The resulting cognitive demands fall on both men and women. Indeed, because women have few opportunities to get food through gathering, they specialize in more cognitively demanding tasks like garment making, needlework, weaving, leatherworking, pottery, and use of kilns. The northern tier of Paleolithic Eurasia thus produced the "first industrial revolution"—an explosion of creativity that pre-adapted its inhabitants for later developments, i.e., agriculture, more complex technology and social organization, and an increasingly future-oriented culture. Over time these humans would spread south, replacing earlier populations that could less easily exploit the possibilities of the new cultural environment. 

As this cultural environment developed further, it selected for further increases in cognitive ability. In fact, mean intelligence seems to have risen during historic times at temperate latitudes in Europe and East Asia. There is thus no unified theory for the evolution of human intelligence. A key stage was adaptation to cold seasonal climates during the Paleolithic, but much happened later.


Rushton, J.P. and A.R. Jensen. (2005). Thirty years of research on race differences in cognitive ability. Psychology, Public Policy, and Law 11(2): 235-294.

Monday, January 28, 2019

Evolution of empathy, part II

Medical students, Monterrey (credit: Daniel Adelrio, Wikicommons). Mexicans feel more empathy if they have a university degree. Does university make people more empathic?

We differ from individual to individual in our capacity not only to understand how others feel but also to experience their pain or joy. This “affective empathy” also differs between the sexes, being stronger in women than in men. Does it also differ between human populations? It should, for several reasons: 

- Affective empathy is highly heritable. A recent study put its heritability at 52-57% (Melchers et al. 2016).

- It differs in adaptiveness from one cultural environment to another, being adaptive in high-trust cultures and maladaptive in low-trust ones. There has thus been a potential for gene-culture coevolution.

- Such an evolutionary scenario would require relatively few genetic changes. Affective empathy exists in all human populations, and most likely already existed in ancestral hominids. Differences within our species are thus differences in fine-tuning of an existing mechanism. 

One can imagine the following scenario:

1. Initially, affective empathy existed primarily in women and served to facilitate the mother-child relationship.

2. Later, when human societies grew beyond the size of small kin groups, this mental trait took on a new task: regular interaction with people who were not necessarily close kin.

3. Selection thus increased the capacity for affective empathy in both sexes but more so in men.

4. This gene-culture evolution went the farthest in high-trust cultures.

Affective empathy and educational level in Mexico

To measure differences in affective empathy between human populations we can administer tests like "Pictures of Facial Affect" and the "Cambridge Behavior Scale." The first is a measure of the ability to recognize emotion in human faces. The second is a self-report questionnaire on empathy, with responses on a 4-point scale ranging from "strongly agree" to "strongly disagree."

In a recent study from Mexico these tests confirmed that affective empathy is stronger in women than in men. There were also differences by occupational status:

[...] we sought to explore facial emotion recognition abilities and empathy in administrative officers and security guards at a center for institutionalized juvenile offenders. One hundred twenty-two Mexican subjects, including both men and women, were recruited for the study. Sixty-three subjects were administrative officers, and 59 subjects were security guards at a juvenile detention center. Tasks included "Pictures of Facial Affect" and the "Cambridge Behavior Scale." The results showed that group and gender had an independent effect on emotion recognition abilities, with no significant interaction between the two variables. Specifically, administrative officers showed higher empathy than security guards. Moreover, women in general exhibited more empathy than men. (Quintero et al. 2018)

Why were the guards less able to recognize signs of distress or happiness on human faces? The authors offer no explanation but do note that the two groups differed in educational level: most of the administrative officers were university graduates, whereas the guards had gone no farther than middle school. 

In Mexico, educational level correlates with European admixture (Martinez-Marignac et al. 2007). Is this group difference in empathy really an ethnic difference?

The amygdala and political orientation in the U.S. and the U.K.

Tests are subjective and thus suffer from biases that may produce different results in different populations. To avoid this problem, a promising method is to measure the size or activity of brain structures that are associated with affective empathy. In the latest review of the literature, Tal Saban and Kirby (2019) assign the amygdala a key role:

Neuroscientists have identified the brain regions for the "empathy circuit": 1) the amygdala, responsible for regulating emotional learning and reading emotional expressions; 2) the anterior cingulate cortex (ACC), activated during observed or experienced pain in the self or others; and 3) the anterior insula (AI), which responds to one's pain and the pain of a loved one (Carr, Iacoboni, Dubeau, Mazziotta, & Lenzi, 2003). In recent years the mirror neuron system (MNS), comprised of the inferior frontal gyrus and inferior parietal cortex, has been suggested to also be involved in empathy (Gazzola et al., 2006, Kaplan and Iacoboni, 2006, Pfeifer et al., 2008, Baird et al., 2011). The broad notion that empathy involves "putting oneself in another's shoes" by simulating what others do, think, or feel, has been linked to the properties of mirror neurons.

The amygdala has been linked to affective empathy by MRI studies on healthy individuals and on individuals with amygdala lesions (Bzdok et al. 2012; Brunnlieb et al. 2013; Gu et al. 2010; Hurlemann et al. 2010; Leigh et al. 2013).

Two studies have found group differences in amygdala size or activity. When brain MRIs were done on 82 adults from the University of California at San Diego, the right amygdala showed more activity in Republicans than in Democrats (Schreiber et al., 2013). Similarly, a study of 90 adults from University College London found that the right amygdala was larger in self-described conservatives than in self-described liberals (Kanai et al., 2011).

Is affective empathy stronger in conservatives than in liberals? Or are these labels a proxy for something else? In both the United States and the United Kingdom, party politics is increasingly identity politics. While it is true that non-European minorities tend to be socially conservative, they nonetheless tend to be politically liberal, often overwhelmingly so. In the American study, party affiliation was undoubtedly the dimension being measured: participants were asked whether they were Democrat or Republican. This is less evident in the British study, where participants were asked about their "political orientation."

Both universities are ethnically diverse. The University of California at San Diego is 36% Asian, 20% White, 19% non-resident alien, and 17% Latino (Anon 2019). There is no ethnic breakdown of University College London students, but we know that a third of them come from outside the United Kingdom (Wikipedia 2019).


Brain MRIs provide a means to measure affective empathy objectively. We can thus evaluate differences between human populations, just as we have evaluated differences between men and women, and from individual to individual. This kind of comparative research will likely be done by accident rather than by design, as with the above three studies.

Another approach would be to identify alleles that correlate with a high level of affective empathy. A polygenic score could then be created, thus providing an objective yardstick for measuring this mental trait in any human population. Particularly promising are two polymorphisms. Alleles at the OXTR gene correlate with inter-individual differences in empathy, especially with affective empathy in women (Huetter et al. 2016). Alleles at the GNAS gene correlate with inter-individual differences in cognitive empathy, but only in women (Huetter et al. 2018).
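In principle, such a score is just a weighted sum of effect-allele counts: for each trait-associated variant, the number of effect alleles carried (0, 1, or 2) is multiplied by that variant's per-allele effect size, and the products are summed. A minimal sketch of the computation (the variant IDs and weights below are hypothetical placeholders, not values from the OXTR or GNAS studies):

```python
def polygenic_score(genotype, weights):
    """Weighted sum of effect-allele counts.
    genotype: dict mapping variant ID -> allele count (0, 1, or 2)
    weights:  dict mapping variant ID -> per-allele effect size
    Variants absent from the genotype contribute nothing."""
    return sum(weights[v] * genotype.get(v, 0) for v in weights)

# Hypothetical variants and effect sizes, for illustration only:
weights  = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(round(polygenic_score(genotype, weights), 2))  # 0.19
```

In practice the weights would come from a genome-wide association study of empathy measures, and scores would be standardized within the reference population before comparison.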


Anon. (2019). University of California - San Diego, Ethnic Diversity.

Bzdok, D., L. Schilbach, K. Vogeley, et al. (2012). Parsing the neural correlates of moral cognition: ALE meta-analysis on morality, theory of mind, and empathy. Brain Structure and Function 217(4):783-796. 

Brunnlieb, C., T.F. Munte, C. Tempelmann, and M. Heldmann. (2013). Vasopressin modulates neural responses related to emotional stimuli in the right amygdala. Brain Research 1499:29-42. 

Gu, X., X. Liu, K.G. Guise, et al. (2010). Functional dissociation of the frontoinsular and anterior cingulate cortices in empathy for pain. Journal of Neuroscience 30:3739-3744. 

Huetter, F.K., H.S. Bachmann, A. Reinders, D. Siffert, P. Stelmach, D. Knop, et al. (2016). Association of a Common Oxytocin Receptor Gene Polymorphism with Self-Reported 'Empathic Concern' in a Large Population of Healthy Volunteers. PLoS ONE 11(7): e0160059.

Huetter, F.K, P.A. Horn, and W. Siffert. (2018). Sex-specific association of a common GNAS polymorphism with self-reported cognitive empathy in healthy volunteers. PLoS ONE 13(10): e0206114. 

Hurlemann, R., A. Patin, O.A. Onur, et al. (2010). Oxytocin enhances amygdala-dependent, socially reinforced learning and emotional empathy in humans. Journal of Neuroscience 30(14):4999-5007. 

Kanai, R., T. Feilden, C. Firth, and G. Rees. (2011). Political orientations are correlated with brain structure in young adults. Current Biology 21: 677-680.

Leigh, R., K. Oishi, J. Hsu, et al. (2013). Acute lesions that impair affective empathy. Brain 136(8):2539-2549.

Martinez-Marignac, V.L., A. Valladares, E. Cameron, A. Chan, A. Perera, R. Globus-Goldberg, N. Wacher, J. Kumate, P. McKeigue, D. O'Donnell, M.D. Shriver, M. Cruz, and E.J. Parra. (2007). Admixture in Mexico City: implications for admixture mapping of Type 2 diabetes genetic risk factors. Human Genetics 120(6): 807-819.

Melchers, M., C. Montag, M. Reuter, F.M. Spinath, and E. Hahn. (2016). How heritable is empathy? Differential effects of measurement and subcomponents. Motivation and Emotion 40(5): 720-730. 

Quintero, L.A.M., J. Muñoz-Delgado, J.C. Sánchez-Ferrer, A. Fresán, M. Brüne, and I. Arango de Montis.  (2018). Facial Emotion Recognition and Empathy in Employees at a Juvenile Detention Center. International Journal of Offender Therapy and Comparative Criminology 62(8) 2430-2446.

Schreiber, D., Fonzo, G., Simmons, A.N., Dawes, C.T., Flagan, T., et al. (2013). Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans. PLoS ONE 8(2): e52970.

Tal Saban, M. and A. Kirby. (2019). Empathy, social relationship and co-occurrence in young adults with DCD. Human Movement Science 63: 62-72.

Wikipedia (2019). University College London.

Monday, January 21, 2019

The evolution of empathy

Maria Walpole and her daughter Elisabeth Laura (1762), by Joshua Reynolds. Affective empathy may have initially evolved to facilitate the mother-child relationship. 

Empathy is key to the functioning of high-trust cultures. If people are empathic toward one another, there is no need to waste energy on self-protection or on double-checking every transaction. Just as importantly, you can make transactions that would otherwise be uneconomical.

Empathy, however, has to be reciprocated. Otherwise, it will divert your limited resources to people who will never reciprocate and who will, in fact, bleed you dry.  

The adaptiveness of empathy therefore depends on the cultural environment. Some cultures will favor it but not others. Does it follow, then, that some human populations have become more empathic than others? Can this mental trait undergo gene-culture coevolution?

It can, if three pre-conditions are met:

1. The trait varies in adaptiveness from one culture to another.

2. The trait is genetically heritable.

3. The trait can easily evolve out of pre-existing traits, i.e., only a few genetic changes are needed.

Evolutionary psychologists will argue that modern humans have not existed long enough to evolve new mental adaptations, particularly since their expansion out of Africa and into new natural and cultural environments. There has only been fine-tuning of existing adaptations (Tooby, Cosmides, and Barkow 1992). This argument is debatable:

Even if 40 or 50 thousand years were too short a time for the evolutionary development of a truly new and highly complex mental adaptation, which is by no means certain, it is certainly long enough for some groups to lose such an adaptation, for some groups to develop a highly exaggerated version of an adaptation, or for changes in the triggers or timing of that adaptation to evolve. That is what we see in domesticated dogs, for example, who have entirely lost certain key behavioral adaptations of wolves such as paternal investment. Other wolf behaviors have been exaggerated or distorted. (Harpending and Cochran 2002)

Empathy can thus differ between human populations if the differences arise from simple changes to an existing mechanism.

So does empathy meet the above preconditions?

Differences in adaptiveness

All cultures have rules of one sort or another. These rules are enforced by external sanctions (shaming by the community, especially by family members) and internal sanctions (feelings of guilt). Most cultures rely primarily on shaming. Some cultures, particularly in Europe, rely much more on feelings of guilt. Guilt is a subset of empathy. As the wrongdoer, you transfer to yourself the feelings of the person you have wronged. You feel the pain you have inflicted, and you will now mentally punish yourself.

The anthropologist Ruth Benedict described the differences between shame and guilt:

True shame cultures rely on external sanctions for good behavior, not, as true guilt cultures do, on an internalized conviction of sin. Shame is a reaction to other people's criticism. A man is shamed either by being openly ridiculed and rejected or by fantasying to himself that he has been made ridiculous. In either case, it is a potent sanction. But it requires an audience or at least a man's fantasy of an audience. Guilt does not. In a nation where honor means living up to one's own picture of oneself, a man may suffer from guilt though no man knows of his misdeed and a man's feeling of guilt may actually be relieved by confessing his sin. (Benedict 1946, p. 223)

Shame seems to be evolutionarily older than guilt. Sigmund Freud speculated that feelings of guilt arose as a mechanism to punish misbehavior in larger communities where paternal authority is insufficient: 

When an attempt is made to widen the community, the same conflict is continued in forms which are dependent on the past; and it is strengthened and results in a further intensification of the sense of guilt. [...]. What began in relation to the father is completed in relation to the group. If civilization is a necessary course of development from the family to humanity as a whole, then [...] there is inextricably bound up with it an increase of the sense of guilt, which will perhaps reach heights that the individual finds hard to tolerate. (Freud 1962, pp. 79-80)

East Asians might seem to be an exception to this evolutionary trend. They generally live in large communities where paternal authority is insufficient to enforce social rules. This problem seems to have been resolved through a stronger sense of social duty, rather than a greater propensity for empathy and guilt.

We see this in a study of young Chinese adults. The participants could see things from another person's perspective and understand how that person felt, but they did not seem to internalize those feelings and experience them vicariously. They were motivated to obey social rules by a sense of duty, rather than by empathy and feelings of guilt: "taking the views of others is an essential duty, and the lack of consideration to others' perspectives is generally regarded as a lack of virtue in the Chinese culture" (Siu and Shek 2005).


Genetic heritability

First, we should keep in mind that empathy is not a unitary construct. It has different components:

- Pro-social behavior: willingness to help others

- Cognitive empathy: capacity to understand how others feel

- Affective or emotional empathy: involuntary transference of another person's feelings to yourself, i.e., feeling that person's pain or joy.

The last component is usually what we mean by empathy. Nonetheless, a person can be low in affective empathy while being high in cognitive empathy; this is in fact the hallmark of the sociopath, i.e., a person who understands how others feel and knows how to exploit those feelings for personal gain. Of the three kinds of empathy, pro-social behavior seems the most divergent and shares the least mental circuitry with the other two. Cognitive and affective empathy share circuits that specialize in representing another person's thoughts and intentions; affective empathy seems to be an additional step where these representations are relayed to brain regions that produce the corresponding emotional responses (Carr et al. 2003; Krishnan et al. 2016).

The latest review of the literature concluded that all three components of empathy have moderate to high heritability (Chakrabarti and Baron-Cohen 2013). Since then, an adult twin study has estimated the heritability of affective empathy at 52-57% and that of cognitive empathy at 27%. The rest of the variance was largely due to non-shared environment (Melchers et al. 2016). 
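Twin studies of this kind typically infer heritability from the gap between monozygotic and dizygotic twin correlations: identical twins share essentially all their genes, fraternal twins about half, so doubling the difference in their trait correlations estimates the additive genetic share of the variance. A minimal sketch using Falconer's formula (the correlation values below are illustrative, not taken from Melchers et al. 2016):

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's formula: h^2 = 2 * (r_MZ - r_DZ).
    r_mz: trait correlation between monozygotic twin pairs
    r_dz: trait correlation between dizygotic twin pairs"""
    return 2 * (r_mz - r_dz)

# Illustrative correlations only, chosen to land near the reported range:
h2 = falconer_heritability(r_mz=0.55, r_dz=0.28)
print(round(h2, 2))  # 0.54, i.e., roughly 54% of variance is genetic
```

Modern twin studies use more elaborate structural models (ACE decomposition), but the logic is the same: the remainder of the variance splits into shared and non-shared environment.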

These findings are in line with those of a longitudinal twin study of children from 7 to 12 years of age. Genetic influences accounted for most of the variance in callousness/unemotionality, and environmental influences were entirely non-shared (Henry et al. 2018). Other studies have shown that the capacity for affective empathy remains stable as a child develops, while cognitive empathy progressively increases (Decety et al. 2017).

Finally, men and women seem to differ in affective empathy but not in cognitive empathy: “females do indeed appear to be more empathic than males [but] [t]hey do not appear to be more adept at assessing another person's affective, cognitive, or spatial perspective” (Hoffman 1977). This sex difference has been confirmed by recent studies, notably a British study (Baron-Cohen and Wheelwright 2004), a largely Argentinean study (Baez et al. 2017), an Italian twin study (Toccaceli et al. 2018), and a Chinese study (Liu et al. 2018). The size of the sex difference varied, however, being slight in the British and Argentinean studies, large but not significant in the Italian study, and significant in the Chinese study. 

Evolution in Homo sapiens

Affective empathy seems to be universal in our species. Differences do exist, however, between individuals, and these differences are distributed along a Bell curve in a human population (Baron-Cohen 2011; McGregor 2018). Any distinction between “normal people” and “sociopaths” is therefore arbitrary. There is simply a continuum of decreasing capacity for affective empathy.

Affective empathy also differs between men and women, and this sex difference seems, in turn, to differ from one population to another. This last point suggests an evolutionary pathway. Affective empathy may have initially evolved in ancestral humans as a means to facilitate the mother-child relationship. "Guilt cultures" then favored extension of affective empathy to a wider range of social interactions, as well as increased expression in men. One consequence would be a smaller sex difference in this mental trait.

How do guilt cultures ensure that affective empathy is reciprocated? They seem to resolve this problem by defining themselves much more as moral communities than as communities of related individuals. Adherence to social rules defines community membership, and these rules are perceived as being universal and absolute, as opposed to the situational morality of communities defined solely by kinship. Guilt cultures are also highly ideological. Community members monitor not only outward behavior for compliance but also inward thoughts—and this monitoring can target not just the thoughts of other members but also one’s own. Non-compliance can lead to a member being branded as morally worthless and expelled from the community (Frost 2017).

The current evidence is suggestive but not conclusive. As Baez et al. (2017) point out, most of our evidence on sex differences in empathy comes from self-report, i.e., questionnaires that men and women fill out. Many studies also fail to distinguish between cognitive empathy (understanding what others feel) and affective empathy (feeling what others feel). To measure affective empathy objectively, especially when comparing people from different cultural backgrounds, it would be best to use brain fMRIs (Krishnan et al. 2016).

To be cont'd


Baez, S., Flichtentrei, D., Prats, M., Mastandueno, R., García, A.M., Cetkovich, M., et al. (2017). Men, women...who cares? A population-based study on sex differences and gender roles in empathy and moral cognition. PLoS ONE 12(6): e0179336.

Baron-Cohen, S. (2011). The Empathy Bell Curve. Phi Kappa Phi Forum; Baton Rouge 91(1): 10-12.

Baron-Cohen, S. and S. Wheelwright. (2004). The Empathy Quotient: An investigation of adults with Asperger Syndrome or high functioning autism, and normal sex differences. Journal of Autism and Developmental Disorders 34: 163-175.

Benedict, R. (1946 [2005]). The Chrysanthemum and the Sword. Patterns of Japanese Culture, First Mariner Books.

Carr, L., M. Iacoboni, M-C. Dubeau, J.C. Mazziotta, and G.L. Lenzi. (2003). Neural mechanisms of empathy in humans: A relay from neural systems for imitation to limbic areas. Proceedings of the National Academy of Sciences (USA) 100: 5497-5502.

Chakrabarti, B. and S. Baron-Cohen. (2013). Understanding the genetics of empathy and the autistic spectrum, in S. Baron-Cohen, H. Tager-Flusberg, M. Lombardo. (eds). Understanding Other Minds: Perspectives from Developmental Social Neuroscience. Oxford: Oxford University Press.

Davis, M.H., C. Luce, and S.J. Kraus. (1994). The heritability of characteristics associated with dispositional empathy. Journal of Personality 62: 369-391.

Decety, J., K.L. Meidenbauer, and J.M. Cowell. (2017). The development of cognitive empathy and concern in preschool children: A behavioral neuroscience investigation. Developmental Science 21: e12570.

Freud, S. (1962[1930]). Civilization and Its Discontents. New York: W.W. Norton

Frost, P. (2017). The Hajnal line and gene-culture coevolution in northwest Europe. Advances in Anthropology 7: 154-174.

Harpending, H., and G. Cochran. (2002). In our genes. Proceedings of the National Academy of Sciences (USA) 99(1): 10-12.

Henry, J., G. Dionne, E. Viding, A. Petitclerc, B. Feng, F. Vitaro, M. Brendgen, R.E. Tremblay, and M. Boivin. (2018). A longitudinal twin study of callous-unemotional traits during childhood. Journal of Abnormal Psychology 127(4): 374-384. 

Hoffman, M. L. (1977). Sex differences in empathy and related behaviors. Psychological Bulletin 84(4): 712-722. 

Krishnan, A., C.W. Woo, L.J. Chang, L. Ruzic, X. Gu, M. López-Solà, P.L. Jackson, J. Pujol, J. Fan, and T.D. Wager. (2016). Somatic and vicarious pain are represented by dissociable multivariate brain patterns. eLife 5: e15166.

Liu, J., X. Qiao, F. Dong, and A. Raine. (2018). The Chinese version of the cognitive, affective, and somatic empathy scale for children: Validation, gender invariance and associated factors. PLoS ONE 13(5): e0195268. 

McGregor, J. (2018). The highly empathic. SoRECS – The Society for Research into Empathy, Cruelty & Sociopathy. May

Melchers, M., C. Montag, M. Reuter, F.M. Spinath, and E. Hahn. (2016). How heritable is empathy? Differential effects of measurement and subcomponents. Motivation and Emotion 40(5): 720-730. 

Siu, A.M.H. and D.T. L. Shek. (2005). Validation of the Interpersonal Reactivity Index in a Chinese Context. Research on Social Work Practice 15: 118-126.

Toccaceli, V., C. Fagnani, N. Eisenberg, G. Alessandri, A. Vitale and M.A. Stazi. (2018). Adult Empathy: Possible Gender Differences in Gene-Environment Architecture for Cognitive and Emotional Components in a Large Italian Twin Sample. Twin Research and Human Genetics 21(3): 214-226.

Tooby, J., L. Cosmides, and J. Barkow. (1992). Introduction: Evolutionary Psychology and Conceptual Integration. In J. Barkow, L. Cosmides, and J. Tooby (eds.) The Adapted Mind: Evolutionary Psychology and the Generation of Culture, pp. 3-16. New York: Oxford University Press.