Monday, April 1, 2019

They really are smart ... and other surprises



Rachela - Maurycy Gottlieb (1856-1879) (Wikicommons). Ashkenazi Jews have a higher incidence of genetic variants associated with high educational attainment.



Intelligence varies from one individual to the next, and most of this variance has genetic causes. But what, exactly, are these causes? Lots and lots of genes, it seems. To be precise, if we look at the genes that influence human intelligence, we find two things:

1. They are very numerous, numbering in the thousands.

2. In general, their variants differ slightly in their effects.

This shouldn't be surprising. Evolution proceeds by tinkering, i.e., by making little changes. Big changes tend to produce big side-effects, and most side-effects are deleterious. So the genetic capacity for intelligence differs among humans through small differences at thousands upon thousands of genes. Does it follow, then, that we cannot understand these differences by looking only at a few genes? Not necessarily. Each gene is like a weathervane. If you can get enough subjects from a human population, even a few genes will tell you the direction and strength of natural selection for intelligence. 

Davide Piffer began looking at these “weathervanes” six years ago. He gathered data from different human populations on ten SNPs (single nucleotide polymorphisms) whose genetic variants are associated with differences in intelligence, specifically differences in educational attainment. Then, for each population, he estimated its genetic capacity for intelligence by calculating a "polygenic score"—the number of genetic variants associated with higher educational attainment, out of a maximum of ten.
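The mechanics of such a score can be sketched in a few lines. Everything below is hypothetical illustration (made-up allele frequencies, IQ means, and population labels, not Piffer's data): average each population's frequency of the attainment-increasing allele across the SNPs, then correlate that average with population IQ.

```python
# Hypothetical frequencies of the attainment-increasing allele at ten SNPs
# per population. These numbers are illustrative only, not Piffer's data.
freqs = {
    "Pop_A": [0.42, 0.38, 0.45, 0.31, 0.40, 0.36, 0.44, 0.39, 0.41, 0.37],
    "Pop_B": [0.35, 0.33, 0.37, 0.28, 0.36, 0.31, 0.38, 0.34, 0.36, 0.32],
    "Pop_C": [0.18, 0.15, 0.20, 0.12, 0.17, 0.14, 0.19, 0.16, 0.18, 0.13],
}

# A population's polygenic score: mean frequency of the "beneficial" alleles
scores = {pop: sum(f) / len(f) for pop, f in freqs.items()}

# Hypothetical population mean IQs, to show how the score is validated
iq = {"Pop_A": 105, "Pop_B": 99, "Pop_C": 80}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

pops = sorted(freqs)
r = pearson([scores[p] for p in pops], [iq[p] for p in pops])
```

With thousands of SNPs instead of ten, the same averaging simply runs over more columns.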

This score correlated with population IQ (r=0.90) and with PISA scores (r=0.84). It was highest in East Asians:

East Asians have the highest frequencies of alleles beneficial to educational attainment (39%) and consistently outperform other racial groups both within the US and around the world, in terms of educational variables such as completion of college degree or results on standardized tests of scholastic achievement. Europeans have slightly lower frequencies of educational attainment alleles (35.5%) and perform slightly worse in terms of educational attainment, compared to East Asians. On the other hand, Africans seem to be disadvantaged both with regards to their level of educational attainment in the US and around the world. Indeed, Africans have the lowest frequencies of alleles associated with educational attainment (16%). (Piffer 2013)

These results were considered preliminary. Thousands upon thousands of genes influence intelligence, and here we have only ten! Perhaps chance alone produced this geographic pattern. Over the next few years, as other researchers discovered more SNPs associated with educational attainment, Davide Piffer repeated his study with more of these weathervanes.

His latest study has just come out. It uses data on 2,411 SNPs, and the polygenic score correlates even more strongly with population IQ (r=0.98). The geographic pattern is the same, with East Asians scoring higher than Europeans, and Africans scoring lower.


Yes, Jews really are smart

This time, however, the highest score was obtained for Ashkenazi Jews: 

This dataset included a sample of 145 Ashkenazi Jewish individuals. The IQ of Ashkenazi Jews has been estimated to be around 110 [34]. Remarkably, their EDU polygenic score was the highest in our sample, corresponding to a predicted score of about 108, mirroring preliminary results from a smaller (N = 53) sample (Dunkel et al., 2019) [34]. (Piffer 2019)

This finding vindicates the authors of a paper written more than a decade ago. Gregory Cochran, Jason Hardy, and Henry Harpending presented evidence that the mean IQ of Ashkenazi Jews exceeds not only that of non-Jewish Europeans but also that of other Jewish groups. The most striking piece of evidence is the high incidence among Ashkenazim of four genetic disorders: Tay-Sachs, Gaucher, Niemann-Pick, and mucolipidosis type IV (MLIV). All four affect the capacity to store sphingolipid compounds that promote the growth and branching of axons in the brain. These disorders are caused by alleles that are harmful in the homozygote state and beneficial in the much more common heterozygote state, i.e., the brain receives higher levels of sphingolipids without the adverse health effects.

Ironically, these facts are coming to light at a time when Ashkenazi Jews are disappearing through low fertility and high out-marriage. Meanwhile, and not coincidentally, they are disappearing from the ranks of top winners at the U.S. Math Olympiad, the Putnam Exam, the Computing Olympiad, and other academic competitions. This decline became noticeable in the 1980s and has accelerated since the turn of the millennium (Unz 2012; Frost 2018). Jews are still present in intellectual and cultural life, but this presence is losing its dynamism and becoming a mere legacy.


African American IQ is higher than predicted

The polygenic score seems to underpredict the IQ of African Americans:

Indeed, the IQ of African Americans appears to be higher than what is predicted by the PGS (Figure 2), which suggests this cannot be explained by European admixture alone, but it could be the result of enjoying better nutrition or education infrastructure compared to native Africans. Another explanation is heterosis ("hybrid vigor"), that is the increase in fitness observed in hybrid offspring thanks to the reduced expression of homozygous deleterious recessive alleles. (Piffer 2019)

I’d propose another possible explanation: higher intelligence in African Americans may be associated with a somewhat different basket of genetic variants. Some of these variants may come from our friends the Igbos, who seem to have followed their own evolutionary path toward higher intelligence (Frost 2015). Many notable African Americans are in fact of Igbo descent, including Forest Whitaker, Paul Robeson, and Blair Underwood (Wikipedia 2019).

Davide is skeptical about this explanation, pointing out that population IQ is in line with the polygenic score he calculated for sub-Saharan African groups (Esan, Gambians, Luhya, Mende, Yoruba). None of those groups, however, are Igbo, and it's really the Igbo who stand out among West Africans in measures of intellectual and educational attainment. If only for the sake of curiosity, we should find out their polygenic score. This score may underpredict their genetic capacity for intelligence, which to some degree would be boosted by genetic variants that exist only in sub-Saharan Africa, but it should still exceed what we see for other West Africans.


Conclusion

This latest study brings to 2,411 the number of SNPs that can inform us about the genetic capacity for intelligence in different human populations. Such inferences were more dubious when only ten SNPs were available and the geographic pattern could be put down to chance. That argument now seems weak. If chance is causing this pattern, why do we keep getting the same one?
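The chance argument can be made concrete with a toy simulation (illustrative only, not anything from Piffer's analysis). Under the null hypothesis that allele frequencies carry no signal, each population's score from an independent set of SNPs is effectively random noise, so two independent SNP sets should agree on the full ranking of k populations only about 1 in k! times.

```python
import random

# Toy null model: if SNP allele frequencies carried no information, each
# population's polygenic score would be an average of random numbers, so
# the ranking of k populations from one SNP set would be a uniformly
# random permutation, independent of the ranking from another SNP set.
def random_ranking(k, n_snps, rng):
    scores = [sum(rng.random() for _ in range(n_snps)) / n_snps
              for _ in range(k)]
    return tuple(sorted(range(k), key=lambda i: scores[i]))

rng = random.Random(42)       # fixed seed for reproducibility
k, trials = 5, 20000
agree = sum(random_ranking(k, 10, rng) == random_ranking(k, 10, rng)
            for _ in range(trials))
p_same = agree / trials       # expected about 1/120 (~0.008) for k = 5
```

For five populations that is under one percent, and the probability shrinks factorially as more populations keep lining up in the same order across successive SNP sets.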

Sure, we can wait until we get even more relevant SNPs, but the overall picture will probably remain the same. We will get finer geographic detail. In France, for example, we will probably understand why educational attainment is so much higher in Brittany (see http://www.targetmap.com/viewer.aspx?reportId=5987; H/T to Philippe Gouillou). There are probably several European regions and subregions where the genetic capacity for intelligence is on a par with what we see in Ashkenazi Jews and East Asians.

In sum, these findings deserve to be better known ... and more widely discussed.


Erratum

Initially, I wrote that Davide Piffer used 127 SNPs. In fact, 127 is the number of SNPs found in the HGDP (low coverage) dataset. In the other two datasets (1000 Genomes and gnomAD), on which the main analysis was based, there were actually 2,411 SNPs.


**** Hiatus alert ****


I'll be unable to post for the near future, probably the next three months. 


References

Cochran, G., J. Hardy, and H. Harpending. (2006). Natural history of Ashkenazi intelligence, Journal of Biosocial Science 38: 659-693.
https://antville.org/static/sites/kratzbuerste/files/AshkenaziIQ.pdf   

Frost, P. (2018). The end of Jewish achievement? Evo and Proud, May 21
http://evoandproud.blogspot.com/2018/05/the-end-of-jewish-achievement.html

Frost, P. (2015). The Jews of West Africa, Evo and Proud, July 4
https://evoandproud.blogspot.com/2015/07/the-jews-of-west-africa.html

Piffer, D. (2019). Evidence for Recent Polygenic Selection on Educational Attainment and Intelligence Inferred from GWAS Hits: A Replication of Previous Findings Using Recent Data. Psych 1(1): 55-75
https://www.mdpi.com/2624-8611/1/1/5

Piffer, D. (2013). Factor analysis of population allele frequencies as a simple, novel method of detecting signals of recent polygenic selection: The example of educational attainment and IQ, Mankind Quarterly 54(2): 168-200
https://www.researchgate.net/profile/Davide_Piffer/publication/260436834_Factor_Analysis_of_Population_Allele_Frequencies_as_a_Simple_Novel_Method_of_Detecting_Signals_of_Recent_Polygenic_Selection_The_Example_of_Educational_Attainment_and_IQ/links/0c9605314d28dba7ea000000/Factor-Analysis-of-Population-Allele-Frequencies-as-a-Simple-Novel-Method-of-Detecting-Signals-of-Recent-Polygenic-Selection-The-Example-of-Educational-Attainment-and-IQ.pdf 

Unz, R. (2012). The myth of American meritocracy. The American Conservative, November 28
http://www.theamericanconservative.com/articles/the-myth-of-american-meritocracy/   

Wikipedia (2019). Igbo people.
https://en.wikipedia.org/wiki/Igbo_people#Diaspora

Tuesday, March 26, 2019

Autumn in China



Pink Autumn, by Victor Wang (2017) (Wikipedia). China’s demographic crisis is much worse than what official statistics let on.



With its population ageing as a result of longer lifespans and a dwindling number of children, the world's most populous nation decided in 2016 to allow all couples to have a second child, relaxing a tough one-child policy in place since 1978.

But birth rates plummeted for the second consecutive year last year. Policymakers now fret about the impact a long-term decline in births will have on the economy and its strained health and social services. (Stanway 2019)

The above Reuters article appeared two weeks ago. Although China lifted its one-child policy in 2016, its total fertility rate is still declining and now stands at 1.6 children per woman—well below the replacement level of 2.1 children. Delegates to China's parliament are saying that "radical steps are needed."

Are things that bad? No … they're worse. China's fertility rate is much lower than the official figure of 1.6 and probably close to what we see in Taiwan (1.1), in Singapore's Chinese community (1.1), and in Malaysia's Chinese community (1.3). Moreover, it is continuing to decline and will soon fall below the threshold of one child per woman, if it has not already done so. Finally, this very low fertility has lasted long enough to exhaust all population momentum. The population will soon begin to shrink, most likely five years earlier than the officially projected date of 2030. 


The real figures

The official figure of 1.6 children per woman is consistent with estimates by respected international bodies: 1.62 (World Bank, 2016); 1.8 (Population Reference Bureau, 2016); and 1.6 (United Nations, 2011-2015) (Wang 2018). The total fertility rate can also be estimated from data that China collects every year, i.e., the sample surveys taken by the National Bureau of Statistics. Using this source, Mengqiao Wang came up with much lower figures:

[…] far below the 2.1 replacement level, national TFR fluctuated around 1.4 since 2003, before dropping to around 1.2 since 2010 and finally reaching an astoundingly low value of 1.05 in 2015. (Wang 2018)

This pattern of very low fertility is limited to Han Chinese, particularly those in the northeastern provinces of Liaoning, Heilongjiang, and Jilin. In those provinces, the total fertility rate has fallen to 0.75 children per woman, and death rates have already overtaken birth rates:

Population shrinkage was already a fact for the northeastern part of the country, and it remained a question of when but not whether that fact would spread to other areas of the nation (official estimate of population peak at 1.45 billion by 2030 but unofficial estimate of 1.41 billion as earliest as 2025). (Wang 2018)

Why are these figures so much lower than the official figures? The main reason given is that the one-child policy caused widespread underreporting of births: many Chinese were having second children but not reporting them to government statisticians for fear of being penalized. This is seen in the differences between the raw data of the sample surveys and the official estimates of the National Bureau of Statistics. 

The NBS estimates, however, seem to be based on the sample survey data … with an upward adjustment to take underreporting of births into account: "Ironically, the NBS mentioned that the annual total births announced were calculated and inferred from the same sample surveys this study analyzed" (Wang 2018). So we are trapped in a circular argument: the sample surveys must be missing many births because the NBS estimates show a higher birth rate. But the same NBS estimates have been adjusted upwards because so many births are supposedly being missed.

Moreover, if the one-child policy had caused so much underreporting of births, the number of reported births should have increased in 2016, when that policy was scrapped, and that increase should have persisted in subsequent years. That's not what happened. The number of births did rise in 2016 and then fell in 2017 and 2018. The rise was probably due to some parents deciding to have a second child because there were no longer any financial penalties. The "backlog" of potential childbearing has now been cleared, and the fertility rate has returned to its pre-2016 slump.

It should be emphasized that the decline in Chinese fertility is now being driven by the growing number of women who have no children at all. This childlessness cannot be blamed on the one-child policy, and it is questionable, in fact, whether that policy has had much impact on the fertility rate for the past two decades. Total fertility rates have declined to the same very low level among Chinese people in Taiwan, Singapore, and Malaysia—where there has never been a one-child policy.


How bad is it?

Pretty bad. The fertility rate dropped below the replacement level in 1992 and has been at very low levels (1.4 or lower) since 2003. There is very little population momentum left, and an absolute drop in numbers should begin in the mid-2020s. Meanwhile, the total fertility rate may decline even further. In 2018, it was 0.98 in South Korea and 0.75 in northeastern China (Kyu-min and Su-ji 2019; Wang 2018). In all of East Asia, this pattern has only one exception: North Korea, where the rate is 1.98 children per woman (Wikipedia 2019). This is what we call “a failed state.”

One can always say that China has well over a billion people and can afford to shed a few hundred million. The relevant figure, however, is not the total population but rather the number of women who can bear children. That figure is a lot smaller and will continue to shrink. Women of childbearing age are defined (generously) as being 15 to 49 years old. Their numbers peaked at 383 million in 2011 and have been falling each year by 4 to 6 million. If we look at the most fertile age group (20-29), their numbers are expected to fall from 107.7 million in 2016 to around 80 million in 2020 (Wang 2018). 
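A back-of-the-envelope calculation from the figures above shows why the childbearing pool matters more than the total population. This treats births as roughly proportional to the number of women aged 20-29, which is a simplification, since childbearing is spread over the whole 15-49 range.

```python
# Women aged 20-29, in millions (figures cited from Wang 2018)
women_2016 = 107.7
women_2020 = 80.0   # projected

# Even at a constant fertility rate, births fall roughly in proportion
# to the shrinking pool of women in the most fertile age group.
decline = 1 - women_2020 / women_2016          # about a 26% drop over four years
annual = 1 - (women_2020 / women_2016) ** 0.25  # roughly 7% per year
```

In other words, annual births would fall by about a quarter in four years even if each woman's fertility stayed exactly the same.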

There is no real precedent for what is happening. In the Western world, the fertility rate has declined over a longer time and has reached very low levels only in some countries, notably those of southern and eastern Europe. Moreover, unlike those countries, China is still creating jobs at a high rate. Who will fill those jobs?

I addressed that question in a series of posts I wrote nine years ago. The abundance of jobs and empty housing will suck in immigrants from poorer countries, initially from Southeast Asia and then increasingly from South Asia and Africa. The African influx into Asia will be the big surprise of the 21st century, being especially noticeable in Malaysia and South China.


What can be done?

The first step toward solving a problem is to recognize that it exists. Most problems go unsolved because no one takes that first step. China is starting to move in that direction, but the word "starting" should be stressed. As Wang (2018) notes, there is a recurring tendency by the Chinese bureaucracy to downplay the demographic crisis. In some cases, the relevant data are not published:

Regrettably, such data for 2016 were no longer published in the most recent 2017 yearbook as the official publication mentioned that "In comparison with China Statistical Yearbook 2016, following revisions have been made in this new version in terms of the statistical contents and in editing: Of the chapter "Population", table named Age-specific Fertility Rate of Childbearing Women by Age of Mother and Birth Order is deleted." (China Statistical Yearbook 2017). Reasons were unknown for the deletion of this table, and it was unclear if the deletion would be temporary or permanent, or whether such deletion would continue in future years beyond 2016. (Wang 2018).

With determined effort, very low fertility can be reversed. Israel has gone the farthest in this direction, having achieved replacement fertility even among secular Jews. One must provide not only financial incentives but also cultural and ideological ones. Marriage and family formation must be seen positively. In this, unfortunately, the West is not an example to follow.

Wang (2018) suggests four measures to deal with the demographic crisis:

- Make demographic data fully accessible for debate and discussion.

- Eliminate controls on couples who want to have more than two children.

- Lower the minimum age for marriage, which is currently 22 for men and 20 for women.

- Allow out-of-wedlock births.

The first three measures seem sensible, although the second one would probably have little effect. Such controls are already absent in Taiwan, Singapore, and Malaysia. The last measure is terrible. All things being equal, a single mother will have fewer children than a married mother. Yes, a single mother can eventually marry or remarry, but such marriages tend to be less stable and thus less conducive to future childbearing, largely because the husband is less willing to support children that are not his own.

Of course, not all things are equal. Single mothers tend to be more present-oriented and, thus, more indifferent to the long-term costs of their actions, like those of having children. But what is to be gained by encouraging such people to reproduce? The experience of the West has been that single mothers, and their children, end up being a net cost to society.

In the West, the increase in single motherhood has coincided very closely with the decline in fertility, and both reflect the same underlying problem: people are less willing to commit to a long-term relationship and raise the children it produces. We increasingly live in a culture where the only valid entity is the individual. Everything else—family, community, nation—is illegitimate.


References

Frost, P. (2010a). China and interesting times ... Evo and Proud, February 25

Frost, P. (2010b). China and interesting times. Part II. Evo and Proud, March 4

Frost, P. (2010c). China and interesting times. Part III. Evo and Proud, March 11

Frost, P. (2010d). Has China come to the end of history? Evo and Proud, March 18

Kyu-min, C. and S. Su-ji. (2019). Fertility rate plummets to less than 1 child per woman. National Politics, February 28

Stanway, D. (2019). China lawmakers urge freeing up family planning as birth rates plunge. Reuters, March 12

Wang, M. (2018). For Whom the Bell Tolls: A Retrospective and Predictive Study of Fertility Rates in China (November 8, 2018). Available at SSRN: https://ssrn.com/abstract=3234861

Wikipedia (2019). Demographics of North Korea.

Sunday, March 10, 2019

IQ of biracial children and adults



First snow in Minnesota (c. 1895), Robert Koehler. Biracial children have IQ scores halfway between those of white children and black children, even when they are conceived by white single mothers and adopted into middle-class white families in their first year of life.



You may have heard about the Minnesota Transracial Adoption Study. It was a longitudinal study of black, biracial, and white children adopted into white middle-class Minnesotan families, as well as the biological children of the same families (Levin, 1994; Lynn, 1994; Scarr and Weinberg, 1976; Weinberg, Scarr, and Waldman, 1992). IQ was measured when the adopted children were on average 7 years old and the biological children on average 10 years old. They were tested again ten years later. Between the two tests, all four groups declined in mean IQ. On both tests, however, the differences among the four groups remained unchanged, particularly the 15-point gap between black and white adoptees. 

The biracial children remained halfway between the black and white adoptees. Could this be due to the parental environment being likewise half and half? Well, no. All of them were raised by white parents, and they were adopted at an early age: 19 months on average for the white adoptees, 9 months for the biracial adoptees, and 32 months for the black adoptees. The last figure is emphasized by Scarr and Weinberg (1976) as a reason for the IQ gap between the black and white adoptees. 

Fine, but what about the IQ gap between the biracial and white adoptees? Almost all of the biracial children were adopted at a young age and born to white single mothers who had completed high school. From conception to adulthood they developed in a "white" environment. If anything, the white adoptees should have encountered more developmental problems because they were adopted at an older age.

Could color prejudice be a reason? Perhaps the biracial children were unconsciously treated worse than the white children. By the same reasoning, they may have been treated better than the black children. We can test the second half of this hypothesis. Twelve of the biracial children were wrongly thought by their adoptive parents to have two black parents. Nonetheless, they scored on average at the same level as the biracial children correctly classified by their adoptive parents (Scarr and Weinberg 1976).


The Eyferth study

Another study found no difference in IQ between white and biracial children. This was a study of children fathered by American soldiers in Germany and then raised by German mothers (Eyferth 1961). It found no significant difference in IQ between children with white fathers and children with black fathers. Both groups had a mean IQ of about 97.

These findings were criticized by Rushton and Jensen (2005) on three grounds:

1. The children were still young when tested. One third were between 5 and 10 years old and two thirds between 10 and 13. Since IQ is strongly influenced by family environment before puberty, a much larger sample would be needed to find a significant difference between the two groups.

2. Between 20 and 25% of the “black” fathers were actually North African.

3. At the time of the study, the US Army screened out low IQ applicants with its preinduction Army General Classification Test. The rejection rate was about 30% for African Americans and 3% for European Americans. African American soldiers are thus a biased sample of the African American population.

Another factor is that the capacity for intelligence seems to be more malleable in children than in adults. We see this with the Minnesota Transracial Adoption Study. In the enriched learning environment of middle-class Minnesota families, all of the children showed impressive IQ scores at 7 years of age. By 17 years of age, however, this benefit had largely washed out:

                    Age 7   Age 17

Black children        97       89

Biracial children    109       99

White children       112      106

Does intelligence really decline with age because of wear and tear on the brain? Perhaps we’re programmed to be most intelligent in childhood. That’s when we have to familiarize ourselves with the world. The capacity for intelligence may then be gradually deactivated as we get older because it’s less necessary.

This deactivation may follow different trajectories in different human groups. In early Homo sapiens, it may have begun not long after puberty. As ancestral humans made the transition to farming, sedentary living, and increasingly complex societies, this learning capacity became more necessary in adulthood, with the result that natural selection favored those individuals who retained it at older ages. This gene-culture coevolution would have gone farther in some populations than in others.


The Fuerst et al. study

A recent study led by John Fuerst has confirmed the intermediate IQ of biracial individuals, this time in adults. The research team used the General Social Survey, which includes not only ethnic, sociological, and demographic data but also a measure of intelligence (WordSum):


The relationship between biracial status, color, and crystallized intelligence was examined in a nationally representative sample of adult Black and White Americans. First, it was found that self-identifying biracial individuals, who were found to be intermediate in color and in self-reported ancestry, had intermediate levels of crystallized intelligence relative to self-identifying White (mostly European ancestry) and Black (mostly sub-Saharan African ancestry) Americans. The results were transformed to an IQ scale: White (M = 100.00, N = 7569), primarily White-biracial (M = 96.07, N = 43), primarily Black-biracial (M = 94.14, N = 50), and Black (M = 89.81, N = 1381).

The same study also found a significant negative correlation among African Americans between facial color and WordSum scores. The correlation was low (r = -0.102), but it would be difficult to get a higher correlation because of the measures used. Self-reported skin color correlates imperfectly with actual skin color, which in turn correlates imperfectly with European admixture. WordSum likewise correlates imperfectly with IQ (r = 0.71). On a final note, the correlation between facial color and WordSum scores was not explained by region of residence, interviewer’s race, parental socioeconomic status, or individual educational attainment.
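The point about imperfect measures can be quantified with Spearman's classic correction for attenuation: an observed correlation between two imperfect proxies is roughly the construct-level correlation multiplied by the proxies' validity coefficients. A minimal sketch, using the r = 0.71 figure cited above for WordSum and a purely hypothetical validity of 0.71 for self-reported color as a proxy for admixture:

```python
r_observed = -0.102       # facial color vs. WordSum (Fuerst et al. 2019)
wordsum_validity = 0.71   # WordSum's correlation with full-scale IQ (cited above)
color_validity = 0.71     # HYPOTHETICAL validity of self-reported color
                          # as a proxy for actual admixture

# Spearman-style disattenuation: divide the observed correlation by the
# product of the proxies' validity coefficients.
r_disattenuated = r_observed / (wordsum_validity * color_validity)
# about -0.20: under these assumptions, the construct-level correlation
# could be roughly twice the observed value
```

The exact result depends entirely on the assumed validity of the color measure, which the study does not report; the sketch only shows why a low observed r is compatible with a substantially larger underlying association.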


References

Eyferth, K. (1961). Leistungen verschiedener Gruppen von Besatzungskindern im Hamburg-Wechsler Intelligenztest für Kinder (HAWIK). Archiv für die gesamte Psychologie 113: 222-241.

Fuerst, J.G.R., R. Lynn, and E.O.W. Kirkegaard. (2019). The Effect of Biracial Status and Color on Crystallized Intelligence in the U.S.-Born African-European American Population. Psych 1(1): 44-54. https://dx.doi.org/10.3390/Psychology1010004

Levin, M. (1994). Comment on the Minnesota transracial adoption study. Intelligence 19: 13-20.

Lynn, R. (1994). Some reinterpretations of the Minnesota Transracial Adoption Study. Intelligence 19: 21-27.

Rushton, J.P. and A.R. Jensen. (2005). Thirty years of research on race differences in cognitive ability. Psychology, Public Policy, and Law 11: 235-294.

Scarr, S., and Weinberg, R.A. (1976). IQ test performance of Black children adopted by White families. American Psychologist 31: 726-739.

Weinberg, R.A., Scarr, S., and Waldman, I.D. (1992). The Minnesota Transracial Adoption Study: A follow-up of IQ test performance at adolescence. Intelligence 16: 117-135.

Monday, February 25, 2019

Alzheimer's and African Americans



A village elder of Mogode, Cameroon (Wikicommons - W.E.A. van Beek). African Americans are more than twice as likely to develop Alzheimer's. They also more often have an allele that increases the risk of Alzheimer's in Western societies but not in sub-Saharan Africa. Why is this allele adaptive there but not here?



Alzheimer's disease (AD) is unusually common among African Americans. Demirovic et al. (2003) found it to be almost three times more frequent among African American men than among white non-Hispanic men (14.4% vs. 5.4%). Tang et al. (2001) found it to be twice as common among African American and Caribbean Hispanic individuals as among non-Hispanic whites. On the other hand, it is significantly less common among Yoruba in Nigeria than among age-matched African Americans (Hendrie et al. 2001).

This past year, new light has been shed on these differences. Weuve et al. (2018) analyzed data from ten thousand participants 65 years old and over (64% black, 36% white) who had been followed for up to 18 years. Compared to previous studies, this one had three times as many dementia assessments and dementia cases. It also had a wider range of data: tests of cognitive performance, specific diagnosis of Alzheimer's (as opposed to dementia in general), educational and socioeconomic data, and even genetic data—specifically whether the participant had the APOE e4 allele, a major risk factor for Alzheimer's.

The results confirmed previous findings ... with a few surprises.


Incidence

Alzheimer's was diagnosed in 19.9% of the African American participants, a proportion more than twice that of the Euro American participants (8.2%).


Cognitive performance and cognitive decline

Cognitive performance was lower in the African American participants. "The difference in global cognitive score, -0.83 standard units (95% confidence interval [CI], -0.88 to -0.78), was equivalent to the difference in scores between participants who were 12 years apart in age at baseline."

On the other hand, both groups had the same rate of cognitive decline with age. In fact, executive function deteriorated more slowly in African Americans. The authors suggest that the higher rate of dementia in elderly African Americans is due to their cognitive decline beginning at a lower level:

[…] on average, white individuals have "farther to fall" cognitively than black individuals before reaching the functional threshold of clinical dementia, so that even if both groups have the same rate of cognitive decline, blacks have poorer cognitive function and disproportionately develop dementia. (Weuve et al. 2018)


Interaction with education

Differences in educational attainment, i.e., years of education, explained about a third of the cognitive difference between the two groups of participants:

Educational attainment, as measured by years of education, appeared to mediate a substantial fraction but not the totality of the racial differences in baseline cognitive score and AD risk (Table 5). Under the hypothetical scenario in which education was "controlled" such that each black participant's educational level took on the level it would have been had the participant been white, all covariates being equal, black participants' baseline global cognitive scores were an average of 0.45 standard units lower than whites' scores (95% CI, -0.49 to -0.41), a difference smaller than without controlling years of education (-0.69; Table 5), and translating to about 35% of the total effect of race on cognitive performance mediated through years of education. (Weuve et al. 2018)

While educational attainment explains 35% of the cognitive difference between African Americans and Euro Americans, we should keep in mind that educational attainment itself is influenced by genetic factors. These genetic factors vary among African Americans, just as they vary between African Americans and other human populations.
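The 35% figure follows directly from the coefficients reported in the quote: the mediated share is the drop in the race coefficient after education is controlled, divided by the total effect.

```python
total_effect = -0.69    # race difference in baseline cognitive score, uncontrolled
direct_effect = -0.45   # race difference after equalizing years of education

# Share of the total effect mediated through years of education
mediated = (total_effect - direct_effect) / total_effect   # about 0.35
```

This is the standard "difference in coefficients" way of reading a mediation result; it assumes, as the authors note, that all other covariates are held equal.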


APOE e4 allele

This allele was more common in the African American participants. It contributed to their higher risk of Alzheimer's but not to their lower cognitive score.

Black participants were more likely than white participants to carry an APOE e4 allele (37% vs 26%; Table 1). In analyses restricted to participants with APOE data, racial differences in baseline scores or cognitive decline did not vary by e4 carriership (all P for interaction > 0.16). Furthermore, adjustment for e4 carriership did not materially change estimated racial differences in baseline performance or cognitive decline (eTable 3).

By contrast, the association between race and AD risk varied markedly by APOE e4 carriership (P for interaction = 0.05; Table 4). Among non-carriers, blacks' AD risk was 2.32 times that of whites' (95% CI, 1.50-3.58), but this association was comparatively negligible among e4 carriers (RR, 1.09; 95% CI, 0.60-1.97). (Weuve et al. 2018)


Discussion

This study offers two explanations: one for why African Americans have a higher incidence of Alzheimer's, and another for why they have a higher incidence of dementia in general. Two explanations are needed because Alzheimer's seems to be qualitatively different from other forms of dementia.

First, African Americans have a higher incidence of Alzheimer's because they have a higher frequency of the APOE e4 allele, a risk factor for Alzheimer's. They may also have other alleles, still unidentified, that similarly favor development of Alzheimer's. This would explain why, among participants without APOE e4, Alzheimer's was still more than twice as common in African Americans as in Euro Americans, whereas among participants with APOE e4 the two groups had virtually the same incidence.

Second, African Americans have a higher incidence of dementia in general because they have a lower cognitive reserve. When cognitive performance begins to deteriorate in old age, the ensuing decline starts from a lower level and reaches the threshold of dementia sooner. The rate of decline is nonetheless the same in both African Americans and Euro Americans. While this explanation could apply to most forms of dementia, it is hard to see how it applies to Alzheimer's. Euro Americans have a higher cognitive reserve, and yet the APOE e4 allele is just as likely to produce Alzheimer's in them as in African Americans.

Why does the APOE e4 allele exist? It must have some adaptive value, given that 37% of the African American participants and 26% of the Euro American participants carried it. African Americans also seem to have other alleles, not yet identified, that likewise increase the risk of Alzheimer's. Those alleles, too, must have some adaptive value.

This value seems to exist in sub-Saharan Africa but not in North America. When Hendrie et al. (2001) examined Yoruba living in Nigeria, they found no relationship between APOE e4 and Alzheimer’s or dementia in general:

In the Yoruba, we have found no significant association between the possession of the e4 allele and dementia or AD in either the heterozygous or homozygous states. As the frequencies of the 3 major APOE alleles are almost identical in the 2 populations, this variation in the strength of the association between e4 and AD may account for some of the differences in incidence rates between the populations, although it is not likely to explain all of it. It also raises the possibility that some other genetic or environmental factor affects the association of the e4 allele to AD and reduces incidence rates for dementia and AD in Yoruba. (Hendrie et al. 2001)

There has been speculation, notably by Greg Cochran, that Alzheimer’s is caused by apoptosis. Because of the blood-brain barrier, antibodies cannot enter the brain to fight infection, so neural tissue is more dependent on other means of defense, like apoptosis. Such a means of defense may be more important in sub-Saharan Africa because the environment carries a higher pathogen load.

If we pursue this hypothesis, APOE e4 and other alleles may enable neurons to self-destruct as a means to contain the spread of pathogens in the brain. In an environment with a lower pathogen load, like North America, this means of defense would be needed far less often, and its costs would come to outweigh its benefits. The result would be autoimmune disorders in which apoptosis is triggered in neural tissue for no good reason.


References

Chin, A.L., S. Negash, and R. Hamilton. (2011). Diversity and disparity in dementia: the impact of ethnoracial differences in Alzheimer disease. Alzheimer disease and associated disorders. 25(3):187-195.

Cochran, G. (2018). Alzheimers or did I already say that? West Hunter, July 14.

Demirovic, J., R. Prineas, D. Loewenstein, et al. (2003). Prevalence of dementia in three ethnic groups: the South Florida program on aging and health. Ann Epidemiol. 13:472-478.

Hendrie, H.C., A. Ogunniyi, K.S. Hall, et al. (2001). Incidence of dementia and Alzheimer disease in 2 communities: Yoruba residing in Ibadan, Nigeria, and African Americans residing in Indianapolis, Indiana. JAMA. 285:739-47.

Tang, M.X., P. Cross, H. Andrews, et al. (2001). Incidence of AD in African-Americans, Caribbean Hispanics, and Caucasians in northern Manhattan. Neurology 56:49-56.

Weuve, J., L.L. Barnes, C.F. Mendes de Leon, K. Rajan, T. Beck, N.T. Aggarwal, L.E. Hebert, D.A. Bennett, R.S. Wilson, and D.A. Evans. (2018). Cognitive Aging in Black and White Americans: Cognition, Cognitive Decline, and Incidence of Alzheimer Disease Dementia. Epidemiology 29(1): 151-159. 



Thursday, February 14, 2019

The Nurture of Nature



Fleet Street, watercolor by Ernest George (1839-1922). In England, middle-class families used to be so large that they overshot their niche and flooded the ranks of the lower class.



Until about ten years ago, it was widely believed that cultural evolution had taken over from genetic evolution in our species. When farming replaced hunting and gathering, something fundamentally changed in the relationship between us and our surroundings. We no longer had to change genetically to fit our environment. Instead, we could change our environment to make it fit us.

That view has been challenged by a research team led by anthropologist John Hawks. They found that genetic evolution actually sped up some 10,000 years ago, when hunting and gathering gave way to farming. In fact, it accelerated more than a hundred-fold. Why? Humans were now adapting not only to slow-changing natural environments but also to faster-changing cultural environments: urban living, belief systems, the State monopoly on violence. Far from slowing down, the pace of genetic change actually had to accelerate (Hawks et al. 2007).

These findings received a broader public hearing with the publication of The 10,000 Year Explosion: How Civilization Accelerated Human Evolution. More recently, they have been discussed in a review article by historian John Brooke and anthropologist Clark Spencer Larsen:


Are we essentially the same physical and biological beings as Ice Age hunter-gatherers or the early farming peoples of the warming early Holocene? How has the human body changed in response to nine or ten millennia of dramatic dietary change, a few centuries of public health interventions, and a few decades of toxic environmental exposures? In short, how has history shaped biology? 

[...] But very clearly human evolution did not stop with the rise of modern humanity in the Middle to Late Paleolithic. Climatic forces, dietary shifts, disease exposures, and perhaps the wider stresses and challenges of hierarchical, literate state societies appear to have been exerting selective pressure on human genetics. (Brooke and Larsen 2014)

In short, we have become participants in our evolution: we create more and more of our surroundings, and these surroundings influence the way we evolve. Culture is not simply a tool we use to control and direct our environment. It is a part of our environment, the most important part, and as such it now controls and directs us.

Brooke and Larsen nonetheless feel attached to older ways of evolutionary thinking, particularly the "essentialism" of pre-Darwinian biology. We see this when they assert that "the essential modeling of the genetic code ended sometime in the Paleolithic." Actually, there was no point in time when our ancestors became essentially "human"—whatever that means. A Paleolithic human 100,000 years ago would have had less in common with you or me than with someone living 100,000 years earlier or even a million years earlier. Human evolution has been accelerating: the changes of the past 10,000 years exceed those of the previous 100,000 years, which in turn exceed those of the previous million.


Clark’s model

Brooke and Larsen discuss Gregory Clark's work on English demography. Clark found that the English middle class expanded steadily from the twelfth century onward, its descendants not only growing in number but also replacing the lower classes through downward mobility. By the 1800s, its lineages accounted for most of the English population. Parallel to this demographic expansion, English society shifted toward "middle class" culture and behavior: thrift, pleasure deferment, increased future orientation, and unwillingness to use violence to settle personal disputes (Clark, 2007). 

Clark’s work is criticized by Brooke and Larsen on two grounds:

[... ] there is no biological evidence to support an argument for English industrial transformation via natural selection. More importantly, this was a process that—hypothetically—had been at work around the world since the launch of social stratification in the Late Neolithic and the subsequent rise of state societies.

How valid are these criticisms? Let me deal with each of them.


Is social stratification the only precondition of Clark’s model?

First, it is true that many societies around the world are socially stratified, but social stratification is only one of the preconditions of Clark’s model. There are two others:

1. Differences in natural increase between social classes, with higher natural increase being associated with higher social status.

2. Porous class boundaries. The demographic surplus of the middle and upper classes must be free to move down into and replace the lower classes.

These preconditions are not met in most socially stratified societies. Brooke and Larsen are simply wrong when they say: "The poor died with few or no children everywhere in the world, and across vast stretches of human history." In reality, there have been many societies where fewer children were born on average to upper-class families than to lower-class families. A notable example is the Roman Empire, particularly during its last few centuries: upper-class Romans widely practiced abortion and contraception (Hopkins 1965). A similar situation seems to have prevailed in the Ottoman Empire. By the end of the eighteenth century, Turks were declining demographically in relation to their subject peoples, perhaps because they tended to congregate in towns and were more vulnerable to the ravages of plague and other diseases (Jelavich and Jelavich, 1977, pp. 6-7).

Nor are class boundaries always porous. Social classes often become endogamous castes. This can happen when a social class specializes in "unclean" work, like butchery, preparation of corpses for burial, etc. This was the case with the Burakumin of Japan, the Paekchong of Korea, and the Cagots of France (Frost 2014). Because of their monopoly over a despised occupation, they were free from outside competition and thus had the resources to get married and have enough children to replace themselves. This was not the case with the English lower classes, who faced competition from “surplus” middle-class individuals between the twelfth and nineteenth centuries. Such downward mobility is impossible in caste societies, where “surplus” higher-caste individuals are expected to remain unmarried until they can find an appropriate social situation. 

A caste society thus tends to be evolutionarily stagnant. Lower castes in particular tend to preserve mental and behavioral predispositions that would otherwise be removed from the gene pool in a more fluid social environment.

Why did class boundaries remain porous in England? The reason was probably the greater individualism of English society, particularly its expanding middle class. Sons were helped by their parents, but beyond a certain point they were expected to shift for themselves. My mother's ancestors were merchants on Fleet Street in London. They were successful and had such large families that they overshot their niche. By the nineteenth century, some of them had fallen to the level of shipbuilding laborers, and it was as such that they came to Canada.


Is biological evidence lacking for Clark's model?

Brooke and Larsen are on firmer ground when they say that Clark's model is unsupported by biological evidence. There is certainly a lack of hard evidence, but the only possible hard evidence would be ancient DNA. If we could retrieve DNA from the English population between the twelfth and nineteenth centuries, would we see a shift toward alleles that support different mental and behavioral traits? That work has yet to be done.

Nonetheless, a research team led by Michael Woodley has examined ancient DNA from sites in Europe and parts of southwest and central Asia over a time frame extending from 4,560 to 1,210 years ago. During that time frame, alleles associated with high educational attainment gradually increased in frequency. The authors concluded: "This process likely continued until the Late Modern Era, where it has been noted that among Western populations living between the 15th and early 19th centuries, those with higher social status […] typically produced the most surviving offspring. These in turn tended toward downward social mobility due to intense competition, replacing the reproductively unsuccessful low-status stratum […] eventually leading to the Industrial Revolution in Europe" (Woodley et al. 2017).
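The measure behind such studies is a population-level polygenic score: in its simplest (unweighted) form, the average frequency, across a panel of scored SNPs, of the alleles associated with higher educational attainment. A minimal sketch of that calculation; the SNP IDs and frequencies below are invented for illustration, and real scores now usually weight each allele by its estimated effect size:

```python
# Unweighted population-level polygenic score: mean frequency of the
# education-associated allele across a panel of SNPs.
# All SNP IDs and frequencies here are hypothetical.

def polygenic_score(allele_freqs):
    """Average frequency of the trait-increasing allele across SNPs."""
    return sum(allele_freqs.values()) / len(allele_freqs)

# Hypothetical frequencies of the education-associated allele in one sample
sample_freqs = {
    "rs0000001": 0.42,
    "rs0000002": 0.31,
    "rs0000003": 0.55,
    "rs0000004": 0.28,
}

score = polygenic_score(sample_freqs)
print(f"polygenic score: {score:.3f}")  # 0.390
```

Comparing such scores across populations, or across time slices of ancient DNA, is what yields statements like "alleles associated with educational attainment increased in frequency."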

Again, work remains to be done, particularly on the genetic profile of the English population between the twelfth and nineteenth centuries, but the existing data do seem to validate Clark's model for European societies in general. Indeed, psychologist Heiner Rindermann presents evidence that mean cognitive ability steadily rose throughout Western Europe during late medieval and post-medieval times. Previously, most people failed to develop mentally beyond the stage of preoperational thinking. They could learn language and social norms but their ability to reason was hindered by various impediments like cognitive egocentrism, anthropomorphism, finalism, and animism (Rindermann 2018, p. 49). From the sixteenth century onward, more and more people reached the stage of operational thinking. They could better understand probability and cause and effect and could see things from the perspective of another person, whether real or hypothetical (Rindermann 2018, pp. 86-87).

As the “smart fraction” became more numerous, it may have reached a threshold where intellectuals were no longer isolated individuals but rather communities of people who could interact and exchange ideas. This was one of the hallmarks of the Enlightenment: intellectuals were sufficiently large in number to meet in clubs, “salons,” coffeehouses, and debating societies.



References

Brooke, J.L. and C.S. Larsen. (2014). The Nurture of Nature: Genetics, Epigenetics, and Environment in Human Biohistory. The American Historical Review 119(5): 1500-1513.

Clark, G. (2007). A Farewell to Alms. A Brief Economic History of the World. Princeton University Press: Princeton and Oxford.

Clark, G. (2009a). The indicted and the wealthy: surnames, reproductive success, genetic selection and social class in pre-industrial England.

Clark, G. (2009b). The domestication of man: The social implications of Darwin. ArtefaCTos 2: 64-80. 

Cochran, G. and H. Harpending. (2009). The 10,000 Year Explosion: How Civilization Accelerated Human Evolution. New York: Basic Books. 

Frost, P. (2014). Burakumin, Paekchong, and Cagots. ResearchGate

Hawks, J., E.T. Wang, G.M. Cochran, H.C. Harpending, and R.K. Moyzis. (2007). Recent acceleration of human adaptive evolution. Proceedings of the National Academy of Sciences (USA) 104: 20753-20758.

Hopkins, K. (1965). Contraception in the Roman Empire. Comparative Studies in Society and History 8(1): 124-151.

Jelavich, C. and B. Jelavich. (1977). The Establishment of the Balkan National States, 1804-1920. Seattle: University of Washington Press.

Rindermann, H. (2018). Cognitive Capitalism. Human Capital and the Wellbeing of Nations. Cambridge University Press.

Woodley, M.A., S. Younuskunju, B. Balan, and D. Piffer. (2017). Holocene selection for variants associated with general cognitive ability: comparing ancient and modern genomes. Twin Research and Human Genetics 20(4): 271-280.

Tuesday, February 5, 2019

Did cold seasonal climates select for cognitive ability?




Paleolithic artefacts (Wikicommons). The northern tier of Eurasia saw an explosion of creativity that pre-adapted its inhabitants for later developments.



The new journal Psych will be publishing a special follow-up issue on J. Philippe Rushton and Arthur Jensen's 2005 article: "Thirty Years of Research on Race Differences in Cognitive Ability." The following is the abstract of my contribution. The article will appear later.


The first industrial revolution. Did cold seasonal climates select for cognitive ability?

Peter Frost

Abstract: In their joint article, Rushton and Jensen argued that cognitive ability differs between human populations. But why are such differences expectable? Their answer: as modern humans spread out of Africa and into the northern latitudes of Eurasia, they entered colder and more seasonal climates that selected for the ability to plan ahead, since they had to store food, make clothes, and build shelters for the winter. 

This explanation has a long history going back to Arthur Schopenhauer. More recently, it has been supported by findings from Paleolithic humans and contemporary hunter-gatherers. Tools become more diverse and complex as effective temperature decreases, apparently because food has to be obtained during limited periods of time and over large areas. There is also more storage of food and fuel and greater use of untended traps and snares. Finally, shelters have to be sturdier, and clothing more cold-resistant. The resulting cognitive demands fall on both men and women. Indeed, because women have few opportunities to get food through gathering, they specialize in more cognitively demanding tasks like garment making, needlework, weaving, leatherworking, pottery, and use of kilns. The northern tier of Paleolithic Eurasia thus produced the "first industrial revolution"—an explosion of creativity that pre-adapted its inhabitants for later developments, i.e., agriculture, more complex technology and social organization, and an increasingly future-oriented culture. Over time these humans would spread south, replacing earlier populations that could less easily exploit the possibilities of the new cultural environment. 

As this cultural environment developed further, it selected for further increases in cognitive ability. In fact, mean intelligence seems to have risen during historic times at temperate latitudes in Europe and East Asia. There is thus no unified theory for the evolution of human intelligence. A key stage was adaptation to cold seasonal climates during the Paleolithic, but much happened later.



References

Rushton, J.P. and A.R. Jensen. (2005). Thirty years of research on race differences in cognitive ability. Psychology, Public Policy, and Law 11(2): 235-294.