Saturday, July 13, 2019

The Golden Age of Intelligence?

Busts of Greek philosophers (Wikicommons, Matt Neale). Did the Ancient Greeks have the highest mean IQ of any human population then and since?

Francis Galton argued that average intelligence had been much higher in ancient Greece than in modern England. He came to this conclusion after comparing the proportion of eminent men in Athens of the fifth century BC with the proportion of eminent men in the England of his day:

It follows from all this, that the average ability of the Athenian race is, on the lowest possible estimate, very nearly two grades higher than our own – that is, about as much as our race is above that of the African negro. (Galton 1869, p. 342)

This high ability was then presumably lost:

We know, and may guess something more, of the reason why this marvellously-gifted race declined. Social morality grew exceedingly lax; marriage became unfashionable, and was avoided; many of the more ambitious and accomplished women were avowed courtesans, and consequently infertile, and the mothers of the incoming population were of a heterogeneous class. (Galton 1869, pp. 342-343)

If we accept Galton's reasoning, Ancient Greeks had the highest mean IQ of any human population, something like 120 or 125. By comparison, Ashkenazi Jews have an estimated mean IQ of 110. But was Galton right? His calculations were criticized at the time, specifically for underestimating the number of Athenian citizens. He consequently revised his calculation downward to 1.5 grades higher, i.e., a mean IQ of 115 to 119 (Challis 2013, p. 56).

That's still impressive. But higher IQ doesn't necessarily imply higher innate intelligence. Conditions in ancient Greece may simply have been more favorable to intellectual discussion, which was respected as an activity in its own right. By comparison, intellectual discussion was much more circumscribed in the ancient Middle East, where it was confined to specific people who performed specific duties, most often writing and copying texts at the request of others.

Admittedly, this explanation does not exclude a genetic one. If the cultural environment favors intellectual development, it will tend to reward the most promising people with reproductive success. A scribe is thus praised in a Jewish wisdom book from the second century BC: "Many will praise his understanding; it will never be blotted out. His memory will not disappear, and his name will live through all generations. Nations will speak of his wisdom, and the congregation will proclaim his praise. If he lives long, he will leave a name greater than a thousand" (Sirach 39:1-11).

In the ancient world, 'leaving a great name' did not mean being written about by historians but rather having many illustrious children to carry on the family name long after death. Intellectual ability thus co-evolved with a supportive cultural base. Indeed, we humans have co-evolved much more with our cultural environment than with our natural environment (Hawks et al. 2007).

A new yardstick

Galton's conjecture can now be tested with two new research tools:

1. Ancient DNA. Large quantities of genetic data have been collected from ancient human remains and are now being made available to researchers. This year, the Reich lab at the Harvard Medical School released over 2,000 ancient genomes, including 30 from ancient Greece.

2. Polygenic cognitive score. Some gene loci are associated with differences in educational attainment. By examining the variants at these loci and by adding up the ones associated with higher educational attainment, we can calculate a polygenic score that correlates with mean IQ (r = 0.98).
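The scoring step itself is simple enough to sketch. The snippet below is a minimal illustration of an unweighted polygenic score, not the actual method of any study cited here; all genotype values are hypothetical.

```python
# Minimal sketch of an unweighted polygenic score.
# Each genotype is coded 0, 1, or 2 = copies of the allele associated
# with higher educational attainment at that SNP. All data are hypothetical.

def polygenic_score(genotypes):
    """Fraction of attainment-increasing alleles carried, out of the maximum possible."""
    return sum(genotypes) / (2 * len(genotypes))

# One individual genotyped at four SNPs (hypothetical values):
print(polygenic_score([2, 1, 0, 2]))  # carries 5 of 8 possible alleles -> 0.625
```

Published scores typically weight each allele by its GWAS effect size and average over all sampled individuals in a population, but the counting logic is the same.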

By examining 102 ancient genomes, a research team led by Michael Woodley of Menie was able to chart the evolution of cognitive ability in Europe and Central Asia. His team found that genetic variants for higher educational attainment gradually increased in frequency from 4,560 to 1,210 years ago (Woodley of Menie et al. 2017). Now, with newly released data from the Reich lab, he is leading a research effort to look specifically at ancient Greeks. The results are still preliminary, but they indicate a progressive increase in the polygenic score from Neolithic to Mycenaean times, followed by a decrease. When? We don't know because we lack post-Mycenaean data (Woodley of Menie et al. 2019).

More to come ...

This is a promising avenue for research. In particular, we need:

- A larger sample of modern Greek genomes. This should not be difficult.

- Samples from post-Mycenaean times to the end of Ottoman rule. Was Galton right in placing this cognitive decline during the ensuing Hellenistic and Roman periods? Or did it happen over a longer span of time?

The final published paper should explain at greater length the research team's use of a restricted polygenic score, i.e., a polygenic score based only on those genetic variants that seem causally related to high educational attainment, and not simply associated with it. This approach would be acceptable if a third party had identified these variants; otherwise, there is a risk of focusing on those variants that support Galton's hypothesis.

Another point: in the presentation of his new project, Woodley of Menie spoke repeatedly about population replacement at various times in the history of ancient Greece (Woodley of Menie et al. 2019). Yet the current thinking is that immigration was historically unimportant in Greece. Present-day Greeks are largely descended from the Mycenaeans, with some later introgression by Slavic tribes and other peoples (Gibbons 2017; Stamatoyannopoulos et al. 2017).

This research is especially exciting because the Reich lab released ancient DNA data not only from ancient Greece but also from elsewhere. History may end up being seen in a new light. For instance:

- Rome probably went through a similar increase in mean intelligence, followed by decline. When did the decline begin? During the collapse of the fifth century? I suspect earlier, perhaps in the third century. The barbarian invasions were both a cause and an effect of the collapse of Roman civilization.

- The Enlightenment was due only in part to things like the invention of the printing press, the voyages of discovery, and the founding of universities. These were subsidiary causes that resulted from and supported a more fundamental change: a steady increase in the smart fraction of European societies—the proportion of people who enjoy reading, writing and, above all, thinking.


Angel, J.L. (1950). Population size and microevolution in Greece. Cold Spring Harbor Symposia on Quantitative Biology 15: 343-351. doi:10.1101/SQB.1950.015.01.031

Challis, D. (2013). The Archaeology of Race: The Eugenic Ideas of Francis Galton and Flinders Petrie. London: Bloomsbury.

Galton, F. (1869). Hereditary Genius: An Inquiry into Its Laws and Consequences. London: Macmillan.

Gibbons, A. (2017). The Greeks really do have near-mythical origins, ancient DNA reveals. Science, August 2.

Hawks, J., E.T. Wang, G.M. Cochran, H.C. Harpending, and R.K. Moyzis. (2007). Recent acceleration of human adaptive evolution. Proceedings of the National Academy of Sciences (USA) 104: 20753-20758.

Stamatoyannopoulos, G., A. Bose, A. Teodosiadis, F. Tsetsos, A. Plantinga, N. Psatha, N. Zogas, E. Yannaki, P. Zalloua, K.K. Kidd, B.L. Browning, J. Stamatoyannopoulos, P. Paschou, P. Drineas et al. (2017). Genetics of the Peloponnesean populations and the theory of extinction of the medieval Peloponnesean Greeks. European Journal of Human Genetics 25: 637-645.

Woodley of Menie, M.A., S. Younuskunju, B. Balan, and D. Piffer. (2017). Holocene selection for variants associated with general cognitive ability: Comparing ancient and modern genomes. Twin Research and Human Genetics 20: 271-280.

Woodley of Menie, M.A., J. Delhez, M. Peñaherrera-Aguirre, and E.O.W. Kirkegaard. (2019). Cognitive archeogenetics of ancient and modern Greeks. London Conference on Intelligence.

Saturday, July 6, 2019

Why did brain size decrease after the Ice Age?

Nubians (Wikipedia). After the last ice age, brain size decreased in Europeans and East Asians. In western Europeans, this trend continued until some time before 1800. No decrease is observable in a large series of crania from Nubia.

In my latest paper I argue that northern hunting peoples were the first to break free from the cognitive straitjacket of hunting and gathering. Because women at northern latitudes had few opportunities for food gathering, they took on new, more cognitively demanding tasks, like garment making, needlework, weaving, leatherworking, pottery, and kiln operation. This increase in task complexity, led by women, provided these peoples and their descendants with the mental toolkit for later developments: farming, more complex technology and social organization, and an increasingly future-oriented culture (Frost 2019).

That paper left out a key piece of evidence. As these northern hunting peoples expanded southward into the temperate zone, they must have had excess mental capacity, especially the women, who were now redirected toward the lesser cognitive demands of food gathering and, later, farming. Cognitive demand also decreased for men, who no longer had to store huge quantities of spatiotemporal information for tracking game and finding their way home. On the other hand, men put some of this excess mental capacity to new uses, by exploiting many of the technologies that women had pioneered.

So is there evidence of decreased cognitive demand after the last ice age? According to a study by Maciej Henneberg (1988), brain size steadily shrank from the Mesolithic to modern times, on the order of 9.9% for men and 17.4% for women. This is consistent with the reduction in cognitive demand being greater for women than for men.

Henneberg ignored the sex difference, preferring to attribute the decrease in brain size to a corresponding decrease in body size for both men and women. This explanation has been challenged by John Hawks, who reanalyzed Henneberg's data and showed that the decrease in body size explains only one-fifth to one-seventh of the decrease in brain size. He also showed that the declining ratio of brain size to body size did not affect all human populations. In fact, it can be securely demonstrated only for Europeans and Chinese. Indigenous southern Africans and Australians may have had similar declines, but the sample sizes are too small for certainty. No overall change is seen in the one case where we have a large cranial sample from a non-Eurasian population (Nubians):

A large series of crania from ancient Nubia covers the period from roughly 3400 years ago to 600 years ago [20, 21]. Samples show a slight trend toward decrease in the major length, breadth and height measurements from Iron Age (Meroitic, external cranial module 145.2) to Medieval (Christian, external cranial module 143.9) times, but the intermediate series of crania (X-Group, external cranial module 147.1) is somewhat larger in these dimensions than either of the other groups. In this context it would be misleading to speak of a reduction in cranial vault size in this region. (Hawks 2011)

A recent reversal

This trend reversed itself at some point in time, apparently before the 1800s. Jantz and Jantz (2016) and Jellinghaus et al. (2018) found an increase in brain size from at least 1800 in Germans and 1820 in white Americans. When I asked John Hawks, he attributed this reversal to improvements in nutrition and a reduction of childhood disease. That, too, was what I thought, initially.

But if that were so, the reversal would surely have been stronger in women than in men. If brain size had decreased twice as much in women, shouldn't the rebound have been twice as strong in women? Yet this is not what we see in the brain size of Americans born from 1820 to 1990: "Both sexes changed, but female change was less pronounced than male change" (Jantz and Jantz 2016). In Germans born between 1800 and 1950, no clear sex difference was observable in the magnitude of this change over time (Jellinghaus et al. 2018).

Both Jantz and Jantz (2016) and Jellinghaus et al. (2018) are skeptical that these changes could be explained by improvement in nutrition or reduction of childhood disease. Infant mortality is a good proxy for both, and it did not begin to decline until circa 1900. At the very least, the increase in brain size should have accelerated during the twentieth century, yet it didn't (Jellinghaus et al. 2018).


Our knowledge of this subject comes largely from Maciej Henneberg, who concluded that brain size had decreased in all human populations and that this decrease continued into modern times. Both conclusions have been disproven. The decrease did not affect all human populations, and it had already reversed by 1800 in northern Europeans, as shown by two recent studies on white American and German samples.

Perhaps the reason lies in changing patterns of natural selection. After the last ice age, northern hunting peoples had excess mental capacity, particularly the women. This excess capacity enabled them to create and exploit new and more complex social environments—farming, towns and cities, civilizations … It was still more than what was needed, however, and a long-term decline set in. Then, in early modern times, this decline reversed in western Europeans, and brain size once more began to increase. Why? Perhaps this is related to evidence, summarized in my last paper, that mean intelligence steadily rose in western European societies during late medieval and early modern times.

Hawks' study is the only comprehensive critique of Henneberg's work. Unfortunately, it has never appeared in a peer-reviewed journal. When asked why, he replied: "I did not feel it was necessary to pursue formal journal publication for this, because I did not think it fit well into the journals at the time." When asked why he had removed a post on that study from his weblog (it was put up in 2012 and taken down in 2017), he answered: "I used to have a section on my blog for research manuscripts that were in prep, but I decided to discontinue this as I became involved in more collaborative work."

Is there another reason? I can understand not publishing a post because other work is more pressing, but why delete an existing post? What made it less blogworthy by 2017?

The study in itself seems uncontroversial. Indeed, it leads to the amusing conclusion that European brains got smaller while Nubian brains remained unchanged. But talk about "smaller brains" can trigger some people, and John Hawks is already viewed with suspicion because of his work with Henry Harpending and Greg Cochran. Henry once told me—not long before his untimely death in 2016—about the mounting pressures he was facing to discontinue his research. Have similar pressures been brought to bear on John Hawks? One may wonder. The last three years have seen a remarkable escalation of deplatforming and outright violence in the name of "antiracism." When Steve Sailer (2019) charted the number of New York Times articles that mention the word "racism," he found that this number took off during the mid-decade, rising from 291 in 2011 to 2,353 in 2018. The mentions also changed qualitatively, becoming much more vociferous.

Today, John is a tenured professor, yet he is now much more reluctant to say what he thinks than when he was a graduate student. His example should be sobering. The pressure to be "correct" doesn't end when you get tenure.


Frost, P. (2019). The Original Industrial Revolution: Did Cold Winters Select for Cognitive Ability? Psych 1(1): 166-181.

Hawks, J. (2011). Selection for smaller brains in Holocene human evolution. arXiv:1102.5604 [q-bio.PE]

Henneberg, M. (1988). Decrease of human skull size in the Holocene. Human Biology 60: 395-405.

Jantz, R.L., and L.M. Jantz. (2016). The Remarkable Change in Euro-American Cranial Shape and Size. Human Biology 88(1): 56-64.

Jellinghaus, K., H. Katharina, C. Hachmann, A. Prescher, M. Bohnert, and R. Jantz. (2018). Cranial secular change from the nineteenth to the twentieth century in modern German individuals compared to modern Euro-American individuals. International Journal of Legal Medicine 132: 1477-1484.

Sailer, S. (2019). Graphing the Great Awokening. The Unz Review, May 28.

Thursday, June 27, 2019

The Original Industrial Revolution

Cro-Magnon woman (Wikicommons) – At northern latitudes, women had fewer opportunities for food gathering, so they were free to specialize in new and more cognitively demanding tasks, like garment making, needlework, weaving, leatherworking, pottery, and kiln operation.

I've published an article on the theory that cold Paleolithic winters selected for intelligence. This theory is often attributed to J. Philippe Rushton and Arthur Jensen but actually goes much further back. The article is open access (see link), and the abstract is provided below. Comments are welcome.

Rushton and Jensen argued that cognitive ability differs between human populations. But why are such differences expectable? Their answer: as modern humans spread out of Africa and into northern Eurasia, they entered colder and more seasonal climates that selected for the ability to plan ahead, in order to store food, make clothes, and build shelters for winter. This cold winter theory is supported by research on Paleolithic humans and recent hunter-gatherers. Tools become more diverse and complex as effective temperature decreases, apparently because food has to be obtained during limited periods and over large areas. There is also more storage of food and fuel and greater use of untended traps and snares. Finally, shelters have to be sturdier, and clothing more cold-resistant. The resulting cognitive demands are met primarily by women because the lack of opportunities for food gathering pushes them into more cognitively demanding tasks, like garment making, needlework, weaving, leatherworking, pottery, and kiln operation. The northern tier of Paleolithic Eurasia thus produced the "Original Industrial Revolution"—an explosion of creativity that preadapted its inhabitants for later developments, i.e., farming, more complex technology and social organization, and an increasingly future-oriented culture. Over time, these humans would spread south, replacing earlier populations that could less easily exploit the possibilities of the new cultural environment. As this environment developed further, it selected for further increases in cognitive ability. Indeed, mean intelligence seems to have risen during recorded history at temperate latitudes in Europe and East Asia. There is thus no unified theory for the evolution of human intelligence. A key stage was adaptation to cold winters during the Paleolithic, but much happened later.


Frost, P. (2019). The Original Industrial Revolution: Did Cold Winters Select for Cognitive Ability? Psych 1(1): 166-181.

Monday, April 1, 2019

They really are smart ... and other surprises

Rachela - Maurycy Gottlieb (1856-1879) (Wikicommons). Ashkenazi Jews have a higher incidence of genetic variants associated with high educational attainment.

Intelligence varies from one individual to the next, and most of this variance has genetic causes. But what, exactly, are these causes? Lots and lots of genes, it seems. To be precise, if we look at the genes that influence human intelligence, we find two things:

1. They are very numerous, numbering in the thousands.

2. In general, their variants differ slightly in their effects.

This shouldn't be surprising. Evolution proceeds by tinkering, i.e., by making little changes. Big changes tend to produce big side-effects, and most side-effects are deleterious. So the genetic capacity for intelligence differs among humans through small differences at thousands upon thousands of genes. Does it follow, then, that we cannot understand these differences by looking only at a few genes? Not necessarily. Each gene is like a weathervane. If you can get enough subjects from a human population, even a few genes will tell you the direction and strength of natural selection for intelligence. 

Davide Piffer began looking at these “weathervanes” six years ago. He gathered data from different human populations on ten SNPs (single nucleotide polymorphisms) whose genetic variants are associated with differences in intelligence, specifically differences in educational attainment. Then, for each population, he estimated its genetic capacity for intelligence by calculating a "polygenic score"—the number of genetic variants associated with higher educational attainment, out of a maximum of ten.

This score correlated with population IQ (r=0.90) and with PISA scores (r=0.84). It was highest in East Asians:

East Asians have the highest frequencies of alleles beneficial to educational attainment (39%) and consistently outperform other racial groups both within the US and around the world, in terms of educational variables such as completion of college degree or results on standardized tests of scholastic achievement. Europeans have slightly lower frequencies of educational attainment alleles (35.5%) and perform slightly worse in terms of educational attainment, compared to East Asians. On the other hand, Africans seem to be disadvantaged both with regards to their level of educational attainment in the US and around the world. Indeed, Africans have the lowest frequencies of alleles associated with educational attainment (16%). (Piffer 2013)

These results were considered preliminary. Thousands upon thousands of genes influence intelligence, and here we have only ten! Perhaps chance alone produced this geographic pattern. Over the next few years, as other researchers discovered more SNPs associated with educational attainment, Davide Piffer repeated his study with more of these weathervanes.

His latest study has just come out. It uses data on 2,411 SNPs, and the polygenic score correlates even higher with population IQ (r=0.98). The geographic pattern is the same, with East Asians scoring higher than Europeans, and with Africans scoring lower.
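The reported correlations are ordinary Pearson coefficients between per-population polygenic scores and population IQ estimates. A stdlib-only sketch of that calculation, with made-up numbers for illustration (these are not Piffer's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-population polygenic scores and IQ estimates:
scores = [0.16, 0.30, 0.355, 0.39]
iqs = [70, 99, 100, 105]
print(round(pearson_r(scores, iqs), 2))  # -> 0.97
```

With only four populations a high r is easy to get by chance; the published studies rest on many more populations and, in the 2019 study, on 2,411 SNPs.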

Yes, Jews really are smart

This time, however, the highest score was obtained for Ashkenazi Jews: 

This dataset included a sample of 145 Ashkenazi Jewish individuals. The IQ of Ashkenazi Jews has been estimated to be around 110 [34]. Remarkably, their EDU polygenic score was the highest in our sample, corresponding to a predicted score of about 108, mirroring preliminary results from a smaller (N = 53) sample (Dunkel et al., 2019) [34]. (Piffer 2019)

This finding vindicates the authors of a paper written more than a decade ago. Gregory Cochran, Jason Hardy, and Henry Harpending presented evidence that the mean IQ of Ashkenazi Jews exceeds not only that of non-Jewish Europeans but also that of other Jewish groups. The most striking piece of evidence is the high incidence among Ashkenazim of four genetic disorders: Tay-Sachs, Gaucher, Niemann-Pick, and mucolipidosis type IV (MLIV). All four affect the capacity to store sphingolipid compounds that promote the growth and branching of axons in the brain. These disorders are caused by alleles that are harmful in the homozygous state and beneficial in the much more common heterozygous state: the brain receives higher levels of sphingolipids without the adverse health effects.
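The balancing-selection argument can be made concrete with the textbook one-locus model of heterozygote advantage. If the disease homozygote has fitness 1 − s, the carrier 1, and the non-carrier 1 − t, the deleterious allele is not eliminated but settles at an equilibrium frequency of t/(s + t). The values of s and t below are purely illustrative, not estimates for Tay-Sachs or any of the other disorders:

```python
# One-locus model of heterozygote advantage (deterministic, infinite population).
# Genotype fitnesses: AA = 1 - s (disease), Aa = 1 (carrier), aa = 1 - t.
# Theory predicts the disease allele equilibrates at p* = t / (s + t).
# s and t are illustrative only, not estimates for any real disorder.

def next_freq(p, s, t):
    """Frequency of allele A after one generation of selection."""
    q = 1.0 - p
    w_bar = p * p * (1 - s) + 2 * p * q + q * q * (1 - t)  # mean fitness
    return (p * p * (1 - s) + p * q) / w_bar

s, t = 0.9, 0.03   # near-lethal homozygote, small carrier advantage
p = 0.001          # start rare
for _ in range(5000):
    p = next_freq(p, s, t)

print(round(p, 4), round(t / (s + t), 4))  # iteration converges to the predicted p*
```

The point of the model is that even a tiny heterozygote edge (t) keeps a severely harmful allele in the population at a stable, non-trivial frequency.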

Ironically, these facts are coming to light at a time when Ashkenazi Jews are disappearing through low fertility and high out-marriage. Meanwhile, and not coincidentally, they are disappearing from the ranks of top winners at the U.S. Math Olympiad, the Putnam Exam, the Computing Olympiad, and other academic competitions. This decline became noticeable in the 1980s and has accelerated since the turn of the millennium (Unz 2012; Frost 2018). Jews are still present in intellectual and cultural life, but this presence is losing its dynamism and becoming a mere legacy.

African American IQ is higher than predicted

The polygenic score seems to underpredict the IQ of African Americans:

Indeed, the IQ of African Americans appears to be higher than what is predicted by the PGS (Figure 2), which suggests this cannot be explained by European admixture alone, but it could be the result of enjoying better nutrition or education infrastructure compared to native Africans. Another explanation is heterosis ("hybrid vigor"), that is the increase in fitness observed in hybrid offspring thanks to the reduced expression of homozygous deleterious recessive alleles. (Piffer 2019)

I’d propose another possible explanation: higher intelligence in African Americans may be associated with a somewhat different basket of genetic variants. Some of these variants may come from our friends the Igbos, who seem to have followed their own evolutionary path toward higher intelligence (Frost 2015). Many notable African Americans are in fact of Igbo descent, including Forest Whitaker, Paul Robeson, and Blair Underwood (Wikipedia 2019).

Davide is skeptical about this explanation, pointing out that population IQ is in line with the polygenic score he calculated for sub-Saharan African groups (Esan, Gambians, Luhya, Mende, Yoruba). None of those groups, however, are Igbo, and it's really the Igbo who stand out among West Africans in measures of intellectual and educational attainment. If only for the sake of curiosity, we should find out their polygenic score. This score may underpredict their genetic capacity for intelligence, which to some degree would be boosted by genetic variants that exist only in sub-Saharan Africa, but it should still exceed what we see for other West Africans.


This latest study brings to 2,411 the number of SNPs that can inform us about the genetic capacity for intelligence in different human populations. This information was more dubious when only ten SNPs were available, and the geographic pattern could be put down to chance. That argument now seems weak. If chance is causing this pattern, why do we keep getting the same one?

Sure, we can wait until we get even more relevant SNPs, but the overall picture will probably remain the same. We will get finer geographic detail. In France, for example, we will probably understand why educational attainment is so much higher in Brittany (H/T Philippe Gouillou). There are probably several European regions and subregions where the genetic capacity for intelligence is on a par with what we see in Ashkenazi Jews and East Asians.

In sum, these findings deserve to be better known ... and more widely discussed.


Initially, I wrote that Davide Piffer used 127 SNPs. In fact, 127 is the number of SNPs found in the HGDP (low coverage) dataset. In the two datasets on which the main analysis was based (1000 Genomes and gnomAD), there were 2,411 SNPs.

Hiatus alert

I'll be unable to post for the near future, probably the next three months. 


Cochran, G., J. Hardy, and H. Harpending. (2006). Natural history of Ashkenazi intelligence, Journal of Biosocial Science 38: 659-693.   

Frost, P. (2018). The end of Jewish achievement? Evo and Proud, May 21

Frost, P. (2015). The Jews of West Africa, Evo and Proud, July 4

Piffer, D. (2019). Evidence for recent polygenic selection on educational attainment and intelligence inferred from GWAS hits: A replication of previous findings using recent data. Psych 1(1): 55-75.

Piffer, D. (2013). Factor analysis of population allele frequencies as a simple, novel method of detecting signals of recent polygenic selection: The example of educational attainment and IQ, Mankind Quarterly 54(2): 168-200 

Unz, R. (2012). The myth of American meritocracy. The American Conservative, November 28   

Wikipedia (2019). Igbo people.

Tuesday, March 26, 2019

Autumn in China

Pink Autumn, by Victor Wang (2017) (Wikipedia). China’s demographic crisis is much worse than what official statistics let on.

With its population ageing as a result of longer lifespans and a dwindling number of children, the world's most populous nation decided in 2016 to allow all couples to have a second child, relaxing a tough one-child policy in place since 1978.

But birth rates plummeted for the second consecutive year last year. Policymakers now fret about the impact a long-term decline in births will have on the economy and its strained health and social services. (Stanway 2019)

The above Reuters article appeared two weeks ago. Although China lifted its one-child policy in 2016, its total fertility rate is still declining and now stands at 1.6 children per woman—well below the replacement level of 2.1 children. Delegates to China's parliament are saying that "radical steps are needed."

Are things that bad? No … they're worse. China's fertility rate is much lower than the official figure of 1.6 and probably close to what we see in Taiwan (1.1), in Singapore's Chinese community (1.1), and in Malaysia's Chinese community (1.3). Moreover, it is continuing to decline and will soon fall below the threshold of one child per woman, if it has not already done so. Finally, this very low fertility has lasted long enough to exhaust all population momentum. The population will soon begin to shrink, most likely five years earlier than the officially projected date of 2030. 
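The arithmetic behind exhausted momentum is easy to sketch. Once the age structure no longer props up births, a constant total fertility rate (TFR) below replacement shrinks each generation of births by roughly TFR divided by the replacement rate (2.1, as above). A deliberately stylized model, ignoring mortality change and migration:

```python
# Stylized generational projection under constant sub-replacement fertility.
# Once momentum is spent, each generation of births is about (TFR / replacement)
# times the size of the previous one. No migration, constant mortality assumed.

REPLACEMENT = 2.1  # children per woman needed for a stationary population

def relative_births(tfr, generations):
    """Births after n generations, as a fraction of today's births."""
    return (tfr / REPLACEMENT) ** generations

# At a TFR of 1.1 (Taiwan-level), each generation of births is only about
# half the size of the one before it:
for g in range(4):
    print(g, round(relative_births(1.1, g), 2))
```

At that rate, births fall to roughly a quarter of today's level within two generations, which is why a fertility rate near one child per woman matters far more than the current headcount.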

The real figures

The official figure of 1.6 children per woman is consistent with estimates by respected international bodies: 1.62 (World Bank, 2016); 1.8 (Population Reference Bureau, 2016); and 1.6 (United Nations, 2011-2015) (Wang 2018). The total fertility rate can also be estimated from data that China collects every year, i.e., the sample surveys taken by the National Bureau of Statistics. Using this source, Mengqiao Wang came up with much lower figures:

[…] far below the 2.1 replacement level, national TFR fluctuated around 1.4 since 2003, before dropping to around 1.2 since 2010 and finally reaching an astoundingly low value of 1.05 in 2015. (Wang 2018)

This pattern of very low fertility is limited to Han Chinese, particularly those in the northeastern provinces of Liaoning, Heilongjiang, and Jilin. In those provinces, the total fertility rate has fallen to 0.75 children per woman, and death rates have already overtaken birth rates:

Population shrinkage was already a fact for the northeastern part of the country, and it remained a question of when but not whether that fact would spread to other areas of the nation (official estimate of population peak at 1.45 billion by 2030 but unofficial estimate of 1.41 billion as early as 2025). (Wang 2018)

Why are these figures so much lower than the official figures? The main reason given is that the one-child policy caused widespread underreporting of births: many Chinese were having second children but not reporting them to government statisticians for fear of being penalized. This is seen in the differences between the raw data of the sample surveys and the official estimates of the National Bureau of Statistics. 

The NBS estimates, however, seem to be based on the sample survey data … with an upward adjustment to take underreporting of births into account: "Ironically, the NBS mentioned that the annual total births announced were calculated and inferred from the same sample surveys this study analyzed" (Wang 2018). So we are trapped in a circular argument: the sample surveys must be missing many births because the NBS estimates show a higher birth rate. But the same NBS estimates have been adjusted upwards because so many births are supposedly being missed.

Moreover, if the one-child policy had caused so much underreporting of births, the number of reported births should have increased in 2016, when that policy was scrapped, and that increase should have persisted in subsequent years. That's not what happened. The number of births did rise in 2016 and then fell in 2017 and 2018. The rise was probably due to some parents deciding to have a second child because there were no longer any financial penalties. The "backlog" of potential childbearing has now been cleared, and the fertility rate has returned to its pre-2016 slump.

It should be emphasized that the decline in Chinese fertility is now being driven by the growing number of women who have no children at all. This childlessness cannot be blamed on the one-child policy, and it is questionable, in fact, whether that policy has had much impact on the fertility rate for the past two decades. Total fertility rates have declined to the same very low level among Chinese people in Taiwan, Singapore, and Malaysia—where there has never been a one-child policy.

How bad is it?

Pretty bad. The fertility rate dropped below the replacement level in 1992 and has been at very low levels (1.4 or lower) since 2003. There is very little population momentum left, and an absolute drop in numbers should begin in the mid-2020s. Meanwhile, the total fertility rate may decline even further. In 2018, it was 0.98 in South Korea and 0.75 in northeastern China (Kyu-min and Su-ji 2019; Wang 2018). In all of East Asia, this pattern has only one exception: North Korea, where the rate is 1.98 children per woman (Wikipedia 2019). This is what we call “a failed state.”

One can always say that China has well over a billion people and can afford to shed a few hundred million. The relevant figure, however, is not the total population but rather the number of women who can bear children. That figure is a lot smaller and will continue to shrink. Women of childbearing age are defined (generously) as being 15 to 49 years old. Their numbers peaked at 383 million in 2011 and have been falling each year by 4 to 6 million. If we look at the most fertile age group (20-29), their numbers are expected to fall from 107.7 million in 2016 to around 80 million in 2020 (Wang 2018). 
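As a rough back-of-envelope sketch (my illustration, not a calculation from the source): sub-replacement fertility compounds geometrically. If women average 1.4 children against a replacement rate of about 2.1, each generation of potential mothers is only about two thirds the size of the one before, ignoring mortality before reproductive age, migration, and the sex ratio:

```python
# Illustrative only: geometric shrinkage of successive generations
# under a constant sub-replacement total fertility rate (TFR).
REPLACEMENT_TFR = 2.1  # conventional approximation for low-mortality populations

def generation_sizes(initial_cohort, tfr, generations):
    """Return successive generation sizes when each generation is
    roughly tfr / REPLACEMENT_TFR times the size of its predecessor."""
    sizes = [initial_cohort]
    for _ in range(generations):
        sizes.append(sizes[-1] * tfr / REPLACEMENT_TFR)
    return sizes

# A cohort of 100 at a TFR of 1.4 shrinks to about 30 within three generations.
print([round(s, 1) for s in generation_sizes(100, 1.4, 3)])  # -> [100, 66.7, 44.4, 29.6]
```

Three generations at a TFR of 1.4 are enough to cut a cohort to under a third of its starting size, which is why the shrinking pool of women of childbearing age matters more than the headline population.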

There is no real precedent for what is happening. In the Western world, the fertility rate has declined over a longer time and has reached very low levels only in some countries, notably those of southern and eastern Europe. Moreover, unlike those countries, China is still creating jobs at a high rate. Who will fill those jobs?

I addressed that question in a series of posts I wrote nine years ago. The abundance of jobs and empty housing will suck in immigrants from poorer countries, initially from Southeast Asia and then increasingly from South Asia and Africa. The African influx into Asia will be the big surprise of the 21st century, being especially noticeable in Malaysia and South China.

What can be done?

The first step toward solving a problem is to recognize that it exists. Most problems go unsolved because no one takes that first step. China is starting to move in that direction, but the word "starting" should be stressed. As Wang (2018) notes, there is a recurring tendency by the Chinese bureaucracy to downplay the demographic crisis. In some cases, the relevant data are not published:

Regrettably, such data for 2016 were no longer published in the most recent 2017 yearbook as the official publication mentioned that "In comparison with China Statistical Yearbook 2016, following revisions have been made in this new version in terms of the statistical contents and in editing: Of the chapter "Population", table named Age-specific Fertility Rate of Childbearing Women by Age of Mother and Birth Order is deleted." (China Statistical Yearbook 2017). Reasons were unknown for the deletion of this table, and it was unclear if the deletion would be temporary or permanent, or whether such deletion would continue in future years beyond 2016. (Wang 2018).

With determined effort, very low fertility can be reversed. Israel has gone the farthest in this direction, having achieved replacement fertility even among secular Jews. One must provide not only financial incentives but also cultural and ideological ones. Marriage and family formation must be seen positively. In this, unfortunately, the West is not an example to follow.

Wang (2018) suggests four measures to deal with the demographic crisis:

- Make demographic data fully accessible for debate and discussion.

- Eliminate controls on couples who want to have more than two children.

- Lower the minimum age for marriage, which is currently 22 for men and 20 for women.

- Allow out-of-wedlock births.

The first three measures seem sensible, although the second one would probably have little effect. Such controls are already absent in Taiwan, Singapore, and Malaysia. The last measure is terrible. All things being equal, a single mother will have fewer children than a married mother. Yes, a single mother can eventually marry or remarry, but such marriages tend to be less stable and thus less conducive to future childbearing, largely because the husband is less willing to support children that are not his own.

Of course, not all things are equal. Single mothers tend to be more present-oriented and thus more indifferent to the long-term costs of their actions, such as the costs of having children. But what is to be gained by encouraging such people to reproduce? The experience of the West has been that single mothers, and their children, end up being a net cost to society.

In the West, the increase in single motherhood has coincided very closely with the decline in fertility, and both reflect the same underlying problem: people are less willing to commit to a long-term relationship and raise the children it produces. We increasingly live in a culture where the only valid entity is the individual. Everything else—family, community, nation—is illegitimate.


Frost, P. (2010a). China and interesting times ... Evo and Proud, February 25

Frost, P. (2010b). China and interesting times. Part II. Evo and Proud, March 4

Frost, P. (2010c). China and interesting times. Part III. Evo and Proud, March 11

Frost, P. (2010d). Has China come to the end of history? Evo and Proud, March 18

Kyu-min, C. and S. Su-ji. (2019). Fertility rate plummets to less than 1 child per woman. National Politics, February 28

Stanway, D. (2019). China lawmakers urge freeing up family planning as birth rates plunge. Reuters, March 12

Wang, M. (2018). For Whom the Bell Tolls: A Retrospective and Predictive Study of Fertility Rates in China (November 8, 2018). Available at SSRN:

Wikipedia (2019). Demographics of North Korea.

Sunday, March 10, 2019

IQ of biracial children and adults

First snow in Minnesota (c. 1895), Robert Koehler. Biracial children have IQ scores halfway between those of white children and black children, even when they are conceived by white single mothers and adopted into middle-class white families in their first year of life.

You may have heard about the Minnesota Transracial Adoption Study. It was a longitudinal study of black, biracial, and white children adopted into white middle-class Minnesotan families, as well as the biological children of the same families (Levin, 1994; Lynn, 1994; Scarr and Weinberg, 1976; Weinberg, Scarr, and Waldman, 1992). IQ was measured when the adopted children were on average 7 years old and the biological children on average 10 years old. They were tested again ten years later. Between the two tests, all four groups declined in mean IQ. On both tests, however, the differences among the four groups remained unchanged, particularly the 15-point gap between black and white adoptees. 

The biracial children remained halfway between the black and white adoptees. Could this be due to the parental environment being likewise half and half? Well, no. All of them were raised by white parents, and they were adopted at an early age: 19 months on average for the white adoptees, 9 months for the biracial adoptees, and 32 months for the black adoptees. The last figure is emphasized by Scarr and Weinberg (1976) as a reason for the IQ gap between the black and white adoptees. 

Fine, but what about the IQ gap between the biracial and white adoptees? Almost all of the biracial children were adopted at a young age and born to white single mothers who had completed high school. From conception to adulthood they developed in a "white" environment. If anything, the white adoptees should have encountered more developmental problems because they were adopted at an older age.

Could color prejudice be a reason? Perhaps the biracial children were unconsciously treated worse than the white children. By the same reasoning, they may have been treated better than the black children. We can test the second half of this hypothesis. Twelve of the biracial children were wrongly thought by their adoptive parents to have two black parents. Nonetheless, they scored on average at the same level as the biracial children correctly classified by their adoptive parents (Scarr and Weinberg 1976).

The Eyferth study

Another study found no difference in IQ between white and biracial children. This was a study of children fathered by American soldiers in Germany and then raised by German mothers (Eyferth 1961). It found no significant difference in IQ between children with white fathers and children with black fathers. Both groups had a mean IQ of about 97.

These findings were criticized by Rushton and Jensen (2005) on three grounds:

1. The children were still young when tested. One third were between 5 and 10 years old and two thirds between 10 and 13. Since IQ is strongly influenced by family environment before puberty, a much larger sample would be needed to find a significant difference between the two groups.

2. Between 20 and 25% of the “black” fathers were actually North African.

3. At the time of the study, the US Army screened out low IQ applicants with its preinduction Army General Classification Test. The rejection rate was about 30% for African Americans and 3% for European Americans. African American soldiers were thus a biased sample of the African American population.

Another factor is that the capacity for intelligence seems to be more malleable in children than in adults. We see this with the Minnesota Transracial Adoption Study. In the enriched learning environment of middle-class Minnesota families, all of the children showed impressive IQ scores at 7 years of age. By 17 years of age, however, this benefit had largely washed out:

                    Age 7   Age 17
Black children        97       89
Biracial children    109       99
White children       112      106

Does intelligence really decline with age because of wear and tear on the brain? Perhaps we’re programmed to be most intelligent in childhood. That’s when we have to familiarize ourselves with the world. The capacity for intelligence may then be gradually deactivated as we get older because it’s less necessary.

This deactivation may follow different trajectories in different human groups. In early Homo sapiens, it may have begun not long after puberty. As ancestral humans made the transition to farming, sedentary living, and increasingly complex societies, this learning capacity became more necessary in adulthood, with the result that natural selection favored those individuals who retained it at older ages. This gene-culture coevolution would have gone farther in some populations than in others.

The Fuerst et al. study

A recent study led by John Fuerst has confirmed the intermediate IQ of biracial individuals, this time in adults. The research team used the General Social Survey, which includes not only ethnic, sociological, and demographic data but also a measure of intelligence (WordSum):

The relationship between biracial status, color, and crystallized intelligence was examined in a nationally representative sample of adult Black and White Americans. First, it was found that self-identifying biracial individuals, who were found to be intermediate in color and in self-reported ancestry, had intermediate levels of crystallized intelligence relative to self-identifying White (mostly European ancestry) and Black (mostly sub-Saharan African ancestry) Americans. The results were transformed to an IQ scale: White (M = 100.00, N = 7569), primarily White-biracial (M = 96.07, N = 43), primarily Black-biracial (M = 94.14, N = 50), and Black (M = 89.81, N = 1381).

The same study also found a significant negative correlation among African Americans between facial color and WordSum scores. The correlation was low (r = -0.102), but it would be difficult to get a higher correlation because of the measures used. Self-reported skin color correlates imperfectly with actual skin color, which in turn correlates imperfectly with European admixture. WordSum likewise correlates imperfectly with IQ (r = 0.71). On a final note, the correlation between facial color and WordSum scores was not explained by region of residence, interviewer's race, parental socioeconomic status, or individual educational attainment.
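The point that the measures cap the observable correlation can be made concrete with the standard correction for attenuation (a back-of-envelope illustration on my part, not a figure from the study). Treating WordSum's correlation with IQ (0.71, from the text) as a validity coefficient, and leaving the unknown reliability of self-reported color uncorrected:

```python
# Illustrative only: standard correction for attenuation,
# r_true = r_observed / (validity of the proxy measure).
# The unreliability of self-reported color is unknown and left
# uncorrected, so this is still an underestimate.
r_observed = -0.102      # color x WordSum (Fuerst et al. 2019)
wordsum_validity = 0.71  # WordSum x IQ (from the text)

r_corrected = r_observed / wordsum_validity
print(round(r_corrected, 3))  # -> -0.144
```

Even this partial correction raises the implied color-IQ correlation by some 40%, and correcting for the color measures as well would raise it further.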


Eyferth, K. (1961). Leistungen verschiedener Gruppen von Besatzungskindern in Hamburg-Wechsler Intelligenztest für Kinder (HAWIK). Archiv für die gesamte Psychologie 113: 222-241.

Fuerst, J.G.R., R. Lynn, and E.O.W. Kirkegaard. (2019). The Effect of Biracial Status and Color on Crystallized Intelligence in the U.S.-Born African-European American Population. Psych 1(1): 44-54.

Levin, M. (1994). Comment on the Minnesota transracial adoption study. Intelligence 19: 13-20.

Lynn, R. (1994). Some reinterpretations of the Minnesota Transracial Adoption Study. Intelligence 19: 21-27.

Rushton, J.P. and A.R. Jensen. (2005). Thirty years of research on race differences in cognitive ability. Psychology, Public Policy, and Law 11: 235-294.

Scarr, S., and Weinberg, R.A. (1976). IQ test performance of Black children adopted by White families. American Psychologist 31: 726-739.

Weinberg, R.A., Scarr, S., and Waldman, I.D. (1992). The Minnesota Transracial Adoption Study: A follow-up of IQ test performance at adolescence. Intelligence 16: 117-135.

Monday, February 25, 2019

Alzheimer's and African Americans

A village elder of Mogode, Cameroon (Wikicommons - W.E.A. van Beek). African Americans are more than twice as likely to develop Alzheimer's. They also more often have an allele that increases the risk of Alzheimer's in Western societies but not in sub-Saharan Africa. Why is this allele adaptive there but not here?

Alzheimer's disease (AD) is unusually common among African Americans. Demirovic et al. (2003) found it to be almost three times more frequent among African American men than among white non-Hispanic men (14.4% vs. 5.4%). Tang et al. (2001) found it to be twice as common among African American and Caribbean Hispanic individuals as among white individuals. On the other hand, it is significantly less common among Yoruba in Nigeria than among age-matched African Americans (Hendrie et al. 2001).

This past year, new light has been shed on these differences. Weuve et al. (2018) analyzed data from ten thousand participants 65 years old and over (64% black, 36% white) who had been followed for up to 18 years. Compared to previous studies, this one had three times as many dementia assessments and dementia cases. It also had a wider range of data: tests of cognitive performance, specific diagnosis of Alzheimer's (as opposed to dementia in general), educational and socioeconomic data, and even genetic data—specifically whether the participant had the APOE e4 allele, a major risk factor for Alzheimer's.

The results confirmed previous findings ... with a few surprises.


Alzheimer's was diagnosed in 19.9% of the African American participants, a proportion more than twice that of the Euro American participants (8.2%).

Cognitive performance and cognitive decline

Cognitive performance was lower in the African American participants. "The difference in global cognitive score, -0.83 standard units (95% confidence interval [CI], -0.88 to -0.78), was equivalent to the difference in scores between participants who were 12 years apart in age at baseline."

On the other hand, both groups had the same rate of cognitive decline with age. In fact, executive function deteriorated more slowly in African Americans. The authors suggest that the higher rate of dementia in elderly African Americans is due to their cognitive decline beginning at a lower level:

[…] on average, white individuals have "farther to fall" cognitively than black individuals before reaching the functional threshold of clinical dementia, so that even if both groups have the same rate of cognitive decline, blacks have poorer cognitive function and disproportionately develop dementia. (Weuve et al. 2018)

Interaction with education

Differences in educational attainment, i.e., years of education, explained about a third of the cognitive difference between the two groups of participants:

Educational attainment, as measured by years of education, appeared to mediate a substantial fraction but not the totality of the racial differences in baseline cognitive score and AD risk (Table 5). Under the hypothetical scenario in which education was "controlled" such that each black participant's educational level took on the level it would have been had the participant been white, all covariates being equal, black participants' baseline global cognitive scores were an average of 0.45 standard units lower than whites' scores (95% CI, -0.49 to -0.41), a difference smaller than without controlling years of education (-0.69; Table 5), and translating to about 35% of the total effect of race on cognitive performance mediated through years of education. (Weuve et al. 2018)

While educational attainment explains 35% of the cognitive difference between African Americans and Euro Americans, we should keep in mind that educational attainment itself is influenced by genetic factors. These genetic factors vary among African Americans, just as they vary between African Americans and other human populations.
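The 35% figure follows from the two coefficients in the quoted passage; a quick arithmetic check, using the numbers as reported:

```python
# Arithmetic check of the mediation share reported by Weuve et al. (2018).
# Total race effect on baseline cognition: -0.69 SD; direct effect with
# years of education "controlled": -0.45 SD.
total_effect = -0.69
direct_effect = -0.45

# Share of the total effect mediated through years of education.
mediated_share = 1 - direct_effect / total_effect
print(round(mediated_share * 100))  # -> 35
```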

APOE e4 allele

This allele was more common in the African American participants. It contributed to their higher risk of Alzheimer's but not to their lower cognitive score.

Black participants were more likely than white participants to carry an APOE e4 allele (37% vs 26%; Table 1). In analyses restricted to participants with APOE data, racial differences in baseline scores or cognitive decline did not vary by e4 carriership (all Pinteraction > 0.16). Furthermore, adjustment for e4 carriership did not materially change estimated racial differences in baseline performance or cognitive decline (eTable 3).

By contrast, the association between race and AD risk varied markedly by APOE e4 carriership (Pinteraction = 0.05; Table 4). Among non-carriers, blacks' AD risk was 2.32 times that of whites' (95% CI, 1.50-3.58), but this association was comparably negligible among e4 carriers (RR, 1.09; 95% CI, 0.60-1.97). (Weuve et al. 2018)


This study offers two different explanations: why African Americans have a higher incidence of Alzheimer's and why they have a higher incidence of dementia in general. Two different explanations are needed because Alzheimer's seems to be qualitatively different from other forms of dementia.

First, African Americans have a higher incidence of Alzheimer’s because they have a higher incidence of the APOE e4 allele, a risk factor for Alzheimer's. They may also have other alleles, still unidentified, that similarly favor development of Alzheimer's. This would explain why, if we look at participants without APOE e4, Alzheimer's was still twice as common among African Americans as it was among Euro Americans. On the other hand, the two groups had virtually the same incidence of Alzheimer's if we look at participants with APOE e4.

Second, African Americans have a higher incidence of dementia in general because they have a lower cognitive reserve. When cognitive performance begins to deteriorate in old age, the ensuing decline starts from a lower level and reaches the threshold of dementia sooner. The rate of decline is nonetheless the same in both African Americans and Euro Americans. While this explanation could apply to most forms of dementia, it is hard to see how it applies to Alzheimer's. Euro Americans have a higher cognitive reserve, and yet the APOE e4 allele is just as likely to produce Alzheimer's in them as in African Americans.

Why does the APOE e4 allele exist? It must have some adaptive value, given its incidence of 37% in African Americans and 26% in Euro Americans. African Americans also seem to have other alleles, not yet identified, that likewise increase the risk of Alzheimer’s. Those alleles, too, must have some adaptive value.

This value seems to exist in sub-Saharan Africa but not in North America. When Hendrie et al. (2001) examined Yoruba living in Nigeria, they found no relationship between APOE e4 and Alzheimer’s or dementia in general:

In the Yoruba, we have found no significant association between the possession of the e4 allele and dementia or AD in either the heterozygous or homozygous states. As the frequencies of the 3 major APOE alleles are almost identical in the 2 populations, this variation in the strength of the association between e4 and AD may account for some of the differences in incidence rates between the populations, although it is not likely to explain all of it. It also raises the possibility that some other genetic or environmental factor affects the association of the e4 allele to AD and reduces incidence rates for dementia and AD in Yoruba. (Hendrie et al. 2001)

There has been speculation, notably by Greg Cochran, that Alzheimer’s is caused by apoptosis. Because of the blood-brain barrier, antibodies cannot enter the brain to fight infection, so neural tissue is more dependent on other means of defense, like apoptosis. Such a means of defense may be more important in sub-Saharan Africa because the environment carries a higher pathogen load.

If we pursue this hypothesis, APOE e4 and other alleles may enable neurons to self-destruct as a means to contain the spread of pathogens in the brain. In an environment with a lower pathogen load, like North America, this means of defense would serve little purpose. The result would be autoimmune-like disorders in which apoptosis is triggered in neural tissue for no good reason.


Chin, A.L., S. Negash, and R. Hamilton. (2011). Diversity and disparity in dementia: the impact of ethnoracial differences in Alzheimer disease. Alzheimer disease and associated disorders. 25(3):187-195.

Cochran, G. (2018). Alzheimers or did I already say that? West Hunter, July 14

Demirovic, J., R. Prineas, D. Loewenstein, et al. (2003). Prevalence of dementia in three ethnic groups: the South Florida program on aging and health. Ann Epidemiol. 13:472-478.

Hendrie, H.C., A. Ogunniyi, K.S. Hall, et al. (2001). Incidence of dementia and Alzheimer disease in 2 communities: Yoruba residing in Ibadan, Nigeria, and African Americans residing in Indianapolis, Indiana. JAMA. 285:739-47.

Tang, M.X., P. Cross, H. Andrews, et al. (2001). Incidence of AD in African-Americans, Caribbean Hispanics, and Caucasians in northern Manhattan. Neurology 56:49-56.

Weuve, J., L.L. Barnes, C.F. Mendes de Leon, K. Rajan, T. Beck, N.T. Aggarwal, L.E. Hebert, D.A. Bennett, R.S. Wilson, and D.A. Evans. (2018). Cognitive Aging in Black and White Americans: Cognition, Cognitive Decline, and Incidence of Alzheimer Disease Dementia. Epidemiology 29(1): 151-159.