Monday, August 29, 2022

How real is the Flynn effect?


Changes in mean IQ between 1909 and 2013 (Pietschnig and Voracek 2015, p. 285)



Because of the Flynn effect, average IQ has risen by 35 points over the past century. That’s more than the difference between the threshold of mental retardation and the current average. Does that seem plausible?


In a 1984 paper, James Flynn showed that the mean IQ of White Americans rose by 13.8 points between 1932 and 1978 (Flynn 1984). When that increase, now called the Flynn effect, was charted between 1909 and 2013, the gain in IQ was found to be no less than 35 points (Pietschnig and Voracek 2015).


The IQ gain did not happen at a uniform rate. It can be broken down into five stages:


·         a small increase between 1909 and 1919 (0.80 points/decade)

·         a surge during the 1920s and early 1930s (7.2 points/decade)

·         a slower pace of growth between 1935 and 1947 (2.1 points/decade)

·         a faster one between 1948 and 1976 (3.0 points/decade)

·         a slower pace thereafter (2.3 points/decade)
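As a back-of-envelope check, the five rates above can be accumulated over their intervals. The interval boundaries below are my own approximations of the stages (the list leaves small gaps, e.g., between 1919 and 1920), so the total is only a rough estimate:

```python
# Accumulate the per-decade gains over the five stages listed above.
# Interval endpoints are approximate assumptions, chosen to be contiguous.
stages = [
    (1909, 1919, 0.80),  # small early increase
    (1919, 1935, 7.20),  # surge in the 1920s and early 1930s
    (1935, 1947, 2.10),  # slower growth
    (1947, 1977, 3.00),  # faster postwar pace
    (1977, 2013, 2.30),  # slower pace thereafter
]

total = sum((end - start) * rate / 10 for start, end, rate in stages)
print(f"Cumulative gain: {total:.1f} IQ points")
```

The rough total lands near 32 points; the published figure of about 35 reflects the meta-analysis's exact period boundaries rather than these approximations.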


The Flynn effect began in the core of the Western world and is now ending there. In fact, it has ended altogether in Norway and Sweden and has begun to reverse itself in Denmark and Finland (Pietschnig and Voracek 2015, pp. 283, 288-289).


Was it a real increase?


Average IQ has thus risen by 35 points over the past century. That’s more than the difference between the threshold of mental retardation and the current average. Does that seem plausible?


My mother went to high school during the 1930s, and I went during the 1970s. So my generation should be 13.8 points smarter than hers. That’s a big difference, and it should have been obvious to someone like me, who knew people from both generations.


It wasn’t obvious. My mother had a small library of books that she often consulted, mostly religious literature and works like Welcome Wilderness and Little Dorrit. Not all of her generation were obsessive readers, but many were. And the books they read weren’t light reading. Fiction typically had complex plots with subplots running alongside each other, and religious books were a maze of Biblical references that would seem obscure unless you knew the Bible, usually the King James Version. If you could handle that, you could handle string theory.


The Flynn effect also implies that post-millennials are 10 points smarter than my generation. Again, that’s not my impression. Books and movies now have simpler plots and use a smaller vocabulary—a key component of verbal intelligence. According to the General Social Survey, vocabulary test scores fell by 7.2% between the mid-1970s and the 2010s among non-Hispanic White Americans. The decline affected all levels of educational attainment, so it wasn’t just a matter of dumber people now going to college (Frost 2019; Twenge et al. 2019). The same period also saw an increase in reaction time: since the 1970s, successive birth cohorts have required more time, on average, to process the same information (Madison 2014; Madison et al. 2016). 


Finally, there is the genetic evidence, specifically alleles associated with high educational attainment. In Iceland, those alleles have become steadily fewer in cohorts born since 1910 (Kong et al. 2017). The same trend has been observed between the 1931 and 1953 birth cohorts of European Americans (Beauchamp 2016). According to the Icelandic study, the downward trend is happening partly because more intelligent Icelanders are staying in school longer and postponing reproduction. But it is also happening among those who do not pursue higher education. Modern culture seems to be telling people that children are costly and bothersome, and that message is most convincing to people who like to plan ahead.


Some writers have argued that the genetic decline in intellectual potential has been more than offset by improvements to our learning environment, particularly better and longer education. This improved environment is helping us do more with our intellectual potential. But is there real-world evidence that we are, on average, becoming smarter? Robert Howard (1999, 2001, 2005) cites four lines of evidence:


·        The prevalence of mild mental retardation has fallen in the US population and elsewhere.

·        Chess players are reaching top performance at earlier ages.

·        More journal articles and patents are coming out each year.

·        According to high school teachers who have taught for over 20 years, “most reported perceiving that average general intelligence, ability to do school work, and literacy skills of school children had not risen since 1979 but most believed that children's practical ability had increased” (Howard 2001).


The above evidence is debatable, as Howard himself acknowledges. Fewer children are being diagnosed as mentally retarded because that term has become stigmatized. Prenatal screening has also had an impact. As for chess, it’s a niche activity that tells us little about the general population. More journal articles are indeed being published each year, but the reason has more to do with pressure to “publish or perish.” Finally, teachers are not objective observers: they are part of a system that rewards certain views and penalizes others. And if they reject that system, they probably won’t stick around for more than twenty years.


A last word


I suspect we’re getting better at some cognitive tasks, particularly the ones we learn at school—if only because we’re spending more of our lifetime in the classroom. One of those tasks is sitting down at a desk and taking a test. We’re better not only at that specific task but also at the broader one of thinking in terms of questions and answers. Previously, we just learned the rules and imitated those who knew better than us.


Test-taking certainly made an impression on my mental development. Long after my undergrad studies I would have nightmares of sitting alone in an immense exam hall and not knowing the answer to an insoluble question.




Beauchamp, J.P. (2016). Genetic evidence for natural selection in humans in the contemporary United States. Proceedings of the National Academy of Sciences. 113(28): 7774-7779.  


Flynn, J.R. (1984). The mean IQ of Americans: Massive gains 1932–1978. Psychological Bulletin 95(1):29–51.   


Frost, P. (2019). Why is vocabulary shrinking? Evo and Proud, September 11.


Frost, P. (2020). From here it’s all downhill. Evo and Proud, March 16.


Howard, R. W. (1999). Preliminary real-world evidence that average human intelligence really is rising. Intelligence 27: 235–250.  


Howard, R. W. (2001). Searching the real world for signs of rising population intelligence. Personality and Individual Differences 30: 1039–1058.


Howard, R. W. (2005). Objective evidence of rising population ability: A detailed examination of longitudinal chess data. Personality and Individual Differences, 38(2), 347–363.


Kong, A., M.L. Frigge, G. Thorleifsson, H. Stefansson, A.I. Young, F. Zink, G.A. Jonsdottir, A. Okbay, P. Sulem, G. Masson, D.F. Gudbjartsson, A. Helgason, G. Bjornsdottir, U. Thorsteinsdottir, and K. Stefansson. (2017). Selection against variants in the genome associated with educational attainment. Proceedings of the National Academy of Sciences 114(5): E727-E732.


Madison, G. (2014). Increasing simple reaction times demonstrate decreasing genetic intelligence in Scotland and Sweden. London Conference on Intelligence (#LCI14) conference proceedings, Psychological Comments, April 25.


Madison, G., M.A. Woodley of Menie, and J. Sänger. (2016). Secular Slowing of Auditory Simple Reaction Time in Sweden (1959-1985). Frontiers in Human Neuroscience, August 18.


Pietschnig, J., and M. Voracek. (2015). One Century of Global IQ Gains: A Formal Meta-Analysis of the Flynn Effect (1909-2013). Perspectives on Psychological Science 10(3): 282-306.


Twenge, J.M., W.K. Campbell, and R.A. Sherman. (2019). Declines in vocabulary among American adults within levels of educational attainment, 1974-2016. Intelligence 76: 101377.

Monday, August 15, 2022

Comparing an incomparable?


Stigmata Siciliana (1964), David McLure (Wikicommons)



What is the mean IQ of sub-Saharan Africans? There’s no clear answer. Current estimates come from an early stage of the Flynn effect and are also distorted by qualitative differences in cognition. Furthermore, mean IQ differs among African groups.




At present, there is little consensus on the mean IQ of sub-Saharan Africans. Estimates have ranged from a low of 66 to a high of 82 (Lynn 2010; Wicherts et al. 2010). Rindermann (2013) put forward a "best guess" of 75, which is inexplicably much lower than the estimated African American mean of 85. Yes, African Americans are about 20% European by ancestry, but that degree of admixture would not cause a 10-point difference. Malnutrition? That might depress IQ scores in some African countries but not most.


Noah Carl (2022) has reopened the debate by inferring mean IQ from harmonized test scores and GDP per capita. Sub-Saharan Africa looks somewhat better on the first measure and somewhat worse on the second. Both measures correlate roughly with mean IQ, but the correlation isn’t strong enough to tell us whether the mean is 62, 75, or 82. Moreover, the first measure suffers from the same problem that plagues IQ tests: Africa is just starting to experience the secular increase in mean IQ that the West experienced during the 20th century, i.e., the Flynn effect. By how much should we increase the estimate of mean African IQ to adjust for Africa being at an earlier stage of the Flynn effect?


As for the second measure, GDP per capita, the ability to create wealth is determined not only by cognitive ability but also by other mental traits: future time orientation (also known as time preference), willingness to follow rules and enforce them, feelings of guilt over breaking rules, reluctance to use violence to settle disputes, tendency toward individualism rather than nepotism and familialism, and so on.


In a reply to Carl’s article, Emil Kirkegaard (2022) infers mean IQ from the Social Progress Index. But that measure is no less problematic than GDP per capita. Social progress is driven by a basket of mental qualities, of which cognitive ability is only one. Emil himself makes that point:


One cannot just impute IQs reliably from non-IQ data in order to get some kind of unbiased estimates of a region's IQ because the regions themselves may under- or overperform on international rankings for whatever reason, [including] legacy of or current communism, nonWEIRDness, low individualism, or any other difference you can imagine.


Emil concludes: “There’s no avoiding having to collect more African IQ data.”


More data would be nice, but no amount of data will provide us with a mean African IQ that can be usefully compared with the mean IQs of other populations. There are several reasons:


·        Again, estimates of African IQ come from an early stage of the Flynn effect. They are not comparable with estimates of IQ that come from a later stage in other populations.

·        The genetic architecture of cognition is not the same. Sub-Saharan Africans seem to have alleles for cognitive ability that do not exist in other populations, yet alleles of this sort have so far been identified only in people of European descent.

·       Recent cognitive evolution, particularly in societies near the Niger, has created differences in mean cognitive ability among African groups. It is no more meaningful to talk about a single mean African IQ than it is to talk about a single mean European IQ.


Differences in the stage of the Flynn effect


IQ data from Western societies are not comparable with IQ data from African societies. The latter are just beginning to experience the rise in mean IQ that took place earlier in the West, specifically the increase of 13.8 points between 1932 and 1978 (Flynn 1984). The Flynn effect seems to be not so much an increase in cognitive ability as an increase in familiarity with the “test paradigm” at school and, more broadly, in society. Flynn (2013) situates the cause in the modernist paradigm: “We freed ourselves from fixation on the concrete and entered a world in which the mass of people began to use logic on abstractions and universalize their moral principles.”


Keep in mind that competitive exams began to appear in the West only in the late 19th century, first for entry into the civil service and then more generally for the educational system (Wikipedia 2022). Previously, people entered the civil service through patronage appointments, and education took the form of apprenticeship and imitation of role models. In those days, people were less inclined to formulate questions and look for the answers. The answers were already known, and you had to learn them. In fact, there was a stigma attached to asking too many questions, especially in rapid-fire succession.


Differences in the genetic architecture of cognition


As a means to estimate cognitive ability, the IQ test is being superseded by the educational polygenic score. This measure is based on SNPs that have been shown to be associated with educational attainment. Your polygenic score is higher to the extent that the alleles at those SNPs are associated with higher educational attainment. It is thus a measure of innate cognitive ability. At present, we have identified 1,271 SNPs that are associated with educational attainment and which, together, explain 11-13% of the variance in educational attainment among individuals (Lee et al. 2018). The educational polygenic score has shown good reliability in predicting the IQ of individuals and even better reliability in predicting the mean IQ of populations.
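The mechanics of such a score can be sketched as a weighted sum over SNPs. The SNP names and effect sizes below are made-up placeholders for illustration, not values from Lee et al. (2018):

```python
# Minimal sketch of a polygenic score: for each scored SNP, multiply the
# person's count of effect alleles (0, 1, or 2) by the effect size
# estimated in the GWAS, then sum. Names and weights are hypothetical.
gwas_weights = {"rsA": 0.020, "rsB": -0.013, "rsC": 0.008}

def polygenic_score(allele_counts: dict) -> float:
    """Weighted sum of effect-allele counts across scored SNPs."""
    return sum(gwas_weights[snp] * count for snp, count in allele_counts.items())

person = {"rsA": 2, "rsB": 0, "rsC": 1}  # effect-allele counts from genotyping
print(polygenic_score(person))  # 2*0.020 + 0*(-0.013) + 1*0.008 = 0.048
```

In practice the weights come from GWAS summary statistics and scores are standardized within a reference population, which is why a score trained in one population can lose accuracy in another as allele frequencies and linkage patterns diverge.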


Again, we have identified alleles associated with educational attainment only in people of European descent. For this reason, the educational polygenic score is five times worse at predicting the cognitive ability of African Americans (Lasker et al. 2019). The loss of predictive power seems greatest in the domain of language ability, according to two studies:


·        Guo et al. (2019, p. 27) found that the educational polygenic score is ten to eighteen times worse at predicting the verbal ability of African Americans, in comparison to White, Asian, and Hispanic White Americans. They attributed this difference to the smaller size of the African American sample, to gene-environment interactions, and to “significantly less than full coverage of African genetic variants related to cognitive ability.”

·        With a sample of school-age African Americans, Rabinowitz et al. (2019) found that the educational polygenic score fails to predict performance on a standardized reading test but does predict pursuit of postsecondary education, getting a criminal record (only among boys), and performance on a standardized math test (only for one of the three cohorts).


When modern humans began to spread out of Africa some 60,000 years ago, those left behind began to pursue their own trajectory of cognitive evolution. The evolutionary change seems to have been greatest in the domain of language, i.e., the ability to express oneself in speech and writing. Polygenic scores cannot predict innate reading ability in people of African descent because too many of the relevant alleles are exclusive to the African gene pool and remain unidentified. Other relevant alleles may simply be more important or less important in other gene pools.


Although the educational polygenic score is based on alleles identified in Europeans, it can still be used for rough predictions of cognitive ability among people of African descent. Lasker et al. (2019, pp. 444-445) were able to increase its predictive power for African Americans by almost a factor of three, i.e., an increase from 20% to 54% of its predictive power for European Americans. They achieved this improvement by using alleles from a much smaller subset of SNPs that are less sensitive to decay of linkage disequilibrium.


Differences among African groups in the trajectory of cognitive evolution


Within the larger African trajectory of cognitive evolution, various African populations have pursued their own sub-trajectories. This has been especially true for populations in West Africa over the past millennium and a half. Their educational polygenic scores vary as you go from west to east, being lowest among the Mende (Sierra Leone) and progressively higher among Gambians, the Esan (Nigeria), and the Yoruba (Nigeria). The Yoruba have almost the same educational polygenic score as that of African Americans, who nonetheless are about 20% admixed with Europeans (Piffer 2021, see Figure 7).


Before European contact, West African societies were more complex in the north and the east, i.e., in the Sahel and the Nigerian forest. Those areas saw the creation of towns, the formation of states, and an increasing use of metallurgy and luxury goods from the fourth century onward. The increase in social complexity seems to have been driven by the development of trade along the Niger, which served as the main trading route between the coast and the interior (Frost 2022).


In West Africa, cognitive evolution seems to have gone the farthest among the Igbo of the Niger delta. We have no educational polygenic data on them, but their record of academic achievement in Nigeria, the UK, and elsewhere indicates an unusually high level of cognitive ability (Chisala 2015).




We should get more data, while recognizing the limits of what the data may tell us. IQ tests will always be problematic, and future research should focus on educational polygenic scores. In particular, we need to identify relevant alleles in non-European populations. Some of those alleles may be population-specific, and others may be universal but more important in some populations than in others. Finally, Africa is not a monolith. Different African populations have pursued different trajectories of cognitive evolution.





Carl, N. (2022). How useful are national IQs? Noah’s Newsletter, July 13.  


Chisala, C. (2015). The IQ gap is no longer a black and white issue. The Unz Review, June 25.   


Flynn, J.R. (1984). The mean IQ of Americans: Massive gains 1932–1978. Psychological Bulletin 95(1):29–51.  


Flynn, J.R. (2013). The “Flynn Effect” and Flynn’s paradox. Intelligence 41: 851-857.   


Frost, P. (2021). Polygenic scores and Black Americans. Evo and Proud, April 27.   


Frost, P. (2022). Recent cognitive evolution in West Africa: the Niger’s role. Evo and Proud, April 30.  


Guo, G., Lin, M.J., and K.M. Harris. (2019). Socioeconomic and Genomic Roots of Verbal Ability. bioRxiv, 544411.  


Kirkegaard, E.O.W. (2022). African IQs without African IQs: it’s complicated. Just Emil Kirkegaard Things. August 7.  


Lasker, J., B.J. Pesta, J.G.R. Fuerst, and E.O.W. Kirkegaard. (2019). Global ancestry and cognitive ability. Psych 1(1).  


Lee, J. J., Wedow, R., Okbay, A., Kong, E., Maghzian, O., Zacher, et al. (2018). Gene discovery and polygenic prediction from a genome-wide association study of educational attainment in 1.1 million individuals. Nature Genetics 50(8): 1112-1121.


Lynn, R. (2010). The average IQ of sub-Saharan Africans assessed by the Progressive Matrices: A reply to Wicherts, Dolan, Carlson & van der Maas. Learning and Individual Differences 20(3): 152-154.   


Piffer, D. (2021). Divergent selection on height and cognitive ability: evidence from Fst and polygenic scores. OpenPsych.     


Rabinowitz, J.A., S.I.C. Kuo, W. Felder, R.J. Musci, A. Bettencourt, K. Benke, ... and A. Kouzis. (2019). Associations between an educational attainment polygenic score with educational attainment in an African American sample. Genes, Brain and Behavior, e12558.   


Rindermann, H. (2013). African cognitive ability: Research, results, divergences and recommendations. Personality and Individual Differences 55: 229-233.   


Wicherts, J.M., C.V. Dolan, and H.L.J. van der Maas. (2010). A systematic literature review of the average IQ of sub-Saharan Africans. Intelligence 38: 1-20.   


Wikipedia. (2022). Imperial examination – Influence - West.  

Monday, August 8, 2022

Vampirism and bloodlust


Ishbosheth is slain (Wikicommons – Maciejowski Bible)


Before the State monopoly on violence, an adult male was expected to spill another man’s blood in the course of life. Such action would be authenticated by the sight, feel, and taste of that blood.



Vampirism is the desire to see, feel, and taste blood. Today, we encounter it in horror movies or Gothic fiction, yet it does exist in real life. A “vampire” derives intense pleasure, bordering on sexual excitement, from the sight, feel, and taste of blood (Jaffé and DiCataldo 1994; Vanden Bergh and Kelly 1964). The following is a classic case:


In 1978, during a two-day rampage in the Mayenne region of France, a 39-year-old man attempted to rape a preadolescent girl, also biting her deeply in the neck, murdered an elderly man whose blood he drank and whose leg he partially devoured, killed a cow by bleeding it to death, murdered a married couple of farmers, and almost succeeded in doing the same with their farm hand. (Jaffé and DiCataldo 1994)


Most of the literature on vampirism comes from societies where the State has long held a monopoly over the use of violence and where non-State violence has long been criminalized and even pathologized. In many parts of the world, however, that monopoly is either recent or ineffective. The average man is still expected to use violence to defend himself and his family against threats that may seem trivial in a State-pacified society.


We need a cross-cultural study of the desire to shed blood. An initial step in that direction was taken by Frantz Fanon, who worked as a hospital psychiatrist in Algeria. He described vampirism as a frequent characteristic of murder cases in that country. The Algerian murderer “needs to feel the heat of blood and steep himself in his victim’s blood. […] A number of magistrates even go so far as to say that killing a man for an Algerian means first and foremost slitting his throat” (Fanon 2004[1963], p. 222).


Additional cross-cultural perspective has been provided by two recent papers. One of them is a case report from Sri Lanka:


A 20-year old single, unemployed male was referred from a drug rehabilitation center to the psychiatry clinic. He presented with poor anger control, impulsive behavior and the urge to drink blood, against a background of multiple substance dependence. He had been adopted in his early childhood and there was no childhood features to suggest developmental delays, hyperactivity, impulsivity, or conduct disorder.


[…] Although he experienced a sense of satisfaction after ingestion of blood, this act was not associated with obsessions, delusions, hallucinations, sexual gratification or paraphilic behaviour. He did not have any other psychiatric illnesses. (Adicaram et al. 2021)


The other paper presents two case reports from Turkey. In that country, vampirism usually takes the form of self-mutilation, if only because personal bloodletting is less likely to invite legal retribution. The authors describe it as following a stereotypical behavioral sequence:


We propose the term "hemomania" to describe an impulse control disorder characterized by impaired functioning due to at least one of the following urges: seeing one's own blood, self-bloodletting, and tasting/drinking one's own blood. We argue that hemomania progresses from an urge to see one's own blood to the urge to drink it (Kandeğer et al. 2021).


The “vampire” responds positively to the sight of blood and is thus driven to spill more blood and ultimately bathe in it and taste it. If this is indeed an impulse-control disorder, it should exist in many apparently normal people, among whom it would be unexpressed and under strong inhibition.


A desire to shed blood may have been much more common before the State monopolized the use of violence—at a time when an adult male was expected to spill another man’s blood in the course of life. In that context, it would be counterproductive to feel nauseated. In fact, one should feel excited. And the final triumph over an adversary would be authenticated by the sight, feel, and taste of that man’s blood.


We still have a word for that: “bloodlust.” There is also the word “bloodthirsty.” Today, we hear and say those words without fully understanding their original meaning. They refer to a mental state that used to be common in another time, but which has since been expunged from normal life … to the point that we now see it as weird and pathological.


A Middle English ballad describes the pleasure that a group of men felt when drinking the blood of a freshly killed deer:


They eat of the flesh, and they drank of the blood,

And the blood it was so sweet,

Which caused Johny and his bloody hounds

To fall in a deep sleep. (Haughey 2011, p. 350)


Those men were breaking a taboo against drinking an animal’s blood or eating its bloody flesh. That taboo went back to Anglo-Saxon times, when meals would bring many men together around the same table. It was feared that consumption of animal blood would excite the male mind and lead to violence, murder and, ultimately, consumption of human blood:


From this savage sharing of raw food with dogs, it is a short logical leap to cannibalism, the ultimate food taboo, for once one is able to devour bloody flesh, one has lost inhibitions concerning food. […] Johny Cock eats raw meat with his dogs, many Robin Hood ballads fixate on the sublimated violence in overblown feast scenes, and uncouth outlaw heroes like Hereward, Gamelyn, and Fulk Fitz Waryn repeatedly break taboos against mixing raw human blood with their meals when they bleed on their plates or tables and insist on continuing their feasts. (Haughey 2011, pp. 29-30)


Many cultural traditions insist on the removal of blood from flesh before it can be eaten. This taboo is described in the Hebrew Scriptures:


But you must not eat meat that has its lifeblood still in it. And for your lifeblood I will surely demand an accounting. I will demand an accounting from every animal. And from each human being, too, I will demand an accounting for the life of another human being. Whoever sheds human blood, by humans shall their blood be shed.

Genesis 9:4-6


With the rise of State societies, male violence became criminalized in most cases, with the notable exceptions of self-defense and war. Those new circumstances favored a different sort of man, one who would react negatively to the sight of blood. With the marginalization of bloodthirsty individuals, and their gradual removal from the gene pool, there was likewise a removal of bloodlust from real life.


Today, bloodlust survives as a deactivated behavior that normally remains dormant. This is the situation that prevails in long-pacified societies: vampirism has literally become pathological—it is reactivated only by environmental or genetic accidents that cause many other pathologies. The “vampire” looks and acts like a freak.


This is less true in societies that have been pacified more recently. The “vampire” seems more normal and shows fewer signs of mental disorder.





Adicaram, D.R.S., Wijayamunige, E.S. and Arambepola, S.C.A., 2021. Vampires! Do they exist? A case of clinical vampirism. Sri Lanka Journal of Psychiatry 12(2): 38-40.


Fanon, F. (2004[1963]). The Wretched of the Earth. New York: Grove Press.


Haughey, S. (2011). The 'Bestli' Outlaw: Wilderness and Exile in Old and Middle English Literature. PhD dissertation, Cornell University.


Jaffé, P. D., and F. DiCataldo. (1994). Clinical vampirism: Blending myth and reality. Bulletin of the American Academy of Psychiatry & the Law 22(4): 533–544.


Kandeğer, A., F. Ekici, and Y. Selvi (2021). From the urge to see one’s own blood to the urge to drink it: Can hemomania be specified as an impulse control disorder? Two case reports. Journal of Addictive Diseases 39(4): 570-574.


Vanden Bergh, R.L.,and J.F. Kelly. (1964). Vampirism: A Review with New Observations. Archives of General Psychiatry 11(5):543–547.