Saturday, December 31, 2011

A few of my themes for 2012

Yakuza (the Japanese mafia). The largest yakuza syndicate is over 70% Burakumin. Source

Here are a few themes I wish to write about during 2012:

Archaic admixture: A wild goose chase?

With the discovery that Europeans and Asians are 1 to 4% Neanderthal, there has been a rush to learn more. What genes are involved? Does this admixture explain why Eurasians are, well, hot stuff?

A few words of caution. The estimate of 1 to 4% is based on comparison of the Neanderthal genome with the modern Eurasian genome and the modern sub-Saharan African genome (Green et al., 2010). Neanderthals appear to be genetically closer to modern Eurasians than they are to modern sub-Saharan Africans. This increased closeness is therefore a measure of Neanderthal admixture in modern Eurasians. Right?

Well, not necessarily. It may also be a measure of non-Neanderthal admixture in modern sub-Saharan Africans. We now know that about 2% of the modern sub-Saharan African genome comes from a population that split from ancestral modern humans some 700,000 years ago (Hammer et al., 2011). Another 13% comes from archaics who were much closer to modern humans and probably related to the Skhul-Qafzeh hominins of the Middle East (Watson et al., 1997).

The figure of 1 to 4% Neanderthal admixture in modern Eurasians will thus have to be revised downward, just as our estimate of archaic admixture in modern sub-Saharans will have to be revised upward. This point has been made by Dienekes:

It is no longer tenable to propose that Eurasians are shifted towards Neandertals only because of Neandertal admixture: in fact some of the shift may be due to Africans being shifted away from Neandertals because of admixture with archaic African hominins.
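This bias can be made concrete with a toy linear model. Suppose we measure each population's genetic "distance" to Neanderthals in arbitrary units, with unadmixed modern humans at 1.0 and the archaic African lineage somewhat farther away (it split earlier from the Neanderthal line). The distances, admixture fractions, and the extra-distance parameter below are all illustrative assumptions, not empirical values; the point is only the direction of the bias.

```python
# Toy model of how archaic admixture in Africans can inflate a naive
# Neanderthal-admixture estimate for Eurasians. All numbers are
# illustrative, not empirical values.

def naive_neanderthal_estimate(a, b, archaic_extra_distance=0.4):
    """a: true Neanderthal fraction in Eurasians.
    b: archaic-African fraction in sub-Saharan Africans.
    archaic_extra_distance: how much farther the archaic African lineage
    sits from Neanderthals than modern humans do (modern human = 1.0).
    Returns the fraction inferred by an estimator that attributes the
    whole Eurasian-African differential to Neanderthal admixture."""
    d_eurasian = (1 - a) * 1.0 + a * 0.0  # Neanderthal DNA is distance 0 from Neanderthals
    d_african = (1 - b) * 1.0 + b * (1.0 + archaic_extra_distance)
    return d_african - d_eurasian  # = a + b * archaic_extra_distance

print(naive_neanderthal_estimate(0.02, 0.00))  # no archaic African admixture: estimate equals the true value
print(naive_neanderthal_estimate(0.02, 0.02))  # with it: the estimate is biased upward
```

With both admixture fractions at 2%, the naive estimate comes out above the true Eurasian value, which is exactly Dienekes' point.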

However great or small Neanderthal admixture may be, can it explain why modern Eurasians are “hot stuff”? Doubtful. It’s true that both populations had to adapt to arctic environments, but they did so in very different ways. Neanderthals adapted to the cold through their morphology: a stocky build, thick body fat, and perhaps denser body hair. Modern Eurasians adapted by making tailored clothing and building insulated shelters.

Please don’t get me wrong. If you’re doing research on Neanderthal admixture, I wish you the best of luck. Perhaps you’ll find a thing or two. But don’t get your hopes up.

Whither North Korea?

Whenever an authoritarian leader dies, the door is opened to change, often radical change. The new leader is less able to command authority, and the chain of command itself is called into question at all levels. Pent-up pressure for change can finally be released. This was the case after the deaths of Franco in 1975, Mao Zedong in 1976, and Brezhnev in 1982. Here in my home province, it was the death of Duplessis in 1959 that ushered in the end of Quebec as a conservative Catholic society.

Will we see the same in North Korea? Will the death of Kim Jong-il lead to liberalization and, ultimately, reunification with South Korea?

Yes and no. North Korea will pursue its transition to a market economy. And this process is already making the population more independent-minded. As an observer in Pyongyang recently noted:

The women who daily set out their wares on the streets do so in defiance of police prohibitions. This is one of the clearest indications of the erosion of the regime’s control over its people. (The author observed many others, such as the men who openly smoked under “No Smoking” signs, the peasants who simply ignored the traffic police and trundled their carts across intersections, and the people who—under the very eyes of the police—sat on the escalators in the Metro despite stern signs prohibiting this.) (Everard, 2011)

Private markets are also creating new spaces of social interaction that are independent of the State, and this trend will be assisted by the spread of cellphones and the strengthening of economic and social relations with China—itself a much more liberal society.

Finally, North Korea will drop all pretence of international socialism. This might seem to be just a matter of words—North Korea has long been a de facto nationalist regime—but semantics are important in the way people construct their perceived reality.

But, no, reunification is not in the cards, if only because the Chinese are adamantly opposed. There was a time in the 1990s when they were open to this idea. With reunification, U.S. troops would leave and Korea would become a more neutral country. It is now clear, however, that reunification has produced no such outcome in Germany. The Cold War may be over, but the U.S. still wants to have troops in mainland Eurasia, apparently as part of its geopolitical strategy.

So for now at least the Chinese will try to strengthen North Korea as a friendly buffer state. To this end, they will prod Pyongyang to pursue economic reforms and shed its pariah image, particularly by dismantling its nuclear program. In exchange, the Chinese may offer the protection of their own nuclear umbrella, as well as full membership in the Shanghai Cooperation Organisation (SCO).

It’s also unlikely that liberalization will lead to North Korea becoming more Westernized and Americanized. By “liberalization,” I mean the right of people to live their lives according to their own values—and not those imposed by the State or by a globalist elite. Hence, the Arab Spring has brought the triumph of Islamist political parties that promise to introduce stricter adherence to Shariah law. This has surprised Western observers, but it should not have.

The Burakumin

Although Japanese society is often seen as being very homogeneous, it does have a distinct class called the Burakumin who were officially outcastes until 1871 and are still widely looked down upon. They seem to descend from Japanese who held stigmatized occupations that involved the taking of life or contact with dead bodies, like butchery, leather making, and preparation of corpses for burial. Today, despite many remedial efforts, an academic gap persists between the Burakumin and other Japanese:

According to research on Buraku pupil/students' scholastic ability conducted in the post-war period, nearly 1 standard deviation difference in achievement scores was found between Burakumin and non-Burakumin pupil/students regardless of when and where the research was conducted. This meta-analysis on Buraku pupil/students' scholastic ability leads us to conclude that the relative difference in scholastic achievements between the Burakumin and non-Burakumin pupil/student has been maintained to a considerable degree through the post-war period. (BLHRRI, 1997)
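To get a feel for what a gap of roughly one standard deviation means, we can assume (purely for illustration) that achievement scores in both groups are normally distributed with equal spread; the score scale below is hypothetical.

```python
# What a 1-standard-deviation gap between two groups implies, assuming
# (for illustration only) normally distributed scores with equal spread.
from statistics import NormalDist

majority = NormalDist(mu=100, sigma=15)  # hypothetical score scale
buraku = NormalDist(mu=85, sigma=15)     # 1 SD lower, same spread (an assumption)

# Fraction of the lower-scoring group that nonetheless exceeds
# the higher-scoring group's mean:
above = 1 - buraku.cdf(majority.mean)
print(round(above, 3))  # about 0.159, i.e. roughly 1 in 6
```

Under these assumptions, the distributions overlap substantially: roughly one in six of the lower-scoring group scores above the other group's average, which is why the gap shows up clearly in group statistics while saying little about any given individual.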

In 2012, I will try to shed new light on this question by applying Greg Clark’s model. Clark (2007) argued that the English gene pool in 1800 was quite different from what it had been only a few centuries earlier. Over the years, the English middle class had expanded demographically and, through downward mobility, had largely replaced the English lower classes. I will suggest that Japan followed a similar evolution but with an interesting twist. As outcastes with a monopoly on certain occupations, the Burakumin were spared this demographic replacement. They may thus represent the Japanese population as it existed several centuries ago.


BLHRRI (1997). Practice of Dowa Education Today, Buraku Liberation and Human Rights Institute.

Clark, G. (2007). A Farewell to Alms. A Brief Economic History of the World, Princeton University Press, Princeton and Oxford.

Dienekes. (2011). Neanderthal admixture. Why I remain skeptical, December 19, 2011.

Everard, J. (2011). The markets of Pyongyang, Korea Economic Institute, Academic Paper Series, 6(1), 1-7.

Green, R.E., J. Krause, A.W. Briggs, T. Maricic, U. Stenzel, M. Kircher, et al. (2010). A draft sequence of the Neandertal genome, Science, 328, 710-722.

Hammer, M.F., A.E. Woerner, F.L. Mendez, J.C. Watkins, and J.D. Wall. (2011). Genetic evidence for archaic admixture in Africa, Proceedings of the National Academy of Sciences (USA), early edition.

Watson, E., P. Forster, M. Richards, and H-J. Bandelt. (1997). Mitochondrial footprints of human expansions in Africa, American Journal of Human Genetics, 61, 691-704.

Saturday, December 17, 2011

2012. A year of turbulence?

Child making Nike shoes (source). Western business now has access to labor under conditions not seen since the days of Charles Dickens.

My predictions from last year:

It won’t be such a bad year. Stock markets will reach record highs and pundits will say we’ve entered a sustained boom. For many people, life will never again be so good as it will be this year.

The main worry will be price rises for many commodities. With a return to even modest rates of economic growth, demand will outstrip supply in several areas. Talk of “peak oil” will be joined by concerns over “peak food” and “peak water.” Serious water shortages will hit the American southwest and southeast.

Well, the stock markets have not reached record highs. And there have been no serious water shortages, largely because of an unusually wet winter.

But food prices have been rising ominously. It was this factor that triggered the “Arab Spring” and is now fueling discontent in Russia. Also, for a lot of people—especially our elites—life has never been so good. We are into an economic recovery, of sorts.

How long will the recovery last? Perhaps another twenty years if it were a normal one. But it isn’t. The last recession was not allowed to finish its job of purging the economy. A lot of corporate flab was spared the axe, and dysfunctional attitudes toward debt are still common, particularly among consumers. In addition, the recovery is heavily dependent on government spending and consumer debt, and there is no indication that the economy is ready to go “cold turkey.” We may need more and more of the same stimulus just to maintain sluggish growth.

This debt crisis comes on top of a looming commodity crisis. Prices for fuel, food, housing, and other basics are being pushed up by the new buying power of Asian consumers and by immigration to North America and Western Europe. Can supply be increased to meet the rising demand? Yes, of course. Don’t worry. Everything will be fine—say the business interests that profit from this spike in demand.

Finally, we are facing a globalization crisis. On the one hand, jobs are being outsourced to lower-wage countries. On the other, lower-wage labor is being insourced. The result? A steady downward leveling of incomes throughout the Western World, except for the very rich. The latter now have access to labor under conditions not seen since the days of Charles Dickens.

The current recovery might nonetheless go on indefinitely. The Japanese, for instance, have kept their economy afloat for the past two decades by piling up massive debt. But they are just one society, and it’s one with a strong sense of social cohesion. In contrast, the Western World is very fractious, as seen by the bickering within the European Union. These social and political divisions will probably abort the recovery long before the possibilities for debt financing and money printing have been completely exhausted. And so much the better.

If I have to make a prediction for 2012, it is that the recovery will continue—on life support, so to speak—but will run into increasing social turbulence. The “Arab Spring” will start to play out in the Western World as the elites begin to lose their legitimacy. This process is already under way in Europe, and we may see a domino effect where change in one country facilitates change in other countries.

My research interests

There have been some developments in my areas of research interest.

Skin color and face recognition

Natural selection tends to hardwire recognition of objects that regularly appear in our visual environment. One such object is the human face. As shown by Zhu et al. (2009) in a twin study, the ability to recognize faces is innate, not learned. This heritability is further shown by the two extremes of prosopagnosics and “super-recognizers”: the former recognize faces no better than any other object, whereas the latter have exceptional face recognition ability (Russell, Chatterjee, & Nakayama, in press; Russell, Duchaine, & Nakayama, 2009).

The American psychologist Richard Russell has recently shown that face recognition relies about equally on face shape and facial skin color:

Shape and pigmentation cues were used in roughly equal measure by people with very good and very bad face recognition ability. […] People who are good at recognizing faces are good at using both shape and pigmentation cues to do so; people who are bad at recognizing faces are bad at using both shape and pigmentation cues to do so (Russell, Chatterjee, & Nakayama, in press).

This mental processing of skin color seems to take place in a lower-level module whose output then feeds into the face-recognition module.

Neural circuits related to face recognition ability must use both shape and pigmentation information about equally. This supports the idea that these circuits represent facial appearance by pooling lower-level patterns of shape and reflectance into combinations that include both types of information (Jiang, et al., 2006). Further, this is consistent with the notion that the location of the Fusiform Face Area is midway along the shape–reflectance gradient in ventral cortex (Cant & Goodale, 2011) because the region integrates these two kinds of cues to visually process faces. (Russell, Chatterjee, & Nakayama, in press)

Taschereau-Dumouchel et al. (2010) have likewise concluded that face shape and “skin properties” are the main cues for face recognition, even more so than the relative distances of facial features from each other.

Why does skin color matter so much for face recognition? Didn’t our ancestors evolve in a context where people interacted only with their own kind or with neighboring groups of similar appearance? Yes, but there was another source of variation in skin color—gender and age. Women and young infants are paler, having less melanin and hemoglobin in their skin. Men, in contrast, are ruddier and browner.

We are thus innately sensitive to differences in skin color, but this sensitivity didn’t evolve in response to ethnic differences. It evolved in response to much smaller gender and age differences (Frost, 2010; Frost, 2011; van den Berghe & Frost, 1986).

At present, two research teams have the means and motivation to pursue this line of research: Richard Russell’s team at Gettysburg College and Frédéric Gosselin’s team at the Université de Montréal. We’ll probably see more findings by both teams over the next year.

The puzzle of European hair and eye colors

European populations have an unusually broad palette of hair and eye colors. This diversity doesn’t have a common genetic cause. It is due to a proliferation of alleles at two separate genes: MC1R for hair color and OCA2 for eye color. This proliferation did not come about through relaxation of selection for dark skin as ancestral Europeans moved into higher latitudes. Most of the new alleles have little or no effect on skin color, and in any case the timeframe is too narrow for this evolutionary scenario.

A likelier cause is sexual selection, which favors bright or novel colors that catch the attention of potential mates. If sexual selection is strong enough, a polymorphism of color variants may develop. A new color appears through mutation and, depending on its brightness or novelty, steadily rises in frequency until it is as common as the established color. Over time, these variants will increase in number. Humans have the potential for this kind of frequency-dependent sexual selection, e.g., darker-haired women are sexually preferred to the extent that they are less common. Such selection is consistent with the high number of alleles for hair color and eye color in European populations, the high ratio of nonsynonymous to synonymous variants among these alleles, and the relatively short time over which this hair and eye color diversity developed.
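The dynamic just described—a rare new color variant rising in frequency until it matches the established one—can be sketched with a minimal deterministic model of negative frequency-dependent selection. The fitness function and parameter values below are illustrative assumptions, not estimates from Frost (2006).

```python
# Minimal sketch of negative frequency-dependent selection: each color
# allele's attractiveness (fitness) declines as it becomes more common,
# so a rare new variant rises until the alleles reach equal frequencies.
# The fitness function w_i = 1 - s * p_i and the value of s are
# illustrative assumptions.

def next_generation(freqs, s=0.5):
    """One generation of selection with fitness w_i = 1 - s * p_i."""
    w = [1 - s * p for p in freqs]
    w_bar = sum(p * wi for p, wi in zip(freqs, w))
    return [p * wi / w_bar for p, wi in zip(freqs, w)]

# A new mutant color starts at 1% against an established color at 99%.
freqs = [0.99, 0.01]
for _ in range(2000):
    freqs = next_generation(freqs)
print([round(p, 3) for p in freqs])  # both alleles end up near 0.5
```

The same recursion with more alleles drives them all toward equal shares, which is the kind of balanced polymorphism the hypothesis invokes for European hair and eye colors.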

Sexual selection occurs when too many of one sex must compete for too few of the other. Among early modern humans, such imbalances resulted from (1) polygyny (to the degree that women could provide for themselves and their children without male assistance) and/or (2) higher mortality among men than among women (to the degree that men covered longer distances while hunting or changing camp). Wherever the polygyny rate was low and male mortality high, the result was strong sexual selection of women. Such selection was particularly strong on continental steppe-tundra, where men had to provide almost all of the food by hunting migratory game animals over long distances. Although this type of environment is now fragmentary, until 10,000 years ago it covered a much larger territory, one that matches the current range of European hair and eye color diversity (Frost, 2006).

This hypothesis would predict some degree of sex linkage among European alleles for hair and eye color, since the sexual selection was acting on women. Over time, there would have arisen alleles that produce non-black hair and non-brown eyes more so in women than in men, and these alleles would have gradually replaced their non-sex-linked counterparts. This process should not have gone very far, though, because of the narrow timeframe.

This prediction is borne out by a twin study on the genetics of hair color. Shekar et al. (2008) found that the women had lighter hair on average than the men and a higher proportion of red hair. Hair color was also more diverse in the women than in the men:

Females had, on average, lighter hair, on the A650t scale, than males.

[…] The correlation within brother–sister twin pairs was significantly lower than the correlation within brother–brother and sister–sister dizygotic twin pairs (P ≈ 0.01). This suggests that there may be qualitative differences in the genetic influences on the A650t index between sexes.

[…] Additive genetic influences explain 55% and 58% of variation in the A650t index within females and males, respectively. The additive genetic influence on the A650t index in males was, predominantly, qualitatively different from those that influence the index in females.

[…] Females had, on average, redder hair (P < 0.00001) and greater variation in R index scores (P = 0.001) than males.

The sexual selection hypothesis would also predict that this evolutionary change took place over a relatively short time, specifically during the last ice age, 25,000 to 10,000 years ago, well after the entry of modern humans into Europe some 35,000 to 40,000 years ago. Is this prediction supported by evidence?

At present, no one is trying to date the diversification of European hair and eye colors. The closest research effort is the work by Norton and Hammer (2007), which shows that Europeans became white-skinned long after their entry into Europe. Heather Norton is now trying to get a firm date on this phenotypic change.


Dupuis-Roy, N., I. Fortin, D. Fiset, and F. Gosselin. (2009). Uncovering gender discrimination cues in a realistic setting, Journal of Vision, 9(2), 10, 1-8, doi:10.1167/9.2.10.

Frost, P. (2011). Hue and luminosity of human skin: a visual cue for gender recognition and other mental tasks, Human Ethology Bulletin, 26(2), 25-34.

Frost, P. (2010). Femmes claires, hommes foncés. Les racines oubliées du colorisme, Quebec City: Presses de l’Université Laval.

Frost, P. (2006). European hair and eye color - A case of frequency-dependent sexual selection? Evolution and Human Behavior, 27, 85-103.

Norton, H.L. & M.F. Hammer (2007) Sequence variation in the pigmentation candidate gene SLC24A5 and evidence for independent evolution of light skin in European and East Asian populations, Program of the 77th Annual Meeting of the American Association of Physical Anthropologists, p. 179.

Russell, R., G. Chatterjee, and K. Nakayama. (in press). Developmental prosopagnosia and super-recognition: no special role for surface reflectance processing, Neuropsychologia.

Russell, R., B. Duchaine, and K. Nakayama. (2009). Super-recognizers: People with extraordinary face recognition ability. Psychonomic Bulletin & Review, 16(2), 252-257.

Shekar, S.N., D.L. Duffy, T. Frudakis, G.W. Montgomery, M.R. James, R.A. Sturm, & N.G. Martin (2008). Spectrophotometric methods for quantifying pigmentation in human hair—Influence of MC1R genotype and environment, Photochemistry and Photobiology, 84, 719–726.

Taschereau-Dumouchel, V., B. Rossion, P.G. Schyns, and F. Gosselin. (2010). Interattribute Distances do not Represent the Identity of Real World Faces, Front Psychol, 1, 159.

van den Berghe, P. L. & P. Frost. (1986). Skin color preference, sexual dimorphism, and sexual selection: A case of gene-culture co-evolution? Ethnic and Racial Studies, 9, 87-113.

Zhu, Q., Y. Song, S. Hu, X. Li, M. Tian, Z. Zhen, Q. Dong, N. Kanwisher, and J. Liu. (2009). Heritability of the specific cognitive ability of face perception, Current Biology, 20, 137-142.

Saturday, December 10, 2011

Suicide and Inuit youth

Canadian suicide rates (per 100,000 people): Inuit, First Nations, all Canadians. Source

From Alaska to Greenland, young Inuit have unusually high rates of suicide, attempted suicide, and suicidal ideation. According to a 1992 survey of Inuit 15 to 24 years old from northern Quebec, 28% of the males and 25% of the females had attempted suicide (Kirmayer et al., 1998). Before the 1970s, suicide was rare among Inuit youth. Today, it has reached epidemic proportions.

Public authorities have responded largely by targeting those factors, like alcohol and drug abuse, that make it easier to go from thinking about suicide to actually doing it. While these efforts are having some success, there still remains the problem of suicidal ideation.

Why do so many Inuit youth contemplate suicide? Kirmayer et al. (1998) point to a prevailing sense of uselessness:

Inuit youth are confronted with the values of an individualistic, consumption-oriented society through mass media but have few opportunities to achieve the life-style portrayed. The result may be a sense of frustration, limited options, and difficulty imagining an optimistic future. This may extend to an impaired sense of self-continuity that contributes to attempted suicide.

Dufour (1994) argues that Inuit society has a long tradition of people ending their lives when they feel they have become useless. In the past, however, this kind of suicide involved only the elderly:

Suicide in early Inuit society was viewed positively when the individual had become a burden for the group. “Senilicide” in particular was deemed to be acceptable and appropriate. Its pattern: a usually elderly person motivated by illness, helplessness, bereavement, dependence on the group, famine, or resource shortage who would decide after consulting family members who sometimes could be called upon to assist. In contemporary Inuit society, the elderly no longer commit suicide. The young people do.

TV and video present young Inuit with an affluent lifestyle that is unattainable for all but a few. Meanwhile, school presents learning goals and standards of behavior that are likewise difficult to attain, especially for boys. By postponing adulthood in order to extend the learning process, school also has the unintended effect of humiliating Inuit youth. In another age, they were treated as young adults, often being parents in their own right. Today, they are just “children.”

Many young Inuit thus perceive themselves as being socially useless. And this self-perception is triggering suicidal ideation.

Such ideation may seem irrational from an individualistic Western standpoint. You cannot make your life better by ending it. Yet it is less irrational from the standpoint of one’s kin group, especially in a context of limited resources. Such was the case with elderly Inuit who would choose death so as not to burden the younger members of their band, such people being close relatives for the most part.

In such a context, natural selection—specifically kin selection—might have favored suicide as a response to perceived uselessness. Such selection is possible. Suicidal ideation is significantly heritable and seems to be inherited as a specific behavioral response:

Suicidal behavior is highly familial, and on the basis of twin and adoption studies, heritable as well. Both completed and attempted suicide form part of the clinical phenotype that is familially transmitted, as rates of suicide attempt are elevated in the family members of suicide completers, and completion rates are elevated in the family members of attempters. A family history of suicidal behavior is associated with suicidal behavior in the proband, even after adjusting for presence of psychiatric disorders in the proband and family, indicating transmission of attempt that is distinct from family transmission of psychiatric disorder. (Brent & Mann, 2005)

According to a twin study using American subjects, suicidal ideation has 36% heritability and suicide attempt 17% heritability (Fu et al., 2002).
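Heritability figures of this kind typically come from comparing identical (MZ) and fraternal (DZ) twin pairs; under Falconer's classic approximation, heritability is twice the difference between the two correlations. The sketch below uses invented twin correlations, chosen only to show how a figure like 36% could arise; they are not Fu et al.'s data.

```python
# Falconer's approximation: h^2 = 2 * (r_MZ - r_DZ).
# The twin correlations below are hypothetical, chosen only to
# illustrate how a heritability estimate of about 36% could arise.

def falconer_h2(r_mz, r_dz):
    """Heritability estimate from MZ and DZ twin correlations."""
    return 2 * (r_mz - r_dz)

print(round(falconer_h2(0.48, 0.30), 2))  # -> 0.36
```

The logic: MZ twins share all their genes, DZ twins on average half, so if genes matter, MZ pairs should resemble each other more, and the excess resemblance scales with heritability. (Modern studies such as Fu et al. use more elaborate structural models, but the intuition is the same.)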

De Catanzaro (1991, 1995) has argued that suicidal ideation has evolved as a response to a situation where an individual has become a burden to immediate kin. In studies of the general public and high-risk groups (elderly and psychiatric patients), he found that the strongest correlate of suicidal ideation was burdensomeness to family and, for males, lack of heterosexual activity. As Buss (1999, p. 94) concludes: “If a person is a burden to his or her family, for example, then the kin’s reproduction, and hence the person’s own fitness might suffer as a result of his or her survival.”

The threshold for suicidal ideation may be lower in some human populations than in others, depending on one’s risk of becoming a serious burden on kinfolk. This risk is high in Arctic hunting bands because their members are almost entirely close kin and because their nomadic lifestyle limits food storage for lean times. When food is scarce, who eats and who doesn’t? The question is especially difficult because close kin are involved. The easiest solution, in terms of keeping the peace and maintaining group cohesion, is one where the burdensome individual voluntarily bows out.

What does all of this mean for young Inuit who are thinking of suicide? Clearly, it is not enough to focus on things that facilitate the transition from suicidal ideation to actual suicide. That approach might work in southern Canada, where suicide tends to result from transient episodes that push people up and over the threshold of suicidal ideation. Among the Inuit, the threshold seems to be lower and the focus should be more on preventing ideation, specifically by giving young Inuit a greater feeling of self-worth and social usefulness.


Brent, D.A. & J.J. Mann. (2005). Family genetic studies, suicide, and suicidal behavior, American Journal of Medical Genetics Part C: Seminars in Medical Genetics, 133C, 13-24.

Buss, D.M. (1999). Evolutionary Psychology. The New Science of the Mind, Boston: Allyn and Bacon.

de Catanzaro, D. (1991). Evolutionary limits to self-preservation, Ethology and Sociobiology, 12, 13-28.

de Catanzaro, D. (1995). Reproductive status, family interactions, and suicidal ideation: Surveys of the general public and high-risk group, Ethology and Sociobiology, 16, 385-394.

Dufour, R. (1994). Pistes de recherche sur les sens du suicide des adolescents inuit, Santé mentale au Québec, 19, 145-162.

Fu, Q., A.C. Heath, K.K. Bucholz, E.C. Nelson, A.L. Glowinski, J. Goldberg, M.J. Lyons, M.T. Tsuang, T. Jacob, M.R. True & S.A. Eisen. (2002). A twin study of genetic and environmental influences on suicidality in men, Psychological Medicine, 32, 11-24.

Kirmayer, L.J., L.J. Boothroyd, S. Hodgins (1998). Attempted Suicide among Inuit youth: Psychosocial correlates and implications for prevention, Canadian Journal of Psychiatry, 43, 816–822.

Saturday, December 3, 2011

Were native Europeans replaced?

Spread of farming in Europe. Cultural diffusion or population replacement? Source

Between 9,000 and 3,000 years ago, farming spread through Europe and replaced hunting, fishing, and gathering. Was this process just a change in lifestyle? Or was it also a population change? Did Middle Eastern farmers replace native Europeans?

For Greg Cochran, the answer is clear:

Increasingly, it looks as if the hunter-gatherers who lived in Europe at the end of the ice age have been largely replaced. Judging from all those U5 mtdna results from ancient skeletons, I’d say that the hunters don’t account for more than 10% of the ancestry of modern Europeans. (Cochran, 2011)

Actually, the U5 haplogroup remained common after the transition to farming. This was the conclusion of a study of 92 Danish human remains that ranged in time from the Mesolithic to the Middle Ages. The study found genetic continuity from late hunter/fisher/gatherers to early farmers:

The extent to which early European farmers were immigrants or descendents of resident hunter-gatherers (replacement vs. cultural diffusion) has been widely debated, and new genetic elements have recently been added. A high frequency of Hg U lineages, especially U5, has been inferred for pre-Neolithic Europeans based on modern mtDNA data, with Hg U5 being fairly specific to Europe. [...] Our study therefore would point to the Early Iron Age and not the Neolithic Funnel Beaker Culture as suggested by Malmstrom et al. (2009), as the time period when the mtDNA haplogroup frequency pattern, which is characteristic to the presently living population of Southern Scandinavia, emerged and remained by and large unaltered by the subsequent effects of genetic drift (Melchior et al., 2010)

Thus, the sharp genetic divide was not between late hunter/fisher/gatherers and early farmers. It was between the earliest farmers and groups that had been farming for at least a millennium or so. The evidence is more consistent with natural selection than with population replacement.

But isn’t mtDNA unresponsive to natural selection? That’s what I used to think. There is growing evidence, however, that some mtDNA loci respond to natural selection. In particular, some haplogroups seem to reflect a trade-off between thermogenesis and ATP synthesis (Balloux et al., 2009). This trade-off might explain differences in disease risk between different mtDNA haplogroups. Haplogroup U, in particular, is associated with a lower risk of glaucoma (Wolf et al., 2010). There also seems to be an age-related association between this haplogroup and risk of Alzheimer’s (Santoro et al., 2010).

If so, the decline of U-type haplogroups among early farmers may reflect differences in patterns of physical activity between them and hunter/fisher/gatherers.

So was it cultural diffusion or population replacement?

The jury is still out, but the consensus is moving towards a position where Middle Easterners initially established pioneer farming settlements in central Europe but were over time largely replaced by native farmers. Rowley-Conwy (2011, p. S434) describes this new model:

Our explanations must now rest on two major foundations: most Neolithic genes were native, but the major domesticates were exotic. Small-scale rather than continent-wide migrations are the best way to integrate these into one model. Agriculture in a region may have been introduced by immigrants, but that does not mean that the immigrants carried mainly Near Eastern genes (Richards 2003; Rowley-Conwy 2004b; Zvelebil 2005). The LBK, for example, originated in the Carpathian Basin; the population that moved westward emerged there carrying a complex mix of European and Near Eastern mtDNA and no doubt picking up more as it moved.

There is evidence that these pioneer farming settlements assimilated local hunter-gatherers, especially women. In at least some cemeteries, the female skeletons are likelier than the male skeletons to have come from outside the local farming community (Rowley-Conwy, 2011, p. S439). Thus, over time, this recruitment of local hunter-gatherers would have steadily diluted the original gene pool, and this dilution would have been more advanced in later, secondary settlements that budded off from the early centers of colonization.

This process was hastened by the extinction of many of the early farming settlements. In northwestern France, the Villeneuve-Saint-Germain culture represented the furthest westward extension of these colonizing farmers. After a couple of centuries, however, it disappeared and was replaced by farming cultures of local origin (Rowley-Conwy, 2011, p. S439).


Balloux F., L.J. Handley, T. Jombart, H. Liu, and A. Manica (2009). Climate shaped the worldwide distribution of human mitochondrial DNA sequence variation. Proceedings of the Royal Society B: Biological Sciences, 276(1672), 3447-3455.

Cochran, G. (2011). First-mover advantage, West Hunter, November 25

Melchior, L., N. Lynnerup, H.R. Siegismund, T. Kivisild, J. Dissing. (2010). Genetic diversity among ancient Nordic populations, PLoS ONE, 5(7): e11898

Rowley-Conwy, P. (2011). Westward Ho! The Spread of Agriculturalism from Central Europe to the Atlantic, Current Anthropology, 52 (S4), S431-S451

Santoro A., V. Balbi, E. Balducci, C. Pirazzini, F. Rosini, et al. (2010). Evidence for Sub-Haplogroup H5 of Mitochondrial DNA as a Risk Factor for Late Onset Alzheimer's Disease. PLoS ONE, 5(8): e12037. doi:10.1371/journal.pone.0012037

Wolf, C., E. Gramer, B. Müller-Myhsok, F. Pasutto, B. Wissinger, & N. Weisschuh. (2010). Mitochondrial haplogroup U is associated with a reduced risk to develop exfoliation glaucoma in the German population, BMC Genetics, 11, 8

Friday, November 25, 2011

How late is too late?

In pre-industrial Finland, the average woman was caught in a squeeze play. If she married too young, she and her partner might not have enough resources to start a family. If she married too old, she risked genetic extinction. None of her children might survive to adulthood and have children of their own. Source

In a previous post, I discussed how Western European societies used to postpone the age of first reproduction to the mid-20s. When this cultural pattern began is uncertain, but it certainly preceded the Black Death of the 14th century and may have existed as early as the 9th century. It seems to have resulted from a combination of land scarcity and the rule of primogeniture, i.e., farms were kept intact and handed over to the eldest son when the parents died or retired. In such a situation, young couples had to wait until they had a farm of their own—and the means to start a family.

How late can a woman postpone her first child and still be sure of perpetuating her lineage? Her mid-30s? That may be true today. In pre-modern societies, however, women had to start earlier. A recent study by Liu and Lummaa (2011) puts the threshold no later than 30: beyond that age, a woman faced declines in both offspring quantity and offspring quality, and a correspondingly higher risk of genetic extinction.

In a study of rural Finnish parish records from the 18th and 19th centuries, Liu and Lummaa (2011) found that women were 26 years old on average at their first birth. They had thus forgone the first decade of their reproductive lives, something that would be unheard of in most traditional human societies. Yet they still managed to have 6.54 children on average.

Among these women, offspring quantity decreased with increasing age of first reproduction (AFR). This was partly because the time window for reproduction was narrower and partly because more of the reproduction was taking place during years of reduced fertility. Offspring quality held constant until the age of 30 and then declined as well.

The study defined “offspring quality” as the probability that one’s children will survive to adulthood and have children of their own. On average, 60% of the offspring survived to 15 years of age and 47% had children of their own. AFR did not affect offspring quality among mothers under 30. Over 30, higher AFR was associated with a lower probability of the children surviving to adulthood and a lower probability that the surviving children would have families of their own.
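The extinction risk is easy to quantify from the study’s own averages (a minimal sketch: the 47% figure is the study’s, but treating each child’s reproductive success as independent is my simplifying assumption, not something the study claims):

```python
# Chance that a lineage ends after one generation: none of the n children
# leaves descendants. 0.47 is the study's average probability that a child
# goes on to have children of their own; independence between siblings is
# a simplifying assumption.
P_REPRODUCES = 0.47

def extinction_prob(n_children, p=P_REPRODUCES):
    """Probability that none of n_children ever reproduces."""
    return (1 - p) ** n_children

print(round(extinction_prob(6), 3))  # large family: ~0.022
print(round(extinction_prob(3), 3))  # ~0.149
print(round(extinction_prob(1), 3))  # only child: 0.53
```

A late-starting mother was thus hit twice: a smaller family shrinks the exponent, and past 30 the per-child probability itself fell, so her extinction risk compounded.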

Why did fewer of these children survive to adulthood? The data provide no direct answers. One reason may be that older mothers tend to have children with lower birth weights, which in turn may lead to early death. Older mothers are also at risk of having children with birth defects.

And why did fewer of the surviving children have children of their own? Again, the data provide no direct answers. It may be that many of these children were physically or behaviorally compromised and thus less able to attract potential mates.

What does all of this mean for us today? In the short term, it means that a large part of the current population is headed towards genetic extinction. In the long term, there will be selection for increased fertility at older ages:

In today’s society however, women do not start childbearing until an older age as marriage is often delayed, and casual or short-term relationships and divorce are more common. As a result, the natural selection maintaining young-age fertility might weaken and the relative strength of natural selection on old-age fertility could increase, something that could potentially lead to improvements in old-age fertility over many generations.

Duncan Gillespie from the University of Sheffield’s Department of Animal and Plant Sciences, said: "In today’s society, family-building appears to be increasingly postponed to older ages, when relatively few women in our evolutionary past would have had the opportunity to reproduce. As a result, this could lead to future evolutionary improvements in old-age female fertility.” (Davis, 2011)


Davis, S. (2011). Marriage patterns drive fertility decline,

Liu, J. and V. Lummaa. (2011). Age at first reproduction and probability of reproductive failure in women, Evolution and Human Behavior, 32, 433-443.

Saturday, November 19, 2011

Vitamin D and Northern Natives

Annual average exposure to erythema-inducing UV radiation at ground level. Source: Jablonski & Chaplin, 2000. At high northern latitudes, vitamin D can be obtained only from one’s diet, notably fatty fish. Yet many northern native peoples consume little fish. Have they evolved a different vitamin D metabolism?

I’ve just published an article on population differences in vitamin D metabolism, specifically about northern native peoples. Comments are welcome.


Vitamin-D deficiency seems to be common among northern native peoples, notably Inuit and Amerindians. It has usually been attributed to: 1) higher latitudes that prevent vitamin-D synthesis most of the year; 2) darker skin that blocks solar UVB; and 3) fewer dietary sources of vitamin D. Although vitamin-D levels are clearly lower among northern natives, it is less clear that these lower levels indicate a deficiency. The above factors predate European contact, yet pre-Columbian skeletons show few signs of rickets—the most visible sign of vitamin-D deficiency. Furthermore, because northern natives have long inhabited high latitudes, natural selection should have progressively reduced their vitamin-D requirements. There is in fact evidence that the Inuit have compensated for decreased production of vitamin D through increased conversion to its most active form and through receptors that bind more effectively. Thus, when diagnosing vitamin-D deficiency in these populations, we should not use norms that were originally developed for European-descended populations who produce this vitamin more easily and have adapted accordingly.


Frost, P. (2011). Vitamin D deficiency among northern Native Peoples: A real or apparent problem? International Journal of Circumpolar Health, early view

Jablonski, N.G. and G. Chaplin (2000). The evolution of human skin coloration, Journal of Human Evolution, 39, 57-106.

Saturday, November 12, 2011

The Western European marriage pattern

The ‘Hajnal line’ marks the eastern limit of a longstanding pattern of late and non-universal marriage. The line in red is Hajnal's. The dark blue lines show areas of high nuptiality west of the Hajnal line. Source

In the 17th and 18th centuries, settlers emigrated from land-poor France to land-rich Canada. The result was a lower age of marriage. Young men and women no longer had to wait for their parents to hand over the family farm. Land was plentiful, and early family formation much easier.

This new social reality led to a new biological reality. From one generation to the next there was a steady contraction of the time between age of marriage and age of first birth. Married women—many as young as 15—were getting pregnant faster. The mean age of full reproductive maturity seems to have slowly fallen at a steady rate, apparently through the reproductive success of women who could better exploit the opportunities for early family formation (Milot et al., 2011).

But what about the homeland of these French settlers? Had that land-poor environment selected for a later onset of full reproductive maturity? That would seem to be a logical inference. Late and non-universal marriage was in fact the pattern throughout Europe west of a line stretching from Trieste to St. Petersburg:

By 1650, when village reconstitution studies become sufficiently numerous to render the generality of the pattern indubitable, the average age of women at first marriage was twenty-four or over, 7 to 20 per cent of women never married, and the incidence of childbirth out of wedlock was below 3 per cent. This marital pattern restricted fertility massively. A very considerable minority of women remained single and bore no children; those who married bore none for the first ten years of their fecund life-phase, on average. If they had their last child at the age of forty, their entire reproductive careers would span roughly fifteen years, a long time by modern standards but remarkably brief in a pre-transition context. Resulting fertility was less than half the rate that would have been achieved if all women between fifteen and fifty were married. (Seccombe, 1992, p. 184)

The ‘Western European marriage pattern’ was initially thought to have developed after the Black Death of the mid-14th century. But this belief has been challenged by a study of marriage between 1252 and 1478 in an English community:

The average age at first marriage in the Lincolnshire Fenland before the Black Death would be 24 years for the woman and 32 years for the man. The wife would die one year before her husband and the marriage would last for about 13 years. The couple could have six children, if their fertility was higher than average, of whom, judging by pedigrees, perhaps three would survive to become adults. After the Black Death the mean age would be 27 for the woman and 32 for the man. The husband would die three years before his wife and the marriage would last about 12 years. Again the couple could have six children, of whom perhaps three would survive to become adult. (Hallam, 1985, p. 66)

This pattern of late marriage may have been accentuated by the Black Death, but it was already present beforehand. Hallam (1985, p. 56) cites additional evidence for late marriage farther back in 9th-century France. On the estates of the Abbey of St Germain-des-Prés near Paris, about 16.3% of all adults were unmarried. In Villeneuve-Saint-Georges, the figure was 11.5%. Seccombe (1992, p. 94) cites a 9th-century survey of the Church of St Victor of Marseille, where both men and women appear to have married in their mid to late twenties.

Going even farther back, Seccombe (1992, p. 94) cites the Roman author Tacitus’ reference to Germanic women being “not hurried into marriage [and] as old and as full-grown as the men [who were] slow to mate.”

So when did the Western European marriage pattern begin? I suspect its origins lie in the late Neolithic of Western Europe, when farming communities had reached a saturation point. With farmland in short supply, young men and women had to wait their turn before they could marry and have children of their own. And some would never marry.

What happened to these never-married? They may have turned toward community service of one kind or another. If they couldn’t have children of their own, they would’ve invested their energies in helping others of their community—who were often their kinfolk. In this respect, the Catholic Church may have simply adopted and further developed a cultural pattern that was already present in Western Europe.

By the early Christian era, this pattern was clearly in evidence: monks and nuns dedicated their lives to creating centers of learning and, eventually, colleges and universities. They also founded hospices for the sick and injured. Much of what we now call the ‘welfare state’ has its origins in the work of these celibate men and women.

Together with the prohibition of cousin marriage, this pattern of lengthy and sometimes lifelong celibacy paved the way for a future of larger and more open societies where the State, and not one’s clan, would provide collective services. Of course, it wasn’t planned that way. Nothing is planned in cultural or biological evolution. Western Europe simply accumulated a mix of cultural traits that would later make possible the rise of ‘modern society.’

Did this marriage pattern shape the biology of Western Europeans through natural selection? Was there gene-culture co-evolution? This is likely with respect to the pace of sexual maturation. Keep in mind that the time between menarche and first birth was ten to twelve years on average. Nature abhors a vacuum, and there would have been a tendency to slow the pace of sexual maturation for both biological and psychological traits. Just as land-rich North America selected for successful pregnancy at younger ages, the reverse had probably happened in land-poor Europe.


Hallam, H.E. (1985). Age at first marriage and age at death in the Lincolnshire Fenland, 1252-1478, Population Studies, 39, 55-69.

Milot, E., F.M. Mayer, D.H. Nussey, M. Boisvert, F. Pelletier, and D. Réale. (2011). Evidence for evolution in response to natural selection in a contemporary human population, Proceedings of the National Academy of Sciences (USA), early view

Seccombe, W. (1992). A Millennium of Family Change. Feudalism to Capitalism in Northwestern Europe, London: Verso.

Saturday, November 5, 2011

Apples, oranges, and genes

Publicly funded misinformation. Source: PBS website

In human genetics, a ‘population’ is a group of individuals who share ancestry and hence genes. This sharing is not absolute. There is always some gene flow from outside, and sometimes “outside” means another species. We humans, for example, have received genes not only from Neanderthals and Denisovans but also from … viruses.

In addition, new gene variants are constantly arising through mutation. Most of them are harmful or useless. But some are useful and will thus spread through the population.

So below the species level, and often even at the species level, population boundaries tend to be fuzzy. Genes vary both between and within populations.

You’ve undoubtedly heard that there is much more genetic variation within human populations than between them, this being true even for the large continental populations we used to call ‘races.’ This was the finding of the geneticist Richard Lewontin (1972), and others have concluded likewise. You’ve probably not heard, however, that the same kind of genetic overlap exists between many sibling species that are nonetheless distinct in anatomy and behavior (Frost, 2011).

How come? First, keep in mind that genes vary a lot in adaptive value. Some are little more than ‘junk DNA.’ Others code for structural proteins that form the building blocks of flesh and blood. Others still are very important because they code for regulatory proteins that control how other genes behave and, hence, the way an organism grows and develops. The last kind of gene accounts for only a tiny fraction of the genome. Most genes have modest effects, or none at all.

Second, keep in mind that different populations occupy different environments and are thus exposed to differences in natural selection. In most species, these differences are due to physical environments that differ in climate, vegetation, and wildlife. Humans also have to adapt to cultural environments that differ in social structure, belief systems, and technology. In either case, when a gene varies between two populations, the cause is probably a difference in natural selection, since the population boundary also separates different selection pressures. Conversely, when a gene varies within a population, the variation is less likely to have adaptive significance; if it did, it would have been flattened out by the steamroller of shared selection pressures.

This is one aspect of “Lewontin’s fallacy.” Within-population variation isn’t comparable to between-population variation. It’s like comparing apples and oranges.

Another aspect of Lewontin’s fallacy is that natural selection within a population exercises a leveling effect only on phenotypes, and not on genotypes. If two gene variants have a similar phenotypic effect, natural selection will take longer to replace one with the other. Sometimes, this sort of diversity will persist indefinitely because epidemics often spare individuals whose surface proteins are somewhat different from those of their neighbors.

Thus, within-population variation tends to consist of different gene variants at different loci whose effects nonetheless point in the same general direction. To some degree, these variants can stand in for each other. If one is absent, another one might do the trick. This is probably why population differences are more sharply defined if several gene loci are compared simultaneously. If we chart how each gene varies geographically and then superimpose these maps on top of each other, the resulting composite map will show population differences in sharper relief (Edwards, 2003; Mitton, 1977; Mitton, 1978; Sesardic, 2010).

This point has been made by Emmanuel Milot, the principal author of the paper I reviewed in my last post. His research team found that the time between marriage and first birth steadily shrank among succeeding generations of French Canadians on Île aux Coudres (Milot et al., 2011). In the land-rich environment of the New World, there was strong selection for married women to get pregnant faster. A genetic difference has thus developed between French Canadians and the French who remained in France.

But this difference is not due to a few genes. As Milot points out, natural selection tends to produce effects at many different genes:

“We should not think that there are genes that code specifically for age at first reproduction. In fact, this type of trait is probably influenced by hundreds, even thousands, of genes. These genes act on other characteristics, like body weight at birth, age at first menstruation, or even personality traits, which impact on age at first birth” (Bourdon, 2011)

This point is important. If two populations differ at one gene, and if the difference is sensitive to natural selection, they probably also differ at many other genes. The same selection pressure that caused one difference has almost certainly caused others. Typically, we see only the tip of the iceberg—a gene variant that produces an obvious effect in affected individuals, such as illness. Most gene variants, however, don’t cause medically recognized illnesses, and their effects also tend to be subtler.


Bourdon, M-C. (2011). L’espèce humaine. Toujours en évolution. UQAM. Entrevues

Edwards, A.W.F. (2003). Human genetic diversity: Lewontin’s fallacy. BioEssays, 25, 798-801.

Frost, P. (2011). Human nature or human natures? Futures, 43, 740-748.

Lewontin, R.C. (1972). The apportionment of human diversity. Evolutionary Biology, 6, 381-398.

Milot, E., F.M. Mayer, D.H. Nussey, M. Boisvert, F. Pelletier, and D. Réale. (2011). Evidence for evolution in response to natural selection in a contemporary human population, Proceedings of the National Academy of Sciences (USA), early view

Mitton, J.B. (1977). Genetic differentiation of races of man as judged by single-locus and multilocus analyses, American Naturalist, 111, 203-212.

Mitton, J.B. (1978). Measurement of differentiation: reply to Lewontin, Powell, and Taylor, American Naturalist, 112, 1142-1144.

Sesardic, N. (2010). Race: a social destruction of a biological concept, Biology and Philosophy, 25(2), 143-162.

Saturday, October 29, 2011

Bringing reproductive maturity into line with the age of marriage

Île aux Coudres, a French Canadian community on an island in the St. Lawrence

Human biodiversity is slowly making headway in academia. It has three defining principles:

1. Evolution did not end, or even slow down, with the advent of Homo sapiens. It has actually accelerated.

2. It especially accelerated about 10,000 years ago, when the rate of genetic change rose over a hundred-fold among early modern humans. This acceleration didn’t happen because they were spreading into new physical environments with different climates, topographies, vegetation, and wildlife. By then, humans had already spread throughout the world from the equator to the arctic. They were now spreading into new cultural environments with different technologies, social structures, belief systems, and means of subsistence.

3. The human species has therefore experienced more genetic change over the past 10,000 years than over the previous million years. This change has particularly involved genes for mental, behavioral, and life-history traits (Frost, 2011; Hawks et al., 2007).

One life-history trait is the age of first reproduction (AFR). Because AFR is highly heritable, there may have been co-evolution between biology and culture. In other words, natural selection has tended to bring full reproductive maturity into line with the age when young couples have enough resources to marry and start a family.

When Europeans first began to settle North America, they came from a land-poor environment where young people had to postpone marriage and family formation. Typically, they had to wait until their parents handed over the farm in whole or in part. This tendency toward late marriage was widespread throughout Western Europe, so much so that it has been dubbed the ‘Western European Marriage Pattern’:

[…] the late and non-universal marriage pattern was definitely prevalent across Northwestern Europe in the seventeenth century. By 1650, when village reconstitution studies become sufficiently numerous to render the generality of the pattern indubitable, the average age of women at first marriage was twenty-four or over, 7 to 20 per cent of women never married, and the incidence of childbirth out of wedlock was below 3 per cent. This marital pattern restricted fertility massively. A very considerable minority of women remained single and bore no children; those who married bore none for the first ten years of their fecund life-phase, on average. If they had their last child at the age of forty, their entire reproductive careers would span roughly fifteen years, a long time by modern standards but remarkably brief in a pre-transition context. Resulting fertility was less than half the rate that would have been achieved if all women between fifteen and fifty were married. (Seccombe, 1992, p. 184)

All of this changed when Europeans began to settle in the “New World.” Suddenly, land was no longer a constraint on marriage, and early marriage became the norm. With the downward shift in the age of marriage, was there a corresponding downward shift, via natural selection, in the age of full reproductive maturity?

Yes, according to a recent study of Île aux Coudres, a French Canadian community on an island in the St. Lawrence. Over a period of 140 years, from 1800 to 1940, this community saw its mean AFR fall by four years. This decline was driven not by a lowering of the mean age of marriage, which remained stable, but by a shortening of the mean interval between marriage and first birth.

The decline in AFR seems to have been real, and not an artefact of incomplete marriage and birth records. In fact, the church registers provide exceptionally detailed birth and marriage data. Nor was it an artefact of an influx of people with lower AFRs. Almost everyone on Île aux Coudres is descended from thirty families who settled the island between 1720 and 1773.

Could the reason have been changes to diet and nutrition? Unlikely.

The advancement of age at maturity, as well as increases in fertility, may reflect plastic responses to improvements in nutritional conditions, such as those observed during the 19th and 20th centuries in Western societies. Better-fed women grow faster, mature earlier and in a better physiological state, and are more fecund. Importantly, alongside such plastic responses in reproductive traits, we would expect an increase in infant and juvenile survival rates with time. Despite some fluctuations, infant and juvenile survival rates on île aux Coudres were not higher at the end of the study period than at the beginning. (Milot et al., 2011)

When I first heard of this study, I thought that some kind of cultural lag might have been responsible. Old habits die hard. Perhaps many of the early settlers, with memories of the old country, were still afraid of not having enough land to support a family, even after they had decided to marry. This caution would then have gradually faded as memories of the old country faded.

Such a change in mentality, however, would have happened much more among the earlier generations than among the later ones. Yet this is not what we see in the data. AFR changed at the same rate from one generation to the next throughout the 140-year period. Couples married after 1870 showed the same rate of change as couples married before 1870. Indeed, this steady rate of change seems to rule out most socio-cultural explanations, particularly those that involve some kind of re-adjustment to new conditions. In any case, the study period (1800 to 1940) postdates the years of immigration and settlement (1720 to 1773).
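A back-of-the-envelope check with the breeder’s equation (R = h²S) suggests the required selection was of plausible magnitude. The generation time and heritability below are illustrative assumptions of mine, not estimates from Milot et al.:

```python
# Breeder's equation: per-generation response R = h^2 * S, where S is the
# selection differential on the trait (here, AFR measured in years).
total_decline = 4.0   # years of AFR decline, 1800-1940 (from the study)
study_span = 140.0    # years
gen_time = 30.0       # assumed generation time (illustrative)
h2 = 0.3              # assumed heritability of AFR (illustrative)

generations = study_span / gen_time   # ~4.7 generations
R = total_decline / generations       # ~0.86 yr earlier per generation
S = R / h2                            # implied selection differential ~2.9 yr
print(f"R = {R:.2f} yr/generation, implied S = {S:.2f} yr")
```

Under these assumptions, mothers whose first births came roughly three years earlier than average would need to have out-reproduced the rest, a differential that seems within reach in a land-rich settler population.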

We’re certainly going to see more studies like this one, either from Île aux Coudres or from other regions of French Canada. In general, French Canadian communities are ideal for the study of human microevolution. Records of births, marriages, and deaths are remarkably complete over a span of three centuries, and the inhabitants tended to stay put in the same locality generation after generation. For several regions of Quebec, we already have complete genealogical databases that could be enriched with genetic data for the most recent generations.


Frost, P. (2011). Human nature or human natures? Futures, 43, 740-748.

Hawks, J., E.T. Wang, G.M. Cochran, H.C. Harpending, & R.K. Moyzis. (2007). Recent acceleration of human adaptive evolution, Proceedings of the National Academy of Sciences (USA), 104, 20753-20758.

Milot, E., F.M. Mayer, D.H. Nussey, M. Boisvert, F. Pelletier, and D. Réale. (2011). Evidence for evolution in response to natural selection in a contemporary human population, Proceedings of the National Academy of Sciences (USA), early view

Seccombe, W. (1992). A Millennium of Family Change. Feudalism to Capitalism in Northwestern Europe, London: Verso.

Saturday, October 22, 2011

End of an era

In 2003, Qaddafi dismantled Libya’s nuclear arms program in exchange for better relations with the West. At the time, it seemed like a great idea ...

Let me summarize my last series of posts.

We are nearing the end of relative global peace, specifically the peace that has reigned since the Korean armistice was signed back in 1953. This era is ending because of changes to the international system over the past two decades and to the nature of global peace itself.

First, a balance of terror no longer exists to contain regional conflicts and thus keep them from going global. Military alliances have become less specific in their aims and reciprocal responsibilities. In the East, this has happened through the replacement of the Warsaw Pact by the Shanghai Cooperation Organisation (SCO). In the West, this has happened through a broadening of NATO to include new members and new aims, as well as through a general weakening of commitment among older members.

Second, global peace no longer maintains an acceptable status quo. Previously, states merely pushed the envelope of peace here and there to see what they could get away with. There was also terrorist action by dispossessed groups that had nothing to gain from the status quo. But such groups, by definition, were geopolitically marginal. Peace was acceptable to those who had the power to destroy it.

This view has radically changed in the case of North Korea. Previously, Pyongyang saw the conquest of South Korea as an almost hypothetical goal that could be pushed indefinitely into the future. The status quo was wrong but bearable. Today, it’s no longer bearable. The other Korea has created a new dynamic by embracing post-nationalism, multiculturalism, and large-scale immigration.

This might not matter if the North had followed the same ideological evolution as the South. But it hasn’t. The North has been frozen in time. It still sees itself as a vehicle for preserving and perpetuating the Korean people. It still adheres to values that were normal throughout the world only a half-century ago. Thus, North Korea can no longer accept the “status quo” even as a temporary expediency. It now sees an invasion of the South as something that must happen soon—before the demographic changes become irreversible.

Elsewhere, the status quo has likewise taken on a new dynamic. With the end of the Cold War, it increasingly means American military interventions that would have been unthinkable previously. There is now an “imbalance of terror”—the United States is free to overthrow one unfriendly regime after another without triggering a major war. This trend has not gone unnoticed, particularly by the North Koreans:

North Korea’s official news agency carried comments this week from a Foreign Ministry official criticizing the air assault on Libyan government forces and suggesting that Libya had been duped in 2003 when it abandoned its nuclear program in exchange for promises of aid and improved relations with the West.

Calling the West’s bargain with Libya “an invasion tactic to disarm the country,” the official said it amounted to a bait and switch approach. “The Libyan crisis is teaching the international community a grave lesson,” the official was quoted as saying Tuesday, proclaiming that North Korea’s “songun” ideology of a powerful military was “proper in a thousand ways” and the only guarantor of peace on the Korean Peninsula.
(McDonald, 2011)

There has indeed been an arms buildup in countries that fear eventual U.S. intervention, particularly China, Russia, and Iran. None of them wish to go one-on-one with the U.S., for obvious reasons. The result has been the formation of a new pan-Eurasian alliance: the Shanghai Cooperation Organisation (SCO). Politics makes strange bedfellows, and it is indeed strange to see the Islamic Republic of Iran wanting to bed down with China and Russia, both of which have restive Muslim minorities.

Despite the arms buildup, and an ever more fragile international system, global peace might still continue indefinitely. The destabilizing factor is really the spread of an increasingly aggressive globalist ideology and, correspondingly, resistance by various forms of anti-globalism.

For comparison, we can turn to the gradual breakdown of the post-Napoleonic peace that lasted from 1815 to 1914. That century-long peace was made possible by the Concert of Europe, a coalition of conservative regimes that worked together to keep the continent free of liberalism and nationalism. The coalition fell apart during the second half of the 19th century and gave way to a looser system of opposing military alliances.

The really destabilizing factor, however, was the return of liberalism and nationalism to the continent. Demands grew for democratic constitutions and for the creation of states along ethnic lines. Eventually, much if not most of Europe’s intelligentsia came to see the status quo as an archaic monstrosity. When global war finally broke out in 1914, it was readily framed as a struggle between “freedom” and “tyranny.”

From the “old order” to nationalism to globalism

Today, the rise of globalism might seem to promise a return to that lost era, when liberalism and nationalism were still on the margins of political life. That old order, however, allowed people to organize their lives along traditional lines—on the basis of kinship and ethnicity. Nor was there any effort to “elect a new people.” Loyalty ran both ways, and the ruling aristocracies felt bound to their subjects in a way that would seem incomprehensible to our current elites.

Indeed, the late 19th century saw liberalism and nationalism begin a process that would eventually culminate in today’s globalism. Both liberals and nationalists wanted to open up the tight little world of small communities. Both wished to create a larger community, the nation state, that would provide more room for individual freedom and individual identity. Both viewed “parochialism” as an obstacle to progress and the creation of a more rational social order. Above all, both saw the leveling of local identities as the means to create a society that would be militarily stronger and economically more viable.

Nationalism has thus paved the way for globalism. It has merged local identities into national identities that are, to varying degrees, synthetic and lacking in authenticity. The more artificial the resulting national identity, the easier it has been for globalism to present itself as the next logical step.

From globalism to what?

There is one more way in which globalism resembles nationalism, as well as other ideologies. It tends to push ahead while ignoring evidence that things aren’t working out as planned. And this blindness will likewise condemn it to the same fate that has befallen other ideologies. Until then, however, it will likely do much harm.

Will globalism collapse through an eventual global conflict, like fascism in the mid-20th century? Or will it collapse under the weight of its own contradictions, like communism a half-century later? And just what will post-globalism look like? These are questions for which I have no ready answers. Perhaps you do.


McDonald, M. (2011). North Korea suggests Libya should have kept nuclear program, New York Times, March 24, 2011.

Saturday, October 15, 2011

Towards an imbalance of terror?

Member and observer states of the Shanghai Cooperation Organisation (SCO). In both the east and the west, defense alliances have become less centralized and more loosely defined since the end of the Cold War. They no longer contain regional conflicts and may actually cause them to go global. (source)

Tensions are mounting on the Korean Peninsula, as seen last November in the bombardment of Yeonpyeong Island. Is a second Korean War imminent?

Not likely, if the general reaction is to be believed. An example of this thinking is given by David Kang, director of the Korean Studies Institute at USC:

We often call the situation a “powderkeg” or a “tinderbox,” implying a very unstable situation in which one small spark could lead to a huge explosion. But the evidence actually leads to the opposite conclusion: we have gone 60 years without a major war, despite numerous “sparks” such as the skirmish that occurred last week. If one believes the situation is a tinderbox, the only explanation for six decades without a major war is that we have been extraordinarily lucky. I prefer the opposite explanation: deterrence is quite stable, both sides know the costs of a major war, and both sides—rhetoric aside—keep smaller incidents in their proper perspective. (Kang, 2010)

Yet the current situation differs from the one that prevailed during most of those sixty years. From 1953 to the late 1980s, there was no second Korean War because neither the United States nor the Soviet Union wanted one. Both parties considered the division of the Korean Peninsula to be an acceptable compromise. The only people really unhappy were the Koreans themselves, who on their own could do little. The decision to go to war ultimately lay in Washington and Moscow.

This situation has changed since the Cold War ended in the late 1980s. Moscow has ceased to be a decision center for global conflict, and the Warsaw Pact has given way to a much more decentralized defense pact: the Shanghai Cooperation Organisation (SCO). NATO still exists and has even accepted new member states, but it too is now a looser organization with less clearly defined obligations. Many members have refused to support the latest military operations in Afghanistan and Libya.

The end of the Cold War also stopped the Soviet Union’s direct and indirect subsidies of North Korea. Throughout the 1990s, the regime in Pyongyang teetered on the brink of collapse, with reunification being the most likely outcome. At the time, many South Koreans actually feared this prospect, having seen the high cost of reunification in Germany.

That window of opportunity closed in the early 2000s. By then, Pyongyang had weathered the worst of the storm, as had its semi-allies China and Russia. By then too, the South had embraced its new Global Korea policy—an explicit shift to post-nationalism, multiculturalism, and large-scale immigration. In 2006, Pyongyang’s leading newspaper, Rodong Sinmun, angrily denounced the new policy as “an unpardonable argument to obliterate the race by denying the homogeneity of the Korean race and to make an immigrant society out of South Korea, to make it a hodgepodge, to Americanize it” (Koehler, 2006).

The Global Korea policy has fundamentally changed Pyongyang’s vision of the future. Conquest of South Korea is no longer a goal to be pushed indefinitely into the future. It is something that must happen soon—before the demographic changes in the South become irreversible.

So the North Koreans are upset. But what can they do? Any invasion of the South would trigger an American intervention. And it is doubtful whether China would come in on Pyongyang’s side. As David Kang points out:

If it is an unprovoked North Korean invasion, then the North probably goes it alone. Even China is unlikely to support such a war. Although the Chinese are supportive of North Korea, they are clearly not in favor of starting a war on the peninsula that would have enormous negative consequences for every country in the region. (Kang, 2010)

In a conflict between North Korea alone and the United States, there is little doubt about the eventual outcome. The United States would win.

[…] although North Korea possesses a significant missile arsenal, Pyongyang is unlikely to contemplate launching full scale strikes against anyone, given the conservative nature of the regime which fears for its own survival, and the inevitable scale of US retaliation which would almost certainly result in the destruction of North Korea. The same reality applies to North Korea’s million plus army, which despite being among the largest in the world, is devoid of any real sustainable offensive capacity. Even in the unlikely scenario that the regime considers launching an invasion of South Korea, North Korea simply lacks the most basic resources that would be needed to mount an aggressive military campaign. Conversely, the South Koreans and the US have the personnel and technology, especially air supremacy, to quickly neutralise any North Korean offensive strike. (Fazio, 2011)

Clearly, the above scenario holds little appeal for the North Koreans. But there are other scenarios. The most attractive one, from their standpoint, would bring other nations into the war on North Korea’s side, especially China.

Yet, as David Kang noted, China is at most a semi-ally. With some reluctance, it might even accept reunification of the peninsula under South Korean control, the only proviso being the departure of U.S. troops. In the mid-1990s, this outcome seemed very likely to the Chinese:

[There] would come other important developments, most important the eventual collapse of North Korea and the reunification of the Korean Peninsula. It is awkward for the Chinese to acknowledge this publicly given their long relationship to the ultraorthodox Communist regime in North Korea, but Beijing has to realize that that regime, which has literally bankrupted the country it rules, is doomed and that reunification under South Korea is likely in the next decade or two. Foreign-affairs experts in China told us that they doubted American troops would remain in Korea long after reunification began, a prediction that seems realistic, since the reason for the troops, the North Korean threat, would have disappeared. (Bernstein & Munro, 1998, p. 176)

Today, such an outcome seems unlikely. Germany has been reunified for two decades, and U.S. troops are still there. For a number of geopolitical reasons, the Americans wish to keep a military presence in mainland East Asia, just as they do in continental Europe. Even if U.S. troops did leave, South Korea’s political class would remain oriented to the United States and would tilt a reunified Korea in that direction. China would thus have a U.S. ally right next to its industrial heartland of Manchuria.

For these reasons and others, China will not abandon Pyongyang:

Despite Chinese rhetoric in support of peaceful unification of the Koreas, Beijing fears that a unified Korea would have strong ties with the United States, eliminating the buffer zone that North Korea provided. A reunified Korea would also eliminate North Korea’s value as political and military leverage against the U.S. stance on Taiwan. Lastly, China has a population of nearly two million ethnic Korean-Chinese living just north of the Chinese-North Korean border. A unified Korea might provide the impetus for a separatist movement. Therefore, instead of a reunified Korea, China’s long-term objective is to encourage an evolution of the DPRK into a stable and economically prosperous, non-nuclear regime that remains aligned toward Beijing. (Mrosek, 2011, p. 4)

As a semi-ally, how might China enter a second Korean conflict on Pyongyang’s side? There seem to be four conditions:

1. North Korea is not perceived as the aggressor. If the U.S. were to intervene in North Korea, as it has previously in Kosovo, Iraq, and Libya, the Chinese would at least covertly assist Pyongyang.

2. Tensions are already high between China and the U.S. This could come about for a number of reasons: Taiwan; the trade balance; concerns over Tibet and the South China Sea; etc.

3. Other SCO member states are willing to provide at least covert assistance.

4. NATO is increasingly divided, with some member states being members in name only.

The above scenario is certainly far from science fiction. The United States could intervene in North Korea if it believed that the regime was about to collapse and that a popular uprising was in progress. After all, the same kind of intervention seemed to work in Libya. There is also the mistaken belief, common among U.S. policymakers, that the Chinese would support the U.S. or at least do nothing (Mrosek, 2011, pp. 50-52). Ironically, that belief can be traced in part to the above passage by Bernstein and Munro.

Just as mistakenly, the Americans, and perhaps also the Chinese, believe that the resulting conflict could be contained to the Korean Peninsula—much like the first Korean War. Yet such containment is less likely today than it was in 1950-1953. Back then, both the United States and the Soviet Union were war-weary and wished to consolidate their newly won spheres of influence. There was thus a deliberate effort to keep the war from spreading, as seen in Truman’s sacking of Gen. MacArthur. Finally, although many nations fought in the Korean War, only two of them—the United States and the Soviet Union—had the power to decide whether it would remain regional or go global.

The same principle held throughout the Cold War. The international system was essentially a duopoly—a “balance of terror.” When the Hungarian Revolution broke out, the United States thought long and hard … and did nothing. When the two power blocs did intervene in regional conflicts, as in Korea and Vietnam, the conflicts remained regional.

With the end of the Cold War, the United States has been more willing to engage in military interventions that would have been unthinkable before. One result has been an arms buildup in countries that fear U.S. intervention, notably China, Russia, and Iran. This fear was instrumental in the creation of the SCO. Unlike the Warsaw Pact, however, the SCO has no single decision center, and its member states do not have clearly defined obligations to each other.

The same could be said for NATO. Its aims are no longer clearly defined, and its members are more and more reluctant to engage in theatres of war that now lie well outside Europe. Increasingly, NATO provides a cover for operations led primarily by the United States and any other member states that wish to tag along.

This new international system can do little to contain regional wars. It may indeed have the potential to draw one nation after another into an initially minor conflict, especially if they see it as a prelude to similar interventions to be launched against themselves. The world situation today thus scarcely resembles 1950. It seems to have more in common with … 1914.


Bernstein, R. & R.H. Munro. (1998). The Coming Conflict with China, New York: Vintage Books.

Fazio, D. (2011). The North Korean security threat: an historical context and current policy options, ERAS, 12(2), 1-25.

Kang, D. (2010). Korea Expert Answers Your Questions.

Koehler, R. (2006). I guess this means the DPRK won’t be inviting Hines Ward for a visit (English translation of Rodong Sinmun editorial).

Mrosek, D.M. (2011). China and North Korea: A Peculiar Relationship, thesis, Naval Postgraduate School, Monterey, California.

Saturday, October 8, 2011

They won't be the only ones

North Korean shelling of Yeonpyeong Island, November 23, 2010. Why are tensions rising in Korea?

There has been much talk about B.R. Myers’ book: The Cleanest Race: How North Koreans See Themselves—and Why It Matters. Far from being communists, the North Koreans are, well, Nazis. And that matters a lot to us. Or so the book argues.

Actually, North Koreans see themselves pretty much the same way they saw themselves back in the 1950s. The most interesting change has been among Westerners—and Americans in particular. We no longer view ourselves as heirs of a specific ethnic and national tradition. Indeed, blood relationships scarcely matter at all in the West, except within the confines of the nuclear family—and even that last bastion has fallen for almost half of all adults. The market economy is becoming the sole organizing principle of our social life.

But perhaps it doesn’t really matter who has changed. What does matter is the fundamental difference in self-perception that has developed between them and us. And in recent years the difference seems to have been growing further. Concurrently, tensions have been rising on the Korean Peninsula. In 2009, a naval battle took place near the island of Daecheong. In March 2010, a North Korean submarine may have sunk the South Korean corvette Cheonan. On November 23, 2010, the North Koreans bombarded Yeonpyeong Island.

Are the two trends related? Yes, according to B.R. Myers, who concludes:

There is no easy solution to the North Korea problem, but to begin to solve it, we must realize that its behavior is aggressive, not provocative, and that its aggression is ideologically built in. Pyongyang is thus virtually predestined to push Seoul and Washington too far, thereby bringing about its own ruin. (Myers, 2010b)

It’s neither novel nor controversial to say that the Korean conflict is ideologically driven. What is new is the apparent ideological renewal of this conflict. After a lull of two decades—the “End of History”—we seem to be entering a new Cold War: post-nationalism versus nationalism, globalism versus localism, us versus them.

And the situation will probably get worse before it gets better.

Will South Korea abandon its Global Korea policy? Unlikely. This policy is backed by the local and international business community and by a broad cross-section of South Korean society. Opposition to it is disorganized, and it’s hard to see how opposition can organize within the current framework of “right” and “wrong.” Globalism is “right.” Ethnic nationalism is “wrong.” South Koreans can disagree over the ways and means of building a post-national society, but the actual goal is beyond criticism.

Needless to say, this push for post-nationalism is under way throughout the Western world. Is it going to stop? Unlikely, at least not in the near future. Will American policymakers try to call a halt in South Korea for purely pragmatic reasons, i.e., for the sake of world peace? Also unlikely. Given the reception of Myers’ book, they’ll see a golden opportunity to frame the Korean conflict in progressive terms—as a struggle to defend a modern, inclusive, and post-national society.

Will the North Koreans join us in embracing post-nationalism? Unlikely. They aren’t plugged into our current notions of right and wrong. They don’t watch American TV. Their students don’t go to American universities. They don’t have our pundits, experts, and policy wonks. They just aren’t exposed to our norms of correct thinking.

Will the North Korean regime fall? Unlikely. There’s no reason to believe it’s closer to collapse today than it was in the 1990s. Back then, the entire eastern bloc seemed to be disintegrating, and North Korea had to cope with a sudden loss of subsidies from the Soviet Union. As bad as things now are in North Korea, the situation is nowhere near as bad as it was back then. Just as importantly, its allies to the north—China and Russia—have likewise weathered the storm and are entering a period of renewed self-confidence.

All of this leads to two conclusions. First, the divide between them and us will continue to grow. There is no desire on either side for genuine rapprochement.

And the second conclusion? The North Korean leadership no longer sees the conquest of South Korea as a goal that can be pushed indefinitely into the future. It is something that must happen soon—before the demographic changes in the South become irreversible. Yes, war is coming. Soon.

In this, I claim no access to inside information. I simply know that the North Koreans care about their country and their people in a way that most of us no longer understand. To me, the eventual outcome seems inevitable.

By a strange quirk of fate the Korean Peninsula is once more becoming a fracture zone between two ways of viewing the world. And the Korean people will be the first victims.

But they won’t be the only ones.


Myers, B.R. (2010a). The Cleanest Race: How North Koreans See Themselves—and Why It Matters, Brooklyn: Melville House.

Myers, B.R. (2010b). North Korea will never play nice, The New York Times – The Opinion Pages, November 24, 2010.