Tuesday, October 28, 2008

Skin color and vitamin D

Differences in human skin color are commonly explained as an adaptive response to solar UV radiation and latitude. The farther you are from the equator, the weaker the solar UV and the less melanin your skin needs to prevent sunburn and skin cancer.

A variant of this explanation involves vitamin D, which the body needs to make strong bones and which the skin produces with the help of UV-B. The farther you are from the equator, the lighter your skin must be to let enough UV-B into its tissues for vitamin D production. Or so the explanation goes.

To test this hypothesis, Osborne et al. (2008) measured skin color and bone strength in a hundred white and Asian adolescent girls from Hawaii. Skin color was measured at the forehead and the inner arm. Bone strength was measured by section modulus (Z) and bone mineral content (BMC) at the proximal femur. A multiple regression was then performed to investigate the influences of skin color, physical activity, age, ethnicity, developmental age, calcium intake, and lean body mass on Z and BMC. Result: no significant relationship between skin color and bone strength.

Is there, in fact, any hard evidence that humans vary in skin color because they need to maintain the same level of vitamin D production in the face of varying levels of UV-B? Robins (1991, pp. 204-205) found the data to be unconvincing when he reviewed the literature. In particular, there seems to be little relationship between skin color and blood levels of 25-OHD—one of the main circulating metabolites of vitamin D:

The vulnerability of British Asians to rickets and osteomalacia has been ascribed in part to their darker skin colour, but this idea is not upheld by observations that British residents of West Indian (Afro-Caribbean) origin, who have deeper skin pigmentation than the Asians, very rarely manifest clinical rickets … Moreover, artificial irradiation of Asian, Caucasoid and Negroid subjects with UV-B produced similar increases in blood 25-OHD levels irrespective of skin pigmentation … A study under natural conditions in Birmingham, England, revealed comparable increases in 25-OHD levels after the summer sunshine from March to October in groups of Asians, West Indians and Caucasoids … This absence of a blunted 25-OHD response to sunlight in the dark-skinned West Indians at high northerly latitudes (England lies farther north than the entire United States of America except for Alaska) proves that skin colour is not a major contributor to vitamin D deficiency in northern climes.

The higher incidence of rickets in British Asians probably has less to do with their dark color than with their systematic avoidance of sunlight (to remain as light-skinned as possible).

Skin color and natural selection via solar UV

Solar UV seems to be a weak agent of natural selection, be it through sunburn, skin cancer, or vitamin D deficiency. Brace et al. (1999) studied skin color variation in Amerindians, who have inhabited their continents for 12,000-15,000 years, and in Australian Aborigines, who have inhabited theirs for some 50,000 years. Assuming that latitudinal skin-color variation in both groups tracks natural selection by solar UV, their calculations show that this selection would have taken over 100,000 years to create the skin-color difference between black Africans and northern Chinese and ~200,000 years to create the one between black Africans and northern Europeans (Brace et al., 1999). Yet modern humans began to spread out of Africa only about 50,000 years ago. Clearly, something other than solar UV has also influenced human variation in skin color ... and one may wonder whether lack of solar UV has played any role, via natural selection, in the extreme whitening of some human populations.

Indeed, people seem to do just fine with a light brown color from the Arctic Circle to the equator. Skeletal remains from pre-contact Amerindian sites show little evidence of rickets or other signs of vitamin D deficiency—even at latitudes where Amerindian skin is much darker than European skin (Robins, 1991, p. 206).

Why, then, are Europeans so fair-skinned when ground-level UV radiation is equally weak across Europe, northern Asia, and North America at all latitudes above 47° N (Jablonski & Chaplin, 2000)? Proponents of the vitamin D hypothesis will point to the Inuit and say that non-Europeans get enough vitamin D at high northerly latitudes from fatty fish, so they don’t need light skin. In fact, if we look at the indigenous peoples of northern Asia and North America above 47° N, most of them live far inland and get little vitamin D from their diet. For instance, although the Athapaskans of Canada and Alaska live as far north as the Inuit and are even somewhat darker-skinned, their diet consists largely of meat from land animals (caribou, deer, ptarmigan, etc.). The same may be said for the native peoples of Siberia.

Conversely, fish consumption is high among the coastal peoples of northwestern Europe. Skeletal remains of Danes living 6,000-7,000 years ago have the same carbon isotope profile as those of Greenland Inuit, whose diet is 70-95% of marine origin (Tauber, 1981). So why are Danes so light-skinned despite a diet that has long included fatty fish?

Skin color and sexual selection via male choice

Latitudinal variation in human skin color is largely an artefact of very dark skin in sub-Saharan agricultural peoples and very light skin in northern and eastern Europeans. Elsewhere, the correlation with latitude is much weaker. Indeed, human skin color seems to be more highly correlated with the incidence of polygyny than with latitude (Manning et al., 2004).

This second correlation is especially evident in sub-Saharan Africa, where high-polygyny agriculturalists are visibly darker than low-polygyny hunter-gatherers (i.e., Khoisans, pygmies) although both are equally indigenous. Year-round agriculture allows women to become primary food producers, thereby freeing men to take more wives. Thus, fewer women remain unmated and men are less able to translate their mate-choice criteria into actual mate choice. Such criteria include a preference, widely attested in the African ethnographic literature, for so-called 'red' or 'yellow' women — this being part of a general cross-cultural preference for lighter-skinned women (van den Berghe & Frost, 1986). Less mate choice means weaker sexual selection for light skin in women and, hence, less counterbalancing of natural selection for dark skin in either sex to protect against sunburn and skin cancer. Result: a net increase in selection for dark skin.

Just as weaker sexual selection may explain the unusually dark skin of sub-Saharan agricultural peoples, stronger sexual selection may explain the unusually light skin of northern and eastern Europeans, as well as other highly visible color traits.

Among early modern humans, sexual selection of women varied in intensity along a north-south axis. First, the incidence of polygyny decreased with distance from the equator. The longer the winter, the more it cost a man to provision a second wife and her children, since women could not gather food in winter. Second, the male death rate increased with distance from the equator. Because the land could not support as many game animals per unit of land area, hunting distance increased proportionately and hunters more often encountered mishaps (drowning, falls, cold exposure, etc.) or ran out of food, especially if other food sources were scarce.

Sexual selection of women was strongest where the ratio of unmated women to unmated men was highest. This would have been in the ‘continental Arctic’, a steppe-tundra environment where women depended the most on men for food and where hunting distances were the longest (i.e., long-distance hunting of highly mobile herds with no alternate food sources). Today, this environment is confined to the northern fringes of Eurasia and North America. As late as 10,000 years ago, it reached much further south. This was particularly so in Europe, where the Scandinavian icecap had pushed the continental Arctic down to the plains of northern and eastern Europe (Frost, 2006).

The same area now corresponds to a zone where skin is almost at the physiological limit of depigmentation and where hair and eye color have diversified into a broad palette of vivid hues. This ‘European exception’ constitutes a major deviation from geographic variation in hair, eye, and skin color (Cavalli-Sforza et al., 1994, pp. 266-267).


Brace, C.L., Henneberg, M., & Relethford, J.H. (1999). Skin color as an index of timing in human evolution. American Journal of Physical Anthropology, 108 (supp. 28), 95-96.

Cavalli-Sforza, L.L., Menozzi, P., & Piazza, A. (1994). The History and Geography of Human Genes. Princeton: Princeton University Press.

Frost, P. (2006). European hair and eye color - A case of frequency-dependent sexual selection? Evolution and Human Behavior, 27, 85-103.

Jablonski, N.G., & Chaplin, G. (2000). The evolution of human skin coloration. Journal of Human Evolution, 39, 57-106.

Manning, J.T., Bundred, P.E., & Mather, F.M. (2004). Second to fourth digit ratio, sexual selection, and skin colour. Evolution and Human Behavior, 25, 38-50.

Osborne, D.L., Weaver, C.M., McCabe, L.D., McCabe, G.M., Novotony, R., Boushey, C., & Savaiano, D.A. (2008). Assessing the relationship between skin pigmentation and measures of bone strength in adolescent females living in Hawaii. American Journal of Physical Anthropology, 135(S46), 167.

Robins, A.H. (1991). Biological perspectives on human pigmentation. Cambridge Studies in Biological Anthropology. Cambridge: Cambridge University Press.

Tauber, H. (1981). 13C evidence for dietary habits of prehistoric man in Denmark. Nature, 292, 332-333.

van den Berghe, P.L., & Frost, P. (1986). Skin color preference, sexual dimorphism and sexual selection: A case of gene-culture co-evolution? Ethnic and Racial Studies, 9, 87-113.

Tuesday, October 21, 2008

More on gene-culture co-evolution

According to the online magazine Seed, “a growing number of scientists argue that human culture itself has become the foremost agent of biological change.” Much of this change has been surprisingly recent:

In the DNA of a group of 5,000-year-old skeletons from Germany, they discovered no trace of the lactase allele, even though it had originated a good 3,000 years beforehand. Similar tests done on 3,000-year-old skeletons from Ukraine showed a 30 percent frequency of the allele. In the modern populations of both locales, the frequency is around 90 percent.

I thought I was on top of the literature, but this was new to me. It’s even more proof that human evolution did not stop with the advent of Homo sapiens. It has continued … even after the transition from prehistory to history!

The same article also has some thoughts from Bruce Lahn, the evolutionary geneticist who has mapped human variation at two genes, ASPM and microcephalin, that seem to regulate the growth of brain tissue.

Even if Lahn could prove to everyone's satisfaction that ASPM and microcephalin are under selection, whether intelligence is the trait being selected for would be far from a settled question. It could be, as Lahn suggested, that some other mental trait is being selected, or that the activity of ASPM and microcephalin in other parts of the body is what is under selection. More work will certainly be done. But one can speculate with far more confidence about what drove the dramatic increase in intelligence attested by the fossil record: the advent of human culture.

"Intelligence builds on top of intelligence," says Lahn. "[Culture] creates a stringent selection regime for enhanced intelligence. This is a positive feedback loop, I would think." Increasing intelligence increases the complexity of culture, which pressures intelligence levels to rise, which creates a more complex culture, and so on. Culture is not an escape from conditioning environments. It is an environment of a different kind.


Phelan, B. (2008). How we evolve. A growing number of scientists argue that human culture itself has become the foremost agent of biological change. Seed, posted October 7, 2008.

Wednesday, October 15, 2008

Polygyny or patrilocality?

Have all humans been more or less equally polygynous? The answer seems to be yes if we believe a team of researchers from the University of Arizona. They found that genetic diversity on the X chromosome, which spends two-thirds of its history in women, is higher relative to the chromosomes inherited equally through both sexes (autosomes) than standard models predict, in samples from six populations: Biaka (Central African Republic), Mandenka (Senegal), San (Namibia), Basques (France), Han (China), and Melanesians (Papua New Guinea). Their conclusion: “our results point to a systematic difference between the sexes in the variance in reproductive success; namely, the widespread effects of polygyny in human populations.” In other words, proportionately more women than men have been contributing to the gene pool (Hammer et al., 2008).
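The expected effect of polygyny on this comparison can be sketched with the standard effective-population-size formulas from population genetics. This is a textbook illustration, not the model Hammer et al. actually fit, and the function names are mine:

```python
def eff_size_autosome(nm, nf):
    """Wright's effective population size for autosomes,
    given nm breeding males and nf breeding females."""
    return 4.0 * nm * nf / (nm + nf)

def eff_size_x(nm, nf):
    """Effective population size for the X chromosome
    (two copies in each female, one in each male)."""
    return 9.0 * nm * nf / (4.0 * nm + 2.0 * nf)

def x_to_autosome_ratio(nm, nf):
    """Expected ratio of X-linked to autosomal diversity."""
    return eff_size_x(nm, nf) / eff_size_autosome(nm, nf)

# Equal numbers of breeding males and females: the classic 0.75
print(x_to_autosome_ratio(100, 100))  # 0.75

# Strong polygyny, one breeding male per four breeding females:
# the ratio rises toward its upper limit of 1.125
print(x_to_autosome_ratio(25, 100))   # 0.9375
```

A ratio above 0.75 is the signature that Hammer et al. read as polygyny. The patrilocality objection discussed below is that bride movement between communities can push the observed ratio in the same direction.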

It’s no surprise that polygyny has existed in the six populations under study. Almost all human populations are polygynous to some degree. The surprise is the relative lack of difference between the European and African subjects. Indeed, according to this study, the Basques have been more polygynous than the Mandenka. This is truly counterintuitive. Among the Basques, polygyny is normally limited to its serial form (marriage to a second wife upon the death of the first), plus occasional cuckoldry. Among the Mandenka, polygyny is the preferred form of marriage.

For some people in the blogosphere, this is simply scientific truth and we just have to accept it, however counterintuitive it may seem. There is nonetheless an alternate explanation: patrilocality. In many societies, a wife goes to live in her husband’s community after marriage. This has the effect of inflating the genetic diversity of women in any one community.

These two confounded explanations, polygyny and patrilocality, bedeviled the previous methodology of comparing maternally inherited mtDNA with the paternally inherited Y chromosome. With the new methodology, patrilocality biases the results even more, because the Y chromosome is no longer available as a point of reference.

The University of Arizona researchers do not mention patrilocality in their paper although they do discuss ‘sex-biased forces.’ Under this heading, they tested a model where only females migrate between communities (‘demes’) and at such a rate that panmixia eventually results. They concluded that this factor could not be significant. To my mind, the model is unrealistic, partly because the assumed migration rate is far too high and partly because two demes are used to represent a real world where brides are exchanged among many communities separated by varying genetic distances. To be specific, the more genetically different a bride is from her host community, the further away will be her community of origin, and the lower will be the probability of panmixia between the two.

To the extent that the methodology is biased toward patrilocality effects, any polygyny effects will be less apparent. And if the methodology primarily tracks patrilocality, which is widespread in all of these populations, no major differences would be observable among the different population samples.

In addition, there may be a weak inverse relationship between patrilocality and polygyny. Patrilocality correlates with patriarchy, which correlates with high paternal investment, which inversely correlates with polygyny. If so, the two effects – polygyny and patrilocality – would tend to cancel each other out in the data.

Finally, the burden of proof is on those who propose new methodologies, especially one that produces inconsistent results. The University of Arizona researchers themselves say as much: “Our findings of high levels of diversity on the X chromosome relative to the autosomes are in marked contrast to results of previous studies in a wide range of species including humans.” More importantly, their findings run counter to the comparative literature on human mating systems. To cite only one authority, Pebley and Mbugua (1989) note:

In non-African societies in which polygyny is, or was, socially permissible, only a relatively small fraction of the population is in polygynous marriages. Chamie's (1986) analysis of data for Arab Muslim countries between the 1950s and 1980s shows that only 5 to 12 percent of men in these countries have more than one wife. … Smith and Kunz (1976) report that less than 10 percent of nineteenth-century American Mormon husbands were polygynists. By contrast, throughout most of southern West Africa and western Central Africa, as many as 20 to 50 percent of married men have more than one wife … The frequency is somewhat lower in East and South Africa, although 15 to 30 percent of husbands are reported to be polygynists in Kenya and Tanzania.


Hammer, M.F., Mendez, F.L., Cox, M.P., Woerner, A.E., & Wall, J.D. (2008). Sex-biased evolutionary forces shape genomic patterns of human diversity. PLoS Genet, 4(9), e1000202. doi:10.1371/journal.pgen.1000202

Pebley, A. R., & Mbugua, W. (1989). Polygyny and Fertility in Sub-Saharan Africa. In R. J. Lesthaeghe (ed.), Reproduction and Social Organization in Sub-Saharan Africa, Berkeley: University of California Press, pp. 338-364.

Wednesday, October 8, 2008

Ancient reading and writing

The French journal L’Histoire has a special issue on reading and writing in ancient societies. One article, about Mesopotamia, makes several points that support an argument I have made: the invention of writing, especially alphabetical writing, created a strong selection pressure for people who had the rare ability to take dictation or copy written texts with a low error rate and over extended lengths of time (Frost, 2007).

1. In the ancient world, reading and writing required much stamina, concentration, and memorization, more than is the case with today’s reader-friendly writing systems. This may be seen in the long training needed to make a good scribe.

To learn cuneiform writing, students followed a specific and highly standardized curriculum that has been reconstructed thanks to the thousands of exercises that have been found. Training began with the writing of simple signs, then lists of syllables and names. Next came the copying of long lexical lists covering all sorts of things: names of trades, animals, plants, vases, wooden objects, fabrics, … Then came the copying of complex Sumerian ideograms, even though Sumerian had become a dead language, with their pronunciation and their translation into Akkadian. Training in Sumerian was completed by copying increasingly difficult texts: proverbs and contracts, and then hymns.

2. Scribes were not recruited from the general population. Their profession seems to have been largely family-transmitted, and was recognized as such.

In the early 2nd millennium, cuneiform was learned in a master’s home, not in an institutional “school”. The tradition was often passed down within families, with scribes training their own children.

3. Although writing was generally done by scribes, many more people could read and, if need be, write.

It has long been believed that in ancient Mesopotamia only a very small part of the population knew how to read and write and that these skills were reserved for specialists, i.e., scribes. Several recent studies have called this idea into question and have shown that access to reading, and even writing, was not so uncommon. Some kings, and also the members of their entourage, family, ministers, or generals, as well as merchants, could do without a reader’s services, when necessary, and decipher on their own the letters sent to them. Sometimes, they were even able to take up a stylus (the sharpened end of a reed) and write their own tablets.

The last point may help us understand a chicken-and-egg question. If reading and writing are associated with specific genetic predispositions, how did people initially manage to read and write? (see previous posts: Decoding ASPM: Part I, Part II, Part III)

The answer is that these predispositions are not necessary for reading and writing. But they do help. Specifically, they help the brain process written characters faster. In this way, natural selection has genetically reinforced an ability that started as a purely cultural innovation.

This may be a recurring pattern in human evolution. Humans initially took on new tasks, like reading and writing, by pushing the envelope of mental plasticity. Then, once these tasks had become established and sufficiently widespread, natural selection favored those individuals who were genetically predisposed to do them better.

The term is ‘gene-culture co-evolution’ and it’s still a novel concept. Until recently, anthropologists thought that human cultural evolution had simply taken over from human genetic evolution, making the latter unnecessary and limiting it to superficial ‘skin-deep’ changes. But recent findings paint a different picture. Genetic evolution has actually accelerated in our species over the past 40,000 years, and even more over the past 10,000-15,000 years. The advent of agriculture saw the rate increase a hundredfold. In all, natural selection has changed at least 7% of the genome during the existence of Homo sapiens (Hawks et al., 2007; see previous post). And this is a minimal estimate that excludes much variation that may or may not be due to selection. The real figure could be higher. Much higher.


Frost, P. (2007). The spread of alphabetical writing may have favored the latest variant of the ASPM gene. Medical Hypotheses, 70, 17-20.

Hawks, J., Wang, E.T., Cochran, G.M., Harpending, H.C., & Moyzis, R.K. (2007). Recent acceleration of human adaptive evolution. Proceedings of the National Academy of Sciences (USA), 104(52), 20753-20758.

Lion, B. (2008). Les femmes scribes de Mésopotamie. L’Histoire, no. 334 (septembre), pp. 46-49.

Wednesday, October 1, 2008

Thoughts on the crisis

Historians will argue back and forth over the causes of the current economic crisis, just as they still argue over the causes of the Great Depression. But there is consensus on some points:

  • The U.S. government wanted to increase home ownership among Hispanic and African Americans. Since it was politically unacceptable to impose racial quotas on mortgage money, this could be done only by pressuring banks to relax lending practices, even to the point of eliminating down payments (!). The rules were thus loosened across the board for everyone.
  • The increase in home buyers set off an inflationary spiral that became self-perpetuating. People bought houses with the intention of reselling them at much higher prices.
  • The rising house prices fuelled demand for housing construction. Entire exurbs of McMansions were being built until the crisis began.
  • The Federal Reserve kept all of this going by providing lenders with cheap money.

In sum, the boom kept going as long as enough people could borrow enough money to buy more and more homes at higher and higher prices.

It couldn’t go on forever. On the one hand, wages have not kept pace with housing prices, not to mention the rising cost of oil and food. On the other, mortgages were being given to people who were, by any honest measure, insolvent.

Will the crisis be resolved by the proposed $700 billion bailout? This one might be resolved, at least for now. But the same kind of speculative bubble could happen elsewhere in the economy for similar reasons. The U.S. economy is increasingly geared to creating illusory value.

In all fairness, the bailout may buy time to dismantle the bubble economy before more damage is done. The U.S. government could stop badgering lenders to relax their lending criteria. The Federal Reserve could stop providing cheap money. The speculators and deadbeats could be stripped of their ill-gotten gains.

It won’t happen.

What then? A full-blown recession will likely be averted for another two years. By then, any further bailout would reach astronomical figures and simply drag down those who were wise enough to shun speculation and improvidence.

When I was an undergrad, I remember reading a Marxist book on economics. Among other things, it argued that the boom-bust cycle is inevitable. Once a boom has set in, decision-making becomes less and less optimal. Market discipline slackens and incompetence increasingly goes unpunished. Even if you do get fired or if your company goes under, you can always get rehired elsewhere. Many people also take advantage of the boom to make money through pure speculation, and they will do their utmost to keep the boom going until speculation has become the main driving force. Why not? It’s their bread and butter … or rather their cocaine and cognac.

And so, the longer the boom goes on, the greater the load of inefficiency that the economy has to bear. Eventually a crisis becomes inevitable and even desirable … to clean all the gunk out of the system.

But there is another wrinkle to the current boom-bust cycle. It is playing out against the backdrop of a worsening commodity crisis. Demand is increasing faster than supply for the basics of life, particularly oil, food, and water. Resource-rich countries will be all right. But things will be less rosy for areas that have a high ratio of people to resources, like the Eastern U.S., California, Western Europe, and many areas of the Third World.

Nonetheless, many of these same areas have embarked on a program of aggressive population growth through immigration. The U.S. is projected to grow by 135 million in just 42 years—a 44% increase (Camarota, 2008). The United Kingdom is slated to grow by 16 million in 50 years—a 26% increase (United Kingdom – Wikipedia).

This situation might be manageable if the immigrants were going into export sectors that can earn foreign exchange and pay for increased imports of oil, food, and water (yes, fresh water will become an item of international trade). But they aren’t. For the most part, they are being brought in to serve the needs of agribusiness, slaughterhouses, landscapers, homebuilders, hotel and restaurant services, and so forth.

Yes, we are living in interesting times.


Camarota, S.A. (2008). How many Americans? The Washington Post. Tuesday, September 2, 2008; Page A15