Saturday, April 26, 2014

Small effects at many genes, but are the effects non-additive?


 
Allele dominance (source). A single copy of a dominant allele has the same phenotypic effect as two copies. The current thinking is that intellectual capacity has increased in humans through new alleles that cause small positive effects at a large number of gene loci. It now seems that some of these new alleles display non-additive effects.
 

How has intellectual capacity increased in the course of human evolution? The current thinking is that natural selection has favored new alleles that cause small positive effects at a large number of gene loci. Over the human genome, these little effects have added up to produce a large effect that distinguishes us from our predecessors.

But are these effects simply additive? Many alleles are dominant, i.e., a single copy has the same effect as two copies. Many alleles also interact with alleles at other gene loci. It would be strange if none of the many gene loci involved in intellectual capacity showed any dominance or interaction. This point was made over a decade ago:

The search for genes associated with variation in IQ will be made more difficult, to the extent that genetic effects on IQ are not additive. We used earlier the illustrative possibility that IQ was affected by 25 genes, each with an equal, additive effect (paragraph 7.15). But some genetic effects, dominance and epistasis, are not additive.

[...] For example, it might be the case that allele 5 of the IGF2R gene is associated with high IQ only if it is accompanied by particular alleles at other loci. In their absence, it is accompanied by normal or even low IQ. If that were true, it would clearly be difficult to detect, and replicate, substantial effects.

[...] Is the genetic variance underlying variation in IQ mostly additive? We noted in Chapter 4 that much research in behavioural genetics assumes this to be the case. But two relatively sophisticated attempts to model IQ variation, while both concluding that the overall broad-sense heritability of IQ is about 0.50, also argue that additive genetic variance accounted for no more than about 30% of the overall variation in IQ, while non-additive effects accounted for some 20%. (Nuffield Council on Bioethics, 2002)

Yet many researchers still argue for a simple additive model. Davies et al. (2011) estimated additive genetic variance at 40-51%. As some of the same authors later pointed out, however, the methodology of that study (genome-wide complex trait analysis) ignores non-additive effects: "GCTA estimates additive genetic influence only, so that non-additive effects (gene–gene and gene-environment interaction) are not captured either" (Trzaskowski et al., 2013).
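
To make the distinction concrete, here is a minimal sketch of how dominance at a single locus breaks a purely additive model. The effect sizes are purely illustrative and are not taken from any of the studies cited above:

```python
# Toy illustration: how dominance makes allele effects non-additive
# at a single locus. Under a purely additive model, each copy of the
# "plus" allele adds the same increment; under full dominance, one
# copy is as good as two.

def additive_value(copies, effect=1.0):
    return copies * effect

def dominant_value(copies, effect=1.0):
    return effect * 2 if copies >= 1 else 0.0

for copies in (0, 1, 2):
    print(copies, additive_value(copies), dominant_value(copies))

# additive: 0.0, 1.0, 2.0 -- the heterozygote is exactly intermediate
# dominant: 0.0, 2.0, 2.0 -- the heterozygote equals the homozygote,
# so summing per-allele effects across loci misstates the phenotype
```

Epistasis creates the same problem across loci: the contribution of one locus depends on which alleles are present at another, so per-allele effects no longer add up, and methods that only capture additive variance will miss part of the genetic influence.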
 

Differences among human populations

Davide Piffer (2013) has studied geographic variation in alleles that influence intellectual capacity. He began with seven SNPs whose alleles are associated with differences in performance on PISA or IQ tests. Then, for fifty human populations, he looked up the prevalence of each allele that seems to increase performance. Finally, for each population, he calculated the average prevalence of these performance-enhancing alleles across all seven loci.
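
As I read it, the final step is simply an unweighted average across the seven loci. A minimal sketch, with made-up frequencies standing in for the published ones:

```python
# Sketch of the averaging step described above. The frequencies below
# are hypothetical placeholders; the real values come from Piffer (2013).
# For each population, take the frequency of the performance-increasing
# allele at each of the seven SNPs and average them.

freqs = {
    # population: frequency of the "increaser" allele at each of 7 SNPs
    "Population A": [0.45, 0.38, 0.41, 0.35, 0.40, 0.37, 0.39],
    "Population B": [0.20, 0.15, 0.18, 0.12, 0.17, 0.14, 0.16],
}

for pop, f in freqs.items():
    print(pop, round(sum(f) / len(f), 3))  # average prevalence across loci
# -> Population A 0.393, Population B 0.16 (with these made-up numbers)
```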

The average prevalence was 39% among East Asians, 36% among Europeans, 32% among Amerindians, 24% among Melanesians and Papua New Guineans, and 16% among sub-Saharan Africans. The lowest scores were among San Bushmen (6%) and Mbuti Pygmies (5%). A related finding is that all but one of the alleles seem to be specific to humans and not shared with ancestral primates.

Davide Piffer has now used these geographic differences in allele frequencies to estimate the corresponding geographic differences in “genotypic IQ”, i.e., the genetic component of intellectual capacity:

I had already estimated the African genotypic IQ from my principal component scores extracted from allele frequencies (Piffer, 2013) for different populations. If we take the factor score of people living in equal environmental conditions (Europeans and Japanese), we can figure out how many IQ points each unit score corresponds to. The factor score of Europeans is 0, that of the Japanese is 1.23. The average IQ of Europeans is 99 and that of the Japanese is 105. Thus, 6 IQ points equal a difference of 1.23 factor scores. The factor score of sub-Saharan Africans is -1.73, which is 1.41 times greater than the difference between Europeans and East Asians. Thus, the genotypic IQ difference between Africans and Europeans must be 6*1.41= 8.46. Thus the real African genotypic IQ is 99-8.46= 90.54 (source)
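
The arithmetic in this passage can be checked directly. The sketch below simply re-runs the figures quoted above; the small discrepancy with the quoted 90.54 comes from rounding 1.73/1.23 to 1.41:

```python
# Reproducing the arithmetic in the quoted passage (all figures taken
# from the quote itself, not recomputed from raw data).

iq_eur, iq_jpn = 99, 105          # phenotypic IQ means assumed in the quote
fs_eur, fs_jpn = 0.0, 1.23        # factor scores from the allele frequencies
fs_afr = -1.73                    # sub-Saharan African factor score

points_per_unit = (iq_jpn - iq_eur) / (fs_jpn - fs_eur)  # ~4.88 IQ points per unit
deficit = (fs_eur - fs_afr) * points_per_unit            # ~8.44 points
print(round(iq_eur - deficit, 2))                        # ~90.56, i.e., about 91
```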

This estimate of 91 seems to contradict the IQ literature, although there is still disagreement over the mean IQ of sub-Saharan Africans. In their review of the literature, Wicherts et al. (2010) argue for a mean of 82, whereas Lynn (2010) puts it at 66. Rindermann (2013) favors a “best guess” of 75. There is some fudging in all of these estimates, since no one really knows how much adjustment should be made for the Flynn Effect. Indeed, what is the potential for IQ gains in societies that are still becoming familiar not only with test taking but also with the entire paradigm of giving standardized answers to standardized questions?

We are on firmer ground when estimating the mean IQ of African Americans, which seems to be around 85, i.e., 15 points below the Euro-American mean. We can argue back and forth over the cause, but the same gap comes up time and again, even when black and white children are adopted into the same home environment. This was the finding of the Minnesota Transracial Adoption Study: a longitudinal study of black, biracial, and white children adopted into white middle-class Minnesotan families, as well as the biological children of the same families (Levin, 1994; Lynn, 1994; Scarr and Weinberg, 1976; Weinberg, Scarr, and Waldman, 1992). IQ was measured when the adopted children were on average 7 years old and the biological children on average 10 years old. They were tested again ten years later. Between the two tests, all four groups declined in mean IQ. On both tests, however, the differences among the four groups remained unchanged, particularly the 15-point gap between blacks and whites. Whatever the cause, it must happen very early in life. Could it be in the womb? We would then have to explain the consistently halfway scores of the biracial children, who were born overwhelmingly to white mothers.

In any case, whether we accept the African American mean of 85 (which is influenced by some admixture with other groups) or the upper estimate of 82 for sub-Saharan Africans, we are still well below the “genotypic” estimate of 91.  Is this an indication of non-additive effects? Do some of the intelligence-boosting alleles display partial dominance? Do some of them interact with other such alleles?
 

References

Davies, G., A. Tenesa, A. Payton, J. Yang, S.E. Harris, D. Liewald, X. Ke., et al. (2011). Genome-wide association studies establish that human intelligence is highly heritable and polygenic, Molecular Psychiatry, 16, 996–1005.
http://www.nature.com/mp/journal/v16/n10/abs/mp201185a.html  

Levin, M. (1994). Comment on the Minnesota transracial adoption study, Intelligence, 19, 13-20.
http://www.sciencedirect.com/science/article/pii/0160289694900493  

Lynn, R. (1994). Some reinterpretations of the Minnesota Transracial Adoption Study, Intelligence, 19, 21-27.
http://www.sciencedirect.com/science/article/pii/0160289694900507  

Lynn, R. (2010). The average IQ of sub-Saharan Africans assessed by the Progressive Matrices: A reply to Wicherts, Dolan, Carlson & van der Maas, Learning and Individual Differences, 20, 152-154.
http://www.sciencedirect.com/science/article/pii/S1041608010000348  

Nuffield Council on Bioethics. (2002). Genetics and human behaviour: The ethical context. London
http://www.nuffieldbioethics.org/sites/default/files/files/Genetics%20and%20behaviour%20Chapter%207%20-%20Review%20of%20the%20evidence%20intelligence.pdf  

Piffer, D. (2013). Factor analysis of population allele frequencies as a simple, novel method of detecting signals of recent polygenic selection: The example of educational attainment and IQ, Interdisciplinary Bio Central, provisional manuscript
http://www.ibc7.org/article/journal_v.php?sid=312  

Rindermann, H. (2013). African cognitive ability: Research, results, divergences and recommendations, Personality and Individual Differences, 55, 229-233.
http://www.sciencedirect.com/science/article/pii/S0191886912003741  

Scarr, S., and Weinberg, R.A. (1976). IQ test performance of Black children adopted by White families, American Psychologist, 31, 726-739.
http://www.kjplanet.com/amp-31-10-726.pdf   

Trzaskowski, M., O.S.P. Davis, J.C. DeFries, J. Yang,  P.M. Visscher, and R. Plomin. (2013). DNA Evidence for strong genome-wide pleiotropy of cognitive and learning abilities, Behavior Genetics, 43(4), 267–273.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3690183/  

Weinberg, R.A., Scarr, S., and Waldman, I.D. (1992). The Minnesota Transracial Adoption Study: A follow-up of IQ test performance at adolescence, Intelligence, 16, 117-135.
http://www.sciencedirect.com/science/article/pii/016028969290028P  

Wicherts, J.M., C.V. Dolan, and H.L.J. van der Maas. (2010). A systematic literature review of the average IQ of sub-Saharan Africans, Intelligence, 38, 1-20.
http://mathsci.free.fr/survey.pdf

Saturday, April 19, 2014

The novelty effect: a factor in mate choice


 
Series of facial images from clean-shaven to full beard (Janif et al., 2014)


For the past thirty years, the tendency has been to study sexual attractiveness from the observer's standpoint, i.e., we choose mates on the basis of what's good for us. We therefore unconsciously look for cues that tell us how healthy or fertile a potential mate may be. But what about the standpoint of the person being observed? If you want to be noticed on the mate market, it's in your interest to manipulate any mental algorithm that will make you noticeable, including algorithms that have nothing to do with mating and exist only to keep track of unusual things in the observer's surroundings. If you're more brightly colored or more novel in appearance, you will stand out and thus increase your chances of finding a mate.

We see this with hair color. In one study, men were shown pictures of attractive women and asked to choose the one they most wanted to marry. One series had equal numbers of brunettes and blondes, a second series 1 brunette for every 5 blondes, and a third 1 brunette for every 11 blondes. It turned out that the scarcer the brunettes were in a series, the likelier any one brunette would be chosen (Thelen, 1983). Another study likewise found that Maxim cover girls were disproportionately light blonde or dark brown, and much less often the more usual dark blonde or light brown (Anon, 2008). This novelty effect may be seen in sales of home interior colors over the past half-century: preference for one color rises until satiated, then falls and yields to preference for another (Stansfield & Whitfield, 2005).

The novelty effect seems to apply not only to colors but also to other visible features. In a recent study, participants were shown a series of faces with different degrees of beardedness. A clean-shaven face was preferred to the degree that it was rare, being most appreciated when the other faces had beards. Heavy stubble and full beards were likewise preferred to the degree that they were rare (Janif et al., 2014).

The authors conclude:
 
Concordant effects of frequency-dependent preferences among men and women might reflect a domain-general effect of novelty. Frost [20] suggested the variation in female blond, brown and red hair between European populations spread, geographically, from where they first arose, via negative frequency-dependent preferences for novelty. There is some evidence that men's preferences increase for brown hair when it is rare [21] and for unfamiliar (i.e. novel) female faces [22]. (Janif et al., 2014)
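
The logic of a negative frequency-dependent preference is easy to see in a toy model. The parameters below are arbitrary and are not drawn from any of the studies cited; the point is only that a variant gains while rare and loses its edge as it spreads, so several variants can be maintained at once:

```python
# Toy simulation of a negative frequency-dependent preference: a trait
# is assumed to be more attractive the rarer it is, so its share of the
# population grows while it is novel and levels off once it is common.
# All parameters are arbitrary.

p = 0.05                                      # starting share of the trait
for generation in range(30):
    attractiveness = 1.0 - p                  # rarer = more attractive (key assumption)
    p = p + 0.3 * p * (attractiveness - 0.5)  # adoption tracks relative appeal
    print(generation, round(p, 3))

# p climbs while the trait is rare and levels off as it approaches 0.5,
# the point at which novelty no longer confers an advantage -- which is
# how such a preference can keep several variants in circulation.
```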

 
The authors go on to suggest that the quest for novelty may drive the ups and downs of fashion trends. A new fashion will rise sharply in popularity when it is still unfamiliar to most people. As the novelty wears off, its popularity will peak and then decline, especially if it faces competition from a more recent fashion.

There are certainly limits to the novelty effect—something can be novel but also disgusting—but it seems to be more general than previously thought.
 

References

Anon. (2008). Maxim's audience prefers brunettes; distribution is bimodal. Gene Expression, July 6, 2008.  http://www.gnxp.com/blog/2008/07/maxims-audience-prefers-brunettes.php  

Frost P. (2006). European hair and eye color: a case of frequency-dependent sexual selection? Evolution & Human Behavior, 27, 85-103.

Frost, P. (2008). Sexual selection and human geographic variation, Special Issue: Proceedings of the 2nd Annual Meeting of the NorthEastern Evolutionary Psychology Society, Journal of Social, Evolutionary, and Cultural Psychology, 2(4),169-191. http://137.140.1.71/jsec/articles/volume2/issue4/NEEPSfrost.pdf  

Janif, Z.J., R.C. Brooks, and B.J. Dixson. (2014). Negative frequency-dependent preferences and variation in male facial hair, Biology Letters, 10, early view
http://rsbl.royalsocietypublishing.org/content/10/4/20130958

Little A.C., L.M. DeBruine, B.C. Jones. (2013). Sex differences in attraction to familiar and unfamiliar opposite-sex faces: men prefer novelty and women prefer familiarity, Archives of Sexual Behavior, early view

Stansfield, J., and Whitfield, T.W.A. (2005) Can future colour trends be predicted on the basis of past colour trends? An empirical investigation, Color Research & Application, 30(3), 235-242. 

Thelen, T.H. (1983). Minority type human mate preference, Social Biology, 30, 162-180.

Saturday, April 12, 2014

Compliance with moral norms: a partly heritable trait?


 
Election poster from the 1930s for Sweden’s Social Democratic Party (source). Is the welfare state more workable if the population is more predisposed to obey moral norms?
 

Do we differ genetically in our ability, or willingness, to comply with moral norms? Please note: I'm talking about compliance. The norms themselves can vary greatly from one historical period to another and from one society to another.

Apparently some people are more norm-compliant than others. This is the conclusion of a recent twin study from Sweden (Loewen et al., 2013). A total of 2,273 individuals from twin pairs were queried about the acceptability of four dishonest behaviors: claiming sick benefits while healthy (1.4% thought it totally or fairly acceptable), avoiding paying for public transit (2.8%), avoiding paying taxes (9.7%), and accepting bribes on the job (6.4%).

How heritable were the responses to the above questions? The heritabilities were as follows: 

Claiming sick benefits while healthy - 42.5%
Avoiding paying for public transit - 42.3%
Avoiding paying taxes - 26.3%
Accepting bribes on the job - 39.7%
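
Loewen et al. (2013) derive these figures from a standard twin design, presumably via a formal twin model rather than the back-of-the-envelope version below. Still, the underlying logic can be sketched with Falconer's formula, which compares identical (MZ) and fraternal (DZ) twin correlations; the correlations here are invented for illustration:

```python
# Sketch of the twin-study logic behind heritability estimates like those
# above, using Falconer's formula: h2 = 2 * (r_MZ - r_DZ).
# The twin correlations are hypothetical, not taken from Loewen et al.

def falconer_h2(r_mz, r_dz):
    """Heritability estimated from MZ and DZ twin correlations."""
    return 2 * (r_mz - r_dz)

r_mz, r_dz = 0.45, 0.24     # hypothetical twin correlations for one item
h2 = falconer_h2(r_mz, r_dz)
c2 = r_mz - h2              # shared-environment component
e2 = 1 - r_mz               # unique environment + measurement error
print(round(h2, 2), round(c2, 2), round(e2, 2))  # -> 0.42 0.03 0.55
```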

Do these results indicate a specific predisposition to obey moral norms? Or is the genetic influence something more general, like religiosity or risk-taking, both of which are known to be partly heritable? To answer this question, the authors ran correlations with other factors:


Significant correlations were exhibited for age (r=.10, p=.00), sex (r=.12, p=.00), religiosity (r=.06, p=.00), preferences for risk (r=-.09, p=.00) and fairness (r=-.10, p=.00), locus of control (r=-.03, p=.01), and charitable giving (r=.09, p=.00). However, these significant correlations were relatively weak, suggesting that our measure is not merely standing in for these demographic and psychological differences between individuals. There were no significant correlations with behavioral inhibition (r=-.00, p=.81) or volunteering (r=.01, p=.29). (Loewen et al., 2013)


The jury is still out, but it looks like compliance with moral norms has a specific heritable component.
 

Population differences

Does this heritable component vary from one population to another, just as it seems to vary from one individual to another? The authors have little to say, other than the following:


Replication in other countries should occur, as the exact role and extent of genetic and common environment-influence could change in different national and cultural contexts. Such a multi-country approach could thus offer some clues on the generalizability of our findings. (Loewen et al., 2013)


Swedes seem to be better than most people at obeying moral norms. Only 1.4% think it acceptable to claim sick benefits while healthy! Maybe that's why they've been so successful at creating a welfare state. So few of them want to be free riders on the gravy train:


Gunnar and Alva Myrdal were the intellectual parents of the Swedish welfare state. In the 1930s they came to believe that Sweden was the ideal candidate for a cradle-to-grave welfare state. First of all, the Swedish population was small and homogeneous, with high levels of trust in one another and the government. Because Sweden never had a feudal period and the government always allowed some sort of popular representation, the land-owning farmers got used to seeing authorities and the government more as part of their own people and society than as external enemies. Second, the civil service was efficient and free from corruption. Third, a Protestant work-ethic—and strong social pressures from family, friends and neighbors to conform to that ethic—meant that people would work hard, even as taxes rose and social assistance expanded. Finally, that work would be very productive, given Sweden's well-educated population and strong export sector. (Norberg, 2006)


This is not how most of the world works. While studying in Russia, I noticed that the typical Russian feels a strong sense of moral responsibility toward immediate family and longstanding friends, more so than we in the West. Beyond that charmed circle, however, the general feeling seems to be distrust, wariness, or indifference. There was little of the spontaneous willingness to help strangers that I had taken for granted back home. People had the same sense of right and wrong, but this moral universe was strongly centered on their own families.

Sociologists call this amoral familialism: family is everything, and society is nothing, or almost nothing. The term was coined by the American political scientist Edward Banfield:


In 1958, Banfield, with the assistance of his wife, Laura, published The Moral Basis of a Backward Society, in which they explained why a region in southern Italy was poor. The reason, they said, was not government neglect or poor education, but culture. People in this area were reluctant to cooperate outside of their families. This kind of "amoral familialism," as they called it, was the result of a high death rate, a defective system of owning land, and the absence of extended families. By contrast, in an equally forbidding part of southern Utah, the residents were engaged in a variety of associations, each busily involved in improving the life of the community. In southern Italy, people did not cooperate; in southern Utah, they scarcely did anything else. (Banfield, 2003, p. viii)
 

Where did Western societies get this desire to treat family and non-family the same way? To some extent, it seems to be a longstanding trait. English historian Alan Macfarlane sees a tendency toward weaker kinship ties that goes back at least to the 13th century. Children had no automatic rights to the family property. Parents could leave their property to whomever they liked and disinherit their children if they so wished (Macfarlane, 2012).

Indeed, Macfarlane argues that "Weber's de-familization of society" was already well advanced in Anglo-Saxon times (Macfarlane, 1992, pp. 173-174). This picture of relatively weak kinship ties is consistent with the Western European marriage pattern. If we look at European societies west of a line running from Trieste to St. Petersburg, we find that certain cultural traits predominate:

- relatively late marriage for men and women
- many people who never marry
- neolocality (children leave the family household to form new households)
- high circulation of non-kin among different households (typically young people sent out as servants) (Hajnal, 1965; see also hbd* chick)

Again, these characteristics go back at least to the 13th century and perhaps much farther back (Seccombe, 1992, p. 94).

Historians associate this model of society with the rise of the market economy. In other words, reciprocal kinship obligations were replaced with monetized economic obligations, and this process in turn led to a broader-based morality that applied to everyone equally. In reality, the arrow of causation seems to have been the reverse. Certain societies, notably those of northwestern Europe, were pre-adapted to the market economy and thus better able to exploit its possibilities when it began to take off in the late Middle Ages. The expansion of the market economy and, later, that of the welfare state were thus made possible by certain pre-existing cultural and possibly genetic characteristics, i.e., weaker kinship ties and a corresponding extension of morality from the familial level to the societal level.
 

References

Banfield, E.C. (2003). Political Influence, New Brunswick (N.J.): Transaction Pub.

Hajnal, John (1965). European marriage pattern in historical perspective. In D.V. Glass and D.E.C. Eversley. Population in History. Arnold, London. 

Loewen, P.J., C.T. Dawes, N. Mazar, M. Johannesson, P. Keollinger, and P.K.E. Magnusson. (2013). The heritability of moral standards for everyday dishonesty, Journal of Economic Behavior & Organization, 93, 363-366.
https://files.nyu.edu/ctd1/public/Moral.pdf  

Macfarlane, A. (1992). On individualism, Proceedings of the British Academy, 82, 171-199.
http://www.alanmacfarlane.com/TEXTS/On_Individualism.pdf  

Macfarlane, A. (2012). The invention of the modern world. Chapter 8: Family, friendship and population, The Fortnightly Review, Spring-Summer serial
http://fortnightlyreview.co.uk/2012/07/invention-8/  

Norberg, J. (2006). Swedish Models, June 1, The National Interest.
http://www.johannorberg.net/?page=articles&articleid=151  

Seccombe, W. (1992). A Millennium of Family Change. Feudalism to Capitalism in Northwestern Europe, London: Verso.

 

Saturday, April 5, 2014

The riddle of Microcephalin


 
World distribution of the recent Microcephalin allele. The prevalence is indicated in black and the letter 'D' refers to the 'derived' or recent allele (Evans et al., 2005)
 

Almost a decade ago, there was much interest in a finding that a gene involved in brain growth, Microcephalin, continued to evolve after modern humans had begun to spread out of Africa. The 'derived' allele of this gene (the most recent variant) arose some 37,000 years ago somewhere in Eurasia and even today is largely confined to the native populations of Eurasia and the Americas (Evans et al., 2005).

Interest then evaporated when no significant correlation was found between this derived allele and higher scores on IQ tests (Mekel-Bobrov et al, 2007; Rushton et al., 2007). Nonetheless, a later study did show that this allele correlates with increased brain volume (Montgomery and Mundy, 2010).

So what is going on? Perhaps the derived Microcephalin allele helps us on a mental task that IQ tests fail to measure. Or perhaps it boosts intelligence in some indirect way that shows up in differences between populations but not in differences between individuals.

The second explanation is the one favored in a recent study by Woodley et al. (2014). The authors found a high correlation (r = 0.79) between the frequency of this allele and a population's estimated mean IQ, using a sample of 59 populations from throughout the world. They also found that the allele's frequency correlates with a lower burden of infectious disease, as measured by DALYs (disability-adjusted life years). They go on to argue that this allele may improve the body's immune response to viral infections, thus enabling humans to survive in larger communities, which in turn would have selected for increased intelligence:

Bigger and more disease resistant populations would be able to produce more high intelligence individuals who could take advantage of the new cognitive opportunities afforded by the social and cultural changes that occurred over the past 10,000 years. (Woodley et al., 2014)

Bigger populations would also have increased the probability of “new intelligence-enhancing mutations and created new cognitive niches encouraging accelerated directional selection for the carriers of these mutations.” A positive feedback would have thus developed between intelligence and population density:

[…] the evolution of higher levels of intelligence during the Upper Paleolithic revolution some 50,000 to 10,000 ybp may have been necessary for the development of the sorts of subsistence paradigms (e.g. pastoralism, plant cultivation, etc.) that subsequently emerged. (Woodley et al., 2014)
 
 
What do I think?

I have mixed feelings about this study. Looking at the world distribution of this allele (see above map), I can see right away a much higher prevalence in Eurasia and the Americas than in sub-Saharan Africa. That kind of geographic distribution would inevitably correlate with IQ. And it would also correlate with the prevalence of infectious diseases.
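
It is worth keeping in mind that the r = 0.79 reported above is a correlation across population means, not across individuals. A minimal sketch of that kind of calculation, with placeholder data standing in for the 59 published data points:

```python
# Sketch of a population-level correlation of the kind reported by
# Woodley et al. (2014): Pearson's r between derived-allele frequency
# and estimated mean IQ across populations. The three data points are
# invented placeholders; the actual analysis used 59 populations.

from statistics import mean

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

allele_freq = [0.05, 0.55, 0.90]   # hypothetical derived-allele frequencies
mean_iq     = [70.0, 98.0, 105.0]  # hypothetical population IQ estimates
print(round(pearson_r(allele_freq, mean_iq), 2))  # -> 0.97 with these made-up points
```

Because the correlation is computed over group means, it can be strong even though the same allele shows no association with IQ at the individual level, as in Mekel-Bobrov et al. (2007) and Rushton et al. (2007).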

Unfortunately, such correlations can be spurious. There are all kinds of differences between sub-Saharan Africa and the rest of the world. One could show, for instance, that per capita consumption of yams correlates inversely with IQ. But yams don't make you stupid.

More seriously, one could attribute the geographic range of this allele to a founder effect that occurred when modern humans began to spread out of Africa to other continents. In that case, it could be junk DNA with no adaptive value at all. There is of course a bit of a margin between its estimated time of origin (circa 37,000 BP) and the Out of Africa event (circa 50,000 BP), but that difference could be put down to errors in estimating either date.

No, I don't believe that a founder effect was responsible. A more likely cause would be selection to meet the cognitive demands of the First Industrial Revolution, when humans had to create a wider range of tools to cope with seasonal environments and severe time constraints on the tasks of locating, processing, and storing food. This allele might have helped humans in the task of imagining a 3D mental “template” of whatever tool they wished to make. Or it might have helped hunters store large quantities of spatio-temporal information (like a GPS) while hunting over large expanses of territory. Those are my hunches.

I don't want to pooh-pooh the explanation proposed in this study. At times, however, the authors' reasoning seems more than a bit strained. Yes, this allele does facilitate re-growth of neural tissue after influenza infections, probably via repair of damaged DNA, but the evidence for a more general role in immune response seems weak. More to the point, the allele's time of origin (circa 37,000 BP) doesn't correspond to a time when humans began to live in larger, more sedentary communities. At that time, humans were still hunter-gatherers and just beginning to spread into temperate and sub-arctic environments with lower carrying capacities. Human population density was probably going down, not up. It wasn't until almost 30,000 years later, with the advent of agriculture, that it began to increase considerably.

The authors are aware of this last point and note it in their paper. So we come back to the question: what could have been increasing the risk of disease circa 37,000 BP? The authors suggest several sources of increased risk: contact with archaic hominins (Neanderthals, Denisovans), domestication of wolves and other animals, increasing population densities of hunter-gatherers, and contact by hunter-gatherers with new environments. Again, this reasoning seems to push the envelope of plausibility. Yes, Neanderthals were still around 37,000 years ago, but they had already begun to retreat, and by 30,000 BP they were extinct over most of their former range. Yes, we have evidence of wolf domestication as early as 33,000 BP, but livestock animals were not domesticated until much later. Yes, there was a trend toward increasing population density among hunter-gatherers, but not until after the glacial maximum, i.e., from 15,000 BP onward. Yes, hunter-gatherers were entering new environments, but those environments were largely outside the tropics, in regions where winter kills many pathogens. So disease risk would have been decreasing.

I don’t wish to come down too hard on this paper. There may be something to it. My fear is simply that it will steer researchers away from another possible explanation: the derived Microcephalin allele assists performance on a mental task that is not measured by standard IQ tests.

 
References 

Evans, P. D., Gilbert, S. L., Mekel-Bobrov, N., Vallender, E. J., Anderson, J. R., Vaez-Azizi, L. M., et al. (2005). Microcephalin, a gene regulating brain size, continues to evolve adaptively in humans, Science, 309, 1717-1720.
http://www.fed.cuhk.edu.hk/~lchang/material/Evolutionary/Brain%20gene%20and%20race.pdf  

Mekel-Bobrov, N., Posthuma, D., Gilbert, S. L., Lind, P., Gosso, M. F., Luciano, M., et al. (2007). The ongoing adaptive evolution of ASPM and Microcephalin is not explained by increased intelligence, Human Molecular Genetics, 16, 600-608.
http://psych.colorado.edu/~carey/pdfFiles/ASPMMicrocephalin_Lahn.pdf  

Montgomery, S. H., and N.I. Mundy. (2010). Brain evolution: Microcephaly genes weigh in, Current Biology, 20, R244-R246.
http://www.sciencedirect.com/science/article/pii/S0960982210000862  

Rushton, J. P., Vernon, P. A., and Bons, T. A. (2007). No evidence that polymorphisms of brain regulator genes Microcephalin and ASPM are associated with general mental ability, head circumference or altruism, Biology Letters, 3, 157-160.
http://semantico-scolaris.com/media/data/Luxid/Biol_Lett_2007_Apr_22_3(2)_157-160/rsbl20060586.pdf  

Woodley, M. A., H. Rindermann, E. Bell, J. Stratford, and D. Piffer. (2014). The relationship between Microcephalin, ASPM and intelligence: A reconsideration, Intelligence, 44, 51-63.
http://www.sciencedirect.com/science/article/pii/S0160289614000312