Monday, January 25, 2021

Sex differences in human eye morphology

 


Women have rounder-looking eyes with narrower fissures, but only in Europeans. Eyes are not sexually dimorphic in other human populations. (Petr Novak, Wikicommons)

 

 

The exposed white of the eye is larger in men than in women among Europeans but not in other human groups. This sexual dimorphism is due to the white of the eye being more horizontally exposed in men, with the result that female eyes look rounder. In addition, eye fissures are narrower and less rectangular in women (Danel et al. 2018; Danel et al. 2020).

 

This is analogous to what we see with eye color and hair color. Eyes are brown in most humans with the exception of Europeans, whose eyes may also be blue, gray, or green. Hair is black in most humans with the exception of Europeans, whose hair may also be blonde, red, or brown. In both cases, the palette of colors is more evenly balanced in women than in men. Women are less likely to have the more common hues, like blue or brown eyes and black hair. Conversely, they are more likely to have the less common hues, like green eyes and red hair.

 

There is no common genetic cause of these sex differences in eye morphology, eye color, and hair color. The genes are different in each case. The common cause seems to be some kind of selection among ancestral Europeans. Something favored the reproduction of women with rounder-looking eyes and less common eye and hair colors.

 

Was that "something" a someone? Were men selecting women through a process of sexual selection? That has been my explanation: in northern Eurasia until the end of the last ice age, women outnumbered men and had to compete for them, as a result of high male mortality and the high cost of polygyny. There was thus strong selection for women with an eye-catching appearance, and this selection ultimately changed the appearance of both sexes. The new phenotype eventually died out in northern Asia but survived in parts of Europe, which had a larger and more continuous human presence. It then spread throughout the rest of Europe almost at the dawn of history (Frost 2006; Frost 2014; Frost et al. 2017).

 

Danel et al. (2020) consider this explanation but reject it because female eye morphology does not correlate with two other aspects of female attractiveness: face shape and facial averageness. That lack of correlation, however, simply shows that each of these aspects has different constraints on the direction of sexual selection:

 

Eye morphology - the direction of sexual selection seems open-ended. Women are more attractive if they have rounder eyes.

 

Face shape - the direction of sexual selection goes into reverse beyond a certain point. Women are more attractive if they have smaller chins and smaller noses, but only up to a certain point. Excessively small chins and noses are not attractive either.

 

Facial averageness - the constraints are again different. Women become less attractive as their faces deviate, in either direction, from a narrow band around the population average.

 

References

 

Danel, D.P., S. Wacewicz, Z. Lewandowski, P. Zywiczynski, and J.O. Perea-Garcia. (2018). Humans do not perceive conspecifics with a greater exposed sclera as more trustworthy: a preliminary cross-ethnic study of the function of the overexposed human sclera. Acta Ethologica 21: 203-208.

https://doi.org/10.1007/s10211-018-0296-5

 

Danel, D.P., S. Wacewicz, K. Kleisner, Z. Lewandowski, M.E. Kret, P. Zywiczynski, and J.O. Perea-Garcia. (2020). Sex differences in ocular morphology in Caucasian people: a dubious role of sexual selection in the evolution of sexual dimorphism of the human eye. Behavioral Ecology and Sociobiology 74: 115.

https://doi.org/10.1007/s00265-020-02894-1

 

Frost, P. (2006). European hair and eye color - A case of frequency-dependent sexual selection? Evolution and Human Behavior 27(2): 85-103.

https://doi.org/10.1016/j.evolhumbehav.2005.07.002

 

Frost, P. (2014). The puzzle of European hair, eye, and skin color. Advances in Anthropology 4(2): 78-88.

https://doi.org/10.4236/aa.2014.42011

 

Frost, P., K. Kleisner, and J. Flegr. (2017). Health status by gender, hair color, and eye color: Red-haired women are the most divergent. PLoS One 12(12): e0190238.

https://doi.org/10.1371/journal.pone.0190238

 

Monday, January 18, 2021

Are identical twins really identical?

 

Sibling similarity in personality for monozygotic twins, dizygotic twins, and adoptees (Wikicommons)

 

 

Monozygotic and dizygotic twins who were separated early in life and reared apart (MZA and DZA twin pairs) are a fascinating experiment of nature. They also provide the simplest and most powerful method for disentangling the influence of environmental and genetic factors on human characteristics. (Bouchard et al. 1990)

 

Monozygotic twins are identical twins. They develop from a single fertilized egg and are assumed to be genetically identical. Any differences between them in mind or behavior must therefore have an environmental cause. Of course, "environmental cause" does not mean only things like diet, upbringing, education, or parental help with homework. It can also mean accidents during pregnancy or childbirth.

 

But are monozygotic twins really identical? They do not go their separate ways at the zygote's first division. The split comes around a week later, when the zygote has already divided several times to form a mass of about sixteen cells. During that interval, mutations may have occurred in one cell lineage or another, and not all of those mutations will be inherited by both twins. Each twin may in fact develop from a single lineage or from several lineages within the cell mass. The two twins may thus be genetically different.

 

Jónsson et al. (2021) have quantified these genetic differences between twins. They examined the body tissues of adult twins, specifically one sample from adipose tissue, 204 samples from buccal tissue, and 563 blood samples.  On average, one of the twins had 14 postzygotic mutations that were not present in the other. There was, however, considerable variability: 39 twin pairs differed at more than 100 loci, whereas 38 pairs did not differ at all.

 

Germ cells develop from a subset of cell lineages very early in embryonic development, so it is possible to see how twins differ genetically in their germ lines by looking at their offspring. Here, twins differed by an average of 5.2 mutations. Again, there was considerable variability: 207 offspring carried none of these mutations at all, whereas 3 offspring carried as many as 8.

 

If monozygotic twins are not genetically identical, we will have to revise upwards our estimates of the contribution of nature, as opposed to nurture, to different human traits:

 

Phenotypic discordance between monozygotic twins has generally been attributed to the environment. This assumes that the contribution of mutations that separate monozygotic twins is negligible; however, for some diseases such as autism and other developmental disorders, a substantial component is due to de novo mutations. Our analysis demonstrates that in 15% of monozygotic twins a substantial number of mutations are specific to one twin but not the other. This discordance suggests that in most heritability models the contribution of sequence variation to the pathogenesis of diseases with an appreciable mutational component is underestimated. (Jónsson et al. 2021)

 

In particular, we will have to revise upwards our estimates of the genetic component of intelligence, such as the 70% estimate offered by Bouchard et al. (1990):

 

Since 1979, a continuing study of monozygotic and dizygotic twins, separated in infancy and reared apart, has subjected more than 100 sets of reared-apart twins or triplets to a week of intensive psychological and physiological assessment. Like the prior, smaller studies of monozygotic twins reared apart, about 70% of the variance in IQ was found to be associated with genetic variation. On multiple measures of personality and temperament, occupational and leisure-time interests, and social attitudes, monozygotic twins reared apart are about as similar as are monozygotic twins reared together.

 

Or the 41% to 66% estimate offered by Haworth et al. (2010):

 

Although common sense suggests that environmental influences increasingly account for individual differences in behavior as experiences accumulate during the course of life, this hypothesis has not previously been tested, in part because of the large sample sizes needed for an adequately powered analysis. Here we show for general cognitive ability that, to the contrary, genetic influence increases with age. The heritability of general cognitive ability increases significantly and linearly from 41% in childhood (9 years) to 55% in adolescence (12 years) and to 66% in young adulthood (17 years) in a sample of 11 000 pairs of twins from four countries, a larger sample than all previous studies combined.

 

My criticisms

 

Why focus on germline differences?

 

I have two criticisms of the study by Jónsson et al. (2021). First, their abstract highlights the average of 5.2 mutational differences in the germline, rather than the larger average of 14 mutational differences in somatic tissues.

 

Here we show that monozygotic twins differ on average by 5.2 early developmental mutations and that approximately 15% of monozygotic twins have a substantial number of these early developmental mutations specific to one of them. (Jónsson et al. 2021)

 

Yes, "heritability" refers to genes that are passed on to the next generation, but most twin studies don't include the offspring of twins. The researchers simply examine pairs of monozygotic twins and see how they differ. Any differences would therefore reflect differences in somatic tissues and not the germline, or at least not solely the germline.

 

Undoubtedly, some of the somatic mutations occurred later in development, but they would still be relevant for any study on adult monozygotic twins.

 

Do these differences really make a difference?

 

We estimate the genetic component of a mental or behavioral trait by comparing monozygotic and dizygotic twins, i.e., identical and fraternal twins. A difference between monozygotic twins is assumed to be 100% environmental, and a difference between dizygotic twins is assumed to be partly environmental and partly genetic. Therefore, we can estimate the genetic component by subtracting one from the other, right?

 

This is where the study by Jónsson et al. (2021) comes in. They argue that the genetic component is always underestimated because some of the difference between monozygotic twins is also genetic. But is that additional genetic difference large enough to make a difference? If monozygotic twins differ from each other, on average, at 14 loci, and dizygotic twins differ from each other, on average, at 1400 loci, we might as well assume that monozygotic twins are genetically identical. Any upward revision of the heritability estimate would be slight.

 

Of course, the key lies in the words "on average." Some of the twins in this study differed at more than 100 loci. More importantly, around 15% of the twins had a substantial number of "near-constitutional" mutations, i.e., absent from one twin and present in almost all the tissues of the other. In those cases, we could see big differences in development between the two.

 

Whether these differences make a difference is hard to say without a point of comparison. The same kind of study should be done on dizygotic twins: how much more genetically variable are they?

 

 

References

 

Bouchard Jr., T.J., D.T. Lykken, M. McGue, N.L. Segal, and A. Tellegen. (1990). Sources of human psychological differences: the Minnesota Study of Twins Reared Apart. Science 250(4978): 223-228. https://doi.org/10.1126/science.2218526

 

Haworth, C.M.A., M. J. Wright, M. Luciano, N.G. Martin, E.J.C. de Geus, et al. (2010). The heritability of general cognitive ability increases linearly from childhood to young adulthood. Molecular Psychiatry 15: 1112-1120. https://doi.org/10.1038/mp.2009.55

 

Jónsson, H., E. Magnusdottir, H.P. Eggertsson, O.A. Stefansson, G.A. Arnadottir, et al. (2021). Differences between germline genomes of monozygotic twins. Nature Genetics 53: 27-34. https://doi.org/10.1038/s41588-020-00755-1

Monday, January 11, 2021

Are fungal pathogens manipulating human behavior?

 


Fungal infection of brain tissue (Wikicommons, CDC). Some fungi persist in the human brain for years and begin to harm their host only in old age. What were they doing previously?

 

 

I've published a paper on manipulation of human behavior by fungal pathogens. Here's the abstract:

 

Many pathogens, especially fungi, have evolved the capacity to manipulate host behavior, usually to improve their chances of spreading to other hosts. Such manipulation is difficult to observe in long-lived hosts, like humans. First, much time may separate cause from effect in the case of an infection that develops over a human life span. Second, the host-pathogen relationship may initially be commensal: the host becomes a vector for infection of other humans, and in exchange the pathogen remains discreet and does as little harm as possible. Commensalism breaks down with increasing age because the host is no longer a useful vector, being less socially active and at higher risk of death. Certain neurodegenerative diseases may therefore be the terminal stage of a longer-lasting relationship in which the host helps the pathogen infect other hosts, largely via sexual relations. Strains from the Candida genus are particularly suspect. Such pathogens seem to have co-evolved not only with their host population but also with the local social environment. Different social environments may have thus favored different pathogenic strategies for manipulation of human behavior.

 

Please feel free to comment.

 

Reference

 

Frost, P. (2020). Are Fungal Pathogens Manipulating Human Behavior? Perspectives in Biology and Medicine 63(4): 591-601. https://doi.org/10.1353/pbm.2020.0059

 

Sunday, January 3, 2021

The mental qualities that make a society workable

 

A questionnaire survey found very low levels of altruism in Czechs and very high levels in Moroccans, Egyptians, and Bangladeshis. Do these results reflect differences in actual behavior or differences in socially desirable responding? (GPS 2020)

 


Emil Kirkegaard and Anatoly Karlin have written a paper on the relative importance of intelligence versus other mental traits in determining national well-being. Their conclusion? Intelligence contributes a lot more to national well-being than do time preference, reciprocity, altruism, and trust.

 

We find that overall, national IQ is a better predictor of outcomes than (low) time preference as well as the five other non-cognitive traits measured by the Global Preference Survey (risk-taking, positive reciprocity, negative reciprocity, altruism, and trust). We find this result across hundreds of regression models that include variation in the inclusion of controls, different measures of time preference, and different outcomes. Thus, our results appear quite robust. Our results do show some evidence of time preference's positive validity, but it is fairly marginal, sometimes having a small p value in one model but not in the next. (Kirkegaard and Karlin 2020)

 

The two authors focus especially on time preference, i.e., the degree to which people prefer immediate gratification over larger long-term gains. While acknowledging previous studies showing that low time preference goes hand in hand with greater national well-being, they argue that this effect is only apparent: a society with low time preference (i.e., a strong orientation toward the future) almost always has a high mean IQ, so the relationship between national well-being and time preference is largely spurious.

 

If true, this is a significant finding. But is it true?

 

I see one big problem: the paper compares datasets with very different levels of error. Intelligence was measured by IQ tests under controlled conditions. On an IQ test you cannot make yourself seem more intelligent than you really are, unless someone has provided you with the right answers.

 

This is not the case with the method for measuring the other mental traits: a questionnaire, on which the "right answer" is whatever the respondent chooses to write down. The difference between the two methods is thus the difference between direct measurement and self-report. The level of error is much higher with the latter, and this difference can explain the findings by Kirkegaard and Karlin, specifically why national well-being correlates more with intelligence than with time preference:

 

The median β across the indicators was 0.11 for time preference but 0.39 for national IQ. We replicated these results using six economic indicators, again with similar results: median βs of 0.15 and 0.52 for time preference and national IQ, respectively. Across all our results, we found that national IQ has 2-4 times the predictive validity of time preference.

 

What would happen to the same correlations if intelligence were measured by a questionnaire? Let's survey a thousand people and ask them: "How smart do you think you are?" The result would correlate with their performance on an IQ test, but far from perfectly. So the correlation between self-reported intelligence and national well-being would be lower than the correlation between measured IQ and national well-being. Instead of the 0.39 that Emil and Anatoly found for national IQ, we would get something closer to the 0.11 they found for time preference.
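
How much lower? A rough sketch (my own illustration, with made-up reliability figures) of Spearman's classic attenuation formula shows how measurement error drags down an observed correlation:

import math

def attenuated_r(r_true: float, rel_x: float, rel_y: float) -> float:
    """Observed correlation after measurement error in both variables:
    r_observed = r_true * sqrt(rel_x * rel_y)."""
    return r_true * math.sqrt(rel_x * rel_y)

r_true = 0.55           # hypothetical "true" correlation between trait and outcome
rel_outcome = 0.90      # assumed reliability of the national outcome measure
rel_iq_test = 0.85      # assumed reliability of a proctored IQ test
rel_self_report = 0.30  # assumed (much lower) reliability of a brief self-report item

print(round(attenuated_r(r_true, rel_iq_test, rel_outcome), 2))      # ~0.48
print(round(attenuated_r(r_true, rel_self_report, rel_outcome), 2))  # ~0.29

Whatever the exact reliabilities, the direction of the bias is the same: the noisier instrument will always look like the weaker predictor.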

 

The problems with questionnaire data are especially apparent if we look at the results of the Global Preference Survey for altruism (see map at the top of this post). We see considerable differences even between neighboring countries that are culturally similar. For some reason, Czechs are at the low end of human variation in altruism, whereas Moroccans, Egyptians, and Bangladeshis are at the high end.

 

What’s going on here? The results are based on the following two questions of the Global Preference Survey:

 

1. (Hypothetical situation:) Imagine the following situation: Today you unexpectedly received 1,000 Euro. How much of this amount would you donate to a good cause? (Values between 0 and 1000 are allowed.)

 

2. (Willingness to act:) How willing are you to give to good causes without expecting anything in return? (Falk et al. 2016, p. 15)

 

The first problem is that the respondents will answer the above questions in a way that is viewed favorably by others and by their own conscience. This is called “social desirability bias,” and it’s stronger in a society with a high level of religious belief, like Morocco, than in one with a low level, like the Czech Republic.

 

Second problem: the term “good cause” has different connotations in different places. In the Western world, it generally refers to a non-religious organization that may endorse controversial views on political or social issues. As a result, many Westerners have mixed feelings about donating to “good” causes. This is not the case in the Muslim world, where “good causes” are explicitly Islamic or at least compliant with Islamic teachings. There is a similar problem with the term “donate.” It usually means the act of giving money to an organization, whereas the corresponding word in another language may simply mean “give.”

 

I wrote to Emil Kirkegaard about my criticisms:

 

In my opinion, you're comparing apples and oranges. Cognitive ability is difficult to fake on an IQ test - unless somebody has provided the participant with the right answers. On a questionnaire, anyone can give the "right" answer. It's entirely self-report. It's like measuring intelligence by asking people how smart they think they are.

 

His reply:

 

Your stance on this seems to imply you are unhappy with any kind of comparison of self-rated data vs. objectively scored cognitive data. One difficulty for you here is that people can also cheat on cognitive tests, namely by scoring low on purpose. Furthermore, while you may disapprove, such comparisons are the norm everywhere. I don't know any other person who refuses to do this comparison. There are also other-rated personality data, and these show even more validity than self-rate ones. https://emilkirkegaard.dk/en/?p=6457  There is a lot of research on faking good on personality tests, generally showing that subjects are not very good at this, presumably owing to lack of understanding of how the tests work.

 

I checked out the link he provided. This is what I found:

 

Self-rating measures of personality suffer from not just regular, random measurement error, but also have systematic measurement error (bias): people are not able to rate their own personality as well as other people who know them can. They introduce self-rating method variance into the data, and this variance is not so heritable. There is a twin study that used other-ratings of personality and when they used them or combined them with self-ratings, the heritabilities went up:

 

So with self-report they found H 42-56%, mean = 51%. Other-report: 57-81, mean = 66%, combined: 66-79, mean = 71%. (I used the AE models' results when possible.) In fact, these analyses did not correct for regular measurement error either, so the heritabilities are higher still according to these data, likely into the 80%s area. This is the same territory as cognitive ability. (Kirkegaard 2017)

 

 

Parting thoughts

 

Emil and Anatoly are right when they argue that intelligence is confounded with other mental traits. If, on average, a human population is high in intelligence, it is almost always low in time preference and high in altruism. This doesn't mean, however, that the latter are secondary expressions of intelligence. Many individuals are high in intelligence but low in altruism, sometimes pathologically low. They're called "sociopaths."

 

Few, if any, populations are both sociopathic and highly intelligent because such a combination can succeed only at the level of individuals, and not at the level of an entire population. The same pressures of selection that increase the mean intelligence of a population will also increase the average level of altruism and the average future time orientation. Consequently, all of these traits correlate with each other at the population level.

 

Will we ever be able to parcel out the relative importance of each mental trait in determining national well-being? In other words, will we ever find out how much of national well-being is due to intelligence, how much to time preference, and how much to altruism?

 

Not for a while. First, because these traits correlate with each other at the population level, it would be difficult to separate them and measure the relative importance of each one; they're confounded. Second, they probably interact with each other. Altruism, for instance, is not a successful group strategy unless other mental or behavioral mechanisms are in place, in particular mechanisms to exclude non-altruists and thus solve the “free rider problem.” Intelligence, likewise, does not exist in a vacuum.
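
As a toy illustration of the first problem (my own simulation, not anything from the paper), consider what happens when two nearly collinear population-level predictors are entered into the same regression:

import numpy as np

rng = np.random.default_rng(0)
n = 60                                             # "countries"
iq = rng.normal(size=n)
altruism = 0.9 * iq + 0.1 * rng.normal(size=n)     # nearly collinear with iq
wellbeing = 0.5 * iq + 0.5 * altruism + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), iq, altruism])
coefs, *_ = np.linalg.lstsq(X, wellbeing, rcond=None)
print(np.round(coefs[1:], 2))   # the two weights swing widely from one random seed
                                # to the next, even though the fitted values barely change

The joint fit is stable, but how the credit is split between the two predictors is not, and that is the sense in which they are confounded.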

 

 

References

 

Falk, A., A. Becker, T. Dohmen, B. Enke, D. Huffman, and U. Sunde. (2016). Online Appendix: Global Evidence on Economic Preferences.

https://oup.silverchair-cdn.com/oup/backfile/Content_public/Journal/qje/133/4/10.1093_qje_qjy013/5/qjy013_supplemental_file.pdf


Global Preferences Survey (2020). https://www.briq-institute.org/global-preferences/about  


Kirkegaard, E.O.W. (2017). Getting personality right. Clear Language, Clear Mind.

https://emilkirkegaard.dk/en/2017/02/getting-personality-right/

 

Kirkegaard, E.O.W., and A. Karlin. (2020). National Intelligence Is More Important for Explaining Country Well-Being than Time Preference and Other Measured Non-Cognitive Traits. Mankind Quarterly 61(2): 339-370. http://doi.org/10.46469/mq.2020.61.2.11

https://www.researchgate.net/publication/347563852_National_Intelligence_Is_More_Important_for_Explaining_Country_Well-Being_than_Time_Preference_and_Other_Measured_Non-Cognitive_Traits