Monday, January 25, 2021

Sex differences in human eye morphology

 


Women have rounder-looking eyes with narrower fissures, but only in Europeans. Eyes are not sexually dimorphic in other human populations. (Petr Novak, Wikicommons)

 

 

The exposed white of the eye is larger in men than in women among Europeans but not in other human groups. This sexual dimorphism is due to the white of the eye being more horizontally exposed in men, with the result that female eyes look rounder. In addition, eye fissures are narrower and less rectangular in women (Danel et al. 2018; Danel et al. 2020).

 

This is analogous to what we see with eye color and hair color. Eyes are brown in most humans with the exception of Europeans, whose eyes may also be blue, gray, or green. Hair is black in most humans with the exception of Europeans, whose hair may also be blonde, red, or brown. In both cases, the palette of colors is more evenly balanced in women than in men. Women are less likely to have the more common hues, like blue or brown eyes and black hair. Conversely, they are more likely to have the less common hues, like green eyes and red hair.

 

There is no common genetic cause of these sex differences in eye morphology, eye color, and hair color. The genes are different in each case. The common cause seems to be some kind of selection among ancestral Europeans. Something favored the reproduction of women with rounder-looking eyes and less common eye and hair colors.

 

Was that "something" a someone? Were men selecting women through a process of sexual selection? That has been my explanation: in northern Eurasia until the end of the last ice age, women outnumbered men and had to compete for them, as a result of high male mortality and the high cost of polygyny. There was thus strong selection for women with an eye-catching appearance, and this selection ultimately changed the appearance of both sexes. The new phenotype eventually died out in northern Asia but survived in parts of Europe, which had a larger and more continuous human presence. It then spread throughout the rest of Europe almost at the dawn of history (Frost 2006; Frost 2014; Frost et al. 2017).

 

Danel et al. (2020) consider this explanation but reject it because female eye morphology does not correlate with two other aspects of female attractiveness: face shape and facial averageness. That lack of correlation, however, simply shows that each of these aspects has different constraints on the direction of sexual selection:

 

Eye morphology - the direction of sexual selection seems open-ended. Women are more attractive if they have rounder eyes.

 

Face shape - the direction of sexual selection goes into reverse beyond a certain point. Women are more attractive if they have smaller chins and smaller noses, but only up to a certain point. Excessively small chins and noses are not attractive either.

 

Facial averageness - the constraints are again different. Women become less attractive on either side of a narrow median.

 

References

 

Danel, D.P., S. Wacewicz, Z. Lewandowski, P. Zywiczynski, and J.O. Perea-Garcia. (2018). Humans do not perceive conspecifics with a greater exposed sclera as more trustworthy: a preliminary cross-ethnic study of the function of the overexposed human sclera. Acta Ethologica 21: 203-208.

https://doi.org/10.1007/s10211-018-0296-5

 

Danel, D.P., S. Wacewicz, K. Kleisner, Z. Lewandowski, M.E. Kret, P. Zywiczynski, and J.O. Perea-Garcia. (2020). Sex differences in ocular morphology in Caucasian people: a dubious role of sexual selection in the evolution of sexual dimorphism of the human eye. Behavioral Ecology and Sociobiology 74: 115.

https://doi.org/10.1007/s00265-020-02894-1

 

Frost, P. (2006). European hair and eye color - A case of frequency-dependent sexual selection? Evolution and Human Behavior 27(2): 85-103.

https://doi.org/10.1016/j.evolhumbehav.2005.07.002

 

Frost, P. (2014). The puzzle of European hair, eye, and skin color. Advances in Anthropology 4(2): 78-88.

https://doi.org/10.4236/aa.2014.42011

 

Frost, P., K. Kleisner, and J. Flegr. (2017). Health status by gender, hair color, and eye color: Red-haired women are the most divergent. PLoS One 12(12): e0190238.

https://doi.org/10.1371/journal.pone.0190238

 

Monday, January 18, 2021

Are identical twins really identical?

 

Sibling similarity in personality for monozygotic twins, dizygotic twins, and adoptees (Wikicommons)

 

 

Monozygotic and dizygotic twins who were separated early in life and reared apart (MZA and DZA twin pairs) are a fascinating experiment of nature. They also provide the simplest and most powerful method for disentangling the influence of environmental and genetic factors on human characteristics. (Bouchard et al. 1990)

 

Monozygotic twins are identical twins. They develop from a single fertilized egg and are assumed to be genetically identical. Any differences between them in mind or behavior must therefore have an environmental cause. Of course, "environmental cause" does not mean only things like diet, upbringing, education, or parental help with homework. It can also mean accidents during pregnancy or childbirth.

 

But are monozygotic twins really identical? They do not begin to go their own ways until well after the zygote has made its first division. It is only around a week later that they begin to develop separately, when the zygote has already divided several times to form a mass of about sixteen cells. During that time, mutations may have occurred in one cell lineage or another, and not all of those mutations will be inherited by both twins. A twin may in fact develop from a single lineage or from several lineages within the cell mass. The two twins may thus be genetically different.

 

Jónsson et al. (2021) have quantified these genetic differences between twins. They examined the body tissues of adult twins, specifically one sample from adipose tissue, 204 samples from buccal tissue, and 563 blood samples.  On average, one of the twins had 14 postzygotic mutations that were not present in the other. There was, however, considerable variability: 39 twin pairs differed at more than 100 loci, whereas 38 pairs did not differ at all.

 

Germ cells develop from a subset of cell lineages very early in embryonic development, and it is possible to see how twins differ genetically in their germ lines by looking at their offspring. In this case, the twins differed by an average of 5.2 mutations. Again, there was considerable variability, ranging from no mutations at all in 207 offspring to a maximum of 8 mutations in 3 offspring.

 

If monozygotic twins are not genetically identical, we will have to revise upwards our estimates of the importance of nature relative to nurture in different human traits:

 

Phenotypic discordance between monozygotic twins has generally been attributed to the environment. This assumes that the contribution of mutations that separate monozygotic twins is negligible; however, for some diseases such as autism and other developmental disorders, a substantial component is due to de novo mutations. Our analysis demonstrates that in 15% of monozygotic twins a substantial number of mutations are specific to one twin but not the other. This discordance suggests that in most heritability models the contribution of sequence variation to the pathogenesis of diseases with an appreciable mutational component is underestimated. (Jónsson et al. 2021)

 

In particular, we will have to revise upwards our estimates of the genetic component of intelligence, such as the 70% estimate offered by Bouchard et al. (1990):

 

Since 1979, a continuing study of monozygotic and dizygotic twins, separated in infancy and reared apart, has subjected more than 100 sets of reared-apart twins or triplets to a week of intensive psychological and physiological assessment. Like the prior, smaller studies of monozygotic twins reared apart, about 70% of the variance in IQ was found to be associated with genetic variation. On multiple measures of personality and temperament, occupational and leisure-time interests, and social attitudes, monozygotic twins reared apart are about as similar as are monozygotic twins reared together.

 

Or the 41% to 66% estimate offered by Haworth et al. (2010):

 

Although common sense suggests that environmental influences increasingly account for individual differences in behavior as experiences accumulate during the course of life, this hypothesis has not previously been tested, in part because of the large sample sizes needed for an adequately powered analysis. Here we show for general cognitive ability that, to the contrary, genetic influence increases with age. The heritability of general cognitive ability increases significantly and linearly from 41% in childhood (9 years) to 55% in adolescence (12 years) and to 66% in young adulthood (17 years) in a sample of 11 000 pairs of twins from four countries, a larger sample than all previous studies combined.

 

My criticisms

 

Why focus on germline differences?

 

I have two criticisms of the study by Jónsson et al. (2021). First, their abstract highlights the average of 5.2 mutational differences in the germline, not the larger average of 14 mutational differences in somatic tissues.

 

Here we show that monozygotic twins differ on average by 5.2 early developmental mutations and that approximately 15% of monozygotic twins have a substantial number of these early developmental mutations specific to one of them. (Jónsson et al. 2021)

 

Yes, "heritability" refers to genes that are passed on to the next generation, but most twin studies don't include the offspring of twins. The researchers simply examine pairs of monozygotic twins and see how they differ. Any differences would therefore reflect differences in somatic tissues and not the germline, or at least not solely the germline.

 

Undoubtedly, some of the somatic mutations occurred later in development, but they would still be relevant for any study on adult monozygotic twins.

 

Do these differences really make a difference?

 

We estimate the genetic component of a mental or behavioral trait by comparing monozygotic and dizygotic twins, i.e., identical and fraternal twins. A difference between monozygotic twins is assumed to be 100% environmental, and a difference between dizygotic twins is assumed to be partly environmental and partly genetic. Therefore, we can estimate the genetic component by subtracting one from the other, right?
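That subtraction is the classic Falconer estimate. As a minimal sketch (the twin correlations below are hypothetical, chosen to be of the magnitude commonly reported for IQ):

```python
# Falconer's formula: the genetic component is estimated from the gap
# between identical-twin and fraternal-twin similarity.
# h2 = 2 * (r_MZ - r_DZ); shared environment c2 = r_MZ - h2.

def falconer(r_mz: float, r_dz: float) -> dict:
    """Estimate variance components from twin correlations."""
    h2 = 2 * (r_mz - r_dz)   # additive genetic component
    c2 = r_mz - h2           # shared (common) environment
    e2 = 1 - r_mz            # unique environment, plus measurement error
    return {"h2": h2, "c2": c2, "e2": e2}

# Hypothetical twin correlations of the size reported for IQ:
components = falconer(r_mz=0.85, r_dz=0.50)
print(components)  # the h2 entry comes out at ~0.70
```

Doubling the gap works because monozygotic twins are assumed to share 100% of their genes and dizygotic twins 50%, so the correlation gap reflects half the genetic variance.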

 

This is where the study by Jónsson et al. (2021) comes in. They argue that the genetic component is always underestimated because some of the difference between monozygotic twins is also genetic. But is that additional genetic difference large enough to make a difference? If monozygotic twins differ from each other, on average, at 14 loci, and dizygotic twins differ from each other, on average, at 1400 loci, we might as well assume that monozygotic twins are genetically identical. Any upward revision of the heritability estimate would be slight.
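One way to gauge how slight the revision would be is to generalize Falconer's estimate to h2 = (r_MZ - r_DZ) / (rho_MZ - rho_DZ), where rho is genetic relatedness, and see what happens when rho_MZ drops marginally below 1.0. The correlations and the corrected relatedness value below are illustrative, not taken from any study:

```python
def h2(r_mz: float, r_dz: float, rho_mz: float = 1.0, rho_dz: float = 0.5) -> float:
    """Falconer-style estimate allowing imperfect MZ genetic identity."""
    return (r_mz - r_dz) / (rho_mz - rho_dz)

# Standard assumption: monozygotic twins are genetically identical.
baseline = h2(0.85, 0.50)
# Illustrative correction: a dozen or so postzygotic mutations out of
# billions of base pairs leaves relatedness indistinguishable from 1.0.
corrected = h2(0.85, 0.50, rho_mz=0.9999)
print(round(baseline, 4), round(corrected, 4))
```

Even a generous downward tweak of MZ relatedness moves the heritability estimate only in the fourth decimal place.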

 

Of course, the key lies in the words "on average." Some of the twins in this study differed at more than 100 loci. More importantly, around 15% of the twins had a substantial number of "near-constitutional" mutations, i.e., mutations absent from one twin and present in almost all the tissues of the other. In those cases, we could see big differences in development between the two twins.

 

Whether those differences matter is difficult to say without a point of comparison. In other words, the same kind of study should be done on dizygotic twins. How much more variable are they genetically?

 

 

References

 

Bouchard Jr., T.J., D.T. Lykken, M. McGue, N.L. Segal, and A. Tellegen. (1990). Sources of human psychological differences: the Minnesota Study of Twins Reared Apart. Science 250(4978): 223-228. https://doi.org/10.1126/science.2218526

 

Haworth, C.M.A., M. J. Wright, M. Luciano, N.G. Martin, E.J.C. de Geus, et al. (2010). The heritability of general cognitive ability increases linearly from childhood to young adulthood. Molecular Psychiatry 15: 1112-1120. https://doi.org/10.1038/mp.2009.55

 

Jónsson, H., E. Magnusdottir, H.P. Eggertsson, O.A. Stefansson, G.A. Arnadottir, et al. (2021). Differences between germline genomes of monozygotic twins. Nature Genetics 53: 27-34. https://doi.org/10.1038/s41588-020-00755-1

Monday, January 11, 2021

Are fungal pathogens manipulating human behavior?

 


Fungal infection of brain tissue (Wikicommons, CDC). Some fungi persist in the human brain for years and begin to harm their host only in old age. What were they doing previously?

 

 

I've published a paper on manipulation of human behavior by fungal pathogens. Here's the abstract:

 

Many pathogens, especially fungi, have evolved the capacity to manipulate host behavior, usually to improve their chances of spreading to other hosts. Such manipulation is difficult to observe in long-lived hosts, like humans. First, much time may separate cause from effect in the case of an infection that develops over a human life span. Second, the host-pathogen relationship may initially be commensal: the host becomes a vector for infection of other humans, and in exchange the pathogen remains discreet and does as little harm as possible. Commensalism breaks down with increasing age because the host is no longer a useful vector, being less socially active and at higher risk of death. Certain neurodegenerative diseases may therefore be the terminal stage of a longer-lasting relationship in which the host helps the pathogen infect other hosts, largely via sexual relations. Strains from the Candida genus are particularly suspect. Such pathogens seem to have co-evolved not only with their host population but also with the local social environment. Different social environments may have thus favored different pathogenic strategies for manipulation of human behavior.

 

Please feel free to comment.

 

Reference

 

Frost, P. (2020). Are Fungal Pathogens Manipulating Human Behavior? Perspectives in Biology and Medicine 63(4): 591-601. https://doi.org/10.1353/pbm.2020.0059

 

Sunday, January 3, 2021

The mental qualities that make a society workable

 

A questionnaire survey found very low levels of altruism in Czechs and very high levels in Moroccans, Egyptians, and Bangladeshis. Do these results show differences in actual behavior or differences in socially desired response? (GPS 2020)

 


Emil Kirkegaard and Anatoly Karlin have written a paper on the relative importance of intelligence versus other mental traits in determining national well-being. Their conclusion? Intelligence contributes a lot more to national well-being than do time preference, reciprocity, altruism, and trust.

 

We find that overall, national IQ is a better predictor of outcomes than (low) time preference as well as the five other non-cognitive traits measured by the Global Preference Survey (risk-taking, positive reciprocity, negative reciprocity, altruism, and trust). We find this result across hundreds of regression models that include variation in the inclusion of controls, different measures of time preference, and different outcomes. Thus, our results appear quite robust. Our results do show some evidence of time preference's positive validity, but it is fairly marginal, sometimes having a small p value in one model but not in the next. (Kirkegaard and Karlin 2020)

 

The two authors especially focus on time preference, i.e., the willingness to defer gratification in exchange for long-term gains. While acknowledging previous studies, which show that time preference has a strong effect on national well-being, they argue that this effect is only apparent. If a society has low time preference (i.e., a strong orientation toward the future), it almost always has a high mean IQ. So the relationship between national well-being and time preference is largely spurious.
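The spuriousness claim is, at bottom, a claim about confounding: once national IQ is controlled for, the time-preference effect should shrink toward zero. A minimal sketch with synthetic data (all numbers invented, built so that IQ drives both variables) shows the pattern the authors report:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic data built so that IQ drives BOTH low time preference
# and well-being, while time preference has no independent effect.
iq = rng.normal(size=n)
time_pref = 0.8 * iq + rng.normal(scale=0.6, size=n)
well_being = 0.7 * iq + rng.normal(scale=0.7, size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, control):
    # Residualize both variables on the control, then correlate residuals.
    res_a = a - np.polyval(np.polyfit(control, a, 1), control)
    res_b = b - np.polyval(np.polyfit(control, b, 1), control)
    return corr(res_a, res_b)

raw = corr(time_pref, well_being)                     # sizeable: the confound at work
controlled = partial_corr(time_pref, well_being, iq)  # near zero
print(round(raw, 2), round(controlled, 2))
```

A raw correlation that vanishes after partialling out IQ is exactly what "largely spurious" means here; the question raised below is whether the data are good enough to support that inference.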

 

If true, this is a significant finding. But is it true?

 

I see one big problem: the paper compares datasets with very different levels of error. Intelligence was measured by IQ tests under controlled conditions. On an IQ test you cannot make yourself seem more intelligent than you really are, unless someone has provided you with the right answers.

 

This is not the case with the method for measuring the other mental traits: a questionnaire, on which the "right answer" is whatever the respondent chooses to write down. The difference between the two methods is thus the difference between direct measurement and self-report. The level of error is much higher with the latter, and this difference can explain the findings by Kirkegaard and Karlin, specifically why national well-being correlates more with intelligence than with time preference:

 

The median β across the indicators was 0.11 for time preference but 0.39 for national IQ. We replicated these results using six economic indicators, again with similar results: median βs of 0.15 and 0.52 for time preference and national IQ, respectively. Across all our results, we found that national IQ has 2-4 times the predictive validity of time preference.

 

What will happen to the same correlations if intelligence is measured by a questionnaire? Let's survey a thousand people and ask them: "How smart do you think you are?" The result will correlate with their performance on an IQ test, but far from perfectly. So the correlation between self-reported intelligence and national well-being will be lower than the correlation between IQ and national well-being. Instead of getting the correlation of 0.39 that Emil and Anatoly found, we now have something closer to 0.11, i.e., the correlation they found between time preference and national well-being.
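That shrinkage can be made concrete with Spearman's classic attenuation formula: the observed correlation equals the true correlation scaled by the square root of the product of the two measures' reliabilities. The reliability figures below are hypothetical, chosen only to illustrate the size of the effect:

```python
import math

def attenuated(r_true: float, rel_x: float, rel_y: float) -> float:
    """Spearman's attenuation: observed r shrinks with measurement error."""
    return r_true * math.sqrt(rel_x * rel_y)

# Hypothetical numbers: assume a true trait-outcome correlation of 0.55.
# A controlled IQ test (reliability ~0.90) preserves most of it;
# a one-item self-report (reliability ~0.40) does not.
r_iq_test = attenuated(0.55, rel_x=0.90, rel_y=0.95)
r_self_report = attenuated(0.55, rel_x=0.40, rel_y=0.95)
print(round(r_iq_test, 2), round(r_self_report, 2))  # 0.51 0.34
```

Under these assumptions, a single underlying relationship yields two observed correlations of very different sizes, purely because one trait was measured more noisily than the other.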

 

The problems with questionnaire data are especially apparent if we look at the results of the Global Preference Survey for altruism (see map at the top of this post). We see considerable differences even between neighboring countries that are culturally similar. For some reason, Czechs are at the low end of human variation in altruism, whereas Moroccans, Egyptians, and Bangladeshis are at the high end.

 

What’s going on here? The results are based on the following two questions of the Global Preference Survey:

 

1. (Hypothetical situation:) Imagine the following situation: Today you unexpectedly received 1,000 Euro. How much of this amount would you donate to a good cause? (Values between 0 and 1000 are allowed.)

 

2. (Willingness to act:) How willing are you to give to good causes without expecting anything in return? (Falk et al. 2016, p. 15)

 

The first problem is that the respondents will answer the above questions in a way that is viewed favorably by others and by their own conscience. This is called “social desirability bias,” and it’s stronger in a society with a high level of religious belief, like Morocco, than in one with a low level, like the Czech Republic.

 

Second problem: the term “good cause” has different connotations in different places. In the Western world, it generally refers to a non-religious organization that may endorse controversial views on political or social issues. As a result, many Westerners have mixed feelings about donating to “good” causes. This is not the case in the Muslim world, where “good causes” are explicitly Islamic or at least compliant with Islamic teachings. There is a similar problem with the term “donate.” It usually means the act of giving money to an organization, whereas the corresponding word in another language may simply mean “give.”

 

I wrote to Emil Kirkegaard about my criticisms:

 

In my opinion, you're comparing apples and oranges. Cognitive ability is difficult to fake on an IQ test - unless somebody has provided the participant with the right answers. On a questionnaire, anyone can give the "right" answer. It's entirely self-report. It's like measuring intelligence by asking people how smart they think they are.

 

His reply:

 

Your stance on this seems to imply you are unhappy with any kind of comparison of self-rated data vs. objectively scored cognitive data. One difficulty for you here is that people can also cheat on cognitive tests, namely by scoring low on purpose. Furthermore, while you may disapprove, such comparisons are the norm everywhere. I don't know any other person who refuses to do this comparison. There are also other-rated personality data, and these show even more validity than self-rate ones. https://emilkirkegaard.dk/en/?p=6457  There is a lot of research on faking good on personality tests, generally showing that subjects are not very good at this, presumably owing to lack of understanding of how the tests work.

 

I checked out the link he provided. This is what I found:

 

Self-rating measures of personality suffer from not just regular, random measurement error, but also have systematic measurement error (bias): people are not able to rate their own personality as well as other people who know them can. They introduce self-rating method variance into the data, and this variance is not so heritable. There is a twin study that used other-ratings of personality and when they used them or combined them with self-ratings, the heritabilities went up:

 

So with self-report they found H 42-56%, mean = 51%. Other-report: 57-81, mean = 66%, combined: 66-79, mean = 71%. (I used the AE models' results when possible.) In fact, these analyses did not correct for regular measurement error either, so the heritabilities are higher still according to these data, likely into the 80%s area. This is the same territory as cognitive ability. (Kirkegaard 2017)

 

 

Parting thoughts

 

Emil and Anatoly are right when they argue that intelligence is confounded with other mental traits. If, on average, a human population is high in intelligence, it is almost always low in time preference and high in altruism. This doesn't mean, however, that the latter are secondary expressions of intelligence. Many individuals are high in intelligence but low in altruism, sometimes pathologically low. They're called "sociopaths."

 

Few, if any, populations are both sociopathic and highly intelligent because such a combination can succeed only at the level of individuals, and not at the level of an entire population. The same pressures of selection that increase the mean intelligence of a population will also increase the average level of altruism and the average future time orientation. Consequently, all of these traits correlate with each other at the population level.

 

Will we ever be able to parcel out the relative importance of each mental trait in determining national well-being? In other words, will we ever find out how much of national well-being is due to intelligence, how much to time preference, and how much to altruism?

 

Not for a while. First, because these traits correlate with each other at the population level, it would be difficult to separate them and measure the relative importance of each one. They’re confounded. Second, they probably interact with each other. Altruism, for instance, is not a successful group strategy unless other mental or behavioral mechanisms are in place, in particular mechanisms to exclude non-altruists and thereby solve the “free rider problem.” Intelligence, likewise, does not exist in a vacuum.

 

 

References

 

Falk, A., A. Becker, T. Dohmen, B. Enke, D. Huffman, and U. Sunde. (2016). Online Appendix: Global Evidence on Economic Preferences.

https://oup.silverchair-cdn.com/oup/backfile/Content_public/Journal/qje/133/4/10.1093_qje_qjy013/5/qjy013_supplemental_file.pdf


Global Preferences Survey (2020). https://www.briq-institute.org/global-preferences/about  


Kirkegaard, E.O.W. (2017). Getting personality right. Clear Language, Clear Mind.

https://emilkirkegaard.dk/en/2017/02/getting-personality-right/

 

Kirkegaard, E.O.W., and A. Karlin. (2020). National Intelligence Is More Important for Explaining Country Well-Being than Time Preference and Other Measured Non-Cognitive Traits. Mankind Quarterly 61(2): 339-370. http://doi.org/10.46469/mq.2020.61.2.11

https://www.researchgate.net/publication/347563852_National_Intelligence_Is_More_Important_for_Explaining_Country_Well-Being_than_Time_Preference_and_Other_Measured_Non-Cognitive_Traits

 

Saturday, December 26, 2020

Frank Salter and the National Question

 


When did the interests of our elites begin to diverge from ours? (Wikicommons)

 


Kinship ties have historically been weak among Europeans north and west of a line running from Trieste to St. Petersburg. Within that area, and going back at least a millennium, almost everyone would be single for at least part of adulthood, with many staying single their entire lives. In addition, children usually left the nuclear family to form new households, and many individuals circulated among unrelated households, typically young people sent out as servants.

 

This marriage pattern is associated with an equally unusual behavioral pattern: stronger individualism; weaker loyalty to kin; and greater willingness to trust strangers. These tendencies have a psychological basis. Affective empathy is not expressed primarily within intimate relationships, as between a mother and her child. Instead, it is extended to everyone, unless that person is judged to be a moral outcast. Morality itself is less situational and more universal—absolute rules that apply equally to everyone. Finally, the ability to internalize that kind of morality is stronger, as are the feelings of guilt you experience when breaking a rule—even if you are the sole witness to your misdeed.

 

Some say the "Western European Marriage Pattern" began with Western Christianity—what would become Roman Catholicism and, later, Protestantism. By forbidding cousin marriages and by framing morality in terms of universal rules, the Western Church laid the basis for a new civilization (Schulz et al. 2019). Others say this pattern goes farther back in time; the Western Church thus assimilated pre-existing social norms from its northwest European converts (Frost 2017; Frost 2020).

 

Whatever the cause, northwest Europeans possess a behavioral package that has helped them create larger social networks independently of kinship. One example is the market economy. Keep in mind the distinction between "market economy" and "markets." The latter are as old as history, and yet for most of history they were little more than marketplaces—pockets of economic activity limited in time and space, incapable of becoming the main organizing principle of society. That role was filled by kinship. Production of goods for a market was secondary to the reproduction of life for one’s family and kin group.

 

The market economy did not originate in the markets of Greece and Rome. It ultimately goes back to the North Sea communities of the seventh century. There, trade underwent a sustained expansion that would in time eclipse trade on the Mediterranean, eventually creating the current global economy (Callmer 2002, see also Barrett et al. 2004).  

 

Greer (2013a, 2013b) pinpoints the fourteenth century as the time when the North Sea economies began to outpace the rest of the world:

 

[...] the two exceptions are Netherlands and Great Britain. These North Sea economies experienced sustained GDP per capita growth for six straight centuries. The North Sea begins to diverge from the rest of Europe long before the 'West' begins its more famous split from 'the rest.'

 

[...] we can pin point the beginning of this 'little divergence' with greater detail. In 1348 Holland's GDP per capita was $876. England's was $777. In less than 60 years time Holland's jumps to $1,245 and England's to 1090. The North Sea's revolutionary divergence started at this time. (Greer 2013b; see also Greer 2013a and Hbd *chick 2013)

 

The rise of the West is usually attributed to things like the European conquest of the Americas, the invention of printing, the creation of modern financial institutions, the Atlantic slave trade, and the Protestant Reformation. Yet the West was already rising before any of that happened. The ultimate cause was behavioral: the West was better at exploiting the market concept because it could extend the sphere of high trust far beyond small groups of closely related individuals.

 

The rest is ... history. The market economy grew and grew and grew. Initially, its main vehicle was the nation-state; the nations of northwest Europe thus became fierce rivals for commercial dominance. Only later would the market economy be freed of that vehicle. When exactly? At the dawn of the twentieth century, when the elite of the British Empire became fully global in its ambitions? After the two world wars, which left the United States as the dominant power in the global market? During the 1980s, when offshoring of jobs got into full swing?

 

The liquidation of the nation-state was a process, not a point in time. Over the twentieth century our national elites went global and lost any loyalty they once had to the old working class of the West, eventually viewing it as an anachronism. After weighing the costs and benefits, they concluded it should be replaced with cheaper labor from other sources. The old working class has thus been caught in a vice. On the one hand, high-paying jobs are outsourced to low-wage countries; on the other hand, low-wage labor is insourced for those jobs that cannot be outsourced, typically in services and construction. The result? Non-elite individuals have seen their wages stagnate throughout the West, particularly in the United States. And the peoples of the West are being progressively replaced, even in their ancestral homelands.

 

Well, so what? Yes, they created the concept of the market economy, but that concept no longer belongs exclusively to them and no longer requires their existence. So why should they continue to exist?

 

That question has two answers. First, the market economy isn't just a concept. It is also certain ways of being and doing. As northwest Europeans dwindle away and eventually disappear, there will be a shift toward behaviors and mindsets that prevail elsewhere. People will become less trusting of each other, and less sure about what they pay for. Transactions will have to be checked and double-checked, and many will no longer be worth the bother. To keep the market economy from collapsing, governments will become increasingly authoritarian and adopt Orwellian levels of surveillance. Like China, but not as nice.

 

The second answer is existential. It's the answer that explains every living thing on this planet. We were. We are. We will be. Existence is not justified by argument. It is justified by an act of will.

 

 

Frank Salter and the National Question

 

Frank Salter is an Australian political scientist who is probably best known for his book On Genetic Interests: Family, Ethnicity and Humanity in an Age of Mass Migration (2003). In a recent speech, he argued for a new balance between the market economy and our need for kinship. This balance would be provided by “national liberalism,” as defined by the nineteenth-century thinker John Stuart Mill:

 

Where the sentiment of nationality exists in any force, there is a prima facie case for uniting all the members of the nationality under the same government, and a government to themselves apart [...] One hardly knows what any division of the human race should be free to do if not to determine with which of the various collective bodies they choose to associate themselves.

 

This twinning of nationalism with liberalism was common during the nineteenth century. Liberals saw the nation-state as a means to emancipate the individual from the confines of local and regional identities. France was the go-to model. Originally, its people mostly spoke various regional languages; only a minority could speak French. Even the laws differed from one part of the country to another. After the Revolution, a uniform language was imposed through the schools, and the laws too were made uniform. Individuals could now freely circulate and express themselves within a much larger territory. There were also economic benefits: economies of scale, labor mobility, and a more rational distribution of the factors of production.

 

That logic, however, didn't stop with the nation-state. It eventually led to globalism. We like to see globalism as a healthy reaction to the sins of nationalism, particularly the two world wars, yet nationalism was already morphing into globalism before 1914. Look at John Stuart Mill's country. In the early nineteenth century it was, arguably, a nation-state. Most people under British rule were of British origin and shared the same language, culture, and life-ways. When the century came to an end, all of that had changed: the British were now a minority within a vast multinational empire. The country no longer served its people as a vehicle for their survival. It now served an increasingly globalist elite.

 

As Frank Salter points out, nationalism can be diverted into post-national channels. Modern techniques of propaganda can create an artificial feeling of kinship that serves elite interests:

 

... investment in ethnic kin carries risks due to reliance on culture, which is more prone to error than the instinct-laden bonds of family. In his book, Imagined Communities, the Marxist historian Benedict Anderson argued convincingly that national communities are perceived indirectly through cultural channels, such as stories, books, films, press reports, memorials, and so on. The same goes for events that are perceived to enhance or threaten the nation. The sense of fellowship can be extended through cultural devices to elicit bonding with hypothetical kin. Likewise, the realm of antagonisms, of distrust, hatred and combat, can be hugely inflated in scope and intensity in the ethnocentric mind. (Salter 2020)

 

The risks are obvious. The national elite may pursue its self-interest to the detriment of the nation it supposedly serves. Instead of using its cultural dominance to promote common national aims, it may manipulate the nation’s culture to further its own post-national and supra-national ambitions.

 

 

References

 

Barrett, J.H., Locker, A.M. and Roberts, C.M. (2004). Dark Age Economics revisited: The English fish bone evidence AD 600-1600. Antiquity 78 (301): 618-636.

https://www.cambridge.org/core/journals/antiquity/article/abs/dark-age-economics-revisited-the-english-fish-bone-evidence-ad-6001600/898F73D2812CA7E5F5BDF3EA071341F0

 

Callmer, J. (2002). North-European trading centres and the early medieval craftsman. Craftsmen at Åhus, North-Eastern Scania, Sweden ca. AD 750-850+. Uppåkrastudier 6 (Acta Archaeologica Lundensia Ser. in 8, no. 39), 133-158.

 

Frost, P. (2017). The Hajnal line and gene-culture coevolution in northwest Europe. Advances in Anthropology 7: 154-174.

http://file.scirp.org/pdf/AA_2017082915090955.pdf  

 

Frost, P. (2020). The large society problem in Northwest Europe and East Asia. Advances in Anthropology 10(3): 214-234.

https://doi.org/10.4236/aa.2020.103012   

 

Greer, T. (2013a). The Rise of the West: Asking the Right Questions. The Scholar's Stage, July 7.

http://scholars-stage.blogspot.com/2013/07/the-rise-of-west-asking-right-questions.html  

 

Greer, T. (2013b). Another look at the 'Rise of the West' - but with better numbers. The Scholar's Stage, November 20.

http://scholars-stage.blogspot.ca/2013/11/another-look-at-rise-of-west-but-with.html   

 

hbd* chick (2013). Going Dutch. November 29.

https://hbdchick.wordpress.com/2013/11/29/going-dutch/

 

Salter, F. (2020).  Sir Henry Parkes's liberal-ethnic nationalism. Sydney Trads, December 18

https://sydneytrads.com/2020/12/18/sir-henry-parkess-liberal-ethnic-nationalism/  

 

Schulz, J.F., D. Bahrami-Rad, J.P. Beauchamp, and J. Henrich. (2019). The Church, intensive kinship, and global psychological variation. Science 366(6466): eaau5141. https://doi.org/10.1126/science.aau5141

 

Saturday, December 19, 2020

Brain size and family structure in Estonia

 


Estonian schoolchildren (Wikicommons). Estonian children have smaller brains if raised by a biological parent and a step-parent. Therefore, two committed parents are better than one, right? Well, not in this case. Brains aren't smaller in Estonian children raised by a single parent (and no step-parent).

 

 

In Estonia, cranial volume was one of several anthropometric traits that were routinely measured in schoolchildren during the Soviet era. The data didn't suffer from volunteer bias because the measurements were mandatory. Mortality bias was minimal because the subjects were young. This data source is thus better in many respects than data from Western biobanks. It is now being mined by Peeter Hõrak, a University of Tartu professor, to learn more about nature and nurture in human brain development.

 

I discussed this data source in a previous post (Frost 2020). One problem is that the study population is not as homogeneous as it may seem. In fact, 16% of the fathers and 7% of the mothers were not Estonian (Hõrak 2020). This factor might explain some differences in the data, especially changes over time.

 

 

The latest study

 

This data source has now been used to see whether the brain size of children is influenced by family structure, specifically whether the child was raised by biological parents or by step-parents. The data came from 822 children born between 1980 and 1987 in Tartu, Estonia, who were measured at around 14 years of age.

 

The children had significantly larger brains when the household had both biological parents:


Cranial volume was related to family structure and paternal education. Children living with both birth-parents had larger heads than those living in families containing a step-parent. [...] our findings suggest that families including both genetic parents provide non-material benefits that stimulate predominantly cranial growth. (Lauringson et al. 2020)

 

That's what we read in the Abstract. The brain was bigger on average in children who had been raised by both biological parents, rather than by a biological parent and a step-parent, presumably because a step-parent contributes less to the child's upbringing.

 

That interpretation is contradicted, however, by the Results section. It turns out that there was no difference in brain size between children raised by both biological parents and children raised by a single parent (in almost all cases the biological mother). The brain was smaller only in children raised by a biological parent and a step-parent:

 

At the same time, cranial volumes of children living with a single parent were similar to those living with two providers, even though the former reported on average lower resource availability and more frequent meat shortage. Associations between family type and cranial volume thus cannot be explained on the basis of dilution of material resources. (Lauringson et al. 2020)


Differences in family structure also failed to correlate with differences in the child's height. If life in a stepfamily had somehow harmed the child's development, that harm was much less observable in overall body growth than in cranial volume.

 

So what's going on here? Keep in mind two things about Estonian society of the late 20th century:

 

- A single parent was almost always a woman, often a widow who chose not to remarry, either because she still felt attached to her deceased spouse or because she considered the available men more trouble than they were worth.

 

- A step-parent could be of either sex. A stepfather often took over from a man who had sired the child out of wedlock or during a short-lived marriage.

 

Thus, on average, the biological father was a different kind of man in the two situations. In the first situation, he was usually the sort of man who would remain with the mother of his child until his death. In the second, he was often the sort of man who would leave the mother of his child once a more interesting woman came into view. One may presume there are differences in genetic quality between the two kinds of men. This hypothesis is actually advanced in the study:

 

An alternative (yet not mutually exclusive) explanation to the observed associations between family type and cranial volume of children would be that parents prone to remarrying possess on average (genetically) smaller heads than those prone to avoiding divorce or remaining single after divorcing. Such a scenario would assume robust genetic correlations between cranial volume and personality traits related to marriage stability. Twin studies have shown that genetic factors account for 13-53% of the variation in divorce [...], and if personality traits associated with a propensity to divorce are genetically correlated with cranial volume or its growth rate, one would detect smaller heads of children growing up in divorced/separated families. Such an explanation would be consistent with the predictions of life history theory, assuming that qualities characteristic of slow pace of life-including high somatic investment into body and brain growth and propensity for relatively low mating effort (in relation to parenting effort)-have coevolved (and cluster) with higher mental abilities and conscientious and risk-averse personality traits [...]. Consistent with this view are also the findings in our sample where fathers with only primary education were shorter and more prone to divorce/separate than others. (Lauringson et al. 2020)

 

We see a similar problem of interpretation with the relationship between father absence and early sexual maturity in daughters. Using a large sample of 1,247 daughters, Surbey (1990) found that daughters with an absent father matured four to five months earlier than those who lived with both parents continuously and seven months earlier than those with an absent mother. Surbey argued that the presence of a strange male accelerates sexual maturation. In other words, at a subconscious level, the girl does not recognize the man as a father. She recognizes him as a potential mate, and her body gears up for procreation.

 

This hypothesis was challenged by Mendle et al. (2006), who examined the daughters of twin mothers.

 

In a pair of twin mothers of which only one raises her children with a stepfather, the offspring of both twins are equally likely to display early age of menarche. It therefore appears that some genetic or shared environmental confound accounts for the earlier association found in female children living with stepfathers.

 

It seems, then, that people who end up as step-parents are, on average, genetically different from other parents. They tend to have the mental and behavioral characteristics of a "fast" life history.

 

 

References

 

Frost, P. (2020). Declining intelligence in the 20th century: the case of Estonia. Evo and Proud, August 3 http://evoandproud.blogspot.com/2020/08/declining-intelligence-in-20th-century.html

 

Hõrak, P. (2020). Personal communication.

 

Lauringson, V., G. Veldre, and P. Hõrak. (2020). Adolescent cranial volume as a sensitive marker of parental investment: The role of non-material resources? Frontiers in Psychology 11: 602401. https://doi.org/10.3389/fpsyg.2020.602401

 

Mendle, J., E. Turkheimer, B.M. D'Onofrio, S.K. Lynch, R.E. Emery, W.S. Slutske, and N.G. Martin. (2006). Family structure and age at menarche: A children-of-twins approach. Developmental Psychology 42(3): 533-542.

 

Surbey, M.K. (1990). Family composition, stress, and the timing of human menarche. In T.E. Ziegler & F.B. Bercovitch (eds.) Socioendocrinology of Primate Reproduction, pp. 11-32, New York: Wiley-Liss Inc.