May 19, 2021
What matters most: nature or nurture, genes or environment, ancestry or upbringing? The conventional wisdom argues for the malleable latter, even though twin and adoption studies typically find more substantial evidence for nature than for nurture. Yet, I would like to encourage hereditarians not to get overconfident just because they so soundly defeat the politically ascendant nurturists on the rare occasions when they can lure them into scientific debate. Today, I want to point out a limitation of twin studies that opens up the likelihood that Hegel’s notion of the zeitgeist (spirit of the age) also matters.
I am an extremist only in the sense of being an extreme moderate. For example, on the central intellectual question of the human sciences, nature versus nurture, my default assumption since the 1990s has been to begin by guessing that, for whatever trait is under consideration, genetics and the environment are about equal in influence. I shift away from that evenhanded standpoint only when there is strong evidence one way or the other.
In contrast, the dogma of the day argues for the extremist position that all behavior must be determined by shadowy environmental forces, such as “systemic racism,” even though the rapidly improving genetic sciences find ever more evidence for the power of heredity.
Yet, the older I get, the more I feel compelled to warn realists to avoid genetic triumphalism because history has many cunning passages. In the very long run, time can deal us big surprises.
Old-fashioned twin and adoption studies have many strengths.
Twin research cleverly exploits the fact that twins come in two genetic varieties: identical and fraternal. Among twins raised in the same home, the identical twins wind up behaving much more alike on most measures than do the nonidentical twins.
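To make that logic concrete, here is a minimal sketch of Falconer’s classic back-of-the-envelope formula, which estimates heritability from how much more correlated identical (MZ) twins are than fraternal (DZ) twins. The correlations below are invented round numbers for illustration, not figures from any particular study.

```python
# Illustrative sketch of Falconer's formula: heritability estimated from
# how much more similar identical (MZ) twins are than fraternal (DZ) twins.
# The correlations are made-up round numbers, not data from any real study.

def falconer(r_mz, r_dz):
    """Rough decomposition of trait variance from same-trait twin correlations."""
    h2 = 2 * (r_mz - r_dz)   # additive genetic share (heritability)
    c2 = r_mz - h2           # shared-environment share
    e2 = 1 - r_mz            # unshared environment plus measurement error
    return h2, c2, e2

h2, c2, e2 = falconer(r_mz=0.75, r_dz=0.40)
print(f"heritability ~{h2:.2f}, shared environment ~{c2:.2f}, unshared ~{e2:.2f}")
# -> heritability ~0.70, shared environment ~0.05, unshared ~0.25
```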
In contrast, adoption studies follow singletons who are raised by different families from their biological parents. But on many measures, such as IQ, adoptees turn out more like their parents by nature than their parents by nurture. On other traits, however, such as language, religion, cuisine, and perhaps occupation, they grow up more like their adoptive parents.
Finally, there is the holy grail of human science observational studies: the rare instances of identical twins raised apart. Several hundred such cases have now been studied over the past century.
The results tended to astonish mid-century social scientists raised on the pervasive social-engineering ideologies of that unfortunate era. For example, the 2018 documentary Three Identical Strangers recounted the postwar experiment by the Jewish Board of Family and Children’s Services to split up for adoption one set of triplets and at least four sets of identical twins. Researchers visited each child weekly for years to document Freud’s theory of the importance of toilet-training techniques and similar intellectual delusions of the age. Nonetheless, the finding that kept leaping out at the Jewish Board’s surprised interviewers was that the separated twins and triplets, despite their fairly different family settings, behaved extraordinarily alike.
Now, most cases of twins raised more or less apart are less clear-cut than this notorious example, in which, for progressive ideological reasons, the triplets had never even heard of their brothers until they met at age 19. A more common separated-twin story might involve an ugly divorce in which the ex-husband and the ex-wife each take one child to spite the other, then dump their kid on relatives to raise, relatives who can afford to have the children spend only a week together every few years.
Nor have separated twins typically been raised in radically different environments such as great wealth and poverty. Reasons for splitting twins often involve adoptive parents having enough money for one child but not for two, so there are few Prince and the Pauper cases to examine, because a truly rich couple would likely have taken both twins. Instead, many separated twins grew up in fairly similar circumstances: not impoverished but constrained.
But rather than rehash the often argued-over fine points of twin studies, let me note a truly fundamental limitation: Twins can be separated in space but not in time.
That’s important because some traits are strongly influenced by when you are born.
For example, twin studies find that obesity is 70 or 80 percent heritable. And yet, Americans are a lot fatter than they were just forty years ago.
Clearly, we haven’t evolved to be more obese in just a generation or two. The environment must have changed notably.
How? In many different ways, but one I would focus upon is that when I was in marketing research in the 20th century, several of our big clients had long-term goals of someday putting their tasty salted snacks and carbonated beverages within arm’s reach of all Americans at all times. They haven’t quite gotten there yet, but they’ve made enormous efforts to persuade us to consume ever more of their sweet or salty products.
In 1981, identical twins raised apart in different circumstances would likely be, by current standards, skinny, while identical twins raised apart today would probably seem pretty chunky in the eyes of 1981 Americans. Times have changed.
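A toy simulation, with made-up variance figures chosen only to illustrate the point, shows how a trait can look roughly 75 percent heritable within each era even while the whole population shifts dramatically between eras, because the environmental change (snacks within arm’s reach) hits nearly everyone in a given era alike.

```python
# Toy simulation, not real data: within-cohort heritability can stay high even
# while the population mean shifts a lot between eras, because the era-wide
# environmental change affects nearly everyone in that cohort equally.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
genes = rng.normal(0, np.sqrt(0.75), n)   # genetic variance ~0.75
noise = rng.normal(0, np.sqrt(0.25), n)   # individual environment ~0.25

for era, shift in [("1981 food environment", 0.0), ("2021 food environment", 2.0)]:
    fatness = genes + noise + shift       # same genetic endowments, different era
    h2 = genes.var() / fatness.var()      # share of variance explained by genes
    print(f"{era}: mean = {fatness.mean():+.2f}, heritability ~ {h2:.2f}")

# Both eras show heritability around 0.75, yet the 2021 cohort is far heavier
# on average: high heritability within a generation says nothing about how much
# the environment has shifted between generations.
```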
Yet, you can’t do a study of twins separated in time rather than space.
Granted, if you could clone humans, you could do all sorts of interesting experiments with, in effect, identical twins raised in different eras. But you can’t do that yet. And let’s not do that. Among numerous reasons not to carry out mad-scientist experiments on future clones is that one thing twin researchers have discovered for sure is that separated identical twins enjoy getting back together. We hear a lot about sibling rivalry, but sibling revelry is also a thing.
As well as obesity, we’ve also seen, on a more positive note, average height and raw IQ scores go up over the generations. The curious phenomenon of raw IQ scores rising the equivalent of two or three points per decade, requiring that scoring periodically be made harder to keep the average at 100, was named the Flynn Effect in The Bell Curve in honor of the late philosopher James Flynn, who brought it to scientific attention.
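To spell out the renorming arithmetic with hypothetical numbers (the raw scores below are invented for illustration, not taken from any real test): an IQ score is just a raw score restandardized so that the current population averages 100 with a standard deviation of 15, so as raw performance drifts upward, yesterday’s average performance maps to a below-100 score on today’s norms.

```python
# Hypothetical raw-score numbers, invented for illustration: an IQ score is a
# raw score standardized against the current population (mean 100, SD 15).

def iq(raw_score, current_raw_mean, current_raw_sd):
    return 100 + 15 * (raw_score - current_raw_mean) / current_raw_sd

# Suppose the average raw score rose from 50 in 1981 to 58 in 2021, with a raw
# standard deviation of 10 -- roughly 3 IQ points of gain per decade.
print(iq(50, current_raw_mean=50, current_raw_sd=10))  # 100.0 on 1981 norms
print(iq(50, current_raw_mean=58, current_raw_sd=10))  #  88.0 on 2021 norms

# The same performance earns a lower score on the newer norms, which is why
# test publishers periodically renorm the scoring to keep the average at 100.
```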
Some of the reasons for increased IQ scores in the 20th century doubtless overlap with the reasons for increased height, such as better nutrition and fewer diseases. But much remains mysterious.
The Flynn Effect is usually considered an embarrassment for IQ testing, but it can also be viewed as its vindication. Tests with their roots in the early 20th century could easily have become less relevant as the generations went by, just as many innovations of 1905 are out of date today.
Instead, IQ tests anticipated the cognitive direction in which the modern world would move, as subsequent generations all over the world kept getting better at taking IQ tests. Presumably, people got more practice in their daily lives at the kinds of thinking that IQ tests measure, while concentrating less on the older modes of thought that IQ tests are less good at measuring.
I don’t see much reason to believe that newer generations particularly focused on gaming IQ tests. Instead, people increasingly had to deal in daily life with the kind of problems long posed by cognitive tests. “Life is an IQ test,” said Johns Hopkins psychologist Robert Gordon, and that became increasingly true over the generations.
Tellingly, the Flynn Effect has been smallest on IQ subtests that measure traditional skills such as vocabulary. Instead, scores have gone up the most on the subtests intentionally designed to be less culturally biased, such as digit-symbol coding. Indeed, the nonverbal and downright alien-looking Raven’s Progressive Matrices, a science-fiction-like IQ test devised in the 1930s to be equally usable all over the world, has seen the largest Flynn Effect.
In other words, people aren’t getting much smarter at skills that, say, Charles Dickens would have instantly recognized as markers of intelligence. But they are getting better at cognitive tasks that would have baffled Dickens, like using your television remote control to set up your Netflix account.
My speculation has been that the Flynn Effect stems from a long-term trend during the Information Age, going back to Gutenberg in the 1450s and accelerating with the digital revolution, toward inculcating mental skills such as book-learning and machine logic that IQ tests tend to probe rather than the older, more intuitive ways of thinking that are more difficult for a standardized paper-and-pencil test to evaluate.
The economic historian Andrea Matranga of Chapman University recently conjectured on Twitter about how the first few hundred years of the spread of printed books eventually paved the way for the subsequent Industrial Revolution.
I wonder if the printing press/encyclopedia thing moved processes towards the more repeatable/explainable, and in some way was a precursor to mechanization?
E.g., say that to manufacture widgets, there were two processes, a more efficient process a) that required a lot of hands-on tutoring by an experienced worker, and a more “put peg A in hole B” type process which was initially inferior.
In other words, all sorts of skills had been passed down through hands-on learning under the apprentice system, but extremely intelligent Encyclopedists like Diderot had to figure out how to describe three-dimensional industrial processes on two-dimensional printed pages. This anti-intuitive, Aspergery approach wasn’t hugely useful at first, but after the development of the steam engine it could be put to good use, rather like how George Boole’s binary logic had little practical use until Claude Shannon, a century later, picked up the trillion-dollar bill lying on the sidewalk by pointing out that Boolean algebra was ideal for designing electrical switching circuits.
Because the encyclopedia/manual writers couldn’t really just write “try different levels of pressure until it feels right” they write down process B, which then spreads more and is improved through learning by doing.
So eventually there are a lot of very efficient manual processes which are already broken up into a series of discrete steps, and are largely independent of operator feel. This means that when suitable power sources come around, they are ripe for automation.
I suspect the Flynn Effect of rising raw IQ scores in the 20th century was in part a continuation of this post-Gutenberg de-emphasis on thinking in hands-on terms toward thinking in ways that could be written on a piece of paper or, later, a computer screen. Pre-modern humans like Shakespeare thought really hard about how to persuade other humans, but modern humans have increasingly had to think about how to instruct robots, which requires different modes of thought.
Likewise, IQ tests had to ask questions whose answers didn’t depend upon feel or mood or eloquence, but upon a sort of machine logic. The answers had to be right or wrong, rather like programming a computer, not that anybody in the golden age of IQ test design, 1905–1950, programmed computers yet, other than, say, Shannon or Alan Turing.
Nobody expected the Flynn Effect of rising average raw scores on IQ tests. But IQ test pioneers like Stanford’s Lewis Terman somehow anticipated the dominant cognitive style of the future. Strikingly, this was a future that Terman’s son, Stanford dean of engineering Fred Terman (often called the Father of Silicon Valley), and Fred’s students Bill Hewlett and David Packard helped launch. We live and reason in a brave new world foreseen and created by the Termans.
That history can unsettle conclusions seemingly dictated by population genetics does not prove that behavioral disparities can be made to vanish simply by taxing and spending hard enough.
But history should induce some humility and restraint in our beliefs.