The Anthropocentrism Trap, Part 1
What 19th-century skull measurements reveal about how we assess other minds
In the 1830s and 1840s, Samuel George Morton — a Philadelphia physician and one of America’s most respected scientists — assembled the world’s largest collection of human skulls. His project was ambitious and, by the standards of his era, rigorously scientific: measure cranial capacity across racial groups, let the data speak, and settle the question of intellectual hierarchy once and for all.
Morton was meticulous. He published his raw data in full and explained every procedure. He was widely regarded as the man who would rescue American science from unsupported speculation. And his conclusion was unequivocal: the measurements proved the intellectual superiority of white males.
Half a century later, Paul Broca — the brilliant French neurologist whose name still graces a region of the human brain — pursued essentially the same project with even greater statistical sophistication. Broca understood how to correct for confounding variables: differences in body size, age, health. He was careful, methodical, and entirely sincere in his commitment to objectivity. He reached the same conclusion Morton had.
But both men were wrong. Not because their facts were fabricated but because something subtler was operating. When the evolutionary biologist Stephen Jay Gould reexamined Morton’s data in the 1970s, he discovered that the raw measurements — when recalculated without Morton’s selective groupings and convenient omissions — revealed no significant differences between races at all. Morton hadn’t committed fraud. He had done something more insidious: he had unconsciously arranged his data to confirm what everyone in his society already knew to be true.1
Gould found the same pattern in Broca, but more elegantly concealed. Broca’s facts were reliable, but as Gould put it, they were gathered selectively and then manipulated unconsciously in service of prior conclusions. Conclusions came first. The data were made to follow. And the result achieved not only the blessing of science but the prestige of numbers.
There is a revealing postscript. In 2011, a team led by the anthropologist Jason Lewis remeasured over 300 of Morton’s original skulls and argued that Gould himself had been selective — that Morton’s physical measurements were largely accurate, and that it was Gould who had manipulated the analysis to fit his narrative. The study received wide press coverage, with headlines declaring that the great critic of scientific bias had been guilty of it himself. But in 2016, Michael Weisberg and Diane Paul published a detailed rebuttal showing that Lewis’s team had misunderstood Gould’s argument: Gould had never claimed Morton’s later measurements were inaccurate, only that the pattern of differences between Morton’s earlier and later methods revealed unconscious bias in how the measuring was done. The dispute remains active.2
The episode is worth noting not because it settles the question in Gould’s favor but because it illustrates how disputes about bias are themselves theory-laden. The pattern doesn’t spare anyone — not Morton, not Gould, not Gould’s critics. That alone should give us pause.
It’s not that Morton and Broca were bad scientists. By the standards of their era, they were exemplary ones. But they operated inside a framework so pervasive and deeply ingrained that it was invisible to them. In their world, the intellectual inferiority of non-white people was self-evident — not a hypothesis to be tested but a background assumption that shaped which questions seemed natural, which data seemed relevant, and which conclusions seemed reasonable. Their a priori convictions were powerful enough to direct their work along pre-established lines without conscious awareness.
This is the pattern Gould documented: not fraud but framework. Not dishonesty but the far more insidious and dangerous phenomenon of sincere scientists holding society’s deepest assumptions beneath the threshold of their own awareness.
Walk into any aquarium with a dolphin show and watch the audience. The delight is real, and so is the assumption behind it: these are clever animals performing tricks. For most viewers, the possibility that the dolphin might be a person in captivity simply doesn’t present itself as a live question — not because it’s been considered and rejected, but because it falls outside their interpretive frame. That is what an invisible framework feels like from the inside.
The philosopher Thomas White makes this uncomfortable observation: it is as obvious today that nonhuman beings are “just animals” as it was obvious in the 19th century that non-white and female humans were inferior. In our culture, it is self-evident that animals lack genuine self-awareness, that they have limited cognitive and emotional capacities, and that they are fundamentally different from us in kind rather than degree. Because they’re so different, we don’t need to worry much about harming them.3
The parallel is structural, not moral: both cases involve background assumptions shaping what counts as evidence, not comparable histories of oppression. And this isn’t a claim that species differences are marginal or illusory — they’re real and important. It’s the subtler point that our framework shapes how we interpret those differences. When differences are found between human and dolphin brains, the default interpretation runs in one direction: dolphins are deficient. The possibility that the differences might be due to alternative, comparably sophisticated forms of awareness — that a different evolutionary path through a different medium might produce different but equally complex cognitive solutions — rarely surfaces as a serious scientific hypothesis. Not because scientists are dishonest, but because the framework makes one interpretation feel like objectivity and the other feel like romantic projection.
Gould himself warned about exactly this dynamic. His study of Morton and Broca led him to a sobering general conclusion: science has never operated free of the attitudes that predominate in the societies in which scientists live. The expression of scientific opinion on matters touching deep social commitments is, Gould argued, as much a political act as a scientific one — and scientists tend to provide objectivity for what society at large wants to hear.
Could contemporary science be doing something similar when it studies dolphins?
White examined the scientific literature and found cases worthy of scrutiny — not of fraud or overt bias, but of the same subtle pattern Gould documented: factually accurate claims framed in ways that strongly imply conclusions the data don’t support.
Consider one example. In an article arguing that dolphins fail to show evidence of advanced intelligence, the scientist Margaret Klinowska describes the dolphin brain as “actually quite primitive,” retaining structures found in hedgehogs and bats, with its cortical regions “stuck at a stage” representing the most primitive condition found in land mammals.4 The basic neuroanatomical facts she cites are correct. But the framing — primitive, stuck, lacking — carries an unmistakable implication: these brains could not support advanced cognition. What’s missing is any acknowledgment that the dolphin brain’s evolutionary history diverged from terrestrial mammals roughly fifty million years ago, and that what looks “primitive” by primate standards might represent a different organizational strategy — one that arrived at sophisticated cognitive outcomes through a completely independent path.
Or consider the commentary by Lester Aronson and Ethel Tobach on dolphin brain anatomy. They describe the anatomical level of the dolphin brain as “considerably below” that of higher primates and “far below the human level.” But the data they cite only show that the dolphin brain is different from the human brain — not that it is below it. They select one metric — the corticalization index — that places dolphins low, while ignoring three other ratios from the same research that show rough equivalence between dolphins and humans. They close with a memorable flourish borrowed from another researcher: dolphins appear to be “three servants short” of Kipling’s six honest serving men of learning and intellect. Witty, quotable, and unsupported by the evidence presented.5
In neither case do scientists explicitly claim that dolphin brain features prove limited cognition. The bias operates through tone, through selective emphasis, through which facts are highlighted and which are quietly omitted. This is not the heavy-handed pronouncement of a 19th-century craniologist declaring skull size proves racial hierarchy. It is something more subtle and, for that reason, harder to detect — exactly the kind of bias that operates most effectively when it operates beneath awareness.
White puts the question directly: Why do conflicting data not simply lead these scientists to a noncommittal stance? Why, when the evidence is genuinely mixed, does the interpretation consistently tilt toward minimizing dolphin cognitive sophistication? Perhaps, he suggests, we are seeing the same phenomenon Gould described — scientists operating in good faith who are nonetheless influenced by their society’s overwhelming belief that only humans possess advanced cognitive and emotional lives.
The cetacean neuroscientist Lori Marino has made a similar observation from within the field itself: that comparative metrics routinely treat the primate brain as the evolutionary endpoint against which all other brains are measured — a methodological choice so deeply embedded that it rarely registers as a choice at all.6
The reason this matters extends well beyond academic debate. How we interpret evidence about other minds has enormous consequences for cetaceans. Thousands of dolphins die each year in fishing operations. Hundreds live in captivity for entertainment. Human activity has pushed orca populations in Puget Sound to near extinction. These practices rest on assumptions about what dolphins are and what they can experience — assumptions shaped by the same scientific frameworks that may carry the biases we’ve been examining.
If Morton and Broca teach us anything, it’s that the most dangerous biases are the ones we can’t see — the ones that feel like objectivity because everyone around us shares them. The question isn’t whether scientists could be affected by species bias when studying cetaceans. It’s whether we have any reason to believe any of us are immune.
We don’t. But the case isn’t complete. If the anthropocentrism trap distorts how we interpret evidence about other minds, then the next question becomes urgent: what does the evidence actually show when the distortion is removed? What has the bias been concealing?
That’s the subject of Part 2.
This essay draws on Thomas I. White’s ‘What Is It Like to Be a Dolphin?’ and Toni Frohoff’s ‘Lessons from Dolphins,’ both in Whales and Dolphins: Cognition, Culture, Conservation and Human Perceptions (2013).
Stephen Jay Gould, The Mismeasure of Man (New York: W. W. Norton, 1981).
Jason E. Lewis et al., “The Mismeasure of Science: Stephen Jay Gould versus Samuel George Morton on Skulls and Bias,” PLoS Biology 9, no. 6 (2011). For the rebuttal, see Michael Weisberg and Diane Paul, “Morton, Gould, and Bias: A Comment on ‘The Mismeasure of Science,’” PLoS Biology 14, no. 4 (2016).
Thomas I. White, “What Is It Like to Be a Dolphin?” in Whales and Dolphins: Cognition, Culture, Conservation and Human Perceptions (Taylor and Francis, 2013), 188-206.
Margaret Klinowska, “How Brainy Are Cetaceans?” reprinted in Oceanus 32, no. 1 (Spring 1989): 20.
Lester R. Aronson and Ethel Tobach, “Conservative Aspects of the Dolphin Cortex Match Its Behavioral Level,” Behavioral and Brain Sciences 11, no. 1 (1988): 89-90.
Lori Marino, “Convergence of Complex Cognitive Abilities in Cetaceans and Primates,” Brain, Behavior and Evolution 59, no. 1-2 (2002): 21-32.
