This is part of a series of posts summarizing chapters from my evolving essay, “Worlds of Awareness: Cetaceans, Evolution, and Cultures of Consciousness.” Each post presents the core argument of one or more chapters as a standalone piece. For more about the project and why everything here is free, see the About page.
Before examining how consciousness manifests across species — the subject of the next two chapters — we need to acknowledge something that may have been nagging at you since the previous posts. Many of the terms used there are strange. “Psychophysical continuum.” “Manifestation.” “Differentiation.” They seem strange not because the concepts are confused but because four centuries of mechanistic thinking have shaped our vocabulary. We’re trying to describe participatory wholeness using language built for mechanical parts.
You may have experienced this difficulty yourself — trying to articulate a moment of recognizing consciousness in another being, or to explain why the mechanistic account feels inadequate, and finding that ordinary language fails. The words sound either too mystical or too vague. This isn’t your failure. It’s a limitation of the language itself.
Deeper Than Vocabulary
The problem runs deeper than missing words. We cannot adequately define even the most basic terms. What is “matter”? Physics describes its structure, relations, and behavior with extraordinary precision but remains silent about its intrinsic nature. We know matter exists; we cannot say what it *is*, only what it does.
“Consciousness” is worse. Any definition proves circular. “What it’s like from inside” presumes consciousness to understand the definition. “Subjective experience” defines one mystery with another. “Phenomenal awareness” uses synonyms without explaining. The hard problem is partly a language problem: we’re trying to describe from outside what only exists from inside.
But the deepest constraint may be structural. English subject-verb-object grammar embeds a particular metaphysics: things (nouns) that do actions (verbs) to other things (objects). This structure makes it nearly impossible to discuss consciousness without treating it as either a thing that exists, a process that happens, or a property that things have. But what if consciousness is none of these? What if, as the dual-aspect frameworks discussed in the previous post suggest, both mental and physical features differentiate from something that the subject-object grammar simply cannot express?
This grammatical constraint is not universal. In Chinese, 心 (xīn) — often translated as “heart-mind” — functions as both noun and verb, allowing more fluid expression of consciousness as activity rather than substance. This linguistic integration of emotion and cognition isn’t merely philosophical preference — it may better capture biological reality. As we’ll see in the cetacean chapters, odontocete brains display massive paralimbic development deeply integrated with cognitive functions, challenging the Western assumption that advanced intelligence requires emotional suppression. The Chinese language that refuses to separate heart from mind may recognize what Western categories systematically obscure.
Indigenous languages often go further. The Blackfoot language, as physicist F. David Peat documented, structures reality primarily through verbs rather than nouns, enabling speakers to express ongoing processes and relationships more readily than English allows. Botanist Robin Wall Kimmerer describes how Potawatomi employs a “grammar of animacy” — linguistic structures that distinguish beings by whether they demonstrate agency and reciprocity rather than by arbitrary human categories. In English, we grammatically equate a living orca with a rock, both rendered as “it.” Potawatomi grammar makes such conflation structurally impossible.
The difficulty articulating consciousness-inclusive frameworks in English reflects specific Western categorical choices, not universal cognitive limits. Other traditions that never reified the mind-matter split don’t struggle to overcome it.
A Revealing Split
“Consciousness” and “conscience” were historically the same word — and in some languages, still are. Their divergence in English is a conceptual artifact, one that reveals a major shift in how we understand the mind.
In Latin, *conscientia* meant “knowing together” — a knowing that witnesses itself, carrying inherent moral weight. To be conscious was to be accountable to what one knows. Medieval philosophers didn’t distinguish between awareness and moral sensitivity because the distinction seemed artificial. French preserved this unity: *conscience* still covers both meanings, requiring context to distinguish them. The language never forced the split.
English did split them. As C.S. Lewis documented in *Studies in Words*, the Latin *conscientia* originally meant acting as a witness to oneself — both perceiving and being accountable for what one perceives. But the mechanistic turn required a neutral term for studying mind as mere object rather than moral witness. “Consciousness” emerged as the technical term stripped of moral connotations — pure awareness, supposedly value-free. John Locke formalized this in 1690, appropriating “consciousness” to define personal identity based on memory while discarding the moral witness that *conscientia* had always carried.
German manufactured an even more deliberate separation. Christian Wolff and other 18th-century German philosophers coined *Bewusstsein* (consciousness) as a technical term explicitly distinct from *Gewissen* (conscience), though both share the root *wissen* (to know). Thomas Metzinger observes that this linguistic engineering wasn’t accidental — it reflected and reinforced the mechanistic view that treated awareness as neutral observation rather than engaged participation.
This matters because what contemplative practices develop across traditions isn’t merely “awareness” in the modern, neutered sense — a neutral registering of stimuli. They cultivate something richer: what Metzinger calls “mental autonomy,” what Buddhist traditions call clarity and compassion arising together, what Indigenous frameworks recognize as right relationship. The practices aim at integrated development of awareness and responsibility, perception and care — consciousness *and* conscience. The difficulty reintegrating these concepts may reveal not their true separateness but our conceptual impoverishment.
The Quantum Precedent
The history of quantum mechanics provides a real-world case where exactly this kind of language crisis arose — and was navigated productively.
In the early twentieth century, physicists discovered phenomena that could not be captured in ordinary language. Electrons behaved in ways that violated not only classical physics but basic assumptions embedded in our vocabulary. Werner Heisenberg observed that words like “position” and “velocity” did not carry the same meaning in the realm of atoms. Certain quantum concepts, he noted, were “derivable neither from our laws of thought nor from experiment.” The very categories available in ordinary language — derived from macroscopic experience — simply did not apply.
Niels Bohr emphasized this predicament repeatedly: we are forced to use the language of classical physics simply because we have no other language in which to express the results. Even when physicists knew classical concepts were inadequate, they had no alternative vocabulary. Quantum mechanics found a way forward through mathematical formalisms that could express relationships words could not capture — but even mathematical formalism required interpretation, and deep disagreements about what the mathematics meant persist to this day.
We face a similar situation with consciousness, though our predicament is in one respect more severe. Mathematical formalism won’t help us here, because subjective experience is not a mathematical structure. But we can learn from how quantum physicists handled language limits: acknowledge the inadequacy, use terms carefully and often symbolically, and remain explicit about where language breaks down.
Working Within Constraints
We cannot escape language’s constraints, but we can work responsibly within them. Scientific terminology itself encodes physicalist assumptions more broadly than we usually notice: “tropism” rather than “response,” “mechanism” rather than “organized system,” “instinct” rather than “intelligence” — each choice predetermines what interpretations seem reasonable. This is why consciousness-inclusive frameworks sound strange: not because they lack rigor but because we lack vocabulary.
The terms this essay uses — consciousness, manifestation, psychophysical ground — are necessary tools, not perfect descriptions. At key moments, we’ll ask for something beyond purely linguistic understanding: direct recognition of your own consciousness, intuitive grasping of why mechanistic reduction feels inadequate. This isn’t mysticism or failure of argument. It’s acknowledging that some knowing comes through participation rather than verbal description.
The risk of using unfamiliar language is real. But the greater risk is staying trapped in vocabulary that seems precise only because it’s familiar — mistaking linguistic habit for transparency to reality.
This post summarizes Chapter 3 of “Worlds of Awareness.” The next posts examine the evidence: how consciousness manifests across evolution, with particular attention to cetacean brains and what they reveal about the nature of mind itself.
I’m looking for critical readers willing to engage with full chapters — particularly people with backgrounds in philosophy of language, linguistics, or cross-cultural epistemology, but also thoughtful readers who can tell me where the argument lost them. You can comment below or reach me at rsm at 137fsc dot net.
