What if AI treats humans the way we treat animals?

By now, you may have heard — possibly from the same people creating the technology — that artificial intelligence might one day kill us all. The specifics are hazy, but then, they don’t really matter. Humans are very good at fantasizing about being exterminated by an alien species, because we’ve always been good at devising creative ways of doing it to our fellow creatures. AI could destroy humanity for something as stupid as, in philosopher Nick Bostrom’s famous thought experiment, turning the world’s matter into paper clips — much like humans are now wiping out our great ape cousins, orangutans, to cultivate oil palms for the palm oil in junk foods like Oreos.

You might even say that the human nightmare of subjugation by machines expresses a sublimated fear of our treatment of non-human animals being turned back on us. “We know what we’ve done,” as journalist Ezra Klein put it on a May episode of his podcast. “And we wouldn’t want to be on the other side of it.”

AI threatens the quality that many of us believe has made humans unique on this planet: intelligence. So, as author Meghan O’Gieblyn wrote in her book God, Human, Animal, Machine, “We quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.” We tell ourselves that even if AI one day becomes smarter than us, we, unlike the machines, have subjective experience, which makes us morally special.

The obvious problem with this, though, is that humans aren’t special in this way. Non-human animals share many of our capacities for intelligence and perception, yet we’ve refused to extend to them the generosity we would hope to receive from AI. We rationalize unmitigated cruelty toward animals — caging, commodifying, mutilating, and killing them to suit our whims — on the basis of our purportedly superior intellect. “If there were gods, they would surely be laughing their heads off at the inconsistency of our logic,” O’Gieblyn continues. “We spent centuries denying consciousness in animals precisely because [we thought] they lacked reason or higher thought.”

Why should we hope that AI, particularly if it’s built on our own values, treats us any differently? We might struggle to justify to a future artificial “superintelligence,” if such a thing could ever exist, why we’re deserving of mercy when we’ve failed so spectacularly to offer our fellow animals the same. Worse still, the dehumanizing philosophy of AI’s prophets is among the weakest possible starting points from which to defend the value of our fleshy, living selves.

Transhumanism is built on a hatred of animality

Although modern humans defend the exploitation of non-human animals in terms of their assumed lack of intelligence, this has never been the real reason for it. If we took that argument at face value and treated animals according to their smarts, we would immediately stop factory-farming octopuses, which can use tools, recognize human faces, and figure out how to escape enclosures. We wouldn’t keep elephants in solitary confinement in zoos, recognizing it as a violation of their rights and needs as smart, caring, deeply social creatures. We wouldn’t psychologically torture pigs by immobilizing them in cages so small they can’t turn around, condemning them to a short lifetime essentially spent in a coffin, all to turn them into cheap cuts of bacon. We would realize that it’s wholly unnecessary to subject intelligent cows to the trauma of repeated, human-induced pregnancies and separation from their newborns, just so we can drink the milk meant for their calves.

In reality, we aren’t cruel to animals because they’re stupid; we say they’re stupid because we’re cruel to them, inventing fact-free mythologies about their minds to justify our dominance, as political theorist Dinesh Wadiwel lays out in his brilliant 2015 book The War Against Animals. In a chapter called “The Violence of Stupidity,” Wadiwel contends that human power over animals enables us to be willfully and unaccountably stupid about what they are really like. “How else might we describe a claimed superiority by humans over animals (whether based on intelligence, reason, communication, vocalisation, or politics) that has no consistent or verifiable ‘scientific’ or ‘philosophical’ basis?” he writes. Humans, like animals, are vulnerable, breakable creatures who can only thrive within a specific set of physical and social constraints. We can only hope that future AI, however intelligent, doesn’t evince the same stupidity with respect to us.

While we can only guess whether some powerful future AI will categorize us as unintelligent, what’s clear is that there is an explicit and concerning contempt for the human animal among prominent AI boosters. AI research itself has strong ties to transhumanism, a movement that aims to radically alter and augment human bodies with technology. Its most extreme aspirants hope to merge humanity with computers, excising suffering from life like a tumor from a cancer patient and living in a state of everlasting bliss, as Bostrom, one of the main proponents of transhumanism, has suggested. Elon Musk, for instance, has said that he launched Neuralink, his brain-computer interface startup, in part so that humans can remain competitive in an intelligence arms race with AI. “Even under a benign AI, we will be left behind,” Musk said at a Neuralink event in 2019. “With a high bandwidth brain-machine interface, we will have the option to go along for the ride.”

This aspiration can be interpreted as an implicit loathing of our animality, or at least a desire to liberate ourselves from it. “We will be the first species ever to design our own descendants,” technologist Sam Altman, now the CEO of OpenAI, wrote in a 2017 blog post. “My guess is that we can either be the biological bootloader for digital intelligence” — meaning just a stepping stone for advanced AI — “and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.”

Computer scientist Danny Hillis, co-founder of the now-defunct AI company Thinking Machines, declared in the early ’90s that humans are composed of two fundamentally different things: “We’re the metabolic thing, which is the monkey that walks around, and we’re the intelligent thing, which is a set of ideas and culture,” a declaration historian David Noble quotes in his 1997 book The Religion of Technology. “What’s valuable about us,” Hillis continued, “what’s good about humans, is the idea thing. It’s not the animal thing.” Merging with computers, on this view, signifies our extrication from animal biology.

This human/animal dualism posits a clean cognitive break between us and the rest of the animal evolutionary tree, when in fact no such division exists. It relies on an implausible model of human intelligence as having nothing to do with our physical, animal selves: a notion that “the mind is computation, that it does not involve the affective dimensions of the human experience, and it doesn’t involve the body,” Michael Sacasas, a technology critic who writes The Convivial Society, a popular Substack, told me.

The societal reckoning taking place now over where humans fit in a world of AI might, as Sacasas hopes, prompt us to start to rethink this dualism: to recognize the body “not just as the firmware for the rational software, but actually an integral part of what we call ‘mind.’” Breaking down that dualism ought also to mean giving up the separate status we assign ourselves as human beings. It could help us broaden the definition of intelligence itself to encompass the animal qualities described by O’Gieblyn — “emotions, perception, the ability to experience and feel.” There is, after all, no single thing in our brains called “intelligence” or “thought”; it’s not a body part, but an emergent property continuous with our other mental processes. Animals share these capacities, and in some cases exceed them.

Migratory birds, for example, can famously navigate by perceiving the Earth’s magnetic field. Raccoons can “see” and learn about the world with their hyper-sensitive hands (this is why they can sometimes be seen enthusiastically patting objects and other animals). Pigs are undoubtedly smart, but the widely cited idea that they’re “as smart as” 3-year-old children reflects the depressing way that we’ve come to measure intelligence against a single-variable, anthropocentric yardstick, rather than recognizing different beings as having different minds. Yet that yardstick is dehumanizing to us, too, because it judges our cognition as though it were a computer’s CPU. If we can properly value animals’ capacities, then we might also see how claiming human exceptionalism through a disembodied view of our minds has done spiritual harm to ourselves.

AI criticism ought to include non-human animals

You don’t have to believe that AI could become autonomous and orchestrate our extinction to see how, for example, chatbots are already blurring the line between humans and machines, creating the illusion of sentience where it doesn’t exist, a critique made by linguist Emily Bender. Others, like Sacasas, point to how AI replacing humans represents the culmination of modernity’s drive to eliminate inefficiency from life. “By the logic of the market and of techno-capitalism, if you like, the inefficiencies of the human being were always ultimately meant to be disposed of,” he said. “AI, in a sense, just kind of furthers that logic … and brings it to its logical conclusion, which is, you’re just getting rid of people.”

These kinds of critiques ring true to me — yet they also have a way of fixating on the ethical and spiritual uniqueness of human beings, to the exclusion of the other sentient, intelligent creatures with whom we’ve always shared the planet. “One of the anxieties generated by AI is built upon how we have sought to distinguish the human, or to elevate the human, or to find the unique thing about the human,” Sacasas points out. Humans are, in important ways, obviously unique among animals. But the critical discourse about AI has shown little interest in thinking beyond ourselves, or reckoning with what implications this moment has for our undervaluing of animals.

One of the best-known critiques of large language models, or LLMs, for example, compares AI’s lack of language understanding to that of an animal: the concept of the “stochastic parrot,” which refers to how chatbots, not having minds, spit out language based on probabilistic models with no regard for meaning. “You are not a parrot,” proclaimed the headline of a widely read March profile of Bender in New York magazine.

I’m sure Bender has nothing against parrots — exceptionally smart animals that are thought to reproduce sounds with astonishing fidelity as part of their communication with one another and with humans. But parrots aren’t machines, and imagining them as such only reinforces the human/animal dualism that gave us the disembodied view of our own minds. It’s as if we have no language for affirming our worth as humans without repudiating animality.

The ascendance of AI should be a pivotal moment from which to start to come to grips with our relationship to other sentient, biological life. If AI were ever in a position to make judgments about us, we should hope that it’s far more charitable than we have been, that it doesn’t nitpick, mock, or nullify our capacities and needs as we’ve done to other animals. If we wouldn’t want to be tyrannized by a more powerful intelligence, we have no credible defense for continuing to tyrannize the animals in our power.

We don’t know if sentient AI is possible, but if it is, we shouldn’t build it

None of this necessarily tells us whether the machines themselves could ever become sentient, or how we should proceed if they can. I used to find the idea of sentient AI risible, but now I’m not so sure. The scientific method has not figured out how to explain consciousness, as O’Gieblyn points out. Modern science, she writes, “was predicated in the first place on the exclusion of the mind.”

If we don’t know where consciousness comes from, we may want to be careful about assuming it can only arise from biological life, especially given our poor track record of appreciating it in animals. “Evolution was just selecting repeatedly on ability to have babies, and here we are. We have goals,” as Vox’s Kelsey Piper said on The Ezra Klein Show in March. “Why does that process get you things that have goals? I don’t know.”

We have no reason to believe any current AIs are sentient, but we also have no way of knowing whether or how that could change. “We’re kind of at the point where we can make fire but do not even have the rudiments of what we’d need to understand it,” my friend Luke Gessler, a computational linguist, told me.

If sentience in AI could ever emerge (a big if), I’m doubtful we’d be willing to recognize it, for the same reason that we’ve denied its existence in animals. Humans are very good at dismissing or lying about the interests of beings that we want to exploit (including not just animals but also, of course, enslaved humans, women, and any other class of people who have been excluded from moral consideration). Creating sentient AI would be unethical because we’d be bringing it into the world as chattel. Consigning sentient beings to property status, as we know from the experience of non-human animals, is inherently unjust because their welfare will always be subordinated to economic efficiency and the desires of their owners. “We will inevitably inflict suffering on them,” science fiction author Ted Chiang said of building sentient AI in 2021. “That seems to me clearly a bad idea.”

In a May essay, Columbia philosopher Dhananjay Jagannathan offered a different perspective on the AI minds question. Drawing from Aristotle, he suggests that the nature of thought isn’t something that can be scientifically deduced or implanted into a computer, because it’s an irreducible part of our lives as biological animals. “Thinking is life,” as the Aristotelian idea has it. A raccoon who pats things to learn about her environment, a baby bird who pecks around at objects to do the same, and a human whose sense of smell vividly triggers a distant memory are all having experiences of thinking that are inextricable from the biological organs through which they engage with the world.

One upshot of this, Jagannathan writes, is that the transhumanist dream of digitally uploading our consciousness and splitting from our bodies, far from being any sort of liberation, amounts to “self-annihilation.” The idea of thinking as inseparable from animality can be hard for modern people to comprehend because, as O’Gieblyn writes, our concept of the mind pulls so heavily from computational metaphors. Because we imagine our cognition as a computer, we start to imagine, erroneously, that computers can think.

AI evokes our anxieties about the fragility and mistreatment of animality

Jagannathan’s view, that we can understand thought through our kinship with non-human animals, helps clarify what is disconcerting about the dualist, computational view of experience, taken to its logical endpoint by AI and transhumanist philosophy. The assumption that we can apprehend, measure, and perfect subjective experience, rendering life as though it were bits of information encoded on a computer, can lead to conclusions that are obviously repugnant. It has made the annihilation of biological life, both human and non-human, imaginable.

Prominent philosopher Will MacAskill, for example, proposed in his 2022 book What We Owe the Future that declining populations of wild animals (we are, if you haven’t heard, in the middle of a mass extinction) may actually be desirable. Their lives might be “worse than nothing on average, which I think is plausible (though uncertain),” he writes, because they may consist more of suffering, from things like predation and disease, than of pleasure. Perhaps, then, they’d be better off if they’d never been born — an argument that springs from the same well as the transhumanist impulse to remove suffering from life and colonize the universe with beings merged with machines.

The idea of wild animal eradication represents one of the more extreme manifestations of the drive to denude life of physical content. In a similar vein, transhumanist philosopher David Pearce, who sits on the board of the organization Herbivorize Predators (it aims to do what the name implies), hopes to technologically “eliminate all forms of unpleasant experience from human and non-human life, replacing suffering with ‘information-sensitive gradients of bliss.’”

In the actual world, where wild animals are often exterminated wholesale when their presence is inconvenient for us, the notion that it could actually be morally righteous to get rid of them might provide a justification for the ecocide that humans are engaged in anyway. Who’s to say that an AI won’t one day say the same thing about us, deciding that it’s best to put us out of our misery based on its cold calculation of our pains and pleasures? That would be consistent with the transhumanist ethos of transcending the hardship of physical existence.

Yet this dim estimation of our biological selves, as well as those of animals, forecloses the possibility of valuing or interpreting life in other ways. We can hardly access an animal’s interiority, much less say whether they think their lives are worth living. If a utilitarian bean counter told me that the rest of my life would be 70 percent suffering, I wouldn’t choose to die, even if I truly believed them; I would want to live out my life.

A very different, more integrated interpretation of animal life, one that I return to again and again, can be found in the work of the poet Alan Shapiro. His 2002 poem “Joy” gives expression to the strange entanglement of joy, fear, and tragedy that defines our lives and, he imagines, perhaps those of wild animals too. “Joy,” he writes, is the thing that is “Savagely beautiful,” likening it to antelope evading a lion.

This vision doesn’t, to me, suggest that the suffering of wild animals doesn’t matter, but rather that the vulnerable, mysterious fullness of their lives is worth living. AI evokes our anxieties about the fragility and mistreatment of animality — our own, as well as that of nonhuman animals. It reminds us of our own vulnerability, the parts of us that are unfathomable or expendable in mechanistic terms. In a world where the ability to manipulate language is no longer a uniquely human capacity, the rationalizing impulse might ask us to co-sign our own obsolescence. We might, instead, decide that our creaturely selves are worth holding on to, and, in doing so, invite our fellow animals into our moral circle.
