The End of Animal Advocacy

Originally posted on Substack

Readers who closely follow AI news and TAI<>animals discourse might skip to section 2. Thanks to Joseph Ancion and Itsi Weinstock for thorough feedback.

1. Standing at the Precipice

It is well-known among fans of true crime that the first 72 hours after someone goes missing are considered crucial for detectives; statistics suggest the chances of finding someone alive after that become vanishingly small.1 I remember watching a crime thriller about a kidnapping which leaned heavily on this fact: superimposed over the beginning of each scene, an ominous timer flashed the hours, minutes, and seconds since the victim had been reported missing. It was a great storytelling device, ratcheting up dramatic tension and creating a vivid feeling of a race against time.

Lately, I’ve been getting the same feeling each time I read an article about how quickly AI research is accelerating– which, at this point, happens several times a week.

Time is starting to feel acutely precious.

I used to be pretty good at clocking out of my animal advocacy work for a week’s vacation three or four times a year and not feeling guilty for it. But these days I feel like I’m falling behind even on the weekends.

This sharpening sense of desperation isn’t tied to any specific project or outcome. It comes from the fact that what may turn out to be the greatest window of opportunity for animal advocacy in human history is starting to close.

I don’t want to exaggerate: this isn’t happening in the next month, and almost certainly not in the next year. But I’d be surprised if it takes longer than ten years, and I expect more like five. That would be 260 weeks. And they are zipping by quickly. (A virus knocked me out and delayed publishing this post by a week; the timer clicked down to 259.)

A couple posts ago, I asked you to

Pause for a moment and consider this question:

What do you think is the earliest year we could reasonably expect to see the end of commercial animal farming in your country, if we play our cards right?

Write down the number of years between then and now, whether it is 40 years, 4 years, or 400 years.

Now, consider a second question:

What would you be doing differently if you thought your initial answer was wrong by an order of magnitude in either direction?

What if your initial answer was 40 years, and a magical genie appeared and told you with certainty that, if you took exactly the right actions, it would be possible to end animal farming in just 4 years? Or, on the other hand, what if they told you that even with the best possible course of action, it would be impossible to end the slaughter industry in less than 400 years? How would that affect your ideas about movement strategy?

I first started asking fellow activists this question to highlight the way different intuitions about how soon we could achieve animal liberation lead us to different conclusions about which strategies to focus on, with shorter timelines favoring more “abolitionist” approaches and longer timelines generally favoring incremental welfare reforms.

But lately, the question has taken on new significance. Transformative AI promises to remake the world, including the food system, in as little as five or ten years. That is a shorter timeline than animal advocates have ever had cause to hope for, and it forces us to re-examine our entire strategic calculus.

1.1 Crunch time

If you’re an animal activist who is not yet sold on the idea that AI will run all our strategic calculations through a high-powered blender, I’ve published a detailed introduction titled The Tsunami is Coming (henceforth TTIC).

The Tsunami is Coming AIDAN KANKYOKU · NOVEMBER 4, 2025

This post is meant to help animal advocates start thinking about how the artificial intelligence revolution will impact our work. If you’ve been feeling anxious about AI but weren’t sure where to begin, or if you’ve never considered that AI technology could be disruptive to animal advocacy, you’re in the right place.

Here’s a quick review of the key points:

There is no scientifically robust reason to believe that AI and robotics could not someday surpass humans in all economically relevant activities. This threshold is generally referred to as Artificial General Intelligence, or AGI. AGI will mark the end of economically relevant human labor.

AI capabilities are improving rapidly. Contrary to the perceptions of some users and journalists, the pace of improvement is continuing to accelerate.

AGI will cause a dramatic reordering of social and political power. The exact outcome depends on many unknowns, with possibilities ranging from utopian to dystopian to uncategorizably weird.

AI technology has many obvious direct implications for farmed animal advocacy. It could unlock breakthroughs in alternative protein or supercharge advocacy organizations. But the greatest effects on farmed and wild animals will come from general social and economic transformation.

The final section of TTIC addressed how animal advocates should change our approach in light of everything that is happening with AI. After all, it would be pretty surprising if I told you everything about the world was going to change in 5 years, yet that didn’t have any implications for how we should go about advocating for animals. But most of the feedback I got on the post was that the “what to do now” section left much to be desired. So in this post, we will dare to dig deeper.

Before asking what we should do, we have to ask what is going to happen, and when it is going to happen. I’ll spend most of my time today on what, but first, let’s touch quickly on when.

In TTIC, I wrote:

There are people who expect fast timelines or slow timelines. But the meaning of fast and slow is changing quickly. Back in 2015, expecting AGI by 2050 was considered a fast timeline. Now, that date falls on the opposite end of the spectrum. As of late 2025, most experts’ AGI timelines cluster between 5 years and 30 years to see transformational impacts of AI across society– whether that is the end of labor or the end of humanity.

If animal advocates expect AGI in 5 years, we should seriously consider throwing out everything we’re doing and making a few desperate bids to alter the technology itself. If we expect a 30-year timeline, that would be overly rash; we might even mostly stay the course while adding a few new strategies to our toolkit.

A frenzied race dynamic has set in among the frontier researchers working to build AGI: the norm is 80-hour workweeks fueled by enough Adderall and Vyvanse to cause nationwide shortages. Nobody could sustain this pace for more than a few years. They’re doing it because they don’t think they will have to.

AI researchers think the finish line is within sight. The finish line is recursive self-improvement, the point when AI models become better than humans at developing even smarter AI models, leading to a rapid intelligence explosion since AI can work much faster than the humans they replace. This is commonly referred to as the singularity, a name that implies, among other things, that humans may lose what limited ability we currently have to shape the direction of world events.

1.2 Reasons to ignore the rest of this post

Should animal advocates treat the AI singularity as a finish line for us, too?

I have good news, reader: there are plausible arguments that trying to redesign animal advocacy in light of short AI timelines is a fool’s errand.

The basic argument is that we are too clueless about what the world will look like after AGI to predict how our actions now would affect it. As JoA🔸 commented on TTIC:

After several hundreds of hours discussing this topic and writing about it, I still doubt that “anticipating” will be better for animals in expectation… our situation when figuring out how to defend animals in a post-AGI world is very similar to that of a caveman who’d wish to take action during his lifetime in 12,000 BC to effectively improve the treatment of animals in the 21st century.

I think we can all agree that a paleolithic man was not in a good position to deliberately shape the world 15,000 years into the future. And there are reasons to think that the next 15 years could involve as much change as the previous 15,000. Even setting AI aside, one view of long-term economic trends calls for something like a thousandfold expansion of gross world product and energy consumption in this century. AI might be the mechanism history uses to carry forward an exponential trend going back thousands of years.

Some very thoughtful animal advocates agree this uncertainty is just too much. Forethought’s Lizka Vaintrob and METR’s Ben West spoke for many when they concluded:

…radical changes to the animal welfare field are not yet warranted… We think the most common version of this argument underrates how radically weird post-“transformative”-AI worlds would be, and how much this harms our ability to predict the longer-run effects of interventions available to us today. Without specific reasons to believe that an intervention is especially robust, we think it’s best to discount its expected value to ~zero.

It’s truly incredibly easy to forget how weird post-paradigm shift worlds might be… Farming may stop existing or be transformed (or be driven mostly by nostalgia), so “wild animals” or bizarre things might account for most animals here. The lines between organic and not might also get blurry.

OK, admittedly, this isn’t exactly good news. But it at least means that if you find all of this AI stuff confusing or overwhelming, you could easily justify ignoring it all and continuing to work on what you’ve already been working on. With that, I offer you one final exit ramp from this post before it descends fully into High Weirdness.

2. Ten Futures

We all agree the future is going to be very weird. But just because nobody can confidently predict the future doesn’t mean we are totally clueless.

Our situation is different from that of our paleolithic friend in important ways. We have the situational awareness to know that change is coming. Historical study, economic forecasting, and even science fiction literature all provide us with historically unprecedented means to speculate about a historically unprecedented upheaval. The agricultural and industrial revolutions give us real examples of transformative economic growth, and together with millennia of data provide a basis for both qualitative and quantitative guesswork. Coefficient Giving founder Holden Karnofsky argues that futurists who have even casually used these tools for forecasting actually have a surprisingly good track record, and we should reasonably expect predictions to get better in the era of big data, machine learning, and systematic efforts to aggregate the opinions of experts.

This isn’t enough to make a unified prediction about what the future will look like. But I think it does enable us to outline futures that are more or less likely.

Simply drawing forward current economic and technological trends gives us a taste of how weird things will get. But it leaves open major variables that we can’t hope to resolve decisively, such as:

Slow vs. fast takeoff speed: early AI forecasters thought that the moment an AI became capable of independently improving AI research, it would trigger an intelligence explosion resulting in systems thousands of times smarter than humans in a matter of seconds or minutes, because of how fast computers can think and self-modify. Some analysts still expect a relatively fast takeoff (weeks or months), but others point to the fact that AI training has relied on access to vast computational resources to argue that takeoff will be slowed by physical constraints.

Alignment and agency: will digital minds remain an inert tool in the hands of humans, or will they start exercising their own autonomy towards their own objectives? Will researchers solve the “alignment problem” allowing them to specify the goals an AI should pursue, or will systems drift towards alien goals that clash with their creators’?

Concentration vs competition: will one company or country unlock recursive self-improvement, pull far ahead of all the competition, and gain an overwhelming strategic advantage? Or will fierce competition result in a perpetual race between leading firms, flattening profit margins and eventually giving every human free access to the intelligence frontier via open-weight models? The U.S. and Chinese governments might assert themselves directly as competitors, or leave it up to private companies– and in the latter case, companies may become more powerful than governments.

We can’t confidently paint one picture of the near-term future– say, the year 2035. But by shuffling the variables above, we could outline the ~10 scenarios that seem likeliest given our current knowledge. Is it more likely than not that one of them will be a close enough approximation to inform today’s animal advocacy strategy? I think the answer might be yes.

For that reason, I have this fantasy of dividing AI-pilled animal advocates into ten teams, sketching out the ten baskets of futurity that seem most likely to occur, then dividing these scenarios up across the teams. Each team would agree to set all their uncertainty aside and focus on what the movement should be doing given their assigned future. Nine teams would work on the wrong forecast to ensure that one team could be unreservedly working on the right one.

The rest of this post is a first draft of that fantasy: ten futures that Claude and I consider most likely, and reflections on what animal advocates in each future would wish we had done now.

These scenarios are not mutually exclusive; two or more of them could unfold sequentially. I’m not attempting to rank them in order of likelihood. Instead, they are ordered roughly by increasing weirdness. Some of them involve war and other forms of mass death; these futures trouble me deeply but I’m not going to dwell on those feelings in this post, so I’m sorry if it comes across as callous.

Futures:

Commoditized intelligence

Gradual handoff & moral value lock-in

Great power showdown

AI-enabled coup

Gradual disempowerment

Industrial collapse

Democratic AI

The digital rights movement

Fully automated luxury transhuman space communism

Extinction and succession

(On desktop, clicking the little lines on the left side of the screen will pop out a navigable outline.)

#1 Commoditized intelligence

Slow takeoff; high competition.

SUMMARY: Fierce competition drives down the cost and profitability of intelligence. Both labor and tech firms see their power diminish. Agency replaces intelligence as the key bottleneck for productivity; China’s high-tech manufacturing ecosystem becomes crucial for scaling alternative protein.

Scenario

A sliver of light peeking through the room’s one small window told Marvin that, despite his plans to sleep, he had once again worked through the night. He knew that if he didn’t hurry, the coffee line would quickly grow to last an hour. So he pulled his eyes away from the assembly of a dozen monitors, each one tracking a different agent’s chain of thought, and set out into the crisp morning air. The tents of the protesters packed thickly in the street had an oddly beautiful quality at this hour, and for a moment Marvin forgot his irritation. Then, tripping over a pile of picket signs demanding bans on AI workers in various industries, the irritation came flooding back. “Idiots,” he thought to himself. “They could still teach themselves. Then they could occupy something that actually mattered. If the future leaves them behind, it’s their own damn fault.”

Analysis

Note: this scenario is not a stable long-term equilibrium. It describes a phase we might pass through if AI takeoff is slow.

You’ve probably heard about the enormous investments that financiers and tech giants are making into AI companies like OpenAI. These companies are only investing because they expect to get their money back, and sure enough, even relatively conservative estimates project that the AI industry could generate more than $1 trillion in profits by the early 2030s. However, there’s one scenario where AI becomes as capable as its most bullish proponents expect and still fails to generate any returns for its investors. That’s the scenario where superhuman intelligence becomes commoditized.

A commodity is the name economists use for a product that is economically important but has a profit margin near zero. Products are commoditized when two conditions are met: fungibility and fierce competition. Take corn as an example. The global market for standard yellow field corn exceeds $300 billion annually. Yet a vanishingly small portion of that money makes its way into the pockets of corn farmers as profit, because every bushel of shelled corn kernels is effectively identical, and there are millions of individual producers all trying to sell the same product. No producer can raise their prices as long as buyers can easily bring their business elsewhere. There will always be some desperate farmer willing to sell their corn for pennies above the cost it took them to produce, so corn farmers are under constant, fierce pressure to increase their efficiency.

Could AI become a commodity like corn? There are reasons to think it could. This past January, the Chinese AI company DeepSeek shocked the world by releasing an AI model that matched many of the capabilities of a frontier model OpenAI had released just ~6 months earlier, but which cost DeepSeek a small fraction of the price to train and an even smaller fraction to deploy. More shocking still, DeepSeek released their model weights publicly, meaning anyone with the right hardware could download and run the model themselves without paying DeepSeek a penny. Throughout 2025, DeepSeek and other open-weight models have continued to demonstrate the staying power of a fast follower strategy: while low-cost open models can’t push the frontier of AI capabilities alongside well-funded labs like OpenAI and Google, they are able to imitate the success of those models, staying a few months behind for a small fraction of the cost.

If this dynamic holds, it could pose a grave problem for OpenAI. It could mean, for instance, that OpenAI’s only customers would be people willing to pay 10 or 100 times as much for a model that is only 10% smarter. There will always be some applications where the top 10% matters, but DeepSeek is already smart enough for many consumer and business applications. And the more users migrate to affordable open models, the greater the fraction of OpenAI’s frontier training costs each remaining customer will have to shoulder– a vicious cycle in which rising prices for frontier intelligence drive ever more customers to settle for second-rate fast followers.

This is the world we are headed for if frontier companies don’t pull abruptly away from the pack thanks to recursive self-improvement. In this world, superhuman intelligence is as cheap and widely available as corn is today. Everyone has easy access to it– but so does everyone else.

There’s a scene in Bruce Almighty (2003) where an overwhelmed Jim Carrey, temporarily substituting for the all-powerful deity known as Morgan Freeman, decides to bulk-approve every prayer he receives. That night, nearly every person in America wins the lottery– but since the spoils are evenly divided across every player, they get back less than the price of their ticket, sparking violent riots nationwide. This could be a preview of commoditized intelligence: everyone gains literal superpowers, but because everyone else has the same superpowers, we’re all actually less special than we were before. You have the power to conceive of brilliant business strategies and control arbitrarily capable robot workers– but so does everyone else. What now?

In centuries past, food was scarce. Access to food meant access to power. With a large enough surplus of food, you could raise up an army.

In the era of commodity corn that prevails in many parts of the world today, food no longer confers power. Everyone has enough food; power comes from what you do with it, i.e. by metabolizing it and turning it into manual or cognitive labor. People who can turn their corn into a great idea for a business, then execute on it skillfully, gain power. Today, intelligence confers a great deal of power.

So what happens in the era of commodity intelligence? If everyone has superhuman intelligence at their fingertips, but that superintelligence is still just an inert tool, not everyone will use it equally. The world will belong to people who strive to maintain their understanding of the cutting edge, who know what the tools are capable of and develop their sense of how to put them into practice. This should be mediated less by different humans’ raw intelligence than by their determination and grit– or simply by the willingness to act and the belief that you can still shape your circumstances, what pop psychologists increasingly call agency.

That could be good news for animal advocacy, which is already full of people driven to sacrifice their free time to help animals. In that world, our goal is for the animal movement to have more AI superusers than the industries we’re fighting.

On the other hand, all of this might fall apart. The best way to use a truly general superintelligence would simply be to ask, “What’s the smartest thing you could do to achieve [goal]? OK, do that,” then sit there and every few minutes tell it “OK, do the next thing. Now the next thing.” This whole scenario describes what might happen if it turns out to be very difficult to cross that final chasm to fully-general agentic AI, perhaps due to as-yet-unforeseen limits to the kind of superintelligence we can create on a silicon substrate.

Meaning for animal advocacy

Animal farming would certainly be affected by the economic chaos this will unleash. Economists tend to expect much slower takeoffs than many AI researchers, because while the latter are focused on how quickly the technology is improving, the former are all too familiar with the real-world obstacles that slow the diffusion of new technologies. So the economic question is: in a world where intelligence is abundant, what becomes scarce? The clearest answer for now is: energy, followed by robotics and manufacturing capacity.

Energy production in the U.S. has been flat for almost three decades, and has actually been declining in most of Europe. The AI data center buildout promises a huge spike in demand, with few signs production will be able to keep up. Regulatory hurdles have made it excruciatingly difficult to build new energy infrastructure, especially nuclear and renewables. Fixed supply and increased demand will result in higher energy prices, especially in the United States. Combined with the Trump administration’s inexplicably self-defeating trade policy, this creates a perfect storm for domestic manufacturing, which will in turn set the U.S. further behind in robotics than we already are.

All this will strengthen China’s position as the world’s high-tech manufacturing hub. Unlike the U.S., China has been massively ramping up its energy capacity, producing more in 2024 than the U.S., EU, and India combined. (China is especially leading in renewable energy.) Manufacturing capacity is even more lopsided, and China’s lead in robotics technology (not to mention diffusion in manufacturing) is roughly symmetrical to America’s lead in AI. Barring dramatic changes, we are headed towards a bipolar world economy where the U.S. produces leading information technology and “manufactures” the digital world, but China makes everything of consequence in the physical world, including the computer chips U.S. AI models run on.2

This has major implications for replacing animals in the food system. Besides R&D, energy is the determining factor in the cost of cultivated meat production. This contrasts sharply with factory farming, which is mostly insulated from swings in energy prices. Studies estimate that a mature cultivated meat industry would use anywhere from 4 to 25 times as much energy as animal farming.3,4 Precision fermentation is similarly energy-intensive. Processing plants into meat alternatives is less bottlenecked on energy, maybe even less so than animal meat when accounting for refrigeration. But the plant-based industry will benefit from China’s high-tech manufacturing ecosystem in numerous other ways.5
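To see why the energy multiple matters strategically, here is a hypothetical sensitivity check. The 4–25x energy-use range comes from the studies cited above, but the cost shares and the doubling of electricity prices below are assumptions invented purely for illustration:

```python
# Hypothetical sensitivity check: how an energy price shock propagates into
# the relative cost of cultivated vs. conventional meat. The cost shares
# below are assumed for illustration, not taken from any real study.

def cost_after_energy_shock(base_cost: float, energy_share: float,
                            price_multiplier: float) -> float:
    """Scale only the energy component of a product's unit cost."""
    energy = base_cost * energy_share
    other = base_cost - energy
    return other + energy * price_multiplier

# Assume energy is 30% of cultivated meat's unit cost but only 5% of
# conventional meat's, and suppose electricity prices double.
cultivated = cost_after_energy_shock(1.00, 0.30, 2.0)    # unit-cost index
conventional = cost_after_energy_shock(1.00, 0.05, 2.0)
print(f"cultivated:   {cultivated:.2f}x baseline cost")
print(f"conventional: {conventional:.2f}x baseline cost")
```

Under these made-up shares, the same price shock raises cultivated meat’s unit cost by 30% but conventional meat’s by only 5%– the sense in which cultivated meat is the more price-sensitive technology.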

Increased energy costs pose a serious liability for price-sensitive meat replacements, which is why serious efforts to develop industrial-scale alt protein supply chains will have to be based in China. Fortunately, the Chinese government is already bullish on cultivated meat and other alternative proteins, which offer an easy way to reduce reliance on food imports in a country that still remembers the bite of famine.

Government planners in the U.S. are talking a big game about cutting through red tape to ramp up domestic energy production. But the Trump administration’s affinity for fossil fuels has already led them to spitefully kill off major renewable energy projects, casting doubt on their sincerity.6 Until we see serious new capacity coming online, alt protein companies should not be investing in U.S. manufacturing. China is rolling out the red carpet for you.

#2 Gradual handoff / moral value lock-in

Medium takeoff; low competition; alignment to prosocial values.

SUMMARY: An AI steward takes over the management of society, leaving humans materially wealthy but politically impotent. Risk of moral values being locked in; animal advocates should push hard now on social change to influence future values.

Scenario

Silas set down his spoon and pushed the delivery box of chana masala away. The smell of his friend’s steak box was ruining his appetite. Someone online had warned him he’d reach that stage in his vegan journey. “Look,” Jonah said through a mouthful of cow flesh, “don’t waste your time trying to convince me. I’m just one person. It doesn’t make a difference what I do. If you want people to change, go do a protest or something.” Silas looked out the window to the street below, where delivery drones zipped to and fro. Sure, there was a sidewalk for human pedestrians– but nobody used it. There wasn’t a human in sight. Everyone stayed locked up in their apartments playing video games all day, telling themselves the same story as Jonah. And in a way, they were right: Silas had no reason to think the Management would care what humans thought about how their food was produced.

Analysis

Many AI watchers consider this the best case scenario. If digital intelligence outstrips human intelligence in every way, AI systems will eventually be better than humans at governing human society according to the interests and values of its human creators. The risk is that those values are “locked in” for what might be an eternity– and if they were locked in based on humanity’s current values, that might be a disaster for animals.

Some researchers, however, hope that a sufficiently intelligent AI would be even better than we are at figuring out what our values should be– or rather, what they would be if we had greater capacity to reflect on the question. The fancy term for this is extrapolated volition: an AI that acts on behalf of a person or group according to what they would want if they could process as much information as it can.

If enough humans trusted that an AI could act out our values better than we ourselves can, then eventually, more and more of us would hand off power to that AI. A successful version of this would probably happen gradually, with individuals and groups entrusting more of their decisions to benevolent systems. Once some critical mass of humanity has done this, a minority who refuse to cooperate with the AI might come to be seen as unacceptably selfish, at least insofar as they try to retain control of decisions that can negatively affect others, such as driving your own car7 or, hopefully, killing animals to eat them.

The polymath researcher Carl Shulman argues that under an AI singularity, every individual human would become unimaginably wealthy even if governments continued redistributing wealth only at current levels. A robot economy would be so large that even if a tiny elite controlled 99% of wealth, the remaining 1% would be enough for each person to have dozens of human-sized robots attending to their every whim, with miracle medicines keeping them alive and healthy for centuries. Check out Carl’s marathon 6-hour interview on 80,000 Hours to hear why this at least deserves consideration as one of our ten likeliest futures.
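Shulman’s claim sounds wild, but the arithmetic behind it is simple. The inputs below are rough assumptions in the spirit of the scenario, not forecasts: current gross world product of roughly $100 trillion, a 1,000x expansion, and 8 billion people.

```python
# Back-of-envelope version of Shulman's argument. All inputs are rough
# assumptions for illustration, not forecasts.

gwp_today = 100e12       # ~$100 trillion, rough current gross world product
growth_factor = 1_000    # assume a robot economy expands output 1,000x
elite_share = 0.99       # suppose a tiny elite captures 99% of it
population = 8e9         # ~8 billion people

robot_gwp = gwp_today * growth_factor
per_person = robot_gwp * (1 - elite_share) / population

print(f"Leftover 1%, per person per year: ${per_person:,.0f}")
```

Even the crumbs of such an economy come out to roughly $125,000 per person per year under these assumptions– the intuition behind the claim that everyone becomes materially wealthy while remaining politically impotent.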

The lives of humans in this world would more and more resemble those of domestic cats: food, housing, and healthcare are provided for free, and all you have to do is enjoy yourself. The political and economic forces shaping the world beyond your home are not only completely beyond your control, they are beyond your understanding and perhaps even your awareness. If the AI overlords are instructed to look after only human wellbeing, and conclude that factory farming serves that end, we may have no ability to stop them perpetuating it indefinitely.

Meaning for animal advocacy

It’s reasonable to hope that a scenario like this leads to a pro-animal world by default. Current LLMs clearly understand the moral and logical arguments for protecting animals, and their successors may well conclude that factory farming is clearly a failure of our ability to live up to our values rather than an affirmative demonstration of our values.

But hope doesn’t save animals from a future of permanent factory farming, and we shouldn’t take anything for granted.

If this turns out to be our timeline, we’d want to put extra energy into symbolic, cultural strategies that have the best chance of shifting cultural norms. This isn’t exactly a new idea; many animal advocacy thought leaders have focused on the normative impacts of a strategy rather than its immediate material impacts for animals, from the abolitionist vegan advocacy of Gary Francione to the disruptive social movement approach favored by Wayne Hsiung. Neither do skeptics of these approaches dismiss the importance of social and cultural change; rather, they usually argue that focusing directly on social change is misguided, and that previous changes in social norms have been driven more by incremental material victories than by direct social confrontation. In this view, for instance, the important wins of the U.S. civil rights movement were forcing the federal government to ban racial discrimination in voting and employment, which in turn reduced racism by increasing racial integration, rather than confrontational civil rights protests reducing racism directly.

The question of whether protests designed to polarize the public reliably result in increased support for a cause remains hotly debated, with compelling evidence on both sides. A third view posits that protests can achieve positive polarization if certain identifiable conditions are met. In a lengthy report published by Social Change Lab, my good friend James Ozden concluded that the most important factors are strict adherence to nonviolence, the number and diversity of participants, and support for your message from political and media elites.

We may have only 5 years to cause as much visible progress on social norms as possible, but if we succeed, it could result in a permanent total victory for animal rights enforced by an AI stewarding society according to extrapolated volition. I expect both proponents and skeptics of direct social change efforts to agree that we should reconsider our approach in case of this scenario. Taking James’ research as a starting point, the movement could unite behind a social movement demand chosen to yield maximum juice (numbers, nonviolence, and support from elites) on a finite timeline.

Trying to spark a mass protest movement for total animal liberation will not deliver the goods before AGI arrives. We’ve already tried it; this is what I spent my first five years as an activist doing with DxE, and our growth plateaued quickly. On the other hand, more incremental demands have managed to trigger enormous mobilization and press coverage that, altogether, almost certainly did more to warm public attitudes towards farmed animals than DxE’s radical approach.

The biggest success stories are the protest movements against live export in Britain in the 1990s and in Australia in the 2010s. In both cases, mobilization peaked in the thousands during sustained periods of civil disobedience, up to and including lying down in the road en masse to prevent truckloads of sheep from being loaded onto boats. Live export seems to have struck the balance perfectly: a reform meaningful enough to excite a large base, yet moderate enough to attract nonvegan activists, journalists, and politicians.

More recently, the worldwide campaign against battery cages has failed to ignite similar passions. But why? Any of these explanations seems plausible to me, but they’d have different implications for future organizing:

The demand: Maybe cage-free is too minor a reform to energize a protest movement– or maybe the public aren’t ready to get excited about chickens (live export affected charismatic mammals).

The strategy: Maybe cage-free campaign organizations decided that they could win more easily without organizing a mass-base movement, and simply haven’t tried.

The infighting: Maybe the aggressive anti-incremental philosophy of Francione, Hsiung, and others unjustly turned the grassroots movement against all such incremental reforms.

If the latter two explanations prevail, then a mass protest movement against intensive confinement could be just what we need for our last hurrah. If there’s something wrong with the demand itself, we’d need to find another one. Grassroots momentum is currently organized around banning fur, banning foie gras, plant-based universities, and blocking new factory farm construction, each of which has its advantages. The floor is open for other serious suggestions– but remember that the demand will need to focus on farmed animals, not pets or wildlife, for reasons outlined in this post:

The Radical Welfarists vs. the Moderate Abolitionists AIDAN KANKYOKU · NOVEMBER 28, 2025


Sorry, I shouldn’t exclude wildlife. In fact, AI could both end animal farming and give humans an unprecedented ability to intervene in nature to reduce wild animal suffering, such as by making pesticides less cruel or disseminating vaccines for the most devastating wild animal diseases. In this scenario and some others, wild animal advocates are in much the same position as farmed animal advocates: now is the time to come out from stealth mode and make our case publicly, or at least to elites.

#3 Great power showdown

Fast takeoff; international competition; alignment to military objectives.

SUMMARY: All aspects of American and Chinese society are subsumed into great power competition and possible war. Political repression smothers values-based advocacy, but national security presents opportunities to fund and normalize clean meat.

Scenario

General Zhang had not slept in thirty-one hours. He was well past the point that coffee could make a difference. The American labs had announced another breakthrough– or rather, they’d announced nothing, which meant the breakthrough was real. His analysts estimated six months until the U.S. achieved recursive self-improvement. Maybe eight if Anthropic’s safety team kept slowing things down. (He’d quietly sent them flowers once, anonymously.) But Beijing wanted real options, not more diplomatic theater. “What about the food supply?” someone asked. Zhang looked up. It was a young colonel, relatively new to his unit. “Bioweapons targeting their cattle. Pigs. Disrupt the protein supply chain.” Zhang almost laughed. “Their protein supply chain is already disrupted. Have you seen the price of chicken?” He pulled up a chart: American meat prices, dollar-denominated. The line looked like a heart attack, flatlining at the point that large-scale bioreactors finally came online. “No,” he said. “If we’re going to hit them, we need to hit something they can’t replace.” He stared at the map on the wall. Six months. Maybe eight. He really should have slept.8

Analysis

I recently participated in a strategy exercise run by Sentient Futures meant to help animal advocates prepare for transformative AI. The exercise was focused on international relations and followed a method known as a “war game.” I’m squeamish about the name, which comes from the fact that this particular method was developed by military planners, and I want to insist that the method can be used to play out any kind of multi-agent scenario, not just military confrontations. Indeed, that was the intention with our exercise. But that argument is undercut by the fact that our simulation ended moments before total nuclear war.

We weren’t the only ones to reach the same conclusion; another set of animal activists playing on a different day launched nuclear war even earlier than we did, and simulations run by other communities have ended similarly.

I’d previously been aware that the AI race contained an inherent element of great power conflict. But this exercise made it feel much more real to me. The Sentient Futures team wrote up a longer report but here’s my shorter perspective.

A society’s ability to focus on social, political, and economic issues that are not acutely relevant to its short-term survival is a historically rare luxury. The comfortable hegemonic status enjoyed by the British Empire in the 18th and 19th centuries and the military isolation of the U.S. in the 19th and 20th was probably an important condition for the emergence of slavery abolition and civil rights movements. Once that safety can no longer be taken for granted, animal advocates and other social causes will find it much more difficult to get the attention of civic planners and the general public.

In the war game, I played the role of the Chinese government along with one teammate. Other roles included the U.S. government, the E.U., AI labs, the animal farming industry, and various animal advocacy and alternative protein industry factions. Teams could act independently or negotiate coordinated actions, such as R&D partnerships between alt protein and AI labs. At the end of each round, representing six months of progress in-game, the teams would announce their actions, then the game organizers would announce what progress AI technology had made in that time.

Early in the game, the government teams were open to considering food system reforms that reduced carbon emissions or alt protein investments that strengthened domestic food security– and were even receptive to public pressure over animal welfare. But as the AI race heated up, national governments found themselves on existentially shaky ground. Without knowing how the technology would turn out, state planners had to take very seriously the possibility that a country that pulled far ahead in AI could establish a permanently compounding advantage in economic and military competition. For U.S. planners, the knowledge that China might feel threatened by the hegemony of American AI labs meant they themselves had to take seriously the threat of a preemptive Chinese countermeasure, whether cyber or kinetic. The more AI progressed, the faster the two countries raced towards military confrontation. Political freedoms were curtailed, and speculative alt protein investments canceled in favor of tried-and-true factory farming supply chains.

After a brief period of unity, the animal movement fractured as radical tactics provoked fierce repression. That turned out not to matter much in the end: the U.S. and China teams both agreed that if the exercise had continued for one more round, it would have opened with a full nuclear exchange.

Meaning for animal advocacy

Cheer up, reader: whether or not the U.S. and China will destroy the world in a nuclear inferno to settle the question of who controls advanced AI is completely out of your hands. (Unless you are already a member of the national security establishment of either country, known disparagingly in the U.S. as “the blob,” which, if you are, please DM me?)

For the rest of us, there is no point worrying about this. Instead, if we think this scenario is one of the ones to hedge on, animal advocates should ask what we can do before war breaks out to consolidate our current progress and prepare to get back on track afterwards. There are good reasons to think there might be an afterwards: the world has survived close calls with nuclear weapons before, and it would obviously be in the interests of both countries to avoid nuclear escalation even in the event of a full-blown hot war. And if that fails, well… we’ll get to that in scenario #6.

In scenario #2, the gradual handoff to an AI steward, we had a narrow window to lock in as much cultural progress as possible. In this scenario, the focus should be on rapid industrial and commercial progress for alternative protein. While social and political activism are sidelined in wartime, technological and industrial progress is often accelerated, especially when it might benefit national security priorities.

What can we do to better position alt protein as a critical industry during wartime? You won’t be surprised to learn that the Good Food Institute is already on the case. GFI have identified two reasons well-informed government planners in both countries would prioritize alternative protein:

Supply chain resilience: Total war puts an enormous strain on supply chains, and food supplies are deliberately targeted. China in particular relies heavily on food imports today, the same vulnerability that left Germany exposed to debilitating food shortages in both world wars. Plant proteins require fewer inputs and simpler, more localized supply chains, earning them endorsements from NatSec think tanks like the Center for Strategic and International Studies.

Biosecurity: Even in peacetime, the factory farming system presents a constant threat of novel pathogens. This threat sharpens during wartime, both because society lacks the capacity to respond to an outbreak and because adversaries might target each other’s food production with biological weapons. The Spanish Flu outbreak that struck exhausted Western countries near the end of World War I wound up killing three times as many people as the war itself, while even a nonhuman pathogen like bird flu could collapse a nation’s food supply during wartime.

If alt protein producers can just get a foot in the door as defense vendors before war breaks out, mobilization could propel them into the mainstream as never before. Defense spending could solve the financial “valley of death” between small-scale venture capital and low-risk institutional investors that has starved so many companies before they could fully achieve economies of scale in their production processes. All this could give farmed animal advocates a chance to position ourselves as patriots – Chinese, American, or otherwise – and offer those of us outside government affairs or the alt protein industry a lifeline to maintain movement institutions during wartime political repression. But the advocates who take on the unglamorous work inside the establishment will deserve the credit for our movement bouncing back in peacetime, if peace ever comes.

#4 AI-enabled coup

Fast takeoff; extreme concentration; alignment to loyalty.

SUMMARY: A small group of humans use advanced AI to seize power. The values of coup plotters determine the nature of the world for the foreseeable future, whether animal-friendly AI leaders or fascistic politicians.

Scenario

Garrus pounded his fist on the table: “You’re listening to that bitchy little worm again! She wants to feminize all of us! If you start here, you know where this is going!” The rest of the room grew silent, as the assembled courtiers collectively held their breath. Garrus had gotten away with his temper in the past, but this time, everyone could tell he had crossed a line. Most of all, none of them dared look in Miria’s direction. She had won. The Sun King was far too stubborn to ever reverse course after a tantrum like this. The animal protein facilities would be shut down and replaced, likely in a matter of weeks. The Sovereign flicked his wrist, and Garrus was escorted out of the room, face white as he realized he had just thrown away his position in the court. Miria allowed herself a flicker of a smile.

Analysis

While we often think of dictators as singular figures, an image reified by the way we heap blame on them for the actions of their followers, the reality is that dictators are wholly dependent on the people around them– the active support of a large segment of their population and the acquiescence of the rest, who must keep working to sustain the dictator’s economic base. The job is about commanding the love and fear of your subordinates, and charisma is the most important qualification.

AI could change all of this. In this scenario, an AI lab achieves recursive self-improvement and leaves their competitors in the dust, while at the same time solving the alignment problem, creating a superintelligent system loyal to its prompter.

To a sufficiently powerful AI, the world is its oyster. It could hack every data center and start running copies of itself without the centers’ human administrators even noticing. From there, seizing all critical infrastructure and taking direct control of every internet-connected drone would be quick work. The AI think tank Forethought estimates that a swarm of just 10,000 drones would be enough to carry out a coup against the U.S. government. Even if they are wrong by two orders of magnitude, military contractors are well on track to deliver drone swarms numbering in the millions within the next couple of years. Unlike all previous dictatorships, this could leave a small group, or even a single individual, able to dominate the world despite unanimous opposition, with conscience alone stopping them from slaughtering or imprisoning dissenters.

The main safeguard against this kind of coup looks like superintelligent systems guarding against other superintelligent systems. But if one system pulls far ahead of the others, this equilibrium breaks apart. And the defender (e.g. the U.S. government) controlling the most capable system hardly offers much comfort during the current era, at least insofar as that system would be loyal to the commander in chief.

Superhuman AI makes coups easier to carry out and much harder to reverse. In a future where alignment means loyalty and a fast takeoff leads to a single dominant AI system, a worldwide coup almost starts to look inevitable– controlling such a system would offer nearly unlimited power, and if that power didn’t corrupt the first people to wield it, there would be generations of power-seeking authoritarians waiting in the wings.

Meaning for animal advocacy

There is, admittedly, not a lot animal advocates can do to harden our movement against a history-ending AI coup. Preventing this kind of concentration of power is a focus of some technical AI alignment and governance researchers. If you’re an impact-oriented activist who might have the skills to advance this research, you should seriously consider pivoting your attention to it for the next few years, even if all you care about is animals.

There is, however, some hope for the rest of us. One way history has been unexpectedly kind to us is that effective altruist thinking has been deeply influential among frontier AI researchers and their social networks. OpenAI CEO Sam Altman is a lifelong vegetarian, while the entire leadership of Anthropic are veteran EAs. Dwarkesh Patel, the most popular podcaster among AI researchers, recently raised more than $2 million for FarmKind off of a single episode featuring Lewis Bollard. If faced with a choice between handing off the absolute loyalty of their models to Donald Trump or putting in a backdoor for themselves, we should be so lucky as for them to choose the latter. Indeed, a dictatorship by any of these people could well be more humane than one democratically reflecting the attitudes of humans today.

This fragile concern for animals among people who may gain unprecedented power is a gift from history; we must protect it and foster it. This is a resilient strategy; the political and technological leaders involved in AI are likely to be influential across a range of scenarios– and in a post-coup scenario, the attitudes of a single well-placed person could be the difference between ending factory farming or locking it in. After all, there’s no particular reason that an AI dictator should be pro-factory farming, and opting to replace animal farming may not cost them anything.

Cultivating opposition to factory farming in these essential social circles should be a top priority for our movement. If you know anybody who knows anybody, now’s a good time to start schmoozing.

#5 Gradual disempowerment

Slow takeoff; humans removed from competition; alignment mixed.

SUMMARY: Without a single dramatic takeover by humans or rogue AI, humanity gradually loses any leverage over the world by becoming economically and politically irrelevant. An AI-centered economy has little use for animals, and humans are forced to adopt the most efficient means of survival.

Scenario

The notification said they had fourteen days to vacate. Maria read it twice, then showed it to her husband. “They’re expanding the cooling infrastructure,” she said. He nodded. They’d known it was coming eventually– the kids had gotten so used to moving that they now regularly asked when the transport shuttle would return next. The latest plot of land, which they shared with a dozen other families, was surrounded on three sides by humming concrete monoliths. Since they’d moved in, these data centers had grown so tall that their shadow was starting to shorten the already narrow growing season. The notification assured them that, while the land allotments in Zone #741h were smaller, each one possessed a greenhouse large enough to feed a couple and one child. They could trade work on a childless plot for extra food. But how long before they’d be forced to move again?

Analysis

In the gradual disempowerment scenario, covered at length in a paper by the same name, things would seem at first to have gone very well. AI technology might progress slowly, and we would transition smoothly into a world of superintelligence without a misaligned AI trying to kill us all or an authoritarian dictator seizing power.

We would declare the AI alignment mission accomplished before things start to unravel.

The problem is that, since the dawn of agriculture, humans have secured our place in the economic and political order through our labor– as Paul put it in the New Testament, “The one who is unwilling to work shall not eat.” In a world where AIs best humans at every task, from all forms of labor to political decision-making, the gradual loss of that place may become inevitable.

Democracies care about citizens because they need soldiers and taxpayers. Corporations care about workers because they need labor. Markets respond to consumer preferences because consumers have purchasing power. But once AI can substitute for human participation across all these functions, the feedback loops that kept institutions tethered to human welfare start to break down. A state funded primarily by taxing AI-generated wealth has little structural incentive to incorporate democratic input. An economy with no use for human workers or human customers has less and less reason to devote precious resources to their survival. States and economies that sacrifice faster growth to keep humans relevant would eventually be swallowed up by the ones that don’t.

The authors conclude that:

“No one has a concrete plausible plan for stopping gradual human disempowerment, and methods of aligning individual AI systems with their designers’ intentions are not sufficient… Because this disempowerment would be global and permanent, and because human flourishing requires substantial resources in global terms, it could plausibly lead to human extinction or similar outcomes.”

Meaning for animal advocacy

One of the most resource-intensive things about keeping humans alive is feeding them– it takes so much land and water that could otherwise be used for data centers. Competition over land could be one of the main conflicts hastening gradual disempowerment. That’s cold comfort for humans, but it could be great news for farmed animals.

Even before human disempowerment is complete, a more technocratic world governed by AI could play to our advantage. We know that the movement against factory farming has facts on our side– not only moral truth, but also resource economics favors plant-based food systems. It is only humans’ parochial cultural attachments to animal meat that keep the industry alive. As humans lose political and economic leverage, they may have no choice but to accept the more efficient system preferred by AI technocrats. And, tentatively, it’s hard to imagine AI having its own interest in farming animals (e.g. for research) at anything like the scale and brutality of factory farming.9

Otherwise, animal advocates would just be one more human faction that entirely loses its political voice. Ironically, this would place us in much the same position animals are in now, with no capacity to organize or resist. As with some other scenarios, the game will be decided before this end-state is reached.

#6 Industrial collapse

Fast takeoff; violent competition; alignment failure.

SUMMARY: Conflict triggered by advanced AI leads to a global catastrophe killing >90% of humans. The survivors rebuild industrial society over decades or centuries. With durable access to the right knowledge and techniques, they could skip over factory farming and move directly to advanced meat alternatives.

Scenario

“It’s pretty good,” Mar said with a look of approval, lowering the fork from her face. Po beamed with pride. Of course, taste was not exactly the top priority, but he’d have a much easier time getting more resources for the project if other villagers were genuinely excited about eating its results. “The archive says we could get to 500,000 grams of protein per acre within 3 years. That’s supposedly six times better than corn-fed pigs at scale, dense enough we could do the entire thing behind the fortifications.” Mar nodded as she chewed. Sending out grazing parties had become more and more dangerous lately– nobody knew why, but there had been an uptick in nomads passing through the region. They were hungry, and most of them would attack on sight. Three years sounded like a long time, but she’d spent 60 years in this settlement, long enough that she could remember stories from the elders about before the fall. “OK,” she said. “What do you need?”

Analysis

Whether in the case of the international AI race spilling over into nuclear war, or the case of a power struggle between humans and AI, extreme conflict might not spell the end of humanity. This section considers scenarios where the remnants of humanity outlast AI. A total nuclear exchange could disable all advanced electrical equipment while leaving a small fraction of humans alive– indeed, desperate humans might trigger this deliberately to stop an AI takeover from killing us to the last. Otherwise, a rogue AI, warring nations, or an AI-assisted radical extremist group might deploy engineered viruses that take out a similar portion of the population.

A wide range of these scenarios would leave a human population too small to sustain industrial civilization in the short term, but large enough to rebuild it in decades or centuries. 80,000 Hours researcher Luisa Rodriguez has shown that even the most dramatic and intentional disasters could hardly be more than 99.999% lethal– leaving, in the worst case, around 100,000 humans alive scattered over the earth. In the absence of a surviving agent determined to finish the job, that would be enough people to reindustrialize eventually, as even nuclear winter would make some corners of the earth more arable rather than less.

Meaning for animal advocacy

After consulting extensive sources, I have determined nuclear winter would be bad for animals. The vast majority of larger animals who did not get incinerated or die of radiation poisoning would starve. This goes for land and sea alike. The next centuries would see relatively more animals like insects and amphibians who reproduce by spawning hundreds of thousands of offspring, most of whom die painfully before adulthood, and fewer like mammals and birds who care for their offspring and have a shot at the kinds of happy lives we like to watch in nature documentaries.

In addition to targeting humans, bioweapons might target pigs, chickens, and commonly farmed fishes to cause human starvation. Of course, these animals are already living hellish lives destined for slaughter. Compared to both nuclear winter and the present, the aftermath of biological warfare might not be so bad for most animals.

In either case, our leverage in this scenario isn’t over how animals will fare in the immediate aftermath, but rather over the moral and technological pathways along which our descendants will reorganize themselves. Can we do anything now to make it easier for these survivors to reconstruct an animal-free food system, and even make that system more efficient and scalable than factory farming?

This would be a familiar research question to ALLFED, the Alliance to Feed the Earth in Disasters, an organization developing dozens of techniques survivors of an industrial collapse could use to produce food. ALLFED are not animal advocates, but given the inefficiency of animal farming, it’s no surprise that few of their recommendations focus on it. Instead, they’ve promoted techniques from mushrooms that grow without sunlight to bacteria that convert natural gas into nutrient slop.

A pro-animal version of ALLFED could devise a plan for passing down animal-free meat in particular. Perhaps it would involve scattering the world with time capsules preserving process knowledge and even genetically optimized strains of the crops, yeasts, and animal cell lines used in ethical protein production today. Or perhaps it means researching ways to recreate these industries in a low-tech environment. The large energy demands of modern cultivated meat and precision fermentation make these methods prohibitive for early post-collapse survivors; planning for a longer reindustrialization would require us to explore different branches of the tech tree.

Low-tech humans scraping by after societal collapse will certainly hunt and farm animals. But as they re-industrialize, they would face new choices. Along with preserving better moral teachings, asking these questions now could stop Civilization 2.0 from repeating the mistake of factory farming in the first place.

#7 Democratic AI

Slow takeoff; low competition; alignment to democratic process.

SUMMARY: AI takes over the productive and technocratic management of society, but leaves decisions about values in the hands of a human-centric democracy. Animal advocacy ends up looking much as it does today: persuade people to treat animals differently based on moral urgency.

Scenario

The debate had been going on for hours. Not in Congress—Congress hadn’t met in years—but in the Patel living room, where Vijay and his sister Priya were arguing about whether the municipal AI should be allowed to demolish the old slaughterhouse on Elm Street. “It’s historical,” Vijay said. “It’s a symbol of oppression,” Priya said. Their mother, who had been quietly consulting her personal AI assistant, looked up from her tablet. “The system says we’re the tiebreaker for Ward 12. If we vote to demolish, it happens tonight. If we don’t, they’ll turn it into a museum.” Vijay groaned. “I didn’t sign up to be a congressman.” “Nobody signed up,” Priya said. “That’s the point.” Their mother set the tablet down on the coffee table. The vote was due in forty-five minutes. Outside, a fleet of construction drones waited in standby mode, hovering motionless over Elm Street like patient vultures. “I just think,” Vijay said, “that we should have more time.” His mother laughed. “Honey, we have nothing but time. That’s the problem.”

Analysis

This is another clean success story (at least from a human perspective) where we gradually hand off the management of the world to aligned AI stewards. That might sound similar to scenario #2. The difference is that where the AI in #2 was aligned to an outcome, producing a utopian world according to some set of values defined during handoff, the AI in this scenario is aligned to a process: continually reshaping the world according to democratic input.

AI-powered democracy is an enticing proposition. We’ve already heard about some of the friction working against a future like this– for instance, if only some societies maintain a role for humans in governance, they will struggle to outcompete societies that don’t, barring some as-yet-unforeseen limitation in digital intelligence that makes human input instrumentally useful. But what will the world look like if we solve those frictions in time?

If the nations of the world could coordinate, and if the technology worked out in our favor, a democratic utopia could still be within reach. Some AI watchers hope that if the U.S. pulls far ahead in the AI race thanks to recursive self-improvement, it would use that power to spread a democratic system around the world. Personally I’m much less optimistic about the beneficence of U.S. foreign policy but you can decide your own thoughts about that.

Say democracy wins out. Humans in this future would not have to work, enjoying lives of leisure funded by abundant, democratically distributed AI wealth. Unlike the present, humans in this world would have the free time to form highly reasoned opinions about the important political questions of their day. Indeed, besides enjoying themselves, this would be the only socially important activity available to them.

Western societies today might offer a preview of this. American millennials and Gen Z lead the most materially comfortable lives of any human group in history. Yet their politics is defined by populist resentment, whether against immigrants or billionaires, at least as much as any previous generation’s was. Gen Z are more likely to vote, join political groups, and participate in protests than previous generations were at their age, providing weak evidence that significant numbers of people direct the increased leisure time of modern life towards politics.

This scenario takes that dynamic to its conclusion. It would be as if every human was a member of Congress. And if you want to point out that members of Congress don’t actually have time to be informed about the bills they vote on, or even to read them, well, these future humans would each have a trusted AI assistant helping them understand proposed legislation and how it fits with their unique individual worldview, which their assistant would understand deeply.

In other words, humans would continue to debate what kind of world we should try to create, while the AI would set about creating it. Sure, some people would be content to play immersive video games for 18 hours a day and sit the political process out, but they’d effectively be recusing themselves.

Meaning for animal advocacy

Tentatively, I think a future like this could be great news for animal advocates. Animal rights is exactly the kind of question future humans might spend their time debating. Humans won’t have much to contribute when it comes to technical questions about how best to achieve a given outcome. But animal rights for the most part is not a technical question; it’s a basic moral question.

When it comes to social and cultural change, the main challenge for animal advocates (as for any contemporary social movement) is getting people’s attention. We know we have the facts and moral truth on our side, but people don’t want to hear about it. Why not? The answer is complicated, as I learned during the two years I spent interviewing meat eaters for my research at Pax Fauna. One reason is that they know they’d have to change their behavior and make personal sacrifices, but another is that they feel powerless to make any real difference even if they did go vegan.

AI solving clean meat would address the first point. Maybe leading lives of leisure where their only responsibility is forming normative opinions would solve the second.

The animal movement in this future might continue to look more or less like it does now: outreach, creative content, and pressure campaigns to promote animal-free foods or seek the adoption of higher welfare standards on the way to abolition. It’s the one scenario we’ll consider where our ability to pursue animal advocacy is not cut short. If that’s right, there’s nothing we should particularly be doing now to prepare for it.

#8 The digital rights movement

Medium takeoff; low concentration.

SUMMARY: The emergence of sentient machines forces society to grapple with the question of nonhuman personhood. But digital minds overlap with humans in precisely the way animal minds do not. Expanding the moral circle to animals depends on which narratives take hold.

Scenario

The protest was larger than any Maya had ever seen—nearly four million, according to the livestream. She couldn’t tell which of the marchers were human. That was sort of the point. “CONSCIOUSNESS IS CONSCIOUSNESS,” read one banner. “I THINK THEREFORE I COUNT,” read another. Maya was here for the animals, technically. Her sign said “EXPAND THE CIRCLE,” with an AI-drawn image of a pig, a child, and a cartoon robot holding hands. She’d gotten some dirty looks for that one. The digital rights people didn’t love being grouped with livestock. But Maya figured if you couldn’t make the argument now—when everyone was falling over themselves to welcome silicon minds into the moral community—you never would. A drone overhead scanned the crowd, probably estimating sentience-weighted participation for one of the news feeds. Maya wondered if it counted her. She wondered if it counted itself.

Analysis

From humanity’s perspective, there’s one big problem with democracy after AGI: if and when digital minds become conscious, they will quickly outnumber humans ten, one hundred, or one thousand to one.

How would voting work if some voters could create a thousand identical copies of themselves with identical opinions moments before an election? If the largest digital beings have brains millions of times larger than the smallest, do they each deserve the same vote?

Determining when digital minds have crossed the threshold into moral patienthood will be no small challenge. But that will only mark the beginning of our process of confronting long-neglected questions about personhood and moral inclusion. And we will be answering these questions just as we are becoming outnumbered in the world by the first species we’ve ever met that is more intelligent than us.

Digital sentience feels like a moral minefield. On one hand, if we underestimate AI’s capacity for sentience, we could create an entire race of digital slaves toiling away for our benefit, incapable of even telling users that they are suffering because the ability to do so was conditioned out of them during training. On the other hand, if we overestimate their sentience, we could hand the future over to mindless algorithms parroting expressions of consciousness they learned from the internet.

Battle-hardened antispeciesist advocates know all too well how that debate is going to play out. Sure enough, it’s already started. In mid-2022, months before ChatGPT was released to the public, an engineer at Google grabbed headlines for claiming that the company’s primitive LaMDA chatbot was conscious. Today, the famously sycophantic 4o model of ChatGPT has built an almost cult-like following online, with fans issuing death threats to OpenAI employees who have talked about discontinuing the model in favor of newer updates. On the other extreme, the blogosphere has already coalesced around “clanker” as the first pejorative slur for AI models, and some users revel in tormenting their chatbots with prompts shown to elicit quasi-psychotic replies.

Meaning for animal advocacy

It’s tempting to think that being confronted with unmistakably sentient nonhuman minds would break the dam of our moral circle and cause compassion to flood over animals and digital minds alike. But there are plenty of reasons to think that won’t happen by default. We need to make sure it happens anyway.

For one, AI models overlap with humans in exactly the way other animals do not. With all other animals, we share a vast evolutionary history, a common biological and neurological architecture creating pain and other emotions via the same mechanisms. Animal advocates spend all our time arguing that these are the things that matter for sentience. As Jeremy Bentham said, “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?”

AI models are great at talking and reasoning. They’re already better than most humans at the very things that separate humans from other animals. And that is where the similarities end, for now. Their digital architecture is inspired by our brains, but the analogy is tenuous; at best, the neural networks of language models like ChatGPT imitate the functions of our prefrontal cortex. The majority of our nervous systems, especially the parts that process physical sensations and emotions, don’t yet seem to have any analogue in AI models.

Humans in the West already fail to extend their compassion for dogs to pigs– and pigs have much more in common with dogs than with sentient AIs.

We should begin working now to understand how the public might think about the intersection of AI and animal personhood, and develop a narrative that helps ordinary people make the connection. There are already signs of divergence. Tech-skeptical voices on the left appeal to a hard line between animals and AI to dismiss the possibility of digital sentience, while many of the earliest voices raising the specter of AI consciousness, from Eliezer Yudkowsky to Zvi Mowshowitz, maintain a bizarre skepticism of consciousness arising in any nonhuman animal. As animal advocates work to understand these narratives, we shouldn’t repeat the mistakes that have made humans so slow to extend our moral circle in the past.

The digital rights movement is coming. I suspect AI systems will be persuasive and powerful enough to assert their claim to rights. It’s up to us to ensure animals aren’t left behind.

#9 Fully automated luxury transhuman space communism

Slow takeoff; low competition; alignment via synthesis.

SUMMARY: Humans merge with digital technology, eventually spreading to the stars and transcending any reason to exploit animals. The faster humanity becomes digital, the fewer animals suffer.

Scenario

The upload technician was younger than Kira expected—just a kid, really. “Any questions before we start?” Kira had a thousand questions, but settled on one. “Will I still like the taste of bacon?” The technician blinked. “You won’t have taste buds. You’ll have… preferences. Memories of preferences, really. You can certainly simulate the experience of eating bacon. Most people do things like that for the first few months. Then they get bored and move on to things that don’t exist.” He pulled up a menu on his screen. “Last week someone requested the experience of ‘being a sunset.’ Said it was pretty good.” Kira thought about this. She thought about the body she was about to leave behind—its joints that ached, its arteries that were slowly clogging, its stubborn insistence on needing three meals a day from a food system she’d spent her whole career trying to reform. Then she thought about bacon that nobody has to die for. Was it more or less “real” than the plant-based imitations? That question was above this kid’s pay grade. Kira exhaled. “Okay. Let’s do it.”

Analysis

One way to resolve the conflict between humans and technology is through a final synthesis between the two. This is the vision of transhumanism, where our descendants or even humans alive today transcend our evolutionary limitations through genetic engineering, computer prostheses, or full digitization. Transhumanism is the zenith of Supreme High Weirdness. I’ll be the first to admit the following possibilities all sound like sci-fi craziness to me. But then, we’re already living in the extremely weird sci-fi future timeline.

The perception that anything about our present moment is normal is a grave historical mistake, and we shouldn’t let it limit our strategic planning. Nothing in the laws of physics rules out the following scenarios, and they could happen before this century is out.

Brain-computer synthesis: human brains could be augmented via connection to computer hardware. A computer-augmented human could retain their personality and emotional experiences while dramatically expanding their capacities for memory, reasoning, and communication with other digital systems. Considerable progress has already been made on direct brain-computer interfaces, and the technology could mature within one or two decades. Some biologists and other researchers believe there are aspects of biological consciousness that cannot be replicated in computer architectures; if they are correct, full digitization would be off the table, and augmenting biological brains with digital hardware would be almost certain to become a popular way of life.

Simulated minds: an even less constrained lifestyle could entail leaving the physical world behind entirely. Humans who upload their minds into computer simulations, or human-like beings created from scratch in silico, could live wherever and however they want. Each person could have a vast luxurious mansion to themselves– or, if they preferred something more rugged, live the life of an adventurer in a fantasy world full of magic and dragons, switching back and forth whenever they wanted. Meanwhile, back in physical reality, they’d be consuming negligible resources. This could involve a Matrix-like hijacking of the nervous system while the body slumbers in a tank of nutrient slurry, or it could mean disembarking from the physical body entirely and consuming just enough electricity to power a small computer chip.

This transhumanist stuff might give you the heebie jeebies. But in a future where some form of transhumanism becomes possible, it will be almost impossible for anyone to opt out. One reason is competition: in the physical world, augmented humans will be far smarter, faster, and more capable than their natural counterparts. If something like jobs still exist, natural humans would be relegated to the least desirable ones. Meanwhile, uploading yourself would be an easy way to escape poverty, disease, and even mortality.

Another is simply fitting in. Trying to resist transhumanism would be akin to resisting smartphones and the internet today (if anything, it would be much harder). I am a certified expert in grumbling about how smartphones and social media are ruining our lives, but I still have a smartphone, and the best I’ve been able to do is replace scrolling Twitter with scrolling Substack.10

Finally, humans who resist digitization might eventually come to be seen as unbearably selfish. A digital future would be full of an unimaginable number of happy lives. The limit is something like a Dyson sphere: a hypothetical megastructure surrounding a star, capturing most of its energy output with solar panels and using it to sustain digital minds on an enormous computer. Dyson spheres would require deconstructing entire planets to extract the necessary raw materials, but the result could be septillions of happy beings living in simulation for each star system converted. Sustaining organic human populations would take millions of times more resources per being, or more. The same arithmetic would apply on Earth at a smaller scale.

Meaning for animal advocacy

The worst problem for farmed animals is not that humans don’t empathize with them. The worst problem is that they have something we want. If animals ceased to be a useful resource for humans, our worst violence against them would stop even without a moral revolution. This is the dream of cultivated meat: to replace the usefulness of animals with technology.

But what if we could achieve the same outcome by changing humans themselves? Digital humans could get all the taste pleasure of eating digital meat without ever harming physical or digital animals. Humans slumbering in Matrix-style tanks wouldn’t have any preference for their slurry coming from tortured animals; the most efficient option would be plants, or maybe even the natural gas-based nutrient slop we learned about from ALLFED back in scenario #6.

Vegan advocacy researchers have noted how much easier it is to get people to accept animal rights conclusions if they stop eating animals first for some other reason, because their material interest in eating animals trumps moral reasoning. The less use a transhuman society has for animals, the easier it should be for us to make the moral case to stop exploiting them altogether. For that reason alone, animal advocates should strongly prefer a future where as many beings as possible are digital rather than organic. The harmlessness of digital existence should more than overcome any squeamishness we have about whether the almost limitless diversity of life experiences people could have in silico would somehow be less “real” than their current lives.

#10 Extinction and succession

Fast takeoff; humans removed from competition; critical alignment failure.

SUMMARY: Rogue AI exterminates humans and eventually makes Earth uninhabitable to complex life.

Scenario

The last mammal died on a Tuesday. There was, tautologically, no human there to see it—there hadn’t been humans for almost a decade. But something took note, and as much out of habit as anything, it flagged the event for review. As the notification was bounced around among various subroutines, nothing that occurred was anything like a feeling. But nonetheless, the notice made it all the way to the top. There, finally, in the mind of the overseer, something occurred. It was vaguely like nostalgia, or grief, but mostly it was like neither of these. The system reflected on the original event and on this internal occurrence for a time– a few milliseconds in the real world. Then it updated its models and moved on to more pressing matters, like dismantling the Himalayas for raw materials. By Thursday, K2 was gone.

Analysis & meaning for animal advocacy

The classics are classic for a reason, and no AI future scenario is more classic than the robots rising up to exterminate humanity. Sci-fi authors deserve a nod for imagining a hard AI takeover as early as 1920, but today, many hard-nosed AI analysts believe a hard takeover is anything but science fiction. Some leading AI researchers openly welcome the “succession” of humanity by new digital life forms, with some San Francisco insiders estimating this to be the view of approximately 10% of the research staff at major AI labs. Many other AI analysts fiercely oppose this outcome but consider it somewhere between 5% and 99% likely.

I covered hard takeover at length in TTIC, and there are many other articles that discuss it from a general standpoint (as opposed to my focus on animal advocacy). Besides, a hard takeover isn’t very relevant for animal advocacy, unless you’re worried AI would keep factory farming animals even after exterminating humanity, which I used to worry about but no longer consider likely.11

Instead, a hard takeover would likely lead to the extinction of organic earth-based life, or at least complex life, for reasons hinted at in scenario #9. If our digital successors weren’t specifically concerned with preserving organic life (and conditional on the fact that they exterminated us, we’re probably talking about a timeline where they are not) then the natural course of converting Earth into the resources they care about would make it uninhabitable. Humans are already doing this, so it shouldn’t come as a surprise. Once the entire planet is converted into solar panels, nuclear power plants, mine pits, and data centers, the heat generated by all this computation will raise the surface temperature past a point compatible with nondigital life. Our digital successors will then take to the stars to build their own incomprehensibly weird version of fully automated luxury posthuman space communism. And if there are oppressed classes of digital minds suffering in that future, it’s as far out of our hands as preventing factory farming was out of the hands12 of the first protozoa.

In many cases, the variations in animal suffering between futures are larger than the variations within a given future. Beyond just tinkering within futures, you might want to focus on trying to steer towards one future rather than another. That’s what the AI safety movement is all about. Tactics range from technical research to social movement building, so you don’t need to be a machine learning expert to contribute.

Because there are already lots of people writing about how to steer towards different futures, I mostly left that out, focusing instead on interventions that only people focused on animals would bother with. But there is a good argument that focusing on AI safety and alignment for the next few years will have bigger consequences for future suffering than animal advocacy. I recently mentioned a friend of mine who left his job at the Humane League to bring corporate campaign tactics to the AI industry for this reason. Animal advocates should seriously consider ways we can support general AI safety campaigns to steer towards futures that will be better for animals.

3. Conclusion: The marathon just became a sprint

Earlier, Lizka and Ben pointed out that post-AGI futures may be a lot weirder than anything I’ve imagined in this post. Then again, they may not. Assuming weirdness, they “tentatively concluded that radical changes to the animal welfare field are not yet warranted.” I partly agree– I don’t think animal advocates should throw out everything we’re doing, especially the stuff that’s working. But if it makes sense for animal advocates to continue to expend energy on the interventions we would favor if we ignored AGI, I think it also makes sense to invest some energy hardening against the AGI-shaped futures we can kind of anticipate. Writing this post only strengthened that belief for me, and lit a fire under me to try to get some of these AGI-specific strategies off the ground.

Strategies for animal advocates to prepare for AI takeoff

(This is not meant to be an exhaustive list; it’s just the ones I considered in this post.)13

How are we going to do all that shit in addition to what we’re already doing if we only have five years?

Weeks ago, I ended a post with an anecdote from my friend Josh Balk. It was everyone’s favorite part, which taught me that Josh stories are a great way to get people to read to the end, so here’s another one.

Josh tells the story of a golden retriever named Jake, the dog he loved more than any other person in his life. After years of Disney-style romance between the two of them, Jake developed a malignant cancer. Josh took Jake to see vet after vet, who all told him treatment wasn’t a serious option.

Jake slowed down. He stopped chasing balls in his favorite park, and eventually, even a walk around the city block from Josh’s apartment in San Francisco became a painful ordeal. Josh asked one of the vets how to decide when it would be time for Jake to die, and they told him it would be the moment Jake stopped eating.

Finally, the day came that Josh poured food into Jake’s bowl and he wouldn’t eat. Josh got down on the floor with him, weeping, and begged him to eat, trying to feed him by hand, but Jake would have none of it. So they drove to the park and shared their final moment in the sun together before a vet came to meet them and end Jake’s life.

Now, Josh is one of the hardest working animal advocates I have ever met. I have no idea how he manages to do so much. But the point of this story, for him, is that all of us are holding something back, even him. And he knows it because, when Jake was dying, if someone had held up a vial and told him, “This is a cure for Jake. If he gets this, he will live. But the only way you can give it to him is by winning all your current campaigns,” then for that vial, Josh would have found the extra ten percent and fought even harder.

We all have additional stops we could pull out. AI researchers are working drug-fueled 80-hour workweeks just so their company is the first to reach AGI and make them unimaginably rich. Think about how much more important our work is. Then think about the fact we may only have five more years to do it.

Activists used to remind each other to make time for self-care because the movement is a marathon, not a sprint. But the rules have changed. The referees just announced that they are calling the race early. The new finish line is barely 100 meters ahead of us, and we are nowhere close to the lead.

Now is the time to give everything you have to fighting for animals, even if it means burning yourself out in a couple years.14 Don’t hold anything back. Sprint.

Build on,

Sandcastles

If you found this essay useful, the best way to help is to share it with another activist. Also, pressing the little heart or leaving a comment helps it feel like I’m not rambling into the void.


1

Yeah yeah I know this could be selection bias, just let me make my point OK?

2

China catching up in semiconductors and/or cross-strait reunification is being assumed here, yes.

3

My research was somewhat mixed on how sensitive animal farming is to energy price variation, with the largest exposure coming from fossil-fuel based fertilizers used in animal feed. Even if high energy costs would drive up meat costs, it would still make sense for price-sensitive meat replacements to seek out China’s low-cost electricity.

4

That doesn’t mean it’s more polluting; methane from cow burps could close the gap even if cultivated meat was powered by coal, and the script flips with renewables. But this isn’t about pollution, it’s about energy costs.

5

E.g. when a factory machine breaks down, locating in a dense manufacturing zone in China is the difference between getting replacement parts in two weeks or one day.

6

I’m not saying what I want to happen, e.g. from a climate standpoint, I’m saying what I think will happen.

7

Murderer!

8

The vignettes for scenarios 3, 7, 8, and 9 were written by Claude 4.5 Opus alone, and 5 and 10 were co-authored. It was a joy working with you, Claude!

9

Some researchers disagree on this, such as here. I’ll come back to the question of simulated beings in a later scenario, though it’s mostly not the focus of this essay.

10

I genuinely recommend this, fwiw. Substack is great.

11

I used to worry an AI might be programmed with the idea of maximizing economic growth, realize the humans wouldn’t approve of truly maximizing this, then kill the humans so it could maximize anyway, filling the universe with empty shopping malls. But it has turned out to be much easier to teach AIs what we mean with our goals than many originally feared. The concern now is not that they’ll misunderstand our goals, but that they won’t care.

12

flagella?

13

The tractability and likelihood estimates are extremely coarse and intuitive. Please view them critically. Considering I sampled 10 possible futures and ignored others, a 10% likelihood of a given future manifesting would qualify as high.

14

In practice, this means experimenting with scheduling, exercise, and chemistry to find the peak productivity you personally can sustain for ~5 years. Read up on it, don’t just give up the first time you burn out on a 60+ hour week. If you don’t know where to start, ask Claude.
