Longtermism – why the million-year philosophy can't be ignored

  • Written by Katie Steele, Associate Professor in Philosophy, Australian National University

In 2017, the Scottish philosopher William MacAskill coined[1] the name “longtermism” to describe the idea “that positively affecting the long-run future is a key moral priority of our time”. The label took off among like-minded philosophers and members of the “effective altruism” movement, which sets out to use evidence and reason to determine how individuals can best help the world.

This year, the notion has leapt from philosophical discussions to headlines. In August, MacAskill published a book[2] on his ideas, accompanied by a barrage of media coverage and endorsements from the likes of Elon Musk[3]. November saw more media attention as a company set up by Sam Bankman-Fried[4], a prominent financial backer of the movement, collapsed in spectacular fashion.

Critics say longtermism relies on making impossible predictions[5] about the future, gets caught up in speculation about robot apocalypses and asteroid strikes[6], depends on wrongheaded moral views, and ultimately fails to give present needs the attention they deserve.

But it would be a mistake to simply dismiss longtermism. It raises thorny philosophical problems – and even if we disagree with some of the answers, we can’t ignore the questions.

Why all the fuss?

It’s hardly novel to note that modern society has a huge impact on the prospects of future generations. Environmentalists and peace activists have been making this point for a long time – and emphasising the importance of wielding our power responsibly.

In particular, “intergenerational justice[7]” has become a familiar phrase, most often with reference to climate change.

Seen in this light, longtermism may look like simple common sense. So why the buzz and rapid uptake of this term? Does the novelty lie simply in bold speculation about the future of technology – such as biotechnology and artificial intelligence[8] – and its implications for humanity’s future?

Read more: What do we owe future generations? And what can we do to make their world a better place?[9]

For example, MacAskill acknowledges we are not doing enough about the threat of climate change, but points out other potential future sources of human misery or extinction that could be even worse. What about a tyrannical regime enabled by AI from which there is no escape? Or an engineered biological pathogen that wipes out the human species?

These are conceivable scenarios, but there is a real danger in getting carried away with sci-fi thrills. To the extent that longtermism chases headlines through rash predictions about unfamiliar future threats, the movement is wide open for criticism.

Moreover, the predictions that really matter are about whether and how we can change the probability of any given future threat. What sort of actions would best protect humankind?

Longtermism, like effective altruism more broadly, has been criticised[10] for a bias towards philanthropic direct action – targeted, outcome-oriented projects – to save humanity from specific ills. It is quite plausible that less direct strategies, such as building solidarity and strengthening shared institutions, would be better ways to equip the world to respond to future challenges, however surprising they turn out to be.

Optimising the future

There are in any case interesting and probing insights to be found in longtermism. Its novelty arguably lies not in the way it might guide our particular choices, but in how it provokes us to reckon with the reasoning behind our choices.

A core principle of effective altruism is that, regardless of how large an effort we make towards promoting the “general good” – or benefiting others from an impartial point of view – we should try to optimise: we should try to do as much good as possible with our effort. By this test, most of us may be less altruistic than we thought.

Always optimise: the idea you should do the maximum good possible with your efforts is a key tenet of effective altruism. Sanjay Koranga / Unsplash[11]

For example, say you volunteer for a local charity supporting homeless people, and you think you are doing this for the “general good”. If you would better achieve that end, however, by joining a different campaign, you are either making a strategic mistake or else your motivations are more nuanced. For better or worse, perhaps you are less impartial, and more committed to special relationships with particular local people, than you thought.

In this context, impartiality means regarding all people’s wellbeing as equally worthy of promotion. Effective altruism was initially preoccupied with what this demands in the spatial sense: equal concern for people’s wellbeing wherever they are in the world.

Read more: The cold logic of doing good[12]

Longtermism extends this thinking to what impartiality demands in the temporal sense: equal concern for people’s wellbeing wherever they are in time. If we care about the wellbeing of unborn people in the distant future, we can’t outright dismiss potential far-off threats to humanity – especially since there may be truly staggering numbers of future people.

How should we think about future generations and risky ethical choices?

An explicit focus on the wellbeing of future people unearths difficult questions that tend to get glossed over in traditional discussions of altruism and intergenerational justice.

For instance: is a world history containing more lives of positive wellbeing, all else being equal, better? If the answer is yes, it clearly raises the stakes of preventing human extinction.

A number of philosophers insist the answer is no[13] – more positive lives is not better. Some suggest that, once we realise this, we see that longtermism is overblown or else uninteresting.

But the implications of this moral stance are less simple and intuitive than its proponents might wish. And premature human extinction is not the only concern of longtermism.

Speculation about the future also provokes reflection on how an altruist should respond to uncertainty.

For instance, is doing something with a 1% chance of helping a trillion people in the future better than doing something that is certain to help a billion people today? (The “expected value” of the number of people helped by the speculative action is 1% of a trillion, or 10 billion – so it might outweigh the billion people to be helped today.)
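To make the comparison concrete, here is a minimal, purely illustrative sketch of the expected-value arithmetic in Python. The function name and structure are hypothetical; the only inputs are the two made-up figures from the example above.

```python
# Illustrative only: compare two actions by the expected number of people helped,
# using the hypothetical figures from the example above.

def expected_helped(probability: float, people: float) -> float:
    """Expected number helped = probability of success x number of people helped."""
    return probability * people

speculative = expected_helped(0.01, 1e12)  # 1% chance of helping a trillion people
certain = expected_helped(1.0, 1e9)        # certain to help a billion people today

print(f"Speculative action: {speculative:,.0f} people in expectation")  # 10,000,000,000
print(f"Certain action:     {certain:,.0f} people in expectation")      #  1,000,000,000
```

On this bare calculation the speculative gamble comes out ten times better in expectation, which is precisely why the questions about risk aversion raised below matter.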

For many people, this may seem like gambling with people’s lives – and not a great idea. But what about gambles with more favourable odds, which involve only people alive today?

There are important philosophical questions here about apt risk aversion when lives are at stake. And, going back a step, there are philosophical questions about the authority of any prediction: how certain can we be about whether a possible catastrophe will eventuate, given various actions we might take?

Making philosophy everybody’s business

As we have seen, longtermist reasoning can lead to counter-intuitive places. Some critics respond by eschewing rational choice and “optimisation” altogether. But where would that leave us?

The wiser response is to reflect on the combination of moral and empirical assumptions underpinning how we see a given choice. And to consider how changes to these assumptions would change the optimal choice.

Philosophers are used to dealing in extreme hypothetical scenarios. Our reactions to these can illuminate commitments that are ordinarily obscured.

Read more: Speaking with: Peter Singer on effective altruism[14]

The longtermism movement makes this kind of philosophical reflection everybody’s business, by tabling extreme future threats as real possibilities.

But there remains a big jump between what is possible (and provokes clearer thinking) and what is in the end pertinent to our actual choices. Even whether we should further investigate any such jump is a complex, partly empirical question.

Humanity already faces many threats that we understand quite well, like climate change and massive loss of biodiversity. And, in responding to those threats, time is not on our side.

References

  1. ^ coined (forum.effectivealtruism.org)
  2. ^ book (www.google.com.au)
  3. ^ Elon Musk (twitter.com)
  4. ^ Sam Bankman-Fried (www.nytimes.com)
  5. ^ making impossible predictions (schwitzsplinters.blogspot.com)
  6. ^ robot apocalypses and asteroid strikes (nymag.com)
  7. ^ intergenerational justice (plato.stanford.edu)
  8. ^ biotechnology and artificial intelligence (www.theguardian.com)
  9. ^ What do we owe future generations? And what can we do to make their world a better place? (theconversation.com)
  10. ^ criticised (www.lrb.co.uk)
  11. ^ Sanjay Koranga / Unsplash (unsplash.com)
  12. ^ The cold logic of doing good (theconversation.com)
  13. ^ insist the answer is no (www.bostonreview.net)
  14. ^ Speaking with: Peter Singer on effective altruism (theconversation.com)

Read more https://theconversation.com/longtermism-why-the-million-year-philosophy-cant-be-ignored-193538
