Peter Singer - Extinction Risk & Effective Altruism



Many effective altruists think a lot about the future, and there are questions, of course, about how much we should invest in trying to prevent small risks that we will become extinct. Another cognitive bias is that we are not very good at accounting for small risks - we tend to dismiss them. We can't see that the difference between a 1 in 1,000 chance and a 1 in 100,000 or a 1 in 1,000,000 chance is all that important. But really, when you think about it, if there is a lot at stake, there is a huge gain in reducing a risk from 1 in 1,000 to 1 in 1,000,000 - meaning it is only one thousandth as likely to occur.

And if what we are concerned about is the future of our species - if we do value the idea that there should be human beings in the future, that, if you like, the human story will continue, that we will make progress and solve the problems we are immediately faced with, and that eventually there will be potentially very large numbers of people, perhaps trillions or quintillions of people existing over many generations - many effective altruists see this as a really important value, because they think it is a good thing that people should flourish and have good lives. And if you can reduce the risk of extinction, and thereby increase the probability that there will be these people living good lives, then that can be seen as an important cause to donate to - and one that has a high payoff in terms of cost-effectiveness, given the number of people who would benefit from it.
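To make the arithmetic behind that point concrete, here is a minimal back-of-the-envelope sketch in Python. The population figure and the probabilities are purely illustrative assumptions for the example, not estimates from the talk:

```python
# Illustrative expected-value arithmetic for extinction risk reduction.
# All numbers here are assumptions chosen for the example, not estimates.

future_people = 10**12           # assume a trillion potential future people
p_before = 1 / 1_000             # extinction risk before an intervention
p_after = 1 / 1_000_000          # extinction risk after the intervention

# Expected number of future lives lost to extinction, before and after.
expected_loss_before = p_before * future_people
expected_loss_after = p_after * future_people

# The gain from the intervention is the drop in expected lives lost.
expected_lives_saved = expected_loss_before - expected_loss_after
print(f"Expected future lives saved: {expected_lives_saved:,.0f}")
# -> Expected future lives saved: 999,000,000

print(f"Risk after is {p_after / p_before:.4f} of the risk before")
# -> Risk after is 0.0010 of the risk before
```

Even though both probabilities look negligibly small, the expected-value difference is enormous once the stakes are large, which is the point being made about our bias in handling small risks.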
Now not everybody thinks like this, and there is a philosophical question that it raises. Some people would say: look, I'm concerned with people who exist now, and I'm concerned with people who will exist in the future whatever happens. It's very probable that people will exist in, let's say, 200 years' time, even though none of them exist now - and we wouldn't want to make their lives miserable by, for example, continuing climate change, or by leaking radioactive waste from nuclear power plants, or something like that. But, they will say, if people don't come into existence at all, they have nothing to lose. Nobody regrets never having existed, because if they never exist they can't regret it. So some people think it is not a pressing moral priority to make sure that people come into existence who would never know what they had missed out on. They would see some kind of accident that caused the extinction of our species as a bad thing because it kills the 7 or 10 or 15 billion people on the planet at the time the accident occurs - and they would see that as a terrible thing - but they wouldn't see it as terrible because of the huge numbers of people who would have come into existence had the accident not occurred, but now never will.
So in order to decide how concerned we ought to be about the risk of extinction, we need to reach a conclusion on the question of whether it is a huge loss of value if very large numbers of beings who would otherwise have existed never come into existence.
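A minimal sketch can make the contrast between the two views concrete. Again in Python, and again with purely illustrative population figures (assumptions for the example, not figures from the talk):

```python
# Contrast two ways of valuing the same extinction event, per the discussion above.
# Both population figures are illustrative assumptions, not estimates.

current_people = 10 * 10**9       # assume ~10 billion people alive at the time
potential_future_people = 10**12  # assume a trillion people who would otherwise exist

# Person-affecting view: only the deaths of people who exist count as a loss.
loss_person_affecting = current_people

# Total view: lives that will now never be lived also count as a loss.
loss_total_view = current_people + potential_future_people

print(f"Person-affecting view counts {loss_person_affecting:,} lives lost")
print(f"Total view counts {loss_total_view:,} lives lost")
# The total view's figure is ~100x larger under these assumptions, which is
# why the two views can rank extinction-risk reduction so differently as a cause.
```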
Effective altruists need to have some information about the consequences of their actions - obviously you can't be effective if you don't know what consequences your actions will have - but one of the problems we have is that when we get into the more distant future, it's very difficult to predict what the world will be like, and therefore difficult to know how much value to place on, let's say, preventing bad things from happening. So some people might say: well, we don't have to worry about climate change beyond the next century, because by then there will be new technologies, people will be able to take carbon out of the atmosphere cheaply and effectively, and will be able to manipulate the climate the way they want. Is that true or not? We don't know, and it would be good if there were some way of predicting whether it is. The same is true for almost any problem you can think of: we don't know what the world will be like, we don't know what technologies will exist, and we could easily waste money and resources doing things that we thought would be useful but that in fact are not useful for the world of the future.
It clearly would be good if we could become better at predicting, but it's also very hard to do, because in a way predicting new technologies requires predicting the discoveries on which they will be based - and if we could predict those discoveries, we could make them now! We wouldn't have to wait around for 50 years for them to be invented.
So I certainly think we should be putting more effort into trying to improve our predictions about the future, but I am not very confident that we will ever be able to get to the stage when we really can foresee what the future is going to be like.
