Perhaps the simplest argument for caring deeply about the future is this obvious one: if humanity doesn’t wipe itself out, far more people will live in the future than are alive today. If you’re concerned with morally good outcomes, making everyone in China a bit better off is a lot more desirable than making everyone in Guinea-Bissau a bit better off, because you will be positively affecting a far larger number of people.
A lot of people, however, do not agree that trying to increase the quantity and quality of positive experiences had by potential future people is just as important as improving the experiences of people who already exist. They say, among other things, that since future people don’t exist, they have no preferences to be satisfied or nerves to experience pain. If it turns out that they never exist at all, they can hardly be ‘harmed’ by this. Who can say whether they would live happy lives or not, anyway? And how can one morally compare non-existence with existence in any case? Katja over at Meteuphoric has already done a good job of explaining where these ideas go wrong in Mistakes with non-existent people, so I’ll simply propose some thought experiments that may help us see how these claims are in fact very strange and inconsistent with our other beliefs.
Reversible non-existence through general anaesthetic
Imagine that it were possible to take existent people and make them non-existent temporarily. There’s no need to fire up your imagination, as this isn’t hard to do: with the general anaesthetics used in surgery we can already do it for all practical purposes. A person under a strong enough general anaesthetic is not capable of any desires or experiences of their surroundings. If you agree that consciousness is a necessary condition for having good or bad things happen to you, and if you are truly committed to not caring about potential future people, then a comatose body undergoing surgery ought to have no more value to you than a corpse. Concern for those who love them aside, as soon as a person went under general anaesthetic, you should be indifferent as to whether they get their treatment and go on living, or are instead shot in the head. We cannot compare their present non-existent state with existence; we don’t know they’ll be happy if we don’t shoot them in the head; there is nobody to benefit by not shooting them; and so on. When thought about in this way, the flaws in these objections are especially transparent.
Wait a minute, you might say. This person isn’t conscious now, but they have been conscious in the past, and that matters too: I only care about the experiences of people who are conscious now or have been at some point. To which I would remind you that time is just another dimension, with the curious property that we only move through it one way. If you could only walk forward, would it make sense to say you are concerned only with those standing behind you, and that those ahead of you matter not one bit? Alternatively, imagine that our comatose person isn’t someone who has come in for surgery, but is instead a person who has just been manufactured by a ‘comatose person’ machine. This new person has never been conscious in the past, but if left alone will soon wake up and go about living, exactly as our surgery patient would have. Would it then be fine to shoot this comatose person in the head just because they won’t become conscious for another five minutes? Not to me.
You might also say that what matters is that their body exists now, even if it is not currently functioning so as to produce consciousness. Imagine then that we have a machine which can pull people’s bodies apart temporarily and then put them back together exactly as they were before – rather like those imagined in the movies The Fly or Doom. Someone goes into the machine and is completely pulled apart. They are scheduled to be put together again in just a moment. If you are committed to the view that the integrity of the body is what matters, you would have to say there is nothing at all wrong with stopping the machine from putting the person back together, even if that person loved life and had no desire to die before they stepped into the machine. I trust you agree with me that that would be the wrong thing to do.
Non-existence and uploads
An even more interesting thought experiment involves imagining a world in which people can be turned on and off like machines. If you think mind uploading looks plausible, then this is not hard to imagine: one day we might scan human brains and simulate them in computers, consciousness included. If the power went off, an upload’s present brain-state would be saved and restored when the power returned. Whether or not this seems like a plausible future to you, let’s imagine it.
Getting the electricity needed to keep your mind running might be expensive, though if you lacked money you could always turn yourself, or part of yourself, off temporarily. Imagine an upload that gets up at 8am every day, cleans its circuits and goes off to work, rather like a human today. It earns money from work, but due to heavy wage competition from the billions of other uploads around, it doesn’t earn much. In fact, it can’t even pay the electricity bill needed to keep all of its brain simulated for the full sixteen hours a day that it isn’t working. After work, it can only afford to keep itself running for four hours of leisure time, and then it has to turn off the parts of its brain which generate conscious experience until they are turned on again for work the next day. Due to its poverty, this upload is in effect forced to be conscious only twelve hours a day.
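The budget arithmetic behind this scenario can be made explicit with a toy calculation. Every figure below is hypothetical, chosen only to reproduce the numbers in the story (an eight-hour workday, four affordable leisure hours, twelve conscious hours in total):

```python
# Toy model of the upload's daily consciousness budget.
# All numbers are hypothetical illustrations, not claims about real costs.
WORK_HOURS = 8        # hours of consciousness the employer pays for
HOURLY_WAGE = 1.0     # credits earned per hour worked
OTHER_COSTS = 4.0     # daily expenses other than leisure-time electricity
POWER_COST = 1.0      # credits per hour of off-work conscious simulation

income = WORK_HOURS * HOURLY_WAGE        # 8.0 credits per day
leisure_budget = income - OTHER_COSTS    # 4.0 credits left for electricity
leisure_hours = leisure_budget / POWER_COST
conscious_hours = WORK_HOURS + leisure_hours

print(conscious_hours)  # 12.0 -- conscious half the day, switched off the rest
```

Raising the wage, cutting the power cost, or gifting the upload extra credits all increase `conscious_hours` directly, which is what makes resource transfers in this world equivalent to granting extra hours of existence.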
The upload then faces a trade-off between non-work time and not just having money to spend on fun experiences, but having money to spend on having experiences at all. It is clear to me that anyone who provided technology or extra resources to allow the upload to remain conscious, and have enjoyable conscious experiences, for longer each day would be doing a good thing. Those who are committed to not worrying about potential future people should be indifferent as to whether these uploads ever wake up once they are turned off; those who don’t believe moral comparisons can be drawn between the desirability of consciousness and non-consciousness are committed to saying it doesn’t matter how much conscious leisure time the upload gets each day, even if the upload clearly states its preference for being conscious more of the day.
Other interesting consequences for the importance of future people arise from the ability to copy uploads, but that will have to wait for another day.