So cryonics might not be the ideal icebreaker on a first date, so to speak. But weird goals are not necessarily bad or irrational goals, and investing in an unusual plan does not make anyone a bad person. Yet some of the moral objections to cryonics seem to suggest that, at the heart of the disdain towards cryonics, there may be a conflation between what is considered weird or unusual and what is considered immoral.
In this sense, scepticism towards cryonics should not come as a surprise. Most novelties in the biomedical sciences have elicited, and continue to elicit, similar reactions.
In general, it seems that the closer a technology comes to interfering with issues of life and death, the stronger the scepticism or outright aversion it raises. As an illustration of this fact, it is both helpful and interesting to look at the historical change in attitude towards another kind of technology that was also considered “weird” and “immoral” at first, but that is now widely accepted—namely in vitro fertilization (IVF) with embryo cryopreservation (EC). Despite having faced considerable initial scepticism when they were introduced (IVF in 1978 and EC in 1983), IVF and EC have since found their place in the public’s sphere of acceptance and become part of standard medical practice in most countries.
Cryonics has not become as popular as IVF and EC, nor is it perceived as equally “normal”, even though it has been discussed for a very long time.1 In fact, the first real case of human cryopreservation took place in 1967, thus predating the birth of Louise Brown, the first IVF baby, by 11 years. Although the procedure was primitive by today’s standards, the patient—psychology professor James H. Bedford—remains in cryosuspension to this day.
There are several reasons why IVF and EC have fared much better than cryonics. The main reason is probably the technical complexity of cryonics: it requires far more advanced technology and knowledge to cryopreserve a fully developed person, made up of trillions of highly specialized cells, than to preserve an embryo consisting of just a few cells. But even if the cryopreservation process itself were simpler, the revival process remains incredibly difficult to realize. More than half a century after the first cryopreservation, no attempt to revive a cryopreserved patient has been made, and there is no agreement among experts about whether and when revival will be feasible. IVF and EC, on the other hand, produced reliable results soon after they were first developed, allowing them to be made publicly available rather quickly. As a result, their increasingly common use has made them seem less weird and, eventually, uncontroversial. Given the lack of confidence that revival will someday become possible, it is easy to understand why people choose not to invest in cryonics and continue to perceive it as weird.
But technical difficulties alone do not explain why cryonics has stalled compared to other technologies, as cryonics is surely not the first extremely ambitious plan in the history of humanity. A century ago, it would have seemed ludicrous to suggest that humans might someday visit another planet in the solar system. Since then, humans have managed to land on the Moon, and now the prospect of visiting and even colonizing Mars is on the table.
One reason why there has not been much progress in cryonics research is a lack of funding. Cryonics research is not supported by state funding, but is instead conducted largely by private research groups relying on private donations. This lack of funding hampers progress, and the lack of progress is, in turn, used as a reason for not investing in more research. Needless to say, one cannot be sure that more research would guarantee success; but at least it would allow a more accurate assessment of the potential of cryonics and reduce the uncertainty about whether it is a potentially successful enterprise.
In the first chapter of this book, cryonics will be frequently compared with IVF and EC . Despite the obvious anatomical differences and the (perhaps less obvious) moral differences between embryos and fully developed humans, there are a number of morally relevant similarities in how cryopreservation is applied to each of the two. For example, both practices essentially aim at interfering with natural processes—embryonic development in the case of EC and death in the case of cryonics—by using ultra-low temperatures to slow down the body’s metabolic activity.
EC and IVF carry the unique advantage of having been introduced recently enough for many readers to have witnessed first-hand the change in society’s attitudes towards the two. It took only three decades for EC and IVF to go from being seen as weird and immoral to being considered normal and, by many, good. People today who are against cryonics because it is weird need only look at the recent acceptance of EC and IVF to see that weirdness alone is a bad proxy for moral permissibility. What this consideration suggests is that, if cryonics is unethical, it must be unethical for reasons that have nothing to do with its being weird and unusual. Instead, we would need to ask whether it causes harm to individuals and/or societies, either now or sometime in the future.
Comparing cryonics to EC is also useful because it shows the enormous potential of cryopreservation beyond IVF and cryonics. In Chap. 6, we will discuss one instance of such potential, namely a hypothetical future technology aimed at cryopreserving human foetuses. At the moment, it is not possible to cryopreserve embryos beyond the blastocyst stage, which is reached roughly five days after fertilization. Meanwhile, we are as far from being able to preserve foetuses as we are from being able to preserve adult humans—indeed, we are not even able to keep a foetus younger than 24 weeks alive outside the womb. We will see how cryonics, paired with hypothetical techniques that would make it possible to extract the embryo/foetus without causing damage to its tissues, could become a less controversial alternative to abortion. Given that abortion is regarded as one of the most divisive social issues in many cultures, this option would not only help women who cannot continue their pregnancy and yet do not want to abort, but would probably also help reduce the conflict between pro-life and pro-choice groups by offering a best-of-both-worlds compromise solution.
On a similar note, we will also explore the potential of cryonics as a practical alternative to euthanasia, another highly controversial medical procedure. Although euthanasia is illegal in most countries, there is a growing global movement in support of its legalization. As we will discuss in Chap. 5, so-called cryothanasia may offer a less permanent alternative to euthanasia for patients suffering from prolonged, unbearable, and incurable pain. Unlike euthanasia, cryonics does not seek to end someone’s life, but rather to pause it in the hope that future medical technology will give them a new chance to continue living.
So perhaps some technologies, including cryonics, can be used to finally overcome profound disagreements in our societies. And although such technologies often provide grounds for new conflicts between, say, the religious and the atheist, or the conservative and the liberal, we will see how technology may also be used to mend social fractures that originate in non-negotiable and irreconcilable moral views.
Starting and Ending Life in Liquid Nitrogen
Over the past two centuries, science and technology have progressed at an extraordinary speed. For better or worse, humans have gained the knowledge necessary to understand and, to a certain extent, to alter some of the most complex mechanisms governing the natural world.
One of the many groundbreaking achievements in recent decades has been the development of reproductive technology, allowing ever more control over the earliest stages of the human life cycle. Even though this kind of technology receives less public attention than many other recent feats, it has had a significant impact on society: since 1978, around 5 million people have been born thanks to in vitro fertilization (IVF). In 1983, it became possible to also cryopreserve embryos in liquid nitrogen at a temperature of −196 °C, thereby enabling prospective parents to implant their embryos long after they were conceived in the laboratory (Andersen et al., 2005; Horsey, 2006).
Given that IVF and embryo cryopreservation (EC) tend to be last-resort options for parents struggling to conceive, it is likely that most of these children would not have been born if these options had not been available. IVF was also the first successful attempt at going beyond the traditional medical aim of “merely” saving lives by actually conceiving life through unnatural means, which until then had largely been considered a prerogative of gods or nature.
Modern medicine and technology have achieved extraordinary results in advancing the knowledge of the human body, restoring health, and increasing both the quality and the length of our lives. But what if medicine and technology succeeded not just at conceiving life, healing diseases, and stretching the human lifespan, but also at increasing our capacity to control the other end of the spectrum of life? What if we could bring back the dead (in the specific sense of “dead” I will specify below) to the world of the living?
Cryonics—also known as cryopreservation or cryosuspension—is the act of preserving legally dead individuals at ultra-low temperatures, typically using liquid nitrogen. Such extremely low temperatures can, in effect, “pause” metabolic processes to a point where the body is completely inactive and does not decompose, making it possible—at least in theory—to “un-pause” them at a later time. The hope is that, in the future, it will be possible to revive cryopreserved individuals and recover their body, memories, and personality (Minerva & Sandberg, 2015).2
As one cryonics research group explains on its website, a person is only really dead “once the chemistry of life becomes so disorganized that normal operation performed by a human body cannot be restored” (Alcor, n.d.).
Over time, medicine and technology have raised the bar for what constitutes unfixable disorganization in the chemistry of life. Mere decades ago, cardiac arrest was considered a lethal event, believed to cause near-instantaneous loss of all information in the brain. Nowadays, a person is only considered dead around six minutes after the heart has stopped beating, as we now know that it is only after the six-minute mark that brain death actually occurs. But the goalpost keeps inching forward, and it is not unreasonable to suppose that future technologies will allow us to buy even more time between the moment the heart stops beating and the point of irreversible brain death. As technology advances, death retreats.
The circumstances under which an individual is doomed to die keep shifting at the pace of technological advance. A good illustration is the increasingly early stage of pregnancy at which a foetus is considered viable. Only 30 years ago, a 24-week-old foetus would not have been considered viable. It would have been doomed to die within hours of being born, with no attempts made to resuscitate it and keep it alive. Nowadays, even though preterm babies born at 24 weeks frequently experience lasting developmental issues compared to their full-term peers, they nevertheless tend to survive (Younge et al., 2017). The history of medicine is filled with similar examples, and it seems we are only as doomed to die as the technology we need to survive is lacking. At the moment, technology is not sufficiently advanced to allow us to survive as long as we please. Hence, the best we can do is use cryonics to try to halt the process of dying, in the hope that future medicine will fix what is killing us today.
The Information-Theoretic Criterion of Death
The criterion for declaring someone dead has been modified over the years, and it is not the same across different countries or religious views. Differences in the chosen criterion of death often have to do with the kind of technology available or with policies regulating organ donation (Sade, 2011).
For instance, the cardiopulmonary standard of death relied on the absence of cardiac and respiratory activity to certify the death of a patient. However, with the development of machines able to perform these functions artificially, such as ventilators, this definition quickly became obsolete. The whole-brain standard of death, which is the most widely accepted in current medical practice, defines death as the irreversible cessation of all brain functions, since individuals whose whole brains have stopped functioning have lost the capacity for consciousness as well as autonomous respiration. In recent years, the whole-brain standard of death has also been subject to criticism, partly because of new findings on patients with locked-in syndrome. These patients are conscious but completely paralysed, with the sole exception (in some cases) of the eyes. Some of them, however, do not show significantly more brain activity than brain-dead individuals—and yet they are obviously not dead, as they are conscious. It has therefore been suggested that a more adequate standard of death should refer to the loss of the capacity for consciousness, memory, beliefs, and desires—in sum, to the loss of the capacity to perform the activities we consider specifically human (McMahan, 1995). According to this new criterion, called the higher-brain standard, death occurs when one becomes irreversibly unconscious.
Cryonicists adopt a different definition of death, which refers to the information-theoretic criterion. The concept of “information” is used here in a very broad sense, such that consciousness and self-consciousness, too, can be considered particular types of “information”. According to the information-theoretic criterion, death occurs only when the information stored in the brain becomes so corrupted that it would be impossible to retrieve it; at that point, there is no longer any chance of recovering the brain’s unique information—including consciousness and self-awareness—by any current or future technology. Conversely, as long as the information within an individual’s brain is not corrupted beyond recovery, that person is not irreversibly dead, because, at least in theory, the information stored in their brain could still be retrieved.
Before we delve deeper, let us first consider just what it means to be alive or dead in the information-theoretic sense. For this, we need a bit of background on what it means to be alive, to have a mental life and an identity, and to be conscious.
The dominant view on human consciousness among cryonicists holds that what we commonly refer to as a “person” is essentially a unique collection of information stored inside a brain.3 This information includes everything from basic instincts and congenital quirks, through subconscious biases and preferences, and all the way up to cherished memories, defining experiences, learned skills, moral and political views, novel ideas, and so on. These are all the qualities that, when put together, define the unique identity of one human being. The brain itself, meanwhile, is an extremely complex organ that encodes, stores, and employs all of this information in a concerted effort to ensure the survival and reproduction of the genes from which it grew. Although we have quite a lot of information about the detailed workings of individual systems within the brain—chemical pathways, cells, tissues, and so on—we know very little about how it all conspires to store vast amounts of detailed information about the world, and practically nothing about the conscious experience that comes with it. According to cryonicists, what we do know is this: one aspect that makes a given person that particular person is the information stored in (and processed by) their brain (De Wolf, 2015). We might dispute whether this is a sufficient condition for personal identity per se, but it seems uncontroversial that it is at least a necessary condition. For example, we often say that people with severe dementia who no longer possess relevant information about their past “are no longer themselves”.
In order to better understand the difference between death with and without cryopreservation, let us for a moment imagine ourselves as very advanced laptops. After all, laptops also store and process information, and that information is at least unique enough to concern us if our own laptop were suddenly replaced with someone else’s. Even though these similarities are surely not strong enough to claim that we are just like computers, they are not trivial enough for this comparison to be discarded as absurd either.4
Now, suppose that someone were to throw my laptop into the lava lake of an active volcano, much like the fate that befell Sauron’s Ring of Power at the hands of Frodo Baggins in Tolkien’s famous book. After mere seconds in the scalding lava, the content stored in my laptop (assuming I have been so foolish as to have made no backup) is irreversibly lost, its storage unit destroyed by the intense heat. This is, in effect, what happens after a person dies and their body is cremated or buried: the information stored in their brain is destroyed, whether through incineration or decomposition, and hence irretrievably lost.
Imagine now a second scenario, in which a malfunction causes the lithium-ion battery in my laptop to spontaneously catch fire. Although I grab a fire extinguisher and manage to put out the fire within seconds, the sudden, intense heat severely damages the laptop’s internal components, including its precious storage unit. Upon bringing it to a repair technician, I am informed that while the storage unit is badly burned, the information encoded on it may be largely unharmed. Unfortunately, the technician lacks both the tools and the skills needed to extract the information without destroying it in the process, and he does not know of any other technician who might be up to the task. However, he reassures me that there is great demand worldwide for a workable solution to this kind of problem, and that future engineers will probably find a way to retrieve the information in my laptop storage.
In the case where my laptop is thrown into the volcano, I would cry, “My laptop is dead!” and mourn the irretrievable loss of my precious data. But in the case where the battery caught fire and the storage unit was not completely destroyed, experts might one day be able to retrieve my data and transfer it to a new laptop. If computers ever become self-conscious, the “information” that experts might be able to retrieve could include such self-consciousness (remember that I am using the concept of “information” very broadly). While this new laptop would be technically distinct from the one that caught fire, it would, for all intents and purposes, be equivalent to the one I had in terms of the information it contains, because all (or at least a large part) of my files would be retrieved and uploaded. The hope of cryonicists is either that the information stored in their brain could one day be retrieved and transferred to another substrate so as to preserve their identity (e.g. through so-called brain-uploading, which, however, will not be discussed in this book), or that the original “laptop”—that is, their own original body and brain—could one day be revived with all its original information.