Modern medical ethics is profoundly concerned with the idea of informed consent - a concept that has two key components: 1) obtaining permission from a patient before performing a medical treatment on them (consent); and 2) ensuring that the patient understands what that treatment entails (informed). Together, it is hoped that these will limit medical paternalism and prevent doctors from infringing on patients’ autonomy. In this way, informed consent is often seen as making "the impermissible permissible". A classic example is that in ordinary life, cutting open another person and rummaging around inside them is completely illegal, yet surgeons all over the world do this routinely and receive payment and praise for doing so.
In a very limited sense, simply getting a patient to say yes to something - regardless of whether or not they understand it - would suffice for gaining “consent”, but how do we define consent as being informed? As anyone who has ever agreed to any Terms and Conditions knows, there is a large distinction between merely giving information about something and actively informing someone about what they are agreeing to. This latter, more rigorous, requirement aims to ensure that patients have not been coerced into agreeing to something they either did not understand or did not want to occur i.e. it protects their autonomy. This is why we say consent must be “freely given”, a stipulation that emerged in the wake of the atrocities that occurred under the Nazi regime and came to light during the Nuremberg trials.
Even in more ideal cases, where the doctor has made every effort to explain what the treatment involves, informed consent runs into the problem of “referential opacity”: people have not necessarily consented to what they thought they did. A classic illustration is that if a doctor suggested administering “lysergic acid diethylamide” to treat tiredness, many patients with little knowledge of science might simply nod along and swallow the tablet in the hope of being made better. If the doctor instead said they were going to offer a large dose of “LSD”, many patients would understand something completely different by that statement and might change their mind, even though the substance referred to is exactly the same. This is largely a problem of miscommunication and of the doctor assuming prior knowledge. Another case of referential opacity is included in case study 1.
Case Study 1 - Alder Hey Children’s Hospital
In the late 1990s the UK was stunned by the shocking revelation that Alder Hey Children’s Hospital in Liverpool had been “stripp[ing organs] without permission from babies who died at the hospital between 1988-1996.” The subsequent inquiries and court cases resulted in updates both to the UK’s legislation governing the use of human tissue and to the law regarding informed consent. One key sticking point emerged because, while some parents had consented for “tissue” to be removed from their children either during surgery or post-mortem, they did not realise that this could include whole organs. Essentially, the medical professionals and the parents were using the same terms but with different definitions, creating a scenario in which, even though the parents might have consented, they were not informed. This highlights the problem of referential opacity and the need for doctors to use simple, explicit terms in their explanations.
The problem of patient understanding raises a further difficulty for gaining informed consent. Most patients labour under the “therapeutic misconception”: the belief that their doctors would never expose them to unnecessary risk in a clinical trial and that their main goal is always to make the patient better. This is perhaps unsurprising, as a key pillar of medical ethics is “nonmaleficence”, or to “do no harm”. However, Western biomedicine also claims to be founded upon the evidence produced in scientific studies and trials, and inherent in studying patients in this way is that some people run the risk of receiving treatment that may be less effective, or that may actively harm them (while this tends to be quite rare, there are some significant cases). Patients may therefore enter a clinical trial unaware that the treatment they now receive may be less effective than the one they would have received had they not agreed to take part. This creates a tension: while medical research is often seen as benefiting society as a whole, this does not necessarily mean that the individuals in the trial will see that benefit. Meanwhile, the risks of the research are borne entirely by those individuals.
Why is this relevant? The therapeutic misconception is so deeply rooted in our society that it may not be easily dislodged, even if a doctor explicitly informs the patient of the risks inherent in medical research. In this case, we can ask whether the patient is truly giving informed consent if they hold a belief about the study that does not match the reality of being in a trial.
One final point to note about informed consent is that in almost all cases, because of the temporal order of gaining consent, the patient can only consent to a description of a treatment and not to the reality of the procedure itself. This can become significant when, for instance, a patient and doctor disagree about just how “minor” the discomfort of a procedure is. If a patient consented to “minor discomfort” but, on reflection, considers that they really suffered quite significantly, then is it the case that the patient gave informed consent? Ultimately, while doctors may try their best, they have very limited means to infer precisely what a patient is understanding in any of their conversations.
Why do we care about consent in the first place? One of the main arguments for informed consent is that it serves to protect patient autonomy. The idea of autonomy has a long history in ethics and is often traced back to John Stuart Mill, even though he himself never used the word. In Mill’s context, autonomy referred to something akin to individuality or self-expression; in modern terms, however, autonomy is more closely aligned with the idea of making choices. This may seem a slight distinction, but we can readily think of cases where “mere sheer choice” does not align with a patient’s true self-expression.
While patient autonomy is frequently quoted in medical contexts, there are some important considerations as to who precisely can be autonomous. One common problem in medicine is that some groups of patients may not be able to make clear or informed decisions regarding their own care, with the classic examples being children, those suffering from dementia and people who are unconscious. In these cases, we say that a patient does not have the capacity to make a decision, and instead family members, the patient’s previous wishes and the doctor’s own judgement are used to decide what would be in their best interest.
Even in ideal circumstances, it is important to consider just how much autonomy patients possess when encountering the medical profession. While most doctors would shy away from the explicit paternalism of much twentieth-century medicine, it is worth realising that when patients are in hospital they are rarely in full control of their circumstances; they are often in no fit state to “shop around” for the best treatment; and they frequently do not have the time to make full and reflective decisions. More commonly, patients are presented with a small menu of possible treatments - or in some cases only one option - and are expected to make long-lasting decisions with the information they are given. While this may be partly mitigated by gaining truly informed consent, with the appropriate type and quantity of information, it is clear this situation is far from the Millian ideal of true individuality or self-determination.
Case Study 2 - The Tuskegee Syphilis Experiment
In 1932 the United States Public Health Service began a study in Tuskegee, Alabama to determine how syphilis - a sexually transmitted infection - would naturally progress without treatment in African American men. Partnering with the Tuskegee Institute, the researchers enrolled 600 impoverished African American men and promised them free medical care from the Federal Government; however, none were ever informed that they were suffering from syphilis. Instead, the researchers administered placebos and sham procedures to trick the men into thinking they were being helped, while actually documenting how their condition worsened over time, with consequences including deafness, blindness and death. Despite the fact that penicillin - a complete cure for syphilis - became the standard treatment by 1947, the study continued until 1972, when a public outcry halted the experiment.
In summary, autonomy is fundamentally concerned with ensuring that patients have a choice in their care, and informed consent has been developed as a means of respecting patients’ autonomy. Informed consent emerged from a long history of doctors, at best, showing significant paternalism towards their patients or, at worst, experimenting on the patients they were supposed to care for. This resource has introduced the key points regarding autonomy and informed consent, as well as highlighting some of the problems and tensions that emerge within them.