About this Issue

Libertarians commonly take self-ownership to be an important idea: If anyone owns you, you do. This idea has many implications; one of them is that individuals are presumably the ones best suited to making decisions for their own health and welfare. In particular this includes decisions about which drugs to ingest and for what reasons. At least to hear libertarians tell it, any challenges to that presumption must meet a fairly high standard if they are to succeed.

Some decisions are not merely self-regarding, of course, but for us it is seldom sufficient merely to assert that an action has an other-regarding aspect, and that therefore it may - or must - be regulated or forbidden. Others are less persuaded by the idea of self-ownership. Perhaps they do not subscribe to it at all; or perhaps they find that the presumption it creates is rather easily rebutted. This disagreement in ethics has public policy implications that may touch on exactly who lives and who dies - and when, and of what. To discuss these matters we have invited a panel of four individuals with varying perspectives on the right to self-medicate: Jessica Flanigan of the University of Richmond, Craig Klugman of DePaul University, Alison Bateman-House of the New York University School of Medicine, and Christina Sandefur of the Goldwater Institute. Professor Flanigan will open the discussion, with each of the others responding over the coming week. We will then host a free-form discussion through the end of the month.

We also welcome readers’ comments through the end of the month.

Lead Essay

Respect Patients’ Choices to Self-Medicate

Physicians are required to respect their patients’ medical choices, even when patients make choices that would undermine their health or wellbeing. The doctrine of informed consent requires that physicians respect all competent patients’ decisions to refuse treatment and that they inform patients about all relevant treatment alternatives. This means that physicians are not permitted to deceive or coerce patients or perform any medical procedures without a patient’s consent. In most cases, public officials also respect people’s self-regarding choices about their bodies and health even if their choices are dangerous. For example, officials do not legally prohibit people from getting bad tattoos, drinking alcohol, refraining from exercise, free-solo mountain climbing, becoming obese, or working as commercial fishermen even though these choices are often imprudent, dangerous, or unhealthy.

Yet for intimate and personal bodily choices that involve pharmaceuticals, physicians and public officials prohibit patients from making self-regarding medical decisions about their own bodies. In Pharmaceutical Freedom (forthcoming) I argue that patients’ rights to make important and intimate medical decisions don’t lose their moral force when patients leave their doctors’ offices. Patients have rights of self-medication as well.

Rights of self-medication refer to the rights to purchase and use unapproved treatments, prohibited drugs, and pharmaceuticals without a prescription. Existing premarket approval requirements for new pharmaceuticals and prescription drug laws violate patients’ rights of self-medication. Instead, public officials and private healthcare providers should certify drugs and provide information about the risks, side effects, and benefits of using drugs. They should not act as gatekeepers for potentially beneficial therapeutics.

Physicians and officials adopted the doctrine of informed consent relatively recently. Patients haven’t always had the legal authority to choose or refuse a recommended course of treatment. Legal protections for patients’ rights to consent to medical care developed throughout the twentieth century after courts found that performing medical procedures against a patient’s wishes was a form of assault. Yet medical paternalism persisted as recently as the 1960s; for example, researchers in one study found that almost ninety percent of oncologists reported that they routinely withheld cancer diagnoses from patients, and some substituted alternative diagnoses. Today patients and physicians recognize that paternalistic deception and medical battery are morally unacceptable. The rejection of medical paternalism over the course of the twentieth century was an ethical triumph for the healthcare profession.

Unfortunately, as patients gained the authority to make medical decisions in clinical contexts, they lost the authority to make medical choices with respect to drugs. In the nineteenth century, people recognized that patients had rights of self-medication. Yet in response to a series of devastating drug disasters throughout the twentieth century, including the deaths caused by use of Elixir Sulfanilamide in the 1930s and Thalidomide in the 1960s, public officials implemented safety and efficacy testing requirements and prescription requirements for therapeutic drugs. Initially these regulations aimed to prevent adulteration, inform patients, and make self-medication safer and more effective. But existing pharmaceutical regulations prevent self-medication entirely in cases where a patient’s judgment about whether to use a drug departs from her physician’s judgment or a regulator’s.

For the same reasons that the rejection of medical paternalism in clinical contexts has been such an achievement over the past century, the increasing acceptance of paternalistic pharmaceutical regulations has been an injustice. The same moral considerations in favor of respecting patients’ rights to make treatment decisions weigh against the paternalistic pharmaceutical regulations that developed during the same time period. And existing policies that prohibit self-medication cannot be justified without undermining these justifications for informed consent.

Specifically, the doctrine of informed consent is justified by an appeal to three kinds of moral considerations. First, a practice of respecting medical autonomy likely promotes good health outcomes on balance by fostering greater trust between patients and health workers and protecting patients from abuse. Second, patients are generally in a better position to know whether a treatment decision is in their overall interest even if physicians are in the best position to know how a decision will affect a patient’s health. Third, and most importantly, people have bodily rights and rights to make intimate and personal decisions even if those choices are unhealthy or imprudent.

These three kinds of considerations (medical outcomes, overall wellbeing, and patients’ rights) are also reasons to support patients’ rights of self-medication. Consider first the claim that respecting patients’ choices is likely to have good health effects on balance even if it means that in some cases patients will make unhealthy choices. Similarly, even if in some cases patients would make unhealthy choices when exercising rights of self-medication, rescinding prohibitive pharmaceutical regulations might nevertheless have good health effects on balance. For example, existing approval requirements harm patients’ health in two ways—by deterring innovation and by delaying access to potentially beneficial therapeutics. Even though some unapproved drugs are genuinely dangerous, it is also dangerous to enforce policies that undermine drug development in light of the ways that pharmaceutical innovation has increased income and life expectancy over the last century. And while people may consent to use dangerous drugs, they cannot consent to policies that cause them to suffer or die while waiting for potentially beneficial treatment.

The public health case in favor of prescription requirements is also weaker than it may initially seem. Prescription requirements raise the price of obtaining treatment, which could prevent people from accessing necessary drugs like birth control or inhalers because it is prohibitively expensive to obtain a prescription.  Prescription requirements may cause people to take more medical risks than they would without a physician’s endorsement of their choices. For example, people may be more likely to use addictive opioids if the drugs were prescribed by a trusted physician. On the other hand, prescription requirements cannot effectively prevent people who are addicted to dangerous drugs from using drugs obtained through black markets or drug diversion. Even in these cases public health officials should focus on “smart legalization” paired with harm reduction initiatives and access to treatment for addiction instead of a prohibitive and paternalistic approach to drug use.

It is unlikely, though, that providing patients with unrestricted access to pharmaceuticals would have good health effects in all cases. After all, just as the doctrine of informed consent entitles patients to refuse life-saving treatment, rights of self-medication would give patients access to deadly drugs. But as with rights of informed consent, even if rights of self-medication did not promote people’s health in all cases, public officials and physicians should not focus narrowly on promoting health in the first place. Rather, physicians and officials should focus on the “whole patient,” not specific medical conditions, meaning patients’ overall wellbeing should take priority over medical outcomes. And since competent and informed adult patients are generally the experts about their overall wellbeing, physicians and officials should defer to them about their treatment. Prohibitive pharmaceutical regulations deprive patients of this deference and thereby prevent them from acting in their interests by forcing them to comply with regulators’ and physicians’ judgments about what they should do.

For example, when officials at regulatory agencies such as the Food and Drug Administration consider approval for a new drug, their decision is informed by a judgment of whether the risks and side effects associated with the drug are acceptable in light of the drug’s benefits. But whether a drug is acceptably risky is not a medical or a scientific judgment – it is a normative judgment that may vary from person to person. Even when officials are medical experts they are not experts about people’s values. This is why public officials cannot effectively determine whether the risks of a drug outweigh the benefits for an entire population, as all people have different values in addition to different medical conditions.

Similarly, physicians who prevent patients from using prescription drugs are poorly equipped to know whether a treatment is in a patient’s overall interest. Imagine a patient who decides that it is in his overall interest to use a prescription stimulant as a cognitive enhancement. The prescription drug system empowers his physician to override this judgment on the grounds that the drug is medically risky, even though the patient is in the best position to judge whether the risks are acceptable in light of the potential benefits. Furthermore, even people who are not interested in using drugs that regulators and physicians would recommend against may nevertheless have an interest in the freedom to do so, especially if they judge that existing policies are offensively paternalistic.

But most importantly, even if respecting a patient’s medical autonomy wouldn’t promote her overall or medical interests, it would still be wrong to prevent her from making decisions about her own health and body. It is disrespectful to use force or coercion to paternalistically prevent someone from making a self-regarding choice, especially when that choice is an intimate and personal one. Moreover, several rights that are widely acknowledged as especially urgent and foundational, in addition to rights of informed consent, support rights of self-medication in at least some cases. For example, patients with progressive or terminal illnesses have defensive rights to preserve their own lives, and approval policies that prevent them from trying potentially beneficial therapies outside the context of a clinical trial violate these rights.

In practice, rights of self-medication only require that patients have legal access to pharmaceuticals, meaning that public officials are not entitled to legally prohibit patients from using unapproved or unprescribed drugs by threatening patients or their providers with legal penalties. One may object that laws that prohibit people from providing unapproved or unprescribed drugs do not violate patients’ rights of self-medication. But we would not accept this conception of bodily rights in other contexts. For example, a law that banned people from providing abortions would restrict the right to terminate a pregnancy even if women seeking abortions were not threatened with legal penalties. Officials can violate a person’s right by prohibiting others from providing the necessary means to exercise her right. On the other hand, rights of self-medication do not entail a right to access any medication at low cost; they only include rights to self-medicate without government interference.

Some people may worry that they could be harmed by a policy that affirmed patients’ rights of self-medication, but if public officials respected that right, not much would need to change for people who support the current system. Public officials could still certify drugs as safe and effective, as could private certification services. Insurance companies could refuse to reimburse patients or providers for uncertified or unprescribed drugs in order to incentivize safer drug use. And patients who were uncertain about which drug to use could ask their physicians for advice.

To the extent that officials are worried that children or people who otherwise lack the capacity to make informed decisions may access dangerous pharmaceuticals, some drugs may legitimately be available “behind the counter,” which would enable pharmacists to assess a person’s competency before providing access. Such a system could also be used to track the distribution of deadly drugs and drugs that could be used in crimes. Addictive drugs may also be restricted to behind the counter access in order to enable addicts to voluntarily join registries that restrict their access.

On the other hand, respecting rights of self-medication would have some revisionary implications. People who currently lack effective treatment options and judge that a wider range of drugs could help them would benefit from having more options. People who disagree with their physicians about whether a treatment is promising would be permitted to take their health into their own hands and access drugs without a prescription. People would not be required to pay a physician and a pharmacist just to access birth control, insulin, or inhalers.

The foregoing argument for rights of self-medication also contributes to the growing chorus of arguments against the criminalization of recreational drugs in favor of a more respectful and effective harm-reduction approach. Rights of self-medication also include rights to use deadly drugs and enhancements. Though these kinds of drug use do not primarily serve a medical purpose, the arguments in favor of self-medication appeal to the idea that we should not focus too narrowly on medical uses and the health effects of drugs; instead officials should consider people’s overall wellbeing and bodily rights. 

In summary, self-medication is a basic right and should be treated like other intimate and personal bodily rights, such as the rights to make medical decisions that are protected by the doctrine of informed consent. It is easy to overlook the harm of existing pharmaceutical regulations because their harmful effects are less obvious than the vivid dangers of risky drug use. When people die because costly approval policies deterred innovation or because a promising therapy was awaiting approval, it appears as if they died from their diseases and not from a lack of access. Yet the harms associated with pharmaceutical regulation are actually more morally objectionable than the harms associated with risky pharmaceutical use because informed adults can consent to the risks of using unapproved or unprescribed drugs but no one consents to the dangers of drug regulation.

For most patients, rights of self-medication needn’t change how they make medical decisions. After all, rights of self-medication do not preclude patients from consulting with physicians or using only government-certified drugs. But if patients had rights of self-medication they would be free to make intimate and personal decisions about their bodies that reflected their values rather than the values of a physician or public official. The evolution of informed consent requirements throughout the twentieth century and recent patient advocacy movements on behalf of rights to use medical marijuana, rights to die, and rights to use unapproved therapies, demonstrate that reform is possible. Today officials and health workers recognize patients’ rights to make treatment decisions. Going forward, they should acknowledge that these rights include rights of self-medication too.

Response Essays

Don’t Let Medical Autonomy Become Abandonment

Bioethics has transformed from an interest of relatively few religious scholars and philosophers into a vibrant discipline with institutions, conferences, and presidential commissions in the aftermath of scandals involving human research. Whether it was the common use of American prisoners as research subjects, the German and Japanese experiments on prisoners during World War II, or the deprivation of available treatment to rural African American men in order to observe the natural history of untreated syphilis, the American public was disgusted at how scientists had used people as a means to gain data, often at the cost of those people’s own health.

In such a context, it is no wonder that the emerging discipline gave pride of place to the principle of autonomy, the idea that competent people ought to be able to make their own decisions. Hand in hand with this idea was the concept of informed consent: If you were going to use people as research subjects, it could only be after these subjects were able to make their own decisions. They must be competent, not decisionally impaired, whether temporarily or permanently; they must be adequately informed about the proposed research; and, free of any coercion or pressure, they must agree to be involved with it. But even in areas like human subjects research—where the emphasis on autonomy and informed consent was most pronounced—these were never the only principles to be considered. While ethicists disagree over a comprehensive list of principles, there is no disagreement that there are at least two others: justice and beneficence. Thus, even in human subjects research, there are times—for example, when a person shows up at an emergency room unconscious, with no identifiable decisionmaker and nothing that would indicate what his or her wishes would be with regard to experimental treatment—when physicians are obligated to do what they feel would be in the patient’s best interests. If the patient is bleeding heavily and seems the perfect candidate for an experimental blood clotting agent that is being tested in the emergency room, the physician may use it on the patient—thereby, without consent, transforming the patient into a research subject—if the physician believes this to be in the patient’s best interests. Once the patient is conscious, oriented, and able to make decisions, the physician must seek informed consent to continue using the experimental product on the person. But I have raised this case to show that even in the area of human subjects research, where autonomy is the paramount principle, there are instances in which it gives way to other principles, such as beneficence.

In Dr. Flanigan’s thought-provoking essay, she starts out by correctly noting that the doctrine of informed consent requires that physicians respect competent patients’ decisions, even if such decisions seem unwise. But this is not an unbounded duty that makes the physician a virtual captive to the wishes of his or her patient. The physician cannot break laws. If a patient asks her doctor to kill her, the doctor cannot do so (unless they are in a place that permits physician-assisted dying, and even then far more would be needed than just one request to one doctor). The physician cannot be made to do things that the physician and/or the physician’s institution believe to be unethical. In such cases, the physician is required to give the patient information about other doctors/institutions to which the patient may choose to be transferred. And in certain cases, the doctor’s treatment of the patient is dictated not by the patient’s wishes but by public health need: for example, a person with tuberculosis who is deemed unlikely to complete therapy as an outpatient may be held involuntarily until their disease has been treated, so as to keep others from catching the disease. In other words, autonomy, while often treated as the most important principle in bioethics, is not the only principle. In certain times and places, it can be outranked by other principles. Indeed, in some instances, we prioritize public health so much that we implement community-wide interventions that make autonomy moot. For example, we do not ask permission before fluoridating the water or banning trans fats in restaurant food or smoking in public places.[1]

So what happens when it comes to patients who wish to self-medicate, which Flanigan defines as purchasing and using unapproved treatments, prohibited drugs, and pharmaceuticals without a prescription? She argues that, according to the doctrine of informed consent, patients ought to have unfettered access to pharmaceuticals, so they can use them as they wish with no undue paternalism on the part of doctors, regulators, or any other actors. Rather, “public officials and private healthcare providers should certify drugs and provide information about the risks, side effects, and benefits of using drugs.”

I’m not certain what certifying drugs entails. Perhaps Flanigan envisions that the FDA has a role with respect to quality control, confirming that drugs contain what they claim to contain, in the proper dosage, without any unwanted adulterants? But I am frankly unable to foresee a day in which a reputable drug maker would sell products to a patient without a system of regulatory approval. Indeed, in the rare countries without a regulatory system of some sort (the South Sudans of the world), multinational drug companies do not operate.

Setting aside the fact that pharmaceutical companies like regulations that are stable and provide assurances of the rules of operation, what about individual healthcare providers? Could we do away with prescriptions and allow patients to purchase whatever they want over the counter in a pharmacy, guided by the advice and recommendations of their doctors and pharmacists? As someone who suffers allergies, likes “real Sudafed,” and hates having to purchase it from behind the counter, I am sympathetic to this idea. But consumer abuse of Sudafed when it was available over the counter (buying it to use in the manufacture of methamphetamine) is the reason why the product was moved behind the counter and onerous quantity restrictions were put in place. In the name of public safety, and in recognition of the fact that over-the-counter Sudafed was, through no fault of its own, implicated in a public health epidemic, state authorities created a system that helps prevent misuse of the product, at the cost of annoying me and my fellow allergy sufferers. Likewise, recognizing that even requiring prescriptions for opioids was insufficient to prevent abuse and a public health epidemic, states implemented other measures – like registries that track how many opioid prescriptions a person fills – in the name of public health.

In public health, there is the idea of “the least restrictive alternative.” We don’t confine a person with TB to a hospital until all less onerous options have been tried and have failed. So perhaps with medicines the idea should be that drugs are sold without a prescription until the evidence makes it clear that something else, like a prescription, is needed to prevent abuse? We do need to be clear, however, that we are not solely concerned about individuals misusing a drug and harming themselves: we are also concerned about individuals misusing drugs, such as antibiotics, in ways that harm everyone, for example by encouraging the development of antibiotic resistance.

In a scenario where drugs can be sold without a prescription, I would foresee a rise in the number of accidents in which patients unknowingly combine medicines that ought not to be combined. Perhaps this could be forestalled by patients informing their physicians of absolutely every product they take, the physician having a very good grasp on drug interactions, and the pharmacist being consulted for an additional layer of assurance. Or we could just use prescriptions, where (provided you use the same pharmacy vendor) the pharmacist has a current record of what other products you may be using (based on your purchase history) and can warn you of possible drug interactions. I do not see this as a tyrannical exercise of state authority over individual liberty but rather as a way to aid the patient in understanding the risks of the drugs they have been prescribed and in making an informed decision about whether to discontinue one so as to take the other, and so on.

Our healthcare system is capable of great miracles: helping infertile people to conceive children, separating conjoined twins, and allowing individuals with HIV to live a near normal lifespan while never progressing to AIDS, just to name a few. But our system also has systematic failings, including physicians seeing patients at such a rapid clip that conversations are often stunted and “informed consent” is often just a shadow of what it should be. When a doctor says, “I think we need to change your antidepressant, so I’m going to write a prescription for Cymbalta, 2 tablets a day,” that is not informed consent. The patient is not told what the relative risks of the new versus the old medication are; what the benefits of the switch would be; or what other options might be considered. The patient is certainly not told how much the new drug is going to cost, either to their insurer or out of pocket, and there is no discussion of the fact that the patient will need to take the drug a while before noticing any effects or that the patient, once on the drug, will need to wean off of it in order to avoid withdrawal. To say that “public officials and private healthcare providers should … provide information about the risks, side effects, and benefits of using drugs” so that patients can exercise their right to self-medicate is to ignore structural constraints that make sufficient information exchange in the healthcare setting unlikely. Perhaps much of the discussion and counseling could be delegated to the pharmacist, but if drugs are available over the counter, with no prescription, patients would likely come to the pharmacy, purchase their desired drugs, and leave, without speaking to a pharmacist. If you create a system in which the patient must speak to the pharmacist at the point of sale (in order to be appropriately educated), how is this any less onerous than requiring a prescription?

Patients come in all forms. Thus, while the majority of them would need counseling and information to make appropriate decisions regarding the purchase and use of drugs, there are some patients on the other end of the scale. They are familiar with drugs, dosages, side effects and benefits, and alternatives, perhaps because they work with these products or for some other reason. In the case of these informed patients, it may be that they do not need physicians and pharmacists and regulators in order to make their decisions about drug use. (I would argue that they still need regulators to ensure products are what they claim to be and are not adulterated, as that knowledge is something that can be gleaned only by testing.) But the vast majority of patients, who do not have this expert knowledge, who are unwilling or unable to decipher the size 2 font on the literature that comes with their drugs, or who are overwhelmed with dealing with their illness or even just the day-to-day demands of life, benefit by having this system in place to guide and to protect them. Recent reporting by Kaiser Health News that found “huge numbers of cancer patients lack basic information, such as how long they can expect to live, whether their condition is curable or why they’re being prescribed chemotherapy or radiation,” underscores my concern that many patients are, for a variety of reasons, not informed enough about their choices and their attendant risks and benefits to be capable of deciding which medicine to try in an unregulated marketplace.[2]

Dr. Flanigan correctly notes that risk versus benefit judgments may vary from individual to individual. But this does not stop us from making policy for populations on all manner of things, ranging from speed limits to what drugs ought to be approved to what building materials may be used in housing construction. Some individuals may find these population-based policy decisions too restrictive; others may find them unsettlingly unrestrictive. We aim for a happy medium. With regard to drug regulation, the Food and Drug Administration and its peers around the world are doing so by soliciting input from patients in the drug approval process. Rather than making a decision based solely on the data collected in clinical trials, the regulators are asking patient groups to tell them what is most important to them and what trade-offs, with regard to safety or efficacy, they are willing to accept in order to get a product that addresses these issues of concern. This does not allow every individual patient to make his or her own risk/benefit calculation, but it does allow for representative members of a patient group to make clear their preferences and concerns.

This new approach was first implemented in the 2015 approval of the Maestro Rechargeable System weight loss treatment. This device, which is surgically implanted in the patient, uses intermittent electrical pulses to make the patient’s brain believe that his or her stomach feels full. In a clinical trial, patients who received the system and had it activated lost more weight than patients who had the system implanted but not activated; however, the patients with the system turned on did not lose as much weight as the device developer had predicted they would.  Because the study did not meet its pre-defined endpoint, the study was considered to show that the device failed. However, the FDA conducted a study of obese patients who would be eligible to use this device and found they were willing to accept the system’s risks in exchange for the amount of weight loss that it provided. Using this patient input, the FDA approved the Maestro Rechargeable System, despite the failure of the trial on which approval was supposed to hinge.

Between such efforts to make sure that patients’ risk versus benefit tolerances are taken into account and my pessimism that our healthcare system can ever be reformed to the point where all patients, regardless of health status, education, language, or location of treatment, can be fully educated to the point where they can confidently and appropriately navigate the wide variety of pharmaceuticals available for purchase, I support existing pharmaceutical regulations. I do so because I believe they are the least restrictive alternative that serves both the individual needs of many patients and public health needs. I believe this will remain the case unless we engage in a wholesale restructuring of the doctor/patient/pharmacist encounter, as I’ve discussed above. I also think that relying upon the notion of patient autonomy when you cannot adequately ensure the “informed” part of informed consent is an abandonment of the patient.

Taking into consideration Dr. Flanigan’s concern about patient self-determination, I suggest that she tackle a current issue where I might be swayed in favor of autonomy: the issue of drug advertisements. At this time, only the United States and New Zealand permit prescription drugs to be advertised to patients, and manufacturers claim that such advertising is done in the name of educating patients, particularly those who are less likely to see doctors regularly. However, we know modern advertising techniques are geared toward generating desire for a product. How best, then, ought we strive to allow patients maximal awareness of pharmaceutical products that may be of benefit to them without needlessly inflating drug consumption and its attendant harms, both in terms of costs and risks to patients?

 

Notes
 


[1] Thus, if you wish to drink nonfluoridated water you must either move or procure your own source of water; if you wish to consume food made with trans fats, you must go to a restaurant in a jurisdiction without a trans fat ban; and if you wish to smoke in public you must either risk penalty or go someplace else. I’m not saying you can’t do what you want, only that the environment has been intentionally structured, via law, zoning, and the like, to make the undesired activity more difficult to do.

[2] Liz Szabo, “‘How long have I got?’: Why many cancer patients don’t have answers,” USA Today, June 9, 2017, https://www.usatoday.com/story/news/2017/06/09/kaiser-how-long-have-got…

 

Often, Doctor Knows Best

Jessica Flanigan proposes to permit individuals to purchase any drug directly from the pharmaceutical manufacturer without gatekeeping by physicians, pharmacists, or government (i.e., FDA) regulation. She posits that individuals are rational creatures, and with sufficient information, they can make their own informed choices. I hold that such a position is nearsighted and dangerous. The current dual system of pharmaceuticals—over-the-counter (OTC) and prescription—recognizes that some drugs create harms not just to the individual but also to others, which necessitates a paternalistic approach.

According to the World Self-Medication Industry, self-medication is “the treatment of common health problems with medicines especially designed and labeled for use without medical supervision and approved as safe and effective for such use.” In the nineteenth century, self-medication was the only way that one could find medications in the United States. Many of these were peddled by snake oil salesmen and commonly contained opiate derivatives and marijuana. Even prescribed elixirs led to addiction and some overdose deaths. Few of these medications were effective at treating ailments. In 1906, Congress passed the Federal Food and Drugs Act, which established the federal government’s obligation to ensure that drugs were unadulterated and accurately labeled.

In 1968, Bruce Brennan, an FDA attorney, argued that self-medication “has grown from the demand in our society that the individual be able to determine for himself what he wishes to do in managing subjective manifestation of physical disorders.” He further said that a right to self-medicate was based on the idea that “the individual should fight his own battles.” Of course, no right is without limit when its exercise can harm others. By 1978, an article written by pharmacists suggested that there are some medications about which people need guidance or which are subject to abuse. In all of philosophy, law, and history, there is no right to self-medicate. Nor is such a right mentioned among the inalienable rights laid out in the U.S. Declaration of Independence or the Constitution.

The United States already has a system of self-medication for certain drugs. For example, I used to need a prescription from my physician to purchase an allergy pill or an acid reflux drug. After years of widespread use, these drugs proved themselves to be safe and effective, eliminating the need for a gatekeeper. Drugs that achieve over-the-counter status have a track record of not interacting with many common medications, of not having problematic side effects, and of being easily taken by an untrained person.

For other drugs, we have paternalism—under which someone who knows more than you acts to keep you safe. Consider that I do not wake up in the morning and test my water for lead and arsenic before brushing my teeth. I do not test my air for sulfur and ozone. I do not pave my own road to get to work. I do not have to test the produce I buy for the presence of bacteria or fungi. To be frank, I would not know how—I lack the knowledge, the skills, and the means. However, there are experts upon whom I rely to make sure that all of these things are safe and available. As Brennan said, “In our free society, government is established to do only those things that individuals cannot do alone.” To do any of this safety surveillance on my own would require expertise in chemistry and biology, having a laboratory in my home, knowing the appropriate tests and how to read them, and having the hours necessary to undertake these tasks. Realistically, we must rely on others who have the expertise and means to make sure that publicly available goods are safe and effective.

Flanigan’s proposal relies on the notion of informed consent. There are two components to informed consent: 1) informational and 2) volitional. In the first, I need to know the risks, benefits, and alternatives. In the second, a competent, capacitated, rational decisionmaker, free of coercion or undue influence, makes a decision. The standard here is not simply that knowledge is passed to me in a document or a Google search, but rather that I have an understanding of it—not just the ability to recite a list of side effects but knowledge of their effects on physiology and lived experience. Without standardized, rigorous clinical testing, no one can have knowledge of the risks or benefits. A new drug must be tested on thousands of people (phase three testing) to know this information. I could not conceivably test each potential drug on my own subject pool of a thousand people, nor should I trust a wiki written by a bunch of lay people self-experimenting with a compound. Given the complaints by pharmaceutical companies about the expense of clinical testing, pharma manufacturers are unlikely to undertake the same rigorous testing the FDA now requires if it were made optional.

There are many dangers to full self-medication. Studies show that choosing drugs and dosages requires knowledge of dosing, timing, and side effects which untrained people do not have. Among the dangers are incorrect self-diagnosis, incorrect choice of therapy that harms one’s health, not knowing the dangers of mixing drugs with ones already being taken, lack of knowledge of contraindications, not knowing that the same substance can be sold under different names (increasing the risk of overdosing), not recognizing adverse reactions, not administering the drug correctly, using it for too long or for too short a period of time, developing dependence, and not knowing that chemicals in food can affect the actions of a drug. In addition, a person may not know the source or quality of the medication: a problem that has been raised in other studies. Still others may be susceptible to becoming dependent and addicted, especially if they have a mood disorder or existing addiction which they may not recognize.

The second component of informed consent is that your choices are voluntary. Marketing harks back to the days of selling snake oil, when we were encouraged to take the word of the manufacturer absent proof. Studies show that physicians are strongly influenced by pharmaceutical marketing: If a drug rep visits a physician to talk about a company’s drug, that physician will prescribe the company’s drugs at higher levels for several months. If physicians who are highly trained in the science and clinical use of medication are so influenced by advertising, how will someone without the benefit of at least seven years of post-college education be able to escape the coercive effect of advertising?

We must also consider that Flanigan’s perspective requires a rational decisionmaker. Although we like to think that we are rational, most of us are not, and even fewer patients are, which is one reason why free market approaches to health care have failed. A sick person needs prompt help, not an opportunity to consider options, costs, benefits, and risks. Rationality is compromised by the influence of others, and we are often unaware of it. While most doctors are influenced by marketing, when asked, doctors say that other physicians are influenced, but that they personally are not. Lack of awareness of the effect that others have on us is dangerous. The gatekeeper helps to mitigate this effect.

One can argue that by self-medicating, a person agrees to undertake all of these potential problems and accepts the consequences. However, as even the libertarian John Stuart Mill argued, when one’s actions pose a potential harm to others, the government is obligated to step in. If one becomes addicted to a drug, as was common in the 1800s, that action could cause harm to one’s family and co-workers. The price might be paid by others (e.g. drugged drivers killing others in a crash). Taking something as seemingly innocuous as antibiotics can have a strong negative effect on others: Not completing a full course of treatment because you feel better, or treating yourself with the wrong drugs, leads to antimicrobial resistance, meaning that such drugs can be ineffective against future infections and endanger others’ lives.

The reason why people choose self-medication is cost, not lack of access to medical care or convenience. This finding suggests the solution might be cost-control rather than removing patient protection. While liberty is a good thing, all rights and freedoms have their limits—usually when their practice infringes upon other people. For the moment, our society is best served by our dual system of pharmaceutical distribution: drugs with long proven records of safety and ease-of-use are sold over the counter, and all others are available through our partners in health—physicians and pharmacists.

 

References

Bennadi, Darshana. 2013. Self-medication: A current challenge. J Basic Clin Pharm 5 (1):19-23. doi:10.4103/0976-0105.128253.

Brennan, Bruce J. 1968. The Right to Self-Medication-A Continuing Conflict Between Congressional and Agency Policy. Food Drug Cosm LJ 23:487-.

Chimonas, Susan, Nicholas J. DeVito, and David J. Rothman. 2017. Bringing Transparency to Medicine: Exploring Physicians’ Views and Experiences of the Sunshine Act. American Journal of Bioethics 17 (6):4-18. doi:10.1080/15265161.2017.1313334.

Collier, Roger. 2009. Rapidly rising clinical trial costs worry researchers. CMAJ 180 (3):277-278.

FDA. 2009. Federal Food and Drugs Act of 1906 Public Law 59-384. 34 STAT. 768. Washington DC: Food and Drug Administration. https://www.fda.gov/Regulatoryinformation/LawsEnforcedbyFDA/ucm148690.htm. Accessed July 7, 2017.

Iserson, K. V., R. J. Cerfolio, and R. M. Sade. 2007. Politely refuse the pen and note pad: gifts from industry to physicians harm patients. Ann Thorac Surg 84 (4):1077-1084. doi:10.1016/j.athoracsur.2007.06.032.

Lazareck, Samuel, Jennifer Robinson, Rosa M. Crum, Ramin Mojtabai, Jitender Sareen, and James M. Bolton. 2012. A Longitudinal Investigation of the Role of Self-Medication in the Development of Comorbid Mood and Drug Use Disorders. J Clin Psychiatry 73 (5):e588-593. doi:10.4088/JCP.11n07345.

McGuire, Ryan. 2011. Per Patient Clinical Trial Costs Rise 70% in Three Years. Cutting Edge Information. https://www.cuttingedgeinfo.com/2011/per-patient-clinical-trial-costs/. Accessed July 10, 2017.

Mill, John Stuart. 2002. On Liberty. Mineola, New York: Dover Publications.

Schafheutle, E. I., J. A. Cantrill, M. Nicholson, and P. R. Noyce. 1996. Insights into the Choice Between Self-Medication and a Doctor’s Prescription: A Study of Hay Fever Sufferers. International Journal of Pharmacy Practice 4 (3):156-161. doi:10.1111/j.2042-7174.1996.tb00859.x.

Weil, Gilbert H. 1965. The Natural Right of Self-Medication. Annals of the New York Academy of Sciences 120 (July):985-989. doi:10.1111/j.1749-6632.1965.tb56737.x.

World Health Organization. 2000. General Policy Issues: The benefits and risks of self-medication. WHO Drug Information 14 (1):1-2.

WSMI. n.d. What is self-medication? Nyon, Switzerland: World Self-Medication Industry. http://www.wsmi.org/about-self-care-and-self-medication/what-is-self-medication/. Accessed July 7 2017.

 

Tear Up the Permission Slip: Medical Autonomy Is a Fundamental Right

Thousands of people suffer and die in the United States every year while treatments that could help them sit on a shelf. That’s the outcome of a slow and expensive bureaucratic process by which the Food and Drug Administration (FDA) evaluates potentially lifesaving medicines and treatments. That process takes an average of fourteen years and $1.4 billion for every drug.

Arguing in favor of a patient’s right to self-medicate, Dr. Jessica Flanigan points out that under our current system, “people may consent to use dangerous drugs,” but “cannot consent to policies that cause them to suffer or die while waiting for potentially beneficial treatment.” But it’s worse than that. Our system is not just cumbersome and costly; it’s fundamentally unjust.

That system allows people to take dangerous medicines (cancer-treating chemotherapy is essentially poison, and even acetaminophen kills more than 400 people per year), or to end their lives with a physician’s help. But it forbids them from trying medicines that might cure them or alleviate their suffering.

And while the government allows healthy volunteers to be paid to test medicines that could be fatal to them, it prohibits dying patients from trying investigational medicines that the FDA has deemed safe enough for human testing but that may have unknown side effects. But even that isn’t always the case: under the FDA’s “compassionate use” program, a lucky few—very few—can be granted access to medicines that are still going through clinical trials. And, of course, a patient who can afford it can always fly to Europe or Asia to obtain medicines that are available there, but are still barred from the market by our own FDA.

In other words, our system does not entirely prohibit patients from taking medications with unknown or harmful properties. Instead, it permits exceptions on a limited and arbitrary basis rather than allowing access on a single, principled foundation. For example, the FDA has made exceptions to its own rules prohibiting access to unapproved treatments. The results of such arbitrariness are horrifying experiences such as that of Jenn McNary, whose sons Austin and Max were diagnosed with Duchenne muscular dystrophy—an incurable, fatal, degenerative muscle disorder. Jenn tried to enroll both boys in a clinical trial for a promising treatment, but Austin’s disease had progressed too far for him to qualify. Jenn was forced to watch while one son’s condition improved significantly under treatment, and her other son’s condition worsened until he could no longer dress or use the restroom without help. Thirteen-year-old Max became sixteen-year-old Austin’s caregiver.

There are plenty of other, similar stories, all illustrating the same theme: when government makes the decisions instead of patients themselves, not only does that violate a cornerstone of medical ethics—patient autonomy—but it also leads to arbitrary and unjustifiable outcomes. A freer system, which would not force patients to get a government permission slip, would be both more just and more equal: it would extend to everyone the same permission to use treatments that is already enjoyed by those fortunate enough to get a special exception, or who can afford to travel outside the United States for treatment. A freer system would treat patients as individuals deserving of respect, rather than as subjects in a science experiment, which is what the current system does.

This wasn’t always the case. As Dr. Flanigan explains, federal drug regulations focused at their inception on ensuring that products marketed to the public were safe and correctly labeled, so that patients had truthful information to make informed decisions for themselves. The law did not require manufacturers to submit information to the FDA before marketing. But federal law gradually shifted from a focus on empowering patients, to a paternalistic approach—one that is often preoccupied with erecting roadblocks. This reached fruition in 1962, when the law required manufacturers to “provide substantial evidence of effectiveness for the product’s intended use.” Thus, although the FDA is not authorized to regulate medical practice at all, its prohibition on medical access imposes de facto regulation on doctors nationwide.

Dr. Flanigan may even understate the extent of paternalism in the United States. Federal regulations not only block patients from making treatment choices; they have taken away the tools patients and doctors need to make informed choices. Consider the problem of off-label treatment, the term for using medicines to treat conditions other than what the FDA approved that medicine to treat. Off-label treatment is legal—Medicaid will even pay for it—but the government routinely censors the communication of valuable and truthful information about off-label uses that could help doctors and patients make informed decisions. Federal law strictly limits how pharmaceutical companies—which know the most about their drugs—can share information about legal off-label uses of their products.

That censorship doesn’t just hurt patients; it violates the constitutional right to free speech. Yet companies are subject to criminal penalties for communicating to doctors valuable and truthful information about lawful off-label uses for approved treatments. As the Supreme Court put it forty years ago, “information is not in itself harmful … people will perceive their own best interests if only they are well enough informed, and … the best means to that end is to open the channels of communication rather than to close them.”

But rather than ensuring that physicians and patients know what they need to know to make their own decisions, the government has become the decisionmaker, limiting what people can say, restricting what doctors can prescribe, and dictating what patients can take—all because it thinks it knows what’s best for us. Except, of course, for those who can afford to escape from the regulatory straitjacket, or who get a special exception somehow. That’s unjust and unethical—and often unconstitutional.

Defenders of the status quo often seem to assume that patients can’t be trusted to make their own choices because they don’t know all the information necessary to make the best choices. But that argument endorses an old, long-exploded fallacy: that government officials will have better knowledge, or will make better choices than patients will, because they aren’t self-interested or biased in other ways. But the regulatory system is not risk free—it’s just better at hiding the risks or passing the buck. No system will ensure against all risks. The real question is, who should ultimately decide what level of risk is “acceptable” to a patient—government officials, or patients themselves? True, people will sometimes make uninformed choices, or choices that strike others as foolish or misguided. But that’s already the case today—after all, herbal supplements, magnetic bracelets, and other quack devices are sold freely without FDA oversight. And there are documented cases of the FDA restricting or permitting access to medicine based on politics, not on science.

Respecting the right of patients to self-medicate is not an anti-science position, although some have tried to suggest that. Rather, as Dr. Flanigan says, the question of acceptable risk “is not a medical or a scientific judgment—it is a normative judgment that may vary from person to person.” It’s true that patients may make bad choices—that’s true whether or not the treatments are FDA approved. But one thing we know for sure doesn’t work is a bureaucracy that deprives people, especially the terminally ill, of the freedom to decide for themselves. The right to live as we see fit is what makes life more than mere existence.

Medical paternalism is wrong because it treats fully independent individuals as children who should be protected by the state from making the “wrong” decisions. But free people must be free to make bad decisions—and to enjoy the rewards or suffer the consequences. Otherwise, they aren’t free. Indeed, one of the most controversial elements of the Affordable Care Act—the Independent Payment Advisory Board—garners bipartisan opposition because it empowers government officials to make decisions about healthcare not on the basis of a patient’s needs, but based on bureaucratic spending limits and one-size-fits-all political decrees. A system that puts bureaucrats in charge of what medical care people should get will undoubtedly deprive patients of access to needed care, increasing the politicization of medicine at the expense of individual decisions.

When he ran for president, Barry Goldwater said, “We do not seek to lead anyone’s life for him. We seek only to secure his rights and to guarantee him opportunity to strive.” The right to medical autonomy is the right to hope—the opportunity to strive—the freedom to choose what chances to take and how to answer life’s hardest questions.

That’s why the Goldwater Institute developed the Right to Try initiative. Now law in 37 states, Right to Try statutes protect the right of terminally ill patients to access medicine that has received basic safety approval from the FDA—and that is being given to patients in ongoing clinical trials—but that has not yet received final approval for sale. The success of Right to Try laws, still in their infancy, is already corroborating Dr. Flanigan’s statement that scaling back prohibitions can have good health effects. When FDA officials told Houston doctor Ebrahim Delpassand that he could no longer treat cancer patients with a medicine that had already completed three rounds of FDA testing and had been available in Europe for years, he turned to Texas’s Right to Try law. So far, he’s treated some 100 patients—many of whom were told they had only months to live, but are still alive a year later.

Dr. Flanigan’s proposal to eliminate the government permission slip for medications might sound radical, but if patient autonomy is to be respected and lives saved, then much more is required than merely reforming our broken system. We must challenge its very foundations. Why should government have the power to make life-or-death decisions for individuals? After all, the life—the joy and the suffering—belongs to the patient, nobody else. Medical reform must not focus on finding different ways to beg for government permission. It should focus on tearing up the permission slip.

The Conversation

Why Autonomy Remains the Most Important Value

I am grateful for the opportunity to respond to Alison Bateman-House’s thoughtful and insightful response essay. Bateman-House argues that while autonomy is perhaps the most important principle of medical ethics, it is not the only important principle we ought to consider. Other values, such as beneficence or justice, can outweigh the value of autonomy in some cases, and in light of these other values public officials can justify some limits on rights of self-medication. In this response essay, I agree with Bateman-House’s argument that it can be important to consider other values, but I disagree that those values outweigh autonomy. In other words, autonomy sets limits on the extent that public officials can permissibly promote other values.

Bateman-House discusses four kinds of choices where other values weigh against the importance of autonomy. These include patient choices in clinical and research contexts, citizens’ choices in community and public health contexts, physicians’ choices, and manufacturers’ choices throughout the development and marketing process. In all four domains, Bateman-House suggests that public officials may permissibly limit people’s choices either paternalistically or to prevent them from violating the rights of others. I agree that officials may permissibly limit people’s choices to prevent them from violating other people’s rights, but I disagree about paternalism. On my view, it is not permissible for public officials to paternalistically coerce competent people in any of these contexts.

Consider first Bateman-House’s discussion of patient choices in clinical contexts. She argues that in these circumstances, justice and beneficence matter too. But even though this is true, the principle of autonomy still limits the extent to which physicians can permissibly promote justice or beneficence. For example, physicians and public officials may not compel people to donate their bone marrow to sick relatives, even if doing so would save their relatives’ lives. And even the most consequentialist health system would not allow researchers to forcibly draw blood from a person who was immune to a contagious disease in order to develop a cure. Nor do any places currently enforce a kidney tax for the benefit of people in need of transplants. This is true even if the violation of bodily rights would be very minor compared to the number of lives these policies would save.

This view is not necessarily committed to the claim that people are never liable to be interfered with for beneficent ends. It is committed to the claim that when decisions involve people’s bodies, patients do not forfeit their bodily autonomy simply because another person could benefit. This is the underlying moral principle that justifies other rights that are central to a liberal approach to medicine, such as the right to choose abortion or to refuse to donate one’s organs to a patient in need, even if it could save several lives.

Bateman-House worries that respect for patient autonomy outside of clinical and research contexts would be especially dangerous, though, citing concerns about drug interactions and accidental poisonings. This is an empirical conjecture, but even if it were true, it would not justify prohibition as a first line of response to the problem of patient ignorance. Even when it is difficult for a person to understand the nature of a choice, such as dietary or financial decisions, it is better if officials equip people with the information they need to understand their choices rather than making choices for them. There are four reasons for this. First, people may be ignorant about particular aspects of complicated medical, dietary, or financial decisions, but they are experts about their own values and capacities. And while officials can communicate general information about medicine, diet, and finances to people in ways that inform their decisions, it is less feasible for each person to communicate particular information about her values to officials in ways that enable officials to promote people’s interests.

Second, the fact that people are deprived of the authority to make decisions may in part explain why they seem to lack the capacity to do so. Without any incentive to educate themselves about medicine and their medical options, patients may rationally respond by remaining ignorant of the relevant facts because they are unlikely to have the opportunity to act in light of them anyhow. Third, if patients are not capable of understanding complex information, there is nothing about rights of self-medication that prevents them from consulting with experts, such as pharmacists or physicians, and deferring to their advice as they currently do. Fourth, public officials who are concerned that patients may accidentally poison themselves should reconsider existing policies that prevent patients from accessing and monitoring their medical records and managing their own health choices. And it’s not clear that pharmacists screen for drug interactions better than patients would if they were given the same resources for detecting potentially dangerous interactions. 

Bateman-House also argues that public health officials may permissibly limit citizens’ choices in public health contexts in order to promote the community’s health on balance. Bateman-House provides three examples of permissible public health interventions: quarantine, water fluoridation, and trans fat bans. She also argues more generally that population health policy requires officials to weigh risks and benefits for an entire population.

I agree with Bateman-House that it can be permissible to quarantine patients when it is necessary to prevent contagious transmission and when it is the least restrictive alternative. This is because contagious transmission of an illness violates other people’s bodily rights. It is for this reason that I also think public officials may permissibly limit access to antibiotics when it is necessary to prevent the contagious transmission of antibiotic-resistant bacteria or require vaccination in some cases. Water fluoridation is also not objectionably paternalistic. To the extent that water is a public service, people are not entitled to determine the nature of public benefits that do not violate people’s rights. Even though some libertarians object to taxation, which finances the public provision of water, it would not be a further injustice that the water has fluoride in it.

On the other hand, trans fat bans do violate people’s rights (depending on how they are enforced). When public officials make it illegal to manufacture or sell foods with trans fats, they coercively prevent people from buying and selling certain foods not because doing so is harmful to others but in order to benefit those who are being coerced. In some ways public health paternalism is even worse than paternalism in clinical contexts because the officials who act paternalistically coerce a large and heterogeneous group of people they’ve never met, and are therefore even more likely to fail to promote people’s overall wellbeing. Officials are also influenced by electoral and fiscal incentives that may prevent them from acting in people’s interests.

It is for this reason that, as I have argued elsewhere, paternalistic public health policies such as seatbelt mandates and the prohibition of recreational drugs (including methamphetamine) are misguided. In contrast to speed limits or laws that prohibit pollution, which prevent people from violating each other’s rights by prohibiting activities that carry a high risk of harm, public health paternalism prevents people from making choices that are within their rights but might cause them to harm themselves. These paternalistic policies are disrespectful not only because they violate people’s rights and are likely to fail to promote people’s wellbeing, but because they fail to treat citizens as equals and instead express the offensively condescending view that public officials are in a better position to decide for citizens than the citizens themselves.

Bateman-House also considers the value of physicians’ autonomy. She notes that physicians are not morally required to facilitate all of their patients’ medical choices. For example, physicians are not morally required to break the law for a patient who requests voluntary euthanasia. I agree with this claim, but I also think that physicians are not morally required to comply with unjust laws, including the law that prevents patients from using deadly medicines. But whatever one thinks about physicians’ legal responsibilities, rights of self-medication would reduce the number of circumstances in which physicians face these dilemmas by shifting decisional authority to patients.

Rights of self-medication would also go some way toward addressing the scarcity of physicians’ time, a concern that Bateman-House raises regarding the availability of expert advice, because without a system of prescription requirements patients would not be legally forced to consult with physicians in order to access drugs. If all patients had rights of self-medication, physicians could focus on treating patients who were genuinely interested in their care and advice.

On the topic of physician autonomy, Bateman-House also addresses physicians’ right to refuse to provide care to patients when it violates their conscience. I also think physicians have this right, and that it illustrates the importance of autonomy more generally. Why should doctors have the freedom to refuse to participate in a treatment choice that is inconsistent with their values but patients do not have the freedom to choose the treatments that most reflect their values?

Let’s turn to Bateman-House’s discussion of pharmaceutical development and marketing. She claims that manufacturers support the current system and appreciate its predictability and reliability. But a system of self-medication could also be predictable and reliable for manufacturers. Manufacturers may favor the current system because removing existing barriers to entry in the pharmaceutical marketplace could make the industry more competitive, but manufacturers are not entitled to be protected from competition by the current system. Bateman-House also notes that there are other ways, short of deregulation, to include patients’ perspectives in drug development, citing the Maestro Rechargeable System weight loss treatment trial. Efforts at including patients’ voices are commendable. But the reasons that a principle like “nothing about us without us” is morally praiseworthy also tell in favor of rights of self-medication. After all, self-medication would enable even more deference to patients’ perspectives and could reflect people’s judgments about risk and benefit even better than merely including patients’ voices in the current system would.

To close, Bateman-House raises a question about pharmaceutical marketing, noting that only the United States and New Zealand allow direct-to-consumer marketing of prescription drugs. Policies that prohibit direct-to-consumer pharmaceutical marketing are not only paternalistic, they are a form of censorship. Freedom of speech is justified not only because speakers have an interest and a right to express their views, but because listeners have interests and rights to hear them. Censorship prevents people from accessing information, but censors are not well placed to know in advance which information is harmful or useful to people. To the extent that using a prescribed drug is currently lawful conduct, laws that prohibit marketing for prescription drugs amount to content-based restrictions on speech that advocates for lawful conduct. It is for this reason that officials should not only allow direct-to-consumer pharmaceutical advertising, but off-label marketing as well.

Bateman-House is right to point out that autonomy is not the only value that matters, but it is the value that matters the most. And one feature of emphasizing respect for autonomy over all other values is that doing so enables people to pursue whatever other values they think are important as long as they respect each other’s entitlement to do the same. But even if one doesn’t value autonomy as highly as I do, as I argued in this and the previous essay, there are also consequentialist reasons to support rights of self-medication. Like other paternalistic policies that aim to balance autonomy against beneficence and wellbeing, paternalistic pharmaceutical policies are potentially dangerous and likely to fail. It is far less morally risky for officials to respect citizens’ authority to make decisions about their own bodies that reflect their own values.  

Autonomy without Warrant; Information without Guidance

In her original commentary and response to Dr. Bateman-House, Dr. Flanigan argues that autonomy should outweigh all of the other principles of medical ethics. However, she offers no philosophical or empirical reasons why. Autonomy is actually a recent construction. The concept finds its origins in the ideas of Immanuel Kant (1724-1804), who defined the term as the ability to rationally deliberate and choose to follow the moral law. Autonomy, thus, is a characteristic but not a defense or a reason. The way Dr. Flanigan uses the term comes from the 1979 book Principles of Biomedical Ethics by Tom Beauchamp and James Childress. These authors did not privilege autonomy and in fact stated that all four principles (autonomy, beneficence, nonmaleficence, and justice) had to be balanced before making a moral choice. In the subsequent editions of this book, the authors warned against privileging autonomy.

We should also remember that these four principles are simply “guidelines to moral deliberation” that categorize the considerations one should weigh when making a choice. The principles take elements of philosophy and practicality but by themselves have no force of philosophy, morality, ethics, or law. Dr. Flanigan’s insistence that only the guiding concept of autonomy should be used in decisionmaking contradicts her acceptance of seat belt laws and other cases where harm to others could curtail its application. If one privileges autonomy, then harm to others would not be a factor. In practice, then, she does accept a balancing of principles (despite her statements otherwise), privileging autonomy only when it is convenient.

Dr. Flanigan’s statement also contains some factual errors. Although she claims that drugs are responsible for the increase in average life expectancy, such a claim is incorrect. The largest increase in human lifespan occurred in the late 1940s and was a result of paternalistic public health sanitation efforts (dealing with sewage) and better nutrition. The U.S. Centers for Disease Control attributes the increase in lifespan to motor vehicle safety, workplace safety, safer and healthier foods, nutrition, hygiene, family planning, fluoridated drinking water, clean water, sanitation, blood pressure control, and reduction in tobacco use. The only drugs they list are vaccines. Remember that when penicillin became available to the public in 1942, life expectancy was 64.7 years for males and 67.9 for females, a large increase from 1900 (46.3 for males; 48.3 for females). Thus, lifespan had already increased dramatically well before modern efficacious drugs even hit the market (even today, 70 years later, we’ve added only 10 more years of life expectancy). The U.S. today has the lowest life expectancy of any industrialized nation because of our high maternal and infant mortality rates, high body-mass index, and lack of universal health insurance coverage. In effect, our society’s lack of emphasis on public needs and on paternalism is what keeps our life expectancy so low compared to the rest of the industrialized world.

In creating an analogy, Dr. Flanigan offers a list of things that “officials do not legally prohibit people from” doing and lists “drinking alcohol.” Of course, the sale and consumption of alcohol is highly regulated. One must be 21 years of age, above the age of majority, to purchase and consume. Society limits the activities that a person who has consumed may undertake (driving, flying a plane, performing surgery). In many places, alcohol cannot be drunk in public, and in others there are no sales during certain times of the week (like on Sunday mornings). Similarly, society controls the purchase and consumption of tobacco. Private insurance companies charge different rates on health and life insurance based on one’s history of tobacco and alcohol use. States require everyone in a car to wear a seatbelt. One could argue that not wearing a seat belt is a personal choice, that in an accident, if you go through the windshield, you hurt only yourself. But the reality is that the resulting demand for emergency services (EMT, police, fire, ER) harms others and costs others money (the tax dollars supporting those services). Simply put, the government regularly restricts our rights to act when such actions can pose a harm to others or ourselves. Except for prisoners, the United States does not recognize a right to health care, never mind a right to whatever drugs you want. The closest we come is EMTALA, which requires hospitals to evaluate and stabilize patients, but not to treat, cure, or make better. There is no right to self-medication recognized under law or ethics—otherwise there would not be such a battle concerning the use of medical and recreational marijuana.

Ms. Sandefur points out that there are loopholes in our prescription system, with which I agree. But her suggestion to jettison the whole system is like throwing out the baby with the bathwater. The answer is to remove the loopholes, not to abandon a system that has saved millions of lives and billions of dollars. The drug thalidomide was responsible for 10,000 children born with birth defects (only 50% survived) worldwide in the 1950s-60s. In the United States, the FDA required more rigorous testing and never approved the drug. The result is that in the United States, there were only 17 babies born with thalidomide-related problems. If an open system had been in place, those U.S. numbers would have been far higher.

Ms. Sandefur claims that Right-to-Try laws have been successful. The scientific evidence that open access to drugs saves lives does not exist. The case of Dr. Delpassand remains a myth and is the one story touted over and over. As a radiologist (not an oncologist), he would not have treated patients with cancer (unless he was working outside his scope of practice). He also has not produced any records or scientific evidence and has not answered repeated questions from the media. Claims absent scientific data should not be the basis for anyone’s decisions. In terms of making an argument, relying on a single case study is a logical fallacy; it proves nothing. In fact, for every claim of benefit, one can easily find a story where someone was harmed by compassionate use of experimental treatment, especially since it is not unusual for trials to show that drugs make things worse for patients, as well as sometimes benefiting them.

The FDA approves well over 99% of all requests for compassionate use. The obstacle is the manufacturers, who do not want to give unproven treatments to patients. Why? If something goes wrong they could face not only liability, but a public relations nightmare that could prevent a drug from ever reaching market. If one person has a bad outcome in a clinical trial with 1,000 participants, then that is a data point. If one person has a bad outcome and that was one of two people who received the drug, then the drug is finished and will never be tested nor make it to market. Experimental drugs are just that, experimental. An open market plan will actually lead to fewer drugs reaching market because companies will not want to take the risk of bad PR and lawsuits.

Also consider that no insurance plan in the United States pays for experimental treatment. Thus, a family who is given such access has basically been given a greenlight to bankrupt themselves. Or, if they are of lower socioeconomic status, to see something they want but can never attain. Right to Try increases social inequality, giving the wealthy access to experimental treatments. For everyone else, the price will put the drug out of reach. An experimental stem cell treatment for ALS, a disease with no cure, was made available to patients, but none took up the opportunity. Why? The procedure would have cost $100,000 per patient with no scientific evidence that it would work. Right-to-Try laws have one purpose: to dismantle the FDA at the cost of safety and evidence-based medical care, not to protect people.

The reality is that the vast majority of educated experts in health law, health policy, bioethics, and medicine believe that these laws are a bad idea that will harm patients in the short and long run. In my personal conversations with executives at pharmaceutical companies, they are also against this idea of an open-use, right-to-try policy. They feel that (a) the lack of mandated testing will increase harms to patients, (b) it will create an unfair playing field for companies, and (c) it will expose them to a great deal of liability.

More information for patients is always better. However, the source of the information is key. For medicines, that information needs to come from scientific trials, not from patient blogs, personal experiences, wikis, or other ad hoc wellsprings. Individual use without control for variables is not helpful for anyone trying to make choices. The idea that open use enhances autonomy and liberty is a misconception because the information on which these decisions are made, absent rigorous trials mandated by the FDA, would be worthless. Good decisions can only come from good facts, and a libertarian pharmaceutical policy does not allow for good facts. Given the complexities involved in pharmaceuticals and medicine, information without guidance is a dangerous thing.

 

Beneficence and Nuance in Drug Regulation

It was a pleasure to see Dr. Flanigan’s response to my essay and especially to see enumerated the points on which we agree. However, there remain significant points on which we do not. First and foremost is whether autonomy is the most important principle in bioethics. I do not believe it is. The reason I mentioned human subjects research is that this is the sector of bioethics in which autonomy has been the paramount principle, due to its fraught history of scandals and abuses. Yet even here, autonomy has bounds. I went on to discuss public health to show how, in a different sector, autonomy is sometimes virtually discounted. Thus I don’t think you can claim that autonomy is the most important principle in bioethics; its importance varies in relation to context.

To argue that people are autonomous agents always able to make their own choices about the use of pharmaceuticals, you must hold to one of two beliefs. One, that personal choice is so important that we must be willing to tolerate the foreseeable individual and community harms from allowing persons—regardless of their knowledge of drugs, understanding of their medical conditions, linguistic competency, or access to sources of medical advice—unfettered access to these agents as they wish, bound only by the advice they wish and are able to seek out. Or, the second possibility: that all people are capable of obtaining expert guidance to make decisions about drug use, if and when they wish, and thus there is no need to install safeguards. I am unwilling to endorse the first. I do not agree that there is some natural, constitutional, or other right to free access of pharmaceuticals in any way strong enough to outweigh the pragmatic harms that would arise from such a policy. As for the second, I believe the structural impediments in our system of health financing and access are such that putting the onus on those who desire advice about pharmaceuticals to find it would create an unjust two-tiered system: Those with access to advice would have enhanced liberty, while those without such access would be left to their own devices about how to safeguard their health and that of their family members and community.

Dr. Flanigan does not hold either of these positions; rather, she takes a more nuanced approach in which there are legitimate reasons for placing restrictions on access to certain pharmaceuticals, most commonly the harm principle (I have no right to engage in activities that can harm others, yet I can engage in whatever activities I want if they only harm myself). The question is determining what scenarios rise to the point where restrictions on access are legitimate. After all, determining what constitutes harm to others is somewhat in the eye of the beholder. If I stockpile my medicine, am I harming others by potentially impacting the supply of the drug or its cost? If I use drugs unwisely and incur ambulance and hospital expenses, am I harming fellow taxpayers or those who share an insurer with me?

That we need to recognize nuance in making policy is in keeping with what I teach my students: that we have a “toolbox” of approaches, and there is no one-size-fits-all solution to all issues. Rather, we must analyze a situation and then determine which policy approach is best. If the goal is to reduce use of something, it would be unwise to choose to ban it and risk incurring some level of public wrath, if less intrusive measures, like education or persuasion, would be sufficient to meet the goal. Policies range in their degree of intrusiveness, and the rule of thumb is to use the least restrictive alternative that will still meet the specified objective. However, in choosing which policy alternative to use, one must be aware of context. To rely on education—say, of schoolchildren—would obviously be insufficient if many of those in the affected population don’t go to school. To rely on taxes to drive down use of some item won’t work if the item is sold primarily by unregulated vendors.

U.S. medical drug policy has evolved over time. I see this as evidence of policymakers selecting from our policy toolbox and trying approaches that seem best suited for the particular situation at the particular time. Once unregulated, drugs are now divided into categories: over the counter, prescription, and controlled (i.e., only by prescription and subject to additional regulation). Drug registries, first the subject of fear and concern, have become the norm, with every state in the nation now tracking who is prescribed opioids, in an attempt to stem the current public health crisis. At the same time, there are efforts to make certain relatively safe drugs like birth control and naloxone available without prescriptions.

As I mentioned in my original essay, policymaking is inherently a blunt tool. When trying to find an approach that works for a heterogeneous population, some individuals will no doubt be subject to restrictions that they don’t really need, and as a result, they will be able to object that a paternalistic “nanny state” is unnecessarily depriving them of individual liberties. On the other hand, other individuals, for a variety of reasons, could probably use even more protection than what the policy affords. By working to find the least restrictive policy to meet a specific policy goal, we seek to respect autonomy—not as an absolute or a paramount principle, but nevertheless as an important principle—while prioritizing the principle that I think motivates most public health activities: beneficence.

What Does Patient Autonomy Mean for Doctors and Drug Makers?

Both Craig Klugman and Alison Bateman-House seem to believe that affording patients greater medical autonomy would remove medical professionals from the equation altogether, eliminating “gatekeeping by physicians [and] pharmacists,” or “mak[ing] the physician a virtual captive to the wishes of his or her patient.” But these fears are unfounded. Autonomy is not just for patients, but for physicians, pharmacists, and pharmaceutical companies, too—each is entitled to act or refrain from acting as she deems appropriate. A doctor who doesn’t agree with his patient’s treatment choices can provide additional information, or give counseling, or even ask the patient to seek another doctor. A pharmaceutical company can refrain from making products available until it feels the treatments are ready. The point is that government shouldn’t stand in the way.

Although Bateman-House fears that deferring to patients comes at the expense of physician autonomy, she also laments that physicians currently abuse the freedom they have, failing to spend enough time with their patients, which she says undermines a patient’s ability to make informed medical decisions.

Even if it’s true that physician consultations aren’t as thorough as they once were, patients today have better access to health care information than ever before. According to the Pew Research Center, two-thirds of U.S. adults have broadband internet in their homes, and 13 percent who lack it can access the internet through a smartphone. Pew reports that more than half of adult internet users go online to get information on medical conditions, 43 percent on treatments, and 16 percent on drug safety. Yet despite their desire to research these issues online, 70 percent still sought out additional information from a doctor or other professional.

In other words, people are making greater efforts to learn about health care on their own. True, not all such information on the internet is accurate. But encouraging patients to seek out information from multiple sources is a good thing. In fact, requiring government approval of treatments may lull patients into a false sense of security. As Connor Boyack, president of the Libertas Institute, points out, “Instead of doing their own due diligence and research, the overwhelming majority of people simply concern themselves with whether or not the FDA says a certain product is okay to use.” But blind reliance on a government bureaucracy is rarely a good idea.

Bateman-House also argues that drug makers would refuse to sell treatments without a system of regulatory approval. But the existing regulatory system provides little incentive for pharmaceutical companies to help patients in need. The reason companies are unlikely to provide their products to patients outside of a government-approved clinical trial is that bad results must be reported to the FDA, even if those results are related to a patient’s other ailments and not the drug itself. These “adverse events” can damage a company’s chances of obtaining final FDA approval and even destroy the company’s financial viability. Yet data showing that the treatment is successful when administered outside of a trial is not counted in favor of the company. Given the legal barriers and disincentives imposed by the current system, why would a company go out of its way to help a patient? (True, the FDA has sometimes said it won’t use these adverse events against a company, but it refuses to put that promise in writing.)

Moreover, regulations severely restrict a drug company’s ability to charge for its products, especially before those treatments receive full FDA approval. The profit incentive attracts the investors that companies need in order to develop effective treatments, and it encourages entrepreneurs to innovate. If the goal is saving or improving lives, the worst thing government can do is remove these incentives. But it’s the current regulatory labyrinth that creates “a system of all risks and no rewards” for drug companies and imposes barriers on innovation and opportunity. Allowing patients to pursue treatments without a government permission slip would strengthen, not undermine, the incentives to develop and provide those treatments.

Still, Bateman-House advocates for “regulators to ensure products are what they claim to be,” and Klugman argues that we should accept a degree of medical paternalism because “experts … make sure … things are safe and available.”

Of course quality control is desirable, but why assume that regulation must come from government? In fact, the benefits that government-imposed regulation purports to provide are often provided better, cheaper, and faster by market-based institutions. Online rating systems like Yelp and TripAdvisor already help consumers make informed decisions about restaurants and hotels. Private certification agencies like Consumer Reports and Good Housekeeping provide safety and effectiveness reviews. There’s no reason to think patient portals and scholarly journals couldn’t serve the same function in medicine.

In fact, the FDA is excessively risk-averse. It “has little incentive to avoid the ‘unseen’ error of blocking new medicines that could ease the suffering of millions of people,” for the simple reason that if it approves a bad treatment, it gets punished, but if it blocks a good treatment, it faces no consequences—it doesn’t get rewarded for approving good treatments, or punished for delaying them. All the incentives are therefore on the delay side. And the premise of this incentive structure is that the FDA knows best—which is wrong. Neither pharma, nor the FDA, nor the doctor, nor the patient, knows all the information necessary to make the “right” decision about treatment. That’s all the more reason that the government shouldn’t have a monopoly on medical decisionmaking. The decision should belong to the person whose life it is.

Autonomy’s Past and Future

The argument for rights of self-medication parallels the case for informed consent. In a response essay and conversation post, Craig Klugman disputes my claim that people are generally capable of sufficiently understanding their treatment options and competently making choices about their own bodies. If Klugman thinks that people are not capable of making informed and competent medical choices, then he should also reject the principle of informed consent. But to abandon informed consent would not only have bad effects, it would also violate people’s bodily rights and disrespect patients everywhere. For this reason, we should reject medical paternalism and support rights of self-medication.

Consider the experiences of Steve Jobs as an illustration of the importance of informed consent. Jobs was diagnosed with pancreatic cancer in 2003. In a biography of Jobs, Walter Isaacson writes,

To the horror of his friends and wife, Jobs decided not to have surgery to remove the tumor, which was the only accepted medical approach. ‘I really didn’t want them to open my body so I tried to see if a few other things would work,’ he told [Isaacson] years later with a hint of regret. (454).

Nine months after his diagnosis, Jobs had the surgery. I don’t know whether Jobs would have lived longer had he taken the advice of medical professionals and had surgery earlier. But I do know that it would have been wrong for physicians, public officials, or anyone else involved in Jobs’ life to force, threaten, or trick him into having a surgery that he did not want at the time. To do so would have been a wrongful violation of Jobs’ right to make intimate decisions about his own body, even if it could have prevented the progression of his illness.

But when it comes to other treatment decisions, such as the decision to use pharmaceuticals, Klugman argues that patients, including patients like Jobs, are not entitled to make medical choices on the grounds that they are incapable of understanding their options and making competent decisions. My argument is not that patients are always fully informed, ideally rational, or infallible. People can make mistakes about their health. Like Jobs, they may regret their choices. But whatever standards of knowledge and volitional capacity determine whether patients are able and entitled to give informed consent, those same standards ought to apply to self-medication as well. People do not lose their ability to understand complex information and make life-changing decisions about their health when their choices involve pharmaceuticals, so they shouldn’t lose their right to make medical choices in these circumstances either.

Moreover, even when ‘doctor knows best’ when it comes to medicine, each person knows what’s best when it comes to her life as a whole. Klugman also raises concerns about patients’ ability to understand or become informed about their medical choices on the grounds that people may be unable to independently access the relevant information about the risks and benefits of treatment. Yet officials and physicians also lack relevant information because they do not know about their patients’ experiences and values. Further, rights of self-medication needn’t prevent patients from consulting with experts. And laws against fraud and mislabeling are also permissible because deception violates patients’ rights and prevents them from making informed medical choices.

Klugman’s other reason for doubting that rights of self-medication are compatible with informed medical decisionmaking is that “pharma manufacturers are unlikely to undertake the same rigorous testing the FDA now requires if it were made optional.” But even though existing regulations provide incentives to test, this argument would only justify prohibition if prohibition were necessary in order to learn about the nature of drugs. But laws that prohibit fraud, in addition to market-based incentives to provide evidence about the risks and benefits of drugs, can also provide sufficient incentives to test and certify drugs. For example, a private company or a public agency like the FDA could serve as an independent certification agency for pharmaceuticals, and insurance or public health care providers could refuse to pay for uncertified therapies.

Klugman also doubts that patients have the volitional capacities to make treatment decisions about pharmaceuticals. But again, it is unclear why people would have sufficient volitional capacities to make medical decisions in clinical contexts and for other potentially dangerous choices, such as the decision to drink alcohol or go rock climbing, but they are incapable of making choices about pharmaceutical use. On this point, Klugman cites the fact that some drugs are addictive, and that addiction can undermine volitional capacities. But even if this is true, it still isn’t clear that prohibiting people from buying or selling drugs is the answer, especially in light of the catastrophic health effects and enduring injustices associated with drug prohibition. Instead of prohibiting access to addictive drugs, which creates dangerous black markets and impedes treatment and recovery efforts, public health officials should instead focus on treatment and support for people who want to stop using drugs. A more permissive approach would also be more respectful toward informed and competent recreational users.

Klugman also calls people’s volitional capacities into question on the grounds that most of us are not rational, and even fewer patients are. He writes that “a sick person needs prompt help, not an opportunity to consider options, costs, benefits, and risks.” But this claim poses a false dichotomy because patients are helped when people give them the opportunity to consider their options and the authority to decide. It is distressing enough to lack control over one’s health due to an unavoidable illness; it is even worse to lack control due to an avoidable injustice like existing pharmaceutical regulations. And while I agree with Klugman that “rationality is compromised by the influence of others,” patients are not the only people subject to biases and irrationality. Public officials and physicians are people too. And they are also fallible. They are influenced by their own biases, electoral incentives, pecuniary incentives, concerns about liability, and ideology. The difference is that when a patient makes a misguided treatment decision, it is her own health she risks or harms, and she doesn’t violate anyone’s rights. But when public officials make mistaken judgments about health policy and empower physicians to act as gatekeepers for treatment, they violate other people’s rights. So even if people are fallible, given the choice between granting the authority to make intimate decisions about citizens’ lives either to citizens themselves or to public officials, all citizens should have rights of informed consent and self-medication.

Another consideration that Klugman raises against rights of self-medication is that a person could make other people worse off by deciding to use a drug. It is surprising that Klugman invokes Mill’s harm principle on this point because Mill defended a conception of harm wherein restrictions on liberty were only justified to prevent the violation of people’s entitlements or non-consensual threats to important interests, not to prevent all activities that might make people worse off than they would have been. And in On Liberty, Mill argues that while public officials may legitimately enforce labeling requirements for drugs or registries for drugs that may be used in crimes, prescription requirements would be “contrary to principle” on the grounds that they violate liberty. Mill then writes, “to require in all cases the certificate of a medical practitioner [for drugs] would make it sometimes impossible, always expensive to obtain the article for legitimate uses.”

In any case, Klugman references three kinds of harms to illustrate this point: the harm of antibiotic resistant bacteria, the harm of driving while intoxicated, and the harm that one person’s addiction can cause their family and coworkers. I agree that public officials can restrict access to pharmaceuticals to prevent the development of antibiotic resistant bacteria, as I noted in my previous response essay. I also agree that driving while intoxicated should be illegal, but in this case it is the driving that should be prohibited and not the intoxication. But I disagree that public officials or physicians can restrict patients’ rights of self-medication to prevent them from making their family and coworkers worse off, just as it would have been wrong for physicians to violate Jobs’ right to refuse treatment on the grounds that a prolonged illness would make people in his family and company worse off.

At this point in the response I would also like to briefly clarify and respond to some claims in Klugman’s initial response essay and in the most recent conversation piece. In his reply, Klugman claims that my “insistence that only the guiding concept of autonomy should be used in decisionmaking contradicts [my] acceptance of seat belt laws and other cases where harm to others could curtail its application.” But I do not accept seatbelt mandates, as I stated in the response and in the paper that I linked to in the response. I think that seatbelt mandates for adults are misguided and unjust. Klugman also claims I make a “factual error” in asserting that pharmaceutical innovation is responsible for some of the gains in life expectancy throughout the twentieth century, but this claim is supported by Frank Lichtenberg’s influential research on the topic, which I referenced in my initial essay and which Klugman gives no reason to doubt. He then writes that pharmaceuticals are not responsible for the largest increase in life expectancy, though I did not say they were. Klugman raises this point to cast doubt on my claim that deregulating for the sake of pharmaceutical innovation would be beneficial. But a reform can be beneficial and morally urgent even if it is not the most beneficial and morally urgent reform.

In closing, let’s turn to Klugman’s claim that “in all of philosophy, law, and history, there is no right to self-medicate.” Though Klugman is correct when he notes that rights of self-medication do not appear in the Declaration of Independence, it is false that the right lacks historical precedent. For example, Thomas Jefferson references rights of self-medication elsewhere when arguing in favor of freedom of conscience and against the establishment of a state religion.[1] And as Daniel Carpenter argues, the right of self-medication was widely acknowledged in the nineteenth century United States.

Though it is worth clarifying these points, it is also useful to step back and consider why Klugman raises these historical considerations: to highlight the revisionary nature of my proposal. But throughout history many moral arguments lacked historical precedent and had revisionary implications for how we live. The fact that a moral argument in favor of greater respect for autonomy requires a departure from the status quo is not in itself a reason to reject it. Arguments in favor of expanding the scope of officials’ respect for autonomy often do. For example, officials rejected paternalistic justifications for oppression and implemented revisionary policies when they implemented legal protections for women’s rights, ended lawful slavery, or embraced the doctrine of informed consent. In all these cases, oppressors embraced paternalism and cited their principled commitment to values like health and wellbeing as considerations that weighed against freedom and respect. Today, coercive paternalism takes different forms, such as laws against recreational drug use, sex work, or self-medication. But the reasons against these policies remain the same. And as before, public officials and health workers still lack the authority to coerce people no matter how convinced they are that they know what’s best.

 

Note
     


[1] Jefferson writes, “was the government to prescribe to us our medicine and diet, our bodies would be in such keeping as our souls are now,” and then argues that because governments are so fallible, officials should not be empowered to coerce people in order to impose uniformity of opinion. I do not mean to suggest that Jefferson was a reliable moral guide more generally (he really wasn’t). Rather, I raise this point in response to the implication that the American founders did not affirm rights of self-medication.

The Track Record of Right to Try and Why It Matters

Craig Klugman criticizes medical autonomy and Right to Try, that is, laws that protect the right of terminally ill patients to try, with the recommendation of their physician, treatments that have passed Phase 1 of an FDA clinical trial and are being administered in clinical trials but haven’t yet been fully FDA-approved. Why, when patients, doctors, and lawmakers on both sides of the political aisle have supported Right to Try, does Klugman balk?

One answer he gives is that, in his view, the current drug approval system “has saved millions of lives and billions of dollars.” But he provides no evidence that the current system is superior to alternatives. In fact, despite the fact that the current system makes the FDA risk-averse and more focused on preventing unsafe or ineffective drugs from getting to market than on empowering patients to make informed decisions, “unsafe” drugs nevertheless routinely get FDA approval. In 1997, for instance, the FDA removed fenfluramine after 25 years on the market when reports associated it with heart conditions. In fact, almost one-third of FDA-approved drugs were later flagged or removed from the market due to safety concerns. The FDA itself recognizes that “there is never 100% certainty when determining reasonable assurance of safety and effectiveness.”

But how many patients have suffered – how many have died – because of today’s paternalistic system – one that rewards delay, reduces incentives to innovate, and takes life-and-death decisions away from patients? Our regulatory system can’t ensure safety and efficacy perfectly (no system can), but its extreme attempts to do so reduce people to the helplessness of knowing that promising treatments might exist just out of reach. Well, out of reach of those unable to afford to travel to another country, that is. The ethical conclusion here should be clear: as bioethicist Julian Savulescu writes, “[t]o delay by 1 year the development of a treatment that cures a lethal disease that kills 100,000 people per year is to be responsible for the deaths of those 100,000 people, even if you never see them.”

As for costs, report after report has exposed the expense – both financial and otherwise – that the FDA imposes on patients. The current system in fact costs lives and money that could be saved.

Klugman invokes the thalidomide example – of course; they always do – for why we need to keep the current paternalistic system. But the thalidomide incident actually teaches a different lesson. In 1962, in response to thalidomide, Congress passed the Kefauver-Harris Drug Amendments, which required manufacturers to “provide substantial evidence of effectiveness,” not just safety, for their products. As a result, drugs are now kept out of patients’ reach on grounds not just of safety, but also of efficacy. But thalidomide posed a safety problem, not an efficacy problem. A safety concern was used as an excuse to change the law from one that focused on empowering patients to make safe choices, to a slower, more costly, paternalistic approach that delays availability and erects roadblocks rather than alleviating suffering.

Now, as to Right to Try: when real-life examples of its early success are reported, Klugman’s response is to deny that they are true—and this is frankly bizarre. Dr. Delpassand has testified to Congress that within a year of his state’s enacting Right to Try, he successfully treated 78 terminally ill cancer patients using LU-177, a drug that had successfully completed its three phases of FDA-approved clinical trials and has been available in European countries for years, but has still not received final FDA approval for sale. I should know, since I’m his lawyer: Dr. Delpassand had administered a successful FDA-approved clinical trial for LU-177 therapy for five years, but was then told by the FDA that he could not add more patients to the trial. Right to Try enabled him to continue administering LU-177 to patients suffering from neuroendocrine cancer after the FDA blocked the trial’s expansion. His patients were exceedingly grateful. One said that without Right to Try, he “would have had to go on disability to make trips to Switzerland.” Another said he “would have traveled to Switzerland for this same treatment and follow-up appointments every three months,” but thanks to Right to Try and Dr. Delpassand, he was able to stay in the United States and spend the time with his wife and kids. “This law,” he told me, “has been a life saver!”

Who is the FDA, Klugman, or anyone else to deny these patients their choice?

Supporters of the status quo often answer that the FDA already provides alternative paths to access: specifically, its clinical trial or expanded access programs. But the sickest patients often don’t qualify to participate in clinical trials, and those lucky enough to be granted admission may be given a placebo.

Even the FDA implicitly recognizes the inhumanity of its own system – after all, it calls the exception to the system “compassionate use.” Klugman repeats the FDA’s common refrain that it approves 99% of all “compassionate use” requests – and that may be true, but this statistic is an illusion, because it ignores how many patients don’t submit compassionate use requests because the approval process is so cumbersome. By the FDA’s own admission, the initial paperwork takes a doctor 100 hours to complete.[i] To administer treatment under this exception, the doctor must abide by burdensome protocols and data-reporting requirements, essentially making him responsible for overseeing a mini clinical trial for that one patient. Then an Institutional Review Board (a separate committee at a medical facility) must weigh the ethical considerations associated with the patient’s use of the treatment – and many such boards meet infrequently. There are other restrictions, too, so that in practice, “compassionate use” is so tangled in red tape that only about 1,200 patients per year are even able to submit compassionate use requests to the FDA – even though over half a million Americans die annually of cancer alone. Razelle Kurzrock, who directs clinical trials at U.C. San Diego, says that “it’s almost a self-fulfilling prophecy for the FDA to say they approve everything, because you don’t even put in the application before you sort of get a verbal approval from the FDA that it’s worth doing.” The bottom line is clear: “compassionate use” is false hope.

Nevertheless, Klugman claims that drug makers are the real barriers to access, not the regulatory system. No doubt companies play a role in curbing access, but as I discussed in my last post, companies’ hesitancy to provide treatments is exacerbated by a regulatory system that provides little incentive to help patients in need.

For example, companies fear that the FDA will delay or deny approval of their treatments if they offer treatments outside a clinical trial. And these fears are not unfounded. In 2014, the FDA put a partial hold on clinical trials run by the company CytRx after a patient received treatment through compassionate use and died. As a result of that hold, CytRx stock plunged. Under this system, why would pharmaceutical companies ever provide a drug to a patient outside of clinical trials? Even Dr. Arthur Caplan, director of medical ethics at New York University and critic of Right to Try, has called on the FDA to “put their approach to interpreting adverse events when they occur in the context of compassionate use in writing.” But they won’t.

Klugman says pharmaceutical executives he’s spoken with prefer a system of regulation. Ask yourself why that’s so. For starters, established businesses often support regulation, not in the name of the “public good,” but because it keeps out competition. The current clinical trial system benefits drug companies by letting them hide behind the FDA as a gatekeeper when making decisions about whether or not to provide treatments to patients. This lack of transparency allows the FDA to blame the manufacturers, and manufacturers to blame the FDA. As a result, nothing changes, and it’s the patients who suffer.

Klugman also opposes measures like Right to Try on the grounds that investigational treatments may be prohibitively expensive, so not everyone can afford them. That’s why charities – like the one founded by Overstock CEO Jonathan Johnson – are already being formed to help patients pay for treatments.

But aside from that, does it make any sense to address this perceived disparity by denying access to all? Our current system caters to the wealthy and well connected – they have the means to travel overseas to get treatments that poorer patients can’t get in the United States, or the political or economic influence to get treatment prior to full FDA approval. Right to Try simply extends that option to everyone.

I think it’s strange that Klugman and Alison Bateman-House are so vehemently opposed to Right to Try. After all, in one sense Right to Try isn’t very radical – it doesn’t even permit self-medication, the topic of this month’s discussion. And far from seeking to “dismantle the FDA,” it’s designed to work alongside the existing clinical trial process. It only applies to drugs that have received FDA Phase 1 approval and are currently being given to patients in clinical trials sanctioned by the FDA itself. Nor does it disrupt the process whereby the FDA reviews investigational medicines. And it only applies to terminally ill patients who’ve exhausted government-approved options and have the support of their doctors. It just affords these people the same access that the FDA is now allowing clinical trial patients. It’s certainly no cure-all. It’s not even particularly libertarian. It’s just one important step in the right direction.

But Right to Try’s greatest success has been to shine a light on patient autonomy. Suffering patients have been yearning for FDA reform for decades, but it wasn’t until the Right to Try movement that the FDA was forced to acknowledge the difficulty patients face in securing treatments. Last April, the FDA announced it was establishing a new position called a “compassionate use navigator” – a person who will guide patients through the FDA’s labyrinthine application process (the Agency is only now, over a year later, rolling out its website). Also in 2016, the Agency announced that it had streamlined its compassionate use process – again, nearly a year and a half after it had first publicized its intention to do so – purportedly reducing the paperwork burden of filing a compassionate use petition.

These improvements are welcome, and they’re a clear indication that Right to Try has transformed the national conversation about the rights of patients. But shorter forms and hand-holding bureaucrats don’t fix the system’s fundamental flaw, one Jessica Flanigan has highlighted this month: it requires patients to get the government’s permission slip to obtain treatment to save their lives. Lives that are theirs. Not the government’s. Not yours or mine. But theirs.

I emphasize this point for a simple reason. In all this talk of scientific protocols and administrative agencies, the one thing we must never lose sight of is that the life – the suffering and the joy – belongs to the patient, not to anyone else. If we can’t experience the consequences of those choices the way she can, what right have we to dictate her choices? What right have we to control such a fundamental aspect of another person’s being? The answer is: we have none. The life is the patient’s – the consequences of choices are the patient’s, and the decision is therefore the patient’s. Our drug approval system should teach, aid, and empower patients, not dictate to them the terms on which they will live, or not live, their lives.

 

Note


[i] The FDA disputes that it takes 100 hours to fill out the application, even though that estimate is published on the form itself and there is nothing on the form or in the agency’s instructions directing doctors to leave any fields blank.

Beyond the Age of Snake Oil

In thinking about the Conversation pieces that Dr. Flanigan, Ms. Sandefur, and Dr. Bateman-House have written, I appreciate their perspectives and their engagement with the argument. I am also positive that, while we may all think more deeply about this issue, none of us has been persuaded to change our minds. This experience has led me to fascinating conversations with colleagues and friends, so no doubt my thinking on this topic will continue to evolve. That said, I did have a few points that I wanted to address.

Ms. Sandefur suggests that a free market in pharmaceuticals would encourage people to rely on the vast information at their fingertips, at least for the two-thirds of adults who have access to broadband and the thirteen percent more who use the internet through a smartphone. My question is, what about the remaining 21 percent or so who either live in areas without access or cannot afford it? Do they guess about which drugs to take? Rely on friends to do the research for them? Why should they not benefit from the bounty of information online? This is a textbook case of social injustice: those with the least would be hurt the most.

Now consider the information that would be available. Ms. Sandefur recommends professional journals as one source: the public, she suggests, can read these articles and decide for themselves. The problem here, again, is access. Journal subscriptions are expensive, running from hundreds to thousands of dollars per year. If one is not affiliated with a medical institution, it is unlikely that she or he will have access to these journals. I teach at a very large undergraduate university, and since we do not have a medical school or a hospital, I have to order most medical journal articles on a fee basis. Even purchasing access to a single article can be pricey. To “rent” an article from the Journal of the American Medical Association costs $30 for 24 hours of access; if you want to read the article in the 25th hour, you pay again. Thus only those with the financial means have access.

The second source would be peer-review websites similar to Yelp and TripAdvisor. Presumably people would post reviews stating which drug they took, for what symptoms, what their result was, and what side effects they experienced. I think we can all accept that this is an incredibly unscientific way to collect data. Like many others, I have found that the people who leave reviews are usually those who had excellent or poor experiences, rarely those in the middle. Often I find myself disagreeing with reviews, which leaves me feeling misdirected. I could spend time figuring out which reviewers have tastes similar to mine and then trust their opinions more than others, but these reviews are still not backed by science. When I am picking a hotel, the lack of meaningful ratings and scientific sampling is not an issue. But when I am choosing a drug for my health, one that could help me or make my condition worse, this unscientific sampling is a serious problem. Consider also that many companies pay to have their products and properties reviewed positively, or to have negative reviews posted against a competitor. Drug manufacturers would likely use the same services to improve their ratings and increase sales.

The third option Ms. Sandefur suggests is that the work of testing and assuring safety would be taken up by nonprofit organizations similar to Consumer Reports and Good Housekeeping. The former is a subscription service that requires paid access (in print or online). Again, this would leave a substantial proportion of the population without access to these voluntary test results unless the organizations were required to publish their data, an idea that would seem to go against the free market position her writing here has represented. Of the three ideas, this is the most palatable and potentially the most scientifically rigorous. However, I once worked for a magazine that ran competitive product reviews, and I left that position because of the pressure to give more favorable reviews to products that bought ads in the publication.

Ms. Sandefur also claims that one of the problems with FDA testing is the long period during which companies cannot charge for a drug (such as during clinical testing). This statement presumes that making money, rather than helping people, is the goal of pharmaceutical companies. I have written elsewhere that the notion of profiting from the pain and suffering of others is grotesque. There are some spaces that should be outside the realm of profit seeking, and health care is one of them. I truly believe that all health care related industries should be nonprofit, dedicated to helping others rather than to pursuing the bottom line. Ideally I’d like to see single-payer, but that discussion is outside the scope of this conversation.

In her response to my essays, Dr. Flanigan stated that in the nineteenth century people were free to choose and consume any drug they wished. I agree with her facts and history on this issue. What she did not say is that few of these drugs were effective at treating diseases, never mind curing them. Taking many of these “tonics” did make one feel better, because most contained substantial amounts of alcohol, cannabis, cocaine, morphine, or opium. Patients felt better, but their symptoms and diseases were left untouched. Certainly, some drugs had real effects: quinine, digitalis, mercury, and aspirin, though dosing was uncertain and these agents were as likely to poison as to help. The nineteenth century did not have the plethora of drugs available today, many of which are created in laboratories rather than found in nature. I would hope that nearly 120 years later, with thousands of clinically proven drugs available, we would not be looking to snake oil, tinctures, and poisons as our model.

It seems that Dr. Flanigan and I will have to agree to disagree on the role of pharmaceuticals in extending lifespan. She relies on an economist’s analysis, while I rely on the epidemiologists at the U.S. Centers for Disease Control and Prevention and the World Health Organization. Apparently public health and finance do not view the world, or the facts, in the same way.

Dr. Flanigan’s arguments also contain an inconsistency. She says that we should not be required to wear seatbelts, just as we should be free to buy our own prescriptions. Yet she also holds that there should be limits on taking drugs when antibiotic resistance is a concern, and that there should be limits on driving under the influence. The question, then, is where to draw the line between what the government can regulate and restrict for public and personal safety and what it cannot.

At the base of our debate is a question: what is the proper role of government in regulating pharmaceuticals? As Dr. Flanigan and Ms. Sandefur have stated, they believe in a government that does not take on the role of the “nanny state,” one that leaves us free to make decisions that may endanger ourselves, such as taking pharmaceuticals or driving without a seatbelt. As long as we are fully informed and give our consent, they hold, we should be able to choose. Dr. Bateman-House and I, on the other hand, see a role for the government in ensuring our safety and protecting us from avoidable harms. I could begin to agree with a smaller government role for drugs if the consequences of such behaviors truly affected only the person who made the choice. But a person’s actions always affect other people. The person who is seriously injured because she was not wearing a seatbelt will receive medical care at a hospital, and if she does not have health insurance, then taxpayers will cover the cost. Her children will be without a parent, even assuming they were not themselves injured in the crash. And of course, police, doctors, a tow truck driver, and others will have to work to help the driver, clean up the scene, and get traffic flowing again. The consequences of our actions are never borne solely by ourselves.