There is a lack of agreement about how to define precisely who is a “patient,” but it is generally accepted that a person who requires medical care is one. Over the past 15–20 years, our understanding of what it means to be a patient has changed rapidly. In the recent past, patients could be treated only after symptoms of disease had appeared. Today’s medical system, however, is increasingly focused on predicting and preventing disease1 in persons who are asymptomatic or have only mild symptoms, thereby expanding the patient pool. In many areas of medicine, from cardiology to oncology, early detection of disease has allowed patients to enjoy healthier and more productive lives.
These changes have also begun to transform neurology and psychiatry. Perhaps most importantly, recent advances in genetic testing, neuroimaging, and other healthcare technologies have allowed researchers to discover biological markers (or “biomarkers”) of specific brain diseases2. Some of these biomarkers can be detected years before these diseases present with the symptoms by which they are typically diagnosed in the hospital or clinic. Today, for example, direct-to-consumer genetic testing allows healthy people to quickly and easily learn whether they carry genetic mutations or variants that predispose them to Alzheimer’s or Parkinson’s disease. The ability to detect these and other markers of brain disease has changed the way in which healthcare is understood by a number of stakeholders, including the general (and healthy) population, patients, clinicians, insurers, and drug developers. Earlier detection of persons at risk of developing these and other neurodegenerative diseases may also allow such individuals to be recruited into clinical trials that test the effectiveness of neuroprotective therapies. These discoveries also raise numerous questions about what defines a “patient,” what qualifies as a “diagnosis,” and how patients’ knowledge of being at increased risk for a disease shapes their self-image and affects their quality of life. In other words, how should we think about disease before it manifests with symptoms? Both patients and clinicians want to know.
In this four-part series, based on presentations given by Professors Christoph Correll (Hofstra Northwell School of Medicine, New York, USA), John Hardy (UCL Institute of Neurology, London, UK), and Karen Rommelfanger (Center for Ethics, Emory University, Atlanta, USA), we explore how earlier detection of brain disorders could improve patient care. Part 1 of this series introduces the concept of prodromal and preclinical disease—a state in which patients have markers of a disease but few or no symptoms. Part 2 focuses on efforts to identify (and treat) the early stages of schizophrenia. Part 3 takes a similar approach to two neurological disorders: Alzheimer’s disease and Parkinson’s disease. Part 4 explores the ethical implications of preclinical diagnosis and proposes some preliminary solutions for patients in search of answers.
Today, most patients are diagnosed with brain disorders at a relatively late stage, often only after significant brain damage has already occurred. To borrow an analogy from Thomas Insel, the former Director of the National Institute of Mental Health, diagnosing someone at such a late stage is like recognizing and treating a patient’s coronary artery disease only after he or she has been rushed to the hospital with a myocardial infarction3. We can do better. We now know that, like heart disease, which clinicians routinely diagnose early in its course using blood tests and imaging techniques, brain diseases are marked by pathological changes that begin years or even decades before the onset of clinical symptoms. The ability to identify these changes at earlier time points, especially before symptoms appear, would give clinicians and patients the opportunity to start disease-modifying treatments that could prevent or slow progression of the disease. As in other areas of medicine, early intervention would spare pre-patients and their families unnecessary pain and suffering while reducing healthcare costs.
Over the past two decades, advances in both genetic and neuroimaging technologies have allowed us to discover a number of potential indicators of future disease onset, especially in neurological disorders such as Alzheimer’s and Parkinson’s disease. These markers have shown that brain diseases, like heart disease and many other medical problems, exist along a spectrum: disease begins in a pre-symptomatic or preclinical stage, advances to a prodromal period during which patients experience attenuated symptoms, and finally culminates in full disease onset as diagnosed by established criteria4. In neurology, for example, a healthy person who carries a strong genetic risk factor for a certain disease—such as the ApoE4 allele in Alzheimer’s disease or a LRRK2 mutation in Parkinson’s disease—could be said to be in the preclinical stage of the disease. If that same person later exhibits slight motor signs or memory loss, perhaps dismissed as “forgetfulness” by the patient or family members, he or she has progressed to the prodromal stage of disease. Finally, the patient may experience full symptom onset and further disease progression.
The well-known “Jack curve,” named after the Alzheimer’s researcher Clifford Jack, who proposed that different biomarkers emerge at different stages of disease, illustrates how Alzheimer’s disease can be understood as a continuum. As seen in Figure 1, amyloid beta (Aβ), a protein that forms sticky plaques in the brain, begins to accumulate in the brains of patients with Alzheimer’s disease while they are still cognitively normal, setting the stage for eventual mild cognitive impairment (MCI) and dementia. These markers of disease are explored in more detail in Part 3 of this series.