Republican officials in Washington spent much of the last year working to “repeal and replace” the Affordable Care Act, more commonly known as Obamacare.
Citing rapidly rising premiums and insurers exiting the market in certain states, Republicans have branded the policy a failure, but have so far failed to pass an alternative. While the battle over health insurance continues to rage in Washington, a far less political — and likely far more effective — means of reducing the cost of care exists: eliminating clinical waste. And it is a surprisingly simple solution.
Healthcare spending as a percent of GDP has risen over the last 50 years throughout the developed world, but nowhere has it risen as fast or as high as in the United States. In 1960, we spent about 5 percent of our GDP on healthcare; in 2017, it’s nearly 18 percent — over $3 trillion, more than the GDP of the entire continent of Africa. If US healthcare were its own economy, it would be the fifth largest in the world. While the US represents just 5 percent of the world’s population, we represent almost 50 percent of global healthcare spending.
While not all of that spending is bad, I am confident we can save up to $1 trillion annually in the US while actually improving care. And imagine how much we could do with an extra trillion.
Where it all goes
The biggest portions of our healthcare spending go to hospital care (inpatient medicine), nursing homes, and long-term care, which together account for roughly one-third of total spending. Surprisingly, only about 10 percent goes to physician salaries, and between 11 and 14 percent to pharmaceuticals. While cuts in either area would certainly bring down costs, they wouldn’t “fix” the problem. And if cut in the wrong way, any reduction could compromise quality of care or innovation.
While $3 trillion is a staggering number, much of our healthcare spending is beneficial. One of the most important aspects of the US healthcare system is that our heavy spending on new technology and medicines fuels healthcare innovations around the world. Whether a pharmaceutical, imaging, or medical device company is based in the US or abroad, the sheer size of the US healthcare market can legitimize high, risky spending on innovation.
In other words: the US’s “overspending” on innovation drives healthcare forward for the entire world. Just looking back over the last 50 years: medical imaging, including MRI and CT, has revolutionized nearly everything healthcare providers do; vaccines have nearly eradicated diseases that once crippled or killed hundreds of thousands; and cancers that would have killed patients just three years ago are now curable. Any change to our healthcare system has to be careful not to stifle that vital innovation.
But not everything we pay for contributes to quality of care or innovation, and the biggest driver of unnecessary spending in the American healthcare system is just that: waste.
Clinical waste refers to any treatments, tests, or procedures conducted on patients who won’t benefit from them. According to some estimates, it accounts for one-third of US healthcare spending — about $1 trillion.
Unnecessary care doesn’t just cost money; it exposes patients to additional adverse effects, which often require further treatment — and even more money.
In order to understand when tests or treatments are wasteful and when they’re valuable, physicians and patients alike need to understand the scientific evidence for how tests and treatments relate to health outcomes, a practice commonly called evidence-based medicine (EBM). Surprisingly, EBM is a relatively new trend — the term itself was coined only in the early 1990s — and one still not completely embraced by much of the medical community.
While many of us might imagine that the practice of medicine would be based on large, well-designed clinical trials, historically — and even today — it is instead often based on “what has always been done,” and the “expert consensus” of medical guideline committees. Often, even tests and treatments that are bioplausible — that is, those that seem, on their surface, like they should work — have been shown in studies to be ineffective. Many historically bioplausible treatments now seem ridiculous: bloodletting, leeching, even the nineteenth-century tobacco smoke enema, performed to release a patient’s “black bile” through the diarrhea it would inevitably cause. But many of the things we do today may offer patients little more benefit, and expose them to very real harm.
To Screen or Not to Screen?
In American medicine, physicians and patients alike tend to make the intuitive — but incorrect — assumption that more testing can’t hurt. But every test has a false positive rate, which can prompt further tests and eventually unnecessary treatments, ultimately leading to significant side effects.
Prostate cancer is the most common type of cancer in men. One in six men will get prostate cancer in his lifetime, and one in six of those will die from it. Given these stats, it would seem almost foolish not to screen for prostate cancer.
In response, many health organizations, including the Mayo Clinic, recommend that men over the age of 50 receive annual screening for elevated levels of prostate-specific antigen (PSA). For individuals with a PSA concentration of 4.0 ng/mL or greater, prostatic biopsies, which cost several thousand dollars, are generally recommended.
Trans-rectal prostatic biopsies, which are conducted 1 million times a year in the US, are done with an ultrasound-guided needle that must penetrate the rectum 12 times to collect prostate tissue. Those punctures create openings for the bacteria that live in the rectum to spread to the rest of the body. Six percent of patients who undergo the procedure develop infections as a result, and roughly one in six of those will end up in the emergency room with prostatic biopsy fever. Roughly 1 in 1,000 will die as a result of the procedure.
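As a quick sanity check, those complication rates multiply out to a striking annual toll. A back-of-the-envelope sketch, using the approximate rates cited above:

```python
# Annual harms from trans-rectal prostatic biopsies, using the
# approximate figures cited in the text.
biopsies_per_year = 1_000_000
infection_rate = 0.06            # ~6% of patients develop infections
er_rate_among_infected = 1 / 6   # ~1 in 6 of those end up in the ER
death_rate = 1 / 1_000           # ~1 in 1,000 die as a result of the procedure

infections = biopsies_per_year * infection_rate      # ~60,000 infections per year
er_visits = infections * er_rate_among_infected      # ~10,000 ER visits per year
deaths = biopsies_per_year * death_rate              # ~1,000 deaths per year
```

In other words, tens of thousands of infections and roughly a thousand deaths each year, all from the screening cascade itself.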
Of those who test positive for elevated PSA, however, only roughly 1 in 4 will actually have cancer. What’s more, an estimated 15 to 38 percent of men with prostate cancer have a PSA concentration below 4.0, yielding a false negative result.
According to a systematic review by Cochrane, a collaborative of 37,000 contributors that conducts meta-analyses of medical research, in four out of five studies comprising over 340,000 patients, PSA screening had no significant impact on prostate cancer mortality rates.
A paper published this September, which received a good deal of media attention, found there may be some benefit to screening for a subgroup of patients. But, even overlooking the statistical issues that result from repeatedly analyzing more and more subgroups until a positive result is found — a technique commonly called p-hacking — the benefit seen in the subgroup (out of 1,000 patients screened, one fewer patient died from prostate cancer) was still quite small, and likely outweighed by the side effects experienced by those who received no benefit.
The irony here is that while this seemingly ineffective and potentially dangerous procedure is being performed millions of times a year in this country, a far less invasive and potentially more effective option exists: green tea.
Catechins, a type of antioxidant found naturally in green tea, have been shown to inhibit the growth of prostate tumors in mice, and at least one study has found a 30 percent reduction in prostate cancer risk for men who consume green tea daily, and up to a 60 percent reduction in men with “high grade” precancerous biopsies. That suggests that if, instead of spending all this money on PSA screening and trans-rectal biopsies, we got older men to drink more green tea or take green tea supplements, we could actually reduce mortality from prostate cancer significantly and protect our patients from the unnecessary risks of screening.
Hospitalization’s High Tab
Treating a patient often requires admitting them to the hospital, and unnecessary hospitalizations are a significant driver of wasteful spending. In 2015, according to the latest statistics from the Kaiser Family Foundation, the average cost of just a one-day stay in the hospital, across all hospital types, was over $2,000.
The decision to hospitalize a patient is based on a recommendation from the provider, but the final call often rests with the patient or the patient’s family. Pneumonia, the most common cause of hospitalization for children in the US and the second most common cause for adults, is an excellent example. ER doctors, who often see these patients first, are faced with two options: send the patient home or admit them. Sending a sick patient home has obvious risks — pneumonia is the most common cause of sepsis and septic shock — but keeping a patient in the hospital, in addition to the cost, can be a risk as well. Healthcare facilities are a common source of new infections — an estimated 1.7 million in 2002 alone — causing nearly 100,000 deaths annually.
So, how should doctors and patients decide, especially for those who are in the gray area of acuity between inpatient and outpatient treatment? Increasingly, they don’t have to.
Innovation by both payers (insurance companies) and research scientists has led to the growth of observation medicine, fully supported by payers, as a “third option.” Observation units allow providers to keep patients in dedicated spaces for 24 to 48 hours before deciding whether to admit them, saving costs and minimizing the risk of infection.
The Hidden Risks of Testing
On the morning of Monday, March 16, 2009, Natasha Richardson, the famous English actress and wife of Liam Neeson, fell during a ski lesson on a mountain 80 miles north of Montreal. Though she reportedly laughed off the fall, the ski patrol insisted on taking her down the mountain to seek medical care. At the hospital, she refused treatment and returned to her hotel, but within hours she was rushed back to the hospital and ultimately airlifted back to New York City. Two days later she died of an epidural hematoma — bleeding between the skull and the dura, the tough outer membrane surrounding the brain.
This is a story just about every ER doctor has heard at some point, often from a nervous patient. About 1.7 million Americans sustain traumatic brain injuries annually. Most often, these patients first seek care at a primary doctor’s office or urgent care facility, then they are sent to an ER to receive a CT scan, costing, altogether, a few thousand dollars and exposing the patient to cancer-causing radiation. The vast majority of these patients will end up getting a CT scan — about 8 percent of which will come back positive, though only about 1.5 percent will require immediate intervention.
This is important because “better safe than sorry” doesn’t apply when the testing itself poses long-term risks for patients. A single head CT, as best as we can currently tell, exposes a patient to enough radiation to increase their risk of cancer by one in 10,000, or about 170 new cancers caused annually by just this one test for this one type of injury.
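The figure of 170 new cancers follows directly from the numbers above. A rough sketch of the arithmetic, assuming essentially all of those patients receive a head CT:

```python
# The article's radiation math: ~1.7 million traumatic brain injuries
# a year, with a single head CT raising cancer risk by roughly
# 1 in 10,000 per scan.
tbi_patients_per_year = 1_700_000
added_cancer_risk_per_ct = 1 / 10_000

# If essentially all of these patients receive a head CT:
expected_new_cancers = tbi_patients_per_year * added_cancer_risk_per_ct  # ~170
```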
The key to reducing that number is finding a way to be nearly 100 percent sure — let’s say, missing fewer than 1 in 10,000 patients — that the patient doesn’t need further treatment. And that’s exactly what the Canadian CT Head Rule does. It turns out that 70 percent of patients meet the criteria for this rule, and if they do, their chance of needing surgery or another intervention is less than 1 in 50,000. The rule is based entirely on clinical criteria, meaning it can be applied with nothing more than a physician’s exam, allowing most patients to avoid a trip to the ER entirely.
Yet, very few doctors use this tool. Many have never heard of it, and many who have aren’t comfortable enough with its details to confidently apply it. More concerning, however, are the many doctors who understand the rule but ignore it, blaming patients’ expectations of treatment and — illogically — fear of liability, despite the greater risk to the patient posed by exposure to radiation.
A Surprisingly Simple Solution
Despite the staggering scale of the problem, there are numerous opportunities to support doctors in making decisions that will reduce unnecessary, costly, and often dangerous clinical waste. First and foremost, we should recognize that doctors aren’t alone in making most clinical decisions. Often weighing in alongside them are the many stakeholders in US healthcare: hospitals, which create policies and procedures; medical societies, which create guidelines; insurance companies, which decide what is reimbursable and at what rate; attorneys and courts, which shape the practices that can create liability; and patients themselves, who often demand treatments and sometimes “doctor shop” when denied them.
Each stakeholder has a role to play in helping our system become more judicious and evidence-based.
The solution is actually quite simple. First, we must shine a light on specific clinical decisions, studying each through rigorous, scientific trials led by academic researchers, scientists, and statisticians in healthcare. That effort must be supported both through funding and through a culture of greater respect for this work. This is how the evidence is “created.”
Collecting evidence alone, however, is not enough. Physicians and other clinicians on the front lines are typically quite different from the researchers who create evidence. A skilled physician must not only understand medicine, but also communicate effectively with patients, gather accurate histories, understand the physical exam, and put all of that information together to make a clinical decision. In short, most are not hard scientists, and many dislike math and statistics — a challenge when they’re asked to apply clinical rules that require an understanding of complex statistical concepts.
But a new generation of tools is rising to meet this challenge. These new references gather the known evidence and, increasingly, bring it right to the patient’s bedside. Sites such as Cochrane and theNNT, which my company, MDCalc, acquired last year, offer meta-reviews of the existing literature around specific clinical questions, and break them down into a bottom line. Other sites, including my own, put forward evidence-based clinical rules with clear explanations of how to best apply them directly at the patient’s bedside, a critical step in getting research into practice. If doctors aren’t 100 percent confident they understand a rule — including all of its ins and outs, when it works well and when it doesn’t — they won’t use it, and will fall back on what they’ve always done or what they were taught, often practices not based on updated scientific evidence.
A shift to evidence-based medicine would be a win for patients, who would receive improved care; clinicians, who would receive greater support in decision-making and greater protection from liability; hospitals, which would endure less strain and over-crowding; and insurers, which would no longer be on the hook for expensive, ineffective treatments. In fact, the only losers in an evidence-based system are those who provide unnecessary, overpriced, or fraudulent care. But changing our system to put evidence first will require a concentrated effort from patients, physicians, hospitals, and insurers, all working together.
With just 20 percent of the estimated savings from eliminating clinical waste — $200 billion (and possibly less) — we could cover every uninsured and under-insured person in America. With just 7 percent more, we could triple the budget of the National Institutes of Health to $93 billion, greatly accelerating medical discovery and innovation, while still leaving 73 percent of the savings to spare.
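The allocation above works out as follows. A back-of-the-envelope sketch, using the article’s own figures:

```python
# Allocating the estimated $1 trillion in annual clinical waste.
total_savings = 1_000_000_000_000              # ~$1 trillion eliminated per year

cover_uninsured = 0.20 * total_savings         # $200 billion: cover the uninsured
nih_tripling = 0.07 * total_savings            # $70 billion: roughly triple the NIH budget
remaining = total_savings - cover_uninsured - nih_tripling  # $730 billion to spare
```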
What would you do with the remaining $730 billion?