The following is adapted from a talk delivered on board the Crystal Symphony on July 25, 2018, during a Hillsdale College educational cruise to Hawaii.

Perhaps the most astonishing thing about modern medicine is just how very modern it is. More than 90 percent of the medicine being practiced today did not exist in 1950. Two centuries ago medicine was still an art, not a science at all. As recently as the 1920s, long after the birth of modern medicine, there was usually little the medical profession could do, once disease set in, other than alleviate some of the symptoms and let nature take its course. It was the patient’s immune system that cured him—or that didn’t.

It was only around 1930 that the power of the doctor to cure and ameliorate disease began to increase substantially, and that power has continued to grow nearly exponentially ever since. This new power to extend life, interacting with the deepest instinctual impulse of all living things—to stay alive—has had consequences that our society is only beginning to comprehend and address. Since ancient times, for example, doctors have fought death with all the power at their disposal and for as long as life remained. Today, the power to heal has become so mighty that we increasingly have the technical means to extend indefinitely the shadow, while often not the substance, of life. When doctors should cease their efforts and allow death to have its inevitable victory is an issue that will not soon be settled, but it cannot be much longer evaded.

Then there is the question of how to pay for modern medicine, the costs of which are rising faster than any other major national expenditure. In 1930, Americans spent $2.8 billion on health care—$23 per person and 3.5 percent of the Gross Domestic Product. In 2015 we spent about $3 trillion—$9,536 per person and more than 17 percent of GDP. Adjusted for inflation, this means that per capita medical costs in the United States have risen by a factor of 30 in 85 years.
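
For readers who want to check that arithmetic, here is a minimal sketch of the inflation adjustment behind the factor-of-30 figure. The per-person dollar amounts come from the paragraph above; the CPI values are approximate annual averages supplied only for illustration.

```python
# A minimal sketch of the arithmetic behind the "factor of 30" claim.
# The CPI values below are approximate annual averages (CPI-U) and are
# assumptions for illustration, not figures from the talk.

per_capita_1930 = 23        # dollars spent on health care per person, 1930
per_capita_2015 = 9_536     # dollars spent on health care per person, 2015

cpi_1930 = 16.7             # assumed approximate CPI-U, 1930
cpi_2015 = 237.0            # assumed approximate CPI-U, 2015

# Express 1930 spending in 2015 dollars, then compare.
per_capita_1930_in_2015_dollars = per_capita_1930 * (cpi_2015 / cpi_1930)
real_increase = per_capita_2015 / per_capita_1930_in_2015_dollars

print(f"1930 spending in 2015 dollars: ${per_capita_1930_in_2015_dollars:,.0f}")
print(f"Real per capita increase: roughly {real_increase:.0f}x")   # about 29x
```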

Consider the 1980s, when medical expenses in the U.S. increased 117 percent. Forty-three percent of the rise was due to general inflation. Ten percent can be attributed to the American population growing both larger and older (as it still is). Twenty-three percent went to pay for technology, treatments, and pharmaceuticals that had not been available when the decade began—a measure of how fast medicine has been advancing. But that still leaves 24 percent of the increase unaccounted for, and that 24 percent is due solely to an inflation peculiar to the American medical system itself.
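
That final 24 percent is simply the residual once the named causes are subtracted from the whole. A sketch of the bookkeeping:

```python
# A sketch of the accounting in the paragraph above: the share of the
# 1980s cost increase left over once the named causes are subtracted out.

total_increase_share = 100          # the whole 117% rise, treated as 100%
general_inflation = 43              # share due to general inflation
population_growth_aging = 10        # share due to a larger, older population
new_technology = 23                 # share due to new treatments and drugs

residual = total_increase_share - (general_inflation
                                   + population_growth_aging
                                   + new_technology)
print(f"Unexplained (medical-system-specific) share: {residual}%")  # 24%
```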

Whenever one segment of an economy exhibits, year after year, inflation above the general rate, and when there is no constraint on supply, then either a cartel is in operation or there is a lack of price transparency—or both, as is the case with American medical care.

So it is clear that there is something terribly wrong with how health care is financed in our country. And a consensus on how to fix the problem—how to provide Americans the best medicine money can buy for the least amount of money that will buy it—has proved elusive. But the history of American medical care, considered in the light of some simple but ineluctable economic laws, can help point the way. For it turns out that the engines of medical inflation were deeply, and innocently, inserted into the health care system just as the medical revolution began.

***

It was the Greeks—the inventors of the systematic use of reason that 2,000 years later gave rise to modern science—who first recognized that disease is caused by natural, not supernatural, forces. They reduced medicine to a set of principles, usually ascribed to Hippocrates but actually a collective work. In the second century, the Greek physician Galen, a follower of the Hippocratic School, wrote extensively on anatomy and medical treatment. Many of these texts survived and became almost canonical in their influence during the Middle Ages. So it is fair to say that after classical times, the art of medicine largely stagnated. Except for a few drugs—such as quinine and digitalis—and an improved knowledge of gross anatomy, the physicians practicing in the U.S. at the turn of the nineteenth century had hardly more at their disposal than the Greeks had in ancient times.

In 1850 the U.S. had 40,755 people calling themselves physicians, more per capita than the country would have in 1970. Few of this legion had formal medical education, and many were unabashed charlatans. This is not to say that medical progress was standing still. The stethoscope was invented in 1816. The world’s first dental school opened in Baltimore in 1839. The discovery of anesthesia in the 1840s was immensely important—but while it made extended operations possible, overwhelming postoperative infections killed many patients, so most surgery remained a last-ditch effort. Another major advance was the spread of clean water supplies in urban areas, greatly reducing epidemics of waterborne diseases, such as typhoid and cholera, which had ravaged cities for centuries.

Then finally, beginning in the 1850s and 1860s, it was discovered that many diseases were caused by specific microorganisms, as was the infection of wounds, surgical and other. The germ theory of disease, the most powerful idea in the history of medicine, was born, and medicine as a science was born with it. Still, while there was a solid scientific theory underpinning medicine, most of its advances in the late nineteenth and early twentieth centuries were preventive rather than curative. Louis Pasteur and others, using their new knowledge of microorganisms, could begin developing vaccines. Rabies fell to a vaccine in 1885, and vaccines for several diseases that were once the scourge of childhood, such as whooping cough and diphtheria, followed around the turn of the century. Vitamin deficiency diseases, such as pellagra, began to decline a decade later. When the pasteurization of milk began to be widely mandated around that time, the death rate among young children plunged. In 1891, the death rate for American children in the first year of life was 125.1 per 1,000. By 1925 it had been reduced to 15.8 per 1,000, and the life expectancy of Americans as a whole began a dramatic rise.

Hospital “Insurance”

One of the most fundamental changes caused by the germ theory of disease, one not foreseen at all, was the spread of hospitals for treating the sick. Hospitals have an ancient history, but for most of that history they were intended for the very poor, especially those who were mentally ill or blind or who suffered from contagious diseases such as leprosy. Anyone who could afford better was treated at home or in nursing facilities operated by a private physician. Worse, until rigorous antiseptic and later aseptic procedures were adopted, hospitals were a prime factor in spreading, not curing, disease. Thus, until the late nineteenth century, hospitals were little more than a place for the poor and the desperate to die. In 1873, there were only 149 hospitals in the entire U.S. A century later there were over 7,000, and they had become the cutting edge of both clinical medicine and medical research.

But hospitals had a financial problem from the very beginning of scientific medicine. By their nature they are extremely labor intensive and expensive to operate. Moreover, their costs are relatively fixed and not dependent on the number of patients being served. To help solve this problem, someone in the late 1920s had a bright idea: hospital insurance. The first hospital plan was introduced in Dallas, Texas, in 1929. The subscribers, some 1,500 schoolteachers, paid six dollars a year in premiums, and Baylor University Hospital agreed to provide up to 21 days of hospital care to any subscriber who needed it.

While this protected schoolteachers from unexpected hospital costs in exchange for a modest fee, the driving purpose behind the idea was to improve the cash flow of the hospital. Thus the scheme had an immediate appeal to other medical institutions, and it quickly spread. Before long, groups of hospitals were banding together to offer plans that were honored at all participating institutions, giving subscribers a choice of which hospital to use. This became the model for Blue Cross, which first operated in Sacramento, California, in 1932.

Although called insurance, these hospital plans were unlike any other insurance policies. Previously, insurance had always been used to protect only against large, unforeseeable losses, and came with a deductible. But the first hospital plans didn’t work that way. Instead of protecting against catastrophe, they paid all costs up to a certain limit. The reason, of course, is that they were instituted not by insurance companies, but by hospitals, and were primarily designed to generate steady demand for hospital services and guarantee a regular cash flow.

In the early days of hospital insurance, this fundamental defect was hardly noticeable. Twenty-one days was a very long hospital stay, even in 1929, and with the relatively primitive medical technology then available, the daily cost of hospital care per patient was roughly the same whether the patient had a baby, a bad back, or a brain tumor. Today, on the other hand, this “front-end” type of hospital insurance simply would not cover what most of us need insurance against: the serious, long-term, expensive-to-cure illness. In the 1950s, major medical insurance, which does protect against catastrophe rather than misfortune, began to provide that sort of coverage. Unfortunately it did not replace the old plans in most cases, but instead supplemented them.
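
The difference between the two designs can be put in a few lines. This is a minimal sketch with hypothetical dollar amounts; the $3,000 cap and deductible are assumptions chosen only to make the contrast visible.

```python
# A sketch contrasting the two insurance designs, with hypothetical figures.

def front_end_payout(bill, cap):
    """Old-style hospital plan: pays everything up to a fixed cap."""
    return min(bill, cap)

def major_medical_payout(bill, deductible):
    """Catastrophic coverage: the patient pays up to the deductible,
    the insurer pays whatever lies beyond it."""
    return max(0, bill - deductible)

for bill in (800, 5_000, 250_000):          # hypothetical hospital bills
    print(f"bill ${bill:>9,}: front-end pays ${front_end_payout(bill, 3_000):>9,}, "
          f"major medical pays ${major_medical_payout(bill, 3_000):>9,}")
```

The front-end plan handles the small misfortunes but leaves the patient exposed to the catastrophic bill; major medical does the reverse.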

The original hospital insurance also contained the seeds of two other major economic dislocations, unnoticed in the beginning, that have come to loom large. The first dislocation is that while people purchased hospital plans to be protected against unpredictable medical expenses, the plans only paid off if the medical expenses were incurred in a hospital. As a result, cases that could be treated on an outpatient basis instead became much more likely to be treated in the hospital—the most expensive form of medical care.

The second dislocation was that hospital insurance did not provide indemnity coverage, which is when the insurance company pays for a loss and the customer decides how best to deal with it. Rather than indemnification, the insurance company provided service benefits. In other words, it paid the bill for services covered by the policy, whatever the bill was. As a result, there was little incentive for the consumer of medical services to shop around. With someone else paying, patients quickly became relatively indifferent to the cost of medical care.

These dislocations perfectly suited the hospitals, which wanted to maximize the amount of services they provided and thereby maximize their cash flow. If patients are indifferent to the costs of medical services they buy, they are much more likely to buy more of them and the cost of each service is likely to go up. There is no price competition to keep prices in check.

Predictably, the medical profession began to lobby in favor of retaining this system. In the mid-1930s, as Blue Cross plans spread rapidly around the country, state insurance departments moved to regulate them and force them to adhere to the same standards as regular insurance plans. Had hospital insurance come to be regulated like other insurance, those offering it would have begun acting more like insurance companies, and the economic history of modern American medicine might have taken a very different turn. But that didn’t happen, largely because doctors and hospitals, by and for whom the plans had been devised in the first place, moved to prevent it from happening. The American Hospital Association and the American Medical Association worked hard to exempt Blue Cross from most insurance regulation, offering in exchange to enroll anyone who applied and to operate on a nonprofit basis.

The Internal Revenue Service, meanwhile, ruled that these companies were charitable organizations and thus exempt from federal taxes. Freed from taxes and from the regulatory requirement to maintain large reserve funds, Blue Cross and Blue Shield (a plan that paid physicians’ fees on the same basis as Blue Cross paid hospital costs) came to dominate the market in health care insurance, holding about half of the policies outstanding by 1940. In order to compete, private insurance companies were forced to model their policies along Blue Cross and Blue Shield lines. Thus hospitals came to be paid almost always on a cost-plus basis, receiving the cost of the services provided plus a percentage to cover the costs of invested capital. Any incentive for hospitals to be efficient and reduce costs vanished.
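
To see why the incentive to economize vanished, consider a toy sketch of cost-plus payment. The 8 percent markup and the dollar amounts are assumptions for illustration, not figures from the record.

```python
# A sketch of cost-plus reimbursement with assumed numbers: whatever a
# hospital spends, it is paid back with a markup, so higher costs mean
# higher revenue rather than lower profit.

def cost_plus_payment(cost, markup=0.08):   # 8% markup is an assumption
    return cost * (1 + markup)

for cost in (10_000_000, 12_000_000):       # hypothetical annual costs
    payment = cost_plus_payment(cost)
    print(f"costs ${cost:,} -> paid ${payment:,.0f} "
          f"(margin ${payment - cost:,.0f})")
```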

In recent years, hospital use has been falling steadily as the population has gotten ever more healthy and surgical procedures have become far less traumatic. The result is a steady increase in empty beds. There were over 7,000 hospitals in the U.S. in 1975, compared to about 5,500 today. But that reduction has not been nearly enough. Because of the cost-plus way hospitals are paid, they don’t compete for patients by means of price, which would force them to retrench and specialize. Instead they compete for doctor referrals, and doctors want lots of empty beds to ensure immediate admission and lots of fancy equipment, even if the hospital just down the block has exactly the same equipment. The inevitable result, of course, is that hospital costs on a per-patient per-day basis have skyrocketed.

Doctors, meanwhile, were paid for their services according to “reasonable and customary” charges. In other words, doctors could bill whatever they wanted to as long as others were charging roughly the same. The incentive to tack a few dollars on to the fee became strong. The incentive to take a few dollars off, in order to increase market share, ceased to exist. As more and more Americans came to be covered by health insurance, doctors were no longer even able to compete with one another.

Modern Developments

During World War II, another feature of the American health care system developed, one with large financial implications for the future: employer-paid health insurance. With twelve million working-age men in the armed forces and the economy in overdrive, the American labor market was tight in the extreme. But wartime wage and price controls prevented companies from competing for available talent by means of increased wages and salaries. They had to compete with fringe benefits instead, and free health insurance was tailor-made for this purpose.

The IRS ruled that the cost of employee health care insurance was a tax-deductible business expense, and in 1948 the National Labor Relations Board ruled that health benefits were subject to collective bargaining. Companies had no choice but to negotiate with unions about them, and unions fought hard to get them.

The problem was that company-paid health insurance further increased the distance between the consumer of medical care and the purchaser of medical care. When individuals have to pay for their own health insurance, they at least have an incentive to buy the most cost-effective plan available, given their particular circumstances. But beginning in the 1940s, a rapidly increasing number of Americans had no rational choice but to take whatever health care plan their employers chose to provide.

There is another aspect of employer-paid health insurance, unimagined when the system first began, that has had pernicious economic consequences in recent years. Insurers base the rates they charge, naturally enough, on the total claims they expect to incur. Auto insurers determine this by looking at what percentage of a community’s population had auto accidents in recent years and how much repairs cost in that community. This is known as community rating. They also look at the individual driver’s record, the so-called experience rating. Most insurance policies are based on a combination of community and experience ratings. And for most forms of insurance, the size of the community that is rated is quite large, eliminating the statistical anomalies that skew small samples. For example, a person isn’t penalized because he happens to live on a block with a lot of lousy drivers. But employer-paid health insurance is an exception. It can be based on the data for each company’s employees, allowing insurance companies to cherry-pick businesses with healthy employees, driving up the cost of insurance for everyone else. The effects of this practice are clear: 65 percent of workers without health insurance work for companies with 25 or fewer employees.
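
A rough sketch, with made-up numbers, shows why rating a very small group on its own claims experience makes premiums so volatile. The 50/50 blend of community and experience rating, the community rate, and the claim amounts are all assumptions for illustration.

```python
# A sketch of blended rating with made-up numbers: a premium that mixes a
# community rate with the group's own claims experience. In a very small
# group, one expensive illness moves the group's average claim sharply,
# and the blended premium with it.

def blended_premium(community_rate, group_avg_claim, experience_weight=0.5):
    """Weight between the community rate and the group's own experience.
    The 50/50 weight is an assumption for illustration."""
    return (1 - experience_weight) * community_rate + experience_weight * group_avg_claim

community_rate = 6_000                       # assumed average annual claim, large pool

healthy_group = [2_000] * 10                 # 10 employees, modest claims
one_sick = [2_000] * 9 + [120_000]           # same group, one serious illness

for name, group in (("healthy group", healthy_group), ("one sick employee", one_sick)):
    avg = sum(group) / len(group)
    print(f"{name}: avg claim ${avg:,.0f}, premium ${blended_premium(community_rate, avg):,.0f}")
```

With a pool of ten, a single serious illness more than doubles the premium for everyone in the group, which is exactly the incentive for insurers to cherry-pick small firms with healthy workers.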

By 1960, as the medical revolution was quickly gaining speed, the economically flawed private health care financing system was fully in place. Then two other events added to the gathering debacle.

In 1965, government entered the medical market with Medicare for the elderly and Medicaid for the poor. Both doctors and hospitals had fought tooth and nail to prevent what they called “socialized medicine” from gaining a foothold in the U.S. As a result of their strident opposition, when the two programs were finally enacted, they were structured much like Blue Cross and Blue Shield, only with government picking up much of the tab. And when Medicare and Medicaid proved a bonanza for health care providers, their vehement opposition quickly faded away. The two new systems greatly increased the number of people who could afford advanced medical care, and the incomes of medical professionals soared, roughly doubling in the 1960s.

But perhaps the most important consequence of these new programs was the power over hospitals they gave to state governments. State governments became the largest single source of funds for virtually every major hospital in the country, giving them the power to influence—or even dictate—the policy decisions made by these hospitals. As a result, these decisions were increasingly made for political, rather than medical or economic, reasons. To take one example, closing surplus hospitals or converting them to specialized treatment centers became much more difficult. Those adversely affected—the local neighborhood and hospital workers unions—would naturally mobilize to prevent it. Society as a whole, which stood to gain, would not.

Finally, there was the litigation explosion of the last 50 years. For every medical malpractice suit filed in the U.S. in 1969, 300 were filed in 1990. While reforms at the state level (notably in Texas) have reduced the number, lawsuits have sharply driven up the cost of malpractice insurance—a cost passed directly on to patients and their insurance companies. Neurosurgeons, even with excellent records, can pay as much as $300,000 a year for coverage. Doctors in less lawsuit-prone specialties are also paying much higher premiums and are forced to order unnecessary tests and perform unnecessary procedures to avoid being second-guessed in court.

***

Given this short history, it followed as the night follows day that medical costs began to rise over and above inflation, population growth, and the cost of medical advances. The results for the country as a whole are plain to see. In 1930 we spent 3.5 percent of American GDP on health care; in 1950, 4.5 percent; in 1970, 7.3 percent; in 1990, 12.2 percent. Today we spend nearly 18 percent. American medical care over this period has saved the lives of millions who could not have been saved before—life expectancy today is 78.6 years. It has relieved the pain and suffering of tens of millions more. But it has also become a monster that is devouring the American economy.

Is there a way out?

One possible answer, certainly, is a national health care service, such as that pioneered in Great Britain after World War II. But our federal government already runs three single-payer systems—Medicare, the Veterans Health Administration, and the Indian Health Service—each of which is in a shambles, noted for fraud, waste, and corruption. Why would we want to turn over all of American medicine to those who have proved themselves incompetent to run large parts of it?

A far better and cheaper alternative would be to reform the economics of the present system.

The most important thing to do, by far, is to require medical service providers to make public their inclusive prices for all procedures. Most hospitals keep their prices hidden in order to charge more when they can, such as with the uninsured. But some facilities do post their prices. The Surgery Center of Oklahoma, for instance, does so on its website. A knee replacement there will cost you $15,499, a mastectomy $6,505, a rotator cuff repair $8,260.

Once prices are known and can be compared, competition—capitalism’s secret weapon—will immediately drive prices towards the low end, draining hundreds of billions of dollars in excess charges out of the system. Posting prices will also force hospitals to become more efficient and innovative, in order to stay competitive.
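
To make the point concrete, here is a toy comparison using the knee-replacement price quoted above. The other providers and their prices are hypothetical, invented only to show what shopping on posted prices would look like.

```python
# A sketch of price comparison once providers post all-inclusive prices.
# The Oklahoma figure comes from the talk; the other providers and their
# prices are hypothetical, for illustration only.

posted_knee_replacement_prices = {
    "Surgery Center of Oklahoma": 15_499,   # posted price cited above
    "Hypothetical Hospital A":    41_000,   # assumed figure
    "Hypothetical Hospital B":    28_500,   # assumed figure
}

cheapest = min(posted_knee_replacement_prices, key=posted_knee_replacement_prices.get)
print(f"Lowest posted price: {cheapest} at ${posted_knee_replacement_prices[cheapest]:,}")
```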

Any politician who pontificates about reforming health care without talking about making prices public is carrying water for one or more of the powerful lobbyists that have stymied real reform, such as the American Hospital Association, the American Medical Association, and the health workers unions.

Second, we should reform how malpractice is handled. We should get rid of the so-called American rule, where both sides pay their own legal expenses regardless of outcome, and adopt the English rule—employed in the rest of the common-law world—where the loser pays the expenses of both sides.

Third, we need to ensure that the consumers of medical care—you and me—care about the cost of medical care. Getting patients to shop for lower-cost services is vital.

A generous health insurance policy more or less covers everything from a sniffle to a heart transplant. It shouldn’t. An insurance policy that covers routine care isn’t even an insurance policy, properly speaking—it is a very expensive pre-payment plan that jacks up premiums. Just as oil changes are not covered by automobile insurance, annual flu shots and scraped knees should not be covered by medical insurance. One way to achieve this would be for employers to provide major medical insurance plus a health savings account to take care of routine health care. If the money in the account is not spent on health care, it would be rolled over into the employee’s 401(k) account at the end of the year, giving him an incentive to shop wisely for routine medical care.
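
A toy sketch of the incentive such an arrangement would create, with assumed contribution and spending figures:

```python
# A sketch, with assumed numbers, of the rollover incentive: whatever
# routine-care money the employee does not spend rolls into retirement
# savings at year's end.

hsa_contribution = 2_500                 # assumed annual employer contribution
routine_spending_frugal = 900            # hypothetical: shops for low-cost routine care
routine_spending_careless = 2_400        # hypothetical: indifferent to price

for label, spent in (("frugal", routine_spending_frugal),
                     ("careless", routine_spending_careless)):
    rollover_to_401k = max(0, hsa_contribution - spent)
    print(f"{label}: spent ${spent:,}, rolled into 401(k): ${rollover_to_401k:,}")
```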

Finally, we need to get the practitioners of modern medicine to recognize an age-old reality: there is no cure for old age itself. Maybe someday we’ll be able to 3-D print a new body and have the data in our brain downloaded to it. But for the time being, when the body begins to break down systemically, we should let nature take its course.

There are enormous forces arrayed against these economically sensible reforms. Defenders of the status quo are the most potent lobbyists in Washington and the state capitals. This is not to mention the leftist proponents of single payer, who favor whatever will increase the power and scope of government. So it won’t be an easy fight. But at least we have one thing on our side—Stein’s law, named after the famous economist Herbert Stein: “If something cannot go on forever, it will stop.”