Chapter 3 How Did We Get Here?
A history of health care policy making in the United States could well start in 1791 with the ratification of the Bill of Rights. The Tenth Amendment to the U.S. Constitution reserves to the states (or to the people) those powers not delegated to the federal government, and health and education were not among the delegated powers. In 1910, the Supreme Court ruled that a federal workers’ compensation system was unconstitutional, and each state then established its own system. Hadler (2013) cites this as the regulatory template for the U.S. health insurance system as it developed much later. The issue played out again in June 2012, when the Supreme Court narrowly upheld part, but not all, of the Patient Protection and Affordable Care Act (ACA).
With minor exceptions, the federal government has limited its role to financing national programs of health and education rather than delivering services directly. Federal involvement has been justified under the general welfare clause of the Constitution and through the doctrine of implied powers articulated by Alexander Hamilton. Yet the federal share of health expenditures is fast approaching half the direct cost of care, even without counting individual tax deductions for health care spending and insurance premiums or corporate deductions for employee health insurance premiums. Tax subsidies, health insurance provided to government employees, and public dollars spent on health at all levels of government together account for close to 60% of all health spending.
This chapter looks at the coevolution of two separate, but linked, U.S. health systems—one for delivering medical care and one for financing it. Financing, especially the health insurance system, has impacted delivery systems; for instance, it has created incentives for overutilization or underutilization. Separate health insurance systems exist to cover expenses for dental, vision, and long-term care. Public health is financed primarily through state, local, and federal dollars obtained through taxes and fees.
3.1 CONTENDING VISIONS OF A SYSTEM FOR DELIVERING HEALTH CARE
Conflicts between different visions of how the health system should operate have dominated U.S. health care policy making. Different ideas have been more or less dominant at different times, yet no single viewpoint has prevailed since the 1960s, and all of the contending approaches remain on the table. Each ideology or philosophy falls along the continuum of alternatives represented in Figure 3-1, which presents five potential characterizations of the health care market. One, a provider monopoly, has been ruled out by our legal system, even though it may best describe the U.S. health system as it existed between World Wars I and II (Starr, 1982). A monopoly occurs when the market for a product or service is controlled by a single seller and is, in most cases, illegal. A monopsony exists when a single buyer controls a market. The extreme monopsony position is represented by the original version of the United Kingdom’s National Health Service. This model is not currently a realistic contender for adoption in the United States either.
Oligopolistic competition involves a relatively open market dominated by a few large sellers and is characteristic of many U.S. industrial sectors. Usually, three or four major sources for goods or services exist, and those sources control at least 40% of the market. In health care, two, three, or four providers often control state or local markets in the absence of a national market. National oligopolies appear to exist in many markets, such as pharmacy benefits management, Medicare managed care, replacement joints, imaging equipment, and pharmaceutical distribution. Two or three hospital groups often control most of the relevant local market. Concentration in hospital markets has been increasing sharply enough to become a concern of the Federal Trade Commission (FTC). Although available studies of hospital concentration can yield conflicting findings (Gaynor, 2006), there can be little doubt that concentration increases pricing power. In many state markets the same is true of health insurance providers. Yet it is widely believed that market power has shifted in recent years from insurers to providers, especially larger hospitals and their associated group practices.
Figure 3-1 Stages of health care market power.
Starr (2011) describes the process leading up to the passage of the ACA as one of reaching a compromise between administered competition and consumer-driven health care, but the legislation was crafted to be minimally invasive to encourage support from interests such as hospitals, pharmaceutical companies, and the insurance industry.
Administered competition implies that there are multiple suppliers but that the market is strongly influenced by a primary (but not exclusive) buyer, usually a government creation. It may involve universal coverage, a single payer, and/or a single underwriter.
Consumer-driven health care is more of a free-market approach that assumes that consumers’ choices will help shape the market if consumers have accurate and adequate information and are not subject to perverse incentives.
Perfect (free-market) competition assumes the following conditions:
• There are large numbers of buyers and sellers so that no one controls prices.
• All buyers and sellers have complete and accurate information about the quality, availability, and prices of goods.
• All products have available perfect substitutes.
• All buyers and sellers are free to enter or leave the market at will.
Free-market ideology has been playing out in health care even in the absence of a real free market. It goes by a number of names—consumer-driven health care is one example, as is market-driven health care. Supporters of this approach call for much greater transparency and more consumer choice and responsibility. It has been implemented, in part, through innovations such as health savings accounts (HSAs) and private options for Medicare. Insurance exchanges are another manifestation of this approach and were initially suggested by conservative think tanks that support a free-market approach.
3.2 A CHRONOLOGY
Centuries ago, medical care was a religious calling, not a scientific field. The term hospice was more representative of health care institutions than hospital. Gradually, health care has become both a calling and an industry. Well into the 20th century, U.S. physicians took whatever people could pay. Teaching institutions provided free care in return for allowing learners to work on those who could not pay. This system of combined fee-for-service and charity care existed before the Great Depression and World War II. From there, one can trace the development and gradual introduction of employment-based health insurance and prepaid group practices, leading to the establishment of health maintenance organizations (HMOs) and the industrialization of parts of the delivery system with the emergence of pharmaceutical giants, hospital chains, pharmacy chains, and large, integrated health care systems.
The Health “Insurance” Approach: Moving from Provider Monopoly Toward Provider/Insurer Oligopoly
Health insurance systems in the United States were implemented during the Great Depression to stabilize cash flows of providers. The concept existed in Europe much earlier (Starr, 1982). Many of the early systems in the United States evolved into the nonprofit Blue Cross/Blue Shield organizations.
Dr. Justin Ford Kimball, the administrator of Baylor Hospital in Dallas, is often credited with starting the U.S. medical insurance movement in 1929. He conceived of the idea of collecting “insurance premiums” in advance and guaranteeing the hospital’s services to members of subscribing groups. Furthermore, he found a way to involve employers in the administration of the plan, thereby reducing expenses associated with marketing and enrollment. The first employer to work with Baylor Hospital was the Dallas school district, which enrolled schoolteachers and collected the biweekly premium of 50 cents (Richmond & Fein, 2005).
About the same time, prepaid group practices began in Oklahoma, but they were bitterly opposed by local medical societies. Prepaid group practices, forerunners of today’s HMOs and the organizations identified in the ACA as accountable care organizations (ACOs), were also established to provide stable cash flows, but remained a relatively minor factor for decades because of medical society opposition.
State hospital associations controlled the Blue Cross organizations, and medical societies controlled the Blue Shield organizations. Well into the 1940s, laws in 26 states prohibited anyone other than medical societies from offering prepayment plans for physician services. In 1934, the American Medical Association (AMA) set forth conditions that it argued should govern private insurance for physician services (Starr, 1982, pp. 299–300):
• “All features of medical service in any method of medical practice should be under the control of the medical profession.” This included all medical care institutions, and thus, only the medical profession could determine their “adequacy and character.”
• Patients were to have absolute freedom to choose a physician.
• “A permanent, confidential relation between the patient and a ‘family physician’ must be the fundamental, dominating feature of any system.”
• No form of insurance was acceptable unless the patient paid the physician directly and the patient was the party reimbursed.
• Any plan in a locality must be open to all providers in a community.
• Medical assistance aspects of a plan must only be available to those below the “comfort level” of income.
The Group Health Association of Washington, DC, a prepaid group practice, was established in 1937, but it faced strong opposition. In 1943, the Supreme Court (AMA v. U.S., 1943), hearing a case brought by the Justice Department, upheld a lower court finding that the AMA and the DC Medical Society were guilty of “a conspiracy in restraint of trade under the Sherman Anti-Trust Act” and had hindered and obstructed Group Health “in procuring and retaining on its staff qualified doctors” and “from privilege of consulting with others and using the facilities of hospitals” (Richmond & Fein, 2005, p. 34).
World War II led to the industrialization of all available nonmilitary hands, breaking the Great Depression, inducing migration from rural areas to industrial cities, increasing the power of industrial unions, and inaugurating the era of big science. It also led to an era of optimism that Americans could accomplish anything they wanted if they worked together collectively (Strauss & Howe, 1991).
Many employers had established their own health services to support their employees and the war effort. Some of these services evolved into prepaid group practices. Most notably, Kaiser Industries’ medical department became the Kaiser Permanente system, which was opened up to outside enrollees after the war. Similar systems, such as the Health Insurance Plan in New York, which started in 1947, sprang up independently.
The government imposed wage and price controls during World War II. As labor became scarce and the war turned in the Allies’ favor, workers pressed for better compensation. The National War Labor Board held the line on wage increases but allowed improved benefits through collective bargaining. This led to the rapid expansion of health insurance among unionized industrial and government workers. This trend was also consistent with the provision of medical benefits to the vast military establishment. Unemployment fell from 17.2% in 1939 to 1.3% in 1944, and the real gross national product grew by 75% (Richmond & Fein, 2005). Health insurance costs were not yet a serious concern for corporate managers or the government. In 1948, the National Labor Relations Board ruled that refusal to bargain over health care benefits was an unfair labor practice.
Collective bargaining became the basic vehicle for determining health benefits. Because union officers were elected by their membership, they did not choose catastrophic coverage. Rather, they sought to maximize the visibility of benefits to their rank-and-file (voting) members. This led them to bargain for first-dollar coverage for everyone and to support lifetime limitations on benefits for those who were born with or developed catastrophic or high-cost chronic conditions. It also led them to emphasize employment-related coverage for dependents. They wanted most union members to experience regular payouts from their benefit packages. If the workers were young and healthy, they would still see payment for services such as obstetric and pediatric care for their family members. Employers did not much care how their workers divided the contract settlements among wages, health benefits, and other fringes. Employers saw health insurance as an inconsequential component of the overall labor costs established through collective bargaining. If workers and their families already had individual health coverage, they still gained a tax advantage if the employer paid the premium directly. Blue Cross enrollments tripled between 1942 and 1946, while enrollment in commercial health insurance plans more than doubled (Becker, 1955).
Following World War II, most presidential administrations suggested health care reforms of some sort. The Hill-Burton Act of 1946 expanded hospital facilities. President Truman suggested developing a system of universal health insurance based on the report of the President’s Commission on Health Needs of the Nation; however, his proposal was opposed by entrenched interests and was ignored when President Eisenhower was elected. In 1950, Congress approved a grant program to the states to pay providers for medical care for people receiving public assistance. Proposals for a Medicare-type system under Social Security appeared in Congress as early as 1957, but it took 8 years of debate for Congress and the White House to reach a consensus.
The 1960 Kerr-Mills Act created a program administered by the Welfare Administration and the states for “Medical Assistance to the Aged,” which also covered “medically needy” older persons who did not necessarily need to qualify for public assistance. Richmond and Fein (2005) described Kerr-Mills as an attempt to stave off Medicare-type programs.
The Joint Commission on Mental Illness and Health, formed under Eisenhower, did not issue its final report until 1961, under the Kennedy administration. It led to the passage of the Mental Retardation Facilities Construction Act of 1963 and the Community Mental Health Centers Act of 1963.
Early in his term, President Johnson announced the formation of a Commission on Heart Disease, Cancer, and Stroke. Its recommendations led to the Regional Medical Programs legislation to advance training and research. Congress, however, added a provision that this work was not to interfere in any way with “patterns and methods of financing medical care, professional practice, or the administration of any existing institutions” (Richmond & Fein, 2005, p. 44).
While the Medicare debate continued, Congress passed many health measures as part of Johnson’s War on Poverty. Given the highly visible opposition of organized medicine, the health components of these new programs were housed outside of the U.S. Public Health Service. For example, the Office of Economic Opportunity started neighborhood health centers, and its Head Start program provided health assessment and health care components for children.
When the Johnson administration finally secured passage of the Social Security Amendments of 1965, it accommodated AMA concerns by offering three separate programs: (1) Medicare Part A, which provided hospital coverage for most older persons; (2) Medicare Part B, a voluntary supplementary medical insurance program; and (3) Medicaid, which expanded the Kerr-Mills program to help with out-of-pocket expenses such as nursing home care and drugs and extended potential eligibility to families with children, the blind, and the disabled under the Welfare Administration. Starr (2011) cites this set of programs as the beginning of the “policy trap” that haunts us today:
The key elements of the trap are a system of employer-provided insurance that conceals true costs from those who benefit from it; targeted government programs that protect groups such as the elderly and veterans, who are well organized and enjoy wide public sympathy and believe, unlike other claimants, that they have earned their benefits; and a financing system that has expanded and enriched the healthcare industry, creating powerful interests averse to change. (p. 123)
There were other compromises in the Social Security Amendments. For example, at the time, hospital-based physicians were being placed on salary so that hospitals could use some of their fee revenue to cover the capital costs of their practices. The 1965 Medicare bill specifically required that anesthesiologists, radiologists, and pathologists be paid directly, not through the hospital. That law also stated, “Nothing in this title shall be construed to authorize any federal officer or employee to exercise any supervision or control over the practice of medicine.” Some have questioned whether the government’s current 1.5% pay-for-performance bonus program violates this provision (Pear, 2006).
Bodenheimer and Grumbach (2005) labeled the years 1945 to 1970 as those of the “provider-insurer pact” (p. 167). Starr (1982) argued that the period before 1970 was characterized by an accommodation between the in