How good was the problem-based approach to quality assessment developed in the early 1970s by Kaiser Permanente’s pioneers Len Rubin, MD, and Sam Sapin, MD? Soon after its unveiling, the Comprehensive Quality Assurance System was to be put to the test.
In 1979, at the behest of the federal Office for Health Maintenance Organizations (HMOs), the first incarnation of the National Committee for Quality Assurance (NCQA) was formed. Sponsored by the Group Health Association of America and the American Association of Foundations for Medical Care, the committee invited Sapin and Rubin to join.
In short order, the committee adopted Rubin’s problem-focused review method. NCQA’s emphasis was on identifying and correcting problems, and traditional audits were not required, reported Sapin, who served NCQA as a board member and surveyor from 1980 to 1987.
Sapin and Rubin knew the review method worked because they had used it to evaluate KP care in both Northern and Southern California. The KP scheme had two levels: first, identifying possible trouble spots by a variety of means and judging the problems against 56 monitoring criteria; and second, fixing the problems through process change.
Sapin described the Southern California Permanente Medical Group regional quality reviews of the 1980s: “The program began with a modest number of criteria, and regular reports were distributed to chiefs of service, medical directors and administrators. Medical centers were identified only by code number. The results were enclosed in a bright yellow folder. We hoped to put the recipients in a receptive frame of mind for their easy-to-recognize quality of care monitoring report.”
“During the 1980s, these regular reports appeared to generate more quality assurance activity than did the previous classic (traditional) medical audits,” Sapin said.
NCQA floundered in the early 1980s due to the withdrawal of financial support. “NCQA’s status is presently precarious unless the parent organizations, the HMOs which are surveyed and some of the states, provide funds for its operation,” Sapin reported to the KP board of directors in 1983.
Even though member HMOs and the Office of HMOs in Washington, D.C., were satisfied with the surveys, there was an undercurrent pushing for a review agency independent of HMOs. James Doherty, CEO of the Group Health Association of America for 15 years, said in 1996, “HMOs needed to subject their operations to external review by an independent quality assurance body.”
In 1990, NCQA secured funding to reconfigure itself as an independent agency, with a $308,000 grant from the Robert Wood Johnson Foundation and matching funds from HMOs. The board was reconstituted with 20 members, the majority representing purchasers of care (largely employers), health plans or consumers.
Six physicians, including four medical directors of managed care plans, and Dr. Thomas R. Reardon, a trustee of the American Medical Association, also served on the new NCQA board in the 1990s, according to a 1996 New England Journal of Medicine (NEJM) article by John K. Iglehart, then NEJM national correspondent and editor of the Health Affairs journal. (Iglehart was KP's vice president of government relations in Washington, D.C., from 1979 to 1981.)
The author notes, “Although strong ties still exist (with managed care leaders), the NCQA is a conduit through which employers apply pressure on health plans to continually raise their quality horizons. This pressure creates a tension that reverberates throughout the NCQA’s relationship with health plans.”
With the reconfigured NCQA, Kaiser Permanente and six other large employers went to work to fashion quality performance measures. These measures, which cover inpatient and outpatient care, would come to be known as HEDIS, or the HMO Employer Data and Information Set.
In the 1993 Quality Agenda in Action report, KP CEO Dr. David Lawrence wrote: “HEDIS is the basis for ... a national effort of 30 major managed health care plans and a group of consumers and business representatives ... to develop a system that will enable (purchasers) to compare health plans on the basis of quality indicators.”
NCQA released its initial set of quality measures in 1991, and about 330 health plans measured their performance according to the HEDIS system and reported their results to employers, Iglehart reported in his NEJM article.
He wrote: “The NCQA standards are evolving ... A recent version (HEDIS 2.5) incorporated more than 60 performance indicators that cover quality of care, access to and satisfaction with care, the use of services, finances and management. Most indicators, however, assess administrative performance or utilization rather than quality of care.
“The nine quality measures focus on process, particularly the use of preventive services, which can be readily measured. Only two indicators measure a health outcome (low birth weight) or a proxy for a health outcome (hospitalization rates for patients with asthma),” Iglehart wrote.
On its Web site today, NCQA touts its HEDIS system as the industry standard for comparison of health care providers. “HEDIS allows for standardized measurement, standardized reporting and accurate, objective side-by-side comparisons ... We work to make sure that all measures address important issues, are scientifically sound and are not overly burdensome or costly to implement.”
Examples of current HEDIS measures include: advising smokers to quit; antidepressant medication management; breast cancer screening; cervical cancer screening; children and adolescents’ access to primary care physicians; childhood and adolescent immunization status; comprehensive diabetes care; controlling high blood pressure; and prenatal and postpartum care.