The telephone call this morning was enjoyable. I want to offer the following observations as a summary of my understanding of this project. Below is a link to SETMA’s 57-page application to the Robert Wood Johnson Foundation’s Learning from Exemplar Ambulatory Practices (LEAP) study, which was conducted by the MacColl Institute under the direction of Michael Parchman. The three hyperlinks following the link to the entire document explain SETMA’s Model of Care, our Quality Metrics Philosophy, and our understanding of the Limitations of Quality Metrics. Everything I addressed this morning is founded upon the concepts explained at these links:
The issues which I raised today have to do with what I perceive as the “weakness” of the PQRI (2006, voluntary) and PQRS (2011, mandatory) programs and their successor, the Quality category of the Merit-Based Incentive Payment System (MIPS). In SETMA’s study and evaluation of the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA), we discovered that SETMA’s Four Strategies for Healthcare Transformation exactly parallel the four categories of the MIPS scoring system (for the full document see: SETMA.com | Letters | Four Categories Defined by MIPS Correlate with SETMA’s Four Strategies for Transforming SETMA and Healthcare -- Four Categories Defined by MIPS Correlate with SETMA’s Four Strategies for Transforming SETMA and Healthcare). The following is taken from that document and shows the parallel between SETMA’s work and MIPS:
- The methodology of healthcare must be electronic patient management - MIPS Advancing Care Information (an extension of Meaningful Use with a certified EMR)
- The content and standards of healthcare delivery must be evidence-based medicine - MIPS Quality (this is the extension of PQRI, which in 2011 became PQRS and which in 2019 will become MIPS -- evidence-based medicine has the best potential for legitimately effecting cost savings in healthcare while maintaining quality of care)
- The structure and organization of healthcare delivery must be the patient-centered medical home - MIPS Clinical Practice Improvement Activities (this MIPS category is met fully by Level 3 NCQA PC-MH Recognition; SETMA’s recognition as a Tier III PC-MH extends from 2010 to 2019)
- The payment methodology of healthcare delivery must be that of capitation with additional reimbursement for proven quality performance and cost savings - MIPS Cost (measured by the risk-adjusted expected cost of care against the actual cost of care per fee-for-service Medicare and Medicaid beneficiary)
SETMA’s entire evaluation of MACRA and MIPS, including our assessment of their weaknesses and requirements, can be reviewed at: SETMA.com | Letters | Complete Summary and Annotated List of All 24 Articles Discussing SETMA’s Work in Thinking About and Preparing for MACRA and MIPS. Link: Complete Summary and Annotated List of All 24 Articles Discussing SETMA’s Work in Thinking About and Preparing for MACRA and MIPS.
The above is the foundation for the following comments:
- If the ABFM (American Board of Family Medicine) is going to include continuous reporting of provider performance on quality metrics as part of the MOC process, it will be impossible to evaluate quality and safety issues unless there is a standardized benchmark against which providers are measured.
- If analytics are going to be part of this reporting process, particularly in regard to “outcomes measures,” reporting must include not only the mean but also the standard deviation. After several years of outcomes reporting, most practices will achieve a mean at or near the performance goal, but even when the mean equals the goal, it does not identify how many patients are not being treated to goal. The standard deviation does. Reporting both reduces the risk that the mean obscures large groups of patients who are not being treated effectively, and it allows a more legitimate assessment of a practice’s effectiveness in eliminating ethnic, age, gender, and socio-economic disparities in care. (A brief worked sketch of this point appears later in this letter, after the paragraph on risk adjustment.)
- A potential perpetuation of the weakness of the PQRI and PQRS programs into MIPS is the dependence upon 9 metrics for 2017 and ultimately 6 metrics for 2019. I would argue that such a small sample of quality is unlikely to demonstrate a real improvement in quality. In SETMA’s Model of Care, we address the “Cluster” and the “Galaxy” of quality metrics.
- The problem with most quality metric sets is that they are measured retrospectively and not “at the point of care.” Even the AMA’s quality metric effort reflects this need: it is entitled the Physician Consortium for Performance Improvement, and it promotes “point of care” quality assessment.
- The dynamic behind the Performance Improvement Continuing Medical Education (PI-CME) program is a three-step process: measuring performance, designing improvement methods, and re-measuring to see whether improvement has been effected. The problem is that the logical and imperative fourth step has never been included: a clinical decision support capability for sustaining and measuring the improvement beyond the study. Our goal should never be a temporary improvement in quality and safety but a permanent one.
- Objections to quality metrics:
- How do you account for quality in patients who change doctors frequently? As a public health issue this is an important question and one which the ABFM is in a unique position to study. Once analytic algorithms are created, it is a simple process to look at subsets of your population. For instance, if you are looking at “outcomes metrics,” you can look at patients who have been with the practice less than one year, more than one but less than three years, and more than three years (a minimal sketch of this kind of stratification appears after the next paragraph). Contrasting these groups with the “outcomes” metrics of all patients in your registry at the time of measurement can help you control for how long you have been treating the patient.
Solutions to this problem, not just its identification and/or validation, are the ideal. Transitions-of-care documents, health information exchanges, and other means of establishing and maintaining continuity of care across free-standing, independent practices could accomplish this. Doing so would expand the PC-MH model into a real PC-Medical Neighborhood.
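To illustrate the kind of subset analysis described above, here is a minimal sketch in Python. It assumes a hypothetical registry extract with an enrollment date and a binary at-goal flag per patient; the field names and values are illustrative, not SETMA’s actual schema or data.

```python
from datetime import date
from statistics import mean

# Hypothetical registry rows: when the patient enrolled and whether the
# outcome metric (e.g., HbA1c at goal) was met at the time of measurement.
# Field names and values are illustrative only.
registry = [
    {"patient_id": 1, "enrolled": date(2016, 3, 1), "at_goal": True},
    {"patient_id": 2, "enrolled": date(2013, 7, 15), "at_goal": True},
    {"patient_id": 3, "enrolled": date(2015, 1, 10), "at_goal": False},
    # ... remaining registry rows
]

def tenure_cohort(enrolled, as_of):
    """Assign a patient to a length-of-care cohort as of the audit date."""
    years = (as_of - enrolled).days / 365.25
    if years < 1:
        return "< 1 year"
    if years < 3:
        return "1-3 years"
    return ">= 3 years"

def outcomes_by_cohort(rows, as_of):
    """Percent of patients at goal, reported separately for each tenure cohort."""
    cohorts = {}
    for row in rows:
        cohort = tenure_cohort(row["enrolled"], as_of)
        cohorts.setdefault(cohort, []).append(row["at_goal"])
    return {name: 100 * mean(flags) for name, flags in cohorts.items()}

print(outcomes_by_cohort(registry, as_of=date(2016, 12, 31)))
```

Contrasting these cohort-level rates with the rate for the whole registry is the comparison described above.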
To be sure, the risk-adjustment of expected cost is not perfect, but it is much better than it was under the Average Annual Actual Per Capita Cost (AAAPCC) adjustment, which accounted for only 1% of the difference in the cost of patient care.
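Returning to the earlier point about reporting the standard deviation alongside the mean, the sketch below uses two hypothetical panels of HbA1c values (illustrative numbers only) that share the same mean. The standard deviation, together with a simple count of patients above goal, exposes the difference the mean hides.

```python
from statistics import mean, stdev

# Two hypothetical panels of HbA1c results (illustrative values only).
# Both panels have the same mean, but very different spreads.
tight_panel = [6.9, 6.9, 7.0, 7.0, 7.0, 7.2]
loose_panel = [5.2, 5.6, 6.2, 7.8, 8.6, 8.6]

GOAL = 7.0  # treat-to-goal threshold used for this illustration

for name, panel in [("tight", tight_panel), ("loose", loose_panel)]:
    not_at_goal = sum(1 for value in panel if value > GOAL)
    print(
        f"{name}: mean={mean(panel):.2f}, "
        f"std dev={stdev(panel):.2f}, "
        f"not at goal={not_at_goal} of {len(panel)}"
    )
```

Both panels report a mean of 7.0, yet the wide-spread panel has three times as many patients who are not at goal; that is exactly what the standard deviation reveals and the mean alone conceals.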
- We briefly addressed the elimination of a metric when it has been fulfilled successfully for a number of years. The benefit of electronics is that you can eliminate something from a provider’s workflow without eliminating it from your auditing process. That way, if the audit shows a slippage in performance, the metric can immediately be returned to the workflow (a brief sketch of such an audit check appears after the “Medical GPS” discussion below). The illustration was given of Japan, which instituted a pertussis immunization program that virtually eliminated whooping cough. Over a ten-year period, vigilance waned, the immunization rate dropped to 9%, and the case rate and death rate increased.
- It is imperative that quality metrics not be intrusive to the patient visit or to the provider’s workflow. Metrics ought to be met as “incidental” to excellent care, not as the “intention” of excellent care; they should always be seen as a “guide” to excellence and not as excellence itself. Remember Abraham Lincoln’s statement: “If we could first know where we are, and whither we are tending, we could then better judge what to do, and how to do it.” Used as a Medical GPS service, metrics can tell us:
- Where we want to go
- Where we are
- How to get there
The problem with most providers is that, through evidence-based medicine, they know where they want to go, but they do not know where they are. A one-legged GPS service is worthless for reaching your goal.
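Returning to the earlier point about removing a fulfilled metric from the provider’s workflow while keeping it in the audit, the following is a minimal sketch of such an audit check, assuming a hypothetical history of yearly performance rates; the threshold, names, and numbers are illustrative, not an actual SETMA rule.

```python
# Hypothetical audit check: a metric removed from the visit workflow is still
# audited, and is flagged for reinstatement if performance slips below its
# historical baseline by more than a tolerance. Names and numbers are
# illustrative only.

SLIPPAGE_TOLERANCE = 0.05  # allow a 5-percentage-point drop before reinstating

def should_reinstate(prior_rates, current_rate):
    """Flag the metric for return to the workflow if performance has slipped."""
    baseline = sum(prior_rates) / len(prior_rates)  # average of prior audited years
    return current_rate < baseline - SLIPPAGE_TOLERANCE

# Example: performance held near 95% while the prompt was in the workflow,
# then slipped after its removal -- the check flags it for reinstatement.
prior_years = [0.95, 0.96, 0.94]
print(should_reinstate(prior_years, current_rate=0.88))  # True
```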
This summary is not complete because I did not take notes and am operating from memory, but it captures the major ideas I discussed. One last comment about the weakness of MIPS: if the distinction between a penalty, no change, and a bonus is always based on your aggregate quality and cost score and on whether you are less than one standard deviation below or above the mean, it could become possible at some point in the future for practices performing at a very high level to be penalized. The better course is to have a standard: still assess relative performance, but base the payment system on a standard of excellence toward which practices can move.
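To make the arithmetic behind that concern concrete, here is a minimal sketch using hypothetical composite scores in which every practice performs at a high absolute level; under a purely relative cut-off of one standard deviation below the mean, some of those high performers are still flagged for a penalty. The scores and the cut-off rule are illustrative assumptions, not the published MIPS formula.

```python
from statistics import mean, stdev

# Hypothetical composite quality-and-cost scores (0-100). Every practice
# here performs at a high absolute level; the values are illustrative only.
scores = [93, 94, 95, 95, 96, 96, 97, 97, 98, 99]

mu, sigma = mean(scores), stdev(scores)
penalty_cutoff = mu - sigma  # purely relative threshold: one SD below the mean

penalized = [score for score in scores if score < penalty_cutoff]
print(f"mean={mu:.1f}, std dev={sigma:.1f}, cutoff={penalty_cutoff:.1f}")
print(f"penalized despite high absolute performance: {penalized}")
```

In this illustration, practices scoring 93 and 94 out of 100 still fall below the relative cut-off; an absolute standard of excellence, as suggested above, would avoid penalizing them.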
C.E.O. SETMA
www.jameslhollymd.com
Adjunct Professor
Family & Community Medicine
University of Texas Health Science Center
San Antonio School of Medicine
Clinical Associate Professor
Department of Internal Medicine
School of Medicine
Texas A&M Health Science Center