Home Health Compare Star Ratings

What Are Star Ratings?

Consumer research has shown that summary quality measures and the use of symbols, such as stars, to represent performance are valuable to consumers. Star ratings can help consumers more quickly identify differences in quality and make use of the information when selecting a health care provider. In addition to summarizing performance, star ratings can also help home health agencies (HHAs) identify areas for improvement. They are useful to consumers, consumer advocates, health care providers, and other stakeholders, when updated regularly to present the most current information available.

Why Star Ratings for Home Health?

The Affordable Care Act calls for transparent, easily understood information on provider quality to be publicly reported and made widely available. In order to provide home health care consumers with a summary quality measure in an accessible format, CMS proposes to publish a star rating for home health agencies on Home Health Compare starting in 2015. This is part of CMS’ plan to adopt star ratings across all Medicare.gov Compare websites. Star ratings are currently publicly displayed on Nursing Home Compare, Physician Compare, and the Medicare Advantage Plan Finder, and they are scheduled to be displayed on Dialysis Facility Compare and Hospital Compare in 2015.

Public reporting is a key driver for improving health care quality by supporting consumer choice and incentivizing provider quality improvement. To help consumers and their families make choices about where they receive home health care, CMS currently reports 27 process, outcome, and patient experience of care quality measures on the Home Health Compare website. The proposed star rating would become an additional measure available on the website. Several alternative methods of calculating the star rating were considered, borrowing from the methods used for other care settings, such as nursing homes, dialysis facilities, and managed care. After consideration of these alternatives, we propose the methodology below for Home Health Compare (HHC) Star Ratings.

Selecting Measures for Inclusion in Home Health Compare Star Ratings

The star rating methodology proposed for use on Home Health Compare includes 10 of the 27 currently reported process and outcome quality measures. Proposed measures included in star ratings were chosen based on the following criteria:

  1. The measure should apply to a substantial proportion of home health patients and have sufficient data to report for a majority of home health agencies.
  2. The measure should show a reasonable amount of variation among home health agencies and it should be possible for a home health agency to show improvement in performance.
  3. The measure should have high face validity and clinical relevance.
  4. The measure should be stable and not show substantial random variation over time.

Based on these criteria, the proposed measures below were selected for inclusion. Appendix A provides more detail about the measure selection process.

Process Measures:

  • Timely Initiation of Care
  • Drug Education on all Medications Provided to Patient/Caregiver
  • Influenza Immunization Received for Current Flu Season
  • Pneumococcal Vaccine Ever Received

Outcome Measures:

  • Improvement in Ambulation
  • Improvement in Bed Transferring
  • Improvement in Bathing
  • Improvement in Pain Interfering With Activity
  • Improvement in Dyspnea
  • Acute Care Hospitalization


Which HHAs Will Receive Star Ratings?

All Medicare-certified HHAs will be eligible to receive a star rating. Currently, HHAs must have at least 20 complete quality episodes for data on a measure to be reported on Home Health Compare. Complete quality episodes consist of paired start-of-care (or resumption-of-care) and end-of-care OASIS assessments. Episodes must have a discharge date within the 12-month reporting period, regardless of the admission date. To have a star rating computed for Home Health Compare, HHAs must have reported data for at least 6 of the 10 measures used in the calculation.
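
As an illustration only, the short Python sketch below encodes the reporting thresholds described above. The data structure (a mapping from measure name to the agency’s count of complete quality episodes) and the function names are assumptions for illustration, not a CMS specification.

```python
# Sketch of the eligibility thresholds described above; illustrative only.
MIN_EPISODES_PER_MEASURE = 20   # complete quality episodes needed to report a measure
MIN_REPORTED_MEASURES = 6       # of the 10 measures used in the star rating

def reportable_measures(episode_counts):
    """Measures with at least 20 complete quality episodes for this agency.
    `episode_counts` maps measure name -> number of complete episodes."""
    return {m for m, n in episode_counts.items() if n >= MIN_EPISODES_PER_MEASURE}

def eligible_for_star_rating(episode_counts):
    """True if the agency reports at least 6 of the 10 star rating measures."""
    return len(reportable_measures(episode_counts)) >= MIN_REPORTED_MEASURES
```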

Planned Star Rating Calculation

The proposed methodology for calculating the star rating is based on a combination of individual measure rankings and the statistical significance of the difference between an individual HHA’s performance on each proposed measure (risk-adjusted, if an outcome measure) and the performance of all HHAs. An HHA’s quality measure values are compared to national averages, and its ratings are adjusted to reflect the differences relative to other agencies’ quality measure values. These adjusted ratings are then combined into one overall quality star rating that summarizes the 10 individual measures.

The specific steps are as follows:

  1. First, all HHAs’ scores on each of the 10 proposed quality measures are sorted from low to high and divided into five approximately equal-size groups (quintiles) of agencies. For all proposed measures except acute care hospitalization, a higher measure value means a better score.
  2. The HHA’s score on each proposed measure is then assigned its quintile location (e.g., bottom fifth, middle fifth) as a preliminary rating.
  3. The preliminary rating is then adjusted according to the statistical significance of the difference between the agency’s individual quality measure score and the national average for that quality measure. Because all the proposed measures are proportions (e.g., proportion of patients who improved in getting in and out of bed), the calculation uses a binomial significance test.
    • If the agency’s preliminary rating for a measure is anything other than 3, and the binomial test of the difference yields a probability value greater than .05 (meaning not significantly different from the national average), the preliminary rating is adjusted to the next level closer to the middle category (3). In other words, if there is no significant difference from the national average, a rating of 1 becomes 2, 2 becomes 3, 4 becomes 3, and 5 becomes 4.
  4. For each HHA, the adjusted preliminary ratings are then averaged across all 10 proposed measures to obtain an overall average rating for the agency. The overall average rating is then translated into a star rating for reporting on HHC, using the following algorithm:

 

Agency Average of Adjusted Ratings Across All Measures | HHC Star Rating
4.50 to 5.00 | 5 stars
3.50 to 4.49 | 4 stars
2.50 to 3.49 | 3 stars
1.50 to 2.49 | 2 stars
1.00 to 1.49 | 1 star
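
To make the steps above concrete, the following is a minimal Python sketch of the proposed calculation as described in this fact sheet. It is illustrative only, not the official CMS computation: the function names are ours, quintile boundaries and ties are handled in a simplified way, and it assumes each agency’s score on a measure is available as a numerator and denominator so that the binomial significance test can be applied.

```python
# Illustrative sketch of the proposed star rating steps; not official CMS code.
# Assumes each agency's score on a measure is a proportion with a known
# numerator k and denominator n (e.g., patients improved / patients eligible).
from statistics import mean
from scipy.stats import binomtest

# Measures where a lower rate is better; pass lower_is_better=True for these.
LOWER_IS_BETTER = {"Acute Care Hospitalization"}
STAR_CUTPOINTS = [(4.50, 5), (3.50, 4), (2.50, 3), (1.50, 2), (1.00, 1)]

def quintile_rating(value, all_values, lower_is_better=False):
    """Steps 1-2: preliminary 1-5 rating = which fifth of agencies the score
    falls in (5 = best-performing fifth)."""
    if lower_is_better:
        n_worse = sum(v > value for v in all_values)   # higher rate is worse
    else:
        n_worse = sum(v < value for v in all_values)
    return min(5, int(5 * n_worse / len(all_values)) + 1)

def adjust_toward_middle(prelim, k, n, national_rate, alpha=0.05):
    """Step 3: move a non-3 rating one level toward 3 when the agency's
    proportion k/n is not significantly different from the national rate."""
    if prelim == 3:
        return prelim
    if binomtest(k, n, national_rate).pvalue > alpha:  # not significant
        return prelim + 1 if prelim < 3 else prelim - 1
    return prelim

def overall_star_rating(adjusted_ratings):
    """Step 4: average the adjusted ratings and map the average to 1-5 stars."""
    avg = mean(adjusted_ratings)
    for cutpoint, stars in STAR_CUTPOINTS:
        if avg >= cutpoint:
            return stars
    return 1
```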

Distribution of Home Health Compare Star Ratings

This proposed methodology was applied using Home Health Compare data for calendar year 2013. Only agencies that had data for at least 20 patients on six of the ten proposed measures were included in our analysis. Table 1 shows the distribution of star ratings among the HHAs when the proposed methodology is applied. Fewer than one percent of agencies received an overall rating of one star, while a little over two percent received five stars.

Table 1: Distribution of Overall Quality Measure Star Ratings, CY 2013

Quality Measure Star Rating | Frequency | Percent
1 | 93 | 0.97
2 | 2038 | 21.18
3 | 5006 | 52.02
4 | 2278 | 23.67
5 | 208 | 2.16

Appendix B provides information about the stability of HHC star ratings over time when using the proposed methodology.

Next Steps

CMS plans to solicit stakeholder feedback on the proposed star rating methodology, including the measures proposed for inclusion. This may include future Open Door Forums to continue the stakeholder dialogue. In addition, there will be a Frequently Asked Questions document posted on the CMS website, which will be updated based on questions received. The star ratings methodology will be finalized based on feedback received and additional technical analysis.

Appendix A: Evaluation of Measures for Inclusion in the Star Rating Calculation

Twenty-two of the twenty-seven measures currently reported on Home Health Compare were considered for inclusion in the proposed star ratings. The criteria used to evaluate measures for inclusion in the star rating calculation were:

  1. Applicability to a substantial proportion of home health patients, and reported for a majority of home health agencies;
  2. A reasonable amount of variation among home health agencies, and potential for improvement in performance;
  3. Face validity and clinical relevance; and
  4. Stability over time.

Table A.1 lists the 22 measures considered for inclusion, with the following relevant statistics: the number of HHAs with data; the number of patient episodes of care to which each measure is applicable; national rates and the distribution among home health agencies; and stability, as measured by the correlation of home health agency scores between 2012 and 2013.

Most of the candidate measures met the criterion of applicability to the home health population and reportability for most home health agencies. One process measure, “Heart Failure Symptoms Addressed,” and one outcome measure, “Surgical Wound Healing,” did not meet an acceptable threshold for this criterion.

The criterion of variability in performance and opportunity to show improvement was assessed by comparing the 20th percentile and 80th percentile columns shown in Table A.1. Of the thirteen process measures, eight had very little room for improvement, as indicated by an average home health agency rate of ninety-five percent or more, a similarly high 20th percentile value, and an 80th percentile value of 100 percent. The process measure “Foot Care and Education for Patients with Diabetes” was almost as “topped out” as those eight measures and was marginal with respect to the number of home health agencies with enough data to report. Based on this combination of criteria, it was also eliminated from consideration.

Although the OASIS-based outcome measure “Improvement in Oral Medication Management” was not topped out, it showed a lower rate of improvement than the remaining outcome measures. This measure was ultimately excluded since it also showed weaker face validity than the remaining outcome measures (for example, cognitively impaired patients who appropriately rely on a caregiver for oral medication management may not show improvement in the measure).

After applying the first three measure selection criteria, the remaining measures included four process measures, five OASIS-based outcome measures, and two claims-based utilization outcome measures. To apply the final criterion, stability over time, we correlated home health agency scores on these remaining measures between 2012 and 2013 (shown in the last column of Table A.1). All of the remaining measures showed positive correlations between 2012 and 2013 scores, and the correlations for the process and OASIS-based outcome measures were all in the .60 to .80 range. Based on this, all four process measures and five OASIS-based outcome measures were proposed for inclusion in home health star ratings. For the two claims-based measures, the year-to-year correlations were more modest. Only one of them, “Acute Care Hospitalization,” was initially proposed for inclusion in star ratings, because reducing potentially avoidable hospital use is an important national goal.
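
For illustration only, a rough Python sketch of how the screening criteria above could be applied to the statistics in Table A.1 follows. The field names are stand-ins for the table columns, and the reporting-coverage cutoff (`min_hhas`) and the 20th percentile cutoff are hypothetical, since the fact sheet does not specify exact thresholds for every criterion; face validity (criterion 3) is a clinical judgment applied separately.

```python
# Rough sketch of the measure screening described above; thresholds other than
# the "average rate of 95 percent or more" topped-out rule are hypothetical.
from dataclasses import dataclass

@dataclass
class MeasureStats:          # stand-ins for the Table A.1 columns (percent scale)
    name: str
    hhas_with_data: int
    avg_hha_rate: float
    pct_20: float
    pct_80: float
    corr_2012_2013: float

def topped_out(m: MeasureStats) -> bool:
    """Criterion 2: very little room for improvement (average rate >= 95, a
    similarly high 20th percentile, and an 80th percentile of 100). The 94 here
    is only an approximation of 'similarly high'."""
    return m.avg_hha_rate >= 95 and m.pct_20 >= 94 and m.pct_80 >= 100

def passes_screen(m: MeasureStats, min_hhas: int = 9000) -> bool:
    """Criteria 1, 2, and 4: broad reporting, room for improvement, and a
    positive year-over-year correlation. `min_hhas` is a hypothetical cutoff."""
    return (m.hhas_with_data >= min_hhas
            and not topped_out(m)
            and m.corr_2012_2013 > 0)
```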

Table A.1: Characteristics of Home Health Compare Quality Measures¹

Home Health Compare Quality Measure | HHAs with Data | Episodes of Care (Thousands) | National Rate (Pct) | Average HHA Rate | 20th Percentile | 80th Percentile | Correlation 2012 with 2013
Timely Initiation of Care² | 10,426 | 6,095 | 92 | 90 | 85 | 97 | 0.699
Drug Education on all Meds Provided to Pt/Caregiver² | 10,423 | 6,038 | 93 | 92 | 88 | 99 | 0.717
Fall Risk Assessment | 10,240 | 5,410 | 98 | 98 | 98 | 100 | 0.468
Depression Assessment | 10,421 | 6,061 | 98 | 96 | 96 | 100 | 0.819
Influenza Vaccine Received for Current Flu Season² | 10,047 | 3,838 | 72 | 71 | 58 | 86 | 0.762
Pneumococcal Vaccine Ever Received² | 10,399 | 5,940 | 71 | 69 | 51 | 88 | 0.787
Foot Care and Education for Patients With Diabetes | 9,103 | 2,110 | 94 | 94 | 91 | 100 | 0.659
Pain Assessment | 10,438 | 6,123 | 99 | 98 | 98 | 100 | 0.751
Pain Intervention/Treatment | 10,223 | 4,978 | 98 | 98 | 98 | 100 | 0.685
Heart Failure Symptoms Addressed | 4,189 | 440 | 98 | 98 | 96 | 100 | 0.391
Pressure Ulcer Prevention Intervention | 8,723 | 2,519 | 96 | 95 | 94 | 100 | 0.645
Pressure Ulcer Prevention in Plan of Care | 8,937 | 2,621 | 97 | 96 | 96 | 100 | 0.672
Pressure Ulcer Risk Assessment | 10,438 | 6,123 | 99 | 97 | 96 | 100 | 0.786
Improvement In Ambulation² | 9,562 | 4,087 | 61 | 58 | 49 | 67 | 0.689
Improvement In Bed Transferring² | 9,389 | 3,804 | 57 | 53 | 42 | 64 | 0.720
Improvement In Bathing² | 9,625 | 4,190 | 67 | 64 | 55 | 75 | 0.740
Improvement In Pain Interfering With Activity² | 9,486 | 3,451 | 68 | 65 | 54 | 79 | 0.776
Improvement In Dyspnea² | 9,263 | 2,996 | 65 | 60 | 46 | 75 | 0.787
Surgical Wound Healing | 4,587 | 689 | 89 | 90 | 86 | 96 | 0.544
Improvement In Oral Medication Management | 9,134 | 3,086 | 51 | 47 | 37 | 58 | 0.725
Emergent Care Without Hospital Admission | 9,301 | 2,775 | 12 | 12 | 15 | 9 | 0.310
Acute Care Hospitalization² | 9,301 | 2,775 | 16 | 15 | 18 | 12 | 0.220

¹ All statistics apply to calendar year 2013, except for the last two measures, which apply to Q4 2012 – Q3 2013. The correlations are between CY 2012 and CY 2013, except for the last two measures, which are between Q4 2011 – Q3 2012 and Q4 2012 – Q3 2013.
² Measure selected for inclusion in the star rating calculation.

Appendix B: Stability of the Ratings over Time

To assess the stability of the proposed methodology from year to year, the star ratings were also calculated using the Home Health Compare data for 2012. A statistical measure of inter-rater agreement (a Kappa coefficient) was used to test the stability of star ratings between the two years. Table B.1 below shows the year-to-year star rating comparison for those agencies for which ratings could be calculated in both years. Using the proposed methodology, 34% of HHAs changed star ratings from year to year (33% changed by 1 star, 1% changed by 2 stars). No HHAs gained or lost three or more stars from year to year. The very small number of HHAs that gained or lost two stars suggests that the star ratings are fairly stable from year to year. The unweighted Kappa coefficient is 0.4511, showing good agreement between the 2012 star ratings and the 2013 star ratings. The weighted Kappa (which takes into account not only the number of HHAs that changed ratings, but also the numerical magnitude of the changes) is 0.5464.
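
As a hypothetical illustration of the agreement statistic used here (not the CMS analysis or its data), Cohen’s kappa for paired year-to-year ratings can be computed with scikit-learn. The ratings below are made-up examples, and the choice of linear weights for the weighted kappa is an assumption, since the fact sheet does not specify the weighting scheme.

```python
# Hypothetical illustration of the year-to-year agreement check; the ratings
# below are made-up examples, and linear weighting is an assumption.
from sklearn.metrics import cohen_kappa_score

stars_2012 = [3, 3, 4, 2, 3, 5, 4, 3, 2, 3]   # illustrative paired ratings
stars_2013 = [3, 4, 4, 2, 3, 4, 4, 3, 3, 3]   # for the same ten agencies

unweighted = cohen_kappa_score(stars_2012, stars_2013)
weighted = cohen_kappa_score(stars_2012, stars_2013, weights="linear")
print(f"unweighted kappa = {unweighted:.3f}, weighted kappa = {weighted:.3f}")
```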

Table B.1: Year-to-Year Stability of Star Ratings – CY2012 vs. CY2013

Rows show the Overall Quality Measure Star Rating for 2012; columns show the Overall Quality Measure Star Rating for 2013.

2012 Rating | 1 | 2 | 3 | 4 | 5 | Total | Percent
1 | 20 | 41 | 8 | 0 | 0 | 69 | 0.78%
2 | 48 | 1182 | 774 | 45 | 0 | 2049 | 23.21%
3 | 1 | 550 | 3300 | 818 | 11 | 4680 | 53.02%
4 | 0 | 24 | 525 | 1222 | 102 | 1873 | 21.22%
5 | 0 | 0 | 5 | 69 | 82 | 156 | 1.77%
Total | 69 | 1797 | 4612 | 2154 | 195 | 8827 | 100.00%
Percent | 0.78% | 20.36% | 52.25% | 24.40% | 2.21% | 100.00% | --

###