Reliability of Rating Scleroderma Digital Ulcers Doesn’t Improve with More Clinical Information
Ratings of digital ulcers tend to differ from one rheumatologist to the next. A new study now reports that providing clinical information, such as the presence of pain or ulcer discharge, did not improve how reliably rheumatologists rate digital ulcers in scleroderma patients.
The findings were reported in the study “Does the Clinical Context Improve the Reliability of Rheumatologists Grading Digital Ulcers in Systemic Sclerosis?” published in the journal Arthritis Care & Research.
Although digital ulcers are frequently assessed in clinical trials of scleroderma drugs, they are usually graded by visual inspection alone. Moreover, earlier studies have noted that individual rheumatologists seldom agree on a rating, resulting in low measurement reliability.
In routine clinical examinations, doctors typically ask patients about the severity and duration of any associated pain, and about discharge from the ulcer. In clinical trials, by contrast, digital ulcers tend to be rated without such information.
In an attempt to identify factors that might reduce this variability and improve the reliability of ratings, researchers at England's University of Manchester asked 51 rheumatologists from 15 countries to grade images of finger ulcers, randomly presented either with or without additional clinical information.
Researchers chose 80 images deemed representative of the range of ulcers seen in scleroderma patients, and asked the rheumatologists to grade them on a three-point scale: no ulcer (0), inactive ulcer (1), or active ulcer (2).
Each rater graded 90 images, of which 80 were unique and 10 were repeats, allowing the team to assess intra-rater reliability, that is, the consistency with which a rater grades the same ulcer on different occasions. The patients who provided the images were also asked to grade their own ulcers.
Researchers found that raters were at times in perfect agreement, whether or not they had the additional information. In other instances, raters disagreed completely, and in some cases the clinical information did change the grade given.
Analyses showed that intra-rater reliability was generally good, both with and without additional information. This was not the case for inter-rater reliability: raters frequently disagreed with one another on how to grade an image, and agreement did not improve when rheumatologists had access to information about the patient's pain and discharge.
Interestingly, the study also revealed that patients and rheumatologists mostly disagreed on how an ulcer was classified.
Based on the results, the researchers concluded that more work is needed to identify ways to improve ratings of digital ulcers in scleroderma, especially since many clinical trials of scleroderma treatments use digital ulcer ratings as a primary outcome measure.