Title

Can Rasch analysis enhance the abstract ranking process in scientific conferences? Issues of interrater variability and abstract rating burden

Date of this Version

1-1-2015

Document Type

Journal Article

Publication Details

Citation only

Scanlan, J.N., Lannin, N., & Hoffmann, T. (2015). Can Rasch analysis enhance the abstract ranking process in scientific conferences? Issues of interrater variability and abstract rating burden. Journal of Continuing Education in the Health Professions, 35(1), 18–26.

Access the journal

© Copyright, The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education, 2015

2015 HERDC Submission

ISSN

1554-558X

Abstract

Introduction: Abstract ranking processes for scientific conferences are essential but controversial. This study examined the validity of a structured abstract rating instrument, evaluated interrater variability, and modeled the impact of interrater variability on abstract ranking decisions. Additionally, we examined whether a more efficient rating process (abstracts rated by two rather than three raters) supported valid abstract rankings.

Methods: Data were 4016 sets of abstract ratings from the 2011 and 2013 national scientific conferences for a health discipline. Many-faceted Rasch analysis procedures were used to examine validity of the abstract rating instrument and to identify and adjust for the presence of interrater variability. The two-rater simulation was created by the deletion of one set of ratings for each abstract in the 2013 data set.
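The core idea of the many-faceted Rasch model used in the Methods can be illustrated with a minimal sketch. In a facets model, the log-odds of a rater awarding a higher score category is an additive function of the abstract's quality, the criterion's difficulty, and the rater's severity; the function name, parameter values, and facet labels below are illustrative assumptions, not figures from the study.

```python
import math

def p_higher_category(theta, delta, severity):
    """Probability of the higher score category under a simple
    many-facet Rasch model:
        logit P = theta (abstract quality)
                  - delta (criterion difficulty)
                  - severity (rater severity)
    All parameters are on the same logit scale."""
    logit = theta - delta - severity
    return 1.0 / (1.0 + math.exp(-logit))

# For the same abstract and criterion, a lenient rater (negative
# severity) is more likely to award the higher category than a
# harsh rater (positive severity) — the variability the model
# identifies and adjusts for.
lenient = p_higher_category(theta=1.0, delta=0.5, severity=-0.5)
harsh = p_higher_category(theta=1.0, delta=0.5, severity=0.5)
```

Because severity enters the model as its own facet, abstract quality estimates can be reported net of which raters happened to score each abstract, which is the basis of the adjustment described in the Results.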

Results: The abstract rating instrument demonstrated sound measurement properties. Although each rater applied the rating criteria consistently (intrarater reliability), there was significant variability between raters. Adjusting for interrater variability changed the final presentation format for approximately 10–20% of abstracts. The two-rater simulation demonstrated that abstract rankings derived through this process were valid, although the impact of interrater variability was more substantial.

Discussion: Interrater variability exerts a small but important influence on overall abstract acceptance outcomes. Many-faceted Rasch analysis allows this variability to be identified and adjusted for. Additionally, Rasch processes allow for more efficient abstract ranking by reducing the number of raters required per abstract.



This document has been peer reviewed.