Crossed Random-Effect Modeling: Examining the Effects of Teacher Experience and Rubric Use in Performance Assessments

Adnan KAN**, Okan BULUT***
**Dr., Department of Education Sciences, Gazi University, Ankara, Turkey.
***Dr., Department of Educational Psychology, University of Alberta, Edmonton, Alberta, Canada.
DOI: 10.14689/ejer.2014.57.4

Abstract

Problem Statement: Performance assessments have emerged as an alternative method for measuring what a student knows and can do. A common criticism of performance assessments, however, is the subjectivity and inconsistency of raters in scoring. The effectiveness of a performance assessment therefore depends highly on the quality of the scoring rubric and on how teachers use it. To gain a better understanding of the interaction between teachers and performance assessments, it is crucial to examine teacher-related factors and how teachers interact with scoring rubrics when grading performance assessments. One such factor is teachers’ teaching and scoring experience. Experienced teachers may be expected to grade student performances more objectively than teachers with less teaching and scoring experience, by virtue of their longer experience in instruction and evaluation.

Purpose of Study: This study investigates the impact of rubric use and teaching experience on teachers’ scoring behaviors in performance assessments. Specifically, the effects of these two factors on the consistency of scores assigned by teachers are examined through an empirical study.

Methods: A crossed random-effects model was used to estimate rater effects, scoring consistency among teachers, and the effect of teaching experience.
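As an illustration, one plausible specification of such a model (the notation here is illustrative rather than taken from the study) treats students and teachers as crossed random effects and teaching experience and rubric use as fixed effects:

\[
y_{ij} = \mu + \beta_1\,\mathrm{Experience}_j + \beta_2\,\mathrm{Rubric}_{ij} + s_i + t_j + \varepsilon_{ij},
\]
\[
s_i \sim N(0,\sigma_s^2), \qquad t_j \sim N(0,\sigma_t^2), \qquad \varepsilon_{ij} \sim N(0,\sigma_\varepsilon^2),
\]

where \(y_{ij}\) is the score that teacher \(j\) assigns to student \(i\)'s task. The variance component \(\sigma_t^2\) indexes rater variability, \(\sigma_s^2\) indexes true differences among students, and the residual variance reflects scoring inconsistency; because students and teachers are crossed rather than nested, every teacher may rate every student.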

Findings and Results: Results indicated that the lack of a scoring guide may lead teachers to establish their own performance criteria and score tasks inconsistently. When teachers used a rubric, inter-rater reliability increased substantially. Experienced teachers and teachers with little teaching experience exhibited different severity patterns in scoring.

Conclusions and Recommendations: The results of this study suggest that teachers with more teaching experience tend to score performance tasks more leniently than teachers with fewer years of teaching experience. These experience-related differences in scoring became negligible when all teachers used a scoring rubric. In addition to teaching experience, the potential effects of other external factors should also be considered to make the use of rubrics more effective in performance assessments. This study also illustrated an alternative methodology for estimating variance components and the effects of fixed factors within the same analysis. A key advantage of this modeling approach over generalizability theory is that it allows random and fixed effects to be separated and estimated within a single model. Although the findings of this study enrich the limited knowledge about the effects of rubric use and teaching experience on teachers’ scoring behaviors, further research is needed to understand why these factors are influential.
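For readers who wish to apply this methodology, the sketch below shows how a crossed random-effects model of this kind can be fit in Python with statsmodels. The data file, column names, and model formula are hypothetical and are not those of the present study; such models are also commonly fit with lme4 in R.

```python
# A minimal sketch, assuming a long-format file "ratings.csv" with one row
# per (student, teacher) score and hypothetical columns: score, experience
# (years of teaching), rubric (0 = no rubric, 1 = rubric), student, teacher.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ratings.csv")

# statsmodels expresses crossed random effects as variance components
# within a single all-encompassing group, so give every row the same group.
df["group"] = 1
vc = {
    "student": "0 + C(student)",  # random effect for each student (ratee)
    "teacher": "0 + C(teacher)",  # random effect for each teacher (rater)
}

model = smf.mixedlm(
    "score ~ experience * rubric",  # fixed effects, incl. their interaction
    data=df,
    groups="group",
    vc_formula=vc,
    re_formula="0",  # drop the meaningless random intercept for the one group
)
result = model.fit()
print(result.summary())  # fixed-effect estimates and variance components
```

In this sketch, the estimated variance component for teachers indexes rater variability, while the experience-by-rubric interaction speaks to whether rubric use attenuates experience-related differences in scoring severity.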

Keywords: Performance Assessment, Rubric, Teaching Experience, Reliability, Rater Effects, Crossed Random Effects Model.